June 17, 2019


Steinar H. Gunderson

0 bytes left

Around 2003–2004, a friend and I wrote a softsynth that was used in a 64 kB intro. Now, 14 years later, cTrix and Pselodux picked it up and made a really cool 32 kB tune with it! Who would have thought.

(For the record, the synth plus the original Nemesis tune fit under 16 kB given the right packer and some squeezing, even with some LPC samples. But there's a heck of a lot more notes in this one :-) )

17 June, 2019 10:45PM

Emmanuel Kasper

Normalize a bunch of audio files to the same loudness

I had a bunch of audio files in a directory, each recorded live with different devices, and playing them back in a playlist was painful on the ears because of the differences in loudness.
To normalize audio files, you can find a number of tools working with ID3 tags, but after testing with vlc, mplayer, and the pogo mp3 player, none of them produced a measurable change. So I converted everything to wav, normalized the wav files, then converted back to mp3.

delete funny chars and spaces in file names
detox music_dir
converting files to wav is just a matter of
# this uses zsh recursive globbing
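# note: basename strips the directory part, so the converted .wav files end up in the current directory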
for file in **/*.mp3 ; do ffmpeg -i $file  "$(basename $file .mp3).wav"; done

normalizing files with the normalize-audio program, from the debian package of the same name.
# this uses zsh recursive globbing
normalize-audio **/*.wav
converting back to mp3
for file in **/*.wav ; do ffmpeg -i $file -acodec libmp3lame -b:a 192k "$(basename $file .wav).mp3"; done
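
For reference, here is a rough Python sketch of the same round-trip that keeps the converted files next to the originals instead of in the current directory; the directory name and bitrate are placeholders, and it shells out to the same ffmpeg and normalize-audio tools:

#!/usr/bin/env python3
# Sketch: mp3 -> wav, normalize, wav -> mp3, keeping files next to the originals.
# Assumes the ffmpeg and normalize-audio binaries are installed and on PATH.
import pathlib
import subprocess

music_dir = pathlib.Path("music_dir")   # placeholder, adjust to your directory

wavs = []
for mp3 in sorted(music_dir.rglob("*.mp3")):
    wav = mp3.with_suffix(".wav")
    subprocess.run(["ffmpeg", "-i", str(mp3), str(wav)], check=True)
    wavs.append(wav)

# normalize all wav files to the same loudness in one invocation
if wavs:
    subprocess.run(["normalize-audio", *map(str, wavs)], check=True)

for wav in wavs:
    # write to a new name so the original mp3 files are not overwritten
    out = wav.with_name(wav.stem + "-normalized.mp3")
    subprocess.run(["ffmpeg", "-i", str(wav), "-acodec", "libmp3lame",
                    "-b:a", "192k", str(out)], check=True)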

17 June, 2019 02:30PM by Emmanuel Kasper (noreply@blogger.com)

Manuel A. Fernandez Montecelo

Debian GNU/Linux riscv64 port in mid 2019

It's been a while since the last post (Talk about the Debian GNU/Linux riscv64 port at RISC-V workshop), and sometimes things look very quiet from the outside even if the people backstage never stop working. So this is an update on the status of this port before the release of buster, which should happen in a few weeks and which will open the way for more changes that will benefit the port.

The Big Picture

First, the big picture(s):

Debian-Ports All-time Graph, 2019-06-17

What can be seen in the first graph, perhaps with some difficulty, is that the percentage of arch-dependent packages built for riscv64 (grey line) has been around or above 80% since mid-2018, just a few months after the port was added to the infrastructure.

Given that arch-dependent packages are about half of the Debian archive [main, unstable] and that (in simple terms) arch-independent packages can be used by all ports (provided that the software they rely on is present, e.g. a programming language interpreter), this means that around 90% of the packages of the whole archive have been available for this architecture from early on.
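
(A back-of-the-envelope check, assuming arch-dependent packages are about half of the archive: 0.5 × 0.80 + 0.5 × 1.00 ≈ 0.90, i.e. roughly 90% of the whole archive.)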

Debian-Ports Quarter Graph, 2019-06-17

The second graph shows that the percentages are quite stable (for all architectures, really: the peaks and dips in the graph only represent <5% of the total). This is in part due to the freeze for buster, but it usually happens at other times as well (except in the initial bring-up or in the face of severe problems), and it really shows that even the second-class ports are in quite good health in broad terms.


Note: These graphs are for architectures in the debian-ports infrastructure (which hosts architectures not as well supported as the main ones, the ones present in stable releases). The graphs are taken from the buildd stats page, which also includes the main supported architectures.

A little big Thank You

Together, both graphs are also testament that there are people working on ports at all times, keeping things working behind the scenes, and that's why from a high level view it seems that things “just work”.

More generally, aside from the work of the porters themselves, there are also people working on bootstrapping issues, which makes bringing up ports easier than in the past and helps to cope better when toolchain support or other issues deal an important blow to some ports. And, of course, all other Debian contributors help by keeping good tools and build rules that work across architectures, and by patching upstream software for the needs of several architectures at the same time (endianness, width of basic types); many upstream projects are generic enough that they don't need specific porting, etc.

Thanks to all of you!

Next Steps

Installation on hardware, VMs, etc.

Due to several reasons, among them the limited availability of hardware able to run this Debian port and the limited options to use bootloaders during all this time, the instructions to get Debian running on RISC-V are not the best, easiest, most elegant or most up to date. This is an area to improve in the next months.

Meanwhile, there's a Debian RISC-V wiki page with instructions to get a chroot working on a HiFive Unleashed board as shipped, without destroying the initial factory set-up.

Vagrant Cascadian and Karsten Merker especially have been working on the area of booting the system, and there are instructions to set up a riscv64 Qemu VM and boot it with u-boot and opensbi. Karsten is also working to get support into debian-installer, the main/canonical way to install Debian systems (perhaps less canonical nowadays with the use of OS images, but still hugely important).

Additionally, it would be nice to have images publicly available and ready to use, for both Qemu and hardware available like the HiFive Unleashed (or others that might show up in time), but although there's been some progress on that, it's still not ready and available for end users.

The last 10%+ of the archive

So, what's the remaining work left to have almost all of the archive built for this architecture? What's left to port, as such?

The main blockers to get closer to 100% of packages built are basically LLVM and Rust (which, in turn, depends on LLVM).

Currently there are more than 500 packages from the Rust ecosystem in the archive (about 4% of the total), and they cannot be built and used until Rust has support for the architecture. And Rust needs LLVM; there's no Rust compiler based on GCC or other toolchains (as is the case with Go, for example, which has a gcc-go compiler in addition to its own golang-go), so this is the only alternative.

Firefox is the main high-level package that depends on Rust, but many packages also depend on librsvg2 to render SVG images, and this library has been converted to Rust. We're still using the C version for that, but it cannot be sustained in the long term.

Aside from Rust, other packages directly depend on or use LLVM to some extent, and this is not fully working for riscv64 at the moment, but it is expected that the LLVM support for riscv64 will be completed during 2019.

There are other programming language ecosystems that need attention, but they represent a really low percentage (only dozens of packages out of more than 12 thousand, and with no dependencies outside that set). And then, of course, there is a long tail of packages that cannot be built due to a missing dependency, lack of support for the architecture or random failures -- together they make up a substantial number of the total, but they need to be looked at and solved almost on a case-by-case basis.

Finally, when the gates of the unstable suite open again after the freeze for the stable release of buster, we will see tools with better support, and patches to support riscv64 can be accepted again, so we can hope that things will improve at a faster rate soon :-)

17 June, 2019 02:00AM by Manuel A. Fernandez Montecelo

June 16, 2019

Russ Allbery

Review: Abaddon's Gate

Review: Abaddon's Gate, by James S.A. Corey

Series: The Expanse #3
Publisher: Orbit
Copyright: 2013
ISBN: 0-316-23542-3
Format: Kindle
Pages: 540

Abaddon's Gate is the third book in the Expanse series, following Caliban's War. This series tells a single long story, so it's hard to discuss without spoilers for earlier books although I'll try. It's a bad series to read out of order.

Once again, solar system politics are riled by an alien artifact set up at the end of the previous book. Once again, we see the fallout through the eyes of multiple viewpoint characters. And, once again, one of them is James Holden, who starts the book trying to get out of the blast radius of the plot but is pulled back into the center of events. But more on that in a moment.

The other three viewpoint characters are, unfortunately, not as strong as the rest of the cast in Caliban's War. Bull is the competent hard-ass whose good advice is repeatedly ignored. Anna is a more interesting character, a Methodist reverend who reluctantly leaves her wife and small child to join an interfaith delegation (part of a larger delegation of artists and philosophers, done mostly as a political stunt) to the alien artifact at the center of this book. Anna doesn't change that much over the course of the book, but her determined, thoughtful kindness and intentional hopefulness was appealing to read about. She also has surprisingly excellent taste in rich socialite friends.

The most interesting character in the book is the woman originally introduced as Melba. Her obsessive quest for revenge drives much of the plot, mostly via her doing awful things but for reasons that come from such a profound internal brokenness, and with so much resulting guilt, that it's hard not to eventually feel some sympathy. She's also the subject of the most effective and well-written scene in the book: a quiet moment of her alone in a weightless cell, trying to position herself in its exact center. (Why this is so effective is a significant spoiler, but it works incredibly well in context.)

Melba's goal in life is to destroy James Holden and everything he holds dear. This is for entirely the wrong reasons, but I had a hard time not feeling a little bit sympathetic to that too.

I had two major problems with Abaddon's Gate. The first of them is that this book (and, I'm increasingly starting to feel, this series) is about humans doing stupid, greedy, and self-serving things in the face of alien mystery, with predictably dire consequences. This is, to be clear, not in the slightest bit unrealistic. Messy humans being messy in the face of scientific wonder (and terror), making tons of mistakes, but then somehow muddling through is very in character for our species. But realistic doesn't necessarily mean entertaining.

A lot of people die or get seriously injured in this book, and most of that is the unpredictable but unsurprising results of humans being petty assholes in the face of unknown dangers instead of taking their time and being thoughtful and careful. The somewhat grim reputation of this series comes from being relatively unflinching about showing the results of that stupidity. Bad decisions plus forces that do not care in the slightest about human life equals mass casualties. The problem, at least for me personally, is this is not fun to read about. If I wanted to see more of incompetent people deciding not to listen to advice or take the time to understand a problem, making impetuous decisions that make them feel good, and then turning everything to shit, I could just read the news. Bull as a viewpoint character doesn't help, since he's smart enough to see the consequences coming but can't stop them. Anna is the one character who manages to reverse some of the consequences by being a better person than everyone else, and that partly salvages the story, but there wasn't enough of that.

The other problem is James Holden. I was already starting to get annoyed with his self-centered whininess in Caliban's War, but in Abaddon's Gate it turns into eye-roll-inducing egomania. Holden seems convinced that everything that happens is somehow about him personally, and my tolerance for self-centered narcissists is, shall we say, at a historically low ebb. There's a point late in this book when Holden decides to be a sexist ass to Naomi (I will never understand what that woman sees in him), and I realized I was just done. Done with people pointing out to Holden that he's just a wee bit self-centered, done with him going "huh, yeah, I guess I am" and then making zero effort to change his behavior, done with him being the center of the world-building for no good reason, done with plot armor and the clear favor of the authors protecting him from consequences and surrounding him with loyalty he totally doesn't deserve, done with his supposed charisma which is all tell and no show. Just done. At this point, I actively loathe the man.

The world-building here is legitimately interesting, if a bit cliched. I do want to know where the authors are going with their progression of alien artifacts, what else humanity might make contact with, and what the rest of the universe looks like. I also would love to read more about Avasarala, who sadly didn't appear in this book but is the best character in this series so far. I liked Anna, I ended up surprising myself and liking Melba (or at least the character she becomes), and I like most of Holden's crew. But I may be done with the series here because I'm not sure I can take any more of Holden. I haven't felt this intense of dislike for a main series character since I finally gave up on The Wheel of Time.

Abaddon's Gate has a lot of combat, a lot of dead people, and a lot of gruesome injury, all of which is belabored enough that it feels a bit padded, but it does deliver on what it promises: old-school interplanetary spaceship fiction with political factions, alien artifacts, some mildly interesting world-building, and, in Melba, some worthwhile questions about what happens after you've done something unforgivable. It doesn't have Avasarala, and therefore is inherently far inferior to Caliban's War, but if you liked the previous books in the series, it's more of that sort of thing. If Holden has been bothering you, though, that gets much worse.

Followed by Cibola Burn.

Rating: 6 out of 10

16 June, 2019 05:17AM

June 15, 2019


Erich Schubert

Chinese Citation Factory

RetractionWatch published an article in February 2018 titled “A journal waited 13 months to reject a submission. Days later, it published a plagiarized version by different authors”, indicating that the editorial process of the journal Multimedia Tools and Applications (MTAP) may have been manipulated.

Now, more than a year later, Springer apparently has retracted additional articles from the journal, as mentioned in the blog For Better Science. On the downside, Elsevier has been publishing many of these in another journal now instead…

I am currently aware of 32 retractions (up from 22 earlier) associated with this incident. One would have expected to see a clear pattern in the author names, but they seem to have little in common except Chinese names and affiliations, and suspicious email addresses (also, usually only one author has an email at all). It almost appears as if the identities are made up. And most retracted papers clearly contained citation spam: they cite a particular author very often, usually in a single paragraph. Interestingly, there are some exceptions where I did not spot obvious citation spam, so my guess is that they also sold authorship (apparently there is a market for this, c.f., https://www.sciencemag.org/news/2017/07/china-cracks-down-after-investigation-finds-massive-peer-review-fraud).

The retraction notices typically include the explanation “there is evidence suggesting authorship manipulation and an attempt to subvert the peer review process”, confirming the earlier claims by Retraction Watch. One of the articles was: “Received: 7 January 2018 /Revised: 10 January 2018 /Accepted: 10 January 2018” – yes, it claims to have had two rounds of peer review within three days. This should have triggered a “red alert” at Springer publishing.

So I used the CrossRef API to get the citations from all the articles (I tried SemanticScholar first, but for some of the retracted papers it only had the self-cite of the retraction notice), and counted the citations in these papers. Data is not perfect, and there can be name mismatches and incomplete data here. But overall, the data looks pretty clean (as far as I can tell, Springer provided this data to CrossRef). Results using SemanticScholar were similar, but based on fewer articles.

Essentially, I am counting how many citations authors lost by the retractions.
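
To illustrate the kind of counting involved, here is a rough Python sketch against the public CrossRef REST API (not the script actually used); the DOIs are placeholders, reference entries don't always carry an author field, and real name normalisation needs more care than this:

# Sketch: tally how often each author shows up in the reference lists of a set
# of retracted papers, using the public CrossRef REST API (api.crossref.org).
# The DOIs below are placeholders, not the real retracted articles.
import collections
import requests

retracted_dois = ["10.1007/placeholder-1", "10.1007/placeholder-2"]

citations = collections.Counter()   # total reference entries naming each author
papers = collections.Counter()      # number of papers whose references name the author

for doi in retracted_dois:
    message = requests.get(f"https://api.crossref.org/works/{doi}").json()["message"]
    seen_in_this_paper = set()
    for ref in message.get("reference", []):
        author = ref.get("author")  # CrossRef usually stores the first author as a plain string
        if not author:
            continue
        citations[author] += 1
        seen_in_this_paper.add(author)
    for author in seen_in_this_paper:
        papers[author] += 1

for author, lost in citations.most_common(10):
    print(f"{author}: {lost} citations in {papers[author]} papers")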

Here is the “high score” with the top 10 citation losers:

Author          Citations lost   Cited in papers   Reference share   Retractions
L Zhang                    455                26             54.4%             2
M Song                      76                23             10.8%             0
C Chen                      71                22             10.3%             0
X Liu                       69                22             10.0%             0
R Zimmermann                65                20             10.4%             0
J Bu                        55                20              8.7%             0
D Tao                       48                20              7.7%             0
Z Liu                       38                20              6.0%             0
Y Gao                       33                23              5.2%             0
X Li                        27                17              5.7%             0

This is a surprisingly clear pattern: in 26 of the 32 retracted papers included here, L. Zhang was cited 17.5 times on average, being a co-author of over 50% of the references; such citations should have raised a red flag during any real paper review. He was also an author of two of the other retracted papers on my list.

The next authors on this list seem to be there because they co-authored papers with L. Zhang earlier, and hence receive some share of his citations. In fact, if we ignore all references co-authored by L. Zhang, no author receives more than 5 citations. If we distributed each citation uniformly across all authors (instead of giving each author a full citation, which emphasizes papers with many authors), L. Zhang would receive 36% of the citation mass on average, and the second-most cited author, R. Zimmermann, only 2.7%.

So this very clearly suggests that L. Zhang manipulated the MTAP journal to boost his citation index. And it is quite disappointing how long it took until Springer retracted those articles! Judging by the For Better Science article, there may be even more affected papers, and hence more citation count boosting.

15 June, 2019 10:02PM by Erich Schubert


Joey Hess

hacking water

From water insecurity to offgrid, solar pumped, gravity flow 1000 gallons of running water.

I enjoy hauling water by hand, which is why doing it for 8 years was not really a problem. But water insecurity is; the spring has been drying up for longer periods in the fall, and the cisterns have barely been large enough to get through.

And if I'm going to add storage, it ought to be above the house, so it can gravity flow. And I have this 100 watt array of 20 year old solar panels sitting unused after my solar upgrade. And a couple of pumps for a pressure tank system that was not working when I moved in. And I stumbled across an odd little flat spot halfway up the hillside. And there's an exposed copper pipe next to the house's retaining wall; email to Africa establishes that it goes down and through the wall and connects into the plumbing.

[Photos: a solar panel with a large impact crater, the glass cracked into thousands of pieces but still hanging together; a barely perceptible flat spot on a tree-covered hillside; a copper pipe sheathed in black plastic curving out of the ground next to a wall]

So I have an old system that doesn't do what I want. Let's hack the system..

(This took a year to research and put together, including learning a lot about plumbing.)

Run a cable from the old solar panels 75 feet over to the spring. Repurpose an old cooler as a pumphouse, to keep the rain off the Shurflow pump, and with the opening facing so it directs noise away from living areas. Add a Shurflow 902-200 linear current booster to control the pump.

[Photos: a red cooler attached to a tree with a pump in it, water streaming out of one of the two pipes attached to it; a circuit board with terminals labeled PUMP, PV, HIGH, LOW, GND, FLOAT SWITCH; a 50 gallon water barrel perched on a hillside with some hoses connected to it]

Run a temporary pipe up to the logging road, and verify that the pump can just manage to push the water up there.

Sidetrack into a week spent cleaning out and re-sealing the spring's settling tank. This was yak shaving, but it was going to fail. Build a custom ladder because regular ladders are too wide to fit into it. Flashback to my tightest squeezes from caving. Yuurgh.

[Photos: a very narrow concrete water tank with its concrete lid opened and a rough wooden ladder sticking out of it; the interior of the water tank drained, with muck covering the bottom and plaster flaking from the walls; the interior of the water tank with walls bright and new, water level sensors and a pipe to the pump]

Install water level sensors in the settling tank, cut a hole for pipe, connect to pumphouse.

Now how to bury 250 feet of PEX pipe a foot deep up a steep hillside covered in rock piles and trees that you don't want to cut down to make way for equipment? Research every possibility, and pick the one that involves a repurposed lineman's tool resembling a medieval axe.

[Photos: a hillside strewn with large rocks, with trees wherever there are not rocks; a just-unboxed trenching tool that looks like a large black metal spatula; 30 feet of very narrow trench coming out of the woods and along the side of the house, past the satellite internet dish; lines drawn over a photo of the hillside showing the pipe's curving route to the top]

Dig 100 feet of 1 inch wide trench in a single afternoon by hand. Zeno in on the rest of the 300 foot run. Gain ability to bury underground cables without raising a sweat as an accidental superpower. Arms ache for a full month afterwards.

Connect it all up with a temporary water barrel, and it works! Gravity flow yields 30 PSI!

Pressure-test the copper pipe going into the house to make sure it's not leaking behind the retaining wall. Fix all the old leaky plumbing and fixtures in the house.

[Photos: a water pressure gauge connected to PEX pipe coming out of the trench and connecting to the copper pipe that goes into the house; a modern restaurant-style sprung arched faucet with water flowing into the kitchen sink]

Clear a 6 foot wide path through the woods up the hill and roll up two 550 gallon Norwesco water tanks. Haul 650 pounds of sand up the hill, by hand, one 5 gallon bucket at a time. Level and prepare two 6 foot diameter pads.

[Photos: Joey standing in front of a black 4x4 pickup truck with a large white 550 gallon water tank on its side in the bed, arching high above him; Joey from the back as he rolls a water tank up a forested hill; a six foot circle marked off with rope and filled with sand, with a water tank in the background and tools strewn around the cramped worksite]

Build a buried manifold with valves turned by water meter key. Include a fire hose outlet just in case.

[Photos: in a hole in the ground between two water tanks, a complex assembly of blue pipes and brass fittings with several valves; a close-up of the old cracked solar panel; the two tanks installed, high on a hillside]

Begin filling the tanks, unsure how long it will take as the pump balances available sunlight and spring flow.

15 June, 2019 04:02PM

François Marier

OpenSUSE 15 LXC setup on Ubuntu Bionic 18.04

Similarly to what I wrote for Fedora, here is how I was able to create an OpenSUSE 15 LXC container on an Ubuntu 18.04 (bionic) laptop.

Setting up LXC on Ubuntu

First of all, install lxc:

apt install lxc
echo "veth" >> /etc/modules
modprobe veth

turn on bridged networking by putting the following in /etc/sysctl.d/local.conf:

net.ipv4.ip_forward=1

and applying it using:

sysctl -p /etc/sysctl.d/local.conf

Then allow the right traffic in your firewall (/etc/network/iptables.up.rules in my case):

# LXC containers
-A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.0.3.0/24 -j ACCEPT
-A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 239.255.255.250 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT

and apply these changes:

iptables-apply

before restarting the lxc networking:

systemctl restart lxc-net.service

Creating the container

Once that's in place, you can finally create the OpenSUSE 15 container:

lxc-create -n opensuse15 -t download -- -d opensuse -r 15 -a amd64

To see a list of all distros available with the download template:

lxc-create -n foo --template=download -- --list

Logging in as root

Start up the container and get a login console:

lxc-start -n opensuse15 -F

In another terminal, set a password for the root user:

lxc-attach -n opensuse15 passwd

You can now use this password to log into the console you started earlier.

Logging in as an unprivileged user via ssh

As root, install a few packages:

zypper install vim openssh sudo man
systemctl start sshd
systemctl enable sshd

and then create an unprivileged user:

useradd francois
passwd francois
cd /home
mkdir francois
chown francois:100 francois/

and give that user sudo access:

visudo  # uncomment "wheel" line
groupadd wheel
usermod -aG wheel francois

Now login as that user from the console and add an ssh public key:

mkdir .ssh
chmod 700 .ssh
echo "<your public key>" > .ssh/authorized_keys
chmod 644 .ssh/authorized_keys

You can now login via ssh. The IP address to use can be seen in the output of:

lxc-ls --fancy

15 June, 2019 03:15AM


Eddy Petrișor

How to generate a usable map file for Rust code - and related (f)rustrations

Intro


Cargo does not produce a .map file by default, and when it does, mangling makes it nearly unusable. If you're searching for the TLDR, read from "How to generate a map file" at the bottom of the article.

Motivation

As a person with experience in embedded programming I find it very useful to be able to look into the map file.

Scenarios where looking at the map file is important:
  • evaluate if the code changes you made had the desired size impact or no undesired impact - recently I saw a compiler optimize a zero-initialization of an array for speed by putting long blocks of u8 arrays in the .rodata section
  • check if a particular symbol has landed in the appropriate memory section or region
  • make an initial evaluation of which functions/code could be changed to optimize either for code size or for more readability (if the size cost is acceptable)
  • check particular symbols have expected sizes and/or alignments

Rustrations 

Because these kinds of scenarios are quite frequent in my work and I am used to looking at the .map file, some "rustrations" I currently face are:
  1. No map file is generated by default via cargo, and information on how to do it is sparse
  2. If generated, the symbols are mangled, and it seems each symbol is in a section of its own, making per-section (e.g. .rodata, .text, .bss, .data) or per-file analysis more difficult than it should be
  3. I haven't found a way to disable mangling globally without editing the Rust sources - I remember there is some tool to un-mangle the output map file, but I forgot its name and I find the need to post-process suboptimal
  4. No default map file filename or location - ideally it should be named after the crate or app, as specified in the .toml file

How to generate a map file

Generating map file for linux (and possibly other OSes)

Unfortunately, not all architectures/targets use the same linker, or on some the preferred linker could change for various reasons.

Here is how I managed to generate a map file for an AMD64/X86_64 linux target where it seems the linker is GLD:

Create a .cargo/config file with the following content:

.cargo/config:
[build]
    rustflags = ["-Clink-args=-Wl,-Map=app.map"]

This should apply to all targets which use GLD as the linker, so I suspect this is not portable to Windows with the MSVC compiler.

Generating a map file for thumb7m with rust-lld


On baremetal targets such as Cortex-M7 (thumbv7m), where you might want to use the LLVM-based rust-lld, more linker options might be necessary to prevent linking with compiler-provided startup code or libraries, so the config would look something like this:
.cargo/config: 
[build]
target = "thumbv7m-none-eabi"
rustflags = ["-Clink-args=-Map=app.map"]
The thing I dislike about this is the fact that the target is forced to thumbv7m-none-eabi, so some unit tests or generic code which might run on the build computer would be harder to test.

Note: if using rustc directly, just pass the extra options

Map file generation with some readable symbols

After the changes above are done, you'll get an app.map file (even if the crate is a lib) with a predefined name. If anyone knows how to keep the crate name, or at least use lib.map for libs and app.map for apps when the original project name can't be used, please comment.

The problems with the generated map file are that:
  1. all symbol names are mangled, so you can't easily connect them back to the code; the alternative is to force the compiler not to mangle, by adding #[no_mangle] before the interesting symbols.
  2. each symbol seems to be put in its own subsection (e.g. an initialized array ends up in its own .data.* subsection).

Dealing with mangling

For problem 1, the fix is to add #[no_mangle] in the source to the symbols or functions of interest, like this:

#[no_mangle]
pub fn sing(start: i32, end: i32) -> String {
    // code body follows
}

Dealing with mangling globally

I wasn't able to find a way to convince cargo to apply no_mangle to the entire project, so if you know how to, please comment. I was thinking that using #![no_mangle] to apply the attribute globally in a file would work, but it doesn't seem to work as expected: the subsection still contains the mangled name, while the symbol seems to be "namespaced":

Here is a section from the #![no_mangle] (global) version:
.text._ZN9beer_song5verse17h0d94ba819eb8952aE
                0x000000000004fa00      0x61e /home/eddy/usr/src/rust/learn-rust/exercism/rust/beer-song/target/release/deps/libbeer_song-d80e2fdea1de9ada.rlib(beer_song-d80e2fdea1de9ada.beer_song.5vo42nek-cgu.3.rcgu.o)
                0x000000000004fa00                beer_song::verse
 
When the #[no_mangle] attribute is attached directly to the function, the subsection is not mangled and the symbol seems to be global:

.text.verse    0x000000000004f9c0      0x61e /home/eddy/usr/src/rust/learn-rust/exercism/rust/beer-song/target/release/deps/libbeer_song-d80e2fdea1de9ada.rlib(beer_song-d80e2fdea1de9ada.beer_song.5vo42nek-cgu.3.rcgu.o)
                0x000000000004f9c0                verse
I would prefer to have a global cargo option to switch this for the entire project, so that no code changes would be needed; comments welcome.

Each symbol in its section

The second issue is quite annoying, even if the fact that each symbol is in its own section can be useful for controlling every symbol's placement via the linker script. I guess to fix this I need a custom linker script to redirect, say, all constant "subsections" into the ".rodata" section.

I haven't tried this, but it should work.

15 June, 2019 12:24AM by eddyp (noreply@blogger.com)

Utkarsh Gupta

GSoC Bi-Weekly Report - Week 1 and 2

Hello there.
The last two weeks have been adventurous. Here’s what happened.
My GSoC project is to package a piece of software called Loomio. A little about Loomio:
Loomio is decision-making software, designed to assist groups with the collaborative decision-making process.
It is a free software web application where users can initiate discussions and put up proposals.

Loomio is mostly written in Ruby, but also includes some CoffeeScript, Vue, and JavaScript, with a little HTML and CSS.
The idea is to package all the dependencies of Loomio and get Loomio easily installable on Debian machines.

Phase 1, that is, the first 4 weeks, was planned for packaging the Ruby and Node dependencies. When I started off, I hit an obstacle: little did we know how to go about packaging complex applications like that.
I have been helping out with packages like gitlab, diaspora, et al. And towards the end of last week, we learned that loomio needs to be done like diaspora:
first goes the loomio-installer, then comes the main package, loomio.

Now, the steps that are to be followed for loomio-installer are as follows:
» Get the app source.
» Install gem dependencies.
» Create database.
» Create tables/run migrations.
» Precompile assets (scss -> css, et al).
» Configure nginx.
» Start service with systemd.
» In the case of diaspora, the JS front end is pulled in via wrapper gems; in the case of gitlab, it is pulled in via npm/yarn.
» Loomio will be done the same way we're doing gitlab.

Thus, in the last two weeks, the following work has been done:
» Ruby gems’ test failures patched.
» 18 gems uploaded.
» Looked into loomio-installer’s setup.
» Basic scripts like nginx configuration, et al written.

My other activities in Debian last month:
» Updated and uploaded gitlab 11.10.4 to experimental (thanks to praveen).
» Uploaded gitaly, gitlab-workhorse.
» Sponsored a couple of packages (DM access).
» Learned Perl packaging and packaged 4 modules (thanks to gregoa and yadd).
» Learned basic Python packaging.
» Helping DC19 Bursary team (thanks to highvoltage).
» Helping DC19 Content team (thanks to terceiro).

Plans for the next 2 weeks:
» Get the app source via wget (script).
» Install gem and node dependencies via gem install and npm/yarn install (script).
» Create database for installer.
» Precompile assets (scss -> css, et al).

I hope the next time I write a report, I’ll have no twists and adventures to share.

Until next time.
:wq for today.

15 June, 2019 12:04AM

June 14, 2019


Olivier Berger

Virtual Labs presentation at the HubLinked meeting in Dublin

We participated in the HubLinked workshop in Dublin this week, where I delivered a presentation on some of our efforts on Virtual Labs, in the hope that this could be useful to the partners designing the “Global Labs” where students will experiment together on Software Engineering projects.

In this presentation (PDF) I introduced our partners to the Labtainers and Antidote Open Source projects, which are quite promising for designing “virtual labs” using VMs and/or containers.

Thomas and I have recorded the speech, and I’ve used obs and kdenlive to edit the recording.

Here’s the results (unfortunately, the sound is of low quality):

Feel free to comment, ask, etc.

14 June, 2019 11:31AM by Olivier Berger


Raphaël Hertzog

Freexian’s report about Debian Long Term Support, May 2019

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, 214 work hours have been dispatched among 14 paid contributors. Their reports are available:

  • Abhijith PA did 17 hours (out of 14 hours allocated plus 10 extra hours from April, thus carrying over 7h to June).
  • Adrian Bunk did 0 hours (out of 8 hours allocated, thus carrying over 8h to June).
  • Ben Hutchings did 18 hours (out of 18 hours allocated).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated plus 0.25 extra hours from April, thus carrying over 0.25h to June).
  • Emilio Pozuelo Monfort did 33 hours (out of 18 hours allocated + 15.25 extra hours from April, thus carrying over 0.25h to June).
  • Hugo Lefeuvre did 18 hours (out of 18 hours allocated).
  • Jonas Meurer did 15.25 hours (out of 17 hours allocated, thus carrying over 1.75h to June).
  • Markus Koschany did 18 hours (out of 18 hours allocated).
  • Mike Gabriel did 23.75 hours (out of 18 hours allocated + 5.75 extra hours from April).
  • Ola Lundqvist did 6 hours (out of 8 hours allocated + 4 extra hours from April, thus carrying over 6h to June).
  • Roberto C. Sanchez did 22.25 hours (out of 12 hours allocated + 10.25 extra hours from April).
  • Sylvain Beucler did 18 hours (out of 18 hours allocated).
  • Thorsten Alteholz did 18 hours (out of 18 hours allocated).

Evolution of the situation

May was a calm month; nothing really changed compared to April, and we are still at 214 hours funded per month. We are still looking for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The security tracker currently lists 34 packages with a known CVE and the dla-needed.txt file has 34 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


14 June, 2019 07:20AM by Raphaël Hertzog

Candy Tsai

Outreachy Week 4: Weekly Report

Just a normal weekly report this week. Can’t believe I’ve been in the Outreachy program for a month!

Progress for this week

Week 5 tasks

  • Fix the self service section merge request
  • Enhance the concept UI for the history section
  • Outreachy blog post

14 June, 2019 07:03AM by Candy Tsai

June 13, 2019

Julian Andres Klode

Encrypted Email Storage, or DIY ProtonMail

In the previous post about setting up an email server, I explained how I set up a forwarder using Postfix. This post will look at setting up Dovecot to store emails (and provide IMAP and authentication) on the server, using GPG encryption to make sure intruders can't read our precious data!

Architecture

The basic architecture chosen for encrypted storage is that every incoming email is delivered to dovecot via LMTP, and then dovecot runs a sieve script that invokes a filter that encrypts the email with PGP/MIME using a user-specific key, before processing it further. Or short:

postfix --lmtp--> dovecot --sieve--> filter --> gpg --> inbox

Security analysis: This means that the message will be on the system unencrypted as long as it is in a Postfix queue. This further means that the message plain text should be recoverable for quite some time after Postfix deleted it, by investigating in the file system. However, given enough time, the probability of being able to recover the messages should reduce substantially. Not sure how to improve this much.

And yes, if the email is already encrypted we’re going to encrypt it a second time, because we can nest encryption and signature as much as we want! Makes the code easier.

Encrypting an email with PGP/MIME

PGP/MIME is a trivial way to encrypt an email. Basically, we take the entire email message, armor-encrypt it with GPG, and stuff it as the second attachment into a multipart MIME message with the same headers; the first attachment is control information.

Technically, this means that we keep headers twice, once encrypted and once decrypted. But the advantage compared to doing it more like most normal clients is clear: The code is a lot easier, and we can reverse the encryption and get back the original!

And when I say easy, I mean easy - the function to encrypt the email is just a few lines long:

def encrypt(message: email.message.Message, recipients: typing.List[str]) -> str:
    """Encrypt given message"""
    encrypted_content = gnupg.GPG().encrypt(message.as_string(), recipients)
    if not encrypted_content:
        raise ValueError(encrypted_content.status)

    # Build the parts
    enc = email.mime.application.MIMEApplication(
        _data=str(encrypted_content).encode(),
        _subtype='octet-stream',
        _encoder=email.encoders.encode_7or8bit)

    control = email.mime.application.MIMEApplication(
        _data=b'Version: 1\n',
        _subtype='pgp-encrypted; name="msg.asc"',
        _encoder=email.encoders.encode_7or8bit)
    control['Content-Disposition'] = 'inline; filename="msg.asc"'

    # Put the parts together
    encmsg = email.mime.multipart.MIMEMultipart(
        'encrypted',
        protocol='application/pgp-encrypted')
    encmsg.attach(control)
    encmsg.attach(enc)

    # Copy headers
    headers_not_to_override = {key.lower() for key in encmsg.keys()}

    for key, value in message.items():
        if key.lower() not in headers_not_to_override:
            encmsg[key] = value

    return encmsg.as_string()

Decrypting the email is even easier: just pass the entire thing to GPG; it will decrypt the encrypted part, which, as mentioned, contains the entire original email with all headers :)

def decrypt(message: email.message.Message) -> str:
    """Decrypt the given message"""
    return str(gnupg.GPG().decrypt(message.as_string()))

(now, not sure if it’s a feature that GPG.decrypt ignores any unencrypted data in the input, but well, that’s GPG for you).

Of course, if you don’t actually need IMAP access, you could drop PGP/MIME and just pipe emails through gpg --encrypt --armor before dropping them somewhere on the filesystem, and then sync them via ssh somehow (e.g. patching maildirsync to encrypt emails it uploads to the server, and decrypting emails it downloads).

Pretty Easy privacy (p≡p)

Now, we almost have a file conforming to draft-marques-pep-email-02, the Pretty Easy privacy (p≡p) format, version 2. That format allows us to encrypt headers, thus preventing people from snooping on our metadata!

Basically it relies on the fact that we have all the headers in the inner (encrypted) message. To mark an email as conforming to that format we just have to set the subject to p≡p and add a header describing the format version:

       Subject: =?utf-8?Q?p=E2=89=A1p?=
       X-Pep-Version: 2.0

A client conforming to p≡p will, when seeing this email, read any headers from the inner (encrypted) message.

We also might want to change the code to only copy a limited set of headers, instead of basically every header, but I'm going to leave that as an exercise for the reader.
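
As a rough sketch (assuming the encrypt() function above, with a hypothetical helper name), marking the outer message could look like this:

import email.message


def mark_as_pep(encmsg: email.message.Message) -> None:
    """Hide the real Subject and advertise the p≡p format version on the
    outer (unencrypted) message, e.g. right before encrypt() returns."""
    del encmsg['Subject']                          # deleting a missing header is a no-op
    encmsg['Subject'] = '=?utf-8?Q?p=E2=89=A1p?='  # RFC 2047-encoded "p≡p"
    encmsg['X-Pep-Version'] = '2.0'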

Putting it together

Assume we have a Postfix and a Dovecot configured, and a script gpgmymail written using the function above, like this:

def main() -> None:
    """Program entry"""
    parser = argparse.ArgumentParser(
        description="Encrypt/Decrypt mail using GPG/MIME")
    parser.add_argument('-d', '--decrypt', action="store_true",
                        help="Decrypt rather than encrypt")
    parser.add_argument('recipient', nargs='*',
                        help="key id or email of keys to encrypt for")
    args = parser.parse_args()
    msg = email.message_from_file(sys.stdin)

    if args.decrypt:
        sys.stdout.write(decrypt(msg))
    else:
        sys.stdout.write(encrypt(msg, args.recipient))


if __name__ == '__main__':
    main()

(don’t forget to add missing imports, or see the end of the blog post for links to full source code)

Then, all we have to do is edit our .dovecot.sieve to add

filter "gpgmymail" "myemail@myserver.example";

and all incoming emails are automatically encrypted.

Outgoing emails

To handle outgoing emails, do not store them via IMAP, but instead configure your client to add a Bcc to yourself, and then filter that somehow in sieve. You probably want to set Bcc to something like myemail+sent@myserver.example, and then filter on the detail (the sent).

Encrypt or not Encrypt?

Now do you actually want to encrypt? The disadvantages are clear:

  • Server-side search becomes useless, especially if you use p≡p with encrypted Subject.

    Such a shame, you could have built your own GMail by writing a notmuch FTS plugin for dovecot!

  • You can’t train your spam filter via IMAP, because the spam trainer won’t be able to decrypt the email it is supposed to learn from

There are probably other things I have not thought about, so let me know on mastodon, email, or IRC!

More source code

You can find the source code of the script, and the setup for dovecot in my git repository.

13 June, 2019 08:47PM


Bits from Debian

100 Paper cuts kick-off

Introduction

Is there a thorny bug in Debian that ruins your user experience? Something just annoying enough to bother you but not serious enough to constitute an RC bug? Are grey panels and slightly broken icon themes making you depressed?

Then join the 100 papercuts project! A project to identify and fix the 100 most annoying bugs in Debian over the next stable release cycle. That also includes figuring out how to identify and categorize those bugs and make sure that they are actually fixable in Debian (or ideally upstream).

The idea of a papercuts project isn't new, Ubuntu did this some years ago which added a good amount of polish to the system.

Kick-off Meeting and DebConf BoF

On the 17th of June at 19:00 UTC we're kicking off an initial brainstorming session on IRC to gather some initial ideas.

We'll use that to seed discussion at DebConf19 in Brazil during a BoF session where we'll solidify those plans into something actionable.

Meeting details

When: 2019-06-17, 19:00 UTC
Where: #debian-meeting channel on the OFTC IRC network

Your IRC nick needs to be registered in order to join the channel. Refer to the Register your account section on the OFTC website for more information on how to register your nick.

You can always refer to the debian-meeting wiki page for the latest information and up to date schedule.

Hope to see you there!

13 June, 2019 06:30PM by Jonathan Carter

June 12, 2019


Steinar H. Gunderson

Nageru email list

The Nageru/Futatabi community is now large enough that I thought it would be a good idea to make a proper gathering place. So now, thanks to Tollef Fog Heen's hosting, there is a nageru-discuss list. It's expected to be low-volume, but if you're interested, feel free to join!

As for Nageru itself, there keeps being interesting development(s), but that's for another post. :-)

12 June, 2019 01:35PM


Dirk Eddelbuettel

RcppArmadillo 0.9.500.2.0


A new RcppArmadillo release based on a new Armadillo upstream release arrived on CRAN, and will get to Debian shortly. It brings a few upstream changes, including extended interfaces to LAPACK following the recent gcc/gfortran issue. See below for more details.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 610 other packages on CRAN.

Changes in RcppArmadillo version 0.9.500.2.0 (2019-06-11)

  • Upgraded to Armadillo release 9.500.2 (Riot Compact)

    • Expanded solve() with solve_opts::likely_sympd to indicate that the given matrix is likely positive definite

    • more robust automatic detection of positive definite matrices by solve() and inv()

    • faster handling of sparse submatrices

    • expanded eigs_sym() to print a warning if the given matrix is not symmetric

    • extended LAPACK function prototypes to follow Fortran passing conventions for so-called "hidden arguments", in order to address GCC Bug 90329; to use previous LAPACK function prototypes without the "hidden arguments", #define ARMA_DONT_USE_FORTRAN_HIDDEN_ARGS before #include <armadillo>

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 June, 2019 11:58AM


Martin Michlmayr

ledger2beancount 1.8 released

I released version 1.8 of ledger2beancount, a ledger to beancount converter.

I ran ledger2beancount over the ledger test suite and made it much more robust. If ledger2beancount 1.8 can't parse your ledger file properly, I'd like to know about it.

Here are the changes in 1.8:

  • Add support for apply year
  • Fix incorrect account mapping of certain accounts
  • Handle fixated commodity and postings without amount
  • Improve behaviour for invalid end without apply
  • Improve error message when date can't be parsed
  • Deal with account names consisting of a single letter
  • Ensure account names don't end with a colon
  • Skip ledger directives eval, python, and value
  • Don't assume all filenames for include end in .ledger
  • Support price directives with commodity symbols
  • Support decimal commas in price directives
  • Don't misparse balance assignment as commodity
  • Ensure all beancount commodities have at least 2 characters
  • Ensure all beancount metadata keys have at least 2 characters
  • Don't misparse certain metadata as implicit conversion
  • Avoid duplicate commodity directives for commodities with name collisions
  • Recognise deferred postings
  • Recognise def directive

Thanks to Alen Siljak for reporting a bug.

You can get ledger2beancount from GitHub.

12 June, 2019 09:32AM by Martin Michlmayr

June 11, 2019


Markus Koschany

My Free Software Activities in May 2019

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • Like in previous release cycles I published a new version of debian-games at the end to incorporate the latest archive changes. Unfortunately, Netbeans, the Java IDE, cuyo and holdingnuts didn’t make it and I demoted them to Suggests.
  • A longstanding graphical issue (#871223) was resolved in Neverball, where stars in goal points were displayed as squares. As usual something (OpenGL-related?) must have changed somewhere, but in the end the installation of some missing png files made the difference. How it worked without them before remains a mystery.
  • I sponsored two uploads which were later unblocked for Buster. Bernat reported a crash in etw, a football simulation game ported from the AMIGA. Fortunately Steinar H. Gunderson could provide a patch quickly. (#928240)
  • A rebuild of marsshooter, a great looking space shooter with an awesome soundtrack, may have been the trigger for a segmentation fault. Jacob Nevins stumbled over it and Bernhard Übelacker provided a patch to fix missing return statements.  (#929513)

Debian Java

  • I provided a security update for jackson-databind to fix CVE-2019-12086 (#929177) in Buster and prepared DSA-4452-1 to fix the remaining 11 CVE in Stretch.
  • Unfortunately Netbeans will not be in Buster. There were at least two issues, clear regressions in comparison to the version in Stretch, because of which I could not recommend our Debian version. I found it odd that the severest one was fixed in Ubuntu shortly after the removal from testing; I surely would have appreciated the patch for Debian too. At the moment I don't believe I will continue to work on Netbeans: it is very time consuming to get it in shape for Debian, there are too many dependencies where the slightest changes in r-deps may cause bugs in Netbeans, nobody else in the Java team is really interested, and most Java developers probably install the upstream version. A really bad combination.

Misc

Debian LTS

This was my thirty-ninth month as a paid contributor and I have been paid to work 18 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • I investigated CVE-2019-0227, axis and suggested to mark it as unimportant. I triaged CVE-2019-0227, ampache as no-dsa for Jessie.
  • DLA-1798-1. Issued a security update for jackson-databind fixing 1 CVE.
  • DLA-1804-1. Issued a security update for curl fixing 1 CVE.
  • DLA-1816-1. Issued a security update for otrs2 fixing 2 CVE.
  • DLA-1753-3. Issued a regression update for proftpd-dfsg. When the creation of a directory failed during sftp transfer, the sftp session would be terminated instead of failing gracefully due to a non-existing debug logging function.
  • DLA-1821-1. I’m currently testing the next security update of phpmyadmin. I triaged or fixed 19 CVE.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my twelfth month and I have been paid to work 8 hours on ELTS (15 hours were allocated). I intend to use the remaining hours in June.

  • I investigated three CVE in pacemaker, CVE-2018-16877, CVE-2018-16878, CVE-2019-3885 and found that none of them affected Wheezy.
  • ELA-127-1. Issued a security update for linux and linux-latest fixing 15 CVE.

Thanks for reading and see you next time.

11 June, 2019 08:27PM by Apo

Petter Reinholdtsen

More sales number for my Free Culture paper editions (2019-edition)

The first book I published, Free Culture by Lawrence Lessig, is still selling a few copies. Not a lot, but enough to have contributed slightly over $500 to the Creative Commons Corporation so far. All the profit is sent there. Most books are still sold via Amazon (83 copies), with Ingram second (49) and Lulu (12) and Machette (7) as minor channels. Buying directly from Lulu brings the largest cut to Creative Commons. The English edition has sold 80 copies so far, the French 59 copies, and the Norwegian only 8 copies. Nothing impressive, but nice to see that the work we put in is still being appreciated. The ebook edition is available for free from Github.

Quantity sold:
Title / language          2016      2016      2017      2017      2018      2018      2019
                          jan-jun   jul-dec   jan-jun   jul-dec   jan-jun   jul-dec   jan-may
Culture Libre / French       3         6        19        11         7         6         7
Fri kultur / Norwegian       7         1         0         0         0         0         0
Free Culture / English      14        27        16         9         3         7         3
Total                       24        34        35        20        10        13        10

It is fun to see the French edition being more popular than the English one.

If you would like to translate and publish the book in your native language, I would be happy to help make it happen. Please get in touch.

11 June, 2019 02:05PM


Bits from Debian

DebConf19 welcomes its sponsors!


DebConf19 is taking place in Curitiba, Brazil, from 21 July to 28 July 2019. It is the 20th edition of the Debian conference and organisers are working hard to create another interesting and fruitful event for attendees.

We would like to warmly welcome the first 29 sponsors of DebConf19, and introduce you to them.

So far we have three Platinum sponsors.

Our first Platinum sponsor is Infomaniak. Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware).

Next, as a Platinum sponsor, is Google. Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware. Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner.

Lenovo is our third Platinum sponsor. Lenovo is a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office solutions and data center solutions. This is their first year sponsoring DebConf.

Our Gold sponsor is Collabora, a global consultancy delivering Open Source software solutions to the commercial world. Their expertise spans all key areas of Open Source software development. In addition to offering solutions to clients, Collabora's engineers and developers actively contribute to many Open Source projects.

Our Silver sponsors are: credativ (a service-oriented company focusing on open-source software and also a Debian development partner), Cumulus Networks (a company building web-scale networks using innovative, open networking technology), Codethink (specialists in system-level software infrastructure supporting advanced technical applications), the Bern University of Applied Sciences (with over 6,800 students enrolled, located in the Swiss capital), Civil Infrastructure Platform (a collaborative project hosted by the Linux Foundation, establishing an open source “base layer” of industrial grade software), \WIT (offering a secure cloud solution and complete data privacy via Kubernetes encrypted hardware virtualisation), Hudson-Trading (a company researching and developing automated trading algorithms using advanced mathematical techniques), Ubuntu (the Operating System delivered by Canonical), NHS (with a broad product portfolio, they offer solutions, amongst others, for data centres, telecommunications, CCTV, and residential, commercial and industrial automation), rentcars.com (which helps customers find the best car rentals from over 100 rental companies at destinations in the Americas and around the world), and Roche (a major international pharmaceutical provider and research company dedicated to personalized healthcare).

Bronze sponsors: 4Linux, IBM, zpe, Univention, Policorp, Freexian, globo.com.

And finally, our Supporter level sponsors: Altus Metrum, Pengwin, ISG.EE, Jupter, novatec, Intnet, Linux Professional Institute.

Thanks to all our sponsors for their support! Their contributions make it possible for a large number of Debian contributors from all over the globe to work together, help and learn from each other in DebConf19.

Become a sponsor too!

DebConf19 is still accepting sponsors. Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf19 website at https://debconf19.debconf.org.

11 June, 2019 12:20PM by znoteer and Laura Arjona Reina

June 09, 2019

hackergotchi for Keith Packard

Keith Packard

snek-1.0

Snek 1.0

I've released version 1.0 of Snek today.

Features

  • Python-inspired. Snek is a subset of Python: learning Snek is a great way to start learning Python (see the small example after this list).

  • Small. Snek runs on an original Arduino Duemilanove board with 32kB of ROM and 2kB of RAM. That's smaller than the Apollo Guidance Computer.

  • Free Software. Snek is licensed under the GNU General Public License (v3 or later). You will always be able to get full source code for the system.
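
Since Snek is billed as a subset of Python, ordinary beginner-level Python carries over directly. The snippet below is plain Python 3; that it also runs unchanged on a Snek board is an assumption on my part, not something verified here:

# Plain Python 3 using only basic constructs (a function, a while loop,
# print); assumed, not verified, to also be accepted by Snek.
def count_up(limit):
    i = 0
    while i < limit:
        print(i)
        i = i + 1

count_up(5)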

Ports

Hosts

Documentation

Read the Snek manual online or in PDF form:

09 June, 2019 10:48PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

#22: Using Rocker and PPAs for Fun and Profit

Welcome to the 22nd post in the reasonably rational R recommendations series, or R4 for short.

This post premieres something new: a matching video in lightning talk style:

The topic is something we had mentioned a few times before in this r^4 blog series, for example in this post on finding deb packages as well as in this post on binary installations. Binaries rock, where available, and Michael Rutter’s PPAs should really be known and used more widely. Hence the video and supporting slides.

09 June, 2019 06:18PM

littler 0.3.8: Several nice new features

max-heap image

The ninth release of littler as a CRAN package is now available, following in the thirteen-ish year history of the package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R and predates Rscript. And it is (in my very biased eyes) better as it allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It also has always loaded the methods package, which Rscript only started doing more recently.

littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH.

A few examples are highlighted at the Github repo, as well as in the examples vignette.

This release extends the support for options("Ncpus") to the scripts install.r and install2.r (which has docopt support) making installation of CRAN packages proceed in parallel and thus quite a bit faster. We also added a new script to run tests from the excellent tinytest package, made the rhub checking scripts more robust to the somewhat incomplete latex support there, and updated some documentation.

The NEWS file entry is below.

Changes in littler version 0.3.8 (2019-06-09)

  • Changes in examples

    • The install.r and install2.r scripts now use parallel installation using options("Ncpu") on remote packages.

    • The install.r script has an expanded help text mentioning the environment variables it considers.

    • A new script tt.t was added to support tinytest.

    • The rhub checking scripts now all suppress builds of manual and vignettes as asking for working latex appears to be too much.

  • Changes in package

    • On startup checks if r is in PATH and if not references new FAQ entry; text from Makevars mentions it too.
  • Changes in documentation

    • The FAQ vignette now details setting r to PATH.

CRANberries provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page. The code is available via the GitHub repo, from tarballs and now of course all from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

09 June, 2019 04:49PM

Giovanni Mascellani

DQIB, the Debian Quick Image Baker

Debian supports (either officially or unofficially) a lot of architectures, which is of course a nice thing. Sometimes you want to play with some exotic architecture you are not familiar with, or you want to debug a problem with that architecture, but you do not have a computer implementing that architecture. Fortunately QEMU is able to emulate most of the architectures supported by Debian (ia64 being an exception), however it can be difficult to install it or to find ready-to-use images on the Internet (there are some, but usually they are quite a few years old). Let's also say that for some reason you cannot or do not want to use the Debian porterboxes (maybe you are not a DD, or you want to mess up with the network, or you want to be root). What do you do?

Mostly for the fun of hacking on some exotic architectures, I tried to brew together a little script, the Debian Quick Image Baker (DQIB). It is basically a wrapper that calls qemu-debootstrap with the right options (where "right" means "those that I have experimentally found to work"), with some thin icing layer on top. qemu-debootstrap is basically another wrapper on top of debootstrap, which of course does the heavy lifting, and qemu-user-static, that allows debootstrap to run executables for foreign architectures.
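
As a rough illustration of the kind of command DQIB wraps (this is not DQIB's actual code; the architecture, suite, target directory and mirror are made-up example values, and the call needs to run as root):

import subprocess

# Hypothetical example values; the real script picks the options it has
# experimentally found to work for each architecture.
subprocess.run(
    ["qemu-debootstrap", "--arch=arm64", "unstable",
     "rootfs-arm64", "http://deb.debian.org/debian"],
    check=True,
)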

With DQIB you can quickly create working images for most Debian official architectures (i386, amd64, mips, mipsel, mips64el, armhf, arm64, ppc64el). s390x works, but requires a little workaround because of a little bug that was fixed in recent QEMU versions. Images for armel can be created, but the only Linux kernel offered by Debian for armel does not work on any QEMU machine. I don't know of a workaround here. I would also like to support non-official architectures, but this is work in progress. For all the non-official architectures, either qemu-debootstrap fails for some reason, or I cannot find the right options to make the Debian-distributed kernel run (except for riscv64, where I know how to make the kernel work, but it requires some non-trivial changes to the DQIB script; however, the riscv64 panorama is very dynamic and things could change in very little time).

You can either clone the repository and run DQIB on your computer (check out the README), or download pre-baked images regenerated weekly by a CI process (which include the right command line to launch QEMU; see above for the definition of "right").

(You might ask why this is hosted on Gitlab.com instead of the Debian Developer's obvious choice. The reason is that the artifacts generated by the CI are rather large, and I am not sure DSA would be happy to have them on their servers.)

Have fun, and if you know how to support more architectures please let me know!

09 June, 2019 01:00PM by Giovanni Mascellani

hackergotchi for Jonathan McDowell

Jonathan McDowell

NIDevConf 19 slides on Home Automation

The 3rd Northern Ireland Developer Conference was held yesterday, once again in Riddel Hall at QUB. It’s a good venue for a great conference and as usual it was a thoroughly enjoyable day, with talks from the usual NI suspects as well as some people who were new to me. I finally submitted a talk this year, and ended up speaking about my home automation setup - basically stringing together a bunch of the information I’ve blogged about here over the past year or so. It seemed to go well other than having a bit too much content for the allocated time, but I got the main arc covered and mostly just had to skim through the additional information. I’ve had a similar talk accepted for DebConf19 this Summer, with a longer time slot that will allow me to go into a bit more detail about how Debian has enabled each of the pieces.

Slides from yesterday’s presentation are below; if you’re a regular reader I doubt there’ll be anything new and it’s a slide deck very much intended to be talked around rather than stand alone so if you weren’t there they’re probably not that useful. I believe the talk was recorded, so I’ll update this post with a link once that’s available (or you can check the NIDevConf Youtube channel yourself).

Note that a lot of the slides have very small links at the bottom which will take you to either a blog post expanding on the details, or an external reference I think is useful.

Also available for direct download.

09 June, 2019 10:50AM

June 08, 2019

Thorsten Alteholz

My Debian Activities in May 2019

FTP master

Nothing changed compared to last month, so this was again a quiet month. I only accepted 126 packages and rejected 15 uploads. The overall number of packages that got accepted was 156.

Debian LTS

This was my fifty-ninth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 18h. During that time I did LTS uploads or prepared security uploads of:

  • [DLA 1783-1] atftp security update for two CVEs
  • [DLA 1803-1] php5 security update for three CVEs
  • [DLA 1807-1] vcftools security update for three CVEs
  • [DLA 1811-1] miniupnpd security update for six CVEs

I also helped the maintainer of lemonldap-ng to create his DLA 1791-1. Further, I prepared packages of bind9 and wpa for testing, but both failed miserably in the wild, so I have to start from scratch here.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the twelfth ELTS month.

During my allocated time I uploaded:

  • ELA-120-1 of php5 for one CVE
  • ELA-122-1 of curl for one CVE

As with LTS, the bind9 package did not really work, thanks to Roberto C. Sánchez for telling me this.

I also did some days of frontdesk duties.

Other stuff

I uploaded a new upstream version of …

I uploaded a new package for …

On my Go challenge I uploaded golang-github-joyent-gosign, golang-golang-x-xerrors, golang-gopkg-ldap.v3 and golang-github-ovh-go-ovh.

08 June, 2019 11:47AM by alteholz

Emmanuel Kasper

PowerShell on Debian

I heard some time ago that Microsoft released their interactive and scripting language PowerShell under an open source license (MIT), but I completely missed that they were providing a repository and ready-to-use packages for your favorite distribution.

Anyway an apt-get away and that's it:



New-Object net.sockets.tcpclient("libera.cc", 80) opens a TCP connection to a target host, a quick way to test if a port is open (look for Connected: True for a successful socket creation).

08 June, 2019 11:18AM by Emmanuel Kasper (noreply@blogger.com)

June 07, 2019

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

Sinonym

I’d like to use “sinonym” as another word for an immoral act. Or perhaps to refer to the Chinese name for something. Sadly, I think it might just be another word for another word.

07 June, 2019 06:46PM by Benjamin Mako Hill

hackergotchi for Norbert Preining

Norbert Preining

Nessie Mystery: Finally Solved

The long standing mystery of Nessie, the Monster of Loch Ness, has finally been resolved! And that by my daughter of three years!

In a lovely present my daughter got recently in Berlin, a book full of sheep, in particular Shaun the Sheep (in German a so called “Wimmelbuch”), my daughter spotted Nessie on one of the images. It is clearly to be seen in the above image.

And with Sherlock Holmes worthy detectiveness, she realized the Nessie is in fact a sheep, and even more, it is Shaun. The reason is quite simple:

Both have this very strange mouth!

Finally the world can rest in peace.

07 June, 2019 01:04PM by Norbert Preining

Candy Tsai

Outreachy Week 1 – Week 3: Working Remotely is Hard

Time flies! I am already into the 3rd week of the internship, which is also a perfect time for a retrospective. This is an honest share to what working remotely has been like for me and also a report to record what I’ve been doing these three weeks.

About Working Remotely

A walkthrough of my first few days

Woke up in the morning with a sense of guilt. It felt extremely weird not having to rush through the morning chores and get onto the train for work. At 10 I finally sat down on the couch in the living room, stared at the TV in front of me. Without much thought I turned it on and when it got turned off, it was already noon. “What the ****!”, my head was banging inside as more guilt poured into me.

“I really can’t go on like this”, my brain whispered to myself. Finally, I got up and cleaned out a room for “work”. Whenever I sit at this spot, it means I cannot do anything else but work.

Home used to be a place where only relaxation took place, so it was super hard to concentrate on a task for more than one hour. I really had to physically clear out a space that’s exclusive for work, and while at it I should do nothing else but work. Without a doubt it was guilt that pushed me forward during the first week.

11 hours of time difference

I have two mentors (terceiro and elbrus) and we live in three completely different time zones. The weekly sync meeting took place at 19:00 for me, which is 8:00 for terceiro and 13:00 for elbrus. Most of what I’m working on relates to terceiro, so that meant we have 11 hours of time difference.

To get through this, I made a decision to cut my time in half towards the end of week 2. I will work 6 hours during the day and 2 hours around midnight. This way, if I have to discuss or ask questions, I can have two chances each day. My mentors also have full-time jobs, so I wouldn’t want to trouble them too much (but I still bump into issues every now and then).

Working out a system

I admit that I am not a person with good self control. In order to commit 40 hours to work each week, this is the current strategy I have started using at week 2.

  • Log what I did every day
  • Do not eat at “the working spot”
  • Make sure I have “off” hours

Log what I did every day

I tried to write a todo list every day, but that didn’t work out well for me. Sometimes I do get stuck on stuff, and that gives me a lot of pressure for not getting things “done”. At the end of the day, my morale got low and then the next day I started with no energy.

By logging what I did every day, especially the obstacles I bumped into and how it got solved kept me energetic, because I know I am learning something every day. When you didn’t do anything that day, meaning that the log is completely empty, guilt will build up and make sure you do something before the day ends.

Do not eat at “the working spot”

Not doing anything else at “the working spot” was actually quite easy except for eating. I thought I could get more done when I ate at my desk, but that was totally wrong. Before working remotely, taking a break after lunch was my habit. It’s not hard to imagine that after I finished lunch at “the working spot”, I started fumbling around and that put me into a late start for the afternoon.

Make sure I have “off” hours

This was the most difficult of all! “Decide for yourself when to work and when not to work” was probably a slogan to get people sold on working remotely. When there are no hard lines on getting to work and getting off work, it often becomes working a bit the whole day. You feel like you are working or thinking about the project 24 hours a day, but it doesn’t seem like you have done much for a 24-hour workload.

Finally, after three weeks of adjustments, I think I can see the silver lining of how this is going to work. Hope all goes well until the end of the internship!

Week 1: Vagrant

During the application process, quite a few applicants were stuck setting up the development environment for debci. I would like to improve the experience. At first, I used docker because I was more familiar with it. After finding an empty VagrantFile in the repository, I decided to try something new and went with vagrant. I learned some shell scripting and user permissions during the process, and the biggest find was to use the debian/contrib-stretch64 box rather than debian/stretch64. The contrib version contains the guest additions.

Week 2: Starting First Tasks

I worked on opening a self service section for users to requests tests themselves this week. Got stuck a little bit every step of the way, but not really stuck. Interesting finds that I would like to log.

  • For static file generation, create a symlink in the public directory to the file to be generated in public/data/.html and add it to bin/debci-generate-html
  • Sinatra automatically loads layout.erb if it exists, and you can change set :views, <PATH> to change the default directory to store the erb files
  • In order to use Ruby to print to stdout when using foreman start you have to add $stdout.sync = true before the print runs
  • Use the general test API in order to fulfill all the user stories
  • Decide to use Ruby to generate dynamic HTML through erb templates, because it could be more customizable
  • Use Sinatra::ContentFor to separate JavaScript files into the templates, and create dummy functions in lib/debci/html.rb for static file generation to work

Before the internship, I worked as a front end developer, which means JavaScript, JavaScript, and more JavaScript these days. It was interesting to find out that debci would also like the site to work perfectly without JavaScript. This got me thinking about all those times we just neglected users who don’t enable JavaScript. I got curious and checked big sites like Facebook, Twitter and Gmail, and found that you will get a big fat warning about JavaScript being turned off. Although you get this big fat warning, they all left a “non-JS workable” version available, which is an interesting find.

Week 3: Searching, Pagination & UX

This week, besides the Self Service section, I also tried fixing the search package part of the index page. Currently it just takes what the user inputs and redirects them to a page which may or may not exist.

Things I did or should log down for this week:

We discussed and decided to work on the non-JS version of the Self Service section. With JavaScript enabled, the UX becomes more complicated and requires more time to work out the whole concept, and I would like things to work first since the section doesn’t exist yet.

That’s all for now, these weekly updates will start to come “weekly” after this post.

07 June, 2019 05:09AM by Candy Tsai

June 06, 2019

Reproducible builds folks

Reproducible Builds in May 2019

Welcome to the May 2019 report from the Reproducible Builds project! In our reports we outline the most important things which we have been up to in and around the world of reproducible builds & secure toolchains over the past month.

As a quick recap, whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed to end users pre-compiled. The motivation behind the reproducible builds effort is to ensure no malicious flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing third parties to come to a consensus on whether a build was compromised.

In this month’s report, we will cover:

  • Media coverageMore supply chain attacks, Reproducible Builds at conferences, etc.
  • Upstream newsMozilla updates their add-on policy, etc.
  • Distribution workDebian Installer progress, openSUSE updates.
  • Software developmentA try.diffoscope.org rewrite, more upstream patches, etc.
  • Misc newsFrom our mailing list, etc.
  • Getting in touchHow to contribute, contact details, etc.

If you are interested in contributing to our project, please visit our Contribute page on our website.


Media coverage


Upstream news

The IPFSPackage Managers Special Interest Group” is gathering research around package management, much of which is relevant to the Reproducible Builds effort.

Atharva Lele plans to work on reproducible builds for the Buildroot embedded Linux project as part of Google Summer of Code, ensuring that two instances of buildroot running with the same configuration for the same device yield the same result.

Mozilla’s latest update to the Firefox add-on policy now dictates that add-ons may contain “transpiled, minified or otherwise machine-generated code” but Mozilla needs to review a copy of the human-readable source code. The author must provide this information to Mozilla during submission along with instructions on how to reproduce the build.


Distribution work

Bernhard M. Wiedemann posted his monthly Reproducible Builds status update for the openSUSE distribution.

Holger Levsen filed a wishlist request requesting that Debian’s .buildinfo build environment specification documents from the Debian Long Term Support (LTS) project are also distributed by the build/archive infrastructure so that the reproducibility status of these security packages can be validated.

There was yet more progress towards making the Debian Installer images reproducible. Following on from last month, Chris Lamb performed some further testing of the generated images and requested a status update which resulted in a call for testing the possible removal of a now-obsolete workaround that is hindering progress.

68 reviews of Debian packages were added, 30 were updated and 11 were removed this month, adding to our knowledge about identified issues. Chris Lamb discovered, identified and triaged two new issue types, the first identifying randomness in Fontconfig .uuid files [] and another randomness_in_output_from_perl_deparse.

Finally, GNU Guix announced its 1.0.0 release.


Software development

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream wherever possible. This month, we wrote a large number of such patches, including:

Finally, Vagrant Cascadian submitted a patch for u-boot boot loader fixing reproducibility when building a new type of compressed image. This was subsequently merged in version 2019.07-rc2.

diffoscope

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages. It does not define reproducibility, but rather provides helpful and human-readable guidance for packages that are not reproducible, rather than relying on essentially-useless diffs.

  • Chris Lamb:

    • Support the latest PyPI package repository upload requirements by using real reStructuredText comments instead of the raw directive [] and by stripping out manpage-only parts of the README rather than using the only directive [].

    • Fix execution of symbolic links that point to the bin/diffoscope entry point in a checked-out version of our Git repository by fully resolving the location as part of dynamically calculating Python’s module include path. []

    • Add a Dockerfile [] with various subsequent fixups [][][].

    • Published the resulting Docker image in diffoscope’s container registry and updated the diffoscope homepage to provide “quick start” instructions on how to use diffoscope via this image.

  • Mattia Rizzolo:

    • Uploaded version 115 to Debian experimental.
    • Adjust various build and test-dependencies, including specifying the ffmpeg video encoding tool/library and the Black code formatter [] in the build-dependencies [] and reinstating the oggvideotools and procyon-decompiler as test dependencies, now that they are no longer buggy [], etc.
    • Make the Debian autopkgtests not fail when a limited subset of “required tools” are temporarily unavailable. [][][]

In addition, Santiago Torres altered the behaviour of the tests to ensure compatibility with various versions of file(1) [] and Vagrant Cascadian added support for various external tools in GNU Guix [] and updated the version of diffoscope in that distribution [].

try.diffoscope.org

Chris Lamb made a large number of changes to the web-based (“no installation required”) version of the diffoscope tool, try.diffoscope.org:

Test framework

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. The following changes were done in the last month:

  • Holger Levsen made the following (Debian-related changes):

    • Reduce the number of cron(8) mails for synchronising .buildinfo files from eight to one per day. []
    • Run rsync2buildinfos.debian.net script every other hour now that it just produces one mail per day. [][]
    • Execute the package scheduler every 2 hours (instead of 3). []
    • Switch the Codethink and OSUOSL nodes to use our updated email relay system. [][]
    • Deal with the (rare) cases of .buildinfo files with the same name. [][]
    • Save and mail the package scheduler results once a day instead of mailing ~8 times a day. []
  • In addition, Holger Levsen made the following distribution-agnostic changes:

    • Notify the #reproducible-builds channel (not #debian-reproducible) about Jenkins rebooting and send notifications about offline hosts to that channel as well. [][]
    • Prevent the Jenkins log from growing to over 100G in size. []
  • Mattia Rizzolo:

    • Use a special code so that remote builds can abort themselves by passing back the command to the “master”. [][][][]
    • Fix a pattern matching bug to ensure all “zombie” processes are found. []
    • flake8 the chroot-installation.yaml.py file. []
    • Set a known HTTP User Agent for Git, so that servers can recognise us. []
    • Allow network access for the debian-installer-netboot-images Debian package. []

Finally, Vagrant Cascadian removed the deprecated --buildinfo-id from the pbuilder(8) configuration []. Holger Levsen [][][][][][], Mattia Rizzolo [] and Vagrant Cascadian all performed a large amount of build node maintenance and system & Jenkins administration.

Project website

Chris Lamb added various fixes for larger/smaller screens [], added a logo suitable for printing physical pin badges [] and refreshed the opening copy text on our SOURCE_DATE_EPOCH page.

Bernhard M. Wiedemann then documented a more concise C code example for parsing the SOURCE_DATE_EPOCH environment variable [][] and Holger Levsen added a link to a specific bug blocking progress in openSUSE to our Who is involved? page [].
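
For readers unfamiliar with the variable: the core of the specification is simply that if SOURCE_DATE_EPOCH is set, a build should use it (a Unix timestamp in seconds) instead of the current time for any timestamps it embeds. A minimal sketch of that idea in Python (my own illustration, not the C example referenced above):

import os
import time

def build_timestamp():
    # Honour SOURCE_DATE_EPOCH if the build environment sets it, so that
    # timestamps embedded in the build output become reproducible;
    # otherwise fall back to the current time.
    epoch = os.environ.get("SOURCE_DATE_EPOCH")
    if epoch is not None:
        return int(epoch)
    return int(time.time())

print(time.gmtime(build_timestamp()))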


Misc news

Lastly, Sam Hartman, the current Debian Project Leader, wrote on the debian-devel mailing list:

The reproducible builds world has gotten a lot further with bit-for-bit identical builds than I ever imagined they would. []

Thanks, Sam!


Getting in touch

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can also get in touch with us via:



This month’s report was written by Arnout Engelen, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo and Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

06 June, 2019 12:50PM

hackergotchi for Debian GSoC Kotlin project blog

Debian GSoC Kotlin project blog

Converting build files to Groovy; week 2 update

I spent the first two weeks on updating build files of Kotlin to groovy so that we can reduce the dependency on kotlin-dsl while packaging Kotlin.

task("dist") is the task that we call in order to build Kotlin 1.3.30 so it would be enough to translate and deal with only those subprojects which are involed in the "dist" task graph. I wrote this code here find out exactly which tasks from which subprojects are being called; the build files of these subprojects are also shown.

There was a total of about 83 subprojects involved in the build process, and out of these about 67 had build files written in kotlin-dsl. During this week I translated about 40 of those build files, so only 27 more remain. I have also translated the root project's build file. My work can be seen here.

The build logic for the build files is written in Kotlin and placed in the $rootDir/buildSrc directory. Kotlin supports writing custom extension functions like fun String.customfunction(), but Groovy doesn't support this behaviour, so I had to introduce intermediate functions in the build logic so that they can be invoked from within the Groovy build files.

I am also ignoring all publishing logic, as well as test logic that looks like it would take a lot of time to translate. However, I do translate test logic as long as it doesn’t take up much time. Once I finish the translation work, most likely by next week, I will go ahead and try to build the project with only the relevant subprojects as child projects.

Also one thing I noticed is that I forgot to mention the source of intellij-core and the version we need to package Kotlin. Here is the source of intellij-core 183.5153 which is the version we need. Here is a link to the work I have done so far. You can find me as m36 or m36[m] on #debian-mobile and #debian-java in OFTC.

I’ll try to maintain this blog and post the major updates weekly.

06 June, 2019 11:51AM by Saif Abdul Cassim

June 05, 2019

hackergotchi for Steve Kemp

Steve Kemp

Radio gaga, the odyssey of automation

Recently I wanted to monitor the temperature and humidity of a sauna. I figured the safest way to go would be to place a battery-powered temperature/humidity sensor on a shelf, the kind of sensor that is commonly available on AliExpress for €1-5 each.

Most of the cheap "remote sensors" transmit their data over a short 433Mhz radio-transmission. So I just assumed it'd be possible to work something out.

The first step was to plug an SDR dongle into my laptop; that worked just fine when testing, and I could hear "stuff". But of course a sauna is wood-lined, and beyond a tiled shower area. In practice I just couldn't receive the signal if my laptop lived in its usual location.

So I came up with a fall-back plan:

  • Wire a 433Mhz receiver to an ESP8266 device.
  • Sniff the radio-transmission.
    • Decode it
    • Inject into an MQ-host, via WiFi

Since the receiver could be within 10m of the transmitter I figured that would work fine - and it did. The real problem came when I tried to do this. There are a few projects you can find for acting as a 433Mhz -> WiFi bridge and none of them understood the transmission(s) my sensor was sending.

In the end I had to listen for packets, work out the bit-spacing, and then later work out the actual contents of the packets. All by hand.

Anyway the end result is that I have something which will sniff the packets from the radio-transmitter, correctly calculate the temperature/humidity values and post them to MQ. From MQ a service polls the values and logs them to SQLite for later display (a minimal sketch of that consumer follows the list below). As a bonus I post to Slack the first time the temperature exceeds 50° a day:

  • "Hot? It's like a sauna in here."
  • "Main steam on, somebody set us up the beer."
  • etc.
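
For the curious, the MQ-to-SQLite side can be very small. The following is a minimal sketch of that consumer, not Steve's actual code: the broker address, topic name, database schema and the assumption that the payload is a bare numeric value are all illustrative guesses:

import sqlite3
import paho.mqtt.client as mqtt

db = sqlite3.connect("sauna.db")
db.execute("CREATE TABLE IF NOT EXISTS readings ("
           "ts DATETIME DEFAULT CURRENT_TIMESTAMP, topic TEXT, value REAL)")

def on_message(client, userdata, msg):
    # Assumes the payload is a bare number such as b"52.5".
    db.execute("INSERT INTO readings (topic, value) VALUES (?, ?)",
               (msg.topic, float(msg.payload)))
    db.commit()

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("sauna/#")
client.loop_forever()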

Next week I'll talk about how I had a similar (read: identical) problem reacting to the 433Mhz transmission triggered by a doorbell. None of the gateways I looked at logged a thing when the button was pressed. So I'll have to save the transmission via rtl_433, analyze it with Audacity, and do the necessary.

For reference these are the three existing firmwares/solutions/projects I tried; on a Wemos Mini D1:

  • https://github.com/xoseperez/espurna
  • https://github.com/arendst/Sonoff-Tasmota
    • Liked this one best for the UI.
  • https://github.com/1technophile/OpenMQTTGateway
    • This was sparse, but recommended too.
    • Tried two different modules: RF, RFPilight, wasn't clear what the difference was, no change anyway.

05 June, 2019 05:30PM

hackergotchi for Gregory Colpart

Gregory Colpart

Mini-DebConf Marseille 2019

We’ve had the idea to organize a mini-DebConf in Marseille when we were in Toulouse in 2017. After participating in many DebConfs (mini or not), getting into organizing such an event seemed a good way to give back and contribute to the Debian project.

Fast-forward to end of 2018. We’ve gathered a few motivated people and settled for a 50/70 participants event on May 25th/26th. We’ve chosen an appropriate venue in down-town Marseille. I won’t delve into organization details (call for speakers, sessions recording, scheduling…) since we plan to share our experience in a rather detailed “Howto organize a mini-DebConf” in the coming days/weeks.

It started on Wednesday 22nd: the wonderful DebConf video team arrived for a 3-day sprint. We had prepared a space for them to work at the venue. This gave a lot more time to set up the conference room than usual. But the main goal of the sprint was to teach new members of the team how to set everything up and be independent, in preparation for the upcoming mini-DebConf Hamburg a couple of weeks later.

On Friday 24th, the french localization team arrived for a 1-day sprint. Most of them had never met in person! They said this gave a huge boost to the team.

Most of the participants arrived Friday afternoon. The “Front Desk” was ready to welcome them with their badge and branded t-shirt. For ecological reasons, we’ve decided to restrict as much as possible useless goodies, like bags, pens or sponsors marketing material. A PDF booklet had been sent for participants to print at home if they wanted. The Debian France team had the usual goodies to sell at the front desk: mugs, hats, durable tote bags, stickers of all sizes…

On Friday evening, we’ve had a mini CheeseWineBOF with various local food (cheese, wine, pastis, olives, fruits, vegetables…) and some brought by participants. Thanks to Elena for the great italian cheese and also Judith and Tzafrir!

While the video team was struggling with a faulty cable making a ground loop, everyone was then invited to the Provence Linux Users Group meetup. Florence Devouard – a prominent member of the Wikipedia community – gave us a very nice presentation of the Wikipedia and Wikimedia history. The evening ended with a local tradition/specialty: pizzas of Marseille. The conference was already on the right track.

Saturday morning marked the official start of mini-DebConf! We opened the doors at 08:30 with a welcoming breakfast: home-made cookies, fresh coffee beans, fruit juice… During the whole weekend we’ve offered fresh, local, home-made vegetarian food. And with the goal to minimize waste we’ve chosen not to use any disposables. Besides durable dishes/glasses/cutlery, a lot of ceramic mugs were available with tape and pens to customize them and keep them around.

75 people registered. This was the maximum capacity of the venue. 73 people really came, which is a very good attendance rate, especially since it was an all-free conference, where people usually register without being sure to attend. We even had a few non-registered people.

Jérémy gave the welcome talk to a full room of Debian enthusiasts, with an overview of the schedule, presentation of the sponsors, reminder of the Code of Conduct and the photo policy, and various useful/important information…

The first session started at 10 am with a quite technical talk. Cyril Brulebois – Debian Installer release manager – presented how to visualize the migration of packages to Testing. He proposed a tool to visualize dependencies and “excuses” to help understand why a certain package might be blocked from migrating to Testing.

Then, Peter Green – co-founder of the Raspbian project – presented autoforwardportergit the tool he has made and is using to automate the creation of modified Debian packages for Raspbian.

After the coffee break, Raphaël Hertzog told us about 5 years of Debian LTS funding and what’s next. He explained the history of Debian LTS and how it works : managing the sponsors, spreading the workload between developers, the Extended LTS offer, the infrastructure… How to fund contributions in the Debian community is a hot topic which sparked a lot of questions, and even a Lightning Talk on Sunday.

During the lunch break – while the video team was training new volunteers to their tools – everyone was invited to a vegetarian (and mostly vegan) mediterranean buffet. We are very proud of offering home-made food, with local fresh products. Nothing was wasted and everyone was satisfied.

Benoît had organized a KSP (Key Signing Party) that took place after lunch. Approximately 20 people exchanged and verified their GPG key, to broaden their web of trust.

The second session started with Elena “of Valhalla” Grandi, presenting the ActivityPub protocol for federated social networks like Mastodon, Pixelfed, etc.

Coming from Madrid, Spain, Laura Arjona Reina presented the Debian Welcome Team and their work towards new participants in the Debian Project.

The Debian France organization was presented by Denis Briand – its newly elected president – to introduce the various projects and intended actions (eg. more frequent mini-DebConf events in France). Many people – including french people – discovered the existence of the organization and its important role in the whole Debian community as one of the few Trusted Organizations.

We continued with Frédéric Lenquette presenting « Hardening and Secure Debian Buster ». He explored all the opportunities to secure a Debian 10 setup.

For the final talk of the day, part of the french localization team (Thomas Vincent, Jean-Philippe Mengual and Alban Vidal) introduced us to their work : the workflow, what can be done by newcomers, etc.

Saturday evening – end of the first day – all the participants were invited to a social event at la Cane Bière, a beer place next to the venue, with a free drink for everyone (materialized by a token made from reused PCB). A few groups then formed and went to various restaurants.

Sunday morning, after another healthy breakfast, part of the DebConf video team (Nicolas Dandrimont and Louis-Philippe Véronneau) presented their amazing hardware and software setup. Everything is documented and as much free software as possible.

A series of 6 Lightning Talks, organized by Eda covered many various technical and non-technical topics : « kt-update » (Jean-François Brucker), « the Debian Constitution » (Judit Foglszinger), « Elections, Democracy, European Union » (Thomas Koch), « voting methods in Debian » (Raphaël Hertzog), « encrypt the whole disk with LUKS2 » (Cyril Brulebois), « OMEMO – the big fish in the Debian bowl » (Martin) and « Paye ton Logiciel Libre » (Victor).

After some closing remarks by Jérémy, it was already time to pack the video gear. A “brown bag lunch” was available. Some wanted to stay at the venue, to talk, hack… Some were already on their way back home. Many others had registered for a mini Day Trip ; they went down the main avenue of Marseille, to board for the Frioul islands for a good walk in the sun and a quick swim in the sea.

We sincerely want to thank a lot of people for this amazing weekend. Thank you to all 75 participants who came from all around the world (Canada, USA, Israel, UK, Germany, Italy, Spain, Switzerland, Belgium, Australia…). Thank you to the great video team who makes an amazing job capturing and streaming the content of many Debian events. Thank you to Debian France for organizing the event and to the sponsors : Bearstech, Logilab and Evolix. Thank you to La Maison du Chant for the great venue. Thank you to Valentine and Celia for the delicious and much complimented food. Thank you to Florence Devouard for the nice presentation Friday night. Thank you to all the speakers for the time and effort they put to make great content. Thank you to all volunteers who helped making this a great community event : Tristan, Anaïs, Benoît, Juliette, Ludovic, Jessica, Éric, Quentin F. and Jérémy D., with a special mention to Eda, Moussa, Quentin L. and Alban for their involvement and dedication and finally thank you to Sabiha and Jérémy L. who jumped with me in this crazy adventure many months ago : you rock!

Twitter : https://twitter.com/MiniDebConf_MRS
Mastodon : https://mamot.fr/@minidebconf_mrs
Photos : https://minidebcloud.labs.evolix.org/apps/gallery/s/keMJaK5o3D384RA
Videos : http://meetings-archive.debian.net/pub/debian-meetings/2019/miniconf-marseille/

05 June, 2019 01:15PM by Gregory Colpart

June 04, 2019

Jonas Meurer

debian lts report 2019.05

Debian LTS report for May 2019

This month I was allocated 17 hours. I spent 15.25 hours on the following issues:

  • DLA 1766-1: OpenPGP signature spoofing in evolution. On this issue I actually spent way more time than expected during April. I carried over some of the remaining hours to May.
  • DLA 1778-1: Several vulnerabilities in symfony, a PHP web application framework.
  • DLA 1791-1: Several vulnerabilities in drupal7, a PHP web site platform.

04 June, 2019 05:24PM

Petter Reinholdtsen

Official MIME type "text/vnd.sosi" for SOSI map data

Just 15 days ago, I mentioned my submission to IANA to register an official MIME type for the SOSI vector map format. This morning, just an hour ago, I was notified that the MIME type "text/vnd.sosi" is registered for this format. In addition to this registration, my file(1) patch for a pattern matching rule for SOSI files has been accepted into the official source of that program (pending a new release), and I've been told by the team behind PRONOM that the SOSI format will be included in the next release of PRONOM, which they plan to release this summer around July.

I am very happy to see all of this fall into place, for use by the Noark 5 Tjenestegrensesnitt implementations.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

04 June, 2019 07:20AM

June 03, 2019

hackergotchi for Mike Gabriel

Mike Gabriel

My Work on Debian LTS/ELTS (May 2019)

In May 2019, I have worked on the Debian LTS project for 23.75 hours (as planned) and on the Debian ELTS project for another 10 hours (as planned) as a paid contributor.

LTS Work

  • Upload to jessie-security: 389-ds-base (DLA 1779-1), 1 CVE [1]
  • Upload to jessie-security: qt4-x11 (DLA 1786-1), 5 CVEs [2]
  • Upload to jessie-security: libav (DLA 1809-1), 2 CVEs [3]
  • Prepare a test-build for qemu [4]. Testing still pending.
  • Prepare a test-build for mupdf [5]. Testing still pending.
  • Triaging of open CVEs for 12 packages

ELTS Work

  • Dive deeply into questionable issues that were open for pacemaker.
    • CVE-2018-16877/pacemaker -> not affected
    • CVE-2018-16878/pacemaker -> ignored -> not affected
  • Upload to wheezy-lts: sqlite3 (ELA 123-1), 1 CVE [6]
  • Upload to wheezy-lts: glib2.0 (ELA 125-1), 1 CVE [7]

References

03 June, 2019 12:49PM by sunweaver

hackergotchi for Julien Danjou

Julien Danjou

Advanced Functional Programming in Python: lambda

Advanced Functional Programming in Python: lambda

A few weeks ago, I introduced you to functional programming in Python. Today, I'd like to go further into this topic and show you so more interesting features.

Lambda Functions

What do we call lambda functions? They are in essence anonymous functions. In order to create them, you must use the lambda keyword:

>>> lambda x: x
<function <lambda> at 0x102e23620>

In Python, lambda functions are quite limited. They can take any number of arguments; however, they can contain only a single expression and must be written on a single line.

They are mostly useful to be passed to high-order functions, such as map():

>>> list(map(lambda x: x * 2, range(10)))
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

This will apply the anonymous function lambda x: x * 2 to every item returned by range(10).

functools.partial

Since lambda functions are limited to a single line, they are often used to specialize a longer version of an existing function:

def between(number, min=0, max=1000):
    return max > number > min

# Only returns number between 10 and 1000
filter(lambda x: between(x, min=10), range(10000))

Our lambda is really just a wrapper around the between function with one of the arguments already set. What if we had a better way, without the various lambda limitations, to write that? That's where functools.partial comes in handy.

import functools
def between(number, min=0, max=1000):
    return max > number > min

# Only returns number between 10 and 1000
atleast_10_and_upto = functools.partial(between, min=10)
# Returns numbers between 10 and 1000
filter(atleast_10_and_upto, range(10000))

# Returns numbers between 10 and 20
filter(lambda x: atleast_10_and_upto(x, max=20), range(10000))

The functools.partial function returns a specialized version of the between function, where min is already set. We can store it in a variable, use it and reuse it as much as we want. We can pass it a max argument, as shown in the second part — using a lambda! You can mix and match those two as you prefer and as seems clearer to you.

Common lambda

There is a type of lambda function that is pretty common: the attribute or item getter. They are typically used as a key function for sorting or filtering.

Here's a list of 200 tuples containing two integers (i1, i2). If you want to use only i2 as the sorting key, you would write:

mylist = list(zip(range(40, 240), range(-100, 100)))

sorted(mylist, key=lambda i: i[1])

This works fine, but it makes you use a lambda. You could instead use the operator module:

import operator

mylist = list(zip(range(40, 240), range(-100, 100)))

sorted(mylist, key=operator.itemgetter(1))

This does the same thing, except it avoids using lambda altogether. Cherry-on-the-cake: it is actually 10% faster on my laptop.
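
The same trick works for the attribute getter mentioned earlier: operator.attrgetter replaces lambdas of the form lambda o: o.attr. A small self-contained illustration (the Point type here is just an example of mine):

import operator
from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])
points = [Point(3, -2), Point(1, 5), Point(2, 0)]

# Sort by the y attribute without writing lambda p: p.y
print(sorted(points, key=operator.attrgetter("y")))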

I hope that'll make you write more functional code!

03 June, 2019 10:19AM by Julien Danjou

June 02, 2019

Petter Reinholdtsen

The space rover coquine, or how I ended up on the dark side of the moon

A while back a colleague and friend from Debian and the Skolelinux / Debian Edu project approached me, asking if I knew someone that might be interested in helping out with a technology project he was running as a teacher at L'école franco-danoise - the Danish-French school and kindergarten. The kids were building robots, rovers. The story behind it is to build a rover for use on the dark side of the moon, and remote control it. As travel cost was a bit high for the final destination, and they wanted to test the concept first, he was looking for volunteers to host a rover for the kids to control in a foreign country. I ended up volunteering as a host, and last week the rover arrived. It took a while to arrive after it was built and shipped, because of customs confusion. Luckily we were able to fix it quickly with help from my colleagues at work.

This is what it looked like when the rover arrived. Note the cute eyes looking up on me from the wrapping

Once the robot arrived, we needed to track down batteries and figure out how to build custom firmware for it with the appropriate wifi settings. I asked a friend if I could get two 18650 batteries from his pile of Tesla batteries (he had them from the wreck of a crashed Tesla), so now the rover is running on Tesla batteries.

Building the rover firmware proved a bit harder, as the code did not work out of the box with the Arduino IDE package in Debian Buster. I suspect this is due to an unsolved license problem with arduino blocking Debian from upgrading to the latest version. In the end we gave up debugging why the IDE failed to find the required libraries, and ended up using the Arduino Makefile from the arduino-mk Debian package instead. Unfortunately the camera library is missing from the Arduino environment in Debian, so we disabled the camera support for the first firmware build, to get something up and running. With this reduced firmware, the robot could be controlled via the controller server, driving around and measuring distance using its internal acoustic sensor.

Next, with some help from my friend in Denmark, who checked the camera library into the gitlab repository for me to use, we were able to build a new and more complete version of the firmware, and the robot is now up and running. This is what the "commander" web page looks like after taking a measurement and a snapshot:

If you want to learn more about this project, you can check out the The Dark Side Challenge Hackaday web pages.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

02 June, 2019 09:55PM

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, May 2019

I was assigned 18 hours of work by Freexian's Debian LTS initiative and worked all those hours this month.

I released Linux 3.16.66, and then prepared and released Linux 3.16.67 with a small number of fixes. I backported the updated Linux 4.9 packages from Debian 9.9, uploaded them and issued DLA-1771.

I had a little advance notice of the MDS speculative execution flaws, and started backporting the mitigations for these to older stable branches, starting with a version for Linux 4.14. I backported to 4.9 (Debian stretch/jessie) first, then to 4.4 (CIP) and 3.16 (Debian jessie). The charge for this time was accordingly split between CIP and Freexian.

I backported the security update for Linux 4.9 from stretch to jessie and issued DLA-1787.

The backport of mitigations to Linux 3.16 took longest to finish, as the x86 kernel exit path was substantially rewritten between 3.16 and 4.4. I needed to apply the mitigation in multiple assembly-language routines rather than a single C function, and before that I needed to backport support for static_branch patching in assembly-language source files. I sent the changes out for review and testing as Linux 3.16.68-rc1, and as Debian packages on people.debian.org. Since no problems were found, I released Linux 3.16.68, uploaded updated packages, and issued DLA-1799.

02 June, 2019 06:39PM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

mini-DebConf Marseille 2019

I was in Marseille last week for the mini-DebConf the fine folks at Debian France organised and it was great! It was my first time there and I really enjoyed the city.

The venue was lovely and perfectly adapted to the size of the conference. The main auditorium was a joy to work in: blinds on the windows to minimize the sun glare, a complete set of stage lighting and plenty of space to set up our gear.

If you couldn't attend the conference, you can always watch the talks on our video archive.

The highlight of my trip was the daytrip to the nearby Frioul archipelago. Although we repeatedly got attacked by angry seagulls (they were protecting their chicks), the view from the south shore of the Pomègues Island was amazing. It was also the first time I went on a daytrip during a mini-DebConf and I think it should happen more often!

Graffiti in Frioul saying 'Beware of Seagulls'

If you ever are in Marseille and are looking to have a good time culinary-wise, I really recommend booking a table at La Boîte à sardine, a local fish restaurant. It's a little expensive, but the fish and shellfish we ate was definitely worth paying for.

For something a little less hard on your wallet, Le Resto Provençal serves amazing local Provençal cuisine at very reasonable prices. We went there with a large group and everyone really enjoyed their meal.

Finally, if you want to have a beer afterwards, La Cane Bière is one (if not the best) place in Marseille to drink craft beer. They have a wide selection of local beers on tap and also sell many bottles from all over Europe. Note that they close at 22:00 though, even during weekends.

View from the south shore of the Pomègues Island

Thanks again to the local team for hosting us, I really had a good time!

02 June, 2019 04:00AM by Louis-Philippe Véronneau

Utkarsh Gupta

Becoming a Debian Maintainer in 90 days!

NOTE: I know this should have been posted long, long back :(
However, life happened and I couldn’t. Anyway, here it goes..

I started contributing to open source around a year back and on 1st January 2019 to Debian, specifically (wasn’t really a new year resolution, though :P).
I’ll be honest here. The reason behind taking the “Debian road” was solely to distract myself from the mental abuse I was going through.

Raju was the person who started helping me out, both, personally and professionally. He’s the one who taught me packaging from scratch with utmost patience and kept answering all my stupid doubts :D
To be honest, if it weren’t for him, I wouldn’t have been here, at this position today.

Since I wanted to distract myself from various stuff, I learned things quickly and kept working, consistently.
I turned up on IRC every single day since then. Praveen became both my guru and my package sponsor. He kept uploading and I kept packaging. This went on for a month until my difficulty level was bumped. From basic Ruby gems and Node libraries, I was given gems and modules that had test failures to debug and had a weirdly different build system. This made me uncomfortable. I complained. To which, Praveen said and I quote,
"If you want to keep working on simple stuff, then it's not gonna help you move forward. And it's your loss. No one else would care. So it's your call."

There was probably no option there, was it? :P
I took it on. Struggled for a few days but it became normal and I made it through. Like they say, “It gets better :)”, it did!
I took on a little more challenging stuff, understood more concepts. Fixed test failures, RC bugs and learned a lot of stuff (still a lot, lot more to learn, though) in the process, like understanding the Debian release cycle, how the migration of packages takes place, setting up your own repositories, et al.

In this process, I also met another JS guru, Xavier. He not only corrected my mistakes and sponsored my packages, but also helped me in actually understanding a lot of things. From the mailing list, we started conversing over private mail threads and soon, in a span of 3 months, the thread stretched over to 300 mails!

In early March, I was told that I could apply to become a Debian Maintainer, provided I understood when to upload a package to experimental and when to unstable. Praveen gave me a few packages as a test for the same.
And luckily, I passed. This meant that the only part remaining was to fulfil the initial keysigning requirement, for which the Mini DebConf in Delhi was conveniently around the corner.

As it happened, Praveen, Abhijith, and Sruthi came to the Mini DebConf from Kerala and I got my keys signed by them! :D
Soon after, I applied to become a DM.

I was lucky enough to get 3 advocates, Praveen, Xavier, and Abhijith.
Here’s my NM Process (#605).
And in a few days, I realized that I had become the youngest Debian Maintainer in India \o/

I thank myself for being consistent, no matter what I started for :P
Also, much thanks to Sanyam. He really kept me going. Also, to Jatin and Sakshi.

Lastly, thanks to the Debian community. Debian has really been an amazing journey, an amazing place, and an amazing family. I am just hoping to make it to DebConf and meet all the people I adore \o/

Until next time.
:wq for today.

02 June, 2019 04:00AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

What have I been doing recently?

What have I been doing recently? I was reading up on the Zip file format and its history, which brings back so many memories. I wondered whether EOC (End of Central Directory record) parsing is deterministic or not.
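
The question, as far as I understand it, comes from how readers locate the End of Central Directory record: you scan backwards from the end of the file for its signature, and since the trailing archive comment may itself contain those signature bytes, different readers can settle on different candidates. Here is a rough Python sketch of that backward scan, purely as an illustration of the idea (not code from any particular implementation):

# Rough sketch: find every plausible End of Central Directory offset.
# The signature is 0x06054b50 ("PK\x05\x06" on disk); the record is 22
# fixed bytes plus a variable-length comment, so any offset whose
# comment length field reaches exactly to the end of the file is a
# candidate.
import struct

EOCD_SIG = b"PK\x05\x06"
EOCD_FIXED_LEN = 22

def find_eocd_candidates(data):
    candidates = []
    # The comment length field is 16 bits, so the record can start at
    # most 22 + 65535 bytes before the end of the file.
    start = max(0, len(data) - EOCD_FIXED_LEN - 0xFFFF)
    pos = data.rfind(EOCD_SIG, start)
    while pos != -1:
        if pos + EOCD_FIXED_LEN <= len(data):
            comment_len = struct.unpack_from("<H", data, pos + 20)[0]
            if pos + EOCD_FIXED_LEN + comment_len == len(data):
                candidates.append(pos)
        pos = data.rfind(EOCD_SIG, start, pos)
    return candidates

# More than one candidate means the archive can be read in more than
# one way, which is exactly the kind of non-determinism in question.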

02 June, 2019 12:40AM by Junichi Uekawa

Jaskaran Singh

GSoC Project Overview & Week 1

Overview:

Here’s a quick rundown on my project for this summer:

The Debian Patch Porting System aims to systematize and partially automate the security patch porting process.

The number of security vulnerability identifiers is quite large; these are relevant to specific distributions, organizations and applications. Each organization handles security vulnerabilities that are relevant to them in their own way. MITRE’s vulnerability identifier, called Common Vulnerabilities and Exposures (CVE), is global, and most advisories are somehow related to a CVE.

The purpose of the system is to unify all these algorithmically for easy patch finding, management and application. The system would be able to take any vulnerability as input and extract patches w/r/t that vulnerability. Patches can be collected by employing certain patch finding methods. Some of these methods are to crawl sites, trackers, and various distributions’ repositories. Along with that, general purpose information about that vulnerability and its equivalent identifiers for other organizations could also be collected to get the vulnerability’s complete profile. This profile could then be stored in a NoSQL database.
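
To make the idea of a profile concrete, a stored document could look roughly like the following Python sketch; the field names here are purely hypothetical and do not reflect the project’s actual schema:

# Hypothetical vulnerability profile document; all field names are
# made up for illustration only.
profile = {
    "id": "CVE-2019-0001",                 # global MITRE identifier
    "aliases": ["DSA-0000-1"],             # equivalent identifiers used elsewhere (placeholder)
    "affected_sources": ["somepackage"],   # source packages believed affected
    "patches": [
        {
            "url": "https://example.org/fix.patch",
            "found_by": "distro-repository-crawler",
            "applies_upstream": None,      # filled in by later applicability tests
        },
    ],
}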

Following this, the system would then test whether the patches are applicable to the upstream source they are meant for. Patching heuristics can be employed to test a patch’s applicability to the source package; some of these heuristics are applying patches with fuzz, applying them at shifted offsets, etc.
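
As one concrete (and entirely hypothetical) illustration of such a heuristic, a dry run with GNU patch at increasing fuzz levels could be used to decide whether a patch still applies to an unpacked source tree:

# Hypothetical sketch of one applicability check: try the patch in
# dry-run mode, allowing increasing amounts of fuzz before giving up.
import subprocess

def patch_applies(source_dir, patch_file, max_fuzz=2):
    for fuzz in range(max_fuzz + 1):
        result = subprocess.run(
            ["patch", "--dry-run", "-p1", "--fuzz=%d" % fuzz,
             "-d", source_dir, "-i", patch_file],
            capture_output=True,
        )
        if result.returncode == 0:
            return True  # applies cleanly at this fuzz level
    return False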

The system is meant to be generic enough that it can fit in with Debian (perhaps allowing use with the Debian Security Tracker) or act independently as well.

GSoC:

The focus in this GSoC would be to design a flexible crawler i.e. a patch finder. The patch finder would primarily find patches for a given vulnerability, and optionally a given source package. Implementing the patch finder would require a modular implementation of certain patch finding methods, which would be based extensively on web crawling. A very simple example of this is to extract patches from fixed versions of a Debian package, as these are mostly in the debian/patches folder.
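
As a sketch of that simple example (assuming a "3.0 (quilt)" source package that has already been unpacked), the patches shipped by a fixed Debian version can be read straight out of debian/patches/series:

# Sketch: list the patches a "3.0 (quilt)" Debian source package ships,
# given an already-unpacked source tree.
from pathlib import Path

def list_debian_patches(source_dir):
    series = Path(source_dir) / "debian" / "patches" / "series"
    if not series.is_file():
        return []
    patches = []
    for line in series.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            # Each non-comment line names a patch, optionally followed
            # by options such as -p1; keep only the file name.
            patches.append(series.parent / line.split()[0])
    return patches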

My mentors for this project are Luciano Bello and László Böszörményi (GCS). Luciano started this project in 2017 and implemented it, but due to its limited scope, he felt that redoing what had been done, with a broader vision in mind, was called for. You can find his and his team’s work here.

Prior to this week was the Community Bonding Period. Most of that time was spent in discussion about the ins and outs of the system, its scope, ultimate goal, and design. We also decided on how and when conference conversations should take place, mainly on Hangouts.

GSoC’s coding period commenced on 27th May 2019. Due to my university examinations till 3rd June 2019, I found putting in the required 40 hours for the first week a bit difficult, so I chose a task that would require minimal effort and could be completed in under a week. I decided on setting up the base of the implementation, mostly classes that would be used in the input and output of the patch finder, and then tested these. I won’t go into implementation-specific details as there’s not much to talk about yet code-wise. You can check out the code and follow our progress in the GitHub repository. As our primary language, we’ve chosen Python 3.

Feel free to contact me on the address in the footer.

02 June, 2019 12:00AM

June 01, 2019

hackergotchi for Jonathan Carter

Jonathan Carter

Free Software Activities (2019-05)

A relatively quiet free software month, I’m feeling the Debian Buster final freeze fatigue for sure. Also dealt with a bunch of personal and work stuff that kept me busy otherwise, and also, haven’t been very good at logging activities so this will be short one…

Debian packaging work

2019-05-01: Upload live-wrapper (0.10) to debian unstable (Closes: #927216, #927217).

2019-05-01: Upload live-tasks (0.6) to debian unstable (Closes: #924211, #924214, #925331).

2019-05-16: Upload live-tasks (0.7) to debian unstable (Closes: #866391)

2019-05-24: File unblock request for live-tasks (0.7)

Debian live

2019-05-01: Update debian-live local scripts to fix stale fstab, duplicate sources.list entries.

2019-05-02: Full testing on daily build media.

2019-05-02: Prepare Debian Live RC1 announcement.

2019-05-03: Spot check testing on RC1 builds.

DebConf

2019-05-01: Submit DebConf BoF proposal for “100 Paper cuts kick-off“.

2019-05-01: Post initial bursary results.

2019-05-02: Post bursary results for smaller (<$200) amounts.

2019-05-02: Submit DebConf BoF proposal for “Debian Live“.

And lots of bursaries admin throughout the month. To be honest I’m glad that the bulk of it is mostly over.

01 June, 2019 06:37PM by jonathan

Molly de Blanc

Free software activities (May, 2019)

A white plate with a small pancake, a blueberry, a grape, a piece of pineapple, a piece of orange, a strawberry, and a swirl of maple syrup in the shape of the Debian logo.

Personal

  • I attended the two-day Open Source Initiative board of directors Spring face-to-face meeting where I joined the Staffing and Fundraising committees and was elected President of the board of directors. More details on this upcoming in my next OSI blog post.
  • We had some OSI meetings in addition to the F2F — namely the Staffing Committee, and then a meeting between the GM, myself, and the VP.
  • I submitted to the Open Source Summit EU CfP.
  • May brought the 11th instance of someone being mean to me on the internet.
  • We had an Anti-harassment team meeting.
  • Outreachy and GSoC hurtled forward.
  • I did a bunch of writing on free and open source software.
  • I finally celebrated becoming a DD by having a pancake party with some of my favorite free software people. Believe me, some of you who didn’t make it (or didn’t know about it) were sorely missed.
  • Free as in Freedom published their interview with me and the recording of my talk from Copyleft Conf 2019!

Professional

  • Exciting things to be announced IN THE FUTURE!

01 June, 2019 02:34PM by mollydb

hackergotchi for Wouter Verhelst

Wouter Verhelst

Moved!

A few months ago, I moved from Mechelen, Belgium to Cape Town, South Africa.

Muizenberg Beach

No, that's not the view outside my window. But Muizenberg beach is just a few minutes away by car. And who can resist such a beach? Well, if you ignore the seaweed, that is.

Getting settled after a cross-continental move takes some time, especially since the stuff that I decided I wanted to take with me would need to be shipped by boat, and that took a while to arrive. Additionally, the reason of the move was to move in with someone, and having two people live together for the first time requires some adjustment in their daily routine; and that is no different for us.

After having been here for several months now, however, things are pulling together:

I had a job lined up before I moved to Cape Town, but there were a few administrative issues in getting started, which meant that I had to wait at home for a few months while my savings were being reduced to almost nothingness. That is now mostly resolved (except that there seem to be a few delays with payment, but nothing that can't be resolved).

My stuff arrived a few weeks ago. Unfortunately the shipment was not complete; the 26" 16:10 FullHD monitor that I've had for a decade or so now somehow got lost. I contacted the shipping company and they'll be looking into things, but I'm not hopeful. Beyond that, there were only a few minor items of damage (one of the feet of the TV broke off, and a few glasses broke), but nothing of real consequence. At least I did decide to go for the insurance, which should cover loss and breakage, so worst case I'll just have to buy a new monitor (and throw away a few pieces of debris).

While getting settled, it was harder for me to spend quality time on free software or Debian-related things, but I did manage to join Gerry and Mark for some FOSDEM-related video review a while ago (meaning all the 2019 videos that could be released have been released now), spent some time configuring the Debian SReview instance for the Marseille miniconf last weekend, and will do the same for the Hamburg one that will be happening soon. Additionally, there were some discussions on the upstream nbd mailing list that I took part in.

All in all, I guess it's safe to say that I'm slowly coming out of hibernation. Up next: once the payment issues have been fully resolved and I can spend money with a bit more impudence, join a local choir and/or orchestra and/or tennis club, and have some off time.

01 June, 2019 12:48PM

Russ Allbery

podlators 4.12

This release only fixes a test suite issue. I've been putting it off for ages because I was hoping to pick up some previous discussions and make some more substantive changes, but that hasn't happened yet and I keep getting mail from failing tests. Worse, a few other people have investigated the problem helpfully, and I don't want to waste more of anyone's time!

Also, I noticed I'd not posted anything but book reviews for this month, so wanted to do at least one software release, even if trivial.

Anyway, sometimes the Encode module gets loaded before the test suite for podlators, which makes it impossible to test the warnings that happen if Encode isn't available. That's fine, except that the test failed entirely in that case, instead of being skipped. This release fixes it to be skipped properly.

You can get the latest release from the podlators distribution page.

01 June, 2019 03:58AM

May 31, 2019

Paul Wise

FLOSS Activities May 2019

Changes

Issues

Review

Administration

  • Debian: answer SSH question, redirect LDAP change mail, fix some critical typos in LDAP, restart bacula after postgres restart
  • Debian wiki: forward questions to page authors, answer wiki support questions, whitelist email domains, whitelist email addresses, ping folks with bouncing email addresses, disable accounts with bouncing email
  • Debian package tracking system: deploy changes, configure automatic disabling of unused oldstable suites
  • Debian derivatives census: restore corrupted file from backups, disable the code that corrupted the file, deploy changes

Communication

Sponsors

The leptonlib/tesseract-lang/tesseract/sysstat Debian uploads and the ufw feature request were sponsored by my employer. All other work was done on a volunteer basis.

31 May, 2019 11:41PM

Sylvain Beucler

Debian LTS - May 2019

Debian LTS Logo

Here is my transparent report for my work on the Debian Long Term Support (LTS) project, which extends the security support for past Debian releases, as a paid contributor.

In May, the monthly sponsored hours were split evenly among contributors depending on their max availability - I declared max 30h and got 18h.

  • firefox-esr: jessie-security update, security-ish issue with modules signing authority, backporting stretch's
  • CVE-2018-19969/phpmyadmin: attempt backporting the 49 patches and decide against it since they merely mitigate the CSRF issues but certainly break the testsuite
  • CVE-2018-20839/systemd: attempt to reproduce issue in Jessie, conclude no-dsa due to non-reproducibility and regressions introduced by the patch
  • CVE-2019-2697/openjdk-7: triage (sync with previous uploaders, conclude "not-affected")
  • CVE-2019-0227/axis: triage (clarify SSRF situation, sync with packager, conclude "unfixed")
  • dns-root-data: discuss potential update, conclude not relevant due to no reverse dependencies
  • gradle, kdepim: update triage info

Incidentally, last month I mentioned how regularly updating a 19MB text file caused issues in Git - it appears it's even breaking salsa.debian.org! Sadly conversation between involved parties appears difficult.

If you'd like to know more about LTS security, I recommend you check:

31 May, 2019 03:22PM

hackergotchi for Bits from Debian

Bits from Debian

Debian welcomes its GSoC 2019 and Outreachy interns

GSoC logo

Outreachy logo

We're excited to announce that Debian has selected seven interns to work with us during the next months: two people for Outreachy, and five for the Google Summer of Code.

Here is the list of projects and the interns who will work on them:

Android SDK Tools in Debian

Package Loomio for Debian

Debian Cloud Image Finder

Debian Patch Porting System

Continuous Integration

Congratulations and welcome to all the interns!

The Google Summer of Code and Outreachy programs are possible in Debian thanks to the efforts of Debian developers and contributors that dedicate part of their free time to mentor interns and outreach tasks.

Join us and help extend Debian! You can follow the interns weekly reports on the debian-outreach mailing-list, chat with us on our IRC channel or on each project's team mailing lists.

31 May, 2019 12:15PM by znoteer

hackergotchi for Chris Lamb

Chris Lamb

Free software activities in May 2019

Here is my monthly update covering what I have been doing in the free software world during May 2019 (previous month):

  • As part of my duties of being on the board of directors of the Open Source Initiative I attended our biannual face-to-face board meeting in New York, attending the OSI's local event organised by Open Source NYC in order to support my colleagues who were giving talks, as well as participated in various licensing discussions, advocacy activities etc. throughout the rest of the month over the internet.

  • For the Tails privacy-oriented operating system, I attended an online "remote sprint" where we worked collaboratively on issues, features and adjacent concerns regarding the move to Debian buster. I particularly worked on a regression in Fontconfig to ensure the cache filenames remain deterministic [...] and also reviewed/tested release candidates and others' patches.

  • Gave a few informal talks to Microsoft employees on Reproducible Builds in Seattle, Washington.

  • Opened a pull request against the django-markdown2 utility to correct the template tag name in a documentation example. [...]

  • Hacking on the Lintian static analysis tool for Debian packages.


Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The initiative is proud to be a member project of the Software Freedom Conservancy, a not-for-profit 501(c)(3) charity focused on ethical technology and user freedom.

Conservancy acts as a corporate umbrella, allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:

  • Gave a number of informal talks to Microsoft employees on Reproducible Builds in Seattle, Washington.

  • Drafted, published and publicised our monthly report.

  • Authored and submitted 5 patches to fix reproducibility issues in fonts-ipaexfont, ghmm, liblopsub, ndpi & xorg-gtest.

  • I spent some time on our website this month, adding various fixes for larger/smaller screens [...] and adding a logo suitable for printing physical pin badges [...]. I also refreshed the text on our SOURCE_DATE_EPOCH page.

  • Categorised a huge number of packages and issues in the Reproducible Builds "notes" repository, kept isdebianreproducibleyet.com up to date [...] and posted some branded merchandise to other core team members.

I also made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues:

  • Support the latest PyPI package repository upload requirements by using real reStructuredText comments instead of the raw directive [...] and by stripping out manpage-only parts of the README rather than using the only directive [...].

  • Fix execution of symbolic links that point to the bin/diffoscope entry point in a checked-out version of our Git repository by fully resolving the location as part of dynamically calculating Python's module include path. [...]

  • Add a Dockerfile [...] with various subsequent fixups [...][...][...].

  • Published the resulting Docker image in the diffoscope container registry and updated the diffoscope homepage to provide "quick start" instructions on how to use diffoscope via this image.

Finally, I made a large number of changes to my web-based ("no installation required") version of the diffoscope tool, try.diffoscope.org.


Debian

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

  • Investigated and triaged CVE-2019-12217, CVE-2019-12219, CVE-2019-12220, CVE-2019-12221 and CVE-2019-12222 in libsdl1.2/libsdl2, simplesamlphp, freeimage & firefox-esr for jessie LTS, and capstone (CVE-2016-7151), sysdig (CVE-2019-8339), enigmail (CVE-2019-12269), firefox-esr (CVE-2019-1169) & sdl-image1.2 (CVE-2019-12218) for wheezy LTS.

  • Frontdesk duties, responding to user/developer questions, reviewing others' packages, etc.

  • Issued DLA 1793-1 for the dhcpcd5 network management protocol client to fix a read overflow vulnerability.

  • Issued DLA 1805-1 to fix a use-after-free vulnerability in minissdpd, a network device discovery daemon where a remote attacker could abuse this to crash the process.

  • Issued ELA-119-1 and DLA 1801-1 for zookeeper (a distributed co-ordination server) where users who were not authorised to read any data were still able to view the access control list.

  • For minissdpd, I filed an appropriate tracking bug for its outstanding CVE (#929297) and then fixed it in the current Debian stable distribution, proposing its inclusion in the next point release via #929613.


Uploads

  • redis (5:5.0.5-1) — New upstream release.

  • python-django (2:2.2.1-1) — New upstream release.

  • bfs (1.4.1-1) — New upstream release.

I also made the following non-maintainer uploads (NMUs) to fix release-critical bugs in Debian buster:

  • coturn (4.5.1.1-1.1) — Don't ship the (empty) /var/lib/turn/turndb SQLite database and generate it on-demand in the post-installation script to avoid overwriting it on upgrade/reinstall. (#929269)

  • libzorpll (7.0.1.0~alpha1-1.1) — Apply a patch from Andreas Beckmann to add suitable Breaks for smoother upgrades from stretch. (#928883)

  • mutt (1.10.1-2.1) — Prevent undefined behaviour when parsing invalid Content-Disposition mail headers. (#929017)


FTP Team

As a Debian FTP assistant I ACCEPTed 16 packages: cc-tool, gdal, golang-github-joyent-gosign, golang-github-mgutz-str, golang-github-mgutz-to, golang-github-ovh-go-ovh, golang-github-src-d-gcfg, golang-golang-x-xerrors, golang-gopkg-ldap.v3, libgit2, nodejs, opensbi, openzwave, rustc, u-boot & websocketd.

31 May, 2019 11:01AM

Russ Allbery

Review: Bad Blood

Review: Bad Blood, by John Carreyrou

Publisher: Alfred A. Knopf
Copyright: 2018
ISBN: 1-5247-3166-8
Format: Kindle
Pages: 302

Theranos was a Silicon Valley biotech startup founded by Elizabeth Holmes in 2003. She was a sophomore chemical engineering major at Stanford University when she dropped out to start the company. Theranos's promised innovation was a way to perform blood tests quickly and easily with considerably less blood than was used by normal testing methods. Their centerpiece product was supposed to be a sleek, compact, modern-looking diagnostic device that could use a finger-stick and a small ampule of blood to run multiple automated tests and provide near-immediate results.

Today, Holmes and former Theranos president Ramesh "Sunny" Balwani are facing federal charges of wire fraud. Theranos, despite never producing a working product, burned through $700 million of venture capital funding. Most, possibly all, public demonstrations of their device were faked. Most of their partnerships and contracts fell through. For the rare ones where Theranos actually did testing, they either used industry-standard equipment (not their own products) or sent the samples to other labs.

John Carreyrou is the Wall Street Journal reporter who first broke the story of Theranos's fraud in October of 2015. This book is an expansion of his original reporting. It's also, in the last third or so, the story of that reporting itself, including Theranos's aggressive attempts to quash his story, via both politics and targeted harassment, which were orchestrated by Theranos legal counsel and board member David Boies. (If you had any respect for David Boies due to his association with the Microsoft anti-trust case or Bush v. Gore, this book, along with the similar tactics his firm appears to have used in support of Harvey Weinstein, should relieve you of it. It's depressing, if predictable, that he's not facing criminal charges alongside Holmes and Balwani.)

Long-form investigative journalism about corporate malfeasance is unfortunately a very niche genre and deserves to be celebrated whenever it appears, but even putting that aside, Bad Blood is an excellent book. Carreyrou provides a magnificent and detailed account of the company's growth, internal politics, goals, and strangely unstoppable momentum even while their engineering faced setback after setback. This is a thorough, detailed, and careful treatment that draws boundaries between what Carreyrou has sources for and what he has tried to reconstruct. Because the story of the reporting itself is included, the reader can also draw their own conclusions about Carreyrou's sources and their credibility. And, of course, all the subsequent legal cases against the company have helped him considerably by making many internal documents part of court records.

Silicon Valley is littered with failed startups with too-ambitious product ideas that were not practical. The unusual thing about Theranos is that they managed to stay ahead of the money curve and the failure to build a working prototype for surprisingly long, clawing their way to a $10 billion valuation and biotech unicorn status on the basis of little more than charisma, fakery, and a compelling story. It's astonishing, and rather scary, just how many high-profile people like Boies they managed to attract to a product that never worked and is probably scientifically impossible as described in their marketing, and just how much effort it took to get government agencies like the CMS and FDA to finally close them down.

But, at the same time, I found Bad Blood oddly optimistic because, in the end, the system worked. Not as well as it should have, and not as fast as it should have: Theranos did test actual patients (badly), and probably caused at least some medical harm. But while the venture capital money poured in and Holmes charmed executives and negotiated partnerships, other companies kept testing Theranos's actual results and then quietly backing away. Theranos was forced to send samples to outside testing companies to receive proper testing, and to set up a lab using traditional equipment. And they were eventually shut down by federal regulatory agencies, albeit only after Carreyrou's story broke.

As someone who works in Silicon Valley, I also found the employment dynamics at Theranos fascinating. Holmes, and particularly Balwani when he later joined, ran the company in silos, kept secrets between divisions, and made it very hard for employees to understand what was happening. But, despite that, the history of the company is full of people joining, working there for a year or two, realizing that something wasn't right, and quietly leaving. Theranos management succeeded in keeping enough secrets that no one was able to blow the whistle, but the engineers they tried to hire showed a lot of caution and willingness to cut their losses and walk away. It's not surprising that the company seemed to shift, in its later years, towards new college grads or workers on restrictive immigration visas who had less experience and confidence or would find it harder to switch companies. There's a story here about the benefits of a tight job market and employees who feel empowered to walk off a job. (I should be clear that, while a common theme, this was not universal, and Theranos arguably caused one employee suicide from the stress.)

But if engineers, business partners, a reporter, and eventually regulatory agencies saw through Theranos's fraud, if murkily and slowly, this is also a story of the people who did not. If you are inclined to believe that the prominent conservative Republican figures of the military and foreign policy establishment are wise and thoughtful people, Bad Blood is going to be uncomfortable reading. James Mattis, who served as Trump's Secretary of Defense, was a Theranos booster and board member, and tried to pressure the Department of Defense into using the company's completely untested and fraudulent product for field-testing blood samples from soldiers. One of Carreyrou's main sources was George Shultz's grandson, who repeatedly tried to warn his grandfather of what was going on at Theranos while the elder Republican statesman was on Theranos's board and recruiting other board members from the Hoover Institute, including Henry Kissinger. Apparently the film documentary version of Bad Blood is somewhat kinder to Shultz, but the book is methodically brutal. He comes across as a blithering idiot who repeatedly believed Holmes and Theranos management over his grandson on the basis of his supposed ability to read and evaluate people.

If you are reading this book, I do recommend that you search for video of Elizabeth Holmes speaking. Carreyrou mentions her personal charisma, but it's worth seeing first-hand, and makes some of Theranos's story more believable. She has a way of projecting sincerity directly into the camera that's quite remarkable and is hard to describe in writing, and she tells a very good story about the benefits of easier and less painful (and less needle-filled) blood testing. I have nothing but contempt for people like Boies, Mattis, and Shultz who abdicated their ethical responsibility as board members to check the details and specifics regardless of personal impressions. In a just world with proper legal regulation of corporate boards they would be facing criminal charges along with Holmes. But I can see how Holmes convinced the media and the public that the company was on to something huge. It's very hard to believe that someone who touts a great advancement in human welfare with winning sincerity may be simply lying. Con artists have been exploiting this for all of human history.

I've lived in or near Palo Alto for 25 years and work in Silicon Valley, which made some of the local details of Carreyrou's account fascinating, such as the mention of the Old Pro bar as a site for after-work social meetings. There were a handful of places where Carreyrou got some details wrong, such as his excessive emphasis on the required non-disclosure agreements for visitors to Theranos's office. (For better or ill, this is completely routine for Silicon Valley companies and regularly recommended by corporate counsel, not a sign of abnormal paranoia around secrecy.) But the vast majority of the account rang true, including the odd relationship between Stanford faculty and startups, and between Stanford and the denizens of the Hoover Institute.

Bad Blood is my favorite piece of long-form journalism since Bethany McLean and Peter Elkin's The Smartest Guys in the Room about Enron, and it is very much in the same mold. I've barely touched on all the nuances and surprising characters in this saga. This is excellent, informative, and fascinating work. I'm still thinking about what went wrong and what went right, how we as a society can do better, and the ways in which our regulatory and business system largely worked to stop the worst of the damage, no thanks to people like David Boies and George Shultz.

Highly recommended.

Rating: 9 out of 10

31 May, 2019 03:43AM

May 30, 2019

hackergotchi for Jonathan Dowland

Jonathan Dowland

Multi-architecture OpenShift containers

Following the initial release of RHEL8-based OpenJDK OpenShift container images, we have now pushed PPC64LE and Aarch64 architecture variants to the Red Hat Container Registry. This is the first time I've pushed Aarch64 images in particular, and I'm excited to work on Aarch64-related issues, should any crop up!

30 May, 2019 08:56AM

May 29, 2019

hackergotchi for Sean Whitton

Sean Whitton

Debian Policy call for participation -- May 2019

There has been very little activity in recent weeks (preparing the Debian buster release is more urgent than the Policy Manual for most contributors), so the list of bugs I posted in February is still valid.

29 May, 2019 11:35PM

hackergotchi for Bits from Debian

Bits from Debian

Ask anything you ever wanted to know about Debian Edu!

Debian Edu

You have heard about Debian Edu or Skolelinux, but do you know exactly what we are doing?

Join us on the #debian-meeting channel on the OFTC IRC network on 03 June 2019 at 12:00 UTC for an introduction to Debian Edu, a Debian pure blend created to fit the requirements of schools and similar institutions.

You will meet Holger Levsen, who has been contributing to Debian Edu since 2005 and is a member of the development team. Ask him anything you ever wanted to know about Debian Edu!

Your IRC nick needs to be registered in order to join the channel. Refer to the Register your account section on the OFTC website for more information on how to register your nick.

You can always refer to the debian-meeting wiki page for the latest information and up to date schedule.

29 May, 2019 03:30PM by Jonathan Carter

hackergotchi for Michal Čihař

Michal Čihař

Spring cleanup

As you can probably spot from past posts on my blog, my open source contributions are heavily focused on Weblate and I've phased out many other activities. The main reason is the reduced amount of free time that comes with a growing family, which leads me to focus on the project I like most. It's fun to develop, and it seems like it will work business-wise as well, but that still remains to be shown in the future.

Anyway, it's time to admit that I will not spend much time on other things in the near future.

Earlier this year, I resigned as phpMyAdmin project admin. I was in that role for three years, and I've been contributing to the project for 18 years. It has been a long time, but I haven't contributed significantly in the last few months. I will stay with the project for a few more months to handle a smooth transition, but it's time to say goodbye there.

In the Debian project I want to stay active, but I've reduced my involvement and I'm looking for maintainers for some of my packages (mostly RPM-related). The special case is the phpMyAdmin package, where I had been looking for help since 2017, but that still didn't prevent the package from becoming heavily outdated, with security issues that led to its removal from Buster. It seems that this has triggered enough attention to resurrect work on updated packages.

Today I went through my personal repos on GitHub and archived a bunch of them. These have not received any attention for years (many of them were dead by the time I imported them to GitHub) and it's good to clearly show that to random visitors.

I'm still the main developer behind Gammu, but I'm not really doing much more there than the occasional review and merging of pull requests. I don't want to abandon the project without handing it over to somebody else, but the problem is that there is nobody else right now.

Filed under: Debian English Gammu SUSE

29 May, 2019 08:15AM

Russ Allbery

Review: Nimona

Review: Nimona, by Noelle Stevenson

Publisher: HarperTeen
Copyright: 2015
ISBN: 0-06-227822-3
Format: Graphic novel
Pages: 266

Ballister Blackheart is a supervillain, the most notorious supervillain in the kingdom. He used to be a knight, in training at the Institute alongside his friend Goldenloin. But then he defeated Goldenloin in a joust and Goldenloin blew his arm off with a hidden weapon. Now, he plots against the Institute and their hero Sir Goldenloin, although he still follows certain rules.

Nimona, on the other hand, is not convinced by rules. She shows up unexpectedly at Ballister's lair, declaring herself to be his sidekick, winning him over to the idea when she shows that she's also a shapeshifter. And Ballister certainly can't argue with her effectiveness, but her unconstrained enthusiasm for nefarious schemes is rather disconcerting. Ballister, Goldenloin, and the Institute have spent years in a careful dance with unspoken rules that preserved a status quo. Nimona doesn't care about the status quo at all.

Nimona is the collected form of a web comic published between 2012 and 2014. It has the growth curve of a lot of web comics: the first few chapters are lightweight and tend more towards gags, the art starts off fairly rough, and there is more humor than plot. But by chapter four, Stevenson is focusing primarily on the fascinating relationship between Ballister and Nimona, and there are signs that Nimona's gleeful enthusiasm for villainy is hiding something more painful. Meanwhile, the Institute, Goldenloin's employer, quickly takes a turn for the sinister. They're less an organization of superheroes than a shadow government with some dubious goals, and Ballister starts looking less like a supervillain and more like a political revolutionary.

Nimona has some ideas about revolution, most of them rather violent.

At the start of this collection, I wasn't sure how much I'd like it. It's mildly amusing in a gag sort of way while playing with cliches and muddling together fantasy, science fiction, faux-medieval politics, sinister organizations, and superheros. But the story deepens as it continues. Ballister starts off caring about Nimona because he's a fundamentally decent person, but she becomes a much-needed friend. Nimona's villain-worship, to coin a phrase, turns into something more nuanced. And while that's happening, the Institute becomes increasingly sinister, and increasingly dangerous. By the second half of the collection, despite the somewhat excessive number of fight scenes, it was very hard to put down.

Sadly, I didn't think that Stevenson landed the ending. It's not egregiously bad, and the last page partly salvages it, but it wasn't the emotionally satisfying catharsis that I was looking for. The story got surprisingly dark, and I wanted a bit more of a burst of optimism and happiness at the end.

I thought the art was good but not great. The art gets more detailed and more nuanced as the story deepens, but Stevenson stays with a flat, stylized appearance to her characters. The emotional weight comes mostly from the dialogue and from Nimona's expressive transformations rather than the thin and simple faces. But there's a lot of energy in the art, a lot of drama when appropriate, and some great transitions from human scale to the scale of powerful monsters.

That said, I do have one major complaint: the lettering. It's hand-lettered (so far as I can tell) in a way that adds a distinctive style, but the lettering is also small, wavers a bit, and is sometimes quite hard to read. Standard comic lettering is, among other things, highly readable in small sizes; Stevenson's more individual lettering is not, and I occasionally struggled with it.

Overall, this isn't in my top tier of graphic novels, but it was an enjoyable afternoon's reading that hooked me thoroughly and that I was never tempted to put down. I think it's a relatively fast read, since there are a lot of fight scenes and not a lot of detail that invites lingering over the page. I wish the lettering were more uniform and I wasn't entirely happy with the ending, but if slowly-developing unexpected friendship, high drama, and an irrepressible shapeshifter who is more in need of a friend than she appears sounds like something you'd like, give this a try.

Rating: 7 out of 10

29 May, 2019 04:28AM