March 30, 2020

hackergotchi for Mike Gabriel

Mike Gabriel

Mailman3 - Call for Translations (@Weblate)

TL;DR: please help localize Mailman3 [1]. You can find it on hosted Weblate [2]. The next component releases are planned for 1-2 weeks from now. Thanks for your contribution! If you can't make it now, please consider working on Mailman3 translations at some later point in time. Thanks!

Time has come for Mailman3

Over the last months I have found an interest in Mailman3. Given the EOL of Python2 in January 2020 and being a heavy Mailman2 provider for various projects of mine and also for customers, I felt it was time to look at Mailman2's successor: Mailman3 [1].

One great novelty in Mailman3 is the strict split between the backend (Mailman Core) and the frontend components (django-mailman3, Postorius, Hyperkitty). All three frontend components are Django applications. Postorius is the list management web frontend, whereas Hyperkitty is an archive viewer. Unlike in Mailman2, you can also drop list posts into Hyperkitty directly (instead of sending a mail to the list). This makes Hyperkitty also some sort of forum software with a mailing list core in the back. The django-mailman3 module knits the other two together (and handles account management, the login dialog, profile settings, etc.).

Looking into Mailman3 Upstream Code

Some time back in mid-2019 I decided to deploy Mailman3 at a customer's site and also for my own business (which still is the test installation). Living and working in Germany, my customers often demand a fully localized WebUI. And at that time, Mailman3 could not provide this. Many exposed parts of the Mailman3 components were still not localized (or not localizable).

Together with my employee I put some hours of effort into providing merge requests, filing bug reports, and requesting better Weblate integration (meaning: hosted Weblate). It felt a bit like setting the whole i18n thing in motion.

Call for Translations

Over the past month I had to focus on other work, and two days ago I was delighted that Abhilash Raj (one of the Mailman3 upstream maintainers) informed me (via closing one of the related bugs [3]) that Mailman3 is now fully integrated with the hosted Weblate service and a continuous translation workflow is set to go.

The current translation statuses of the Mailman3 components are at ~10%. We can do better than this, I sense.

So, if you are a non-native English speaker and feel like contributing to Mailman3, please visit the hosted Weblate site [2], sign up for an account (if you don't have one already), and chime in on the translation of one of the future mailing list software suites run by many FLOSS projects all around the globe. Thanks a lot for your help.

As a side note, if you plan to work on translating Mailman Core into your language (and can't find it in the list of supported languages), please request the new language via the Weblate UI. All other components have all available languages enabled by default.


30 March, 2020 07:47AM by sunweaver

hackergotchi for Axel Beckert

Axel Beckert

How do you type on a keyboard with only 46 or even 28 keys?

Some of you might have noticed that I've been into keyboards for a few years now — into mechanical keyboards, to be precise.


It basically started when the Swiss Mechanical Keyboard Meetup (whose website I started later on) was held in the hackerspace of the CCCZH.

I mostly used TKL keyboards (i.e. keyboards with just the — for me useless — number block missing) and tried to get my hands on more keyboards with Trackpoints (but failed so far).

At some point a year or two ago, I started looking into smaller keyboards so I'd have a mechanical keyboard with me when travelling. I first bought a Vortex Core at Candykeys. The size was nice, and especially having all layers labelled on the keys was helpful, but nevertheless I soon noticed that the smaller the keyboards get, the more important it is that they're properly programmable. The Vortex Core is programmable, but not the keys in the bottom right corner — which are exactly the keys I wanted to change to get a cursor block down there. (Later I found out that there are ways to get this done, either with an alternative firmware and a hack of it, or by desoldering all switches and mounting an alternative PCB called Atom47.)

40% Keyboards

So at some point I ordered a MiniVan keyboard from The Van Keyboards (MiniVan keyboards will soon be available again at The Key Dot Company), here shown with GMK Paperwork (also bought from and designed by The Van Keyboards):

The MiniVan PCBs are fully programmable with the free and open source firmware QMK, and I started to use that more and more instead of bigger keyboards.


With the MiniVan I learned the concepts of layers. Layers are similar to what many laptop keyboards do with the “Fn” key and to some extent also what the German standard layout does with the “AltGr” key: Layers are basically alternative key maps you can switch with a special key (often called “Fn”, “Fn1”, “Fn2”, etc., or — especially if there are two additional layers — “Raise” and “Lower”).

There are several concepts for how these layers can be reached with these keys:

  • By keeping the Fn key pressed, i.e. the alternative layer is active as long as you hold the Fn key down.
  • One-shot layer switch: After having pressed and released the Fn key, all keys are on the alternative layer for a single key press and then you are back to the default layer.
  • Layer toggle: Pressing the Fn key once switches to the alternative layer and pressing it a second time switches back to the default layer.
  • There are also a lot of variants of the latter concept, e.g. rotating between layers upon every press of the Fn key. In that case it seems common to have a second special key which always switches back to the default layer — kind of an Escape key for layer switching.
My MiniVan Layout

For the MiniVan, two additional layers suffice easily, but since I have a few characters on multiple layers and also have mouse control and media keys crammed in there, I have three additional layers on my MiniVan keyboards:

“TRNS” means transparent, i.e. use the settings from lower layers.
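The “transparent” lookup can be illustrated with a small sketch (not actual QMK code, just the concept): a key press is resolved by walking down through the currently active layers, topmost first, until a non-transparent entry is found. The layer contents below are made up for illustration.

```python
TRNS = object()  # marker for a transparent ("use lower layer") key

def resolve(key, layers, active):
    """Resolve a key through the stack of active layers, topmost first.
    `layers` is a list of dicts (index 0 = default layer), `active` a
    set of indices of the currently active layers."""
    for i in sorted(active, reverse=True):
        action = layers[i].get(key, TRNS)
        if action is not TRNS:
            return action
    return None  # key does nothing on any active layer

# Tiny made-up example: layer 1 puts a number on Q, leaves W transparent.
layers = [
    {"q": "q", "w": "w"},   # layer 0: default
    {"q": "1", "w": TRNS},  # layer 1: "w" falls through to layer 0
]
print(resolve("q", layers, {0, 1}))  # → 1
print(resolve("w", layers, {0, 1}))  # → w
```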

I also use a feature that allows me to bind different actions to a key depending on whether I just tap the key or hold it. Some also call this “tap dance”. This is especially popular on the usually rather huge spacebar. There, the term “SpaceFn” has been coined, probably after this discussion on Geekhack.

I use this for all my layer switching keys:

  • The left spacebar is space on tap and switches to layer 1 if held. The right spacebar is a real spacebar, i.e. it already triggers a space on key press, not only on key release.

    Layer 1 has numbers on the top row and the special characters of the number row in the second row. It also has Home/End and Page Up/Down on the cursor keys.

  • The key between the Enter key and the cursor-right key (medium grey with a light grey caret in the picture) is actually the Slash and Question Mark key, but if held, it switches me to layer 2.

    Layer 2 has function keys on the top row and also the special characters of the number row in the second row. On the cursor keys it has volume up and down as well as the media keys “previous” and “next”.

  • The green key in the picture is actually the Backslash and Pipe key, but if held, it switches me to layer 3.

    On layer 3 I have mouse control.

With this layout I can type English texts as fast as I can type them on a standard or TKL layout.

German umlauts are a bit more difficult because they require 4 to 6 key presses per umlaut, as I use the Compose key functionality (mapped to the Menu key between the spacebars and the cursor block). So to type an Ä on my MiniVan, I have to:

  1. press and release Menu (i.e. Compose); then
  2. press and hold either Shift-Spacebar (i.e. Shift-Fn1) or Slash (i.e. Fn2), then
  3. press N for a double quote (i.e. Shift-Fn1-N or Fn2-N) and then release all keys, and finally
  4. press and release the base character for the umlaut, in this case Shift-A.

And now just take these concepts and reduce the number of keys to 28:

30% and Sub-30% Keyboards

In late 2019 I stumbled upon a nice little keyboard kit “shop” on Etsy — a place which I (and probably most other people in the mechanical keyboard scene) hadn't thought of when looking for keyboards — called WorldspawnsKeebs. They offer mostly kits for keyboards of 40% size and below, most of them rather simple and not expensive.

For about 30€ you get a complete sub-30% keyboard kit (without switches and keycaps though, but that's very common for keyboard kits, as it leaves the choice of switches and key caps to you) named Alpha28, consisting of a minimal acrylic case and a PCB and electronics set.

This Alpha28 keyboard is btw. fully open source: the source code, i.e. the design files for the hardware, is published under a free license (the MIT license) on GitHub.

And here’s how my Alpha28 looks with GMK Mitolet (part of the GMK Pulse group-buy) key caps:

So we only have character keys, Enter (labelled “Data” as there was no 1u Enter key with that row profile in that key cap set; I’ll also call it “Data” for the rest of this posting) and a small spacebar, not even modifier keys.

The Default Alpha28 Layout

The original key layout by the developer of the Alpha28 uses the spacebar as Shift on hold and as space if just tapped, while the Data key always switches to the next layer, i.e. it switches the layer permanently on tap and not just on hold. This way that key rotates through all layers. In all other layers, V switches back to the default layer.

I assume that the modifiers on the second layer are also on tap and apply to the next normal key press. This has the advantage that you don’t have to bend your fingers for some key combos, but you have to remember which layer you are on at the moment. (IIRC QMK allows you to show that via LEDs or similar.) Kinda just like vi.

My Alpha28 Layout

But maybe because I’m more of an Emacs person, I dislike remembering states myself and don’t mind bending my fingers. So I decided to develop my own layout using tap-or-hold and doing layer switches only by holding down keys:

A triangle means that the settings from lower layers are used, “N/A” means the key does nothing.

It might not be very obvious, but on the default layer, all keys in the bottom row and most keys on the row ends have tap-or-hold configurations.

Basic ideas
  • Use all keys on tap as labelled by default. (Data = Enter as mentioned above)
  • Use different meanings on hold for the whole bottom row and some edge column keys.
  • Have all classic modifiers (Shift, Control, OS/Sys/Win, Alt/Meta) on the first layer twice (always only on hold), so that any key, even those with a modifier on hold, can be used with any modifier. (Example: Shift is on A hold and L hold so that Shift-A is holding L and then pressing A and Shift-L is holding A and then pressing L.)
Bottom row if hold
  • Z = Control
  • X = OS/Sys/Win
  • C = Alt/Meta
  • V = Layer 3 (aka Fn3)
  • Space = Layer 1 (aka Fn1)
  • B = Alt/Meta
  • N = OS/Sys/Win
  • M = Control
Other rows if hold
  • A = Shift
  • L = Shift
  • Enter = Layer 2 (aka Fn2)
  • P = Layer 4 (aka Fn4)
How the keys are divided into layers
  • Layer 0 (Default): alphabetic keys, Space, Enter, and (on hold) standard modifiers
  • Layer 1: numbers, special characters (most need Shift, too), and some more common other keys, e.g.
    • Space-Enter = Backspace
    • Space-S = Esc
    • Space-D = Tab
    • Space-F = Menu/Compose
    • Space-K = :
    • Space-L = '
    • Space-B = ,
    • Space-N = .
    • Space-M = /, etc.
  • Layer 2: F-keys and less common other keys, e.g.
    • Enter-K = -
    • Enter-L = =
    • Enter-B = [
    • Enter-N = ]
    • Enter-M = \, etc.
  • Layer 3: Cursor movement, scrolling, and mouse movement, e.g.
    • The cursor cross is on V-IJKL (with V-I for Up).
    • V-U and V-O are Home and End.
    • V-P and V-Enter are Page Up/Down.
    • Mouse movement is on V-WASD.
    • V-Q, V-E and V-X are mouse buttons.
    • V-F and V-R are scroll wheel up and down.
    • V-Z and V-C are scroll wheel left and right.
  • Layer 4: Configuring the RGB bling-bling and the QMK reset key:
    • P-Q (the two top corner keys) is QMK reset, to be able to reflash the firmware.
    • The keys on the right half of the keyboard control the modes of the RGB LED strip on the bottom side of the PCB, with the upper two rows usually having keys with some Plus and Minus semantics, e.g. P-I and P-K is brightness up and down.
    • The remaining left half is unused and has no function at all on layer 4.
Using the Alpha28

This layout works surprisingly well for me.

Only for Minus, Equal, Single Quote and Semicolon do I still often have to think or try whether they’re on layer 1 or 2, as on my 40%s (MiniVan, Zlant, etc.) I have them all on layer 1 (and in general one layer fewer overall). And for really seldom used keys like Insert, PrintScreen, ScrollLock or Pause, I might have to consult my own documentation. They’re somewhere in the middle of the keyboard, either on layer 1, 2, or 3. ;-)

And of course, typing umlauts takes even two key presses more per umlaut than on the MiniVan, since on the one hand Menu is not on the default layer, and on the other hand I don’t have that nice shifted number row and actually have to press Shift as well to get a double quote. So to type an Ä on my Alpha, I have to:

  1. press and release Space-F (i.e. Fn1-F) for Menu (i.e. Compose); then
  2. press and hold A-Spacebar-L (i.e. Shift-Fn1-L) for getting a double quote, then
  3. press and release the base character for the umlaut, in this case L-A for Shift-A (because we can’t use A for Shift as I can’t hold a key and then press it again :-).


If the characters on upper layers are not labelled like they are on the Vortex Core, i.e. especially on all self-made layouts, typing is a bit like playing that old children’s game Memory: as soon as you remember (or your muscle memory knows) where some special characters are, typing gets faster. Otherwise, you start with trial and error or look at the documentation. Or give up. ;-)

Nevertheless, typing on a sub-30% keyboard like the Alpha28 is much more difficult and slower than on a 40% keyboard like the MiniVan. So the Alpha28 very likely won’t become my daily driver, while the MiniVan de facto already is my daily driver.

But I like these kinds of challenges, as others like the game “Memory”. So I ordered three more 30% and sub-30% keyboard kits at WorldspawnsKeebs for soldering on the upcoming weekend during the COVID19 lockdown:

  • A Reviung39 to start a new try on ortholinear layouts.
  • A Jerkin (sold out, waitlist available) to try an Alice-style keyboard layout.
  • A Pain27 (which btw. is also open source under the CC0 license) to try typing with even one key less than the Alpha28 has. ;-)

And if I at some point want to try to type with even fewer keys, I’ll try a Butterstick keyboard with just 20 keys. It’s a chorded keyboard where you have to press multiple keys at the same time to get one character: so to get an A from the missing middle row, you have to press Q and Z simultaneously, to get Escape, press Q and W simultaneously, to get Control, press Q, W, Z and X simultaneously, etc.

And if that’s not even enough, I already bought a keyboard kit named Ginny (or Ginni, the developer can’t seem to decide) with just 10 keys from an acquaintance. Couldn’t resist when offered his surplus kits. :-) It uses the ASETNIOP layout which was initially developed for on-screen keyboards on tablets.

30 March, 2020 06:51AM by Axel Beckert

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Using Zoom's web client on Linux

TL;DR: The Zoom meeting link you have probably looks like this:

To use the web client, use this instead:


Like too many institutions, the school where I teach chose to partner up with Zoom. I wasn't expecting anything else, as my school's IT department is a Windows shop. Well, I guess I'm still a little disappointed.

Although I had vaguely heard of Zoom before, I had never thought I'd be forced to use it. Lucky for me, my employer decided not to force us to use it. To finish the semester, I plan to record myself and talk with my students on a Jitsi Meet instance.

I will still have to attend meetings on Zoom though. I'm well aware of Zoom's bad privacy record and I will not install their desktop application. Zoom does offer a web client. Sadly, on Linux you need to jump through hoops to be able to use it.

Using Zoom's web client on Linux

Zoom's web client apparently works better on Chrome, so I decided to use Chromium.

Without already having the desktop client installed on your machine, the standard procedure to use the web client would be:

  1. Open the link to the meeting in Chromium
  2. Click on the "download & run Zoom" link showed on the page
  3. Click on the "join from your browser" link that then shows up

Sadly, that's not what happens on Linux. When you click on the "download & run Zoom" link, it brings you to a page with instructions on how to install the desktop client on Linux.

You can thwart that stupid behavior by changing your browser's user agent to make it look like you are using Windows. This is the UA string I've been using:

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36

With that, when you click on the "download & run Zoom" link, it will try to download a .exe file. Cancel the download and you should now see the infamous "join from your browser" link.
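Instead of changing the UA in the browser's settings or via an extension, Chromium can also be launched with the spoofed UA directly via its `--user-agent` command-line switch. A small sketch (the `chromium` binary name is the Debian one, and the meeting URL is hypothetical):

```python
import subprocess

# The Windows UA string quoted above.
UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
      "AppleWebKit/537.36 (KHTML, like Gecko) "
      "Chrome/80.0.3987.149 Safari/537.36")

def chromium_cmd(url, ua=UA):
    # Build a Chromium invocation with the spoofed user agent.
    return ["chromium", f"--user-agent={ua}", url]

# Hypothetical meeting link; uncomment to actually launch the browser:
# subprocess.run(chromium_cmd(""))
```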

Upon closer inspection, it seems you can get to the web client by changing the meeting's URL. The Zoom meeting link you have probably looks like this:

To use the web client, use this instead:
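The example URLs did not survive in this copy of the post. Assuming the usual Zoom URL scheme, where a `/j/<meeting-id>` join link is rewritten to `/wc/join/<meeting-id>` (the meeting ID below is made up), the rewrite can be sketched as:

```python
def to_web_client(url):
    # Rewrite the standard join path to the web-client path.
    return url.replace("/j/", "/wc/join/", 1)

print(to_web_client(""))
# →
```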

Jitsi Meet Puppet Module

I've been playing around with Jitsi Meet quite a bit recently and I've written a Puppet module to install and configure an instance! The module certainly isn't perfect, but it should yield a working Jitsi instance.

If you already have a Puppet setup, please give it a go! I'm looking forward to receiving feedback (and patches) to improve it.

30 March, 2020 04:00AM by Louis-Philippe Véronneau

hackergotchi for Shirish Agarwal

Shirish Agarwal

Covid 19 and the Indian response.

There have been lots of stories about the coronavirus, and with them a lot of political blame-games. The first step India took, a lockdown, was a good one, but it came without a plan for how the poor and the needy — especially the huge (internal) migrant population India has — would be affected by it. A 2019 World Economic Forum report puts that number at 139 million people. That is a huge number of people, and a variety of both push and pull factors have displaced them. While there have been attempts in the past, and probably will be in the future, they will be hampered unless we have trustworthy data, which is where a lot still needs to be done. In recent years, both primary and secondary data have generated a lot of controversy within India as well as abroad, so there is no point in rehashing all of that. Even the definition of who is a ‘migrant’ needs to be well established, just as for who is a ‘farmer’. The simplest lacuna in the latter is that those who have land are counted as ‘farmers’, while tenant farmers and their wives are not, hence the true numbers are never known. Whether this is an India-specific problem or similar definition issues exist in the rest of the world, I don't know.

How our Policies fail to reach the poor and the vulnerable

The sad part is that most policies in India are made in castles in the air. An interview by The Wire shares the conundrum of those who are affected and the policies which are enacted for them (it's a YouTube video, sorry) –

If one watches the interview with an open and fresh mind, it becomes clear why there was a huge reverse migration from Indian cities to villages. The poor and marginalized have always seen the Indian state as an extortive force, so it doesn't make sense for them to stay in the cities. The Prime Minister's announcement of food for 3 months was a clear indication to the migrant population that for 3 months they would have no work. Faced with such a scenario, the best option for them was to return to their native places. While videos of huge numbers of migrants were shown from Delhi, this was the scenario in most states and cities, including Pune, my own city. Another interesting point which was made is that most of the policies will need the migrants to be back in their villages. Most benefits are tied to accounts which were opened in the villages, so even if they want the benefits, they will have to migrate to the villages in order to use them. Of course, everybody in India knows how leaky the administration is. The late Shri Rajiv Gandhi once famously and infamously remarked how leaky the Public Distribution System and similar systems are: only 10 paise out of a rupee reaches the poor. And he said this about 30 years ago. There have been numerous reports on both IPS (Indian Police Service) and IAS (Indian Administrative Service) reforms over the years; many of the committee reports are in the public domain and were in fact part of the ruling party's election manifesto in 2014, but no movement has happened on that front. The only thing which has happened is that people from the ruling party have been appointed to various posts, which is the same as under earlier governments.

I was discussing with a friend, a contractor and builder, the construction labour issues which were pointed out in the report, and whether it is true that the migrant labour often goes uncounted. While he shared a number of cases he knew of, a more recent case in public memory was when some labourers died while building Amanora Mall, which is perhaps one of the largest malls in India. There were a few accidents while constructing the mall. Apparently, the insurance money which should have gone to the migrant labourers was taken by somebody close to the developers who were building the mall. I have a friend who lives in Jharkhand and is a labour officer. She has shared with me so many stories of how labourers are exploited. Keep in mind she is a labour officer appointed by the state and her salary is paid by the state. So she always has to maintain a balance between ensuring workers' rights and the interests of the state, private entities etc., which are usually in cahoots with the state, and it is likely that a lot of the time the state wins over workers' rights. Again, as a labour officer she doesn't have that much power, and when she was new to the work she was often frustrated, but as she remarked a few months back, she has started taking it easy (routinized), as it wasn't doing her any good anyway. Also, there have been plenty of cases of labour officers being murdered, so it's easy to understand why one tries to retain some sanity while doing that job.

The Indian response and the World Response

The Indian response has been the lockdown and very limited testing. We seem to be following the pattern of the UK and the U.S., which have been slow to respond and slow to test. In the past Kerala showed the way, but this time even that is not enough. At the end of the day we need to test, test and test, just as shared by the WHO Director-General. India is trying to create its own cheap test kits with ICMR approval; for e.g., a firm from my own city Pune, MyLab, has been given approval. We will know how good or bad the kits are only after they have been field-tested. For ventilators we have asked Mahindra and Mahindra, even though there are companies like Allied Medical and others who have exported to the EU and elsewhere, about which the Govt. is still taking time to think. This is similar to how in the UK some companies who are close to the Govt. but have no experience in making ventilators have been given orders, while those who have experience and were exporting to Germany and other countries have not. The playbook is eerily similar. In India, we don't have the infrastructure for any new patients, period. Heck, only a couple of states have done something proper for the anganwadi workers. In fact, last year there were massive strikes by anganwadi workers all over India, but only NDTV showed a bit of it, along with some of the news channels from South India. Most mainstream channels chose to ignore it.

On the world stage, how some of the other countries have responded perhaps needs sharing. For e.g., I didn't know that Cuba had so many doctors, or about the politics between it and Brazil. Or the interesting stats shared by Andreas Backhaus, which seem to show how distributed the issue is age-wise, rather than confined to just a few groups as has been told in the Indian media. What was surprising for me is the 20-29 age group, which has not been discussed much in the Indian media even though it is the bulk of our population. The HBR article also makes a few key points which I hope both the general public and policymakers, in India as well as elsewhere, take note of.

What is worrying, though, is that people can apparently be infected twice or more, as seems to be the case in Singapore, China and elsewhere. I have read enough Robin Cook and Michael Crichton books to be aware that viruses can do whatever. They will mutate over time; how things will develop then is anybody's guess. What I found interesting is the World Economic Forum article which hypothesizes that it may be two viruses which got together, as well as a research paper recently published in a proteome research journal. The biggest myth flying around is that summer will halt or kill the spread, which even some of my friends have fallen victim to. While a part of me wants to believe them, the simple scientific fact is that viruses have been around us and evolved over time, just like we have. In fact, there have been cases of people dying due to the common cold and other things. Viruses are so prevalent it's unbelievable. What is and was interesting to note is that bat-borne as well as pangolin-borne viruses had been theorized about and shared by Chinese researchers going all the way back to the '90s. The problem is that even if we killed all the bats in the world, some other virus would take their place for sure. One of the ideas I had — dunno if it's feasible or not — is that at least in places like airports, we should have some sort of screening and labs working on virology. Of course, this would mean more expenses for flying passengers, but for public health and safety maybe it would be worth doing. In any case, virologists would have a field day cataloguing various viruses, and it would make it harder for viruses to spread as fast as this one has. The virus's spread also exposed a lack of leadership in most of our leaders, who didn't react fast enough. While one hopes people do learn from this, I am afraid the whole thing is far from over. These are unprecedented times, and I hope all are maintaining social distancing and going out only when needed.

30 March, 2020 01:07AM by shirishag75

March 29, 2020

Enrico Zini

Molly de Blanc

Computing Under Quarantine

Under the current climate of lockdowns, self-isolation, shelter-in-place policies, and quarantine, the integral role computers play in our lives is becoming evident to more people. Students are learning entirely online, those who can are working from home, and our personal relationships are being carried largely by technology like video chats, online games, and group messages. When these things have become our only means of socializing with those outside our homes, we begin to realize how important they are and the inequity inherent in many technologies.

Someone was telling me how a neighbor doesn’t have a printer, so they are printing off school assignments for their neighbor. People I know are sharing internet connections with people in their buildings, when possible, to help save on costs with people losing jobs. I worry now even more about people who have limited access to home devices or poor internet connections.

As we are forced into our homes and are increasingly limited in the resources we have available, we find ourselves potentially unable to easily fill material needs and desires. In my neighborhood, it’s hard to find flour. A friend cannot find yeast. A coworker couldn’t find eggs. Someone else is without dish soap. Supply chains are not designed to meet with the demand currently being exerted on the system.

This problem is mimicked in technology. If your computer breaks, it is much harder to fix it, and you lose a lot more than just a machine – you lose your source of connection with the world. If you run out of toner cartridges for your printer – and only one particular brand works – the risk of losing your printer, and your access to school work, becomes a bigger deal. As an increasing number of things in our homes are wired, networked, and only able to function with a prescribed set of proprietary parts, gaps in supply chains become an even bigger issue. When you cannot use whatever is available, and instead need to wait for the particular thing, you find yourself either hoarding or going without. What happens when you can’t get the toothbrush heads for your smart toothbrush due to prioritization and scarcity with online ordering when it’s not so easy to just go to the pharmacy and get a regular toothbrush?

In response to COVID-19 Adobe is offering no-cost access to some of their services. If people allow themselves to rely on these free services, they end up in a bad situation when a cost is re-attached.

Lock-in is always a risk, but when people are desperate, unemployed, and lacking the resources they need to survive, the implications of being trapped in these proprietary systems are much more painful.

What worries me even more than this is the reliance on insecure communication apps. Zoom, which is becoming the default service in many fields right now, offers anti-features like attendee attention tracking and user reporting.

We are now being required to use technologies designed to maximize opportunities for surveillance to learn, work, and socialize. This is worrisome to me for two main reasons: the violation of privacy and the normalization of a surveillance state. It is a violation of privacy, to have our actions tracked. It also gets us used to being watched, which is dangerous as we look towards the future.

29 March, 2020 05:51PM by mollydb

Sven Hoexter

Looking into Envertech Enverbridge EVB 202 SetID tool

Disclaimer: I'm neither an experienced programmer nor proficient in reverse engineering, but I like to at least try to figure out how things work. Sometimes the solution is so easy that even I manage to find it; still, take this with a grain of salt.

I lately witnessed the setup of an Envertech EnverBridge ENB-202, which is kind of a classic Chinese IoT device. Buy it, plug it in, use some strange setup software, and it will report your PV statistics to a web portal. The setup involved downloading a PE32 Windows executable with a UI that basically has two input boxes and a send button. You have to input the serial number(s) of your inverter boxes and the ID of your EnverBridge. That made me interested in what this setup process really looks like.

The EnverBridge device itself has on one end a power plug, which is also used to communicate with the inverter via some powerline protocol, and on the other a network plug with a classic RJ45 end you plug into your network. If you power it up, it will request an IPv4 address via DHCP. That brings us to the first oddity: the MAC address is in the BC:20:90 prefix, which I could not find in the IEEE lists.

Setting Up the SetID Software

You can download the Windows software as a Zip file; once you unpack it you end up with a Nullsoft installer .exe. Since this is a PE32 executable, we have to add i386 as a foreign architecture to install the wine32 package.

dpkg --add-architecture i386
apt update
apt install wine32:i386
unzip Set\
wine Set\ ID.exe

The end result is an installation in ~/.wine/drive_c/Program Files/SetID, which reveals that this software is built with Qt5, judging by the shipped DLLs. The tool itself is udpCilentNow.exe, and looks like this: Envertech SetID software

The Network Communication

To my own surprise, the communication is straightforward. A single UDP packet is sent to the broadcast address on port 8765.

Envertech SetID UDP packet in Wireshark

I expected some strange binary protocol, but the payload is just simple numbers. They're a combination of the serial numbers of the inverter and the ID of the EnverBridge device. One thing I'm not 100% sure about is the inverter serial numbers: there are two of them, but on the inverters I've seen, the two serial numbers are always the same. So the payload is assembled like this:

  • ID of the EnverBridge
  • char 9 to 16 of the inverter serial 1
  • char 9 to 16 of the inverter serial 2

If you have more inverters, their serials are just appended in the same way. Another strange thing is that the software does close to no input validation: it only checks that the inverter serials start with CN, and then just extracts chars 9 to 16.

The response from the EnverBridge is also a single UDP packet, sent to the broadcast address on port 8764, with exactly the same content we sent.

Writing a Replacement

My result is probably an insult to all proficient Python coders, but based on a bit of research and some cut&paste programming I could assemble a small script to replicate the function of the Windows binary. I guess the usefulness of this exercise was mostly my personal entertainment, though it might help some non-Windows users to set up this device. Usage is also very simple:

./ -h
Usage: [options] MIIDs

  -h, --help         show this help message and exit
  -b BID, --bid=BID  Serial Number of your EnverBridge

./ -b 90087654 CN19100912345678 CN19100912345679

This is basically a 1:1 replication of the behaviour of the Windows binary, though I tried to add a bit more validation than the original binary, plus some more error messages. I also assume the two serial numbers are always the same, so I take only one as input and duplicate it for the packet data.
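The author's script itself is not reproduced here, but based on the protocol description above, a minimal sketch of such a tool might look like the following. The function names `build_payload` and `send_setid` are hypothetical, and the exact payload framing (plain ASCII, EnverBridge ID followed by each serial extract twice) is inferred from the Wireshark observations described earlier:

```python
import socket

def build_payload(bridge_id, serials):
    # As observed above: the EnverBridge ID, followed by chars 9-16 of
    # each inverter serial, appended twice (the Windows tool sends the
    # same extract as "serial 1" and "serial 2").
    payload = bridge_id
    for sn in serials:
        if not sn.startswith("CN"):
            raise ValueError("inverter serial should start with CN: " + sn)
        payload += sn[8:16] * 2
    return payload.encode("ascii")

def send_setid(bridge_id, serials):
    # Send a single UDP packet to the broadcast address on port 8765;
    # the EnverBridge echoes the same content back on port 8764.
    payload = build_payload(bridge_id, serials)
    reply = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    reply.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    reply.bind(("", 8764))
    reply.settimeout(10)
    out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    out.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    out.sendto(payload, ("<broadcast>", 8765))
    out.close()
    try:
        data, addr = reply.recvfrom(1024)
        return data == payload  # the echo confirms the device got it
    except socket.timeout:
        return False
    finally:
        reply.close()
```

Waiting for the echoed payload and comparing it to what was sent mirrors the confirmation behaviour described above.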

29 March, 2020 04:58PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Nageru 1.9.2 released

Obviously, the Covid-19 outbreak caused some of my streaming events to be cancelled, but that's a small thing in the big picture. However, I've accumulated a fair amount of changes to both Nageru, my video mixer, and Futatabi, my slow motion video server, this winter and spring. I've packaged them up and released 1.9.2. As usual, you can get both at, and they're also on the way up to Debian unstable. The complete changelog follows:

Nageru and Futatabi 1.9.2, March 29th, 2020

  - Support handling white balance directly in Nageru, without themes
    manually inserting a WhiteBalanceEffect or handling set_wb().
    To use it, call scene:add_white_balance() instead of
    scene:add_effect( If using this functionality,
    white balance will be properly propagated to the MJPEG feed and
    through Futatabi, so that replays get the correct white balance.
    Futatabi's UI will still be uncorrected, though.

  - Make it possible to siphon out a single MJPEG stream, for remote
    debugging, single-camera recording, single-camera streaming via
    Kaeru or probably other things. The URL for this is /feeds/N.mp4
    where N is the card index (starting from zero).

  - The theme can now access some audio settings; it can get (not set)
    number of buses and names, get/set fader volume, get/set mute,
    and get/set EQ parameters.

  - In Futatabi, it is now possible to set custom source labels, with
    the parameter --source-label NUM:LABEL (or -l NUM:LABEL).

  - When the playback speed changes in Futatabi, ease into the new speed.
    The easing period is nominally 200 ms, but it will be automatically
    shortened or lengthened (up to as much as two seconds in extreme
    cases, especially involving very slight speed change) if this
    helps getting back into a cadence of hitting the original frames.
    This can mean significant performance improvements when ramping
    from higher speeds back into 100%.

  - Updates for newer versions of CEF (tested with Chrome 80).

  - Various bugfixes and performance improvements.


29 March, 2020 03:45PM

hackergotchi for Paulo Henrique de Lima Santana

Paulo Henrique de Lima Santana

My free software activities in February 2020


March is ending, but I finally wrote my monthly report about my activities in Debian and Free Software in general for February.

As I already wrote here, I attended FOSDEM 2020 on February 1st and 2nd in Brussels. It was an amazing experience.

After my return to Curitiba, I felt my energies renewed to start new challenges.

MiniDebConf Maceió 2020

I continued helping to organize the MiniDebConf and I got positive answers from 4Linux, and they are sponsoring the event.


I started to talk with Maristela from IEP - Instituto de Engenharia do Paraná, and after some messages I joined a meeting with her and other members of the Câmara Técnica de Eletrônica, Computação e Ciências de Dados.

I explained FLISOL in Curitiba to them and they agreed to host the event at IEP. I asked to use three spaces: the auditorium for FLISOL talks, the Salão Nobre for meetups of the WordPress and PostgreSQL communities, and the hall for the Install Fest.

Besides FLISOL, they would like to host other events and meetups from communities in Curitiba, such as Python, PHP, and so on. At least one per month.

I helped to schedule a PHP Paraná Community meetup in March.

New job

Since the 17th I have been working at Rentcars as an Infrastructure Analyst. I’m very happy to work there because we use a lot of FLOSS, and I work with nice people.

Ubuntu LTS is the approved OS for desktops but I could install Debian on my laptop :-)


I signed pgp keys from friends I met in Brussels and I had my pgp key signed by them.

Finally my MR to the DebConf20 website fixing some texts was accepted.

I have watched videos from FOSDEM

Until now, I have seen these great talks:
  • Growing Sustainable Contributions Through Ambassador Networks
  • Building Ethical Software Under Capitalism
  • Cognitive biases, blindspots and inclusion
  • Building a thriving community in company-led open source projects
  • Building Community for your Company’s OSS Projects
  • The Ethics of Open Source
  • Be The Leader You Need in Open Source
  • The next generation of contributors is not on IRC
  • Open Source Won, but Software Freedom Hasn’t Yet
  • Open Source Under Attack
  • Lessons Learned from Cultivating Open Source Projects and Communities

That’s all folks!

29 March, 2020 10:00AM

March 28, 2020

François Marier

How to get a direct WebRTC connections between two computers

WebRTC is a standard real-time communication protocol built directly into modern web browsers. It enables the creation of video conferencing services which do not require participants to download additional software. Many services make use of it and it almost always works out of the box.

The reason it just works is that it uses a protocol called ICE to establish a connection regardless of the network environment. What that means, however, is that in some cases your video/audio connection will need to be relayed (using end-to-end encryption) to the other person via a third-party TURN server. In addition to adding extra network latency to your call, that relay server might be overloaded at some point and drop or delay packets coming through.

Here's how to tell whether or not your WebRTC calls are being relayed, and how to ensure you get a direct connection to the other host.

Testing basic WebRTC functionality

Before you place a real call, I suggest using the official test page which will test your camera, microphone and network connectivity.

Note that this test page makes use of a Google TURN server which is locked to particular HTTP referrers and so you'll need to disable privacy features that might interfere with this:

  • Brave: Disable Shields entirely for that page (Simple view) or allow all cookies for that page (Advanced view).

  • Firefox: Ensure that is set to false in about:config, which it is by default.

  • uMatrix: The "Spoof Referer header" option needs to be turned off for that site.

Checking the type of peer connection you have

Once you know that WebRTC is working in your browser, it's time to establish a connection and look at the network configuration that the two peers agreed on.

My favorite service at the moment is Whereby (formerly, so I'm going to use that to connect from two different computers:

  • canada is a laptop behind a regular home router without any port forwarding.
  • siberia is a desktop computer in a remote location that is also behind a home router, but in this case its internal IP address ( is set as the DMZ host.


For all Chromium-based browsers, such as Brave, Chrome, Edge, Opera and Vivaldi, the debugging page you'll need to open is called chrome://webrtc-internals.

Look for RTCIceCandidatePair lines and expand them one at a time until you find the one which says:

  • state: succeeded (or state: in-progress)
  • nominated: true
  • writable: true

Then from the name of that pair (N6cxxnrr_OEpeash in the above example) find the two matching RTCIceCandidate lines (one local-candidate and one remote-candidate) and expand them.

In the case of a direct connection, I saw the following on the remote-candidate:

  • ip shows the external IP address of siberia
  • port shows a random number between 1024 and 65535
  • candidateType: srflx

and the following on local-candidate:

  • ip shows the external IP address of canada
  • port shows a random number between 1024 and 65535
  • candidateType: prflx

These candidate types indicate that a STUN server was used to determine the public-facing IP address and port for each computer, but the actual connection between the peers is direct.

On the other hand, for a relayed/proxied connection, I saw the following on the remote-candidate side:

  • ip shows an IP address belonging to the TURN server
  • candidateType: relay

and the same information as before on the local-candidate.


If you are using Firefox, the debugging page you want to look at is about:webrtc.

Expand the top entry under "Session Statistics" and look for the line (should be the first one) which says the following in green:

  • ICE State: succeeded
  • Nominated: true
  • Selected: true

then look in the "Local Candidate" and "Remote Candidate" sections to find the candidate type in brackets.

Firewall ports to open to avoid using a relay

In order to get a direct connection to the other WebRTC peer, one of the two computers (in my case, siberia) needs to open all inbound UDP ports since there doesn't appear to be a way to restrict Chromium or Firefox to a smaller port range for incoming WebRTC connections.

This isn't great and so I decided to tighten that up in two ways by:

  • restricting incoming UDP traffic to the IP range of siberia's ISP, and
  • explicitly denying incoming traffic to the UDP ports I know are open on siberia.

To get the IP range, start with the external IP address of the machine (I'll use the IP address of my blog in this example: and pass it to the whois command:

$ whois | grep CIDR

To get the list of open UDP ports on siberia, I sshed into it and ran nmap:

$ sudo nmap -sU localhost

Starting Nmap 7.60 ( ) at 2020-03-28 15:55 PDT
Nmap scan report for localhost (
Host is up (0.000015s latency).
Not shown: 994 closed ports
631/udp   open|filtered ipp
5060/udp  open|filtered sip
5353/udp  open          zeroconf

Nmap done: 1 IP address (1 host up) scanned in 190.25 seconds

I ended up with the following in my /etc/network/iptables.up.rules (ports below 1024 are denied by the default rule and don't need to be included here):

# Deny all known-open high UDP ports before enabling WebRTC for canada
-A INPUT -p udp --dport 5060 -j DROP
-A INPUT -p udp --dport 5353 -j DROP
-A INPUT -s -p udp --dport 1024:65535 -j ACCEPT

28 March, 2020 11:55PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Nintendo Switch - Virtua Racing

Screenshot of Virtua Racing

I don't play many video games any more, and I certainly don't consider myself a gamer. If I have time and brain power, I like to write code or work on my PhD; if I don't have brain power I read, watch movies or TV. However last Summer, lured by the launch titles Super Mario Odyssey and The Legend of Zelda: Breath of the Wild, I bought a Nintendo Switch. I've actually played some games on it now, although I surprised myself by playing lesser-known and indie titles rather than the AAA names. My favourite games are ones I can pick up and play for 5-10 minutes at a time without losing time to tutorials, introductory videos, and other such nonsense. I thought I'd recommend some games. First up, Virtua Racing.

I actually bought this for the office Switch before I had my own. It's a re-release of the 1992 arcade racer. I have fond memories of arcades in the 90s (although I don't think I played this game). It's conceptually simple: three tracks, few options (you can choose manual or automatic transmission). But there's some hidden depth in there: it models slipstreaming, tyre wear and other effects.

The fun of it for me is trying to beat my personal best and that of my colleagues. But I haven't actually placed above 3rd on the intermediate course or even finished the advanced one, so despite its simplicity, there's plenty of mileage in it for me. It also looks great, rendering at a smooth 60fps. The large untextured polygons were once a design decision due to the limitations of the hardware in 1992 and are now an in-fashion aesthetic.

28 March, 2020 02:07PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RProtoBuf 0.4.17: Robustified

A new release 0.4.17 of RProtoBuf is now on CRAN. RProtoBuf provides R with bindings for the Google Protocol Buffers (“ProtoBuf”) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol.

This release contains small polishes related to the release 0.4.16 which added JSON support for messages, and switched to ByteSizeLong. This release now makes sure JSON functionality is only tested where available (on version 3 of the Protocol Buffers library), and that ByteSizeLong is only called where available (version 3.6.0 or later). Of course, older versions build as before and remain fully supported.

Changes in RProtoBuf version 0.4.17 (2020-03-xx)

  • Condition use of ByteSizeLong() on building with ProtoBuf 3.6.0 or later (Dirk in #71 fixing #70).

  • The JSON unit tests are skipped if ProtoBuf 2.* is used (Dirk, also #71).

  • The configure script now extracts the version from the DESCRIPTION file (Dirk, also #71).

CRANberries provides the usual diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the ‘quick’ overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

28 March, 2020 01:22PM

March 26, 2020

hackergotchi for Jonathan Dowland

Jonathan Dowland

ephemeral note-taking vs preserve-everything

Assume for a minute that the best way to take notes is on paper with pens, pencils etc., and not on a digital device. (I might return to this later.)

I'm torn between two extremes for note-taking. On one hand, I find it useful in the short term to make scrappy, ephemeral notes: ad-hoc daily TODO lists, mind maps and experiments that I intend to throw away. There are lots of styles of stationery to support this: ring-bound notebooks that are easy to tear leaves from, or literally the backs of envelopes.

The other extreme is the "preserve everything" mentality. For my PhD, I have a single, hard-backed notebook that I try to do all my PhD note taking in. I date each entry. There's a history I can refer to, no matter whether what I'm writing is seemingly ephemeral or not.

So I do both. Sometimes I want to refer back to something I did which was ephemeral, and I've lost it or thrown it away. And having lots of mismatched stationery for ephemeral notes is a bit messy, and works against the other extreme to a certain extent.

I've started to wonder whether strictly doing one or the other (most likely, the "preserve everything" approach) might be a good idea. In practice that would mean settling on a particular notebook format and disposing of anything else; having separate notebooks by topic (PhD, work, …) with a "catch-all" for the rest.

Does anyone have any advice or useful resources to read on this topic?

26 March, 2020 03:51PM

hackergotchi for Jonathan Carter

Jonathan Carter


I just took my dog for a nice long walk. It’s the last walk we’ll be taking for the next 3 weeks, he already starts moping around if we just skip one day of walking, so I’m going to have to get creative keeping him entertained. My entire country is going on lockdown starting at midnight. People won’t be allowed to leave their homes unless it’s for medical emergencies, to buy food or if their work has been deemed essential.

Due to the Covid-19 epidemic nearing half a million confirmed infections, this has become quite common in the world right now, with about a quarter of the world’s population currently under lockdown and confined to their homes.

Some people may have noticed I’ve been a bit absent recently, I’ve been going through some really rough personal stuff. I’m dealing with it and I’ll be ok, but please practice some patience with me in the immediate future if you’re waiting on anything.

I have a lot of things going on in Debian right now. It helps keeping me busy through all the turmoil and gives me something positive to focus on. I’m running for Debian Project Leader (DPL), I haven’t been able to put in quite the energy into my campaign that I would have liked, but I think it’s going ok under the circumstances. I think because of everything happening in the world it’s been more difficult for other Debianites to participate in debian-vote discussions as well. Recently we also announced Debian Social, a project that’s still in its early phases, but we’ve been trying to get it going for about 2 years, so it’s nice to finally see it shaping up. There’s also plans to put Debian Social and some additional tooling to the test, with the idea to host a MiniDebConf entirely online. No dates have been confirmed yet, we still have a lot of crucial bits to figure out, but you can subscribe to debian-devel-announce and Debian micronews for updates as soon as more information is available.

To everyone out there, stay safe, keep your physical distance for now and don’t lose hope, things will get better again.

26 March, 2020 03:09PM by jonathan

hackergotchi for Axel Beckert

Axel Beckert

Pictures in pure HTML with chafa and aha

I recently stumbled upon chafa, a tool to display pictures, especially color pictures on your ANSI text terminal, e.g. inside an xterm.

And I occasionally use aha, the Ansi HTML Adapter, to convert colorful terminal content into HTML to show off terminal screenshots without requiring a picture, so that it also works in e.g. text browsers or for blind users.

Combining chafa and aha: Examples

A moment ago I had the thought of what would happen if I fed the output of chafa into aha, and expected nothing really usable. But I was surprised by the quality of the outcome.

looks like this after chafa -w 9 -c full -s 160x50 DSCN4692.jpg | aha -n:

Checking the Look in Text Browsers

It even looks not that bad in elinks — as far as I know the only text browser which supports CSS and styles:

In Lynx and Links 2, the text composing the image is displayed only in black and white, but you at least can recognise the edges in the picture:

Same Functionality in One Tool?

I knew there was a tool which did this in one step. Seems to have been png2html.

I tried to play around with it, too, but I neither really understood how to use it (it seems to require a text file for the characters to be used — why?) nor did I really get it working. It always ran until I aborted it, and it never filled the target file with any content.

Additionally, png2html insists on one character per pixel, requiring you to first resize the image properly before converting it to HTML.

The Keyboard in the Pictures

Oh, and btw., the displayed keyboard is my Zlant. The Zlant is a 40% uniform staggered mechanical keyboard. Currently, only Zlant PCBs are available at 1UP Keyboards (USA), i.e. no complete kits.

It is shown with the SA Vilebloom key cap set, currently available at MechSupply (UK).

26 March, 2020 04:55AM by Axel Beckert

March 25, 2020

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, February 2020

A Debian LTS logo Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In February, 226 work hours have been dispatched among 14 paid contributors. Their reports are available:
  • Abhijith PA gave back 12 out of his assigned 14h, thus he is carrying over 2h for March.
  • Ben Hutchings did 19.25h (out of 20h assigned), thus carrying over 0.75h to March.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Dylan Aïssi did 5.5h (out of 4h assigned and 1.5h from January).
  • Emilio Pozuelo Monfort did 29h (out of 20h assigned and 15.75h from January), thus carrying over 6.75h to March.
  • Hugo Lefeuvre gave back the 12h he got assigned.
  • Markus Koschany did 10h (out of 20h assigned and 8.75h from January), thus carrying over 18.75h to March.
  • Mike Gabriel did 5.75h (out of 20h assigned) and gave 12h back to the pool, thus he is carrying over 2.25h to March.
  • Ola Lundqvist did 10h (out of 8h assigned and 4.5h from January), thus carrying over 2.5h to March.
  • Roberto C. Sánchez did 20.25h (out of 20h assigned and 13h from January) and gave back 12.75h to the pool.
  • Sylvain Beucler did 20h (out of 20h assigned).
  • Thorsten Alteholz did 20h (out of 20h assigned).
  • Utkarsh Gupta did 20h (out of 20h assigned).

Evolution of the situation

February began as a rather calm month, and the fact that more contributors have given back unused hours is an indicator of this calmness, and also an indicator that contributing to LTS has become more of a routine now, which is good.

In the second half of February Holger Levsen (from LTS) and Salvatore Bonaccorso (from the Debian Security Team) met at SnowCamp in Italy and discussed tensions and possible improvements from and for Debian LTS.

The security tracker currently lists 25 packages with a known CVE and the dla-needed.txt file has 21 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.

No comment | Liked this article? Click here. | My blog is Flattr-enabled.

25 March, 2020 04:44PM by Raphaël Hertzog

March 24, 2020

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

New package RcppDate 0.0.1 now on CRAN!

A new small package with a new C++ header library is now on CRAN. It brings the date library by Howard Hinnant to R. This library has been in pretty widespread use for a while now, and adds to C++11/C++14/C++17 what will be (with minor modifications) the ‘date’ library in C++20. I had been aware of it for a while, but had not needed it thanks to the CCTZ library out of Google and our RcppCCTZ package. And like CCTZ, it builds upon std::chrono, adding a whole lot of functionality and usability enhancements. But as some upcoming (and quite exciting!) changes in nanotime require it, I had a reason to set about packaging it as RcppDate. And after a few days of gestation and review it is now available via CRAN.

Two simple example files are included and can be accessed by Rcpp::sourceCpp(). Some brief excerpts follow.

The first example shows three date constructors. Note how the month (and the leading digits) are literals. No quotes for strings anywhere. And no format (just like our anytime package for R).

  constexpr auto x1 = 2015_y/March/22;
  constexpr auto x2 = March/22/2015;
  constexpr auto x3 = 22_d/March/2015;

Note that these are constexpr that resolve at compile-time, and that the resulting year_month_day type is inferred via auto.

A second example constructs the last day of the months similarly:

  constexpr auto x1 = 2015_y/February/last;
  constexpr auto x2 = February/last/2015;
  constexpr auto x3 = last/February/2015;

For more, see the copious date.h documentation.

The (very bland first) NEWS entry (from a since-added NEWS file) for the initial upload follows.

Changes in version 0.0.1 (2020-01-17)

  • Initial CRAN upload of first version

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 March, 2020 11:29PM

hackergotchi for Norbert Preining

Norbert Preining

KDE/Plasma 5.18 for Debian

I have been trying out the Plasma Desktop for one week now, and I am very positively surprised. Compared to the clumsy history of KDE3, the current desktop is extremely small-footprint and smooth, surprisingly. Integration is as expected great, and mixing programs from the “other world” (Gtk/Gnome) works also extremely smooth.

If there are a few things I would change, it is mostly the chaos around kwallet and Gnome Keyring. I would love to have one secret storage, and it seems that Gnome Keyring is preferable, but this is in flux at the moment. Also, it is not that pressing, because I have moved all my passwords into pass and thus don’t need the secret storage that much anymore.

So, after a bit of working with Plasma, I realized that Debian still ships an old version, while the most recent upstream release is 5.18.3 LTS. Thus, I embarked on a journey of updating all the necessary packages, and there are a lot: in total I updated 106 packages (and added one new one!) until I finally had a new plasma-desktop package available. If you are interested, there are binaries for amd64 and sources in my Debian repository (WARNING: These packages are for Debian/sid and maybe testing, and cannot be used with Buster!):

deb unstable kde
deb-src unstable kde

As usual, don’t forget to import my GPG key, and all packages are without warranty :-;

There are two packages that I didn’t manage to update: kde-gtk-config which has changed a lot and contains far less files, and breeze-icons which fails on its own symlink tests. If anyone has an idea, please let me know.

If other packages are missing, please also drop me a line and I’ll try to update them.


24 March, 2020 08:20PM by Norbert Preining

hackergotchi for Christoph Berg

Christoph Berg


Users had often asked where they could find older versions of packages from I had been collecting these since about April 2013, and in July 2016 I made the packages available via an ad-hoc URL on the repository master host, called "the morgue". There was little repository structure: all files belonging to a source package were stuffed into a single directory, no matter what distribution they belonged to. Besides this not being particularly accessible for users, the main problem was the ever-increasing need for more disk space on the repository host. We are now at 175 GB for the archive, of which 152 GB is for the morgue.

Our friends from have had a proper archive host ( for some time already, so it was about time to follow suit and implement a proper archive for as well, usable from apt.

So here it is:

The archive covers all past and current Debian and Ubuntu distributions. The apt sources.lists entries are similar to the main repository, just with "-archive" appended to the host name and the distribution:

deb DIST-pgdg-archive main
deb-src DIST-pgdg-archive main

The oldest PostgreSQL server versions covered there are 8.2.23, 8.3.23, 8.4.17, 9.0.13, 9.1.9, 9.2.4, 9.3beta1, and everything newer.

An example:

$ apt-cache policy postgresql-12
  Installed: 12.2-2.pgdg+1+b1
  Candidate: 12.2-2.pgdg+1+b1
  Version table:
 *** 12.2-2.pgdg+1+b1 900
        500 sid-pgdg/main amd64 Packages
        500 sid-pgdg-archive/main amd64 Packages
        100 /var/lib/dpkg/status
     12.2-2.pgdg+1 500
        500 sid-pgdg-archive/main amd64 Packages
     12.2-1.pgdg+1 500
        500 sid-pgdg-archive/main amd64 Packages
     12.1-2.pgdg+1 500
        500 sid-pgdg-archive/main amd64 Packages
     12.1-1.pgdg+1 500
        500 sid-pgdg-archive/main amd64 Packages
     12.0-2.pgdg+1 500
        500 sid-pgdg-archive/main amd64 Packages
     12.0-1.pgdg+1 500
        500 sid-pgdg-archive/main amd64 Packages
     12~rc1-1.pgdg+1 500
        500 sid-pgdg-archive/main amd64 Packages
     12~beta4-1.pgdg+1 500
        500 sid-pgdg-archive/main amd64 Packages
     12~beta3-1.pgdg+1 500
        500 sid-pgdg-archive/main amd64 Packages
     12~beta2-1.pgdg+1 500
        500 sid-pgdg-archive/main amd64 Packages
     12~beta1-1.pgdg+1 500
        500 sid-pgdg-archive/main amd64 Packages

Because this is hosted on S3, browsing directories is only supported indirectly by static index.html files, so if you want to look at some specific URL, append "/index.html" to see it.

The archive is powered by a PostgreSQL database and a bunch of python/shell scripts, from which the apt index files are built.

Archiving old distributions

I'm also using the opportunity to remove some long-retired distributions from the main repository host. The following distributions have been moved over:

  • Debian etch (4.0)
  • Debian lenny (5.0)
  • Debian squeeze (6.0)
  • Ubuntu lucid (10.04)
  • Ubuntu saucy (13.10)
  • Ubuntu utopic (14.10)
  • Ubuntu wily (15.10)
  • Ubuntu zesty (17.04)
  • Ubuntu cosmic (18.10)

They are available as "DIST-pgdg" from the archive, e.g. squeeze:

deb squeeze-pgdg main
deb-src squeeze-pgdg main

24 March, 2020 11:08AM

hackergotchi for Anisa Kuci

Anisa Kuci

Outreachy post 5 - Final report

This is my last Outreachy blogpost, as my internship unfortunately has come to an end. It was a blast!

Through the Outreachy internship I gained a lot of valuable knowledge, and even though I don't know what the future will bring, I am more confident about myself, my skills in fundraising and my technical abilities now.

During the contribution phase I did quite a lot of research so I could come up with relevant information to add to the Debian wiki as part of the contribution phase tasks. This was helpful for me to build a deeper understanding of the Debian project, the DebConf conferences and the general style of working within Debian.

During this phase I completed most of the tasks given and got onto the public mailing lists and IRC channels. This was quite an intense experience by itself, as it was like digging into a new job, but in a competitive situation, as other applicants, of course, were also putting in their best to get one of the 50 Outreachy internships (two in Debian) that were available.

When I got selected I also got access to the email address and the private git repos, as they would be needed for me to work on fundraising. That email address is the one I use for communication with sponsors. I learned what makes communication look professional and how to think of the recipient when formulating emails, sending replies or creating marketing messages.

As the internship continued I started to learn how the DebConf organizing structure works, with particular attention to the fundraising team. I was also quickly given the responsibility of reaching out to 100 potential sponsors, which combined learning and working experience very nicely.

I was given the responsibility of working on the fundraising material (flyer and brochure) for DebConf20: using LaTeX, I updated the files to remove the translation system, added the Israeli currency, and applied the visual design chosen in the public logo proposal contest that is held for each DebConf. My creative side could also be put to good use, as the team trusted me with selecting the images for the brochure and making sure that the usage rights granted and the attribution are all compliant with Free Software/Creative Commons licences.

I have continuously maintained those materials and was very happy that long-term team members even asked me to approve changes they proposed to the fundraising material. The MiniDebConf Regensburg team have taken the files I created and made them into their own fundraising material (brochure, flyer), which is great to see.

MiniDebConf Regensburg commits

I have profited so much from other people’s work, so I am very happy if community members can build on mine. After all, sharing is caring!

Through the Outreachy travel stipend I was able to attend FOSDEM 2020, and besides meeting many friends from the community and having a fundraising meeting with a sponsor, I was also able to help distribute the DebConf20 fundraising material, both in electronic form and as printed versions.

During the Outreachy internship I did two communication waves to potential sponsors and maintained the daily communication with them. I followed the documentation that was already available in the Debian wiki guiding me through the process of invoicing sponsors through the organizations that Debian collaborates with.

As I have been in frequent contact with the sponsors, I have continuously updated the DebConf20 website with sponsor logos and links through git files, improving web pages on the wafer system and doing 18 merge requests in total.

DebConf20 website merge requests

I completed six email templates for the DebConf sponsor communication campaigns. While two of them have already been very useful, I hope my other proposals will be the ones used for the next waves of sponsor outreach.

I committed them to the sponsors git repo so they can be easily accessible and well documented for other DebConf teams or can be recycled for smaller Debian events.

Unfortunately, due to the uncertainty about travel caused by the coronavirus pandemic, the third wave of communication, originally scheduled for the end of my internship in March, has been put on hold. I have committed to the team to do this once the travel restrictions have been lifted and we know that DebConf20 can proceed as planned in August 2020.

So yeah, I am definitely going to stay around and continue to help with DebConf fundraising!

DebConf has been held for two decades and also has a generic website, so I updated the static sponsors page on the main website as well, as it is the first landing page for everyone, especially for people who are not necessarily insiders in Debian or DebConf.

I have completed a Python tutorial, gained a general understanding of the programming language and made some smaller contributions to improve existing tools in the fundraising team based on what I learned.

The internship was also focused on documentation, so during the whole internship I have kept notes in order to be able to improve existing documentation and I have also written new material, especially on the tasks that I have been working on more closely:

Every year DebConf has a new team in a new country, so I hope the documentation I have worked on will be useful as a jump-starter for them to organize and finance their events.

I would like to take a moment to thank again my mentors Daniel and Karina for all their support, it has been great working with them. Having them as an example, I have learned a lot. Also warm thanks to the DebConf global and local teams which have been very welcoming and always supportive during my internship.

So, as you might already know, if you have read my other Outreachy blog posts, this has been a great experience for me! Outreachy provides an awesome opportunity that would not be available if it was not for the generous sponsors and volunteers making such a program for people from underrepresented groups in the FLOSS community.

I really encourage people from these groups to find the confidence within themselves and apply for Outreachy!

I love Debian

24 March, 2020 10:45AM by Anisa Kuci

Russ Allbery

Review: Lost in Math

Review: Lost in Math, by Sabine Hossenfelder

Publisher: Basic
Copyright: June 2018
ISBN: 0-465-09426-0
Format: Kindle
Pages: 248

Listening to experts argue can be one of the better ways to learn about a new field. It does require some basic orientation and grounding, without which the argument can be confusing or, worse, wildly misleading, so some advance research or Internet searching is warranted. But it provides some interesting advantages over reading multiple popular introductions to a field.

First, experts arguing with each other are more precise about their points of agreement and disagreement because they're trying to persuade someone who is well-informed. The points of agreement are often more informative than the points of disagreement, since they can provide a feel for what is uncontroversial among experts in the field.

Second, internal arguments tend to be less starry-eyed. One of the purposes of popularizations of a field is to get the reader excited about it, and that can be fun to read. But to generate that excitement, the author has a tendency to smooth over disagreements and play up exciting but unproven ideas. Expert disagreements pull the cover off of the uncertainty and highlight the boundaries of what we know and how we know it.

Lost in Math (subtitled How Beauty Leads Physics Astray) is not quite an argument between experts. That's hard to find in book form; most of the arguments in the scientific world happen in academic papers, and I rarely have the energy or attention span to read those. But it comes close. Hossenfelder is questioning the foundations of modern particle physics for the general public, but also for her fellow scientists.

High-energy particle physics is facing a tricky challenge. We have a solid theory (the standard model) which explains nearly everything that we have currently observed. The remaining gaps are primarily at very large scales (dark matter and dark energy) or near phenomena that are extremely difficult to study (black holes). For everything else, the standard model predicts our subatomic world to an exceptionally high degree of accuracy. But physicists don't like the theory. The details of why are much of the topic of this book, but the short version is that the theory does not seem either elegant or beautiful. It relies on a large number of measured constants that seem to have no underlying explanation, which is contrary to a core aesthetic principle that physicists use to judge new theories.

Accompanying this problem is another: New experiments in particle physics that may be able to confirm or disprove alternate theories that go beyond the standard model are exceptionally expensive. All of the easy experiments have been done. Building equipment that can probe beyond the standard model is incredibly expensive, and thus only a few of those experiments have been done. This leads to two issues: Particle physics has an overgrowth of theories (such as string theory) that are largely untethered from experiments and are not being tested and validated or disproved, and spending on new experiments is guided primarily by a sense of scientific aesthetics that may simply be incorrect.

Enter Lost in Math. Hossenfelder's book picks up a thread of skepticism about string theory (and, in Hossenfelder's case, supersymmetry as well) that I previously read in Lee Smolin's The Trouble with Physics. But while Smolin's critique was primarily within the standard aesthetic and epistemological framework of particle physics, Hossenfelder is questioning that framework directly.

Why should nature be beautiful? Why should constants be small? What if the universe does have a large number of free constants? And is the dislike of an extremely reliable theory on aesthetic grounds a good basis for guiding which experiments we fund?

Do you recall the temple of science, in which the foundations of physics are the bottommost level, and we try to break through to deeper understanding? As I've come to the end of my travels, I worry that the cracks we're seeing in the floor aren't really cracks at all but merely intricate patterns. We're digging in the wrong places.

Lost in Math will teach you a bit less about physics than Smolin's book, although there is some of that here. Smolin's book was about two-thirds physics and one-third sociology of science. Lost in Math is about two-thirds sociology and one-third physics. But that sociology is engrossing. It's obvious in retrospect, but I hadn't thought before about the practical effects of running out of unexplained data on a theoretical field, or about the transition from more data than we can explain to having to spend billions of dollars to acquire new data. And Hossenfelder takes direct aim at the human tendency to find aesthetically appealing patterns and unified explanations, and scores some palpable hits.

I went into physics because I don't understand human behavior. I went into physics because math tells it how it is. I liked the cleanliness, the unambiguous machinery, the command math has over nature. Two decades later, what prevents me from understanding physics is that I still don't understand human behavior.

"We cannot give exact mathematical rules that define if a theory is attractive or not," says Gian Francesco Giudice. "However, it is surprising how the beauty and elegance of a theory are universally recognized by people from different cultures. When I tell you, 'Look, I have a new paper and my theory is beautiful,' I don't have to tell you the details of my theory; you will get why I'm excited. Right?"

I don't get it. That's why I am talking to him. Why should the laws of nature care what I find beautiful? Such a connection between me and the universe seems very mystical, very romantic, very not me.

But then Gian doesn't think that nature cares what I find beautiful, but what he finds beautiful.

The structure of this book is half tour of how physics judges which theories are worthy of investigation and half personal quest to decide whether physics has lost contact with reality. Hossenfelder approaches this second thread with multiple interviews of famous scientists in the field. She probes at their bases for preferring one theory over another, at how objective those preferences can or should be, and what it means for physics if they're wrong (as increasingly appears to be the case for supersymmetry). In so doing, she humanizes theory development in a way that I found fascinating.

The drawback to reading about ongoing arguments is the lack of a conclusion. Lost in Math, unsurprisingly, does not provide an epiphany about the future direction of high-energy particle physics. Its conclusion, to the extent that it has one, is a plea to find a way to put particle physics back on firmer experimental footing and to avoid cognitive biases in theory development. Given the cost of experiments and the nature of humans, this is challenging. But I enjoyed reading this questioning, contrarian take, and I think it's valuable for understanding the limits, biases, and distortions at the edge of new theory development.

Rating: 7 out of 10

24 March, 2020 05:07AM

March 23, 2020

hackergotchi for Norbert Preining

Norbert Preining

De-uglify GTk3 tabs of terminals

If you are puzzled by the indistinguishability of the active tab from inactive tabs in any of the GTK3 based terminal emulators (mate-terminal, gnome-terminal, terminator, …), you are not alone. I have been plagued by that for far too long, and finally found a working solution.

In the above screenshot you see that the tabs in the upper part are hardly distinguishable, while in the lower part the active tab is clearly indicated with different colors. Searching the web brings up a variety of suggestions, most of them not working. I finally found this one and adapted the code therein to fit my needs. The solution is to edit ~/.config/gtk-3.0/gtk.css, and add the following at the end:

notebook tab {
    /* background-color: #222; */
    padding: 0.4em;
    border: 0;
    border-color: #444;
    border-style: solid;
    border-width: 1px;
}
notebook tab:checked {
    /* background-color: #000; */
    background-image: none;
    border-color: #76C802;
}
notebook tab:checked label {
    color: #76C802;
    font-weight: 500;
}
notebook tab button {
    padding: 0;
    background-color: transparent;
    color: #ccc;
}
notebook tab button:hover {
  border: 0;
  background-image: none;
  border-color: #444;
  border-style: solid;
  border-width: 1px;
}

Here I didn’t change the background color (commented out above), but changed the styling of the tab title and added a frame. The above changes the tab layout for all GTK3 applications that use the notebook widget; if one wants to restrict it to a certain terminal, one needs to prefix each CSS selector with the correct terminal indicator, as shown in the original post.

Hope that helps.

23 March, 2020 09:44PM by Norbert Preining

hackergotchi for Joey Hess

Joey Hess

quarantimer: a coronavirus quarantine timer for your things

I am trying to avoid bringing coronavirus into my house on anything, and I also don't want to sterilize a lot of stuff. (Tedious and easy to make a mistake.) Currently it seems that the best approach is to leave stuff to sit undisturbed someplace safe, for long enough for the virus to degrade away.

Following that policy, I've quickly ended up with a porch full of stuff in different stages of quarantine, and I am losing track of how long things have been there. If you have the same problem, here is a solution:

Open it on your mobile device, and you can take photos of each thing, select the kind of surfaces it has, and it will track the quarantine time for you. You can share the link to other devices or other people to collaborate.
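The bookkeeping behind such a timer is simple. Here is a minimal sketch in Python of the idea, assuming hypothetical per-surface durations — the numbers below are placeholders for illustration, not the site's actual values and not medical guidance:

```python
from datetime import datetime, timedelta

# Hypothetical per-surface quarantine durations, in hours (placeholders only).
SURFACE_HOURS = {"cardboard": 24, "plastic": 72, "steel": 72, "cloth": 48}

def quarantine_done_at(started, surfaces):
    """An item is safe once the longest applicable surface duration elapses."""
    longest = max(SURFACE_HOURS[s] for s in surfaces)
    return started + timedelta(hours=longest)

start = datetime(2020, 3, 23, 12, 0)
print(quarantine_done_at(start, ["cardboard", "plastic"]))  # 2020-03-26 12:00:00
```

The site tracks one such deadline per photographed item; the only real design decision is taking the maximum over all surfaces an item has.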

I anticipate the javascript and css will improve, but it's good enough for now. I will provide this website until the crisis is over. Of course, it's free software and you can also host your own.

If this seems useful, please tell your friends and family about it.

Be well!

This is made possible by my supporters on Patreon, particularly Jake Vosloo.

23 March, 2020 07:39PM

hackergotchi for Emmanuel Kasper

Emmanuel Kasper

Two-Factor Authentication on gitlab with Yubikey

I wanted to have a working Two-Factor Authentication (2FA) setup to log in on Debian’s gitlab instance.
You might already know Two-Factor Authentication via a One Time Password (OTP) generating app on your smartphone, like FreeOTP or Google Authenticator. But it is also possible to use a physical device, where a keypress on the device is enough to authenticate (it speeds things up!). Here I am using a Yubikey 4, a popular USB device for Two-Factor Authentication which is officially supported by gitlab, and whose tooling is well packaged in Debian.

Get to know the device

Install the needed packages to work with the yubikey
# apt install yubikey-manager libu2f-host0
List connected devices on your usb bus:
$ lsusb
Bus 002 Device 109: ID 1050:0407 Yubikey 4 OTP+U2F+CCID
Get info about the device capability
$ ykman info
Device type: YubiKey 4
Serial number: 1234567
Firmware version: 4.3.7
Enabled USB interfaces: OTP+FIDO+CCID
OTP Enabled
FIDO U2F Enabled
OpenPGP Enabled
PIV Enabled
OATH Enabled
FIDO2 Not available
The capability which interests us here is FIDO U2F. The Yubikey 4 supports Two-Factor Authentication via the U2F standard, and this standard is maintained by the FIDO Industry Association, hence the name. As I plan to only use the FIDO U2F capability of the key, I set ‘FIDO’ to be the single mode of the key:
ykman mode FIDO

Testing web browser interaction with Yubico demo system

Now we need a browser with support for the U2F standard. Firefox has had builtin support since version 67; Debian 10 “Buster” has firefox-esr version 68, so that will work. For testing yubikeys, the manufacturer has a demo website where you can test U2F: follow the “Explore the Yubikey” link there.
Once there, you will be asked to register an account on Yubico’s demo systems, to which you will add the Yubikey as an authenticating device. After that you can add your security key. The first step will be to register the device, which will require a light touch on the Yubikey button and acceptance of this Firefox warning window, as the demo website wants to know the model of the device.

Firefox message on the yubikey demo site. A normal site with U2F would not require the extended information, and have a simpler popup message.
As soon as the device is registered, you can login and logout and you will be prompted again to lightly touch the Yubikey button to authenticate, in addition to the classical login / password.

Using U2F on gitlab

When you want to register your yubikey for logging in on salsa, you first need to register a One Time Password device in Settings -> Account -> Manage two-factor authentication, and then Register Universal Two-Factor (U2F) Device. After the usual Firefox popup and the light touch on the key button, that's it: you have fast and reliable Two-Factor Authentication!


Each time I have to look at anything close to cryptography/authentication, it is a terminology avalanche. Here we already had 2FA, OTP, U2F and FIDO. And now there is FIDO2 too. It is the next version of the U2F standard, but this time it was named after the standardizing organization, FIDO. The web browser part of FIDO2 is called Webauthn. Also, sometimes the whole of FIDO2 is called Webauthn too. Easy to get, isn’t it?

23 March, 2020 05:45PM by Emmanuel Kasper

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (January and February 2020)

The following contributors got their Debian Developer accounts in the last two months:

  • Gard Spreemann (gspr)
  • Jonathan Bustillos (jathan)
  • Scott Talbert (swt2c)

The following contributors were added as Debian Maintainers in the last two months:

  • Thiago Andrade Marques
  • William Grzybowski
  • Sudip Mukherjee
  • Birger Schacht
  • Michael Robin Crusoe
  • Lars Tangvald
  • Alberto Molina Coballes
  • Emmanuel Arias
  • Hsieh-Tseng Shen
  • Jamie Strandboge


23 March, 2020 02:00PM by Jean-Pierre Giraud

hackergotchi for Gunnar Wolf

Gunnar Wolf

Made in what?

Say… What?

Just bought a 5-pack of 64GB USB keys. Am about to test them to ensure their actual capacity.
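For the capacity test itself, f3 (f3write/f3read, packaged in Debian) is the usual tool. As a sketch of the underlying idea, under an assumed block scheme of my own — an illustration, not a replacement for f3, and pointing it at a raw device destroys its contents:

```python
import hashlib

BLOCK = 1024 * 1024  # test in 1 MiB chunks

def pattern(i):
    """Deterministic, unique content for block number i."""
    return hashlib.sha256(i.to_bytes(8, "big")).digest() * (BLOCK // 32)

def check_capacity(path, blocks):
    """Write unique blocks across the whole claimed capacity, then read them
    back.  Fake-capacity flash silently wraps writes around, so later blocks
    clobber earlier ones and the verification fails."""
    with open(path, "wb") as f:
        for i in range(blocks):
            f.write(pattern(i))
    with open(path, "rb") as f:
        return all(f.read(BLOCK) == pattern(i) for i in range(blocks))
```

A genuine 64GB key verifies every block; a fake one fails as soon as the read-back reaches the region where writes wrapped around.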

… And the label is… actually true! For basically anything we are likely to encounter, especially in electronics. But still, it demands a photo before opening.

How come I’ve never come across anything like this before? :-]


Of course, just opening the package yielded this much more traditional (and much more permanent) piece of information:

23 March, 2020 07:00AM by Gunnar Wolf

Dima Kogan

org-babel for documentation

So I just gave a talk at SCaLE 18x about numpysane and gnuplotlib, two libraries I wrote to make using numpy bearable. With these two, it's actually quite nice!

Prior to the talk I overhauled the documentation for both these projects. The gnuplotlib docs now have a tutorial/gallery page, which is interesting-enough to write about. Check it out! Mostly it is a sequence of

  • Here's a bit of Python code
  • And here's the plot you get if you run that code

Clearly you want the plots in the documentation to correspond to the code, so you want something to actually run each code snippet to produce each plot. Automatically. I don't want to maintain these manually, and periodically discover that the code doesn't make the plot I claim it does or worse: that the code barfs. This is vaguely what Jupyter notebooks do, but they're ridiculous, so I'm doing something better:

  • The documentation page is a plain-text org-mode file. org is a magical Emacs mode; if you've never heard of it, stop everything and go check it out
  • This org file has a sequence of org-babel snippets. org-babel is a way to include snippets from various languages into org documents. Here I'm using Python, obviously
  • I have org evaluate each snippet to produce the resulting figure, as an .svg
  • I commit this .org file and the .svg plots, and push them to the git repo

That's it. The git repo is hosted by github, which has a rudimentary renderer for .org documents. I'm committing the .svg files, so that's enough to get rendered documentation that looks nice. Note that the usual workflow is to use org to export to html, but here I'm outsourcing that job to github; I just make the .svg files, and that's enough.

Look at the link again: gnuplotlib tutorial/gallery. This is just a .org file committed to the git repo. github is doing its normal org->html thing to display this file. This has drawbacks too: github is ignoring the :noexport: tag on the init section at the end of the file, so it's actually showing all the emacs lisp goop that makes this work (described below!). It's at the end, so I guess this is good-enough.

Those of us that use org-mode would be completely unsurprised to hear that the talk is also written as .org document. And the slides that show gnuplotlib plots use the same org-babel system to render the plots.

It's all oh-so-nice. As with anything as flexible as org-babel, it's easy to get into a situation where you're bending it to serve a not-quite-intended purpose. But since this all lives in emacs, you can make it do whatever you want with a bit of emacs lisp.

I ended up advising a few things (mailing list post here). And I stumbled on an (arguable) bug in emacs that needed working around (mailing list post here). I'll summarize both here.

Handling large Local Variables blocks

The advice I ended up with turned out longer than emacs expected, which made emacs not evaluate it when loading the buffer. As I discovered (see the mailing list post), the loading code looks for the string Local Variables in the last 3000 bytes of the buffer only, and I exceeded that. Stefan Monnier suggested a workaround in this post. Instead of the normal Local Variables block at the end:

Local Variables:
eval: (progn ... ...
             ... ...
             LONG chunk of emacs-lisp
             )
End:

I do this:

(progn ;;local-config
   lisp lisp lisp
   as long as I want
   )

Local Variables:
eval: (progn (re-search-backward "^(progn ;;local-config") (eval (read (current-buffer))))
End:

So emacs sees a small chunk of code that searches backwards through the buffer (as far back as needed) for the real lisp to evaluate. As an aside, this blog is also an .org document, and the lisp snippets above are org-babel blocks that I'm not evaluating. The exporter knows to respect the emacs-lisp syntax highlighting, however.


OK, so what was all the stuff I needed to tell org-babel to do specially here?

First off, org needed to be able to communicate to the Python session the name of the file to write the plot to. I do this by making the whole plist for this org-babel snippet available to python:

;; THIS advice makes all the org-babel parameters available to python in the
;; _org_babel_params dict. I care about _org_babel_params['_file'] specifically,
;; but everything is available
(defun dima-org-babel-python-var-to-python (var)
  "Convert an elisp value to a python variable.
Like the original, but supports (a . b) cells and symbols"
  (if (listp var)
      (if (listp (cdr var))
          (concat "[" (mapconcat #'org-babel-python-var-to-python var ", ") "]")
        (format "\"\"\"%s\"\"\"" var))
    (if (symbolp var)
        (format "\"\"\"%s\"\"\"" var)
      (if (eq var 'hline)
          org-babel-python-hline-to
        (format
         (if (and (stringp var) (string-match "[\n\r]" var)) "\"\"%S\"\"" "%S")
         (if (stringp var) (substring-no-properties var) var))))))
(defun dima-alist-to-python-dict (alist)
  "Generates a string defining a python dict from the given alist"
  (let ((keyvalue-list
         (mapcar (lambda (x)
                   (format "%s = %s, "
                           (replace-regexp-in-string
                            "[^a-zA-Z0-9_]" "_"
                            (symbol-name (car x)))
                           (dima-org-babel-python-var-to-python (cdr x))))
                 alist)))
    (concat
     "dict( "
     (apply 'concat keyvalue-list)
     ")")))
(defun dima-org-babel-python-pass-all-params (f params)
  (cons
   (concat
    "_org_babel_params = "
    (dima-alist-to-python-dict params))
   (funcall f params)))
(advice-add
 #'org-babel-variable-assignments:python
 :around #'dima-org-babel-python-pass-all-params)
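On the Python side, a snippet can then pull its parameters out of that dict. A hypothetical sketch of the key munging the advice performs (any character not legal in a Python identifier becomes an underscore, so :file arrives as _file), with the injected dict faked up:

```python
import re

def to_python_name(key):
    """Mirror of the elisp key munging: non-identifier characters -> '_'."""
    return re.sub(r"[^a-zA-Z0-9_]", "_", key)

# In a real snippet this dict is injected by the advice; we fake it here.
_org_babel_params = {to_python_name(":file"): "guide-1.svg"}
print(_org_babel_params["_file"])  # guide-1.svg
```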

So if there's a :file plist key, the python code can grab that, and write the plot to that filename. But I don't really want to specify an output file for every single org-babel snippet. All I really care about is that each plot gets a unique filename. So I omit the :file key entirely, and use this advice to generate one for me:

;; This sets a default :file tag, set to a unique filename. I want each demo to
;; produce an image, but I don't care what it is called. I omit the :file tag
;; completely, and this advice takes care of it
(defun dima-org-babel-python-unique-plot-filename
    (f &optional arg info params)
  (funcall f arg info
           (cons (cons ':file
                       (format "guide-%d.svg"
                               (condition-case nil
                                   (setq dima-unique-plot-number (1+ dima-unique-plot-number))
                                 (error (setq dima-unique-plot-number 0)))))
                 params)))
(advice-add #'org-babel-execute-src-block
            :around #'dima-org-babel-python-unique-plot-filename)

This uses the dima-unique-plot-number integer to keep track of each plot. I increment this with each plot. Getting closer. It isn't strictly required, but it'd be nice if each plot had the same output filename each time I generated it. So I want to reset the plot number to 0 each time:

;; If I'm regenerating ALL the plots, I start counting the plots from 0
(defun dima-reset-unique-plot-number
    (&rest args)
  (setq dima-unique-plot-number 0))
(advice-add #'org-babel-execute-buffer
            :after #'dima-reset-unique-plot-number)

Finally, I want to lie to the user a little bit. The code I'm actually executing writes each plot to an .svg. But the code I'd like the user to see should use the default output: an interactive, graphical window. I do that by tweaking the python session to tell the gnuplotlib object to write to .svg files from org by default, instead of using the graphical terminal:

;; I'm using github to display, so I'm not using the "normal" org
;; exporter. I want the demo text to not contain the hardcopy= tags, but clearly
;; I need the hardcopy tag when generating the plots. I add some python to
;; override gnuplotlib.plot() to add the hardcopy tag somewhere where the reader
;; won't see it. But where to put this python override code? If I put it into an
;; org-babel block, it will be rendered, and the :export tags will be ignored,
;; since github doesn't respect those (probably). So I put the extra stuff into
;; an advice. Whew.
(defun dima-org-babel-python-set-demo-output (f body params)
  (with-temp-buffer
    (insert body)
    (goto-char (point-min))
    (when (search-forward "import gnuplotlib as gp" nil t)
      (end-of-line)
      (insert
       "\n"
       "if not hasattr(gp.gnuplotlib, 'orig_init'):\n"
       "    gp.gnuplotlib.orig_init = gp.gnuplotlib.__init__\n"
       "gp.gnuplotlib.__init__ = lambda self, *args, **kwargs: gp.gnuplotlib.orig_init(self, *args, hardcopy=_org_babel_params['_file'] if 'file' in _org_babel_params['_result_params'] else None, **kwargs)\n"))
    (setq body (buffer-substring-no-properties (point-min) (point-max))))
  (funcall f body params))
(advice-add #'org-babel-expand-body:python
            :around #'dima-org-babel-python-set-demo-output)

And that's it. The advice in the talk is slightly different, in uninteresting ways. Some of this should be upstreamed to org-babel somehow. Not entirely clear which part, but I'll cross that bridge when I get to it.

23 March, 2020 12:50AM by Dima Kogan

March 22, 2020

Enrico Zini

Notable people

Lotte Reiniger. The Unsung Heroine of Early Animation
Lotte Reiniger pioneered early animation, yet her name remains largely unknown. We pay homage to her life and work, and reflect on why she never received the recognition she deserves.
Stephen Wolfram shares what he learned in researching Ada Lovelace's life, writings about the Analytical Engine, and computation of Bernoulli numbers.
Elizabeth Cochran Seaman (May 5, 1864 – January 27, 1922), better known by her pen name Nellie Bly, was an American journalist who was widely known for her record-breaking trip around the world in 72 days, in emulation of Jules Verne's fictional character Phileas Fogg, and an exposé in which she worked undercover to report on a mental institution from within. She was a pioneer in her field, and launched a new kind of investigative journalism. Bly was also a writer, inventor, and industrialist.
Delia Ann Derbyshire (5 May 1937 – 3 July 2001) was an English musician and composer of electronic music. She carried out pioneering work with the BBC Radiophonic Workshop during the 1960s, including her electronic arrangement of the theme music to the British science-fiction television series Doctor Who. She has been referred to as "the unsung heroine of British electronic music," having influenced musicians including Aphex Twin, the Chemical Brothers and Paul Hartnoll of Orbital.
Charity Adams Earley (5 December 1918 – 13 January 2002) was the first African-American woman to be an officer in the Women's Army Auxiliary Corps (later WACS) and was the commanding officer of the first battalion of African-American women to serve overseas during World War II. Adams was the highest ranking African-American woman in the army by the completion of the war.

22 March, 2020 11:00PM

Sylvestre Ledru

Some clang rebuild results (8.0.1, 9.0.1 & 10rc2)

As part of the LLVM release cycle, I am continuing rebuilding the Debian archive with clang instead of gcc to evaluate potential regressions.

Processed results are available on the website: - Now includes some fancy graphs to show the evolution
Raw logs are published on github:

Since my last blog post on the subject (August 2017), Clang is more and more present in the tech ecosystem. It is now the compiler used to build Firefox and Chrome upstream binaries on all the supported architectures/operating systems. More architectures are supported, it has a new linker (lld), a new hybrid IR (MLIR), a lot of checkers in clang-tidy, cross-language linking with Rust, etc.


Now, about the Debian results: we rebuilt using 8.0.1, 9.0.1 and 10.0rc2. Results are pretty similar to what we had with previous versions: between 4 and 5% of packages fail when gcc is replaced by clang.

Some clang rebuild results (8.0.1, 9.0.1 & 10rc2)

Even if most software is still built with gcc, we can see that clang has a positive effect on code quality. With the many different kinds of errors and warnings found by clang over the years, we have noticed a steady decline in the number of errors. For example, the number of incorrect C/C++ main declarations has been decreasing year after year:

Some clang rebuild results (8.0.1, 9.0.1 & 10rc2)

Errors found  

The biggest offender is still the qmake change which doesn't allow the workaround we use (replacing /usr/bin/gcc by /usr/bin/clang) - about 250 errors. Most of these packages would probably compile fine with clang. More on the Qt bug tracker. The workaround proposed in the bug isn't applicable for us, as we use clang as a drop-in replacement for the compiler.

The second most common failure is differences in symbol generation. Compared to gcc, clang seems to omit some symbols (or add others). As a number of Debian packages check the list of symbols exported by a library (for ABI management), the build fails on purpose. For example, with libcec, the symbol _ZN10P8PLATFORM14CConditionImplD1Ev@Base 3.1.0 is no longer generated. I am not expecting this to be a big deal: the generated libraries probably work most of the time. More on C++ symbol management in Debian.
I reported this bug upstream a while back.
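In Debian this check is performed by dpkg-gensymbols against a debian/*.symbols file; stripped of the packaging machinery, it boils down to a set comparison. A minimal Python sketch (the symbol lists are illustrative; only the destructor symbol is taken from the libcec example above):

```python
# Compare the symbols exported by a gcc build and a clang build of a library.
# In Debian, a symbol that is expected (listed in the symbols file) but no
# longer generated fails the build on purpose.
gcc_symbols = {
    "_ZN10P8PLATFORM14CConditionImplD1Ev",  # destructor emitted by gcc
    "_ZN10P8PLATFORM14CConditionImplC1Ev",  # hypothetical constructor symbol
}
clang_symbols = {
    "_ZN10P8PLATFORM14CConditionImplC1Ev",
}

missing = sorted(gcc_symbols - clang_symbols)  # expected but not generated
extra = sorted(clang_symbols - gcc_symbols)    # generated but not expected

if missing or extra:
    print("symbol mismatch:", missing, extra)
```

In practice the two sets would come from `nm -D --defined-only` on each build of the shared library.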

Current status  

As previously said in a blog post, I don't think there is a strong incentive to move away from gcc for most of the Linux distributions. The big reason for the BSDs was the license (even if the move to the Apache 2 license wasn't received positively by some of them).
While the LLVM/clang ecosystem clearly won the tooling battle, as a C/C++ compiler, gcc is still an excellent compiler which supports more architectures and more languages.
In terms of new warnings and checks, since the clang community moved its efforts into clang-tidy (which requires more complex tooling), gcc provides a better out-of-the-box experience (see, for example, the Firefox meta bug to build with -Werror using the default warnings with gcc 9, gcc 10 and clang trunk).

Next steps  

I see some potential next steps to decrease the number of failures:

  • Work around the Qt/qmake issue
  • Fix the Objective-C header include issues (echo "#include <objc/objc.h>" > foo.m && clang -c foo.m currently fails)
  • Identify why clang generates more/fewer symbols than gcc in libraries and try to fix that
  • Rebuild the archive with clang-7 (it seems I have some data problem)

Many thanks to Lucas Nussbaum for the rebuilds.

22 March, 2020 10:31PM by sylvestre


hackergotchi for Emmanuel Kasper

Emmanuel Kasper

Big Iron UNIX emulated on ARM

I have a DEC VAX workstation somewhere in the basement, but in the end it was more fun to run an emulated VAX-11/780 (the size of two refrigerators) on a BeagleBone Black (the size of a big matchbox). For this I used the Dockerfiles available in this git repo using the simh emulator, tweaked a bit for ARM.
I recorded the boot sequence with the very nice asciinema, also available in the Debian archive, so here is 4.3 BSD, in all its 1986 glory.

22 March, 2020 04:52PM by Emmanuel Kasper

hackergotchi for Junichi Uekawa

Junichi Uekawa

Tokyo Debian Monthly Meeting.

Tokyo Debian Monthly Meeting. This month's event was held online, and the topic was online conferencing tools. I gave a talk over Google Hangouts on using OBS and nginx (rtmp). However, we had 15 people, exceeding the limit of 10, so we tried to switch to different infrastructure mid-talk. I set up OBS broadcasting for a livestream, but eventually we switched to an experimental Jitsi instance. Our instance worked reasonably well.

22 March, 2020 09:33AM by Junichi Uekawa

March 20, 2020

Molly de Blanc

Seven hundred words on Internet access

I wrote this a few months ago, and never published it. Here you go.

In the summer of 2017, I biked from Boston, MA to Montreal, QC. I rode across Massachusetts, then up the New York/Vermont border, weaving between the two states over two days. I spent the night in Washington County, NY at a bed and breakfast that generously fed me dinner even though they weren’t supposed to. One of the proprietors told me about his history as a physics teacher, and talked about volunteer work he was doing. He somewhat casually mentioned that in his town there isn’t really internet access.

At the time (at least) Washington County wasn’t served by broadband companies. Instead, for $80 a month you could purchase a limited data package from a mobile phone company, and use that. A limited data package means limited access. This could mean no or limited internet in schools or libraries.

This was not the first time I heard about failings of Internet penetration in the United States. When I first moved to Boston I was an intern at One Laptop Per Child. I spoke with someone interested in bringing internet access to their rural community in Maine. They had hope for mesh networks, linking computers together into a web of connectivity, bouncing signals from one machine to another in order to bring internet to everyone.

Access to the Internet is a necessity. As I write this, 2020 is only weeks away, which brings our decennial, nationwide census. There had been discussions of making the census entirely online, but it was settled that people could fill it out “online, by telephone, or via mail” and that households can “answer the questions on the internet or by phone in English and 12 Non-English languages.” [1][2]

This is important because a comprehensive census is important. A census provides, if nothing else, population and demographics information, which is used to assist in the disbursement of government funding and grants to geographic communities. Apportionment, or the redistribution of the 435 seats occupied by members of the House of Representatives, is done based on the population of a given state: more people, more seats.

Researchers, students, and curious people use census data to carry out their work. Non-profits and activist organizations can better understand the populations they serve.

As things like the Census increasingly move online, the availability of access becomes increasingly important.

Some things are only available online – including job applications, customer service assistance, and even education opportunities like courses, academic resources, and applications for grants, scholarships, and admissions.

The Internet is also a necessary point of connection between people, and necessary for building our identities. Being acknowledged with their correct names and pronouns decreases the risk of depression and suicide among trans youths – and one assumes adults as well. [3] Online spaces provide acknowledgment and recognition that is not being met in physical spaces and geographic communities.

Internet access has been important to me in my own mental health struggles and understanding. My bipolar exhibits itself through long, crushing periods of depression during which I can do little more than wait for it to be over. I fill these quiet spaces by listening to podcasts and talking with my friends using apps like Signal to manage our communications.

My story of continuous recovery includes a particularly gnarly episode of bulimia in 2015. I was only able to really acknowledge that I had a problem with food and purging, using both as opportunities to inflict violence onto myself, when reading Tumblr posts by people with eating disorders. This made it possible for me to talk about my purging with my therapist, my psychiatrist, and my doctor, and to modify my treatment plan so I could start getting the help I need.

All of these things are made possible by having reliable, fast access to the Internet. We can respond to our needs immediately, regardless of where we are. We can find or build the communities we need, and serve the ones we already live in, whether they’re physical or exist purely as digital.

[1]: Accessed 29.11.2019
[2]: Accessed 29.11.2019
[3]: Accessed 29.11.2019

20 March, 2020 04:08PM by mollydb

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Today (March 20th 2020) is the day to buy music on Bandcamp

Hey folks,

This is a quick blog post to tell you Bandcamp is waiving all their fees on March 20th 2020 (PST). Spread the word, as every penny spent on the platform that day will go back to the artists.

COVID-19 is throwing us all a mean curveball and artists have it especially rough, particularly those who were in the middle of tours and had to cancel them.

If you like Metal, Angry Metal Guy posted a very nice list of artists you might know and want to help out.

If you are lucky enough to have a little coin around, now is the time to show support for the artists you like. Buy an album you liked and copied from a friend or get some merch to wear to your next virtual beer night with your (remote) friends!

Stay safe and don't forget to wash your hands regularly.

20 March, 2020 04:00AM by Louis-Philippe Véronneau

March 19, 2020

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RProtoBuf 0.4.16: Now with JSON

A new release 0.4.16 of RProtoBuf is now on CRAN. RProtoBuf provides R with bindings for the Google Protocol Buffers (“ProtoBuf”) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol.

This release contains a PR contributed by Siddhartha Bagaria which adds JSON support for messages, which had been an open wishlist item. I also appeased a clang deprecation warning that had come up on one of the CRAN test machines.

Changes in RProtoBuf version 0.4.16 (2020-03-19)

  • Added support for parsing and printing JSON (Siddhartha Bagaria in #68 closing wishlist #61).

  • Switched ByteSize() to ByteSizeLong() to appease clang (Dirk).

CRANberries provides the usual diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the ‘quick’ overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 March, 2020 03:29PM

John Goerzen

COVID-19 is serious for all ages. Treat it like WWII

Today I’d like to post a few updates about COVID-19 which I have gathered from credible sources, as well as some advice also gathered from credible sources.


  1. Coronavirus causes health impacts requiring hospitalization in a significant percentage of all adult age groups.
  2. Coronavirus can also cause no symptoms at all in many people, especially children.
  3. Be serious about social distancing.

COVID-19 is serious for young adults too

According to this report based on a CDC analysis, between 14% and 20% of people aged 20 to 44 require hospitalization due to COVID-19. That’s enough to be taken seriously. See also this CNN story.

Act as if you are a carrier because you may be infected and not even know it, even children

Information on this is somewhat preliminary, but it is certainly known that a certain set of cases is asymptomatic. This article discusses manifestations in children, while this summary of a summary (note: not original research) suggests that 17.9% of people may not even know they are infected.

How serious is this? Serious.

This excellent article by Daniel W. Johnson, MD, is a very good read. Among the points it makes:

  • Anyone that says it’s no big deal is wrong.
  • If we treat this like WWI or WWII and everyone does the right things, we will be harmed but OK. If many but not all people do the right things, we’ll be like Italy. If we blow it off, our health care system and life as we know it will be crippled.
  • If we don’t seriously work to flatten the curve, many lives will be needlessly lost


I’m going to just copy Dr. Johnson’s advice here:

  1. You and your kids should stay home. This includes not going to church, not going to the gym, not going anywhere.
  2. Do not travel for enjoyment until this is done. Do not travel for work unless your work truly requires it.
  3. Avoid groups of people. Not just crowds, groups. Just be around your immediate family. I think kids should just play with siblings at this point – no play dates, etc.
  4. When you must leave your home (to get groceries, to go to work), maintain a distance of six feet from people. REALLY stay away from people with a cough or who look sick.
  5. When you do get groceries, etc., buy twice as much as you normally do so that you can go to the store half as often. Use hand sanitizer immediately after your transaction, and immediately after you unload the groceries.

I’m not saying people should not go to work. Just don’t leave the house for anything unnecessary, and if you can work from home, do it.

Everyone on this email, besides Mom and Dad, are at low risk for severe disease if/when they contract COVID-19. While this is great, that is not the main point. When young, well people fail to do social distancing and hygiene, they pick up the virus and transmit it to older people who are at higher risk for critical illness or death. So everyone needs to stay home. Even young people.

Tell every person over 60, and every person with significant medical conditions, to avoid being around people. Please do not have your kids visit their grandparents if you can avoid it. FaceTime them.

Our nation is the strongest one in the world. We have been through other extreme challenges and succeeded many times before. We WILL return to normal life. Please take these measures now to flatten the curve, so that we can avoid catastrophe.

I’d also add that many supermarkets offer delivery or pickup options that allow you to get your groceries without entering the store. Some are also offering to let older people shop an hour before the store opens to the general public. These could help you minimize your exposure.

Other helpful links

Here is a Reddit megathread with state-specific unemployment resources.

Scammers are already trying to prey on people. Here are some important tips to avoid being a victim.

Although there are varying opinions, some are recommending avoiding ibuprofen when treating COVID-19.

Bill Gates had some useful advice. Here’s a summary emphasizing the need for good testing.

19 March, 2020 03:09PM by John Goerzen

March 18, 2020

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppCCTZ 0.2.7

A new release 0.2.7 of RcppCCTZ is now at CRAN.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times), and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now at least three others do, using copies in their packages, which remains less than ideal.

This version adds internal extensions, contributed by Leonardo, which support upcoming changes to the nanotime package we are working on.

Changes in version 0.2.7 (2020-03-18)

  • Added function _RcppCCTZ_convertToCivilSecond, which converts a time point to the number of seconds since the epoch, and _RcppCCTZ_convertToTimePoint, which converts a number of seconds since the epoch into a time point; these functions are only callable from the C level (Leonardo in #34 and #35).

  • Added function _RcppCCTZ_getOffset, which returns the offset at a specified time point for a specified timezone; this function is only callable from the C level (Leonardo in #32).

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

18 March, 2020 09:53PM

hackergotchi for Mike Gabriel

Mike Gabriel


Today's address to the public by the German chancellor, which I am totally chiming in with here. Please, all across the world, help to #FlattenTheCurve:

light+love & sei gesund!

18 March, 2020 08:13PM by sunweaver

hackergotchi for Norbert Preining

Norbert Preining

Fixing mate-terminal URL highlighting

One of the recent updates in Debian swept in changes so that mate-terminal couldn't highlight URLs anymore (Debian bug report, upstream bug report). I got so fed up with this that I fixed it and sent a pull request. Updated Debian packages for amd64 Debian/sid are in my usual repo at


18 March, 2020 01:48AM by Norbert Preining

Antoine Beaupré

How can I trust this git repository?

Join me in the rabbit hole of git repository verification, and how we could improve it.

Problem statement

As part of my work on automating install procedures at Tor, I ended up doing things like:

git clone REPO

... something eerily similar to the infamous curl pipe bash method which I often decry. As a short-term workaround, I relied on the SHA-1 checksum of the repository to make sure I have the right code, by running this both on a "trusted" (ie. "local") repository and the remote, then visually comparing the output:

$ git show-ref master
9f9a9d70dd1f1e84dec69a12ebc536c1f05aed1c refs/heads/master

One problem with this approach is that SHA-1 is now considered as flawed as MD5, so it can't be used as an authentication mechanism anymore. It is also fundamentally difficult for humans to compare hashes.
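A small script removes the human factor from that visual comparison. This sketch just compares the two hex strings using a constant-time comparison (both values are copied from the show-ref output above; in practice the remote side would come from something like git ls-remote):

```python
# Compare a local and a "remote" ref checksum programmatically instead of
# by eye; hmac.compare_digest avoids subtle comparison mistakes and timing
# leaks, and works on ASCII strings like hex digests.
import hmac

local_ref = "9f9a9d70dd1f1e84dec69a12ebc536c1f05aed1c"
remote_ref = "9f9a9d70dd1f1e84dec69a12ebc536c1f05aed1c"

match = hmac.compare_digest(local_ref, remote_ref)
print("refs match" if match else "refs differ")
```

Of course, this only automates the comparison; it does nothing about the underlying weakness of SHA-1 itself.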

The other flaw with comparing local and remote checksums is that we assume we trust the local repository. But how can I trust that repository? I can either:

  1. audit all the code present and all the changes done to it after

  2. or trust someone else to do so

The first option here is not practical in most cases. In this specific use case, I have audited the source code -- I'm the author, even -- what I need is to transfer that code over to another server.

(Note that I am replacing those procedures with Fabric, which makes this use case moot for now as the trust path narrows to "trust the SSH server" which I already had anyways. But it's still important for my fellow Tor developers who worry about trusting the git server, especially now that we're moving to GitLab.)

But anyways, in most cases, I do need to trust some other fellow developer I collaborate with. To do this, I would need to trust the entire chain between me and them:

  1. the git client
  2. the operating system
  3. the hardware
  4. the network (HTTPS and the CA cartel, specifically)
  5. then the hosting provider (and that hardware/software stack)
  6. and then backwards all the way back to that other person's computer

I want to shorten that chain as much as possible, make it "peer to peer", so to speak. Concretely, it would eliminate the hosting provider and the network, as attackers.

OpenPGP verification

My first reaction is (perhaps perversely) to "use OpenPGP" for this. I figured that if I sign every commit, then I can just check the latest commit and see if the signature is good.

The first problem here is that this is surprisingly hard. Let's pick some arbitrary commit I did recently:

commit b3c538898b0ed4e31da27fc9ca22cb55e1de0000
Author: Antoine Beaupré <>
Date:   Mon Mar 16 14:37:28 2020 -0400

    fix test autoloading

    pytest only looks for file names matching `test` by default. We inline
    tests inside the source code directly, so hijack that.

diff --git a/fabric_tpa/pytest.ini b/fabric_tpa/pytest.ini
new file mode 100644
index 0000000..71004ea
--- /dev/null
+++ b/fabric_tpa/pytest.ini
@@ -0,0 +1,3 @@
+# we inline tests directly in the source code
+python_files = *.py

That's the output of git log -p in my local repository. I signed that commit, yet git log is not telling me anything special. To check the signature, I need something special: --show-signature, which looks like this:

commit b3c538898b0ed4e31da27fc9ca22cb55e1de0000
gpg: Signature faite le lun 16 mar 2020 14:37:53 EDT
gpg:                avec la clef RSA 7B164204D096723B019635AB3EA1DDDDB261D97B
gpg: Bonne signature de « Antoine Beaupré <> » [ultime]
gpg:                 alias « Antoine Beaupré <> » [ultime]
gpg:                 alias « Antoine Beaupré <> » [ultime]
gpg:                 alias « Antoine Beaupré <> » [ultime]
gpg:                 alias « Antoine Beaupré <> » [ultime]
Author: Antoine Beaupré <>
Date:   Mon Mar 16 14:37:28 2020 -0400

    fix test autoloading

    pytest only looks for file names matching `test` by default. We inline
    tests inside the source code directly, so hijack that.

Can you tell if this is a valid signature? If you speak a little French, maybe you can! But even if you did, you are unlikely to see that output on your own computer. What you would see instead is:

commit b3c538898b0ed4e31da27fc9ca22cb55e1de0000
gpg: Signature made Mon Mar 16 14:37:53 2020 EDT
gpg:                using RSA key 7B164204D096723B019635AB3EA1DDDDB261D97B
gpg: Can't check signature: No public key
Author: Antoine Beaupré <>
Date:   Mon Mar 16 14:37:28 2020 -0400

    fix test autoloading

    pytest only looks for file names matching `test` by default. We inline
    tests inside the source code directly, so hijack that.

Important part: Can't check signature: No public key. No public key. Because of course you would see that. Why would you have my key lying around, unless you're me. Or, to put it another way, why would that server I'm installing from scratch have a copy of my OpenPGP certificate? Because I'm a Debian developer, my key is actually part of the 800 keys in the debian-keyring package, signed by the APT repositories. So I have a trust path.

But that won't work for someone who is not a Debian developer. It will also stop working when my key expires in that repository, as it already has on Debian buster (current stable). So I can't assume I have a trust path there either. That said, one could work with a trusted keyring, like we do in the Tor and Debian projects, and only work inside that project.

But I still feel uncomfortable with those commands. Both git log and git show will happily succeed (return code 0 in the shell) even though the signature verification failed on the commits. Same with git pull and git merge, which will happily move your branch ahead even if the remote has unsigned or badly signed commits.

To actually verify commits (or tags), you need the git verify-commit (or git verify-tag) command, which seems to do the right thing:

$ LANG=C.UTF-8 git verify-commit b3c538898b0ed4e31da27fc9ca22cb55e1de0000
gpg: Signature made Mon Mar 16 14:37:53 2020 EDT
gpg:                using RSA key 7B164204D096723B019635AB3EA1DDDDB261D97B
gpg: Can't check signature: No public key

At least it fails with some error code (1, above). But it's not flexible: I can't use it to verify that a "trusted" developer (say one that is in a trusted keyring) signed a given commit. Also, it is not clear what a failure means. Is a signature by an expired certificate okay? What if the key is signed by some random key in my personal keyring? Why should that be trusted?

Worrying about git and GnuPG

In general, I'm worried about git's implementation of OpenPGP signatures. There have been numerous cases of interoperability problems with GnuPG specifically that led to security issues, like EFAIL or SigSpoof. It would be surprising if such a vulnerability did not exist in git.

Even if git did everything "just right" (which I have myself found impossible to do when writing code that talks with GnuPG), what does it actually verify? The commit's SHA-1 checksum? The tree's checksum? The entire archive as a zip file? I would bet it signs the commit's SHA-1 sum, but I just don't know off the top of my head, and neither git-commit nor git-verify-commit documents exactly what is happening.

I had an interesting conversation with a fellow Debian developer (dkg) about this and we had to admit those limitations:

<anarcat> i'd like to integrate pgp signing into tor's coding practices more, but so far, my approach has been "sign commits" and the verify step was "TBD"

<dkg> that's the main reason i've been reluctant to sign git commits. i haven't heard anyone offer a better subsequent step. if torproject could outline something useful, then i'd be less averse to the practice.

i'm also pretty sad that git remains stuck on sha1, esp. given the recent demonstrations. all the fancy strong signatures you can make in git won't matter if the underlying git repo gets changed out from under the signature due to sha1's weakness

In other words, even if git implements the arcane GnuPG dialect just so, and would allow us to setup the trust chain just right, and would give us meaningful and workable error messages, it still would fail because it's still stuck in SHA-1. There is work underway to fix that, but in February 2020, Jonathan Corbet described that work as being in a "relatively unstable state", which is hardly something I would like to trust to verify code.

Also, when you clone a fresh new repository, you might get an entirely different repository, with a different root and set of commits. The concept of "validity" of a commit is, in itself, hard to establish in this case, because a hostile server could put you backwards in time, on a different branch, or even in an entirely different repository. Git will warn you about a different repository root with warning: no common commits, but that's easy to miss. And complete branch switches, rebases and resets from upstream are hardly more noticeable: only a tiny plus sign (+) instead of a star (*) will tell you that a reset happened, along with a warning (forced update) on the same line. Miss those and your git history can be compromised.
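Under the hood, that forced-update marker is just an ancestry test: an update is a fast-forward only if the previously known tip is an ancestor of the new tip. A toy sketch over an invented commit graph (hashes made up for illustration):

```python
# Toy commit DAG, mapping each commit to its parent. A fetch is a
# fast-forward only when the old tip is an ancestor of the new tip;
# otherwise history was rewritten (rebase/reset) and git prints the
# '+ ... (forced update)' marker instead of '*'.
parents = {
    "c3": "c2", "c2": "c1", "c1": None,   # original branch: c1 <- c2 <- c3
    "r2": "r1", "r1": "c1",               # rebased branch sharing only c1
}

def is_ancestor(old, new):
    """Walk from `new` back to the root, looking for `old`."""
    while new is not None:
        if new == old:
            return True
        new = parents[new]
    return False

print(is_ancestor("c2", "c3"))  # fast-forward: old tip is an ancestor
print(is_ancestor("c2", "r2"))  # forced update: history was rewritten
```

This is the same check `git merge-base --is-ancestor old new` performs; the point is how small (and how easy to overlook) the signal is.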

Possible ways forward

I don't consider the current implementation of OpenPGP signatures in git to be sufficient. Maybe, eventually, it will mature away from SHA-1 and the interface will be more reasonable, but I don't see that happening in the short term. So what do we do?

git evtag

The git-evtag extension is a replacement for git tag -s. It's not designed to sign commits (it only verifies tags) but at least it uses a stronger algorithm (SHA-512) to checksum the tree, and will include everything in that tree, including blobs. If that sounds expensive to you, don't worry too much: it takes about 5 seconds to tag the Linux kernel, according to the author.

Unfortunately, that checksum is then signed with GnuPG, in a manner similar to git itself, in that it exposes GnuPG output (which can be confusing) and is likely similarly vulnerable to mis-implementation of the GnuPG dialect as git itself. It also does not allow you to specify a keyring to verify against, so you need to trust GnuPG to make sense of the garbage that lives in your personal keyring (and, trust me, it doesn't).

And besides, git-evtag is fundamentally the same as signed git tags: checksum everything and sign with GnuPG. The difference is it uses SHA-512 instead of SHA-1, but that's something git will eventually fix itself anyways.

kernel patch attestations

The kernel also faces this problem. Linus Torvalds signs the releases with GnuPG, but patches fly all over mailing list without any form of verification apart from clear-text email. So Konstantin Ryabitsev has proposed a new protocol to sign git patches which uses SHA256 to checksum the patch metadata, commit message and the patch itself, and then sign that with GnuPG.

It's unclear to me what this solves, if anything at all. As dkg argues, it would seem better to add OpenPGP support to git-send-email and teach git tools (e.g. git-am) to recognize it, at least if you're going to keep using OpenPGP anyways.

And furthermore, it doesn't resolve the problems associated with verifying a full archive either, as it only attests "patches".


jcat

Unhappy with the current state of affairs, the author of fwupd (Richard Hughes) wrote his own protocol as well, called jcat, which provides signed "catalog files" similar to the ones provided in Microsoft Windows.

It consists of "gzip-compressed JSON catalog files, which can be used to store GPG, PKCS-7 and SHA-256 checksums for each file". So yes, it is yet another wrapper around GnuPG, probably with all the flaws detailed above, on top of being a niche implementation, disconnected from git.

The Update Framework

One more thing dkg correctly identified is:

<dkg> anarcat: even if you could do exactly what you describe, there are still some interesting wrinkles that i think would be problems for you.

the big one: "git repo's latest commits" is a loophole big enough to drive a truck through. if your adversary controls that repo, then they get to decide which commits to include in the repo. (since every git repo is a view into the same git repo, just some have more commits than others)

In other words, unless you have a repository that has frequent commits (either because of activity or by a bot generating fake commits), you have to rely on the central server to decide what "the latest version" is. These are the kinds of problems that binary package distribution systems like APT and TUF solve correctly. Unfortunately, those don't apply to source code distribution, at least not in git form: TUF only deals with "repositories" and binary packages, and APT only deals with binary packages and source tarballs.
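The freshness problem TUF addresses can be illustrated with a toy "timestamp role": a short-lived signed statement of the latest version, which a withheld or stale copy fails to satisfy (purely illustrative, using HMAC as a stand-in for real signatures; not the actual TUF wire format):

```python
import hashlib, hmac, json

KEY = b"timestamp-role-key"  # stand-in for a real signing key

def sign_timestamp(latest_commit: str, now: float, ttl: int = 86400) -> dict:
    """Sign a statement of the latest commit, valid for `ttl` seconds."""
    payload = {"latest": latest_commit, "expires": now + ttl}
    blob = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "sig": hmac.new(KEY, blob, hashlib.sha256).hexdigest()}

def verify_timestamp(stmt: dict, now: float) -> bool:
    """Accept only statements that are both authentic and fresh."""
    blob = json.dumps(stmt["payload"], sort_keys=True).encode()
    good_sig = hmac.compare_digest(
        stmt["sig"], hmac.new(KEY, blob, hashlib.sha256).hexdigest())
    fresh = now < stmt["payload"]["expires"]
    return good_sig and fresh

stmt = sign_timestamp("abc123", now=1000.0)
assert verify_timestamp(stmt, now=2000.0)               # fresh and authentic
assert not verify_timestamp(stmt, now=1000.0 + 90000)   # expired: possible rollback
```

Because the statement expires, a mirror that silently stops serving new commits is detected once the last timestamp runs out — something a plain signed git tag cannot provide.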

That said, there's actually no reason why git could not support the TUF specification. Maybe TUF could be the solution to ensure end-to-end cryptographic integrity of the source code itself. OpenPGP-signed tarballs are nice, and signed git tags can be useful, but from my experience, a lot of OpenPGP (or, more accurately, GnuPG) derived tools are brittle and do not offer clear guarantees, and definitely not to the level that TUF tries to address.

This would require changes on the git servers and clients, but I think it would be worth it.

Other Projects


There are other tools trying to do parts of what GnuPG is doing, for example minisign and OpenBSD's signify. But they do not integrate with git at all right now. Although I did find a hack to use signify with git, it's kind of gross...


Unsurprisingly, this is a problem everyone is trying to solve. Golang is planning on hosting a notary which would leverage a "certificate-transparency-style tamper-proof log" run by Google (see the spec for details). But that doesn't resolve the "evil server" attack, if we treat Google as an adversary (and we should).


Python had OpenPGP signing going for a while on PyPI, but it's unclear if it ever did anything at all. Now the plan seems to be to use TUF, but my hunch is that the complexity of the specification is keeping that from moving ahead.


Docker and the container ecosystem has, in theory, moved to TUF in the form of Notary, "a project that allows anyone to have trust over arbitrary collections of data". In practice however, in my somewhat limited experience, setting up TUF and image verification in Docker is far from trivial.

Android and iOS

Even in what is possibly one of the strongest models (at least in terms of user friendliness), mobile phones are surprisingly unclear about those kinds of questions. I had to ask if Android had end-to-end authentication and I am still not clear on the answer. I have no idea what iOS does.


One of the core problems with everything here is the common usability aspect of cryptography, and specifically the usability of verification procedures. We have become pretty good at encryption. The harder part (and a requirement for proper encryption) is verification. It seems that problem still remains unsolved, in terms of usability. Even Signal, widely considered to be a success in terms of adoption and usability, doesn't properly solve that problem, as users regularly ignore "The security number has changed" warnings...

So, even though they deserve a lot of credit in other areas, it seems unlikely that hardcore C hackers (e.g. git and kernel developers) will be able to resolve that problem without at least a little bit of help. And since TUF seems like the state-of-the-art specification in this area, it would seem wise to start adopting it in the git community as well.

Update: git 2.26 introduced a new gpg.minTrustLevel to "tell various signature verification codepaths the required minimum trust level", presumably to control how Git will treat keys in your keyrings, assuming the "trust database" is valid and up to date. For an interesting narrative of how "normal" (without PGP) git verification can fail, see also A Git Horror Story: Repository Integrity With Signed Commits.

18 March, 2020 01:16AM

March 17, 2020

hackergotchi for Mike Gabriel

Mike Gabriel

Time for home office! Time for X2Go?

Most of us IT people should be in home office by now. If not, make sure to arrange that with your employers, cooperation partners, contractors, etc. Please help flatten the curve.

X2Go as your Home Office solution

If your computer at work runs a GNU/Linux desktop and you can SSH into it, then it might be time for you to try out X2Go [1], a remote desktop solution for GNU/Linux.

Free Support for simple Client-Server Setups

If your daily work is related to health care, municipal work, medical research, etc. (all those fields that are currently working under very high demands), please join the #x2go IRC channel on Freenode [2] and I'll do my very best to help you with setting up X2Go.

Professional Support for Large Scale Setups

If you run a business and need X2Go support site-wide, brokerage support, etc. please consider asking for professional support [3].


17 March, 2020 09:31PM by sunweaver

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp 1.0.4: Lots of goodies

rcpp logo

The fourth maintenance release 1.0.4 of Rcpp, following up on the 10th anniversary and the 1.0.0 release sixteen months ago, arrived on CRAN this morning. This follows a few days of gestation at CRAN. To help during the wait we provided this release via drat last Friday. And it followed a pre-release via drat a week earlier. But now that the release is official, Windows and macOS binaries will be built by CRAN over the next few days. The corresponding Debian package will be uploaded as a source package shortly after which binaries can be built.

As with the previous releases Rcpp 1.0.1, Rcpp 1.0.2 and Rcpp 1.0.3, we have the predictable and expected four-month gap between releases which seems appropriate given both the changes still being made (see below) and the relative stability of Rcpp. It still takes work to release this as we run multiple extensive sets of reverse dependency checks, so maybe one day we will switch to a six-month cycle. For now, four months still seems like a good pace.

Rcpp has become the most popular way of enhancing R with C or C++ code. As of today, 1873 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 191 in BioConductor. And per the (partial) logs of CRAN downloads, we are running steady at one million downloads per month.

This release features quite a number of different pull requests by seven different contributors as detailed below. One (personal) highlight is the switch to tinytest.

Changes in Rcpp version 1.0.4 (2020-03-13)

  • Changes in Rcpp API:

    • Safer Rcpp_list*, Rcpp_lang* and Function.operator() (Romain in #1014, #1015).

    • A number of #nocov markers were added (Dirk in #1036, #1042 and #1044).

    • Finalizer calls clear external pointer first (Kirill Müller and Dirk in #1038).

    • Scalar operations with a rhs matrix no longer change the matrix value (Qiang in #1040 fixing (again) #365).

    • Rcpp::exception and Rcpp::stop are now more thread-safe (Joshua Pritikin in #1043).

  • Changes in Rcpp Attributes:

    • The cppFunction helper now deals correctly with multiple depends arguments (TJ McKinley in #1016 fixing #1017).

    • Invisible return objects are now supported via new option (Kun Ren in #1025 fixing #1024).

    • Unavailable packages referred to in LinkingTo are now reported (Dirk in #1027 fixing #1026).

    • The sourceCpp function can now create a debug DLL on Windows (Dirk in #1037 fixing #1035).

  • Changes in Rcpp Documentation:

    • The .github/ directory now has more explicit guidance on contributing, issues, and pull requests (Dirk).

    • The Rcpp Attributes vignette describes the new invisible return object option (Kun Ren in #1025).

    • Vignettes are now included as pre-made pdf files (Dirk in #1029).

    • The Rcpp FAQ has a new entry on the recommended importFrom directive (Dirk in #1031 fixing #1030).

    • The bib file for the vignette was once again updated to current package versions (Dirk).

  • Changes in Rcpp Deployment:

    • Added unit test to check if C++ version remains aligned with the package number (Dirk in #1022 fixing #1021).

    • The unit test system was switched to tinytest (Dirk in #1028, #1032, #1033).

Please note that the change to exceptions and Rcpp::stop() in pr #1043 has been seen to have a minor side effect on macOS (issue #1046) which has already been fixed by Kevin in pr #1047, for which I may prepare a release for the Rcpp drat repo in a day or two.

Thanks to CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow which also allows searching among the (currently) 2356 previous questions.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

17 March, 2020 12:00PM

hackergotchi for Norbert Preining

Norbert Preining

Brave broken (on Debian?)

Update a few hours later: an update came in, version 1.7.62, which fixed this misbehavior!!!!

I have been using the Brave browser now for many months without any problem, in particular I use the -dev version, which is rather on the edge of development. Unfortunately, either due to Linux kernel changes (I am using the latest stable kernel, at the moment 5.5.9) or due to Brave changes, the browser has become completely unusable: CPU consistently spikes to 100% and more, and the dev browser tries to upload crash reports every 5 seconds.

The problem is shown in the kernel log as

traps: brave[36513] trap invalid opcode ip:55d4e0cb1895 sp:7ffc4b08ff00 error:0 in brave[55d4dfe01000+78e5000]

I have checked older versions, as well as beta versions, all of them exhibit the same problem. I have disabled all extensions, without any change in behavior. I also have reported this problem in the Brave community forum, without any answer. This means, for now I have to switch back to Firefox, which works smoothly.

17 March, 2020 12:53AM by Norbert Preining

March 16, 2020

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

WFH tip: Xpra

Norway is going into lockdown over Covid-19. Everybody is encouraged to WFH if possible, all organized sports (indoors and outdoors) are prohibited, schools and universities are closed…

Like many others, I live in quasi-quarantine. I organize my life around new routines, which give life structure: Work. Sleep. Run. Talk to friends and family online. I wonder, what will become of those who don't have these knobs on which to hang their safety nets? What happens when you're actually sick and have nothing to do except watch a steady stream of news, from which you can pick the worst available at any time?

But what I wanted to share was a tip I was given recently: If you want to work from home, and text-only SSH won't do (or rdesktop to Windows machines, of course), there are better solutions than X forwarding. VNC is the classic version (TigerVNC appears to be the most mature fork?). I don't think NX really is active anymore. But Xpra seems to have the most momentum.

Xpra ticks most of the boxes: You can run it over SSH. It has good Windows and macOS clients. It has various forms of compression, applied automatically (everything from “none” to H.264). And supposedly, you can run xpra --replace to hook into the compositor and take over the remote session without any screenscraping.

It's not perfect; I've had various bugs. But I've sent the tip on to a number of coworkers who seemed very happy about it, so now I'm sharing it with you. Happy WFH-ing, and remember to wash your hands!

16 March, 2020 09:02PM

hackergotchi for Bits from Debian

Bits from Debian

Official communication channels for Debian

From time to time, we get questions in Debian about our official channels of communication and about whether similarly named websites are officially connected to Debian.

The main Debian website is our primary medium of communication. Those seeking information about current events and development progress in the community may be interested in the Debian News section of the Debian website. For less formal announcements, we have the official Debian blog Bits from Debian, and the Debian micronews service for shorter news items.

Our official newsletter Debian Project News and all official announcements of news or project changes are dual posted on our website and sent to our official mailing lists debian-announce or debian-news. Posting to those mailing lists is restricted.

We also want to take the opportunity to announce how the Debian Project, or Debian for short, is structured.

Debian has a structure regulated by our Constitution. Officers and delegated members are listed on our Organizational Structure page. Additional teams are listed on our Teams page.

The complete list of official Debian members can be found on our New Members page, where our membership is managed. A broader list of Debian contributors can be found on our Contributors page.

If you have questions, we invite you to reach the press team at

16 March, 2020 01:00PM by Laura Arjona Reina, Ana Guerrero Lopez and Donald Norwood

Matthias Klumpp

Maintain release info easily in MetaInfo/Appdata files

This article isn’t about anything “new”, like the previous ones on AppStream – it rather exists to shine the spotlight on a feature I feel is underutilized. From conversations it appears that the reason simply is that people don’t know that it exists, and of course that’s a pretty bad reason not to make your life easier 😉

Mini-Disclaimer: I’ll be talking about appstreamcli, part of AppStream, in this blogpost exclusively. The appstream-util tool from the appstream-glib project has a similar functionality – check out its help text and look for appdata-to-news if you are interested in using it instead.

What is this about?

AppStream permits software to add release information to their MetaInfo files to describe current and upcoming releases. This feature has the following advantages:

  • Distribution-agnostic format for release descriptions
  • Provides versioning information for bundling systems (Flatpak, AppImage, …)
  • Release texts are short and end-user-centric, not technical as the ones provided by distributors usually are
  • Release texts are fully translatable using the normal localization workflow for MetaInfo files
  • Releases can link artifacts (built binaries, source code, …) and have additional machine-readable metadata e.g. one can tag a release as a development release

The disadvantage of all this is that humans have to maintain the release information. Also, people need to write XML for this. Of course, once humans are involved with any technology, things get a lot more complicated. That doesn't mean we can't make things easier for people to use though.

Did you know that you don’t actually have to edit the XML in order to update your release information? To make creating and maintaining release information as easy as possible, the appstreamcli utility has a few helpers built in. And the best thing is that appstreamcli, being part of AppStream, is available pretty ubiquitously on Linux distributions.

Update release information from NEWS data

The NEWS file is a not very well-defined text file that lists "user-visible changes worth mentioning" for each version. This maps pretty well to what AppStream release information should contain, so let's generate that from a NEWS file!

Since the NEWS format is not defined, but we need to parse it somehow, the range of variants appstreamcli can parse is very limited. We support a format in this style:

Version 0.2.0
Released: 2020-03-14

 * Important thing 1
 * Important thing 2

 * New/changed feature 1
 * New/changed feature 2 (Author Name)
 * ...

 * Bugfix 1
 * Bugfix 2
 * ...

Version 0.1.0
Released: 2020-01-10

 * ...

When parsing a file like this, appstreamcli will tolerate a lot of errors/"imperfections" and account for quite a few style and string variations. You will need to check whether this format works for you. You can see it in use in AppStream itself and in libxmlb for a slightly different style.
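To make the mapping concrete, here is a deliberately rigid toy parser for the "Version / Released: / bullet list" style shown above (a sketch only; appstreamcli's real parser is far more forgiving):

```python
import re

def parse_news(text: str):
    """Parse the simple NEWS style shown above into a list of
    {"version", "date", "entries"} dicts. Toy parser: much stricter
    than what appstreamcli itself accepts."""
    releases = []
    current = None
    for line in text.splitlines():
        if m := re.match(r"Version\s+(\S+)", line):
            current = {"version": m.group(1), "date": None, "entries": []}
            releases.append(current)
        elif (m := re.match(r"Released:\s+(\S+)", line)) and current:
            current["date"] = m.group(1)
        elif (m := re.match(r"\s*\*\s+(.*)", line)) and current:
            current["entries"].append(m.group(1))
    return releases

news = """Version 0.2.0
Released: 2020-03-14

 * The CPU no longer overheats when you hold down spacebar

Version 0.1.0
Released: 2020-01-10

 * Now plays a "zap" sound on every character input
"""
rels = parse_news(news)
assert rels[0]["version"] == "0.2.0" and rels[0]["date"] == "2020-03-14"
assert len(rels) == 2
```

Everything that doesn't match one of the three line shapes is simply ignored, which is roughly why a loosely defined format like NEWS can still be machine-processed at all.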

So, how do you convert this? We first create our NEWS file, e.g. with this content:

Version 0.2.0
Released: 2020-03-14

 * The CPU no longer overheats when you hold down spacebar

Version 0.1.0
Released: 2020-01-10

 * Now plays a "zap" sound on every character input

For the MetaInfo file, we of course generate one using the MetaInfo Creator. Then we can run the following command to get a preview of the generated file: appstreamcli news-to-metainfo ./NEWS ./org.example.myapp.metainfo.xml - (note the single dash at the end – this is the explicit way of telling appstreamcli to print the result to stdout). This is what the result looks like:

<?xml version="1.0" encoding="utf-8"?>
<component type="desktop-application">
  <releases>
    <release type="stable" version="0.2.0" date="2020-03-14T00:00:00Z">
      <description>
        <p>This release fixes the following bug:</p>
        <ul>
          <li>The CPU no longer overheats when you hold down spacebar</li>
        </ul>
      </description>
    </release>
    <release type="stable" version="0.1.0" date="2020-01-10T00:00:00Z">
      <description>
        <p>This release adds the following features:</p>
        <ul>
          <li>Now plays a "zap" sound on every character input</li>
        </ul>
      </description>
    </release>
  </releases>
</component>

Neat! If we want to save this to a file instead, we just exchange the dash with a filename. And maybe we don’t want to add all releases of the past decade to the final XML? No problem too, just pass the --limit flag as well: appstreamcli news-to-metainfo --limit=6 ./NEWS ./org.example.myapp.metainfo.tmpl.xml ./result/org.example.myapp.metainfo.xml

That’s nice on its own, but we really don’t want to do this by hand… The best way to ensure the MetaInfo file is updated, is to simply run this command at build time to generate the final MetaInfo file. For the Meson build system you can achieve this with a code snippet like below (but for CMake this shouldn’t be an issue either – you could even make a nice macro for it there):

ascli_exe = find_program('appstreamcli')
metainfo_with_relinfo = custom_target('gen-metainfo-rel',
    input : ['./NEWS', 'org.example.myapp.metainfo.xml'],
    output : ['org.example.myapp.metainfo.xml'],
    command : [ascli_exe, 'news-to-metainfo', '--limit=6', '@INPUT0@', '@INPUT1@', '@OUTPUT@']
)

In order to also translate releases, you will need to add this to your .pot file generation workflow, so (x)gettext can run on the MetaInfo file with translations merged in.

Release information from YAML files

Since parsing a "no structure, somewhat human-readable file" is hard without baking an AI into appstreamcli, there is also a second option available: generate the XML from a YAML file. YAML is easy to write for humans, but can also be parsed by machines. The YAML structure used here is specific to AppStream, but somewhat maps to the NEWS file contents as well as MetaInfo file data. That makes it more versatile, but in order to use it, you will need to opt into using YAML for writing news entries. If that's okay for you to consider, read on!

A YAML release file has this structure:

---
Version: 0.2.0
Date: 2020-03-14
Type: development
Description:
- The CPU no longer overheats when you hold down spacebar
- Fixed bugs ABC and DEF
---
Version: 0.1.0
Date: 2020-01-10
Description: |-
  This is our first release!

  Now plays a "zap" sound on every character input

As you can see, the release date has to be an ISO 8601 string, just as it is for NEWS files. Unlike in NEWS files, releases can be explicitly marked as stable or development by specifying a Type field. If no Type field is present, stable is implicitly assumed. Each release has a description, which can either be a free-form multi-paragraph text, or a list of entries.

Converting the YAML example from above is as easy as using the exact same command that was used before for plain NEWS files: appstreamcli news-to-metainfo --limit=6 ./NEWS.yml ./org.example.myapp.metainfo.tmpl.xml ./result/org.example.myapp.metainfo.xml. If appstreamcli fails to autodetect the format, you can help it by specifying it explicitly via the --format=yaml flag. This command would produce the following result:

<?xml version="1.0" encoding="utf-8"?>
<component type="console-application">
  <releases>
    <release type="development" version="0.2.0" date="2020-03-14T00:00:00Z">
      <description>
        <ul>
          <li>The CPU no longer overheats when you hold down spacebar</li>
          <li>Fixed bugs ABC and DEF</li>
        </ul>
      </description>
    </release>
    <release type="stable" version="0.1.0" date="2020-01-10T00:00:00Z">
      <description>
        <p>This is our first release!</p>
        <p>Now plays a "zap" sound on every character input</p>
      </description>
    </release>
  </releases>
</component>

Note that the 0.2.0 release is now marked as a development release, something which was not possible in the plain text NEWS file before.
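The per-release mapping appstreamcli performs here can be mimicked in a few lines with xml.etree (a toy sketch of the conversion under the field names shown above, not the real implementation):

```python
import xml.etree.ElementTree as ET

def release_to_xml(version, date, entries, rel_type="stable"):
    """Build a MetaInfo <release> element from YAML-style fields.
    Toy sketch: real appstreamcli handles free-form descriptions,
    translations, artifacts and much more."""
    rel = ET.Element("release", {
        "type": rel_type,
        "version": version,
        "date": f"{date}T00:00:00Z",
    })
    desc = ET.SubElement(rel, "description")
    ul = ET.SubElement(desc, "ul")
    for entry in entries:
        ET.SubElement(ul, "li").text = entry
    return rel

xml = ET.tostring(release_to_xml(
    "0.2.0", "2020-03-14",
    ["The CPU no longer overheats when you hold down spacebar"],
    rel_type="development"), encoding="unicode")
assert 'type="development"' in xml and "<li>" in xml
```

Nothing more than a structural transformation is happening: the YAML keys become XML attributes and the list entries become `<li>` elements, which is why the two formats round-trip so cleanly.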

Going the other way

Maybe you like writing XML, or have some other tool that generates the MetaInfo XML, or you have received your release information from some other source and want to convert it into text. AppStream also has a tool for that! Using appstreamcli metainfo-to-news <metainfo-file> <news-file> you can convert a MetaInfo file that has release entries into a text representation. If you don’t want appstreamcli to autodetect the right format, you can specify it via the --format=<text|yaml> switch.

Future considerations

The release handling is still not something I am entirely happy with. For example, the release information has to be written and translated at release time of the application. For some projects, this workflow isn’t practical. That’s why issue #240 exists in AppStream which basically requests an option to have release notes split out to a separate, remote location (and also translations, but that’s unlikely to happen). Having remote release information is something that will highly likely happen in some way, but implementing this will be a quite disruptive, if not breaking change. That is why I am holding this change back for the AppStream 1.0 release.

In the meanwhile, besides improving the XML form of release information, I also hope to support a few more NEWS text styles if they can be autodetected. The format of the systemd project may be a good candidate. The YAML release-notes format variant will also receive a few enhancements, e.g. for specifying a release URL. For all of these things, I very much welcome pull requests or issue reports. I can implement and maintain the things I use myself best, so if I don’t use something or don’t know about a feature many people want I won’t suddenly implement it or start to add features at random because “they may be useful”. That would be a recipe for disaster. This is why for these features in particular contributions from people who are using them in their own projects or want their new usecase represented are very welcome.

16 March, 2020 11:40AM by Matthias

hackergotchi for Ulrike Uhlig

Ulrike Uhlig

Deconstructing the term «control freak»

Control freaks. You may have called people like this. Or you may have been called one yourself. Maybe you got angry. Or, on the contrary, you felt like being a control freak is a feature, because who would notice all these little details that are not exactly perfect if not you? This post is an attempt to deconstruct the term.

Etymological considerations

Control - from Latin contra rotulus - refers to the copy of an account register as a means of verification of the original register. So control is about keeping track, not making mistakes, i.e. it's about doing a perfect calculation. The meaning of control in our society is linked to authority, or to policing — like in crowd control. What English calls an inspector is a Kontrolleur in German or contrôleur in French. The word is also linked to manufacturing, as in quality control. We also talk about being in control, which is linked to the desire of being autonomous, to be able to act on one's own account. Being in control can also designate a need to preserve one's own integrity: if I have integrity (in the sense of feeling whole), I can self-determine, which is the most fundamental requirement for having one's own identity.

Freak - refers to someone who does not fit the norm. Calling someone a freak is per se problematic, because it blames a person for being atypical, abnormal. Freak is a word that, while pointing a finger at difference, at the same time denies people the right to be different and diverse. People can reappropriate such words to make it clear that not fitting the norm is a wanted expression of their diversity — we can see that if we follow the history of the word queer for an example.

Types of control freaking

To me, being a control freak is mostly a feature, if it's about controlling my own life and my own effectiveness. Control freaking becomes troublesome when it's about controlling other people's effectiveness. And it becomes highly problematic when it's about controlling other people's lives.

Controlling one's own life

Having control over one's own life is a basic human need. In particular people who are part of minorities, people who face discrimination and oppression every day have an increased need for self-determination, and the need to see their existence and their identity acknowledged and accepted by the rest of the world. Too often do we experience that other people (generally the ones with privilege, or not facing the same oppression) try to own the narrative over our lives (1)(2). Calling this control freakish is missing the point, and is probably a sign of looking at it from a perspective of privilege.

Controlling one's own effectiveness

People produce things. People write code, make ceramics, or write texts, for example. People have a need to control their own effectiveness: they set up routines, product tests, documentation. This type of control is a feature, and can help to release better software, better texts, or perfectly burnt clay pots — ultimately controlling one's own effectiveness helps to learn from one's mistakes and to improve routines over time.

The grey zone

The grey zone describes the zone in between wanting to control one's own effectiveness and wanting to control other people's effectiveness. Here we could situate the type of control freak who does not delegate tasks to others for fear of seeing them done differently than they had expected. This can be harmful particularly to the person who does not delegate: they can get overworked or burnt out. This type of control freakism might be linked to perfectionism.

Controlling other people's effectiveness

When we work with other people, our own effectiveness might get in the way of other people's effectiveness, or vice versa. Indeed, it happens - not only in work contexts - that people find themselves in setups in which mutual responsibilities and autonomies conflict with one another because one person is dependent on the other for making decisions or moving things forward as they see fit. (There is generally a relation of dependency between two conflicting parties that is worth looking at (3).) Add to this the fact that, when delegating a task, some people have a hard time also delegating the responsibility and autonomy needed to resolve the task. They lack trust that another person can also do the work, or want that person to do the work exactly in the same way they would do it. (In some cases this can be related to Founder's Syndrome and can result in organizations staying stuck with one or a small group of founders holding knowledge and power, and preventing the organization from growing. Page 11 in the booklet "Working with conflict in our groups" describes how such an informal hierarchy can come into being in grassroot groups.)

The (perfectly valid) need behind this type of control freaking could be to make sure that a group of people builds a successful product, releases a fact-checked documentary, or creates a publication without mistakes. But controlling other people's effectiveness as a strategy to satisfy this need can create a non-cooperative climate in which people do not meet each other on eye level, but are dependent on each other, experience a lack of autonomy or a breach of boundaries, or sometimes feel that authority is being overexerted.

Acknowledging the need to build a good product, it is possible to create the appropriate strategies to guarantee that the involved people can meet each other on eye level: for example by clearly defining and documenting role-responsibility-accountability along with appropriate decision making processes, or by distributing leadership (← you should totally click on that link!), by looking at inclusive leadership models, by learning from past mistakes, by instating feedback cycles, by making boundaries between the direction of an organization and the day-to-day work clear.

An organization I work with has the rule that people can make decisions for themselves if the decision only affects their work, while decisions that affect a team should be made with the team, and decisions that affect the organization as a whole need to be made at the organizational level. They call that a no-brainer, but in organizations with traditional hierarchies, or in grown grassroot environments that have never clearly defined and assigned responsibilities and accountabilities (aka "functional roles") this is not so obvious at all.

Controlling someone else's life

This type of control freak does not only desire to control their own life but for a reason or another wants to know and control what other people do, think (especially about the control freak), decide, or how they live. In some cases, this type of control freak might even want to force upon others things they should do as a way to be accepted by the control freak. It's what we call narcissism, harassment, abuse. It is unacceptable.


In summary, the distinctions I came up with in this post describe the boundaries along which control freakism takes hold of someone else's effectiveness or life — and ultimately prevents them from self-determining. In German we have the word übergriffig which describes that someone is infringing on someone else's boundaries — they are over-grabbing: seizing, grasping, taking hold of.

Which type of control freak are you, if any?

(1) Like when a West-Berliner in a round of 15 East Germans arrogantly talks about the time when the wall came down and tells the story that he could not go shopping between Thursdays and Sundays because the East Germans bought too many products in the supermarkets — while this event marked an unimaginable rift in the biographies of 17 million East Germans, 15 of whom are sitting right in front of him. Thirty years after 1989, many of us are finally starting to question this publicly (← links in German language).

(2) Women regularly experience something similar; see Men Explain Things to Me, a book by Rebecca Solnit.

(3) By the way, rather than being a matter of having "recruited the wrong person", conflict may intrinsically arise as part of certain work relationships, simply due to the inter-dependencies of roles or workers, as in a delivery chain.

16 March, 2020 10:30AM by ulrike

March 15, 2020

Enrico Zini

Antoine Beaupré

Remote presence tools for social distancing

As a technologist, I've been wondering how I can help people with the rapidly spreading coronavirus pandemic. With the world entering the "exponential stage" (e.g. Canada, the USA and basically all of Europe), everyone should take precautions, limit contact, and practice Social Distancing (and not dumbfuckery). But this doesn't mean we should dig ourselves in a hole in our basement: we can still talk to each other on the internet, and there are great, and free, tools available to do this. As part of my work as a sysadmin, I've had to answer questions about this a few times and I figured it was useful to share this more publicly.

Just say hi using whatever

First off, feel free to use whatever tools you normally use: Signal, Facetime, Skype, Zoom, and Discord can be fine to connect with your folks, and since it doesn't take much to make someone's day, please do use those tools to call your close ones and say "hi". People, especially older folks, will feel alone and maybe scared in these crazy times. Every little bit helps, even if it's just a normal phone call, an impromptu fanfare, a remote workout class, or just a sing-along from your balcony; anything goes.

But if those tools don't work well for some reason, or you want to try something new, or someone doesn't have an iPad, or it's too dang cold to go on your balcony, you should know there are other alternatives that you can use.


Jitsi

We've been suggesting our folks use a tool called "Jitsi". Jitsi is a free software platform for hosting audio/video conferences. It has a web app, which means anyone with a web browser can join a session. It can also do "screen sharing" if you need to work together on a project.

There are many "instances", but here's a subset I know about:

You can connect to those with your web browser directly. If your web browser doesn't work, try switching to another (e.g. if Firefox doesn't work, try Chrome, and vice versa). There are also desktop and mobile apps (F-Droid, Google Play, Apple Store) that will work better than just using your browser.

Jitsi should scale for small meetings up to a dozen people.
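One reason Jitsi is so easy to share with non-technical folks is that a room is just a URL path on an instance: anyone who has the link can join from their browser, no account needed. As a sketch (the helper function and the "family-call" prefix are my own invention, not part of Jitsi; meet.jit.si is a real public instance), you could generate a hard-to-guess room link like this:

```python
import secrets

def jitsi_room_url(instance="https://meet.jit.si", prefix="family-call"):
    """Build a Jitsi meeting link with a random, hard-to-guess room name.

    Jitsi rooms are created on first join, so handing out this URL is
    all the "setup" a meeting needs. The random suffix avoids strangers
    stumbling into a room with a guessable name like /family-call.
    """
    token = secrets.token_urlsafe(8)
    return f"{instance}/{prefix}-{token}"

print(jitsi_room_url())
```

Send the printed link to your participants and everyone lands in the same room.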


Mumble

But beyond that, you might have trouble doing a full video-conference with a lot of people anyway. If you need a large conference, or if you have bandwidth and reliability problems with Jitsi, you can also try Mumble.

Mumble is an audio-only conferencing service, similar to Discord or Teamspeak, but made with free software. It requires users to install an app, but there are clients for every platform out there (F-Droid, Google Play, Apple Store). Mumble is harder to set up, but much more efficient in terms of bandwidth and latency. In other words, it will just scale and sound better.

Mumble ships with a list of known servers, but you can also connect to these trusted ones:

  • Mayfirst, hosted in New York City (see also their instructions on how to use it)
  • Riseup, an autonomous collective, hosted in Seattle; not a public service (ask me if you need their password)
  • systemli, a left-wing network and technics-collective, hosted in Berlin

Live streaming

If for some reason those tools still don't scale, you might have a bigger problem on your hands. If your audience is over 100 people, you will not be able to all join in the same conference together. And besides, maybe you just want to broadcast some news and do not need audio or video feedback from the audience. In this case, you need "live streaming".

Here, the best-known proprietary services are Twitch and YouTube. But the community also provides alternatives. This is more complicated to set up, but just to get you started, I'll link to:

For either of those tools, you need an app on your desktop. The Mayfirst instructions use OBS Studio for this, but it might be possible to hotwire VLC to stream video from your computer as well.

Text chat

When all else fails, text should go through. Slack, Twitter and Facebook are the best known alternatives here, obviously. I would warn against spending too much time on those, as they can foment harmful rumors and can spread bullshit like a virus on any given day. The situation does not make that any better. But it can be a good way to keep in touch with your loved ones.

But if you want to have a large meeting with a crazy number of people, text can actually accomplish wonders. Internet Relay Chat, also known as "IRC" (which oldies might have experienced for a bit as mIRC), is, incredibly, still alive at the venerable age of 30. It is mainly used by free software projects, but can be used by anyone. Here are some networks you can try:

Those are all web interfaces to the IRC networks, but there is also a plenitude of IRC apps you can install on your desktop if you want the full experience.
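Part of why IRC has survived 30 years is that the wire protocol is just lines of text, which is also why it scales to huge channels on modest bandwidth. A minimal sketch of the client side of that protocol (the function names are mine; the messages follow the classic RFC 1459 shapes, and the server name in the example is made up):

```python
# Sketch of the IRC client protocol: every message is one line of text.

def register(nick, realname="anonymous"):
    # The first lines a client sends after connecting to a server.
    return [f"NICK {nick}", f"USER {nick} 0 * :{realname}"]

def join(channel):
    # Channels are named with a leading '#'.
    return [f"JOIN {channel}"]

def privmsg(target, text):
    # The same command sends to a channel or to a single user.
    return [f"PRIVMSG {target} :{text}"]

def handle_line(line):
    # Servers periodically PING; a client must echo back PONG
    # with the same payload or it gets disconnected.
    if line.startswith("PING"):
        return "PONG" + line[len("PING"):]
    return None
```

Pipe those lines over a plain TCP (or TLS) socket and you have a working client; that simplicity is why clients exist for every platform imaginable.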

Whiteboards and screensharing

I decided to add this section later on because of the frequent "oh, but you forgot..." comments I got about this post.

  • Big Blue Button - seems to check all the boxes: free software, VoIP integration, whiteboarding and screen sharing, works from a web browser
  • CodiMD: collaborative text editor with UML and diagrams support
  • Excalidraw: (collaborative) whiteboard tool that lets you easily sketch diagrams that have a hand-drawn feel

I'll also mention that collaborative editors in general, like Etherpad, are great for taking minutes, because you don't end up with a single person stuck writing down what everyone says, too busy to talk. Google Docs and Nextcloud have similar functionality, of course.

Common recommendations

Regardless of the tools you pick, audio and video streaming is a technical challenge. A lot happens under the hood when you pick up your phone and dial a number, and when using a desktop computer instead, it can be difficult to get everything "just right".

Some advice:

  1. get a good microphone and headset: good audio really makes a difference in how pleasing the experience will be, both for you and your peers. good hardware will reduce echo, feedback and other audio problems. (see also my audio docs)

  2. check your audio/video setup before joining the meeting, ideally with another participant on the same platform you will use

  3. find a quiet place to meet: even a good microphone will pick up noise from the environment; if you reduce this up front, everything will sound better. if you do live streaming and want a high quality recording, consider setting up a smaller room for it. (tip: i heard of at least one journalist hiding in a closet full of clothes to make recordings, as it dampens the sound!)

  4. mute your microphone when you are not speaking (spacebar in Jitsi, follow the "audio wizard" in Mumble)

If you have questions or need help, feel free to ask! Comment on this blog or just drop me an email (see contact), I'd be happy to answer your questions.

Other ideas

Inevitably, when I write a post like this, someone writes something like "I can't believe you did not mention APL!" Here's a list of tools I have not mentioned here, deliberately or because I forgot:

  • Nextcloud Talk - needs access to a special server, but can be used for small meetings (less than 5, or so i heard)
  • Jabber/XMPP - yes, I know, XMPP can do everything and it's magic. but I've given up on it a while back, and I don't think setting up audio conferences with multiple people is easy enough to make the cut here
  • Signal - signal is great. i use it every day. it's the primary way I do long distance, international voice calls for free, and the only way I do video-conferencing with family and friends at all. but it's one-to-one only, and the group (text) chat kind of sucks

Also, all the tools I recommend above are made of free software, which means they can be self-hosted. If things go bad and all those services stop existing, it should be possible for you to run your own instance.

Let me know if I forgot anything, but in a friendly way. And stay safe out there.

Update: a similar article from the good folks at systemli also recommends Mastodon, Ticker, Wikis and Etherpad.

Update 2: same, at SFC, which also mentions Firefox Send and Etherpad (and now I wish I did).

15 March, 2020 04:38PM

Andreas Metzler

balance sheet snowboarding season 2019/20

Looking at the date I am posting this it will not come as a surprise that the season was not a strong one.

For pre-opening I again booked a carving fresh-up workshop with Pure Boarding in Pitztal (November 21 to November 24). Since the resort extends to more than 3400m of altitude, the weather can be harsh. This year we had strong southern winds and moved to Riffelsee for Saturday, which turned out to be a really nice resort.

Natural snow turned out to be a rare commodity this year; nevertheless I only rode at Diedamskopf, where they rely on natural snow almost exclusively. They again managed to prepare quality slopes with it. First day on snow was December 15, followed by a long pause until December 28. Having loads of unused vacation days, I did three- or four-day work weeks in January and the start of February whenever work (thanks, colleagues!), weather and conditions allowed. In hindsight this saved the season.

Since resorts have closed yesterday or will close today due to COVID-19 I am missing out on riding in Warth/Salober in spring.

While the number of days was low, the quality of riding was still high.

Health-wise, jumper's knee kept me from finally really improving my backside turn.

Well here is the balance-sheet, excluding pre-opening in Pitztal:

Season                     2019/20 2018/19 2017/18 2016/17 2015/16 2014/15 2013/14 2012/13 2011/12 2010/11 2009/10 2008/09 2007/08 2006/07 2005/06
number of (partial) days        21      33      29      30      17      24      30      23      25      30      30      37      29      17      25
Damüls                           0       2       8       4       4       9      29       4      10      23      16      10       5      10      10
Diedamskopf                     21      26      19      23      12      13       1      19      14       4      13      23      24       4      15
Warth/Schröcken                  0       5       2       3       1       2       0       0       1       3       1       4       0       3       0
total meters of altitude    160797  296308  266158  269819  138037  224909  274706  203562  228588  203918  202089  226774  219936   74096  124634
highscore (m)                 9375   10850   11116   12245   11015   13278   12848   13885   13076   10976   11888   11272   12108    8321   10247
# of runs                       374     701     616     634     354     530     597     468     516     449     462     551     503     189     309
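As a quick aside, the per-day averages can be pulled out of the balance sheet with a few lines of Python (the numbers are copied from the rows above, comparing only the last two seasons):

```python
# Vertical meters per (partial) day, 2019/20 vs. 2018/19,
# taken from the "number of (partial) days" and
# "total meters of altitude" rows of the balance sheet.
days   = {"2019/20": 21, "2018/19": 33}
meters = {"2019/20": 160797, "2018/19": 296308}

per_day = {season: round(meters[season] / days[season]) for season in days}
print(per_day)  # 2019/20: 7657 m/day vs. 2018/19: 8979 m/day
```

So despite the short season, the riding per day on snow held up reasonably well.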

15 March, 2020 12:18PM by Andreas Metzler

Dima Kogan

numpysane and broadcasting in C

Since the beginning, the numpysane library provided a broadcast_define() function to decorate existing Python routines to give them broadcasting awareness. This was very useful, but slow. I just did lots of typing, and now I have a flavor of this in C (the numpysane_pywrap module; new in numpysane<