April 21, 2019


Keith Packard


Snek and a Balloon


This represents the scale of projects we typically do in our Lego class. A motor, a couple of sensors and some simple logic. Here's the Snek program driving the balloon:

# Make the balloon go up and down

motor = (D9, D8)
top = A0
bottom = A1

def up():
    # run the motor until the top sensor is blocked
    # (the indented bodies here are a reconstruction)
    talkto(motor)
    setleft()
    on()
    while read(top) < .8:
        pass
    off()

def down():
    # run the motor the other way until the bottom sensor is blocked
    talkto(motor)
    setright()
    on()
    while read(bottom) < .8:
        pass
    off()

def play():
    # cycle the balloon up and down forever
    while True:
        up()
        down()

Light Sensors and Lights

I like to use light sensors with robotics and wanted to make some new ones sensitive to visible light. I found these Vishay TEPT5600 phototransistors, which respond to most of the visible spectrum. Using those with a 470kΩ resistor generated a good range of outputs under indoor lighting.

For a light source, I'm using Cree 5mm green LEDs with a 5V power supply and a 100Ω current limiting resistor; these draw about 20mA and generate a lot of light.
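The ~20mA figure follows from Ohm's law across the current-limiting resistor; here's a quick sanity check (the 3.2V forward drop is my assumption for a typical green LED, not a figure from the post):

```python
# Estimate LED current: (supply - forward drop) / resistance.
supply_v = 5.0    # 5V power supply
vf = 3.2          # assumed forward voltage of a green LED
r_ohms = 100.0    # current limiting resistor

current_ma = (supply_v - vf) / r_ohms * 1000
print(round(current_ma))  # ~18 mA, close to the 20mA quoted
```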

For both of these, I take a 1x4 brick and drill a 3/16" hole in one end and a 1/8" hole in the other. The 5mm device fits snugly in the larger hole and the wires thread out through the other hole. The 1/8W resistor is tucked inside the brick.

You can see both of these in action in the example — the white bricks contain LEDs and the red bricks contain light sensors. The path between the light and sensor is interrupted by the model, allowing the program to determine when the motion has completed.


This model uses the Circuit Cube motors that I discovered last month. There's one visible near the middle of the Ferris wheel and another one just behind the Duemilanove board driving the balloon mechanism.

These motors have a built-in gear reduction and so they offer low speed and high torque. For Lego modelers, that's a pretty useful combination as the older non-geared motors always involved a long gear-train or worm gear to provide a reasonable balance of speed and power. For the balloon, there is a string wrapping around an axle driven directly by one of these motors.

Simple Models with Simple Programs

This class uses Lego to build simple mechanical devices that are controlled with simple computer programs. The Snek environment is the latest in a long history of computer systems which started with Lego Logo on the Apple ][ over twenty years ago.

21 April, 2019 04:59AM

April 20, 2019


Jonathan Carter

Debian project leader elections 2019

A few weeks ago, after a few days of internal turmoil on the matter, I committed to running for DPL.

The original nomination period was the week before, with no one stepping up for the position and with our current leader indicating that he wouldn’t be available to serve another term. This resulted in a bit of a knee-jerk reaction and slight panic, with threads popping up on the Debian mailing lists pondering things like what a leaderless Debian would look like and whether we perhaps need more of a team than a single person to be the leader of the project. There was also some discussion about burnout and whether it’s even fair to put so much pressure on a single person, and whether it’s possible to delegate more of the DPL’s responsibilities. The press also picked up on this, and there were a few big stories on the matter that led some to be slightly nervous.

Of course (as the LWN article pointed out), Debian’s constitution is quite thorough and has made provision for cases like these. If no one steps up, the nomination period is extended for another week, and it only took one such extension to produce 5 new candidates (of which one retracted soon afterwards due to time commitments).

I mentioned internal turmoil at the beginning of this post. This was because, up until a few days before my self-nomination, I had been very confident, and consistently so for a very long time, that I never wanted to run for DPL. The work that I care about and spend most attention on doesn’t at all require me being DPL. Also, having more responsibility in areas that I’d rather let others take care of sounded a bit daunting. I’d much rather spend time on technical and more general community issues than on very specific interpersonal problems or administrative tasks like reading and approving budget proposals, sending out developer certificates, etc. On top of that, I was aware that running for DPL means opening myself to a very wide array of critique, that people might put everything I say under a microscope and try to tear it apart, and that running for DPL means being prepared for that.

Despite that turmoil, a small nagging part kept asking the question “But what if?” What if I were DPL, what would I do? What would I change? What would I do as DPL that would make Debian better, and better than I could as a normal Debian developer? These questions helped form in my head what my platform would look like, why I wanted to run for DPL, and how the rest of my campaign would shape up. This year is also unique for me compared to previous years in that I will actually have time over the next year to focus on DPL-like activities. That, combined with the plans that were shaping up that I’m very enthusiastic about, convinced me that it was time to step up and proceed with my self-nomination.

Directly after the nomination period, the campaign period starts. There are surprisingly few rules (or even guidance) regarding the campaign period; the majority of it consists of Debian developers (or anyone really, but mostly DDs) asking the DPL candidates about their stance and policy on certain matters, how they plan to take action, and often a few really tough hypothetical situations. Some questions even took advantage of the fact that April Fools’ Day falls in the campaign period, which led to some odd and interesting questions. Overall, I really enjoyed the campaign period. I was preparing myself for the worst case scenario before campaigning started, but what actually happened next astonished me. We had all kinds of Debian developers coming forward with high quality, productive discussions on all kinds of aspects, ranging from internal technical policies, how we work with upstreams, community matters, diversity, the DPL role itself, how Debian is changing and how to keep it relevant, community turnover, how we deal with money, and how we market ourselves.

Regardless of the outcome of this election, I’m very happy that I stepped up as a DPL candidate, and I’m very satisfied with how my campaign went and how I answered the questions put to me. I’m also very happy that the elections turned out so vibrant and productive and I dare say even exciting. I hope that this will continue to happen for future elections, because it’s clear to me that a productive election period is good for the health of Debian.

In the future, someone may be reading this and ponder “Should I run for DPL?”. My advice would be to first take stock and figure out if you’re at a place in your life where you can actually do that (Did you just start a new job? Would your employer support you? Did you recently get married or have kids? How’s your health? Can you afford to spend lots of unpaid time doing DPL work? etc…) and then also consider why you’d want to be DPL and what you’d like to achieve with such a role. If you weigh up all the aspects and it still feels right, then by all means go for it. In my opinion, even if you’re not elected you still help make Debian better by participating in the election process.

Voting closes in around 12 hours at the time of writing this. Good luck to all the candidates and thank you to everyone who participated in making this such a fantastic and surreal experience!

2019-04-21: Update: Congratulations to Sam Hartman who is our new DPL! We’ve been talking off-list throughout the election process and Sam knows he has my full support and we even plan to collaborate on certain areas, more news to follow in the near future.

20 April, 2019 12:00PM by jonathan


Michal Čihař

Weblate 3.6

Weblate 3.6 has been released today. It brings rewritten notifications, user data download and several other improvements. It also sets the deprecation timeline for Python 2 installations: after April 2020, Weblate will only support Python 3.

Full list of changes:

  • Add support for downloading user data.
  • Addons are now automatically triggered upon installation.
  • Improved instructions for resolving merge conflicts.
  • Cleanup addon is now compatible with app store metadata translations.
  • Configurable language code syntax when adding new translations.
  • Warn about using Python 2 with planned termination of support in April 2020.
  • Extract special chars from the source string for visual keyboard.
  • Extended contributor stats to reflect both source and target counts.
  • Admins and consistency addons can now add translations even if disabled for users.
  • Fixed description of toggle disabling Language-Team header manipulation.
  • Notify users mentioned in comments.
  • Removed file format autodetection from component setup.
  • Fixed generating MO file for monolingual PO files.
  • Added digest notifications.
  • Added support for muting component notifications.
  • Added notifications for new alerts, whiteboard messages or components.
  • Notifications for administered projects can now be configured.
  • Improved handling of three letter language codes.

If you are upgrading from an older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, and the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used on https://hosted.weblate.org/ as the official translation service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared, and you can influence this by expressing support for individual issues, either by comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

20 April, 2019 11:00AM


Keith Packard


CrickitSnek — snek on the Adafruit Crickit

I got a Crickit FeatherWing from Adafruit today. This board is supposed to act as an I/O expander for all of the Feather boards, but it's a completely operational SAMD21 machine with a pile of useful GPIO bits:

  • 4 “Capacitive Touch” pins (which are just regular GPIOs)
  • 8 Analog input/digital output pins
  • 4 Digital I/O pins
  • 4 High-current 5V digital output pins
  • 2 H-bridge motor controllers
  • 1 Audio amplifier
  • 1 high current output designed for NeoPixels

It's also got a USB port and an on-board NeoPixel, plus headers to plug in a Feather board.

There's no Crystal on the Crickit

To save cost, the Crickit design doesn't include any crystal at all. That required re-configuring the SAMD21 clock configuration to synchronize the 48MHz system clock from USB instead of from the 32.768kHz crystal present on the Metro and Feather boards. Once I had done this, the Crickit board appeared on USB and Snek was running.

Naming the Crickit pins

There are a bunch of separate I/O groupings on the Crickit board, and I wanted to make it easy to remember how to use them. Providing convenient names for each pin seemed like the best plan.

On the Metro and Duemilanove boards, I just used numbers for all of the pins; 0-13 for the digital pins and 14-19 for the analog pins. This matches the Arduino conventions, although it doesn't provide the convenient 0-5 numbering for analog input. Having names for these pins will also be nice.
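As a rough sketch of that numbering scheme (the name-to-number pairs below are illustrative assumptions, not taken from the Snek sources):

```python
# Arduino-style numbering as described above: digital pins 0-13,
# analog pins 14-19. Named builtins would let A0..A5 stand in
# for the raw numbers (hypothetical values shown).
NUM_DIGITAL = 14

analog_names = {}
for i in range(6):
    analog_names['A%d' % i] = NUM_DIGITAL + i  # A0 -> 14 ... A5 -> 19

print(analog_names['A0'], analog_names['A5'])  # 14 19
```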

So, I hacked up the Snek 'builtins' mechanism to include builtins with numeric values. Now you can have as many builtin values as you want. I first replaced the wacky lexer hacks for values like 'math.pi', 'True' and 'False' and then went and added names for the pins on all devices.

Snek vs TI DRV8833

The TI DRV8833 motor controller chip on the Crickit has two pins per motor -- you set one pin to ground and drive the other with a PWM signal; which pin you PWM selects the direction of the motor. This doesn't directly map to how I expected motor controllers to work in Snek. Snek expects to have one pin control direction and the other control speed.

I had to do a bit of magic to make this work, but the joy of having an interpreter between the application and the hardware is the ability to hide these kinds of details from the application.
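As an illustration of the kind of translation the interpreter can do (plain Python, not Snek's actual implementation): the requested direction decides which DRV8833 input carries the PWM duty while the other input is held at ground.

```python
def drv8833_inputs(forward, speed):
    """Map a direction/speed motor model onto the DRV8833's
    two-input scheme: ground one input, PWM the other.

    forward: True/False direction; speed: PWM duty 0.0-1.0."""
    if forward:
        return (speed, 0.0)  # PWM on IN1, IN2 grounded
    return (0.0, speed)      # IN1 grounded, PWM on IN2

print(drv8833_inputs(True, 0.5))   # (0.5, 0.0)
print(drv8833_inputs(False, 0.5))  # (0.0, 0.5)
```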

Minor Crickit Schematics Error

The Crickit Schematics linked from the Crickit Downloads page have the DRIVE labels flipped -- DRIVEIN1 is actually hooked to PB10, DRIVEIN2 to PB11, DRIVEIN3 to PA12 and DRIVEIN4 to PA13. The Eagle schematics on github are correct; it would be nice to have the images updated as those don't require downloading proprietary software to view.


All of these bits are in the Snek git repository and should be released in the next Snek version (0.97?).

20 April, 2019 07:45AM


Junichi Uekawa

Wanted to read up on how TLA+ works.

Wanted to read up on how TLA+ works. Then I noticed that Leslie's page has lots of videos, and now I'm watching him speak. It might be the way people do it but now I can't seem to focus on a video long enough...

20 April, 2019 05:56AM by Junichi Uekawa

April 19, 2019


Urvika Gola

Attending SREcon’19, Brooklyn

It was my first time attending the SRECON series and also a big step toward learning about Site Reliability Engineering. The conference had jam-packed sessions on site reliability, chaos engineering, code reviewing culture, incidents, SLOs and much more.

My notes would be redundant because there is nothing better than a comprehensive write-up about the conference by Tanya Reilly; you can read it here.

Thanks to the organizers for putting up such an overwhelming, knowledgeable and fun conference! 🙂

19 April, 2019 06:51PM by urvikagola

Shashank Kumar

Event report for DebUtsav Delhi 2019


The Debian India Community in Delhi along with Mozilla Delhi/NCR community organized DebUtsav Delhi 2019 on 9 and 10 March, 2019.

For those who are unaware, DebUtsav is an Indian-style version of a typical Mini Debian Conference.

This was the first Debian-related conference to be organized in the northern region of India. We have had Mini Debian conferences previously in Mumbai, Pune, Hyderabad and in different cities of the state of Kerala, but this was the very first one in the north.

DebUtsav was made possible with help from our sponsors.

The event took place at National Institute of Public Finance and Policy (NIPFP), Delhi.

The event schedule was divided into two tracks.

  • The Debian Track was specific to talks and workshops related to Debian.
  • The FOSS Track included talks and workshops ranging from general Free Software to gender diversity and digital security and privacy. The full schedule is present on the website DebUtsav.in

Debian Track

On the first day of DebUtsav


You can check all the talks that happened in the linked schedule. Some of the notable talks were Pirate Praveen on Debian Packaging in times of Docker and Flatpak, and Abhijit introducing us to the Debian LTS project, while Raju Devidas talked on "How to become a Debian Developer?" There were also workshops by Utkarsh Gupta and Sagar Ippalpalli on packaging of Ruby gems and Node modules.


FOSS Track

Ashutosh Singh, also known as Juggernaut, talked on getting started in Open Source and Debian. The other talks in the FOSS track included a talk on Cryptography & Cryptanalysis by Ishan Malik, while Mohit Phulera talked about setting up and using the Google Vision APIs. A thing to note here is that both were first-time speakers; DebUtsav got them started. We also had a Rust 101 workshop by Swarnim Arun. Karmanya talked about Building Data Apps with Grames. The last talk of the day in the FOSS track, by Himanshu, had a funky name to it: SELinux: For the Asgardians


Second Day of DebUtsav

Debian Track

The first talk of the day was an introduction to the Hamara Linux project by Shivani Bharadwaj and Raju Devidas. They introduced the Hamara Project to those present and talked about their upcoming release of Hamara, codenamed Svastik.

Hamara Svastik by rajudev

Introduction to Hamara Project by Shivani and rajudev

Later in the Debian track, we had the first ever Bug Squashing Party (BSP) in India. For the BSP we had 2 DDs, 2 DMs and many other active Debian contributors present. They helped people go through the Debian bug tracking system (BTS) and find bugs that they could help solve. Although we did not manage to get any bugs solved during the BSP, we did manage to get some people familiar with Debian's bug tracking ecosystem and introduced them to the various teams within Debian with which they can co-ordinate to get started solving bugs, as well as contributing to Debian in general. The BSP proved to be a good starting point for Utkarsh Gupta: within two to three weeks after the BSP he had already solved many bugs, including 15 RC bugs.

Sruthi co-ordinating the Bug Squashing Party (BSP)

FOSS track

The first talk in the FOSS track was on Digital Security and Privacy by Shashikanth.

Digital security and Privacy

In the second talk, our DM Sruthi Chandran raised the question of how gender diverse and inclusive the Free Software community is. She co-ordinated the discussion very well and made sure that the people present during the session got involved.

How gender diverse are free software communities?

Later in the day we had a talk by Pirate Praveen on Take back our data and Freedoms, and Vipul Gupta talked about digging opportunities in Open Source. The last talk scheduled for the FOSS track did not take place because of the unavailability of the speaker, Ranjith Raj Vasam; everyone got involved in the Debian BSP instead.

Talk by Vipul Gupta

On the sidelines of the second day's schedule, the podcast team of Decompiled was also present. With the efforts of the team, Raju Devidas interviewed Pirate Praveen. In the interview they talked about the work done by Praveen over the years as well as his journey in the Debian project.

decompiled interview


Some statistics from DebUtsav

  • For both days combined, around 120 people registered to attend DebUtsav, of whom around 95 attended the conference.
  • Around 10-15 unregistered attendees/on-spot registrations were also present.
  • We had a total of 14 Women Attendees.

DebUtsav Delhi was lucky to have the presence of:

  • 2 Debian Developers: Pirate Praveen & Abhijith PA.
  • 2 Debian Maintainers: Sruthi Chandran and Sagar Ippalpalli.

It is during the conversations that happened at the conference that we realized that there are a lot of first's happening in the Debian India Community.

  • Sruthi Chandran is the first woman Debian Maintainer from India, and soon to be the first woman Debian Developer from India.
  • The presence of DDs proved useful for Utkarsh Gupta, as he got his initial requirements for becoming a DM fulfilled with his key signed by the DDs present at DebUtsav. He is, at the time of writing this event report, the youngest DM from India.
  • Currently Sruthi Chandran and Raju Devidas have their DD applications in process. If they get approved, we will reach double digits of DDs from India for the first time: there are 9 now, and with two more we will get to 11.
  • This was the first Debian related conference in Delhi or anywhere in Northern India.

DebUtsav Delhi proved very productive in introducing many new people to the Debian project and to free software in general. It also provided an opportunity for the Debian contributors from Delhi to meet the Debian Developers.

The conference was made possible by the continuous efforts of people from the communities of Mozilla Delhi NCR and Indian Linux User's Group Delhi. Some of the people involved in the effort could be seen on the Team section of debutsav.in

Thanks a lot to everyone for putting in their efforts for DebUtsav Delhi. We are looking forward to having another Debian event in Delhi and more events around India.

19 April, 2019 06:30PM by Shashank Kumar


Dirk Eddelbuettel

tint 0.1.2: Some cleanups

A new version 0.1.2 of the tint package is arriving at CRAN as I write this. It follows the recent 0.1.1 release which included two fabulous new vignettes featuring new font choices. The package name expands to tint is not tufte, as the package offers a fresher take on the Tufte style for html and pdf presentations.

However, with the new vignettes in 0.1.1 we now had four full-length vignettes, which made the package somewhat bulky. So for this release I reorganized things a little, adding two new shorter vignettes with links to the full-length ones, keeping the size more constrained.

Two screenshots of the first pages of the Lato and Garamond vignettes follow (and are links to higher-resolution full-length pdf versions):

The new vignettes can also be browsed from CRAN: html variant and pdf variant. They also show the new theme borrowed with thanks and credits from ggtufte.

The full list of changes is below.

Changes in tint version 0.1.2 (2019-04-19)

  • Two new shorter html and pdf vignettes have been added (with references to the longer vignettes) reducing package size.

  • New helper function 'theme_tint' based on a similar function in package 'ggtufte'.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the tint page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 April, 2019 03:13PM

Molly de Blanc


I became a Debian Developer towards the end of 2018. I started the process in August 2017 at DebConf in Montreal. Over the course of 17 months I wrote emails, searched the Debian wiki, and learned a lot about the project.

What’s a non-uploading Debian Developer (DD)?

A non-uploading DD is one who does not upload packages to the Debian code base. A non-uploading DD:

  • is member of the Debian project;
  • is allowed to vote about issues regarding the whole project;
  • can log in on most systems that keep Debian running; and
  • has access to the debian-private mailing list.

Why become a DD?

I had two main reasons for becoming a DD: I was told debian-private was mostly vacation notices and baby pictures. I also wanted to vote in the DPL elections.

It turns out -private contains ZERO baby pictures, and choosing who to vote for in DPL elections is hard work.

There are other reasons to become a non-uploading DD. I found that the most compelling one, from my perspective, was the authority and respect that comes along with it. When representing the project, which I have done on several occasions, it’s easier to get things done when you have a title or formal affiliation attached to your name.

I joined the A-H team with the understanding that I would become a DD — they preferred someone with official status in the project be on that team. In addition to empty promises of baby pictures, I became a DD because I wanted to take on more responsibility in the project.

There’s also a certain amount of the feeling of belonging that goes along with becoming a formal member of a project. There’s a lot to say about the value of the recognition of your peers — that they consider you a part of the team.

What do you do for Debian?

I’m on the Outreach and Anti-harassment teams.


The Outreach team coordinates Debian participation in internship programs, specifically, and currently, Google Summer of Code and Outreachy. We participate in Outreachy twice a year, and GSoC during the Northern Hemisphere summer. This mostly includes a lot of paperwork, emailing people to make sure they’re on task, and talking with the home organizations of Google and Outreachy.

Since I do not mentor any projects, this work is fairly condensed, though very demanding. It’s time sensitive, with externally imposed deadlines not of our own creation.

During a period of several weeks — and application periods overlap in March — we:

  • confirm with the Debian Project Leader funding for Outreachy;
  • put out our calls for mentors;
  • assist mentors in finding co-mentors when appropriate;
  • evaluate projects, separating them into “approved” and “unapproved” categories, based on whether they meet the Debian participation criteria;
  • fill out the application forms for GSoC and Outreachy;
  • make announcements, calls for mentors, calls for projects, and calls for applicants;
  • field questions and requests from applicants;
  • keep up with mentors during the application period;
  • make formal decisions about the number of interns and who they are based on requests from mentors, available funding, and an amorphous process of reading mentor reports and trying our best to judge who will not only be a successful intern, but also who will be a successful mentor for a project;
  • keep up with mentors and interns during the period of the internship;
  • make sure everyone gets invoiced and paid appropriately;
  • make sure everyone has a good time; and
  • general other things as they come up.

As a total process, it’s quite consuming at times, but relaxed at other times. I would say that administering for Outreachy is an “easier” process, as the mentors are (generally, overall, usually) more experienced and self-managing. GSoC is also a much bigger program.


I could, and likely will, write a much longer post about the Anti-harassment team. I gave a talk at FOSDEM on the activities and operating procedures of the team. The quick summary is that we meet every fortnight, discuss reported incidents, and either make recommendations to other teams about how to proceed or send personal emails to the individuals involved, pointing out inappropriate behaviors and asking people to be more professional in their project participation.

What did your process include?

Mostly emails. A lot of emails, and back and forth with my AM (application manager). I’m sure many people say this, but my AM was great.

I went through the initial steps — deciding to apply after many, many people convinced me of the validity of my contributions to the project; getting my key signed by an appropriate number of DDs; recruiting advocates for my application; etc.

Then came the questions and the tests. A big chunk of questions were around philosophy, policy, and procedures of the project. We covered licensing questions, the DFSG, the philosophy of user freedom, how different things within the Debian project are decided, and a bunch of other sections.

There was also a technical section to my application, which covered more policy and procedure around how things are done within the Debian project. I worked on a bug (one on a piece of web site content) and submitted the edit on salsa, Debian's GitLab instance. I collaborated on documents on storm.debian.org, logged into servers using ssh, and encrypted and decrypted a number of files over the course of the procedure.

Why did it take so long?

I started my application in August of 2017, and got my welcome email December 26, 2018. I joked that I was going for the longest application period.

It took so long largely because my AM and I were both very, very busy. When faced with free time, we both frequently chose to work on our respective Debian projects rather than on the application process. I think I speak for both of us when I say we agreed that a lot of the other projects we were working on were more timely than my application.

At DC18, I did a personal sprint on my application, and my AM kindly did a personal sprint reviewing it. We met over IRC to handle final steps later that fall. I finished in November, days before the November keyring update, and my application was reviewed in December.

19 April, 2019 02:43PM by mollydb

Iustin Pop

Debian DPL election 2019

As planned for this long weekend (in Switzerland), I went and re-read the platforms, and cast my vote. Which made me very happy, because I voted in the elections for the first time in a long while…

But it didn’t start like that. The first call for nominations ended up with no (valid) nominations, and the follow-up thread was a bit depressing (high load for the DPL role, unclear expectations, etc.). For a time, it looked like the project was drifting, and some of the trolling on the list definitely didn’t help. I managed to prod the thread a bit and there was a nice reply from Lucas which seemed to open the gates—the discussion livened up, and after many more meaningful mails, we ended up with 4 candidates. That’s, in my memory, a very good result.

Which means I went and read the platforms multiple times, and tried to follow the campaign as well (not so successful though), and today I cast my vote. After the initial sad result, very happy to see there are still people who are willing to invest significant time into Debian and its future. Thanks all!

Coupled with the (hopefully soon) upcoming Buster release, and the fact that I managed to update all my small packages before it, I feel much better both about Debian and my involvement with it than a year ago.

Now just looking forward to Python 2 removal :)

19 April, 2019 11:43AM

April 18, 2019

Giovanni Mascellani

Paris BSP and this blog

Hello everybody!

I've never had a blog up to today, and apparently now I do. Why? Well, it happened that there was a Debian Bug Squashing Party in Paris a few weeks ago, and I thought that it might be nice to go, meet some nice people and humbly help release Buster. The great news is that the Debian project is willing to reimburse its members some of the expenses for taking part in a BSP, asking in return that you communicate publicly about what you do during a BSP so that others are motivated to participate as well.

So I guessed that might be the occasion for me to start a blog, and start writing something about what I do in Debian. Here it goes!

It was my first BSP ever, and I am very happy with it. We met for a couple of days at Mozilla's very nice office in Paris (I think they are moving and we were at the old one, though). We were probably around 15 people, mostly from France, Belgium, the Netherlands and the UK (which is not surprising if you look at the high-speed rail map of Europe; or any map of Europe, as a matter of fact).

The great thing of a BSP is that you have a very short loop to other developers. Since a BSP is all about getting RC bugs closed, it is useful to talk directly to Release Team members, and discuss whether they would unblock your fix or not when you are not sure. This saves a lot in terms of human bandwidth and context-switching. Also, I had the occasion to watch more experienced developers in action and learn how to tackle issues I haven't dealt with ever before, like bumping up a library SONAME.

So here is what I managed to do during the BSP, in roughly chronological order.

  • Bug #920209: that is a bug I already had written a patch for a few months ago, but the whole thing had stagnated and the patch was never uploaded. A simple ping to the maintainer was enough: an easy one!

  • Bug #917711: a FTBFS caused by failing tests, in turn caused by a broken internal interface between two libraries. I was able to cook up a patch, but I was not really sure it was the right thing, so I opened a bug upstream, to which upstream never replied. However, upstream itself wrote a similar patch a few days later, which was then uploaded to Debian by ivodd.

  • Bug #925957: this is a bug on a package I maintain, so it was imperative to get it done. fstransform is a tool to convert a filesystem from one format to another (say, XFS to ext4) without having to copy all the data to another filesystem and back (see the upstream README for more details). It is no mystery that such an operation is inherently dangerous, so both the program and its documentation warn prominently that data loss is always a possibility. The idea is: if fstransform can be useful to you, use it, but never on data you cannot afford to lose (you should always have a backup of such data anyway). This bug was about a reproducible failure case for fstransform when it was run with too small a copy buffer. Of course the best thing would be a proper fix, but since the upstream bug had stalled for a year and a half without a solution, fixing it myself seemed implausible, especially because I do not know fstransform's internals. So my fix was to add a warning message that the user has to explicitly acknowledge. This fits my model for fstransform: the user is advised that things might go wrong, and if they are OK with that, then good for them. This was also unblocked by the Release Team. It is not the ideal solution, but at least the user knows what to expect from the program, which can still be useful if you are properly warned; given that Buster should be released as soon as possible, this is, to me, a reasonable compromise. The bug submitter did not agree and reopened the bug. Fortunately, a few days later the upstream author managed to find the proper fix, which I then uploaded.

  • Bug #912467: an FTBFS caused by an updated dependency, a relatively common occurrence among Java packages. A partial patch had already been begun by andrewsh, but was never completed. Using that as a base and drawing from the corresponding upstream patch, it was not difficult to fix the build, as the API changes were mostly cosmetic. I uploaded my patch to salsa, but not to the archive, because I would like someone else to review it. So far that has not happened, and maybe I should have another look at it.

  • Bug #917501: a rather nasty heisenbug that I could not really understand during the BSP: it occurred when I built with sbuild, but not when I built directly on my system, so I could not debug it. Yesterday it was finally traced back by sanvila to the use of eatmydata, which is plausible because I also use eatmydata with sbuild (it really speeds up package installation!). The bug was downgraded to normal severity, and I guess it should also be reassigned to eatmydata, since it is eatmydata's duty to be transparent to whatever is running above it.

  • Bug #915319: an FTBFS due to a bad library-detection procedure in CMake. The detection of libsmbclient.h failed because CMake insisted on compiling with -std=c89 (or so), while that header requires C11. Easy to patch.

  • Bug #870267: Very very easy: the bug had already been solved, but was left open.

  • Bug #877773: it was quite easy to understand what had to be done here (i.e., a SONAME bump), but I had never done one. So I asked around, and jcristau showed me how to do it, including requesting a mini-transition and so on. Very instructive and, I think, really one of the points of gathering together instead of everybody working at home.

  • Bug #905697: I ported some of the kdepimlibs libraries to libical3, so that libical2 can be dropped from the archive. The difficult part was tracking down the corresponding upstream patches, because in the meantime the libraries had been reorganized into different repositories. In the end I found this patch, and it was not difficult to make it ready for Debian.

  • One last thing not coming from a bug: Debian Buster should finally support Secure Boot, so that the computer's firmware can cryptographically validate the operating system before launching it. The Debian EFI team recently updated the Secure Boot page on the wiki with a clear description of how it works and how to enable it in Debian. I reviewed it and sent an email with my thoughts to the relevant mailing list.

So here it is, my blog and my report. Now, Debian, give me my money! :-P

Many many thanks to Mozilla for allowing us to use their spaces for the BSP, and to jcristau and olasd who organized it. Now let's try to get Buster released and maybe see you in Brazil (I'm not sure yet I will be able to come, because of [INSERT_REASONS_HERE], but I hope so).

18 April, 2019 09:00AM by Giovanni Mascellani

April 16, 2019

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, March 2019

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In March, 204 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 14 hours (out of 14 hours allocated).
  • Adrian Bunk did 8 hours (out of 8 hours allocated).
  • Ben Hutchings did 22.5 hours (out of 20 hours allocated plus 16.5 extra hours from February, thus carrying over 14 hours to April).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 26 hours (out of 29.5 hours allocated + 2.5 extra hours from February, thus carrying over 6h to April).
  • Hugo Lefeuvre did 20 hours (out of 20 hours allocated).
  • Markus Koschany did 29.5 hours (out of 29.5 hours allocated).
  • Mike Gabriel did 14 hours (out of 10 hours allocated + 4 extra hours from February).
  • Ola Lundqvist did 8.5 hours (out of 8 hours allocated + 2 extra hours from last month, thus carrying over 1.5h to April).
  • Roberto C. Sanchez did 12 hours (out of 12 hours allocated).
  • Sylvain Beucler did 29.5 hours (out of 29.5 hours allocated).
  • Thorsten Alteholz did 29.5 hours (out of 29.5 hours allocated).
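The carried-over figures in the list follow a simple rule: whatever was available (allocated hours plus hours carried in from the previous month) but not worked moves to the next month. A quick sketch, my own illustration rather than Freexian's actual tooling:

```python
def carry_over(done: float, allocated: float, carried_in: float = 0.0) -> float:
    """Hours carried to the next month: available but unworked time."""
    return max(0.0, allocated + carried_in - done)

# Checked against the figures above:
assert carry_over(22.5, 20, 16.5) == 14.0  # Ben Hutchings -> 14h to April
assert carry_over(26, 29.5, 2.5) == 6.0    # Emilio Pozuelo Monfort -> 6h
assert carry_over(8.5, 8, 2) == 1.5        # Ola Lundqvist -> 1.5h
```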

Evolution of the situation

In March we had one new contributor, Sylvain Beucler, though we lost Antoine Beaupré. Thankfully we also gained Jonas Meurer starting in April, yet we are still very much looking for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

On a positive note, we are also pleased to welcome a new French university among LTS sponsors: Université Grenoble Alpes.

The security tracker currently lists 36 packages with a known CVE and the dla-needed.txt file has 39 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


16 April, 2019 10:23AM by Raphaël Hertzog

hackergotchi for Bits from Debian

Bits from Debian

Debian Web Team Sprint 2019

The Debian Web team held a sprint for the first time, in Madrid (Spain) from March 15th to March 17th, 2019.

We discussed the status of the Debian website in general, reviewed several important pages and sections, and agreed on many ways to improve them.

For the sections we want to reorganise (mainly the homepage and a new "download" section which will group our current "CD" and "distrib" sections), we have designed this workflow:

  • Create branches in the webwml repo.
  • Agree on the new or modified content (started during the sprint) and keep working on it afterwards.
  • Review related open bugs to see if we can address them with the new content (done during the sprint).
  • Create bug reports for the things that cannot be solved quickly, to keep them tracked (started during the sprint).
  • Get further help from web designers/information architects (pending).
  • Once the English version is more or less settled, call for translations on the branch (pending).
  • When English and the main translations are ready, merge into the master branch (pending).
  • Try to have at least the homepage and the download section ready for the Buster release.

We also agreed that the press delegates should decide which news entries are worth posting on the homepage, instead of simply showing the last 6 entries.

For some other pages or areas (e.g. doc/books, misc/merchandise, /users) we found that the content is outdated and the team cannot maintain it. We agreed to issue a call for help (request for adoption); if we cannot find volunteers for those pages or areas, we'll remove the content or move it to wiki.debian.org at the end of 2019.

We agreed that we need to reduce the size (number of pages) of the website (see some statistics at the bottom) so that keeping the whole site up to date, content-wise, is sustainable. We'll remove pages whose content is already covered elsewhere, is easily discoverable with a web search engine, or can be better maintained in the wiki.

We also talked a bit about other aspects such as the point-release workflow, the build time of the website, team membership, and governance. In general, the sprint showed that for most of the topics discussed, the migration to git as VCS and the existence of Salsa are a huge step forward for the usability of the webwml repository and its attractiveness to contributors.

The core web team is happy that the sprint also attracted new people, who have since become members of the web team. We welcome Thomas Lange and Carsten Schoenert to our team!

Finally, we spent time together socializing and getting to know each other better, and came away very motivated to continue working on the web.

Group photo of the participants in the Web Team Sprint. Left to right: Rhonda D'Vine, Laura Arjona Reina, Thomas Lange, Carsten Schoenert, Steve McIntyre

A more detailed report has been sent to the debian-www mailing list.

The participants would like to thank all donors to the Debian project who helped to cover a large part of our expenses.

16 April, 2019 07:20AM by Laura Arjona Reina

April 15, 2019

hackergotchi for Olivier Berger

Olivier Berger

Labtainers in a Web desktop through noVNC X11 proxy, full docker containers

Here’s another recording of a demo I’ve made of our efforts on running Labtainers with a Web desktop.

This time, we're using a modified version of DoroWu's noVNC X11 wrapper container ("docker-ubuntu-vnc-desktop") to provide an X11 display to Labtainers.

I initially adapted DoroWu's container image to use Debian instead of Ubuntu, but the noVNC 1.0.0 packaged in Debian appeared to be a bit too old. So I switched back to the same version DoroWu chose, and this time there were no more disconnections.

Also, we’re now rebuilding the Labtainer master container from upstream’s Git repo. Fresh Labs every day 😉

Labtainers in a Web desktop through noVNC X11 proxy, full containers from Olivier Berger on Vimeo.

See our previous post on a similar topic for the context.

You can find the Dockerfile and scripts at https://gitlab.com/olberger/virtual-labs/tree/xfce4/labtainer-docker, and the two container images have been put on Docker Hub.

We’d welcome some feedback on whether you find this interesting.

15 April, 2019 03:08PM by Olivier Berger

hackergotchi for Shirish Agarwal

Shirish Agarwal

Hackergram Journey Part 2 – The Conclusion

If you have not read part 1 of the Hackergram journey, I suggest doing that first. It will give you a more complete picture of where I'm coming from.

Breakfast, Lunch and talking with locals

The previous evening, when I was put up at Lake Heritage, I went out and got 2-3 Electrals. I had ordered dal and rice in the evening, but again there was no taste, so I didn't eat much. I forced myself to have at least 5-6 spoonfuls, partly to see if it would stay down and partly because I needed some food in my stomach. While the night was not peaceful, it was much more peaceful than at the camp. I probably woke up a handful of times, but each time, looking at the lake just filled me with peace. At times I wished I were all better so I could have spent the whole night just watching the lake. Anyway, somehow the night passed, though not without showing me all the beautiful views of the lake and the different plays of light 🙂 . The amateur story-writer in me could think of so many stories which were and are waiting to be tapped in such a beautiful place. I kept wondering whether this is why so many writers of Indian origin, and even Westerners, come to places like these to write. I could understand why Ruskin Bond, the famous children's writer, chose to make Uttarakhand and Mussoorie his base rather than anywhere else. I know of quite a few writers of Hindi and Urdu fiction, writing crime, sex and human-interest stories, who were a craze in North India from the 1960s till the late 1990s, who came from such places and eventually settled in Delhi. Seeing the tranquility of a place like this against a place like Delhi, it is easy to see the dissonance those writers might have felt, which they poured into their writing. I have to say that Bhimtal feels like a place where one could pen many a fantasy story, as it is still somewhat raw.

Anyway, morning came; I did my morning routine stuff and, ignoring Gauri's suggestion, took a bath. While my knee hurt quite a bit, I felt like a whole different person. I came down to see what they had for breakfast. I didn't want anything fancy, just toast and eggs, or pohe. Because the hotel had neither, I was told of a small shop about 20 meters away. I went there, had tea, bread and egg, and a whole lot of conversation about Bhimtal and its surroundings. According to the gentleman there, the local population of Bhimtal alone is approximately 40k, per some survey done a couple of years back. I don't know whether it makes sense to compare that with Shivaji Nagar, Pune, which covers around 5-6 sq. km and has around 0.5 million residents, plus probably another 20-25% roving population due to the high number of schools and colleges in and around it; a lot of people come to Pune to study or work, as I have shared before. Maybe somebody local to Bhimtal, either Sanjay or Karan, would be able to tell or share more.

As I was feeling better, I interacted a bit with the locals and came to know that there had been proposals to build a factory (people had even drilled part of the mountain), which the locals protested, and legislation was passed in the local municipality so that further incursions can be avoided. While I did see some construction in and around Bhimtal, it seemed much more under control than what I had seen in Bhagsu, Dharamshala, Himachal Pradesh and elsewhere. While Bhimtal still has enough water, Bhagsu and many other places in Himachal Pradesh are completely dry in the summer months. A few years back, I had taken part in the Chota Char Dham Yatra by myself in memory of my grandmother (mother's side); except for Kedarnath, I had done all the other three. I had seen first-hand the kind of ecological destruction that the Tehri Dam and its sisters were inflicting on the environment. I even lived in Tehri town, which was built a little away from but above the dam, and had come to know the helplessness of the people living there even before the 2013 flash floods happened. As shared in part 1, unchecked development is a concern for everyone.

For lunch, I wanted to try rice and kadhi. While I have made kadhi at home and my mum makes great kadhi, the hotel didn't have it, so I took another trip to a stall that made rice and kadhi and had that in the hotel room. While I wasn't able to finish it fully, I managed three-quarters of the portion, which made me feel good. It was tasty and filling. Part of it was probably me being cautious, though I did make sure to have a lot of Electral water so that I stayed hydrated, even if it meant a few extra trips to the loo. Anyway, lunch happened; I had a lie-down for some time, came down, and saw that Sanjay and Karan had come to pick me up for Kathgodam.

The Journey from Bhimtal to Kathgodam Railway Station.

As my bags were mostly packed, they just had to be fetched. While Karan did the hotel checkout, Sanjay helped me with the luggage and put it in his car. The journey, which lasted about 30-45 minutes, was beautiful. Sanjay had acoustic blues playing, which added to the beauty of the place and the ride. Karan had questions for me, as he wanted to know about JavaScript, but my mind was still unfolding, so I decided to stick to the very basics. I don't remember whether I told him about the JS team in Debian and their work on Salsa. For those who don't know, Salsa is a GitLab instance which Debian uses and also contributes to. I possibly did not share the JS tutorial, though that is aimed more at packaging JavaScript/Node.js modules for Debian. I think Karan was more into upstream development, so I told him about some of the web frameworks, though obviously not all, as I was still in recovery mode; and then again, it is very much a personal choice what tool you use to express yourself. For example, I find WordPress easier to use, while many people rave about Medium, and others build very basic static sites using one of the tools mentioned in the list. So it is probably a good idea to just let people figure out what works for them. I did share that Pune has around 100 or so co-working places, and that there are some big names like TRIOS making quite some noise. There has also been quite some noise about co-living. I told Sanjay that Pune and most metros have many places with open-mic nights, although the newest-oldest trend has been to open your own place to singers or performers. I had been to places like these for more than a decade, but now it's a bit more formalized rather than something between friends. I shared about BalconyTV, which Sanjay knew about. We also discussed Indian blues, melody, etc., and time went by.
Before we knew it, we were near Kathgodam Railway Station.

Kathgodam Railway Station

It is possible that I had been to Kathgodam Railway Station before, but that would have been a decade or more ago, and somehow I didn't realize I was so close to Nainital and other places. Here's a picture taken by a friend, Siddharth S., who did the trip the other way, from Delhi to Kathgodam (KGM), and was able to take it from above.

Kathgodam or KGM Junction view from bridge, Copyright – Siddharth S.

As can be seen, it looks beautiful. Vinay, part of the Hackergram team, provided the tickets for the KGM - Delhi Shatabdi Express.

KGM – NDLS Shatabdi Journey (12039)

On entering, I could tell it was an LHB rake, which, while more comfortable than ICF rakes, is prone to more jerks. Unfortunately, train 12039 has only two options, AC Chair Car and Executive Chair Car, with no sleeper; but as it is a short-distance train, that doesn't matter much. While I have probably been on LHB rakes more than half a dozen times and on ICF rakes countless times, I somehow enjoy ICF rakes more, as they are open and people are usually friendlier. This time, though, I was grateful, as I wouldn't have to speak to people so much. To my surprise, I found that this train also had meals included. On some Shatabdi trains meals are included, while on others you can choose not to have them. While there were more than enough servings of meals and snacks, I took only some, as I didn't want any nasty incidents to occur before Delhi and the onward journey to Pune. I will say I enjoyed the ice cream (Amul) and also took a bit of the lunch they provided. The IRCTC food was OK, but nothing to write home about apart from the ice cream, and that is probably down to the ice-cream manufacturer. Sanjay and Karan helped me with the luggage to my seat, and the train started right on time at 15:30. My neighbour for the remainder of the journey turned out to be a gunman (Z-Force) who, with another of his brethren, was duty-bound to a politician travelling in A-1 (Executive Chair Car). The travel itself was pretty straightforward. The route was Kathgodam (KGM) – Haldwani (HDW) – Lal Kuan (LKU) – Rudrapur City (RUPC) – Rampur (RMU) – Moradabad (MB) – Ghaziabad (GZB) – New Delhi (NDLS). While the scheduled arrival at New Delhi was 21:30, we pulled in at 21:45. I was a bit panicky, as my flight was at 23:30, and although I had been told that should be enough time to reach the airport, I didn't know, as it all depended on connectivity.
The one thing I should have done was buy some water bottles at KGM itself, as my bottle had finished pretty early and it was risky to try to get one at any of the intervening stations.

NDLS at night

The Race to DIAL

The above is New Delhi Railway Station as it looks from outside at night. I came down from platform 16 to the side from where I took the picture. I knew I didn't have much time, so I took it while on a semi-run. The bad thing about NDLS is that there are no signs telling you where the Metro is. I knew there was a Metro connection to Delhi Aerocity, but not where. I was misguided (probably not intentionally), and it took quite an effort to reach the Metro station. From where I took the photograph, the Metro entrance is at a 45-degree angle, and then you have to go down. Somehow I managed to reach the Metro station, buy the token, and take the escalator. Time: 22:15. The clock near the Metro showed 10 minutes to arrival. Five minutes later the Metro arrived, and I figured the time shown was the departure time. I reached Delhi Aerocity at 22:45. While coming up at the end, there were signs to Delhi Terminal T1-C, which, as I had shared before, I thought was a different terminal. I asked a few people, and they said it was walkable. I walked a bit, asked some more people, and learnt that T1 and T1-C were one and the same, and still 4-5 km away. They suggested taking a bus or a rickshaw. I asked the rickshaw guys, and sure enough they started quoting ridiculous figures. I asked the bus guy and he said INR 30; I bought the ticket and was tense, but soon the bus started. I was on tenterhooks but kept telling myself "it's alright", because there was no alternative. I somehow managed to get to the airport and to the baggage drop, and asked if I was still on time. Either seeing the desperation on my face or whatever, the attendant let me do the bag drop. Time: 23:15. I didn't have time for anything else. Fortunately there were no queues at security, and I was able to go through. I just managed to reach the IndiGo gate as they were making the last announcements before closing boarding.
Time: 23:20-23:25. As luck would have it, there were 5-6 other people who were late like me. The IndiGo gate personnel had no choice but to ask for a vehicle so we could all get on. Meanwhile, I put my boarding pass and my government-issued ID in my bag for safekeeping. My seat was 28-E.

Indigo Boarding Pass

When I disembarked from the vehicle at the airplane, I knew I had to go to the back of the plane, and realized I needed to show the boarding pass again before boarding. I fished in my backpack, showed the boarding pass, and climbed the passenger stairs with my hand baggage. As I was about to put my boarding pass back into a special envelope in my backpack, I found that my government ID seemed to be missing. While I didn't want to create a hassle, I still had to inform the flight attendants, and so with great reluctance I did. It took a few minutes, during which I tried to drink as much water as I could, as I hadn't had any for the last 5-6 hours and was stressed. I don't know what the flight attendants thought. Soon the skipper came and I showed him the boarding pass. Interestingly, he took a look at my surname and, seeing it was Agarwal, assured me nothing was amiss and that I should take my seat. I was left wondering what the scene would have been had it been something else. Anyway, they searched for the ID between the gate and where the vehicle had dropped me, and didn't find it.

I was told to file a complaint at lost and found once I reached Pune Airport. We left Delhi around 23:45-23:50 or thereabouts. I had a feeling there might be a hole in my envelope, but didn't want to investigate then. We still reached Pune on time. I disembarked, freshened up, drank lots of water, freshened up again, got my other backpack from baggage claim, and opened the compartment where I had kept my government-issued ID. Sure enough, it was right there, lying by its lonesome. I put the two together, exited the airport, went to the parking area, took a rickshaw, called home, and was there in about 45 minutes. There was relief on both sides. Over an early breakfast, my mother told me that she too hadn't been able to sleep for the last three days. After breakfast we slept like logs for 8-10 hours, made lunch, and slept again. Pune is burning, as we are in the middle of summer: the temperature is around 36-38 degrees Celsius in the morning, and it is sometimes humid in the evening. So while I'm now all well and good, it surely was an adventure.

What would I have done differently

If I had had some idea, I probably would have taken the flight a day before and taken a room near NDLS or in Paharganj, a place with a lot of beauty in itself. While I have extended family in Delhi, at times it is easier to just be by myself, and I have found Paharganj to be a good place for that. There are many hotels in Paharganj, from fancy to budget, and a lot of them have beautiful frescoes like the one I am sharing below.


While the above fresco was in the bathroom at Lake Heritage, I have seen many such mural paintings in many places: Paharganj for one, and even the old Portuguese homes of South Goa have a lot of these.

So I would have come to Delhi, stayed the day, met friends, and then taken the early Shatabdi from NDLS to KGM, reaching KGM at around 12:30 in the afternoon. The Hackergram team had considered this option too, but discarded it because other people were arriving at 9:00-9:30 in the morning, so it would have been impossible to reach NDLS in time. Even so, my plane had touched down at 5:30 and I was out of the terminal by 06:00, and in my experience it takes at least an hour to get from DIAL to NDLS even by the most efficient route. One thing in the works that will improve DIAL is a proposed people-mover from Delhi Aerocity to the airport, but as it stands that will happen by 2024. A proposed map of the people-mover is below.

DIAL proposed people-mover

The link I am interested in is the red one going from Aerocity Station to T1. There is also supposed to be a T4 terminal for more international operations, but all of this is supposed to happen by 2024 at the latest.

What Steps can the Government take for more tourism in Uttarakhand

While it's a razor's edge, Uttarakhand and Kumaon have a lot of nature- and heritage-based tourist attractions, and a number of things could be improved. For instance, the nearest airport to Kathgodam is Pantnagar, which could perhaps be developed into an airport serving not just Delhi but other places such as Pune, Mumbai, etc., though a lot of infrastructure would have to be developed for that. I have no clue how big the airport is, or whether it could handle something like an A320. The other thing the government could do is build a double electrified line from Rampur to Kathgodam; right now it's a single, un-electrified, semaphore-signalled line. Again, I don't know whether that is viable or possible, or whether it would endanger forest cover.

Note: because of Jet Airways' troubles as well as the 737 MAX groundings, fares shot up. A typical ticket from Pune to Delhi or vice versa, which usually costs around INR 3-3.5k, shot to 7k and beyond. We need a fifth operator soon; as a dear twitterati recently shared:

India is a low ticket-size, high-volume and long term market where companies need a war chest of cash to survive and take on competition. Striking similarities between the ride hailing and the #aviation scene in India. #AvGeek #Uber

Vinamra Longani, Aviation Analyst on Twitter,

15 April, 2019 11:16AM by shirishag75

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

The woes of 520-byte sectors

A couple of years ago, I bought a 12Gb/s SAS disk to see if I could get it to work with the RAID controller with external SFF-8088 ports that came with the system I got while working at The Linux Foundation. I got an enclosure to go with it because I was enthusiastic and optimistic about my ability to get things set up. My plan was to take the 1T disks from the storage server I had been using before, which had since failed, and put them into this new enclosure. I got a couple of SFF-8088 cables, and the enclosure came with some SFF-8088 to 7-pin SATA break-outs. I bought an additional dual SFF-8088 to SFF-8087 adapter and a couple of SFF-8087 to 4x SFF-8482 cables. I got the 8482s because I imagined they would be required to take full advantage of the 6TB 12Gb/s SAS disk mentioned above.

Unfortunately, when I attached the disks to the controller, none of them worked, and I gave up, frustrated and disillusioned, for a good long time. Last week was spring break, and I took a few days off from work to hang out with my girls. When we weren't all goofing off together, I was puttering around with a new HBA I got to replace the apparently non-functional HP RAID controller that came with the server.

However, when I removed the HP RAID controller and put in the LSI 9201-16E HBA I had bought to replace it, there was again no SCSI love. The lsscsi command showed nothing, but lspci showed the controller, and I started digging through documentation to figure out what might be wrong. I eventually stumbled upon a firmware-flashing ISO disk image on the Broadcom site. Broadcom apparently purchased LSI not too long ago and is now responsible for managing documentation and downloads for the legacy devices, of which I am now a proud owner.

I was able to build a USB disk from this ISO using unetbootin and emacs. I had to modify the syslinux.cfg file to correct the path and case of the filenames. And for some reason, unetbootin replaced the filename of the disk image from which freedos was supposed to boot with something useless. In any case, after corrections, the entry in syslinux.cfg looks like this:

label ubnentry20
menu label 9201_16e
append initrd=/LSI/IMG/9201_16E.IMG

Unfortunately, the firmware on the 9201_16E.IMG fat filesystem was old, so I loop mounted it and wrote a more recent version of the firmware, mptsas2.rom and sas2flsh.exe to it. I’ve uploaded it to 9201_16E.IMG in case anyone might find it useful. While I’m at it, I suppose I should put the whole usb boot image up for those who might need it. See lsi-flash-usb.img.bz2. To use this disk image, decompress the .bz2 file and dd it to your USB block device, for instance /dev/sdb in my case:

$ bunzip2 -c /tmp/lsi-flash-usb.img.bz2 | sudo dd of=/dev/sdb

My Proliant DL370 G6 server doesn’t have a fancy new UEFI system, so I used this boot disk to get access to the SAS controller in real mode. Once I booted it and selected 9201_16e (the 21st option from the top), I had to rapidly press the up or down arrow and select the EMM386 option to load the driver. For those playing along at home, I placed the more recent LSI controller firmware and BIOS in the \CJ directory, which has the same structure as the \LSI directory from the original disk image. You should be able to

cd \CJ

and run

sas2flsh.exe

Answer the prompts, and afterwards you should have a controller with P20 firmware and BIOS written to it.

Daunting as this was, it only got me most of the way through the process. All of my SATA disks were working at this point, but the SAS disk was not. After some sleuthing, I discovered that the disk was SCSI-formatted to 520-byte sectors. Linux does not accept sector sizes that are not a power of 2, so I was unable to do anything with the disk using fdisk, kpartx, etc. Research pointed me to the sg_format utility, but I was unable to make that work. Here’s the output of the command:

$ sudo sg_format --format --size 512 -vv /dev/sg4
open /dev/sg4 with flags=0x802
    inquiry cdb: 12 00 00 00 24 00 
    SEAGATE   DKS2F-H6R0SS      7FA6   peripheral_type: disk [0x0]
    inquiry cdb: 12 01 00 00 24 00 
    inquiry cdb: 12 01 80 01 00 00 
      Unit serial number: Z4D0M2BF0000W515S4WH
    inquiry cdb: 12 01 83 01 00 00 
      LU name: 5000c50062ba7973
    mode sense (10) cdb: 5a 00 01 00 00 00 00 00 fc 00 
    mode sense (10): pass-through requested 252 bytes (data-in) but got 28 bytes
Mode Sense (block descriptor) data, prior to changes:
Mode sense number of blocks maxed out, set longlba
    mode sense (10) cdb: 5a 10 01 00 00 00 00 00 fc 00 
    mode sense (10): pass-through requested 252 bytes (data-in) but got 36 bytes
Mode Sense (block descriptor) data, prior to changes:
  <<< longlba flag set (64 bit lba) >>>
  Number of blocks=11473076960 [0x2abd942e0]
  Block size=520 [0x208]
    mode select (10) cdb: 55 11 00 00 00 00 00 00 22 00 
    mode select (10) parameter list
00 00 00 00 01 00 00 10  00 00 00 00 00 00 00 00
00 00 00 00 00 00 04 00  01 0a 0c 14 ff 00 00 00
05 00
mode select (10):
Descriptor format, current; Sense key: Illegal Request
Additional sense: Parameter list length error
  Descriptor type: Sense key specific: Field pointer:
        Error in Command: byte 7 bit 7
  Descriptor type: Field replaceable unit code: 0x5
  Descriptor type: Vendor specific [0x80]
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 
 Raw sense data (in hex):
        72 05 1a 00 00 00 00 1c  02 06 00 00 cf 00 07 00
        03 02 00 05 80 0e 00 00  00 00 00 00 00 00 00 00
        00 00 00 00
MODE SELECT command: Illegal request sense key, apart from Invalid opcode
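The power-of-2 rule mentioned above just means "exactly one bit set", which can be checked with the classic n & (n-1) bit trick. A small illustrative sketch (mine, not from the original post), in plain shell arithmetic:

```shell
# A positive integer is a power of two iff it has exactly one bit set,
# i.e. n & (n - 1) == 0.
is_pow2() { [ "$1" -gt 0 ] && [ $(( $1 & ($1 - 1) )) -eq 0 ]; }

is_pow2 512 && echo "512: power of two, accepted by the Linux block layer"
is_pow2 520 || echo "520: not a power of two, hence fdisk/kpartx failed"
```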

I was about to give up, but for some reason did not. I tried to find firmware and read whatever documentation I could about my disk, a Seagate ST6000NM0024. Eventually I found a GitHub repo called ToolBin. In this repo there is a program named SeaChest_Format_121_1183_64 which seems to use a proprietary SCSI format command to tell the drive to format itself using 512-byte sectors. I can’t say for certain yet that it works, but it did not immediately fail as sg_format did. The command I used (and this will vary for you, depending on what sg_scan -i tells you your SCSI generic device is) is:

./SeaChest_Format_121_1183_64 -d /dev/sg5 --fastFormat 1 --formatUnit 512 --confirm I-understand-this-command-will-erase-all-data-on-the-drive

Assuming this command succeeds after its 12-hour run, I will purchase another 7 of these drives and fill out my disk array!

Sun Apr 14 13:39:48 PDT 2019
        Format Unit Progress = 3.61%

Thanks to the #linux-raid channel for helping me through this!


Mon Apr 15 08:16:00 PDT 2019
        Format Unit Progress = 99.97%
Mon Apr 15 08:17:00 PDT 2019
        Format Unit Progress = 100.00%

[70874.677389] sd 3:0:2:0: device_block, handle(0x0012)
[70876.176199] sd 3:0:2:0: device_unblock and setting to running, handle(0x0012)
[70876.249541] mpt2sas_cm0: removing handle(0x0012), sas_addr(0x5000c50062ba7971)
[70876.253344] mpt2sas_cm0: removing : enclosure logical id(0x500062b20059ecc0), slot(1)
[70886.680412] scsi 3:0:3:0: Direct-Access     SEAGATE  DKS2F-H6R0SS     7FA6 PQ: 0 ANSI: 5
[70886.684166] scsi 3:0:3:0: SSP: handle(0x0012), sas_addr(0x5000c50062ba7971), phy(2), device_name(0x5000c50062ba7970)
[70886.688833] scsi 3:0:3:0: SSP: enclosure_logical_id(0x500062b20059ecc0), slot(1)
[70886.750049] sd 3:0:3:0: Attached scsi generic sg5 type 0
[70886.750143] sd 3:0:3:0: [sdd] Spinning up disk...
[70887.765498] .
[70888.789522] .
[70889.813591] .
[70890.837718] .
[70891.861715] .
[70892.885876] .
[70893.909889] .
[70894.933912] .
[70895.957989] .
[70896.982028] .
[70898.006133] .
[70899.030215] .
[70900.054269] .
[70901.078340] .
[70902.102359] .
[70903.478491] .
[70904.502557] .
[70906.823608] ready
[70906.826165] sd 3:0:3:0: [sdd] 11721045168 512-byte logical blocks: (6.00 TB/5.46 TiB)
[70906.829779] sd 3:0:3:0: [sdd] 4096-byte physical blocks
[70906.834708] sd 3:0:3:0: [sdd] Write Protect is off
[70906.837364] sd 3:0:3:0: [sdd] Mode Sense: 13 00 10 08
[70906.838142] sd 3:0:3:0: [sdd] Write cache: disabled, read cache: enabled, supports DPO and FUA
[70906.867106] sd 3:0:3:0: [sdd] Attached SCSI disk
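As a sanity check (my addition), the 520-byte geometry from the earlier mode sense output (11473076960 blocks) and the 512-byte geometry the kernel now reports (11721045168 blocks) should both describe roughly the same 6 TB drive; bash's 64-bit arithmetic is enough to confirm:

```shell
# Figures taken from the mode sense and dmesg output above.
blocks_520=11473076960   # 520-byte sectors, before reformatting
blocks_512=11721045168   # 512-byte sectors, after the SeaChest format

# Integer GB (10^9 bytes); shell arithmetic is 64-bit, so no overflow.
echo "520-byte geometry: $(( blocks_520 * 520 / 1000000000 )) GB"
echo "512-byte geometry: $(( blocks_512 * 512 / 1000000000 )) GB"
```

Both come out at about 6000 GB, so the reformat preserved the full capacity of the drive.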

$ sudo ./SeaChest_Info_141_1183_64  -d /dev/sg5 -i
 SeaChest_Info - Seagate drive utilities - NVMe Enabled
 Copyright (c) 2014-2018 Seagate Technology LLC and/or its Affiliates, All Rights Reserved
 SeaChest_Info Version: 1.4.1-1_18_3 X86_64
 Build Date: Oct 18 2018
 Today: Mon Apr 15 14:52:33 2019

/dev/sg5 - DKS2F-H6R0SS - Z4D0M2BF0000W515S4WH - SCSI
        Vendor ID: SEAGATE 
        Model Number: DKS2F-H6R0SS    
        Serial Number: Z4D0M2BF0000W515S4WH
        Firmware Revision: 7FA6
        World Wide Name: 5000C50062BA7973
        Copyright: Copyright (c) 2014 Seagate All rights reserved 
        Drive Capacity (TB/TiB): 6.00/5.46
        Temperature Data:
                Current Temperature (C): 45
                Highest Temperature (C): Not Reported
                Lowest Temperature (C): Not Reported
        Power On Time:  21 days 11 hours 8 minutes 
        Power On Hours: 515.13
        MaxLBA: 11721045167
        Native MaxLBA: Not Reported
        Logical Sector Size (B): 512
        Physical Sector Size (B): 4096
        Sector Alignment: 0
        Rotation Rate (RPM): 7200
        Form Factor (inch): 3.5
        Last DST information:
                DST has never been run
        Long Drive Self Test Time:  10 hours 2 minutes 
        Interface speed:
                Port 0 (Current Port)
                        Max Speed (GB/s): 12.0
                        Negotiated Speed (Gb/s): 6.0
                Port 1
                        Max Speed (GB/s): 12.0
                        Negotiated Speed (Gb/s): Not Reported
        Annualized Workload Rate (TB/yr): 0.00
        Total Bytes Read (MB): 6.39
        Total Bytes Written (MB): 14.99
        Encryption Support: Not Supported
        Cache Size (MiB): Not Reported
        Read Look-Ahead: Not Supported
        Write Cache: Disabled
        SMART Status: Good
        ATA Security Information: Not Supported
        Firmware Download Support: Full, Segmented, Deferred
        Specifications Supported:
        Features Supported:
                Application Client Logging
                Self Test
                Automatic Write Reassignment
                Automatic Read Reassignment
                Informational Exceptions [Mode 0]
                Translate Address
                Format Unit

15 April, 2019 01:22AM by C.J. Collier

April 14, 2019


Markus Koschany

My Free Software Activities in March 2019

Welcome to gambaru.de. Here is my monthly report (a bit later than usual) that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • Lars Kruse reported a bug in the gui-sdl2 theme of Freeciv, the famous strategy game, which I could quickly fix.  (#923563)
  • I fixed RC bug #922947 in retroarch-assets because of a change in fonts-roboto that broke symlinks to font files.
  • Pedro Pena and Carlos Donizete Froes packaged two new games for Debian, Infinitetux (Pedro) and Pekka Kana 2 (Carlos). I reviewed and sponsored both games and they are currently waiting in the NEW queue. Infinitetux is a Super Mario-like game written in Java. The original author of the game is none other than Markus Persson, the developer of Minecraft. This game is one of his earlier works and used the original game content from Nintendo; however, Pedro completely replaced the artwork with freely available images and sounds. Quite interesting for Java developers: the game requires no third-party libraries and uses only classes from the JDK. Pekka Kana 2 is another jump-and-run game, from Finnish creator Janne Kivilahti, who kindly released it under a permissive BSD-2-clause license.

Debian Java

  • I tackled several RC bugs in Java packages this month.
  • libjogl2-java (#887140): The package failed to build on several unsupported architectures. Since we are already glad that it works on amd64, I had to limit the support in debian/control to those architectures where the package may be useful.
  • lucene-solr (#919638): Solr refused to start with Tomcat 9 because of more strict permissions in Tomcat’s systemd service file. I initially tried to fix this in Tomcat but had to add a new systemd conf file to lucene-solr that overrides the permissions now.
  • javahelper (#923756): I implemented a workaround for Javadoc build failures that started to occur only two months ago after the OpenJDK 11 package was upgraded.
  • owasp-java-html-sanitizer (#923654): I removed the now non-existent build-dependency on libjsr305-java-doc.
  • sweethome3d (#924594): I had to replace the virtual dependency on icedtea-netx-common with icedtea-netx.
  • I triaged an RC bug in libitext-java (#923364). Unfortunately the bug submitter did not provide further information.
  • It is a bit sad that Netbeans is currently affected by a severe bug which makes it impossible to create new Java projects. (#925509) I tried to fix it but I am stuck now. Help is appreciated.
  • I provided a patch to fix RC bug #923759 in netlib-java.


  • The ublock-origin addon no longer works with Firefox 66 in unstable (#925337), which is caused by a value in its manifest file, incognito:split, that is not supported by Firefox. Previous versions of Firefox just emitted a warning; now it is fatal. The same value works fine with Chromium. At the moment we provide one webextension package for both browsers in Debian, but it looks like we have to consider providing two different packages of ublock-origin again, to avoid such pitfalls in the future. I have filed #926586 to get more feedback.

Debian LTS

This was my thirty-seventh month as a paid contributor and I have been paid to work 29.5 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 25.03.2019 until 31.03.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVE in twig, ruby2.1, znc, wpa, cloud-init, dovecot, edk2, activemq, bwa, tomcat8, mosquitto, gpsd, nuget, rails, robocode, libav and clamav.
  • DLA-1708-1. Issued a security update for zabbix fixing 2 CVE.
  • DLA-1711-1. Issued a security update for systemd fixing 1 CVE.
  • DLA-1733-1. Issued a security update for wpa fixing 1 CVE.
  • DLA-1736-1. Issued a security update for dovecot fixing 1 CVE.
  • DLA-1738-1. Issued a security update for gpsd fixing 1 CVE.
  • DLA-1739-1. Issued a security update for rails fixing 2 CVE.
  • DLA-1753-1. Issued a security update for proftpd-dfsg to fix several memory leaks. However, it turned out that under certain conditions (#926719) the daemon now closes sftp connections. This appears to be an upstream bug that was fixed in version 1.3.6. I will investigate whether we have to revert to the previous version or whether we can move forward.
  • DLA-1755-1. Issued a security update for graphicsmagick fixing 6 CVE.
  • While I was working on DLA-1755-1 I discovered a regression in jasper which I addressed with DLA-1628-2.


Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project, but all Debian users benefit from it without cost. The current ELTS release is Debian 7 “Wheezy”. This was my tenth month and I have been paid to work 15 hours on ELTS.

  • I was in charge of our ELTS frontdesk from 11.03.2019 until 17.03.2019 and I triaged CVE in cron, ntp, gdk-pixbuf, glib2.0 and libssh2.
  • ELA-92-1. Issued a security update for xmltooling fixing 1 CVE.
  • ELA-94-1. Issued a security update for openssh fixing 3 CVE.
  • ELA-105-1. Issued a security update for sqlalchemy fixing 2 CVE.
  • I started to work on src:linux and will provide a new package next week.

Thanks for reading and see you next time.

14 April, 2019 03:33PM by Apo

April 13, 2019


Jonathan Carter

Help test Debian Live


During the stretch release period, it became apparent that very few people had been testing Debian Live, and some nasty bugs were discovered only during final release testing. The final stretch images for Debian Live weren’t quite up to the quality the Debian community deserved, and it led to Steve McIntyre asking “IMPORTANT: Do live Debian images have a future?”.

I decided to get involved and have been doing testing and bug fixes throughout the buster release cycle, and with today’s builds, I think we’re at a point where we have something good that’s ready for wide-scale testing.

The Buster live images come with something new that a bunch of other distributions have also adopted: the Calamares installer. Calamares is an independent installer project (they call it “The universal installer framework”) which offers a Qt-based interface for installing a system. It doesn’t replace debian-installer on the live images; rather, it serves a different audience. Calamares is really easy to use, with friendly guided partitioning and really simple full-disk encryption setup. It doesn’t cover all the advanced features of debian-installer (although it very recently got RAID support) and it doesn’t have an unattended install mode either. However, for 95%+ of desktop and laptop users, Calamares is a much easier way to get a system installed, which makes it very appropriate for live systems. For anyone who needs anything more complicated, or who’s doing a mass install, debian-installer is still available in both text and GUI forms.

An image is worth a thousand words, so here’s a bunch of screenshots showing what Calamares looks like on our Gnome live image:

Calamares Intro screen.
Select timezone and localisation.
Select keyboard model, layout and variant.
Partition disk and configure encryption.
Configure user and password.
Confirm choices.
Wait for installer to do the rest.
Reboot or continue live environment.

Download and test

Today’s images are available with the Cinnamon, Gnome, KDE, LXDE, LXQt, Mate, standard (text-only) and Xfce desktop environments for amd64 and i386.

I haven’t yet tested the i386 images myself, so anything is possible there. We’re also planning an upcoming beta (well, it will be called a release candidate, but that’s because it will be RC1 of debian-installer), so if anyone has some time to do some testing that would be great. It’s especially useful to test on a wide variety of supported hardware and ensure that things work as they should. We’re already looking a lot better than we did last cycle, but that’s no reason to be overconfident.

Please file bugs for major problems or hardware support issues. Feature request bugs or similar aren’t really useful at this stage.

More screenshots

This wasn’t my personal first choice for default wallpaper, but I like its colours and they work really well with all the other elements.

ISO splash image when booting in legacy mode

GRUB boot loader
Plymouth boot splash
GDM Login Screen
Gnome desktop

What about bullseye?

The next Debian release, Debian 11, will be code named ‘bullseye’.

I’m planning to schedule a BoF at DebConf19 for Debian Live where we cover at least the following:

  • Reduce the number of i386 images. We currently have 8 of them and we probably just need one or two light variants for the i386 machines that are still supported by Debian.
  • Get the desktop teams more involved. And ideally, have them test and sign off for their live variant for alphas, betas and the final release. If you’re a maintainer of a desktop environment, it would be great if you could attend this session.
  • Reduce the number of paper cuts in our live media. We’ve made some progress on these during this cycle, but there are some minor annoyances that remain that we’d like to completely eliminate next time.

Well, if you got this far, thanks for reading! You can also join us on IRC in #debian-live and #debian-boot on the OFTC network if you have any questions.

13 April, 2019 08:38PM by jonathan

François Marier

Secure ssh-agent usage

ssh-agent was in the news recently due to the matrix.org compromise. The main takeaway from that incident was that one should avoid the ForwardAgent (or -A) functionality when ProxyCommand can do the job, and consider multi-factor authentication on the server side, for example using libpam-google-authenticator or libpam-yubico.

That said, there are also two options to ssh-add that can help reduce the risk of someone else with elevated privileges hijacking your agent to make use of your ssh credentials.

Prompt before each use of a key

The first option is -c which will require you to confirm each use of your ssh key by pressing Enter when a graphical prompt shows up.

Simply install an ssh-askpass frontend like ssh-askpass-gnome:

apt install ssh-askpass-gnome

and then use this option when adding your key to the agent:

ssh-add -c ~/.ssh/key

Automatically removing keys after a timeout

ssh-add -D will remove all identities (i.e. keys) from your ssh agent, but requires that you remember to run it manually once you're done.

That's where the second option comes in. Specifying -t when adding a key will automatically remove that key from the agent after a while.

For example, I have found that this setting works well at work:

ssh-add -t 10h ~/.ssh/key

where I don't want to have to type my ssh password every time I push a git branch.

At home on the other hand, my use of ssh is more sporadic and so I don't mind a shorter timeout:

ssh-add -t 4h ~/.ssh/key

Making these options the default

I couldn't find a configuration file to make these settings the default and so I ended up putting the following line in my ~/.bash_aliases:

alias ssh-add='ssh-add -c -t 4h'

so that I can continue to use ssh-add as normal and not have to remember to include these extra options.
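An alternative sketch (my own, not from the original post): in bash, a shell function instead of an alias achieves the same thing, with the advantage that functions are also expanded in non-interactive shells where aliases are not by default, and extra arguments still pass through:

```shell
# Wrap ssh-add so -c and -t 4h are always included; "command" bypasses
# the function itself and runs the real ssh-add found on the PATH.
ssh-add() { command ssh-add -c -t 4h "$@"; }
```

After sourcing this from ~/.bashrc or ~/.bash_aliases, `ssh-add ~/.ssh/key` behaves like `ssh-add -c -t 4h ~/.ssh/key`.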

13 April, 2019 01:45PM


Lucas Nussbaum

Introducing Debian Trends: historical graphs about Debian packaging practices, and “packages smells”

This is a slightly edited copy of the mail sent to debian-devel@.

TL;DR: see https://trends.debian.net and https://trends.debian.net/#smells.

Following this blog post I did some work on setting up a proper framework to graph historical trends about Debian packaging practices. The result is now available at https://trends.debian.net/, and I’m confident that I will be able to update this on a regular basis (every few months).

Additionally (and much more controversially I guess :-) ) I also added an analysis of “package smells”, such as “not using dh”, “not using a recent debhelper compat level”, “not using a 3.0 source format”, etc. I understand that in some cases there might be good reasons to keep those “smells”, but I find it valuable to have them presented in a more actionable way to fix the cases that should be fixed. So there’s a list of smells, sorted by maintainer/uploader.

Given that Debian is currently frozen to prepare the buster release, this is a bad time to start fixing those smells, but I will send a reminder to debian-devel@ once buster is released. (It’s interesting to see how the number of smells plateaued during previous freezes).

13 April, 2019 08:29AM by lucas


Bits from Debian

Bursary applications for DebConf19 are closing in less than 72 hours!

If you intend to apply for a DebConf19 bursary and have not yet done so, please proceed as soon as possible!

Bursary applications for DebConf19 will be accepted until April 15th at 23:59 UTC. Applications submitted after this deadline will not be considered.

You can apply for a bursary when you register for the conference.

Remember that giving a talk or organising an event counts towards your bursary; if you have a submission to make, submit it even if it is only sketched out. You will be able to add details later. DebCamp plans can be entered on the usual Sprints page on the Debian wiki.

Please make sure to double-check your accommodation choices (dates and venue). Details about accommodation arrangements can be found on the accommodation page.

See you in Curitiba!


13 April, 2019 08:15AM by Laura Arjona Reina, Paulo Santana


Bdale Garbee

Tenth Anniversary of AltOS

In the early days of the collaboration between Bdale Garbee and Keith Packard that later became Altus Metrum, the software for TeleMetrum was crafted as an application running on top of an existing open source RTOS. It didn't take long to discover that the RTOS was ill-suited to our needs, and Keith had to re-write various parts of it to make things fit in the memory available and work at all.

Eventually, Bdale idly asked Keith how much of the RTOS he'd have to rewrite before it would make sense to just start over from scratch. Keith took that question seriously, and after disappearing for a day or so, the first code for AltOS was committed to revision control on 12 April 2009.

Ten years later, AltOS runs on multiple processor architectures, and is at the heart of all Altus Metrum products.

13 April, 2019 04:24AM

April 12, 2019


Keith Packard


SAMD21 USB Fixed on Windows 7

I pretty clearly missed some details when reviewing the USB traces for the SAMD21 driver because I didn't see the IN requests to the IN-Interrupt endpoint of the device. These are supposed to be NAK'd by the device because it never generates any data for this endpoint.

However, I managed to misconfigure the hardware registers controlling this endpoint and set it up as an OUT-Interrupt endpoint instead. This causes the hardware to simply ignore any IN requests directed at the endpoint, so the host never sees a NAK packet. All three operating systems (Linux, Mac OS X and Windows) eventually give up trying to communicate with the endpoint, which causes no ill effects on Linux and Mac OS X.

On Windows, it causes the driver to stop sending IN packets to the regular IN-Bulk endpoint after it receives one packet. “It’s inconceivable!” I hear you cry, but then all I can say is “It’s Windows”, where the inconceivable becomes reality.

All Fixed.

With that minor mistake fixed, Snek on the Metro M0 Express board is running great with all three operating systems.

12 April, 2019 05:20AM


Shirish Agarwal

Hackergram journey part 1- The journey Begins

I will be writing about the journey and my musings in two parts, the journey and the return. I was supposed to write about WIDSPune (Women in Data Science, Pune), an event held in Pune last month. While I do have a lot of thoughts and observations on that topic, I will do that later, as this, I feel, is important, not just for me but probably for everybody who has been afraid to ask for help, especially when feeling weak, either physically or emotionally. This might be of some help to some of the brothers and sisters who were at the Christchurch shooting. While I didn’t go through anything like what you experienced, not by a long shot, I do hope that you ask for help as I did, and that it helps you emerge stronger than before. Before starting, there are many people I have to thank for the experience and for making things nice for me even though I wasn’t able to contribute as much as I wanted.

Thank you and Sorry

The people, in no particular order, are Arjun Venkatraman and Deepta Arjun, for being the voices of reason; Gauri and his team of jungle camp warriors, for looking after me; and Karan, Sanjay, Vijay and Suryaveer of the Hackergram team, who looked after various aspects of travel and such. I cannot thank you all enough. I also need to thank Civicus (Innovation for Change) for funding the travel of all the participants. And I need to say sorry to everyone, as because of me they suffered and had to divert a lot of their energy. Most of all Gauri, who was with me the whole night. He is as tough as they come.

The E-mail

Below is an e-mail I received from my friend Arjun roughly one and a half months back.


I’m excited to share that Hackergram has collaborated with Innovation for Change to organize a workshop in India this year. I’m writing to check your availability to attend the Hackergram Innovation Lab 2019 in Sattal, Uttarakhand, India. The theme of the workshop is “Safe Civic Spaces”. The aim of the event is to create a platform for learning and dialogue among participants (individuals, CSOs, communities) with a diverse background to increase collaboration and equip them with the innovative and creative force for expression, reflection, dialogue on safe civic spaces.

We are inviting one participant per organization, however if you feel there is more than one person from your organization who would like to attend, please feel free to get in touch with us. Also, please feel free to share this email with anyone else who might be interested! We will be inviting a total of 20 participants on the basis of first show of interest, an online survey and verbal conversation.

Costs of participation for attendees including food, travel and accommodation will be covered by I4C South Asia Hub for the dates of the event, 8th to 10th March 2019. Participants will assemble by the 7th March and leave on the morning of 11th March after the completion of the event.

The event would span the following main themes :-

a. Creating and Holding Spaces – This theme would cover both physical and virtual spaces, what it takes to create a space, what effort and resources are required to maintain a space and how those can be procured or generated.

b. The Changing Role of Information :- This theme would cover the different ways in which information, both personal as well as external, impacts individual as well as community safety. This includes access rights, skill challenges, liability, privacy and identity.

c. Diversity – A Spectrum of Colours :- This theme is about the range of different expressions that contribute to society and therefore influence how safe we feel and how safe we are. This would include diversity in gender, sexuality, orientation, religion, caste and community.

d. Securing The Future :- This theme is about concerns that impact our individual and shared futures. Safety and care for minors and dependent adults, rehabilitation of people who have gone through distress, sustainable living practices and livelihood creation would be some of the areas covered by this theme.

e. Fixing the present :- This theme revolves around behaviour, process and systemic change that is possible and required in the present to enable positive transformation both at an individual and social level.

Of the above, I thought I could contribute quite a bit to points b. and d. While I knew Arjun personally, I still put up a formal application so that if I got selected, it would be on merit alone and not just because I knew Arjun and Deepta. Another point which is and was much more pertinent is that this Government had turned the clock back on many of the achievements of social groups, non-profits and others in bringing awareness and a scientific outlook to society in general. While I/we do choose to live in a techie bubble in a way, I felt this was a way in which I could contribute and add to the knowledge of other participants about free software and Debian, and see if I could help them. The great thing about Debian is not just the technology but also the politics and philosophy behind it, which attracts volunteers from all over the world, but that is another topic for another day altogether 🙂

After my confirmation, in subsequent conversations, I came to know that the plan was to fly all the participants to New Delhi Airport (T1) and from there travel down to Sattal by bus or something similar.

I had a look at the distance between Delhi and Sattal on the web, and various websites suggested a 4-5 hour journey. Just a couple of years ago I had traveled from Delhi to Chandigarh (Panjab University) when I was consulting for Hamara, and we had done the whole thing in two to two and a half hours of driving even after taking 2-3 breaks. We had started around 1300 hrs and were at Panjab University by around 1600 hrs. So as the distance seemed similar, I guessed the travel would be of similar length and the road would be good, which proved to be part of my downfall. I had also been under the influence of Swadesh Darshan, a Government of India initiative which promised that all roads to places of tourism would be great. My experience of roads in Pune and Southern India had all been great and short, and even my forays from Delhi to Dharamshala a few years back, in a Delhi DTC bus, had been good a number of times, although that journey is long.


The first hiccup happened when the dates clashed with Debutsav Delhi. I had put up a couple of proposals for Debutsav Delhi and they were accepted. As the dates of the trip were overlapping, I decided to withdraw from Debutsav Delhi so others could have a chance to share their experience of Debian. Due to some issues on the hackergram side, it was decided to change the dates, but I didn’t reapply for Debutsav as I didn’t want to travel much and had a lot of pending work at my end. Finally the dates were fixed for 26th March to 30th March at Sattal, where we would meet and figure things out. As the travel dates came closer, the hackergram group set up an e-mail group so we could discuss, share ideas and get to know each other before and after the meetup. So I started trying to figure out who people were and where their interests lay, in order to forge bonds and find common interests before the meetup. It does make things easier.

Journey from Pune to Delhi

The journey from Pune to Delhi was uneventful except for the fact that Ola is not reliable, as can be seen from my tweets with Ola support. I started way early as I wanted to see some of the new things Pune Airport had done: they had supposedly added a new apron area and a couple of new retail outlets, as well as a bhel puri stall. I didn’t see any extension to the apron on either the landside or the airside, and the bhel puri stall guy was selling masala chai rather than bhel puri, although the chai was good. I was lucky to get an auto and landed at Pune Airport at around 0100 hrs. As luck would have it, I spotted Spotty B., a friend who has long been into finance and stock markets. While his real name is something else, he likes to be known as Spotty B. Luckily, on that date there were no queues; I have had trouble at Pune Airport before, especially while traveling with Indigo, and other people have tweeted about similar issues. Either luckily, or because of the increased ticket prices due to the 737 Max grounding and the Jet Airways issues, there wasn’t much of a crowd, which surprised me. My friend was also traveling Pune – Delhi to attend a financial convention and a workshop, but he was going via Air Asia. Still, we got to talk until 0400 about the economy, scams, financial markets and where they were headed, specific companies, etc., when we decided to break up as we had to do the whole security routine; I had to do the baggage drop and so on, and the flight was at 0500 hrs. I printed my boarding pass, did the baggage drop, ran through security and was at the boarding gate with 45 minutes still to go. Anyway, time went slowly as I didn’t know the couple of people who were to join from Pune for the hackergram meetup.

The plane ride

We started about 5 minutes late (taxiing on the runway), and on the plane I met Tania and Renita, who came late; they run a non-profit which looks into women's safety issues. While Pune at one time was pretty safe for women, it has grown a lot, and while it is still safe, it was good to know that there are women who are trying to make spaces safe for women. What goes without saying is that any place which is safe for women is also safe for men, kids and the elderly, so it is much more an inclusive thing than an exclusive one. Anyways, the plane was an Airbus A320, a single-aisle plane. I remember asking a crew member if we were full and she replied yes, 180, which is the A320 limit. Even though we hovered a bit above Delhi Airport, we still touched down on time. As I had to collect my baggage from the pickup belt, I went quickly and tried to figure out where the Indigo baggage was going to be unloaded. I also freshened up and came out at Delhi T1. What had confused me was that in the last e-mail we were told that some participants were coming to T1 and some to T1-C; it was only later, when traveling back, that I understood they both mean the same thing and would be demolished later to make a single, larger T1 terminal.

Travel to Sattal by Xylo

Everyone arrived by around 10 a.m. and we were able to move out of the airport. The hackergram team had booked a Xylo and a Traveller van. While in the plans shared with us they had mentioned the travel as 8 hours, I had assumed they were padding it, as we usually do whenever we travel with a group of people. I wanted to be in the Xylo as I wanted to see nature; the center of gravity is lower in the Xylo than in the Traveller, you are closer to the earth, and you see more from it. Also it had fewer people and I wanted to be more with nature. We anyways had 3 days to bond with each other. Unfortunately, this turned out to be a bad decision. I had no clue the road from New Delhi to Sattal was an off-roader's delight. To add to the list of things, the driver we had was constantly bugging us about how bad the travel company he had come from, Cleartrip, was. If I had known that the road would be so bad, I would have made other arrangements for myself. While we did stop at a couple of places, riding for 9-10 hours in a Xylo which either had no suspension or was on a road so bad, I had to control my water intake. By the time we reached Sattal, I was in no shape to travel any further.

The Trek

After dismounting from the Xylo, my legs were thoroughly shot, cramped after being in the Xylo for so many hours. While the hackergram team had shared beforehand that there would be a short trek, I had counted on a 3-4 hour ride, not an 8-9 hour bumpy one with my legs shot. Also, because I hadn’t had much water, I was both thirsty and hungry. I met Arjun and the team from the jungle camp and I knew I could go no further. I asked Arjun and he shared that it would be a short 20-minute trek. While I was embarrassed, I did reach out to Arjun and asked him to get me one of the more trained personnel, as I wasn’t sure I would be able to manage it.

I have to say that Suryaveer had advised me to stay in a hotel for the night and do the trek the next day, but as I was unfamiliar with the place, my logic was that it was better to stay with the group than at a hotel, not knowing how far it was or how I would make the journey back. Also, my mobile was not working for some unknown reason (I was able to receive calls but not make them), which also put a hitch on things. So while Arjun stayed behind, as the team in the Traveller was still on the way, we decided to go down.

It probably became quickly apparent, both to the people who were with me and to the people from Jungle Camp, that I was running on plain fumes. Somehow, I was able to get down to the camp with two people helping me. I was literally using trees as a support structure and somehow made my way down to the jungle camp. By the time I made it to the camp, I was beyond tired. I was also sticky from all the sweat, and this is where perhaps I made my second wrong decision. I opened my jacket to let the wind in to dry out the sweat. Soon I was freezing and put the jacket on again, although it was still wet from my sweat. While I did drink slowly, I was starving and somehow raised myself to have dinner. The dinner was delicious, although I didn’t know my nightmare was about to start.

The Night and the Nightmare

Somehow I made it to the tent that was allotted to me after the dinner. While the dinner was delicious, I wasn’t a pretty picture. I hoped that I would have a long sleep, which usually clears things up. But this was not to be. Sometime later, I felt an urgent need to go to the toilet. I had somehow convinced Gauri to share the tent with me; maybe intuition, I don’t know. So with Gauri and another gentleman from the Jungle Camp, I went to the toilet and retched, vomited and shat the whole time. What I didn’t know was that I was losing my salt and water. Gauri tried his best. I even called mum at home at some unearthly hour and told her what had happened, while at the same time I didn’t want to worry her much, hence I didn’t go into many details.

Anyways, it would have just troubled her and there was nothing she could have done from so far away. I had to go to the toilet many times to relieve myself and had become pretty weak. Gauri tried all kinds of medicines to see if I could get some relief. I realized how important it would have been to have taken my med kit instead of just a couple of medicines for such an eventuality. I knew of a sleeping pill which would help me sleep, but I didn’t remember its name at that time. Also, brand names of the same medicine differ between Pune and up north. And sleeping pills are not given over the counter but need a prescription, which I hadn’t brought. What was needed was to know the formulation, which I didn’t know. And in any case, a part of me was also seeing the irony and drama in all this. Somehow I survived the night. In order to be comfortable, I asked Gauri to help me change clothes, whereupon he came to know that all the things I was wearing were sticky and wet, which in his opinion only added to the problems. While I was embarrassed at being nude to change clothes, I was in survival mode. I couldn’t sleep, and it could have been for any number of reasons, fear possibly one of them. Simple, irrational fear that if I slept I might never wake, and I didn’t want any incident to mar the work that Arjun was doing. Also, my stomach was full of gas, and because I was so weak I couldn’t walk it off.

The Morning

Somehow, the morning came around and with it came a whole set of new problems. Probably Gauri had realized by that time that I had lost all my salt. I was given hot water to drink, and while that helped, it didn’t help a lot. They tried to help me eat something, but anything I tried to eat felt like leather. It had just no taste. I even joked with a couple of the participants about the irony that here we were, at perhaps one of the most beautiful locations on Earth, and how sick I was; the black humor was inescapable for me. Gauri showed me two blue birds which for a moment I thought were plastic bags but turned out to be blue birds. How I wish I could have captured them in a photograph to share with you all. I do have some pictures of the Jungle Camp; the copyright of all those images lies either with Jungle Camp and/or its participants.

Jungle Camp from afar

While I have no clue who took that particular photograph it does show what it was all about. There is another one of Deeptha who is inspecting the inside of a jungle tent.

Deeptha showing the inside of a tent

Important Note – A pretty wise person had shared this pearl of wisdom with me years ago: most Indians are fatalistic by nature. And this is true of popular imagination as well, as can be seen from the Final Destination movies. While we had signed a legal liability waiver, I also needed to free Arjun and the whole team via an emotional waiver. I called up home and shared in a somewhat loud voice that they had taken excellent care of me (the part left unsaid being that if anything did happen to me, they should not be held liable). I know the guilt I had faced when one of my cousin sisters, with whom I was pretty close, had taken the extreme step a decade and more ago. I have also heard of friends and even enemies who died, and was shocked to find a deep sadness even when a so-called enemy had passed away. If things had to go south, I just wanted myself to be responsible for it, not anybody else. From the talk with Arjun, it had also occurred to me that there was a division of loyalties between me and the Meetup/Camp, and I didn’t want him to let go of the important work he was doing.

Needless to say, it was evident by 10-11 a.m. that I was not going to get any better, at least not at that location. As shared before, I had a heart-to-heart talk with both Deeptha and Arjun, and they shared that it was better for both them and me that I go to the town, as medical help was much more easily available there. I hadn’t known or realized how much we depend on the salt in our bodies for taste.

I had asked the organizers of the jungle camp to make dal rice, which they did, and it was too salty for me. They tried it twice and it was still too salty for me. They tried giving me salad, and somehow I was only able to eat the carrot and not the cucumber or the Armenian cucumber, even though I love it more. A decision was made to move me to a hotel so I could get better and then fly off the next day. While I was torn between remaining there, taking my chances of getting better and contributing to the group efforts, I knew in my heart this was the better decision. I didn’t know if I had brushed my teeth or not. I asked if it would be OK to take a bath but was told it would be a bad idea. Gauri somehow slept for 2-3 hours, and then we took another route which was much easier than the one we had attempted the night before. This also involved a short boat ride, which I would have thoroughly enjoyed had I not been so weak. Soon we were on the other bank, waiting for Sanjay and Karan of the hackergram team to show up to take me to the hotel. My weakness was so much that I couldn’t tie my own shoe-laces and had to avail the help of somebody from the Jungle Camp team to tie them.

The Hotel – Lake Heritage

This is possibly the best thing they did and could have done under the circumstances. While I have already thanked Sanjay, Karan and Arjun, I still need to thank them separately again for this wonderful medicine. While the hotel didn’t have much in the name of facilities, it had three very important features which helped in the recovery process: the room was clean, the bed was nice, and the view was to die for. It was also very appropriate, as I could reflect my own feelings on the lake and vice versa. Here are some photos of the lake and its surroundings so you have an idea of how beautiful the place was.

Lake Heritage outside, the next day

This is where I was kept for 24 hours. While the locks didn’t function and half the lights didn’t work (it is the off-season anyway), it still gave me the peace I needed.

View of Bhimtal lake from hotel balcony
Map of the place
Blurred map of the map at Hotel
View of Bhimtal lake at night
Another view of Bhimtal lake at night, with shops opened.
The same lake seen in the morning

Interestingly, I came to know that the building in the middle of the lake is an aquarium and people still go there. If I had been better I would have explored the lake and all the nearby roads. The lake, I felt, was very much like me: silent on the surface and bubbling much underneath.

I wish I had taken more and better photographs of the lake at night, as the reality was much more beautiful than these photographs are able to show. They don’t do justice to the beauty of the lake.

Tamed ducks at Lake Bhimtal

While I didn’t want to, I simply had to venture out to look at these beautiful ducks, which were kept by the boat people. I also wished I had a better camera to take a picture of the map, of which I was only able to capture one part. Maybe somebody could share the rest with me.

I had given my word to the hotel's branch manager that I would share a photo of the hotel, and I have done that. In addition, I had come to see that there was a bus stop just in front of the hotel, so I was able to see young couples, school children, wives, husbands, life happening by, but all without the sounds that mar our cities. The irony even in this situation, in my conversations with Sachin and Karan, was that while they were curious about Pune, Mumbai etc., I was equally or more curious about Bhimtal and thereabouts. If luck would have it, I might come there again, taking some precautions which I will share in my next blog post.

12 April, 2019 12:41AM by shirishag75

April 11, 2019

Joerg Jaspert

SSH known_hosts merge by key

So I just ran again into ssh's annoying behaviour of storing the same host key a trillionth time in my .ssh/known_hosts file. And then later on, when it changes (for whatever reason), complaining over and over until one has manually fixed all those tons of lines.


So I came up with a little hacky python script that takes one or more files in the known_hosts format and merges them by key. So you end up with one line per key, and as many hostnames, IP addresses and whatnot in front.

Note: This does not work with that annoying HashKnownHosts format, and I have no idea what ssh will say if you use one of those @ tags in there. The first I think is crap, so I don’t use it anywhere; the second I never had to use, so no idea if it breaks or not.
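For illustration (hypothetical hostnames, truncated keys), two lines that carry the same key merge into a single line with all names and addresses in front:

# before: the same ed25519 key stored twice
alpha.example.org ssh-ed25519 AAAAC3Nza...KEY1
beta.example.org,192.0.2.7 ssh-ed25519 AAAAC3Nza...KEY1

# after merging by key: one line per key
alpha.example.org,beta.example.org,192.0.2.7 ssh-ed25519 AAAAC3Nza...KEY1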


# Copyright (C) 2019 Joerg Jaspert <joerg@debian.org>
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# .
# 1. Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
#    notice, this list of conditions and the following disclaimer in the
#    documentation and/or other materials provided with the distribution.
# .

import argparse

parser = argparse.ArgumentParser(
    description='Merge ssh known host entries by key',
    formatter_class=argparse.RawDescriptionHelpFormatter,
    epilog='''\
Merges entries in given ssh known_hosts files based on the key. One can also merge from multiple files.
The files should NOT use the HashKnownHosts feature.''')

parser.add_argument('files', type=str, nargs='+', help='files that should be merged')
parser.add_argument('-o', '--output', type=str, nargs='?', help='output file (default is STDOUT). Only opened after merge is complete, so can be used for inplace merge.')
args = parser.parse_args()

if args.output:
  # Buffer the result so an input file can safely be reused as the output file
  import io
  output = io.StringIO()
else:
  import sys
  output = sys.stdout

hostkeys = {}
for kfile in args.files:
  with open(kfile) as kf:
    for line in kf:
      line = line.rstrip()
      # Skip blank lines and comments
      if not line or line.startswith('#'):
        continue
      line_splitted = line.split(' ')
      hosts = line_splitted.pop(0).split(',')
      key_type = line_splitted.pop(0)
      key = line_splitted[0]
      if not key in hostkeys:
        hostkeys[key] = {"key_type": key_type, "hosts": {}}
      # Store the host entries, uniquify them
      for entry in hosts:
        hostkeys[key]["hosts"][entry] = 1

# And now output it all
for key in hostkeys:
  output.write('%s %s %s\n' %
               (','.join(hostkeys[key]["hosts"]), hostkeys[key]["key_type"], key))

if args.output:
  with open(args.output, 'w') as f:
    f.write(output.getvalue())
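As a quick standalone sanity check of the merge-by-key idea, here is a minimal sketch (with made-up hostnames and truncated keys) that applies the same grouping to a few sample lines:

```python
# Minimal demonstration of merging known_hosts lines by key:
# two entries that share a key collapse into one line listing both hosts.
lines = [
    "host-a ssh-ed25519 AAAAC3KEY1",
    "host-b,192.0.2.5 ssh-ed25519 AAAAC3KEY1",
    "host-c ssh-rsa AAAAB3KEY2",
]

merged = {}
for line in lines:
    hosts, key_type, key = line.split(" ", 2)
    entry = merged.setdefault(key, {"hosts": [], "key_type": key_type})
    for host in hosts.split(","):
        if host not in entry["hosts"]:
            entry["hosts"].append(host)

for key, entry in merged.items():
    # e.g. "host-a,host-b,192.0.2.5 ssh-ed25519 AAAAC3KEY1"
    print("%s %s %s" % (",".join(entry["hosts"]), entry["key_type"], key))
```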

11 April, 2019 06:39AM

Elana Hashman

I'm stepping down as maintainer of auditwheel

For the last three years I've been a regular contributor and core maintainer of auditwheel, a Python Packaging Authority (or "PyPA") tool used to build portable binary/extension wheels on Linux. auditwheel's "show" command allows developers to check if their Python wheel's external symbol dependencies comply with the manylinux policies, and its "repair" command enables developers to more easily build policy-compliant wheels inside an appropriate environment like a manylinux Docker image without having to make significant changes to their build processes.
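As a sketch of how those two subcommands are typically invoked (the wheel filename here is hypothetical):

```shell
# Inspect a wheel's external shared-library dependencies and report
# which manylinux policy (if any) it satisfies:
auditwheel show dist/mypkg-1.0-cp37-cp37m-linux_x86_64.whl

# Vendor external shared libraries into the wheel and retag it so it
# complies with the manylinux2010 policy (run inside a manylinux image):
auditwheel repair --plat manylinux2010_x86_64 dist/mypkg-1.0-cp37-cp37m-linux_x86_64.whl
```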

Most recently, at the last Python Packaging Authority sprints in November 2018, I finished work to support the manylinux2010 platform tag in auditwheel. After extensive testing, this functionality was released in version 2.0 in January of 2019.

But why?

auditwheel is a very technically challenging tool to maintain. It requires deep knowledge of dynamic linking, ELF binaries, and symbol versioning on the Linux platform. While this is very exciting technical work, it's not the sort of project that I can work on sustainably in my free time and off hours. I'm currently the only active auditwheel maintainer, and I don't feel like I can give the project the attention it deserves on an ongoing basis, especially given community interest in updating the manylinux specification and supporting new platform policies.

On the bright side, concluding my work with auditwheel and manylinux will allow me to dedicate more quality time to other FOSS projects I'm excited about! In a personal capacity, I have just started a two year term as an individual member of the Open Source Initiative Board of Directors, and I will continue my work in Debian. In a professional capacity, I recently started a new job at Red Hat and I intend to significantly increase my upstream Kubernetes and OpenShift contributions over the next year.

I'm making this announcement now to avoid surprising anyone at PyCon, and I'd love to spend my time at the conference working on a transition plan. I will be giving an introductory talk about auditwheel and the manylinux toolchain if you're interested in learning more about the space and want to get involved! At PyCon, I hope I will have the opportunity to provide some outgoing input on the future of auditwheel and manylinux, especially after the bumpy rollout of manylinux2010.

So long and thanks for all the fish 🐟

11 April, 2019 02:15AM by Elana Hashman

Jelmer Vernooij

Silver Platter

Making changes across the open source ecosystem is very hard; software is hosted on different platforms and in many different version control repositories. Not being able to make bulk changes slows down the rate of progress. For example, instead of being able to actively run a script that strips out an obsolete header (say "DM-Upload-Allowed") across all Debian packages, we make the linter warn about the deprecated header and wait as all developers manually remove it.

Silver Platter

Silver-platter is a new tool that aids in making automated changes across different version control repositories. It provides a common command-line interface and API that is not specific to a single version control system or hosting platform, so that it's easy to propose changes based on a single script across a large set of repositories.

The tool will check out a repository, run a user-specified script that makes changes to the repository, and then either push those changes to the upstream repository or propose them for merging.

It's specifically built so that it can be run in a shell loop over many different repository URLs.
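A minimal sketch of such a loop (the repository URLs are made up; the echo just prints each command for illustration, so drop it to actually run svp):

```shell
# Print the svp invocation for each repository URL on the list;
# remove the leading "echo" to actually execute the runs.
while read -r url; do
    echo svp run ./fix-fsf-address.sh "$url"
done <<'EOF'
https://github.com/example/repo-one
https://github.com/example/repo-two
EOF
```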


As an example, you could use the following script (fix-fsf-address.sh) to update the FSF address in copyright headers:


perl -i -pe \
'BEGIN{undef $/;} s/Free Software
([# ]+)Foundation, Inc\., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA/Free Software
\1Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301  USA/smg' *

echo "Update FSF postal address."

Say you wanted to create a merge proposal with these changes against offlineimap. First, log into GitHub (this needs to be done once per hosting site):

$ svp login https://github.com

To see what the changes would be without actually creating the pull request, do a dry-run:

$ svp run --dry-run --diff ./fix-fsf-address.sh https://github.com/offlineimap/offlineimap
Merge proposal created.
Description: Update FSF postal address.

=== modified file 'offlineimap.py'
--- upstream/offlineimap.py 2018-03-04 03:28:30 +0000
+++ proposed/offlineimap.py 2019-04-06 21:07:25 +0000
@@ -14,7 +14,7 @@
 #    You should have received a copy of the GNU General Public License
 #    along with this program; if not, write to the Free Software
-#    Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+#    Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301  USA

 import os
 import sys

=== modified file 'setup.py'
--- upstream/setup.py       2018-05-01 01:48:26 +0000
+++ proposed/setup.py       2019-04-06 21:07:25 +0000
@@ -19,7 +19,7 @@
 #    You should have received a copy of the GNU General Public License
 #    along with this program; if not, write to the Free Software
-#    Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+#    Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301  USA

 import os
 from distutils.core import setup, Command

Then, create the actual pull request by running:

$ svp run ./fix-fsf-address.sh https://github.com/offlineimap/offlineimap
Reusing existing repository https://github.com/jelmer/offlineimap
Merge proposal created.
URL: https://github.com/OfflineIMAP/offlineimap/pull/609
Description: Update FSF postal address.

This would create a new commit with the updated postal address (if any files were changed) and the commit message Update FSF postal address. You can see the resulting pull request here.

Debian-specific operations

To make working with Debian packaging repositories easier, Silver Platter comes with a wrapper (debian-svp) specifically for Debian packages.

This wrapper allows specifying package names to refer to packaging branches; packaging URLs are retrieved from the Vcs-Git header in a package. For example:

$ debian-svp run ~/fix-fsf-address.sh offlineimap

to fix the same issue in the offlineimap package.

(Of course, you wouldn't normally fix upstream issues like this in the Debian package but forward them upstream instead)

There is also a debian-svp lintian-brush subcommand that will invoke lintian-brush on a packaging branch.

Supported technologies

Silver-Platter currently supports the following hosting platforms:

It works in one of three modes:

  • propose: Always create a pull request with the changes
  • push: Directly push changes back to the original branch
  • attempt-push: Attempt a push, and fall back to propose if the current user doesn't have permission to push to the repository or the branch.


There is a Silver Platter repository on GitHub. Silver Platter is also available as a Debian package in unstable (not buster).

More information

For a full list of svp subcommands, see svp(1).

11 April, 2019 01:04AM by Jelmer Vernooij

April 10, 2019

hackergotchi for Gunnar Wolf

Gunnar Wolf

#FLISOL at Facultad de Ingeniería, UNAM - April 25 and 26 @lidsol @comunidadfi @die_fi_unam @FIUNAM_MX #FIUNAM

After several years of absence, LIDSOL (Laboratorio de Investigación y Desarrollo de Software Libre) is once again organizing a FLISOL!

And what is a FLISOL? It is the Festival Latinoamericano de Instalación de Software Libre — an activity that started off as an installfest, but went on to become a full set of conferences. Set of conferences? Yes, because FLISOL happens more-or-less simultaneously (the official date is the last Saturday of April, but there is tolerance for it to happen up to a couple of weeks around it) all over Latin America. This year, FLISOL will be held at different cities in Germany, Argentina, Bolivia, Brazil, Chile, Colombia, Costa Rica, Cuba, Ecuador, Spain, El Salvador, Guatemala, Honduras, México, Nicaragua, Panamá, Paraguay, Perú, Rep. Dominicana, Uruguay and Venezuela.

So, besides the install fest that will be continuously happening on tables at the entrance of the auditorium, we will have a very interesting set of talks for Thursday and Friday, April 25 and 26, at the main auditorium of Facultad de Ingeniería (Auditorio Javier Barros Sierra):

A rough translation for the talks (yes, this is aimed at a local audience, but my blog is in English for $reasons ;-) ) follows:

Thursday, April 25
Hackers and Free Software, the immune system of Internet (Hiram Camarillo)
The Debian project: Beyond the mother of all distributions (Gunnar Wolf)
Advantages of communities, and how to contribute (Luis E. Jiménez Robles)
DeepDream with TensorFlow (Alejandro Hernández)
Science + Free Software = <3 (Diego Barriga)
Modbus Protocol: A present risk in Industrial Control Systems (Paulo Contreras Flores)
Friday, April 26
It's not your friend, it's proprietary software (Paul Aguilar)
Introduction to Fedora (Efrén A. Robledo)
Fedora Containers Lab (dockerless containers) (Alex Callejas)
Programmers sought: How to save the world without dying in the attempt (Irene Soria)
git + CI + CD = Devops (Andrés Hernández)

10 April, 2019 06:36PM by gwolf

Reproducible builds folks

Reproducible Builds: Weekly report #206

Here’s what happened in the Reproducible Builds effort between Sunday March 31 and Saturday April 6 2019:

Packages reviewed and fixed, and bugs filed

Test framework development

  • We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. The following changes were done this week:

  • Chris Lamb:
    • Avoid double spaces in IRC output, eg. “Failed http://example.com/”. []
  • Holger Levsen:
  • Mattia Rizzolo:
    • Apply flake8 to the email2irc.py script. []
    • Install the python3-yaml library everywhere as it is needed by the deploy script. []
    • Special-case the src:debian-installer package as it has “special” download requirements. [] (see #926242)
    • Add the new reproducible-builds.org mail server to our Munin configurations. []
    • Drop the old Alioth OpenSSH key from Jenkins’ authorized_keys. []
    • Node maintenance. []

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Daniel Shahaf, Holger Levsen, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

10 April, 2019 08:29AM

April 08, 2019

Sylvain Beucler

Debian LTS - March 2019

Debian LTS Logo

In February I had requested to join the Debian LTS project, which extends the security support for past Debian releases, as a paid contributor.
Kudos to Freexian for pulling this project off.

I was asked to demonstrate a full security update on my own (unpaid), which I did with 2 DLAs (Debian LTS Advisories):

  • freedink-dfarc: jessie-security update, applying my own path traversal security fix
  • phpmyadmin: jessie-security update, assessing 1 CVE as not affected and fixing another

Incidentally, every Debian Developer can make a direct security upload to jessie-security without prior validation (just follow the guide).


Following the spirit of transparency that animates Debian and Debian Security, here's my report for my first paid month.

In March, the monthly sponsored hours were split evenly among contributors depending on their max availability.
I got 29.5h, which I spent on:

  • nettle/gnutls: investigate local side-channel attack and conclude no-dsa / minor issue
  • symfony: helped test Roberto's update
  • sqlalchemy: jessie-security update for SQL injection, tested and discussed upstream's own backported patch
  • glib2.0: investigate denial of service and mark as no-dsa / not reproducible
  • ghostscript: investigate sandbox break and (lack of) test suite, and conclude we'll backport the next upstream release
  • pdns: jessie-security update for the 'remote' backend
  • Fixes/updates in dla-needed.txt, our (public) list of triaged security issues
  • Fixes in LTS wiki, templates and scripts, in particular wrt https://www.debian.org/lts/security/ integration

If you'd like to know more about LTS security, I recommend you check:

08 April, 2019 02:40PM

hackergotchi for Neil McGovern

Neil McGovern

GNOME ED Update – March

It’s time for another update on what the GNOME Foundation has been up to in March.

We attended a couple of events this month. Firstly, we were at SCaLE 17x, which took place in Pasadena, California from 7th–10th March. It was a busy conference with a large number of attendees. We had a booth and a warm reception from visitors; we sold a large number of t-shirts and received donations at the booth. Talks were given by Matthias Clasen on Containerized Desktops for Fun & Profit and Christian Hergert on Modernizing Desktop Linux Application Development.

After this, we were at the Free Software Foundation‘s event, LibrePlanet. As always, it’s a really community driven conference with a load of great talks. Given our historical association with the FSF and the GNU project, we received a really warm welcome with multiple people saying it was great to see us attend in person!

Another notable development is around our own conference, GUADEC. Our sponsorship brochure has now been published and we’re seeking sponsors! If you or someone you know is interested, then please get in touch :) I’ve heard that the Call for Papers will be opening really soon, so it’s time to start thinking about giving a presentation at the event, if you’re not already. After unfortunately having to miss last year, I’m really looking forward to attending and meeting GNOME contributors and users – I’ll be attending, are you?

I’m actually writing this from FOSS North, where we also have a stand. We’re also going to be attending LinuxFest Northwest later this month, where Sriram Ramkrishna will be joined by our new Development Coordinator.

That’s right, we’ve managed to find an excellent candidate to take on the role of Development Coordinator for the Foundation, making sure we can sustainably support the number of staff we now have, and they have accepted the role. Expect an email to foundation-announce in the next couple of weeks, introducing them.

We’ve made some really good progress on the Foundation’s Annual report, and we’re in the design phase. This means that it won’t be long until we get this finished, and this time well before GUADEC :)

Finally, we’re trialling the use of a Discourse instance as a new way of communicating. The gtk-devel list has moved over, and gnome-builder is also keen to use it rather than the mailing list. While rolling it out, it’s been really interesting to see people who wouldn’t usually talk to us use the platform. I know that sometimes email works well with people’s workflows, and fortunately you can interact with Discourse just via email, if you so choose. We’ll see how it goes before I suggest other people move off mailman, but so far results are very positive.

That’s all for now, I’ll write again next month.


08 April, 2019 10:57AM by Neil McGovern

hackergotchi for Daniel Silverstone

Daniel Silverstone

A quarter in review

The 2019 plan - First-quarter review

At the start of the year I blogged about my plans for 2019. For those who don't want to go back to read that post, in summary they are:

  1. Continue to lose weight and get fit. I'd like to reach 80kg during the year if I can
  2. Begin a couch to 5k and give it my very best
  3. Focus my software work on finishing projects I have already started
  4. Where I join in other projects be a net benefit
  5. Give back to the @rustlang community because I've gained so much from them already
  6. Be better at tidying up
  7. Save up lots of money for renovations
  8. Go on a proper holiday

At the point that I posted that, I promised myself to do quarterly reviews and so here is the first of those. I am a few days late/early depending on your view of things, but today seems a good day to do this since it's the day before my 39th birthday.

1. Weight loss

I started the year at 101.7kg, and when I posted my plans I was 101.3kg, which wasn't the best start to the year. My goal is 80kg this year, and when I started I figured that'd be pretty easy to achieve given my previous six months or so of consistent weight loss. Sadly that has not proven true: the lighter you get, the harder it is to keep losing weight.

I started the year out pretty strong, reaching 90kg in early March. Sadly my body seems to really like 90kg. It's not really a "set point" in that I can easily gain weight from there, but losing weight from 90kg has proven to be really tough for me. Since that first time dipping to 90kg in early March, I've been up as high as 91.5kg and down as low as 87.5kg, though I'm currently 88.6kg.

Previously, on weight plateaus, I've been able to kick-start loss again either by doing a stronger fast for a few days, or by eating more for a week and then returning to my loss-level diet. Neither has worked this past month.

This month marks both my birthday and the anniversary of when my father was very unwell before ultimately dying last year, so I figure I'll end this plateau with a huge pig-out on protein tomorrow for my birthday, and then go somewhat strict with myself for a few weeks to try and get things going again. By my goal estimate I have basically 8 months to lose 8kg, but I ought to be able to do that in 4 months or even less if I try harder.

Other fitness goals have been sidelined in favour of the couch-to-5k effort (see below) but perhaps in my next review things will change…

I give myself a solid "B" for this.

2. Couch to 5k

This was started, and I have been doing my best. It has been a really interesting experience since it has taught me that the "training wall" really exists. One week a while ago, I hit it really hard, and had to stop my run about 10m in, because I simply could not make my legs go any further. Since then I've reduced the amount of non c25k training I do, and that has allowed my body the time to adapt to running more effectively.

I managed a 5k run for the very first time last Thursday, and I'll be attempting another on Thursday of this week. I'm amazed at my progress and super-proud to say that I have pretty much nailed this one down. I hope that at the next check-in I'll be able to say that I've met my goal (a parkrun in May, running the whole thing) and will be setting a different fitness goal for the rest of the year.

So far, so good, I'm giving myself an "A+"

3. Finishing projects

I managed to sort things so that Gitano won't drop out of Debian, and I've done a bit more on NetSurf than I did previously, but in the past quarter I've not done a lot on the pub software I mentioned at the start of the year.

Rob and I have booked off a chunk of April, so perhaps I'll get a chance to do some of that then.

Sadly, I've also started a bunch of projects, including beginning to plan work with Lars on a Yarn re-design.

So over-all, this one gets a "C" - satisfactory but could do a lot better.

4. Be a net benefit

I have restricted what projects I am part of this year. My presence in Debian I think just about counts as a net-benefit to the project, but it's a pretty slim margin.

I do my best on the Lua community support (ML, webhosting, etc) but I've not really been a part of the community for a couple of years now. I'd like to think they still appreciate my efforts, but that's about it.

I've really heavily invested my efforts into Rust related stuff as you will see.

I think that, over all, I'm doing okay here, a "B-" since I could do better but I'm not negative-impact anywhere to my knowledge.

5. Give back to the Rust community

This is one software/community place where I think I've done pretty well. I started the year looking at various parts of the Rust ecosystem, but ended up settled into Rustup where I joined the working group and began in earnest to help deal with issues and improve what I see to be the gateway to the Rust ecosystem.

As you will know if you follow this blog, I ended up making the Rustup 1.17.0 release, which was a nerve-wracking experience that I will do better with next time. Fortunately the community is amazing and they were super-forgiving of my cockup.

I also hang around on the #wg-rustup Discord channel and other channels on that server, helping where I can. I've been running the Rustup working group weekly meetings, and I feel like I've made a pretty positive impact on things. Rustup is a slow moving tool which we try and change only conservatively, but it is a critical part of the experience of the Rust ecosystem so we do want to make it as good as it can be.

All in all, I think I deserve an "A" for my efforts here. I could do a bit more and that'd give me the "+" but for now I'm satisfied.

6. Be better at tidying up

I started the year out strongly, doing well at this, but over the months I've slipped back into my bad habits. I'm still very bad at this part of "adulting" and have nothing useful to report beyond "Trying to develop multiple habits simultaneously is super-hard". I think that once the c25k effort is over and I can reduce my running efforts, resuming "normal" gym routines etc, I may have some mental energy left to try and develop better tidying habits.

All in all this is a big fat "F" for fail.

7. Save up money for renovations

I believe that I've been doing pretty well at this. Since I don't go out much, I don't eat out anywhere near as much as I used to, and I don't generally buy toys (e.g. I recently got an upgraded graphics card for free because Rob decided he needed a shinier one and so I got a hand-me-down), I've been able to put a good chunk of cash away. Not enough yet for a new kitchen, but it's getting there slowly. Of course, the next point might eat away a good chunk of it…

I'm giving myself an "A" for this, because I do have a pretty healthy looking savings account, even if I can see all the looming bills for renovations.

8. Go on a proper holiday

Well, this one hasn't happened yet, though I do have a week off booked for after Easter. That's in the next quarter, so I'm guessing it doesn't count for now.

Rob will be off to Japan in the summer, so perhaps I'll take some time off at that point to have a break. Of course, with Rob away, it'd be a staycation which isn't quite what I had in mind, so who knows what I'll do instead. I'm so much of a homebody that a staycation is still a good holiday in my view, but it's not what I meant when I said a "proper holiday" since that needs to be a break from all the "normal" stuff in my view.

Giving myself a "C" here because I've not failed at it, but I've not made major inroads into planning or somesuch.


All-in-all I think I've done pretty well. My scores are B, A+, C, B-, A, F, A, C which, if we ignore the F, averages out to around a B, though the F does ruin things a little. Hopefully in another 3 months I'll be able to report better progress.

08 April, 2019 08:47AM by Daniel Silverstone

April 07, 2019

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rblpapi 0.3.10: B-PIPE support and other updates

A new version of Rblpapi arrived at CRAN today as version 0.3.10. It regroups several sets of changes (see below). The most notable addition is an extension to use B-PIPE for high-performance bulk data access (in case you are provisioned for it) contributed by Alfred Kanzler in a carefully written and revised pull request.

Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg (but note that a valid Bloomberg license and installation is required).

This is the tenth release since the package first appeared on CRAN in 2016. It covers changes made during the last year as well as the B-PIPE extension (whereas release 0.3.9 was a spot release to accommodate StagedInstall changes in the upcoming R 3.6.0). Special thanks to Al, Xianying and Davis for these contributions!

Changes in Rblpapi version 0.3.10 (2019-04-02)

  • The start.date format for bdh now allows character values with relative date expressions (John in #267 fixing #266).

  • Relative date values are allowed in return (John in #269 fixing #268).

  • The Rcpp::DateVector class is used instead of a local date vector class (John in #273).

  • The file src/init.c is now redundant and removed as updated registration is in src/RcppExports.cpp (Davis Vaughan in #275 fixing #274).

  • Returned strings are now marked as UTF-8 encoded (Xianying Tan in #278).

  • Date vectors are now initialized with NA (Xianying Tan in #279 fixing an issue introduced in #273).

  • Authentication for B-PIPE is now supported (Alfred Kanzler in #288 with assistance from Dirk and John).

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc. should go to the issue tickets system at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

07 April, 2019 08:45PM

hackergotchi for Christoph Egger

Christoph Egger

Midnight Sun CTF 2019 EZDSA Writeup

This is FAUST playing CTF again, this time midnightsun.

Crew: siccegge

OK so we're looking at the EZDSA service. This is a signature service and the task is essentially to recover the signing key. Code is reproduced below.

from hashlib import sha1
from Crypto import Random
from flag import FLAG

class PrivateSigningKey:

    def __init__(self):
        self.gen = 0x44120dc98545c6d3d81bfc7898983e7b7f6ac8e08d3943af0be7f5d52264abb3775a905e003151ed0631376165b65c8ef72d0b6880da7e4b5e7b833377bb50fde65846426a5bfdc182673b6b2504ebfe0d6bca36338b3a3be334689c1afb17869baeb2b0380351b61555df31f0cda3445bba4023be72a494588d640a9da7bd16L
        self.q = 0x926c99d24bd4d5b47adb75bd9933de8be5932f4bL
        self.p = 0x80000000000001cda6f403d8a752a4e7976173ebfcd2acf69a29f4bada1ca3178b56131c2c1f00cf7875a2e7c497b10fea66b26436e40b7b73952081319e26603810a558f871d6d256fddbec5933b77fa7d1d0d75267dcae1f24ea7cc57b3a30f8ea09310772440f016c13e08b56b1196a687d6a5e5de864068f3fd936a361c5L
        self.key = int(FLAG.encode("hex"), 16)

    def sign(self, m):

        def bytes_to_long(b):
            return long(b.encode("hex"), 16)

        h = bytes_to_long(sha1(m).digest())
        u = bytes_to_long(Random.new().read(20))
        assert(bytes_to_long(m) % (self.q - 1) != 0)

        k = pow(self.gen, u * bytes_to_long(m), self.q)
        r = pow(self.gen, k, self.p) % self.q
        s = pow(k, self.q - 2, self.q) * (h + self.key * r) % self.q
        assert(s != 0)

        return r, s

The outer service was not provided, but you could pass in base64-encoded byte arrays and got back r and s as already indicated. Looking at the final computation for s, we notice that given \(s = k^{q-2}(h + key \cdot r)\) and \(h, r\), we can easily recover \(key\). For this to work it would be convenient if the first factor ends up being 1, i.e. if \(k = 1\). Unfortunately, the easiest way to get there -- a message that is a multiple of \(q-1\), since \(g^{q-1} = 1\) -- is prevented by the assert. Fortunately this is not the only exponent where this works, and a good candidate is \((q-1)/2\).

pow(gen, (q-1)//2, q)

From there the only thing left is solving \(s = h + key \cdot r \pmod{q}\). Fortunately gmpy has the solution prepackaged again: divm. So we proceed by getting a valid "signature" on \((q-1)/2\). The rest is simple calculation:

sha1(binascii.unhexlify("%x" % ((q-1)//2))).hexdigest()

base64.b64encode(binascii.unhexlify("%x" % ((q-1)//2)))

gmpy2.divm(s-h, r, q)

binascii.unhexlify("%x" % 39611266634150218411162254052999901308991)

OK so why does \((q-1)/2\) work? Essentially, the field \(F_q\) -- calculations mod \(q\) -- has \(q\) elements additively and \(q-1\) elements multiplicatively (and we're considering exponentiation as repeated multiplication). Therefore it contains cyclic subgroups for all factors of \(q-1\), and for every element \(e\), \(e^o = 1\) where \(o\) is the order of the subgroup that element belongs to. As the generator is trivially not \(-1\) -- the subgroup of size 2 -- \((q-1)/2\) must be a multiple of the generated group's order.
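Putting the snippets above together, here is a self-contained sketch of the key recovery, using Python 3's modular inverse (`pow(r, -1, q)`) in place of gmpy2.divm. The `sign_oracle` argument is a hypothetical stand-in for the actual network round-trip, which the writeup doesn't show:

```python
import binascii
import hashlib


def recover_key(q, sign_oracle):
    """Recover the key from one signature on m = (q-1)//2.

    Because gen^((q-1)/2) == 1 (mod q), the nonce k collapses to 1 and
    the signature equation reduces to s == (h + key*r) mod q, which is
    linear in the key.
    """
    m = binascii.unhexlify("%x" % ((q - 1) // 2))
    h = int(hashlib.sha1(m).hexdigest(), 16)
    r, s = sign_oracle(m)  # the remote service call goes here
    # Solve s = h + key*r (mod q) for the key.
    return (s - h) * pow(r, -1, q) % q
```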

07 April, 2019 03:21PM

hackergotchi for Steve Kemp

Steve Kemp

Parsing PHP for fun and profit

Recently I've been dealing with a lot of PHP code, and coders. I'm still not a huge fan of the language, but at the same time modern PHP is a world apart from legacy PHP which I dismissed 10ish years ago.

I've noticed a lot of the coders have had a good habit of documenting their code, but also consistently failing to keep class-names up to date. For example this code:


/**
 * Class Bar
 * Comments go here ..
 */
class Foo

The rest of the file? Almost certainly correct, but that initial header contained a reference to the class Bar even though the implementation presented Foo.

I found a bunch of PHP linters which handle formatting, and coding-style checks, but nothing to address this specific problem. So I wrote a quick hack:

  • Parse PHP files.
    • Look for "/*".
    • Look for "*/".
    • Look for "class".
    • Treat everything else as a token, except for whitespace which we just silently discard.

Once you have a stream of such tokens you can detect this:

  • Found the start of a comment?
    • Clear the contents of any previously saved comment, in lastComment.
    • Append each subsequent token to "lastComment" until you hit EOF, or the end of a comment token.
  • Found a class token?
    • Look at the contents of the lastComment variable and see if it contains "class", after all the class might not have documentation that refers to any class.
    • If there is "class xxx" mentioned check it matches the current class name.
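
The token stream and matching logic above can be sketched in a few lines. This is a rough Python approximation (the author's actual tool was written in Go and isn't public), with a deliberately naive tokenizer: string contents like "class not found" don't trip it up here only because the quote character stays attached to the token.

```python
import re

# Everything is either a comment delimiter or a run of non-whitespace;
# whitespace itself is silently discarded, as described above.
TOKEN = re.compile(r'/\*|\*/|\S+')


def comment_class_mismatches(source):
    """Yield (documented_name, actual_name) pairs that disagree."""
    last_comment = []
    in_comment = False
    want_name = False
    for tok in TOKEN.findall(source):
        if tok == '/*':
            in_comment, last_comment = True, []
        elif tok == '*/':
            in_comment = False
        elif in_comment:
            last_comment.append(tok)
        elif want_name:
            # The token right after a bare "class" keyword is its name;
            # compare it against any "Class Xxx" in the last comment.
            for i, word in enumerate(last_comment):
                if word.lower() == 'class' and i + 1 < len(last_comment):
                    if last_comment[i + 1] != tok:
                        yield last_comment[i + 1], tok
                    break
            want_name = False
        elif tok == 'class':
            want_name = True
```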

There were some initial false-positives when I had to handle cases like this:

throw new \Exception("class not found");

(Here my naive handling would decide we'd found a class called not.)

Anyway the end result was stable and detected about 150 offenses in the 2000 file codebase I'm looking at.

Good result. Next step was integrating that into the CI system.

And that concludes my recent PHP adventures, using go to help ;)

(Code isn't public; I suspect you could rewrite it in an hour. I also suspect I was over-engineering and a perl script using regexp would do the job just as well..)

07 April, 2019 05:45AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Played with tar FUSE file system implementation.

Played with a tar FUSE file system implementation. I wanted to mount my base.tgz image without extracting it, and experimented with the idea. I could not, however, think of a reasonable way of handling the gzip part of the image: I need to decompress almost all of the image before I can reason about the tar file.
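For illustration, a small Python sketch of the underlying problem (a hypothetical helper, not the FUSE experiment itself): tarfile can enumerate a .tgz member by member without extracting to disk, but since gzip is one non-seekable stream, reaching any member means decompressing everything before it.

```python
import io
import tarfile


def index_tgz(fileobj):
    """Build a name -> size index of a .tgz archive.

    This streams through the gzip data once; there is no way to seek
    directly to a member, which is exactly why backing a FUSE mount
    with a compressed tar is awkward.
    """
    with tarfile.open(fileobj=fileobj, mode="r:gz") as tf:
        return {member.name: member.size for member in tf}
```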

07 April, 2019 05:39AM by Junichi Uekawa

hackergotchi for Shirish Agarwal

Shirish Agarwal

Saving the Ganga

A younger cousin of mine remarked that the Government of the day is so serious about cleaning the Ganga. I didn’t know how to respond. At first I thought he was pulling my leg, but when it became clear he was serious I had no choice but to call him on it. Rather than just telling him, though, I thought it would be wiser to share some of the links and news reports, and ask whether any work is actually happening on the ground.

The Ganga as it looks today – Copyright – Down To Earth.

The above picture has been taken from the article “Kumbh’s over, now God save the Ganga”, written by Down To Earth (DTE). It illustrates how the tanneries were closed just for the Kumbh, for political mileage and nothing else. This despite two Supreme Court judgements ordering polluting industries closed for good, on which the present Govt. has still done nothing.

And this is just a list of the woes. The real story is that one Clean Ganga activist died and one is missing, and the Government prefers to look the other way. The Clean Ganga panel has met only twice, and only 18% of the funds have been utilized, according to another media report. There are a dozen or more litigations pending in the Supreme Court about cleaning the Ganga where the respondent is the Govt. of India, and the Government has not even given them a single hearing over the last 5 years.

I just came back from Uttarakhand, near Nainital. On the way there, I saw that the National Highway was an off-roader’s delight. Upon asking the locals and the driver, it became clear the road doesn’t have any grip and is landslide-prone because of berserk construction. This has been documented clearly in an international journal (Science Direct, June 2014) as well as in a CNN report which says the same. The same is also reported in Jagran Josh, a local newspaper.

07 April, 2019 04:05AM by shirishag75

Russ Allbery

Review: Working Effectively with Legacy Code

Review: Working Effectively with Legacy Code, by Michael C. Feathers

Publisher: Prentice Hall
Copyright: 2004
Printing: 2005
ISBN: 0-13-117705-2
Format: Trade paperback
Pages: 419

Suppose that you're familiar with the principles of good software design, you understand the importance of breaking complex code apart into simpler components, you know how to write good test suites, and you can structure new code to be maintainable. However, as is so often the case, your job is not to write green-field code. It's to add or change some behavior of an existing system, and that existing system was written with complete disregard to (or prior to the widespread development of) all of those principles. How do you start?

That's the core topic of this somewhat deceptively titled book. The title arguably overpromises, since there are many aspects of working with legacy code that are not covered in this book (and couldn't be covered in any one book). Feathers further narrows the topic with a rather idiosyncratic definition of legacy: code without unit tests. The point of the techniques discussed here is to restructure the piece of code that you want to modify so that you can add tests (and, specifically, unit tests; Feathers barely mentions the existence of integration tests).

There are many perils in reading a book about programming that's this old, but Working Effectively with Legacy Code holds up surprisingly well, probably due to its very narrow focus. Code examples are in Java, C++, and C, which are still among the languages one would expect to see in legacy code even today (although they are a less comprehensive set than they were). This book is clearly from the early, excited days of agile and extreme programming and lacks some of the nuance that has developed since, but the constraint of working with legacy code forces compromises that keep it from being too "pure" of an agile diatribe, helping its continued relevance. Feathers is obsessed with unit testing, and I'll make some argument against that focus in a moment, but with legacy code defined as code lacking tests, unit tests are a reasonable place to start.

The vast majority of this book, including a long section of mechanical recipes at the back, is devoted to specific techniques to tease apart code where tangles prevent unit testing in isolation. Feathers's goal is to be able to unit-test only the piece of logic that you plan on changing, while making as few changes as possible to the rest of the code to avoid breaking other (untested) behavior. To do this, he describes a wide variety of refactoring techniques, some of which lead directly to better code, and some of which lead to much worse code in the short term but add seams where one can break code apart in a test harness and test it in isolation. The vast majority of these techniques involve using, and abusing, the object system of the language (including interfaces, subclassing, overrides, making methods and data public, and many other approaches), but he also covers techniques that work in C such as interposition or using the preprocessor. All of the C techniques he mentioned are ones that I've used in my own C code, and his analysis of pluses and drawbacks seemed largely accurate in each case, although he's far too willing to just throw a C++ compiler willy-nilly at a large C code base.

The parts of this book that are not focused on breaking up tangled code to allow for testing are focused on techniques for understanding that code. Feathers is a visual thinker, so a lot of his advice uses ways of drawing diagrams that try to capture dependencies, propagation of values, relationships between classes, possible refactorings, and other structural ideas of interest. I am not a very visual thinker when it comes to code structure, so I'm not the best person to critique this part of the book, but it seemed reasonable (and simple) to me.

Feathers mostly stops at getting code into a structure for which one can write unit tests, following his definition of legacy code as code without tests. Although he mentions integration testing a few times, he's also very focused on unit tests as the gold standard of testing, and is therefore extremely fond of fakes, mock classes, and other approaches to test classes in isolation. This goes hand-in-hand with a desire to make those unit tests extremely fast (and therefore extremely simple); Feathers's ideal would be tests that could run with each keystroke in an IDE and highlight test failures the way syntax failures are highlighted.

This is my largest point of disagreement with this book. I understand the appeal; when I started programming in a language that supported easy and flexible mocking (Python with the core mock module), it was a revelation. Those facilities, alongside interfaces and dependency injection which make it easy to substitute fakes inside unit tests, make it possible to test the logic of every class in isolation without setting up a more complex testing environment. But I've subsequently had a number of bad experiences with code that's comprehensively tested in this fashion, which has convinced me that it's more fragile than its advocates seem to acknowledge.

There are two, closely-related problems with this pure unit-testing approach: one starts testing the behavior of classes in isolation instead of the user-visible behavior of the application (which is what actually matters), and one starts encoding the internal structure of those classes in the test suite. The first problem, namely that each class can be correct to its specifications in isolation but the application as a whole could not work properly, can be caught by adding proper integration tests. The second problem is more insidious. One of the purposes of testing is to make refactoring and subsequent behavior changes easier and safer, but if every jot and tittle of the internal code structure is encoded in the test suite via all the mocks and fakes, a simple half hour of work refactoring the code as part of adding new functionality turns into hours of tedious work restructuring the tests to match. The result is to paradoxically discourage refactoring because of the painful changes then required to the tests, defeating one of the purposes of having tests.
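A toy illustration of that second problem (my own example, not one from the book): the first test pins down how Report works internally via a mock and breaks under any refactoring of that internal structure; the second asserts only the visible behaviour.

```python
from unittest import mock


class Report:
    """Minimal class with an internal helper a mock can latch onto."""

    def render(self, items):
        return self._join(sorted(items))

    def _join(self, items):
        return ", ".join(items)


# Structure-coupled: encodes the fact that render() delegates to _join().
# Inline _join() and this test fails, even though behaviour is identical.
def test_render_delegates():
    report = Report()
    with mock.patch.object(report, "_join", return_value="stub") as join:
        assert report.render(["b", "a"]) == "stub"
        join.assert_called_once_with(["a", "b"])


# Behaviour-focused: survives any internal refactoring of Report.
def test_render_output():
    assert Report().render(["b", "a"]) == "a, b"
```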

Feathers, as is typical of books from the early days of agile, doesn't even mention this problem, and takes it as nearly a tautology that unit testing and mocking out of dependencies is desirable.

One of his repeated themes is finding a way to mock out database layers. I think this is the place where this book shows its age the most, since that discussion focuses on removing the need for a managed test database, worries about colliding with other people's use of the same test database, and includes other comments that assume a database is some external singleton outside of the development environment of the programmer. This already wasn't the case in 2004, when one could spin up a local instance of MySQL; now, with SQLite readily available for fast, temporary databases, it's trivial to write tests without mocking the storage layer (as long as one is careful about SQLite's lack of schema enforcement). For me, this tilts the balance even farther in favor of testing user-visible functionality across moderate-sized parts of the code base rather than isolated unit testing. I prefer to save the mocks and fakes for dependencies that are truly impossible to use in a test environment, such as external hardware or APIs that call out to services outside of the scope of the application.

I'm spending so much time on testing approaches because testing is the hidden core of this book. What Feathers primarily means by "working with" legacy code is testing legacy code, at least the part that you're changing. My dubiousness about his testing focus undermined some of his techniques for me, or at least made me wish for additional techniques that focused on testing integrated features. But, that caveat aside, this is a detailed look at how to untangle code and break dependencies, useful under any testing methodology. Feathers isn't afraid to get into the nitty-gritty and give specific examples and step-by-step instructions for ways to untangle code patterns safely in the absence of tests. Reading the whole book also provided me with a useful feel for the types of options I should be considering when tackling a messy bit of refactoring.

The audience for this book is people who already have a good understanding of object-oriented programming techniques, but are used to well-designed applications and want to expand their knowledge to some of the tricks one can use to unknot gnarly problems. It's nothing revolutionary, but if that describes you, it's a good resource.

Rating: 7 out of 10

07 April, 2019 03:33AM

April 06, 2019

Romain Perier

My work on Debian (March 2019)

This is a summary of what I have done in March 2019 on Debian (well, a bit late)


  • I have bumped linux to 4.19.27 in sid, see commit
  • I have bumped linux to 4.19.28 in sid, see commit
  • I have bumped linux to 5.0.5 in master, see commit
  • I have bumped ethtool to 5.0, see commit

New features

  • I have re-worked the commit to get support for Raspberry PI 1 and Raspberry PI Zero for armel mergeable in the Linux kernel, see commit
  • Patch for enabling support for CRASH_DUMP for armel and armhf was merged, see commit 
  • Patch for enabling support for STRICT_MODULE_RWX in the Linux kernel was merged, see commit


  • Changed raspi3-firmware to make the consoles easily configurable in cmdline.txt, this closes bug #926325, see commit
  • Changed raspi3-firmware to use arm_64bit instead of the arm_control bitfields for controlling 64bits mode, this closes bug #925062, see commit   
  • Switched raspi3-firmware to use package.maintscript, which solves lintian warnings, see commit

06 April, 2019 05:12PM by Romain Perier (noreply@blogger.com)

April 05, 2019

hackergotchi for Iain R. Learmonth

Iain R. Learmonth

Hard drive failure in my zpool 😞

I have a storage box in my house that stores important documents, backups, VM disk images, photos, a copy of the Tor Metrics archive and other odd things. I’ve put a lot of effort into making sure that it is both reliable and performant. When I was working on a modern CollecTor for Tor Metrics recently, I used this to be able to run the entire history of the Tor network through the prototype replacement to see if I could catch any bugs.

I have had my share of data loss events in my life, but since I’ve found ZFS I have hope that it is possible to avoid, or at least seriously minimise the risk of, any catastrophic data loss events ever happening to me again. ZFS has:

  • cryptographic checksums to validate data integrity
  • mirroring of disks
  • “scrub” function that ensures that the data on disk is actually still good even if you’ve not looked at it yourself in a while

ZFS on its own is not the entire solution though. I also mix-and-match hard drive models to ensure that a systematic fault in a particular model won’t wipe out all my mirrors at once, and I also have scheduled SMART self-tests to detect faults before any data loss has occurred.

Unfortunately, one of my drives in my zpool has failed a SMART self-test.


This means I now have to treat that drive as “going to fail soon” which means that I don’t have redundancy in my zpool anymore, so I have to act. Fortunately, in September 2017 when my workstation died, I received some donations towards the hardware I use for my open source work and I did buy a spare HDD for this very situation!

At present my zpool setup looks like:

% zpool status flat
  pool: flat
 state: ONLINE
  scan: scrub repaired 0 in 0 days 07:05:28 with 0 errors on Fri Apr  5 07:05:36 2019

	NAME                                            STATE     READ WRITE CKSUM
	flat                                            ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	  mirror-1                                      ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	  gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0

errors: No known data errors

The drives in the two mirrors are 3TB drives, in each mirror is one WD Red and one Toshiba NAS drive. In this case, it is one of the WD Red drives that has failed and I’ll be replacing it with another WD Red. One important thing to note is that you have to replace the drive with one of equal or greater capacity. In this case it is the same model so the capacity should be the same, but not all X TB drives are going to be the same size.

You’ll notice here that it is saying No known data errors. This is because there hasn’t been any issues with the data yet, it is just a SMART failure, and hopefully by replacing the disk any data error can be avoided entirely.

My plan was to move to a new system soon, with 8 bays. In that system I’ll keep the stripe over 2 mirrors but one mirror will run over 3x 6TB drives with the other remaining on 2x 3TB drives. This incident leaves me with only 1 leftover 3TB drive though so maybe I’ll have to rethink this.

Free space remaining in my zpool


My current machine, an HP MicroServer, does not support hot-swapping the drives so I have to start by powering off the machine and replacing the drive.

% zpool status flat
  pool: flat
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
	the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: scrub repaired 0 in 0 days 07:05:28 with 0 errors on Fri Apr  5 07:05:36 2019

	NAME                                            STATE     READ WRITE CKSUM
	flat                                            DEGRADED     0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	  mirror-1                                      DEGRADED     0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	    xxxxxxxxxxxxxxxxxxxx                        UNAVAIL      0     0     0  was /dev/gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
	  gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx    ONLINE       0     0     0

errors: No known data errors

The disk that was part of the mirror is now unavailable, but the pool is still functioning as the other disk is still present. This means that there are still no data errors and everything is still running. The only downtime was due to the non-hot-swappableness of my SATA controller.

Through the FreeNAS web interface, it is now possible to use the new disk to replace the old disk in the mirror: Storage -> View Volumes -> Volume Status (under the table, with the zpool highlighted) -> Replace (with the unavailable disk highlighted).
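For reference, the equivalent operation from the command line is zpool replace; a sketch with a made-up GUID and device path (the real values come from the zpool status output above):

```shell
# Identify the failed member by the numeric GUID shown in `zpool status`,
# then swap in the new disk; ZFS starts resilvering automatically.
zpool replace flat 1234567890123456789 /dev/ada4

# Follow the resilver progress:
zpool status flat
```

The web interface does essentially this, plus partitioning the new disk the FreeNAS way first.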

Running zpool status again:

% zpool status flat
  pool: flat
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Apr  5 16:55:47 2019
	1.30T scanned at 576M/s, 967G issued at 1.12G/s, 4.33T total
	4.73G resilvered, 21.82% done, 0 days 00:51:29 to go

	NAME                                            STATE     READ WRITE CKSUM
	flat                                            ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	  mirror-1                                      ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0
	    gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  ONLINE       0     0     0  (resilvering)
	  gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx    ONLINE       0     0     0

errors: No known data errors

And everything should be OK again soon, now with the dangerous disk removed and a hopefully more reliable disk installed.

A more optimistic message from FreeNAS

This has put a dent in my plans to upgrade my storage, so for now I’ve added the hard drives I’m looking for to my Amazon wishlist.

As for the drive that failed, I’ll be doing an ATA Secure Erase and then disposing of it. NIST SP 800-88 places ATA Secure Erase in the same category as degaussing a hard drive, and considers it more effective than overwriting the disk with software. ATA Secure Erase is faster too, because the hard drive controller does the work. I just have to hope that my firmware wasn’t replaced with firmware that only fakes the process (or I’ll do an overwrite anyway to be sure). According to the same NIST document, “for ATA disk drives manufactured after 2001 (over 15 GB) clearing by overwriting the media once is adequate to protect the media from both keyboard and laboratory attack”.
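hdparm is the usual way to drive ATA Secure Erase from a shell; a sketch of the sequence, with /dev/sdX as a placeholder (the drive must support the security feature set and must not be in the "frozen" state, which sometimes requires a suspend/resume cycle to clear):

```shell
# 1. Confirm the drive supports the security feature set and is "not frozen"
hdparm -I /dev/sdX | grep -A 8 'Security:'

# 2. Set a temporary user password (the erase command requires one)
hdparm --user-master u --security-set-pass hunter2 /dev/sdX

# 3. Issue the erase; the controller overwrites internally and clears
#    the password again when it finishes
hdparm --user-master u --security-erase hunter2 /dev/sdX
```

Afterwards `hdparm -I` should report the security state as "not enabled" again.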

This blog post is also a little experiment. I’ve used a Unicode emoji in the title, and I want to see how various feed aggregators and bots handle that. Sorry if I broke your aggregator or bot.

05 April, 2019 08:13PM

IETF 104 in Prague

Thanks to support from Article 19, I was able to attend IETF 104 in Prague, Czech Republic this week. Primarily this was to present my Internet Draft which takes safe measurement principles from Tor Metrics work and the Research Safety Board and applies them to Internet Measurement in general.

My IETF badge, complete with additional tag for my nick

I attended with a free one-day pass for the IETF and free hackathon registration, so more than just the draft presentation happened. During the hackathon I sat at the MAPRG table and worked on PATHspider with Mirja Kühlewind from ETH Zurich. We have the code running again with the latest libraries available in Debian testing and this may become the basis of a future Tor exit scanner (for generating exit lists, and possibly also some bad exit detection). We ran a quick measurement campaign that was reported in the hackathon presentations.

During the hackathon I also spoke to Watson Ladd from Cloudflare about his Roughtime draft which could be interesting for Tor for a number of reasons. One would be for verifying if a consensus is fresh, another would be for Tor Browser to detect if a TLS cert is valid, and another would be providing archive signatures for Tor Metrics. (We’ve started looking at archive signatures since our recent work on modernising CollecTor).

Monday was the first “real” day of the IETF. The day started off for me at the PEARG meeting, where I gave the first presentation of the session, on my draft. The feedback was all positive; it seems that having the document is both desirable and timely.

The next presentation was from Ryan Guest of Salesforce, talking about privacy considerations for application-level logging. I think this would also be a useful draft that complements my draft on safe measurement, or maybe even becomes part of it. I need to follow up with him to see what he wants to do. A future IETF hackathon project might be comparing Tor’s safe logging with whatever guidelines we come up with, and also comparing our web server logs setup.

Nick Sullivan was up next with his presentation on Privacy Pass. It seems like a nice scheme, assuming someone can audit the anti-tagging properties of it. The most interesting thing I took away from it is that federation is being explored which would turn this into a system that isn’t just for Cloudflare.

Amelia Andersdotter and Christoffer Långström then presented on differential privacy. They have been exploring how it can be applied to binary values as opposed to continuous, and how it could be applied to Internet protocols like the QUIC spin bit.
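The binary case they describe is usually handled with randomized response; a toy sketch of that mechanism (my own illustration, not taken from their presentation):

```python
import math
import random

def randomized_response(bit, epsilon, rng):
    # Report the true bit with probability e^eps / (1 + e^eps),
    # otherwise flip it; this gives epsilon-differential privacy
    # for a single binary value.
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if rng.random() < p_truth else 1 - bit

def estimate_fraction(reports, epsilon):
    # Invert the known flip probability to get an unbiased estimate
    # of the true fraction of 1-bits across all reports.
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    observed = sum(reports) / len(reports)
    return (observed + p - 1.0) / (2.0 * p - 1.0)

rng = random.Random(104)
true_bits = [1] * 300 + [0] * 700   # 30% of users set the bit
reports = [randomized_response(b, 1.0, rng) for b in true_bits]
estimate = estimate_fraction(reports, 1.0)   # typically close to 0.30
```

No individual report reveals much, but the aggregate fraction is still recoverable, which is the appeal for protocol fields like the QUIC spin bit.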

The last research presentation was Martin Schanzenbach presenting on an identity provider based on the GNU Name System. This one was not so interesting for me, but maybe others are interested.

I attended the first part of the Stopping Malware and Researching Threats (SMART) session. There was an update from Symantec based on their ISTR report, and I briefly saw the start of a presentation about “Malicious Uses of Evasive Communications and Threats to Privacy” but had to leave early to attend another meeting. I plan to go back and look through all of the slides from this session later.

The next IETF meeting is directly after the next Tor meeting (I had thought for some reason it directly clashed, but I guess I was wrong). I will plan to remotely participate in PEARG again there and move my draft forwards.

05 April, 2019 02:48PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Fourth Annual UK System Research Challenges Workshop

Redworth Hall

main talk, thanks Mark Little

Amiga lightning talk, thanks Mark Little

I gave a talk on my research at the Fourth Annual UK System Research Challenges Workshop. This is the second time I've attended this conference. Last year I presented on some Red Hat work.

The conference took place at Redworth Hall, a 17th Century Jacobean manor house converted into a spa hotel. The main presentations took place in an ornate hall with high ceilings, candelabra and long curtains (definitely not a drop you can buy at Dunelm).

This is the first time I've presented on my research to a public audience. Here's a copy of my presentation slides, with speaker notes. I've tried to annotate the questions from the session into the notes of the last slide.

I also delivered a short lightning talk about my Amiga floppy recovery project which resulted in some really interesting spin-off conversations about historic computing.

Thanks to my employer, Red Hat, for sponsoring the conference and making it possible, and to Jen for doing an excellent job of making sure it could take place.

05 April, 2019 01:47PM

hackergotchi for Shirish Agarwal

Shirish Agarwal

DevSecOps Pune Meetup 4

I attended this almost a month back; I just didn’t have the time or the energy to blog about it. Thankfully, one of the organizers, Rohan Nageskar, took the time to blog about it, so I don’t have to do much other than share a few of the links I shared there, and some which I had forgotten to share on the day. The first one was about using A.I. for vulnerability assessment, with Twitter mentions as a source. The idea certainly has merit, and would go a long way toward getting vulnerabilities in the code fixed during the whole cycle of development, production, deployment, scaling and maintenance, until the code or app or whatever needs to be retired. At the same time, however, it is not known how accurate the system would be, because in the end it still relies on human input, and humans per se are bad at threat perception and evaluation, as millions of examples show. All the wars that have been fought, and are still being fought, in whatever name, are a strong example of that.

One of the other things I shared was the Intel Spoiler attack, which had been disclosed just a few hours earlier, so it was pretty fresh at the time. I also shared a bit about where the hardware industry seems to be heading, and it seems, at least for the near future, that AMD will have the leg up. There’s also the whole RISC bit, for which chips are already out there and a lot more is promised in the coming months and years, but that’s a different topic altogether.

Incidentally, while Rohan was sharing how to use Ansible to scale a webapp, and how you would have different servers for scaling depending on needs, I was thinking that the BJP IT team would surely have profited from Rohan’s presentation. While Rohan didn’t go much into specifics (it was more of a high-level overview of the process), it did establish some groundwork for any individual or team as to how they could go about it. Newbies could read up on the differences between a webapp and a website. To my mind, they are one and the same, as most sites nowadays are dynamic in nature.

I also shared about the BJP site hack (unofficially, of course), and everybody was surprised to learn about it, because we have all been told again and again about the skills of the BJP IT cell. This was when the site had already been down for almost two weeks. A few days later they did put up a site of sorts, with a stolen Bootstrap-based theme. There were quite a few hate comments, which can easily be termed victim-shaming, and they had no choice but to respond. It seems that the people in the BJP either do not understand, or do not want to understand, Intellectual Property Rights, or to be more specific, copyright in this particular case. This specific example is clearly a case of copyright infringement rather than anything else. In fact, the whole FOSS environment revolves around credit-sharing, in the absence of monetary compensation.

Anyways, some pictures from the meetup to round off the day 🙂

Me sharing some point probably
All the attendees of the Devops meetup.

In the last picture you can see Rohan at the far right in an orange t-shirt with glasses, while Rahul is at dead center in a white shirt. They were the organizers of the event. Many thanks to Qualys for hosting the meetup.

05 April, 2019 07:55AM by shirishag75

April 04, 2019

Ian Jackson

Planar graph layout, straight line drawing

My project to make an alternative board for Pandemic Rising Tide needed a program to lay out a planar graph, choosing exact coordinates for the vertices.

(The vertices in question are the vertices of the graph which is the dual of the adjacency graph of the board "squares" - what Pandemic Rising Tide calls Regions. For gameplay reasons the layout wants to be a straight line drawing - that is, one where every boundary is a straight line.)

Existing software

I found that this problem was not well handled by existing Free Software. The leading contender, graphviz, generally produces non-planar layouts even for planar inputs; and it does not provide a way to specify the planar embedding. There are some implementations of "straight line drawing" algorithms from the literature, but these produce layouts which meet the letter of the requirement for the drawing to consist only of nonintersecting straight lines, but they are very ugly and totally unsuitable for use as a game board layout.

My web searches for solutions to this problem yielded only forum postings etc. where people were asking roughly this question and not getting a satisfactory answer.

I have some experience with computer optimisation algorithms and I thought this should be a tractable problem, so I set out to solve it - well, at least well enough for my purposes.

My approach

My plan was to use one of the algorithms from the literature to generate a straight line drawing, and then use cost-driven nonlinear optimisation to shuffle the vertices about into something pretty and useable.

Helpfully Boost provides an implementation of Chrobak & Payne's straight line drawing algorithm. Unfortunately Boost's other planar graph functions were not suitable because they do not remember which face is the outer face. (In planar graph theory and algorithms the region outside the graph drawing is treated as a face, called the outer face.) So I also had to write my own implementations of various preparatory algorithms - yet more yak shaving before I could get to the really hard part.

Having been on a Rust jag recently, I decided on Rust as my implementation language. I don't regret this choice, although it did add a couple of yaks.

Cost function and constraints

My cost function has a number of components:
  • I wanted to minimise the edge lengths.
  • But there was a minimum edge length (for both gameplay and aesthetic reasons)
  • Also I wanted to avoid the faces having sharp corners (ie, small angles between edges at the same vertex)
  • And of course I needed the edges to still come out of each vertex in the right order.
You will notice that two of these are not costs, but constraints. Different optimisation algorithms handle this differently.

Also "the edges to still come out of each vertex in the right order" is hard to express as a continuous quantity. (Almost all of these algorithms demand that constraints take the form of a function which is to be nonnegative, or some such.) My solution is, at each vertex, to add up the angles between successive edges (in the intended order, and always treating each direction difference as a positive angle). Ie, to add up the face corner angles. They should sum to tau: if so, we have gone round once and the order is right. If the edges are out of order, we'll end up going round more than once. If the sum was only tau too much, I defined the violation quantity to be tau minus the largest corner angle; this is right because probably it's just that two edges next to each other are out of order and the face angle has become "negative"; this also means that for a non-violating vertex, the violation quantity is negative but still represents how close to violation we are. (For larger corner angle sums, I added half of the additional angle sum as an additional violation quantity. That seemed good enough in the end.)
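The angle-summing check itself is easy to sketch; a minimal Python version (the real implementation is in Rust, and also grades how badly a vertex is violating, as described above):

```python
import math

TAU = 2 * math.pi

def corner_angles(vertex, neighbours):
    # Directions of the edges leaving `vertex`, with the neighbours given
    # in the intended cyclic order; each successive difference is taken
    # as a positive angle in [0, TAU).
    vx, vy = vertex
    dirs = [math.atan2(ny - vy, nx - vx) for nx, ny in neighbours]
    return [(dirs[(i + 1) % len(dirs)] - dirs[i]) % TAU
            for i in range(len(dirs))]

def edge_order_ok(vertex, neighbours, tol=1e-9):
    # The corner angles sum to exactly TAU when the edges leave the
    # vertex in the intended order (we go round once); a sum of 2*TAU
    # or more means some edges are out of order.
    return abs(sum(corner_angles(vertex, neighbours)) - TAU) < tol
```

Swapping two adjacent neighbours turns the small corner between them into a near-TAU one, so the sum jumps to 2*TAU, which is what the violation quantity measures.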

Simulated annealing - and visual debug of the optimisation

My first attempt used GSL's simulated annealing functions. I have had reasonable success with siman in the past. The constraints are folded into the cost function. (An alternative approach is to somehow deal with them in the random step function, eg by adjusting violating layouts to similar non-violating ones, but that seemed quite tricky here.)

Siman did not seem to be working at all.

I was hampered by not knowing what was going on so I wrote a visual debug utility which would let me observe the candidate layouts being tried, in real time. (I should have taken my first instinct and done it in Tcl/Tk, but initially Qt seemed like it would be easier. But in the end I had to fight several of Qt's built-in behaviours.)

The visual debug showed me the graph randomly jiggling about without any sign of progress. It was clear that if this was going to work at all it would be far too slow.

More suitable optimisation algorithm

I felt that a gradient descent algorithm, or something like one, would work well for this problem. It didn't seem to me that there would be troublesome local minima. More web searching led me to Steven G. Johnson's very useful NLopt library. As well as having implementations of algorithms I thought would work well, it offered the ability to change algorithm without having to deal with a whole new API.

I quickly found that NLopt's Sbplx algorithm (T. Rowan's Subplex algorithm, reimplemented) did fairly well. That algorithm does not support constraints but the grandly-named Augmented Lagrangian Method can handle that: it adds the constraint violations to the cost. It then reruns the optimisation, cranking up the constraint violation cost factor until none of the constraints are violated by more than the tolerance.

Unfortunately the Augmented Lagrangian Method can convert a problem with a cost function without local minima, into one which does have bad local minima. The Sbplx algorithm is a kind of descent algorithm so it finds a local minimum and hopes it's what you wanted. But unfortunately for me it wasn't: during the initial optimisation, part of the graph "capsized", violating the edge order constraint and leaving a planar layout impossible. The subsequent cranking up of the constraint violation cost didn't help, I think maybe because my violation cost was not very helpful at guiding the algorithm when things were seriously wrong.

But I fixed this by the simple expedient of adding the edge order constraint with a high cost to my own cost function. The result worked pretty well for my simple tests and for my actual use case. The graph layout optimisation takes a couple of minutes. The results are nice, I think.
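The overall shape of the trick can be sketched in a few lines, here with SciPy's Nelder-Mead simplex standing in for NLopt's Sbplx (an illustration only; the real cost has many more terms, and MIN_EDGE and the penalty weight are invented numbers):

```python
import numpy as np
from scipy.optimize import minimize

MIN_EDGE = 1.0  # minimum allowed edge length (a constraint, not a cost)

def layout_cost(x, edges, penalty_weight):
    # Total cost: sum of edge lengths, plus a heavily weighted quadratic
    # penalty for any edge shorter than MIN_EDGE. Folding the constraint
    # violation into the cost is the same expedient described above for
    # the edge-order constraint.
    pts = x.reshape(-1, 2)
    cost = 0.0
    for a, b in edges:
        length = np.linalg.norm(pts[a] - pts[b])
        cost += length
        violation = max(0.0, MIN_EDGE - length)
        cost += penalty_weight * violation ** 2
    return cost

# A triangle whose vertices start bunched together: all three edges
# begin far below the minimum length, so the penalty dominates at first.
edges = [(0, 1), (1, 2), (2, 0)]
x0 = np.array([0.0, 0.0, 0.1, 0.0, 0.0, 0.1])

res = minimize(layout_cost, x0, args=(edges, 1000.0),
               method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-12})
pts = res.x.reshape(-1, 2)  # each edge settles near MIN_EDGE
```

The descent inflates the triangle until the edge-length penalty balances the sum-of-lengths term, which is exactly the behaviour wanted from a high-cost constraint term.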

I made a screen capture video of the optimisation running. (First the debug build which is slower so captures the early shape better; then again with the release build.)


The planar graph layout tool I wrote is plag-mangler.

It's really not very productised, but I think it will be useful to people who have similar problems. Many of the worst features (eg the bad command line syntax) would be easy to fix. OTOH if you have a graph it does badly on, please do file an issue on salsa, as it will guide me to help make the program more general.


See my first post about this project for some proper references to the academic literature etc.

(Edit 2019-04-04 12:55 +0100: Fixed typos and grammar.)


04 April, 2019 11:56AM

April 03, 2019

Iustin Pop

A small presentation on Linux namespaces

Over the weekend I spent time putting together a few slides on Linux namespaces, mostly because I wanted to understand them better (and putting this together helped a lot!), but also because it will be useful to me later, and finally (and really) because I promised a few colleagues I’d explain how all this works :)

So the HTML slides are here, and the source is on github. I put the source up because I’m very sure this has lots of mistakes; not only in the intro where I mention FreeBSD jails and OpenVZ a bit (but I have zero experience with both), but also in the main content, so any corrections are more than welcome.

Writing this, and organising it, was actually much more entertaining than I originally thought. It also made me realise that the kernel-level implementation is very powerful, and—at least to the extent that e.g. Debian uses it by default—it’s basically wasted (a lot of lost opportunity). I know there are some tools to use this, but for example why Firefox is not by default namespaced… I don’t know. Food for later thought. Happy to receive information otherwise, of course.
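As a quick illustration of how lightweight namespaces are to play with (my own example, not from the slides; it assumes util-linux's unshare and a kernel with unprivileged user namespaces enabled):

```shell
# New user namespace, with the current user mapped to root inside it;
# `id -u` runs as uid 0 inside, while nothing changes outside.
unshare --user --map-root-user id -u

# Add mount and PID namespaces for a container-like process view:
# after remounting /proc, `ps ax` only sees sh and ps themselves.
unshare --user --map-root-user --mount --pid --fork \
    sh -c 'mount -t proc proc /proc && ps ax'
```

This is roughly the kernel primitive that container runtimes build on, which is why it feels like a lost opportunity that ordinary desktop applications mostly don't use it.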

Most of the information is gathered from man pages, Wikipedia (for the historic bits), blog posts, mailing list archives, etc., so I don’t claim a lot of deep original content; the main idea is just to put all this information together in a single place.

Hope this is useful to somebody else, and again, contributions and re-sharing welcome (CC-BY-SA-4.0).

03 April, 2019 07:41PM

hackergotchi for Mike Gabriel

Mike Gabriel

My Work on Debian LTS/ELTS (March 2019)

In March 2019, I have worked on the Debian LTS project for 14 hours (of 10 hours planned plus 4 hours pulled over from February) and on the Debian ELTS project for another 2 hours (of originally planned 6 hours) as a paid contributor.

LTS Work

  • CVE triaging (ntp, glib2.0, libjpeg-turbo, cron, otrs2, poppler)
  • Sponsor upload to jessie-security (aka LTS): cron (DLA 1723-1 [1])
  • Upload to jessie-security (aka LTS): openssh (DLA 1728-1 [2])
  • Upload to jessie-security (aka LTS): libssh2 (DLA 1730-1 [3])
  • Upload to jessie-security (aka LTS): libav (DLA 1740-1 [4])


  • Create .debdiff for cron src:pkg targeting wheezy (but I failed to build it due to two issues with Debian 10 as the build machine)
  • Discover and document that kernel boot parameter "vsyscall=emulate" is required for building wheezy packages on Debian 10. (See #844350 and #845942 for details).
  • Bug hunt sbuild bug #926161 in sbuild 0.78.1-1 [5]
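On the vsyscall=emulate discovery above: on a Debian 10 build machine the parameter would typically go on the kernel command line via GRUB, something like this (a sketch; keep whatever options are already on the line, then run update-grub and reboot):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet vsyscall=emulate"
```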


03 April, 2019 01:23PM by sunweaver

April 02, 2019

hackergotchi for Shirish Agarwal

Shirish Agarwal

ASAT and ISRO, DRDO merger rumor


For the last few days I was not in Pune, as I had gone to attend a workshop funded by Innovation for Change. Unfortunately, I was not able to take full part in the workshop, as the travelling proved to be a bit too much in too short a time. While I will share more in another blog post, for the moment I would like to share about the ASAT test that India conducted. While it’s a positive development, from my perspective there was no need for the Prime Minister to come on stage and declare that we can shoot down a satellite at 3k when China can do the same at 38k. So we have a long way to go as far as parity with China is concerned. While I’m not sharing the source of this information, it is there for anybody to see and figure out if you know how to use the web. A few things worth noting: I didn’t use any private data-sets to get this information, which means it’s easily available online; I did not use Tor or the dark web, otherwise I probably could have got far more material; and, more interestingly, if you want to start your research from scratch, ORF could be a good starting point from an Indian POV, although there are many other such think-tanks which could help you in your research.

The only question I have to ask is: if we are the weaker party, which is clearly the case here, then to whom are we trying to sell this idea, if not the Indian public? Chinese military satellites sit at varying ranges from 300 km to 36,000 km, so there is hardly a chance that we would be able to make any significant dent in their military usage. Also, using an ASAT on another country’s satellite would be an act of war. As far as communication satellites are concerned, they are at 36,000 km, in geostationary orbit, so they will not be harmed. There is also a pretty nice animation of this at Wikimedia.

International Politics

While we can understand that Mr. Modi did it for electioneering, it does have an impact internationally. Last year the Chinese did another ASAT test, which the Pentagon guesstimated reached 36k from sea level, based on their ground and space-based instruments. The Chinese statement was quite brief and to the point: they said that they did the test and it met all the military objectives. This is a sort of perfect statement, which reveals neither what the Chinese military objectives of the test were nor what was accomplished. All other governments either have to rely on their own instrumentation (if they have anything in space to spy and look out for such activities) or rely on the Pentagon’s guesstimates and whatever findings it chooses to make public. The Americans also do well not to show their hand, and may share some information, or even mis-information, as this is and would be considered part of information warfare. This is also precisely why we have ambassadors, diplomats and others who sit together and engage in nuanced wording. There was no need for an announcement, and even if one was needed, it could have been made by some mid-level executive at DRDO, saying something similar to what the Chinese said and perhaps adding that we have a long road ahead of us.

Update – 04/04/2019 – Somebody on twitter shared a link to Dr. Saraswat’s latest interview which was held a few days back .

The answers were designed to show that the UPA govt. didn’t show interest in the ASAT test while the NDA govt. did. Even if we take Dr. Saraswat’s interpretation of how the events happened, it still raises questions rather than answering them.

  1. By Dr. Saraswat’s own admission, it was an informal presentation. While he didn’t go into the details of what he meant by ‘informal presentation’, it could be something akin to somebody asking me to do an informal presentation on Debian. For that, the most I have to do is collect my thoughts, read up a bit on what’s new and exciting if something catches my eye, and at most prepare 5-7 pages of slides; depending on what kind of organization it is, I would share what Debian is. If, however, somebody asked me to make a presentation on a possible Debian deployment, it would require knowing the details: how small or big is the network? What are the critical points in it (e.g. many shops and small businesses have a custom-designed billing system whose source code they don’t have and which has to stay on MS-Windows), and on which other systems could you potentially do the deployment? Apart from the actual deployment, there would be time for training, documentation etc., all of which involves hard numbers and time which both parties would have to work out to get some understanding of how this different system works.

  2. And this is where my question comes in. The interview also doesn’t mention the time or date when the presentation was made. We all know that 2014 was only a year away; if the presentation was made 6-9 months before the elections, it is very possible that there was no interest, because it would be time-consuming and there are no guarantees of a successful test. In fact, before this test, which was declared a success, there was another test conducted by DRDO which was a failure. This also raises the question as to when Dr. Saraswat approached the NDA, or vice versa, and when he started actively working on the project. Did it take 5 years to come to this stage, or 2 years or less? That would give some more guidance and a way for us to gauge future success of the project.

Rumour of Merging DRDO and ISRO

There is also a worrying bit of news that the Government of India is thinking of merging DRDO and ISRO into a structure similar to what the Chinese have for their space program, which I think would be disastrous for the Indian space program, for taxpayer money, and for the two organizations themselves.

DRDO work culture

While my mother had the honour of serving within a sub-set of DRDO, and she was friends with a few scientists, one of the major grouses of most scientists was the constant shifting of parameters or specifications. To take a very simple example, suppose you are given the specs of a Maruti 800, a small city car; then, a year or a year and a half down the line, you are told that the design specification has changed to a station wagon or a hatchback; and when you start to design for those, the specs are changed again, a year or two later, to a sports car. Any car enthusiast would know that these are three completely different cars, each with its own unique needs, dimensions, centre of gravity, steering, fuel consumption, the works. Extrapolate that to missiles, where more often than not these design changes were not asked for by the armed forces, who would be the actual users, but by the bureaucracy, i.e. civil servants, many from the IAS, who instead of consulting and building consensus among the people on both sides put forward whatever opinion they have. Of course inter-personality conflicts also occur, and in spite of all this DRDO is able to do what it does. Because of quite a few such conflicts, many a brilliant scientist has been forced to leave DRDO and is now serving either private Indian interests or foreign ones, regretting the best productive years spent at DRDO or whichever sub-unit they were in.

ISRO Work culture

While I do not have relatives working in ISRO, I do and did have friends who work or have worked there. Due to the nature of the work itself, which is more exploratory and peaceful, they are able to collaborate with a lot of educational institutions within India and worldwide, and even with organizations like NASA, ESA and others. The civilian bureaucracy has had a more hands-off approach, which has allowed ISRO to carry out the fantastic achievements it has. The only thing they need to learn from this government is the ability to find money and do more promotion of the good work they are doing. Even if ISRO were to do 1% of the promotion and merchandising that NASA does, they would more than make their money back, while at the same time inspiring millions of young children to take up challenges in the space sciences.

So from the above, it is pretty clear the merger would be disastrous, as the two have very different mind-sets and ways of working. I remember conversing with a military gentleman a couple of years ago, on a short train trip, about some similar topics. The gentleman remarked that it’s not often that they get things to work right the first time, in any of the fields of endeavour the military is in. If they do, even in some small part, they make sure not to disturb or change it, and instead make changes around it, fixing all the other things and processes until there is cohesion. He went on to share some real-life examples from his work which I have since forgotten, but the principle seems good and solid enough, at least to me.

Making Organizations Fun

At the very end, I would like to draw attention to Jonathan Carter’s blog post where he shares about Debian and fun. I found both the art pieces most appropriate, not just for the organizations listed above, but as the calling points of any organization which believes in genuine stewardship of whatever organization they have or hope to take forward.

While I would invite everybody with more than a passing interest in the world of computer science to read Jonathan’s and the other potential DPL (Debian Project Leader) platforms, as well as their rebuttals, the difference between the two pictures above is that the first is an employer-employee model, while the second is more of a volunteer, contributor-steward model. The only perks the DPL enjoys are speaking about Debian in sometimes exotic locations, although that is more than tempered by being part of Debian politics and free software politics, which comes with its own reward-risk scenario and is and can be pretty tricky, as has been observed over the years.

02 April, 2019 10:48PM by shirishag75

Reproducible builds folks

Reproducible Builds: Weekly report #205

Here’s what happened in the Reproducible Builds project between March 24th and March 30th 2019:

Don’t forget that Reproducible Builds is part of the May/August 2019 round of Outreachy, which offers paid internships to work on free software. Internships are open to applicants around the world and are paid a stipend for the three-month internship, with an additional travel stipend to attend conferences. So far we have received more than ten initial requests from candidates; the closing date for applications is April 2nd. More information is available on the application page.

Packages reviewed and fixed, and bugs filed

Test framework development

  • We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. The following changes were done this week:

    • Mattia Rizzolo built a static list of SSH host keys [] so we could build the ssh_config file based on this file [], leading to being able to enable OpenSSH’s StrictHostKeyChecking option [][][].
    • Holger Levsen added a number of links to pages, including Guix’s challenge command [], the F-Droid tests [] as well as NixOS and openSUSE tests [].

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & Holger Levsen and was reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

02 April, 2019 01:11PM

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, March 2019

I was assigned 20 hours of work by Freexian's Debian LTS initiative and carried over 16.5 hours from February. I worked 22.5 hours and so will carry over 14 hours.

I merged changes from stretch's linux package into the linux-4.9 package, uploaded that, and issued DLA-1715. I made another stable update to Linux 3.16 (3.16.64). I then rebased Debian's linux package on that version, uploaded it, and issued DLA-1731. This unfortunately introduced a regression, which I fixed in a second update.

I also reviewed and merged Emilio Pozuelo Monfort's changes to the firmware-nonfree package to address CVE-2018-5383.

02 April, 2019 10:12AM

Abhijith PA

DebUtsav Delhi


Three weeks ago I attended DebUtsav Delhi, organized by the Debian and free software folks in North India.

Group photo

DebUtsav Delhi is the third edition of its kind. Initially Mozilla Delhi backed DebUtsav Delhi when they pitched the idea, but later they withdrew for some reason and became just a supporting member. I must say Debian India events are happening frequently now. Some years ago, Debian in India would tag along with other FLOSS events; now it's DebUtsav giving other FLOSS people a chance to meet around Debian.

As is usual for DebUtsav, this one was also a two-day event with separate tracks for Debian-related talks and for general FLOSS talks. I gave a talk about the Debian LTS project. On the first day's evening, some speakers and organizers gathered for dinner.


It's funny that most of the Debian people gathered there were contributing or had contributed to the Ruby and JavaScript teams. There is a strong reason for that: all the contributors to Debian from India after 2014 branched out from a single person who works mostly on Ruby and JS - Pirate Praveen. You can expect a blog post from him about DebUtsav. He is contesting in the upcoming Lok Sabha elections and is quite busy with that.

On the second day there were talks from SFLC on digital security and privacy. Srud conducted an interactive session on gender diversity in FLOSS projects. We reserved the afternoon sessions for a Bug Squashing Party and a packaging tutorial for newcomers. All together it was a wonderful gathering. I also met isaagar, with whom I had corresponded a lot on Matrix but was finally able to meet IRL.

Special appreciation to Hamara Linux for sponsoring the event. They are becoming the de facto sponsor of every Debian event in India.

02 April, 2019 08:24AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

tint 0.1.1: New Styles

With almost a year passed since the previous 0.1.0 release, a nice new release of the tint package arrived on CRAN today. Its name expands to tint is not tufte, as the package offers a fresher take on the Tufte style for html and pdf presentations.

This version adds new features, and a new co-author. Jonathan Gilligan calmly and persistently convinced me that there was ‘life beyond Roboto’ and I overcame my reluctance to offer other fonts. So now we have two additional reference implementations for Lato and Garamond which look stunning, as well as generally enhanced support for fonts, font families and entire LaTeX templates, all via the standard YAML headers.

A screenshot for Lato follows:

And another for Garamond:

The full list of changes is below.

Changes in tint version 0.1.1 (2019-03-30)

  • The two pdf styles have been extended allowing more flexible LaTeX customization particularly for fonts but also link colour. (Jonathan in #30)

  • Two example documents were added pre-rendered (and not as vignettes, to keep processing lighter)

  • Documentation for the HTML style was updated (Jonathan in #30).

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the tint page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

02 April, 2019 01:03AM

April 01, 2019

hackergotchi for Gunnar Wolf

Gunnar Wolf

Debian @ Internet Freedom Festival #internetFF

Today, we had a little get-together of DDs in València, Spain.

Most of us were here to attend the Internet Freedom Festival (IFF), plus Héctor and Filippo, who are locals. We missed some DDs (because in a 2500+ people gathering... well, you can never find everybody you are looking for!), so sorry folks that we missed you!

Sadly, we have no further report than having enjoyed a very nice dinner. No bugs were closed, no policy was discussed, no GRs were drafted, no cabals were hatched.


01 April, 2019 11:07PM by gwolf

Hideki Yamane

Specifying debian mirror for your docker image

For people who use Debian as docker base image...

  • Do not use ftp.debian.org in your apt line in a docker image; use deb.debian.org instead. It chooses a mirror near the user.
  • If you use old Jessie, use cdn-fastly.deb.debian.org instead of deb.debian.org, since the old apt in Jessie cannot handle redirects.
  • Do not use httpredir.debian.org; it is obsolete.
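As a sketch of the substitution these bullets describe (the helper function below is my own illustration, not any official tool; the function name and arguments are hypothetical):

```python
# Illustrative sketch: rewrite an apt sources line per the advice above.
# fix_mirror() is a hypothetical helper, not part of apt or docker.
def fix_mirror(sources: str, release: str = "stretch") -> str:
    if release == "jessie":
        # the old apt in jessie cannot follow redirects
        mirror = "cdn-fastly.deb.debian.org"
    else:
        mirror = "deb.debian.org"
    return (sources
            .replace("ftp.debian.org", mirror)
            .replace("httpredir.debian.org", mirror))  # obsolete redirector

print(fix_mirror("deb http://ftp.debian.org/debian stretch main"))
# deb http://deb.debian.org/debian stretch main
```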

01 April, 2019 09:08AM by Hideki Yamane (noreply@blogger.com)

hackergotchi for Julien Danjou

Julien Danjou

Writing Your Own Filtering DSL in Python

A few months ago, we saw how to write a filtering syntax tree in Python. The idea behind this was to create a data structure — in the form of a dictionary — that would allow filtering data based on conditions.

Our API looked like this:

>>> f = Filter(
...     {"and": [
...         {"eq": ("foo", 3)},
...         {"gt": ("bar", 4)},
...     ]})
>>> f(foo=3, bar=5)
True
>>> f(foo=4, bar=5)
False

While such a mechanism is pretty powerful to use, the input data structure format might not be user friendly. It's great to use, for example, with a JSON based REST API, but it's pretty terrible to use for a command-line interface.

A good solution to that problem is to build our own language. That's called a DSL.

Building a DSL

What's a Domain-Specific Language (DSL)? It's a computer language that is specialized to a certain domain. In our case, our domain is filtering, as we're providing a Filter class that allows filtering a set of values.

How do you build a data structure such as {"and": [{"eq": ("foo", 3)}, {"gt": ("bar", 4)}]} from a string? Well, you define a language, parse it, and then convert it to the right format.

In order to parse a language, there are a lot of different solutions, from implementing manual parsers to using regular expressions. In this case, we'll use lexical analysis.

First Iteration

Let's start small and define the base of our grammar. That should be something simple, so we'll go with <identifier><operator><value>. For example, "foobar"="baz" is a valid sentence in our grammar and will convert to {"=": ("foobar", "baz")}.

The following code snippet leverages pyparsing for parsing the string and specifying the grammar:

import pyparsing

identifier = pyparsing.QuotedString('"')
operator = (
    pyparsing.Literal("=") |
    pyparsing.Literal("≠") |
    pyparsing.Literal("≥") |
    pyparsing.Literal(">") |
    pyparsing.Literal("≤") |
    pyparsing.Literal("<")
)
value = pyparsing.QuotedString('"')

match_format = identifier + operator + value

print(match_format.parseString('"foobar"="123"'))
# Prints:
# ['foobar', '=', '123']

With that simple grammar, we can parse and get a token list composed of our 3 items: the identifier, the operator and the value.

Transforming the Data

The list above, in the format [identifier, operator, value], is not really what we need in the end. We need something like {operator: (identifier, value)}. We can leverage the pyparsing API to help us with that.

def list_to_dict(pos, tokens):
    return {tokens[1]: (tokens[0], tokens[2])}

match_format = (identifier + operator + value).setParseAction(list_to_dict)

print(match_format.parseString('"foobar"="123"'))
# Prints:
# [{'=': ('foobar', '123')}]

The setParseAction method allows modifying the returned value of a grammar token. In this case, we transform the list into the dict we need.

Plugging the Parser and the Filter

In the following code, we'll reuse the Filter class we wrote in our previous post. We'll just add the following code to our previous example:

def parse_string(s):
    return match_format.parseString(s, parseAll=True)[0]

f = Filter(parse_string('"foobar"="baz"'))

print(f(foobar="baz"))
print(f(foobar="biz"))
# Prints:
# True
# False

Now, we have a pretty simple parser and a good way to build a Filter object from a string.

As our Filter object supports complex and nested operations, such as and and or, we could also add those to the grammar — I'll leave that to you, reader, as an exercise!
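One possible sketch of that exercise, using pyparsing's infixNotation helper (my own approach, not necessarily the article's intended solution; the helper names combine and condition are mine):

```python
# Sketch: extend the grammar with "and"/"or", producing the nested dict
# format that Filter already understands.
import pyparsing

identifier = pyparsing.QuotedString('"')
operator = (
    pyparsing.Literal("=") |
    pyparsing.Literal(">") |
    pyparsing.Literal("<")
)
value = pyparsing.QuotedString('"')

def list_to_dict(tokens):
    return {tokens[1]: (tokens[0], tokens[2])}

comparison = (identifier + operator + value).setParseAction(list_to_dict)

def combine(tokens):
    # infixNotation flattens same-operator chains: [lhs, "and", rhs, "and", ...]
    t = tokens[0].asList()
    return {t[1]: t[0::2]}

condition = pyparsing.infixNotation(
    comparison,
    [
        # "and" listed first, so it binds tighter than "or"
        (pyparsing.Keyword("and"), 2, pyparsing.opAssoc.LEFT, combine),
        (pyparsing.Keyword("or"), 2, pyparsing.opAssoc.LEFT, combine),
    ],
)

print(condition.parseString('"foo"="3" and "bar"="4"', parseAll=True)[0])
# {'and': [{'=': ('foo', '3')}, {'=': ('bar', '4')}]}
```

A single comparison still parses to the plain {operator: (identifier, value)} form, so existing inputs keep working.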

Building your own Grammar

pyparsing makes it easy to build one's own grammar. However, it should not be abused: building a DSL means that your users will have to discover and learn it. If it is very different from what they already know, it might be cumbersome for them.

Finally, if you're curious and want to see a real world usage, Mergify condition system leverages pyparsing to implement its parser. Check it out!

01 April, 2019 07:25AM by Julien Danjou

Paul Wise

FLOSS Activities March 2019





  • Debian wiki: update email addresses, whitelist email addresses, whitelist domains
  • Debian security tracker: merge patches



All work was done on a volunteer basis.

01 April, 2019 03:06AM

March 31, 2019

hackergotchi for Keith Packard

Keith Packard


SAMD21 USB vs Windows 7

I'm mostly used to USB being really hard to get working on a new SoC, everything from generating a stable 48MHz clock to diving through thousands of register definitions to get the device programmed to receive that first SETUP packet. However, I'm used to that part being the hardest section of the work; once the first SETUP packet has been received and responded to successfully, it's usually downhill from there.

Not this time.

I've written about Snek on the SAMD21G18A before, and this is about the same board. USB on this device is medium-complicated, as the device supports both host and device modes, plus has a range of 'optimizations' which always make simple operation harder. It took a few hours of hacking to get SETUP packets flowing, but after that (at least when talking to Linux and Mac OS X), the rest of the USB driver was pretty simple.

Enter Windows 7

I'm pushing towards a Snek 1.0 release and was testing snekde on Windows 7. It works great with the classic Arduino Duemilanove, but when I plugged in the Metro M0 board, it got stuck after I typed one character. "That's odd," I thought.

I figured it'd be a simple matter of a stuck interrupt or other minor mistake in the SAMD21 USB driver that I wrote. So, I broke out my trusty Beagle USB analyzer to see where the USB link was getting stuck.

IN-NAK ... IN DATAx ...

USB is an odd protocol; data from the device to the host has to sit in the device waiting for the host to come and ask for it. When the device is in use, the host polls for data by sending an IN packet. When there's no data to send back, the device sends a NAK reply. When there is data, the device sends a DATAx packet and the host replies with an ACK packet.

In my case, the host sends thousands of IN packets waiting for data, and the device responds with an equally huge number of NAK packets. The first time data was queued from the device to the host, the device responded to the IN packet with a DATAx packet and the host ACK'd it. After that, the host never sent another IN packet again. It would happily send its own data using OUT packets, and the device would receive that data, and of course the usual stream of SOF (start of frame) packets kept streaming along. But not a single IN packet to be seen.
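As a toy illustration of the IN/NAK handshake just described (a simplified model of my own, not actual driver code; real hardware handles this in the USB peripheral):

```python
# Toy model of the device side of USB IN transactions: the host polls with
# IN tokens; the device answers NAK while its endpoint buffer is empty and
# DATAx once firmware has queued data.

class Endpoint:
    """Hypothetical device-side IN endpoint."""

    def __init__(self):
        self.buffer = []        # payloads queued by device firmware
        self.data_toggle = 0    # DATA0/DATA1 alternation per accepted transfer

    def queue(self, payload):
        self.buffer.append(payload)

    def handle_in(self):
        """Respond to a host IN token."""
        if not self.buffer:
            return "NAK"        # nothing to send: host keeps polling
        packet = "DATA%d:%s" % (self.data_toggle, self.buffer.pop(0))
        self.data_toggle ^= 1   # host ACK assumed; toggle for next transfer
        return packet

ep = Endpoint()
print(ep.handle_in())   # NAK (nothing queued yet)
ep.queue("a")
print(ep.handle_in())   # DATA0:a
ep.queue("a")
print(ep.handle_in())   # DATA1:a
```

In the Windows 7 failure described here, the host simply stops issuing IN tokens after the first DATAx/ACK exchange, so the device's IN handler is never invoked again, no matter how much data it queues.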

Differential Debugging

Well, I've got a lot of USB devices around here, so I hooked up one of our TeleBT v3.0 devices. That worked just fine, which was good, as we've sold hundreds of those and it would kinda suck to discover that some Windows boxes weren't compatible.

A visual examination of the traces as seen captured by the Beagle analyzer didn't show anything obvious. But, it's often the little details that break things.

So, I hacked up the SAMD21 board to appear to be the same device as the TeleBT -- same VID/PID, same names, same serial number. Everything.

Now Windows can't seem to tell the difference. It uses the same COM port for both, at least.

I devised a simple test — plug in the device, start PuTTY and then type two characters ('a', or 0x61). Because both devices echo whatever you send to them, I should get two characters back. Because they're typed separately, those two characters will be sent in separate OUT transactions, and the echoes should be sent back in two IN transactions.

I captured traces from both devices:

TeleBT-v3.0 (STM32L151):

Metro M0 (SAMD21G18A):

The 'trimmed' versions elide timing and packet sequence information which can't be easily replicated exactly between the two tests; that "shouldn't" matter, at least according to my understanding of USB. With those versions, I can do a text diff of the packet traces and find that, aside from a different number of SOF and IN-NAK transactions, the only difference appears at the end:

$ diff -u stm32l.trim samd21.trim | tail +231
 0  1 B  01 04 OUT txn 61   
 1  3 B  01 04    OUT packet E1 01 BA   
 1  4 B  01 04    DATA0 packet C3 61 81 57   
 1  1 B  01 04    ACK packet D2   
-0  1 B  01 05 IN txn   [57536 POLL] 61   
-1    01 05    [57536 IN-NAK]    
+0  1 B  01 05 IN txn   [50387 POLL] 61   
+1    01 05    [50387 IN-NAK]    
 1  3 B  01 05    IN packet 69 81 0A   
 1  4 B  01 05    DATA0 packet C3 61 81 57   
 1  1 B  01 05    ACK packet D2   
-0      [1004 SOF]  [Frames: 853 - 1856]   
+0      [2000 SOF]  [Frames: 138 - 89] [Periodic Timeout]  
+0      [2000 SOF]  [Frames: 90 - 41] [Periodic Timeout]  
+0      [572 SOF]  [Frames: 42 - 613]   
 0  1 B  01 04 OUT txn 61   
 1  3 B  01 04    OUT packet E1 01 BA   
 1  4 B  01 04    DATA1 packet 4B 61 81 57   
 1  1 B  01 04    ACK packet D2   
-0  1 B  01 05 IN txn   [83901 POLL] 61   
-1    01 05    [83901 IN-NAK]    
-1  3 B  01 05    IN packet 69 81 0A   
-1  4 B  01 05    DATA1 packet 4B 61 81 57   
-1  1 B  01 05    ACK packet D2   
-0    01 01 [16 IN-NAK]  [Periodic Timeout]  
-0    01 05 [178185 IN-NAK]  [Periodic Timeout]  
-0      [2000 SOF]  [Frames: 1857 - 1808] [Periodic Timeout]  
-0    01 01 [16 IN-NAK]  [Periodic Timeout]  
-0    01 05 [147487 IN-NAK]  [Periodic Timeout]  
-0      [2000 SOF]  [Frames: 1809 - 1760] [Periodic Timeout]  
-0      [474 SOF]  [Frames: 1761 - 186]   
-0    01 05 [34876 IN-NAK]    
-0   ! 01 05 [1 ORPHANED]    
-1   U 01 05    [1 IN]    
-0    01 01 [16 IN-NAK]    
-0      Capture stopped  [Sun 31 Mar 2019 02:25:32 PM PDT]  
+0      [2000 SOF]  [Frames: 614 - 565] [Periodic Timeout]  
+0      [1163 SOF]  [Frames: 566 - 1728]   
+0      Capture stopped  [Sun 31 Mar 2019 02:36:23 PM PDT]  

You can see both boards receiving the first 'a' character and then sending it back. Then both boards receive the second 'a' character, but only the stm32l gets the IN packets to which it can respond with the DATAx packet containing the 'a' character. The samd21 board gets only SOF packets.

Next Steps?

I'm heading out of town on Tuesday to help with the NASA Student Launch, so I think I'll let this sit until I get back. Maybe I'll come up with a new debugging idea, or maybe I'll hear about a fancier USB monitoring device that might capture details that I'm missing.

Anyone with suggestions or comments is welcome to send them along; I'd like to get this bug squashed and finish the rest of the Snek 1.0 release process.

31 March, 2019 10:21PM

Iustin Pop

Kobo Forma eReader review

An e-ink ebook reader in 2019? Why?

Well, “why not” is too brief, so let’s expand on it a bit.

Why an eReader?

I used eBook readers for a long time, as the eInk format was a reasonably good replacement for paper books. I don’t remember exactly when I started (sometime after 2010), but after my Sony PRS-650 was stolen (argh, stupid me) and I moved to a Sony PRS-T3, my use of it decreased over time.

Not entirely sure why - maybe the slow UI, or the fact that it didn’t have a backlight, or who knows what. Together with my overall reading volume decreasing, my ebook use dropped to the extent that maybe I read 2-3 books per year on it. I even have to admit reading more on a tablet, despite the drawbacks of reading on an LCD screen. One thing I really appreciated on tablets was the fast UI and the large screen. But since Sony closed its eReader division, no new models…

Then, a few weeks back, there was a thread at work about eBook readers vs. tablets, and seeing the people arguing left and right made me look at what would be the choices today when buying a new eReader, and to my surprise, there are pretty large screens available!

The largest ones are only 8 inches, so smaller than a tablet, but maybe that is enough? I read more, and the Kobo Forma looked very interesting: good backlight (with blue-light filter), good size, albeit a very weird form factor.

Fortunately digitec had one in the showroom, so I was able to hold it in my hands, and together with what I read about it I was convinced. Sad to say goodbye to my Sony…

Why Kobo?

From my research, basically the choice is between Kindles and non-Kindles. I have serious reservations about Amazon, and on top of that I prefer the wide availability of the epub format, so the choice, as many times before, was clear.

On the “others” side, it seems there are two main choices, at least in Europe: either Tolino or Kobo. From reading about these two, Tolino seems the more restrictive environment, with fewer formats supported. On top of that, it seems that Kobo bought either Tolino, the Tolino brand, or the Tolino OS (not entirely sure), so double the argument to go with Kobo.

So on the Kobo large-eReader side, there’s either the Kobo Aura One or the Kobo Forma, which is much newer (~2 years). The reviews didn’t point much either way, except that the Forma also has physical buttons (something the Sony readers did very well). Very clearly then, the Forma it was.

I’d have wished the 32 GB model were available, but no, only the 8 GB model is sold here; that’s enough, though. So, bought, and eager to see how it is.

First impressions

Note: all the handling here is with a cover, not standalone.

The form factor, and especially the asymmetry of the layout: not a problem at all after a couple of hours. Despite still having a 4:3 (1.33×) screen aspect ratio, the (overall) more square ~1.1x ratio is actually well suited visually for landscape reading; the Sony had a 4:3 (1.33×) screen but the body was an almost 1.5× ratio (due to buttons on the bottom), much more book-like, which seems awkward to read in landscape mode, irrespective of actual screen content. So, not only is the form factor not a downside, but it’s actually an advantage!

The backlight works very well, although I only use it in the 0-10% range, and tweaking exact value here is a bit cumbersome. But it works, and the blue light filter is awesomeness++, no problems with it at night - on the contrary, it easily puts me to sleep :) The only small downside is that it eats battery faster, not sure whether due to the backlight or the larger screen, but I presume the former.

The biggest surprise though was how easy the large screen is on the eyes, and the fact that it is large enough to enable reading technical books/documents! This I was never able to do on my Sonys, due to the small size, but on the Forma, double-good. Even diagrams/drawings work OK (not well), so all good.

Portability-wise, it’s large, so doesn’t fit in pockets (and I tried). Aside from that, I can still easily read on the train or while walking so all good.

ePub vs. kePub, oh my!

The first weird thing that one reads about Kobo eReaders is the so-called “kepub” format. Kobo, it seems like many other companies, thought they can do better by having their own eBook format.

But “kepub” is just epub with a custom extension, or at least epub is forward compatible with kepub. Using a book that the reader sees as kepub format brings a couple of advantages:

  • progress report (this chapter left time, overall book left time)
  • per-chapter page numbers (which annoys me to no end)
  • zooming into drawings (double-tap)

The latter is very useful, the first point mildly so, so I’ve settled on using “kepub” as the book format.

Book management

As usual, I manage my eBook collection using Calibre. I have a large collection of books bought over the years, across different eReader brands, so I can’t rely on Kobo’s store or any other store to manage my books. Sure, I might buy books on Kobo if I really need to, but unless they’re DRM-protected, they’ll go through Calibre before landing on my reader. To be very clear, I buy all my books, but I strongly lean towards DRM-free publishers (which is one reason to avoid a Kindle, but not the only one) - I mean companies such as LeanPub, Manning Publications, etc. A non-exhaustive list, just some recent examples.

Fortunately Calibre works well with the Kobo too, so moving my books over was easy, at least for the DRM-free ones. But kepub! Even for this there’s a nice solution: the Kobo Touch Extended plugin, which converts to kepub on the fly (and doesn’t require storing one’s books in both epub and kepub). That driver works very well; collection support is even there, a bit more flexible than with the Sonys, and on top it has series support as well, so happy camper.

Collections note: due to how the (native) reader works, setting the book metadata correctly is only possible once the reader has seen the book. So: plug in, transfer from Calibre, eject, insert again, and at this point Calibre can set the collections for the book. Slightly annoying, but not much.

“Hacking” the Kobos

While reading how to configure my reader, I learned to my surprise that installing additional software on these readers is rather trivial. Well, sadly, that means there’s no firmware update protection (as in verification), but I guess you can’t have both. The only snag is how to launch this additional software (and then one reads about KSM vs fmon vs kfmon), but otherwise it’s very simple.

So what additional software can one install? Well, to my surprise, the actual “reader” part. While “nickel” (the built-in reader) works well with (k)epub, it still has limitations, some of them small, some of them large. For example: hyphenation support, ligatures, but most of all, PDF (non-reflowable content) support.

I’m fortunate to not have many PDFs (none, actually, I think), but I have to admit this screenshot looks just beautiful. If that link is broken, go to the Plato homepage. There’s also the other option of KOReader, which seems more featured (e.g. native integration with Calibre!). And both of these are open source!

For now, I’m staying with the official reader, but it’s very good to know one has multiple options, especially as you can still use the native reader (i.e. they’re in addition to it, not replacing it).


I’m just short of two weeks with the Forma, and I’ve already read 2½ books on it. For me, that’s surprising - as I was saying, that’s how much I read in the whole of last year on my previous reader. So far I like this device very much, and I definitely recommend it for both normal books and - for the first time - technical books.

Happy to answer any question (I can) about the device, just ask below.

31 March, 2019 05:24PM

hackergotchi for Jonathan Carter

Jonathan Carter

Free Software Activities (2019-03)

Wow. March is over already. The picture above was taken on a weekend away on a wine farm in Robertson, Western Cape.

Debian packaging work

2019-03-01: Upload new upstream version of bundlewrap (3.6.0-1) to debian unstable.

2019-03-05: Work on updating python-aniso8601 to version 5.1.0, defer upload due to new dependency: relativetimebuilder (needs packaging).

2019-03-11: Upload live-config (5.20190312) to debian unstable (Closes: #921921).

2019-03-12: Upload new upstream version of powerlevel9k (0.6.7-1) to debian unstable.

2019-03-13: File bug for removal of stale python-fabulous-doc package from debian unstable (ROM: #924469).

2019-03-14: Upload new upstream version of gnome-shell-extension-dash-to-panel (19-1) to debian unstable

2019-03-23: Upload new upstream version of bundlewrap (3.6.1-1) to debian unstable.

2019-03-23: Work on updating gamemode (1.2-1 to 1.3-1), some build problems with inih submodules.

2019-03-23: Upload new upstream version of gnome-shell-extension-dashtodock (66-1~exp1) to debian experimental.

2019-03-24: Upload calamares (3.2.4-4) to debian unstable.

2019-03-24: Upload connectagram (1.2.9-4) to debian unstable.

2019-03-24: Upload fracplanet (0.5.1-4) to debian unstable.

2019-03-24: Upload fractalnow (0.8.2-3) to debian unstable.

2019-03-25: Upload new upstream version of xfce4-screensaver (0.1.4-1~exp1) to debian experimental (Closes: #921835).

2019-03-25: Merge MR#1 for calamares (fix typo).

2019-03-26: File ITP for gnome-shell-extension-draw-on-your-screen (ITP: #925518).

2019-03-26: Upload live-wrapper (0.9) to debian unstable (Closes: #924000).

2019-03-28: Upload xfce4-screensaver (0.1.3-1~exp2) to debian experimental.

Debian package sponsoring

2019-03-10: Sponsor package jag (0.3.5-4) for debian unstable (e-mail request).

2019-03-12: Sponsor package vitetris (0.57.2-3) for debian unstable (mentors.debian.org request) (Closes: #923969).

2019-03-12: Sponsor package blastem (0.6.3-1) for debian unstable (mentors.debian.org request) (Closes: #924177).


Lots of stuff is not mentioned here; I’m just not used to tracking this, but will hopefully get better at it.

2019-03-05: Take on two new DebConf roles that will consume a lot of time in the immediate future. I’m joining Nattie in a mentoring role for the DC19 team and heading up the bursaries team for the DC19 cycle.

2019-03-15: DebConf committee meeting to decide DC20 location.

Debian quality assurance

2019-03-11: Spot check latest weekly live builds mostly to check EFI/BIOS status when installing via calamares and current status of all desktop environments.

2019-03-30: Troubleshoot grub/luks/live issues.

Debian project leader campaign

Answered lots of questions throughout the month on the debian-vote list that you can read there.

2019-03-14: Jump into the volcano and declare my self-nomination to run for Debian project leader.

2019-03-18: Submit platform for DPL election to Debian secretary.

2019-03-19: Publish blog post “Running for DPL“.

2019-03-20: Publish blog post “GitLab and Debian“.

2019-03-23: Work on platform rebuttals.

2019-03-26: Publish blog post “DPL 2019 Election: Rebuttals“.

2019-03-28: Publish blog post “Fun and Debian“.

31 March, 2019 04:47PM by jonathan

hackergotchi for Chris Lamb

Chris Lamb

Free software activities in March 2019

Here is my monthly update covering what I have been doing in the free software world during March 2019 (previous month):

  • My activities as the current Debian Project Leader are covered in my Bits from the DPL (March 2019) email to the debian-devel-announce mailing list. Attentive followers of the on-going Debian Project Leader Elections will have noted that I am not running for a consecutive third term, so this was therefore my last such update, at least for the time being…

  • Presented at the Free Software Foundation's 2019 edition of LibrePlanet at Massachusetts Institute of Technology, Cambridge, MA on Redis Labs and the tragedy of the Commons Clause. It was great catching up with a large number of free software friends and colleagues. A splendid event as usual but a special congratulations here to Deb Nicholson for winning the FSF's award for the Advancement of Free Software.

  • As part of my duties of being on the board of directors of the Open Source Initiative I attended our monthly board meeting, participated in various licensing discussions occurring on the internet and formally approved the results of the recent OSI Board Member Election which, as it happens, means that the Board is now predominantly female.

  • Updated my pull request for the shadow UNIX password system to make the build reproducible in order to support the case where secure_getenv(3) is not provided by the system C library. [...]

  • Opened pull requests for the Toil workflow engine [...] and the Vue.js URL router [...] to make their respective builds reproducible.

  • Attended a Debian Bug Squashing Party in Cambridge, United Kingdom. Thanks to Steve McIntyre for arranging and hosting the event.

  • For the Tails privacy-oriented operating system I reviewed and tested a number of feature branches (eg. #16452 & #16559) as well as contributed to a number of discussions on IRC, the mailing lists and on the issue tracker itself (eg. #16552).

  • Updated my django-agpl library — which makes it easier for Django web applications to satisfy the conditions of the GNU Affero General Public License — to set the correct mimetype for .zip files. [...]

  • Fastmail recently updated their user interface which had broken my Fastmail Enhancement Suite Chrome browser extension, requiring some attention. [...]

  • More hacking on the Lintian static analysis tool for Debian packages:

    • Check for placeholder "<project>" strings in debian/watch files as it can result in uscan(1) generating a file with shell metacharacters. (#923589)
    • Support dh-sequence-{gir,gnome,python3} virtual packages as satisfying various build-dependencies. (#924082)
    • Fix false-positives for the version-substvar-for-external-package tag when the Provides field contains multiple items or leading whitespace. (#833608)
    • Correct false-positives when checking for dh-runit packages that lack a Breaks substitution variable. (#924116)
    • Don't detect non-maintainer upload versions when checking for maintainer scripts that support "ancient" package versions. (#924501)
    • Add itialize to the list of spelling-error-in-binary exceptions. (#923725)
    • Update a large number of tag long descriptions. [...][...][...]

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The initiative is proud to be a member project of the Software Freedom Conservancy, a not-for-profit 501(c)(3) charity focused on ethical technology and user freedom. Conservancy acts as a corporate umbrella, allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:

I also made the following changes to our tooling:


diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Always warn if the tlsh module is not available (not just if a specific fuzziness threshold is specified) to match the epilog of the --help output. This prevents file rename detection support from being silently unavailable. (#29)
  • Provide explicit help when the libarchive system package is missing / incomplete. (#50)
  • Fix a number of tests when using GhostScript 9.20 vs 9.26 for Debian stable vs. the same distribution with the security/point release applied. [...]
  • Improved the displayed comment whenever resorting to a binary diff to mention the file's type. (#49)
  • Make --use-dbgsym a ternary option so that it can be disabled entirely. (re. #2)
  • Explicitly mention when the guestfs module is missing at runtime and that we are therefore falling back to a binary diff. (#45)
  • Tidied definition of the no file-specific differences were detected message suffix. [...]
  • Corrected a "recurse" typo [...] and uploaded version 113 to Debian unstable.


strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.


Patches contributed

  • pymongo: Please update the Homepage field. (#924078)

  • wondershaper: Suggest using $IFACE in an /etc/network/interfaces reference. (#924011)

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

  • Investigated and triaged cron, python2.7, python3.4, systemd, openssl (CVE-2019-1543), etc.

  • Frontdesk duties, responding to user/developer questions, reviewing others' packages, etc., particularly around the removal of the wheezy and jessie suites.

  • Issued DLA 1719-1 — it was discovered that there was a denial of service vulnerability in the libjpeg-turbo JPEG image library. A heap-based buffer over-read could be triggered by a specially-crafted bitmap file.

  • Uploaded ruby-i18n 0.7.0-2+deb9u1 to stretch-security to prevent a remote denial-of-service vulnerability via an application crash. (#913093)

  • Updated the website to add some missing announcement texts.


Finally, I also made the following non-maintainer uploads (NMUs) to fix release-critical (RC) bugs for the upcoming Debian buster release:

FTP Team

As a Debian FTP assistant I ACCEPTed 14 packages: gcc-9, gcc-9-cross, gcc-9-cross-ports, gnome-shell-extension-bluetooth-quick-connect, golang-github-facebookgo-structtag, golang-github-rs-zerolog, golang-gopkg-stretchr-testify.v1, httpdirfs-fuse, maint-guide, nvidia-graphics-drivers, piuparts, pyglet, qtbase-opensource-src & qtdeclarative-opensource-src.

31 March, 2019 03:55PM

Joerg Jaspert

Miscellaneous, DPL election, Archive changes, Crazyness


As usual, a long time since my last blog. Not that I have been idle, but usually I prefer doing real things over blogging. But hey, here goes one, could be getting long too.

A lot has happened since I last blogged. Let's start with the boring stuff: I managed to get myself a slipped disc. Not a boring one either, one that could have been treated “old school”. That would have been boring, so I managed to get the whole disc out, leaving two bones of my spinal column sitting directly on each other. All the doctors were quite surprised and told me they had never seen an issue that big.

Surgery, recovery time, life

As it was impossible to do anything except have surgery, there was no question about it: surgery it had to be. I surprised the docs by telling them “not before day X”, as I had a quite important appointment on that day - my son starting school. The day after, I let them cut me open and put in an implant.

Recovery times.

Some three and a half hours of surgery take an impressive amount of energy out of you; I wouldn't have thought it would be that bad. Combined with an order to mostly lie down flat for quite a while, it took longer than expected to get back on my feet.

At some point I was told to mostly walk or lie down, and to avoid standing still or sitting too much. At the same time my son's school was looking for help during the noon hours, so I used my walking time to supervise some elementary school kids. Turns out I am way better at it than anyone expected and they really love me - they keep asking when I'll come again, now that I am back at my normal job.


Except for that excitement with the surgery, life is happily normal, which is good. With two kids around it is never boring. Work sponsored a new e-bike for me, so I am using that a lot - and my older son always wants to come along. By now he manages 40 km tours and plans to reach at least 50 km this summer, more if possible. Impressive for a six (soon seven) year old.


As some may have noticed, I nominated myself for this year's DPL election. Crazy times, indeed. There were four other candidates; one has withdrawn in the meantime, so we will have a ballot with five options (don't forget the famous NOTA).

My company has helpfully agreed to quite a bunch of time I can take off, should I really get elected, which I think will also help the other areas I am active in.

I won’t bore you with repeating what I said in my platform or on the Debian Vote list; if you are interested in the DPL election business, feel free to read through it all. It has certainly been an interesting campaigning period so far.

Whoever wins in the end, I am sure they will be a good DPL.

Archive changes

Something that turned out to be felt by more people out there than I expected were my recent archive changes. I finally went and archived the wheezy release, long out of support as it is. And also jessie, which is only partly out of support - LTS is still active.

Deleting files

Just archiving (moving things to the archive.debian.org machines) does not really gain much; the goal is to free up space in the main archive and on the mirrors. That means deleting the suites and all their files from the archive. For wheezy, that was simple: just use the dak archive tools to set all the involved suites empty. The usual cleanup processes will then get rid of the files, and in a way that won't break mirrors - say, deleting only a certain number of files per mirror push, as our mirrors limit how many files can be deleted at once.
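The rate-limiting idea can be sketched in a few lines of shell. The file names and the per-push limit of two are invented for illustration; the real work is done by dak's cleanup processes:

```shell
# Hedged sketch of rate-limited deletion: remove at most $limit files
# per run, so a single mirror push never deletes more than the mirrors
# allow; whatever is left over waits for the next scheduled run.
mkdir -p pool && touch pool/a pool/b pool/c pool/d
printf 'pool/a\npool/b\npool/c\npool/d\n' > obsolete.list

limit=2
head -n "$limit" obsolete.list | xargs -r rm -f --        # this "push"
tail -n +"$((limit + 1))" obsolete.list > obsolete.next   # queue the rest
mv obsolete.next obsolete.list
```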

For jessie it was a little more complicated, as the LTS architectures should continue to exist. So it wasn’t a simple “delete it all”, but the right set of files needed to stay around.

Turns out that, while those removals are all fine, at least some suites should stay alive, even if they are empty - otherwise they generate errors on users' systems that don't really need to happen. Say, the jessie-updates suite: while it is empty and not receiving any updates anymore (LTS goes via the security archive only), it is configured by default everywhere.
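A hedged sketch of what this means in practice: a default jessie installation carries a line like the commented-out one below, and once the suite disappears from the mirrors, apt-get update starts failing for everyone who still has it (the host names here are illustrative; adjust for your setup):

```
# /etc/apt/sources.list on a jessie system still covered by LTS:
deb http://deb.debian.org/debian jessie main
deb http://security.debian.org/ jessie/updates main
# jessie-updates is empty and gone from the mirrors; drop or comment it
# out so "apt-get update" stops erroring on the missing Release file:
# deb http://deb.debian.org/debian jessie-updates main
```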


A while ago there have been two cases in the Debian project where we had to enforce rules and actively take away membership. Something which never was, nor will it ever be, an easy decision.

One of those cases resolved itself nicely in the meantime, in time for the running DPL election even.

The other one seems hell-bent on proving our decision right every other day, and on ensuring they won't ever be able to be called a Debian Developer, as sad as it is. It is astonishing how much one can defy reality, spit out lies and false accusations, and live in a bubble. The sad thing is just how much energy this needlessly takes away from all the people involved.

31 March, 2019 09:09AM