February 24, 2020

Russ Allbery

Book haul

I have been reading rather more than my stream of reviews might indicate, although it's been almost all non-fiction. (Since I've just started a job in astronomy, I decided I should learn something about astronomy. Also, there has been some great non-fiction published recently.)

Ilona Andrews — Sweep with Me (sff)
Conor Dougherty — Golden Gates (non-fiction)
Ann K. Finkbeiner — A Grand and Bold Thing (non-fiction)
Susan Fowler — Whistleblower (non-fiction)
Evalyn Gates — Einstein's Telescope (non-fiction)
T. Kingfisher — Paladin's Grace (sff)
A.K. Larkwood — The Unspoken Name (sff)
Murphy Lawless — Raven Heart (sff)
W. Patrick McCray — Giant Telescopes (non-fiction)
Terry Pratchett — Men at Arms (sff)
Terry Pratchett — Soul Music (sff)
Terry Pratchett — Interesting Times (sff)
Terry Pratchett — Maskerade (sff)
Terry Pratchett — Feet of Clay (sff)
Ethan Siegel — Beyond the Galaxy (non-fiction)
Tor.com (ed.) — Some of the Best from Tor.com 2019 (sff anthology)

I have also done my one-book experiment of reading Terry Pratchett on the Kindle; it was a miserable experience due to the footnotes, so I'm back to buying Pratchett in mass market paperback.

24 February, 2020 05:04AM

Review: Sweep with Me

Review: Sweep with Me, by Ilona Andrews

Series: Innkeeper Chronicles #5
Publisher: NYLA
Copyright: 2020
ISBN: 1-64197-136-3
Format: Kindle
Pages: 146

Sweep with Me is the fifth book in the Innkeeper Chronicles series. It's a novella rather than a full novel, a bit of a Christmas bonus story. Don't read this before One Fell Sweep; it will significantly spoil that book. I don't believe it spoils Sweep of the Blade, but it may in some way that I don't remember.

Dina and Sean are due to appear before the Assembly for evaluation of their actions as Innkeepers, a nerve-wracking event that could have unknown consequences for their inn. The good news is that this appointment is going to be postponed. The bad news is that the postponement is to allow them to handle a special guest. A Drífan is coming to stay in the Gertrude Hunt.

One of the drawbacks of this story is that it's never made clear what a Drífan is, only that they are extremely magical, the inns dislike them, and they're incredibly dangerous. Unfortunately for Dina, the Drífan is coming for Treaty Stay, which means she cannot turn them down. Treaty Stay is the anniversary of the Treaty of Earth, which established the inns and declared Earth's neutrality. During Treaty Stay, no guest can be turned away from an inn. And a Drífan was one of the signatories of the treaty.

Given some of the guests and problems that Dina has had, I'm a little dubious of this rule from a world-building perspective. It sounds like the kind of absolute rule that's tempting to invent during the first draft of a world background, but that falls apart when one starts thinking about how it might be abused. There's a reason why very few principles of law are absolute. But perhaps we only got the simplified version of the rules of Treaty Stay, and the actual rules have more nuance. In any event, it serves its role as story setup.

Sweep with Me is a bit of a throwback to the early books of the series. The challenge is to handle guests without endangering the inn or letting other people know what's going on. The primary plot involves the Drífan and an asshole businessman who is quite easy to hate. The secondary plots involve a colloquium of bickering, homicidal chickens, a carnivorous hunter who wants to learn how Dina and Sean resolved a war, and the attempts by Dina's chef to reproduce a fast-food hamburger for the Drífan.

I enjoyed the last subplot the best, even if it was a bit predictable. Orro's obsession with (and mistaken impressions about) an Earth cooking show is the sort of alien cultural conflict that makes this series fun, and Dina's willingness to take time away from various crises to find a way to restore his faith in his cooking is the type of action that gives this series its heart. Caldenia, Dina's resident murderous empress, also gets some enjoyable characterization. I'm not sure what I thought a manipulative alien dictator would amuse herself with on Earth, but I liked this answer.

The main plot was a bit less satisfying. I'm happy to read as many stories about Dina managing alien guests as Andrews wants to write, but I like them best when I learn a lot about a new alien culture. The Drífan feel more like a concept than a culture, and the story turns out to revolve around human rivalries far more than alien cultures. It's the world-building that sucks me into these sorts of series; my preference is to learn something grand about the rest of the universe that builds on the ideas already established in the series and deepens them, but that doesn't happen.

The edges of a decent portal fantasy are hiding underneath this plot, but it all happened in the past and we don't get any of the details. I liked the Drífan liege a great deal, but her background felt disappointingly generic and I don't think I learned anything more about the universe.

If you like the other Innkeeper Chronicles books, you'll probably like this, but it's a minor side story, not a continuation of the series arc. Don't expect too much from it, but it's a pleasant diversion to pass the time until the next full novel.

Rating: 7 out of 10

24 February, 2020 03:21AM

hackergotchi for Steve McIntyre

Steve McIntyre

What can you preseed when installing Debian?

Preseeding is a very useful way of installing and pre-configuring a Debian system in one go. You simply supply lots of the settings that your new system will need up front, in a preseed file. The installer will use those settings instead of asking questions, and it will also pass on any extra settings via the debconf database so that any further package setup will use them.
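For illustration, a preseed file is just a series of lines giving the owner, the debconf question name, its type, and the value the installer should use instead of asking. Something like this (the exact question names vary between releases, so treat these purely as examples; the wiki page and release-notes example linked below are authoritative):

d-i debian-installer/locale string en_US.UTF-8
d-i keyboard-configuration/xkb-keymap select us
d-i netcfg/get_hostname string debian
d-i mirror/http/hostname string deb.debian.org
d-i mirror/http/directory string /debian
d-i passwd/root-login boolean false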

There is documentation about how to do this in the Debian wiki at https://wiki.debian.org/DebianInstaller/Preseed, and an example preseed file for our current stable release (Debian 10, "buster") in the release notes.

One complaint I've heard is that it can be difficult to work out exactly the right data to use in a preseed file, as the format is not the easiest to work with by hand. It's also difficult to find exactly what settings can be changed in a preseed.

So, I've written a script to parse all the debconf templates in each release in the Debian archive and dump all the possible settings for each release. I've put the results up online at my debian-preseed site in case it's useful. The data will be updated daily as needed to make sure it's current.
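To give an idea of what such a script has to chew through: debconf templates are RFC-822-style stanzas, so extracting the possible settings mostly means pulling out the Template:, Type: and Default: fields of each stanza. Here's a rough sketch of that core step in C (my own illustration, not the actual script):

#include <stdio.h>
#include <string.h>

/* Print the Template/Type/Default fields of each stanza in a
 * debconf templates file named on the command line. */
int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <templates-file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "r");
    if (!f) {
        perror(argv[1]);
        return 1;
    }
    char line[4096];
    while (fgets(line, sizeof line, f)) {
        if (!strncmp(line, "Template:", 9) ||
            !strncmp(line, "Type:", 5) ||
            !strncmp(line, "Default:", 8))
            fputs(line, stdout);
        else if (line[0] == '\n')
            putchar('\n');      /* blank line separates stanzas */
    }
    fclose(f);
    return 0;
}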

24 February, 2020 12:55AM

February 23, 2020

Enrico Zini

Assorted wonders

Daily Science Fiction :: Rules For Living in a Simulation by Aubrey Hirsch
«Listen. We're fairly certain it's true. The laws of the universe just don't make sense the way they should and it's more and more apparent with every atom of gold we run through the Relativistic Heavy Ion Collider and every electron we smash up at the Large Hadron Collider that we are living in a universe especially constructed for us. And, since we all know infinities cannot be constructed, we must conclude that our universe has been simulated.…»
The Missionary Church of Kopimism (in Swedish: Missionerande Kopimistsamfundet) is a congregation of file sharers who believe that copying information is a sacred virtue. It was founded by Isak Gerson, a 19-year-old philosophy student, and Gustav Nipe in Uppsala, Sweden, in the autumn of 2010.[6] The Church, based in Sweden, was officially recognized by the Legal, Financial and Administrative Services Agency as a religious community in January 2012, after three application attempts.
The Korowai cannibals live on top of trees. But is it true?
“Since @ciocci confessed to me that this was making his head explode, and since I myself had long been looking for proper answers on the subject, I did some research into the all-Icelandic custom of celebrating Christmas by singing Italian pop songs 🎄🇮🇸🇮🇹”
Reported here are the conversions between the old units of measurement in use in the district of Bologna and the decimal metric system, as officially established in 1877. Despite the apparent precision of the tables, in many cases one must keep in mind that the reference standards used (including those for the Napoleonic-era tables) were roughly made or inconsistent with one another.[1]
A list of popular legendary creatures and mythological animals appearing in the myths, legends, and folklore of the world's various peoples and cultures, in alphabetical order. Note: This list includes only creat…
Last week I wrote about Meido, the Japanese Underworld, and how it has roots in Indian Buddhism and Chinese Buddhist-Taoist concepts. Today I'll write a little bit about where some unlucky
The Vegetable Lamb of Tartary (Latin: Agnus scythicus or Planta Tartarica Barometz[1]) is a legendary zoophyte of Central Asia, once believed to grow sheep as its fruit. It was believed the sheep were connected to the plant by an umbilical cord and grazed the land around the plant. When all accessible foliage was gone, both the plant and sheep died.

23 February, 2020 11:00PM

Russ Allbery

Review: Exit Strategy

Review: Exit Strategy, by Martha Wells

Series: Murderbot Diaries #4
Publisher: Tor.com
Copyright: October 2018
ISBN: 1-250-18546-7
Format: Kindle
Pages: 172

Exit Strategy is the fourth of the original four Murderbot novellas. As you might expect, this is not the place to begin. Both All Systems Red (the first of the series) and Rogue Protocol (the previous book) are vital to understanding this story.

Be warned that All Systems Red sets up the plot for the rest of the series, and thus any reviews of subsequent books (this one included) run the risk of spoiling parts of that story. If you haven't read it already, I recommend reading it before this review. It's inexpensive and very good!

When I got back to HaveRatton Station, a bunch of humans tried to kill me. Considering how much I'd been thinking about killing a bunch of humans, it was only fair.

Murderbot is now in possession of damning evidence against GrayCris. GrayCris knows that, and is very interested in catching Murderbot. That problem is relatively easy to handle. The harder problem is that GrayCris has gone on the offensive against Murderbot's former client, accusing her of corporate espionage and maneuvering her into their territory. Dr. Mensah is now effectively a hostage, held deep in enemy territory. If she's killed, the newly-gathered evidence will be cold comfort.

Exit Strategy, as befitting the last chapter of Murderbot's initial story arc, returns to and resolves the plot of the first novella. Murderbot reunites with its initial clients, takes on GrayCris directly (or at least their minions), and has to break out of yet another station. It also has to talk to other people about what relationship it wants to have with them, and with the rest of the world, since it's fast running out of emergencies and special situations where that question is pointless.

Murderbot very much does not want to have those conversations, because they result in a lot of emotions.

I was having an emotion, and I hate that. I'd rather have nice safe emotions about shows on the entertainment media; having them about things real-life humans said and did just led to stupid decisions like coming to TransRollinHyfa.

There is, of course, a lot of the normal series action: Murderbot grumbling about other people's clear incompetence, coming up with tactical plans on the fly, getting its clients out of tricky situations, and having some very satisfying fights. But the best part of this story is the reunion with Dr. Mensah. Here, Wells does something subtle and important that I've frequently encountered in life but less commonly in stories. Murderbot has played out various iterations of these conversations in its head, trying to decide what it would say. But those imagined conversations were with its fixed and unchanging memory of Dr. Mensah. Meanwhile, the person underlying those memories has been doing her own thinking and reconsideration, and is far more capable of having an insightful conversation than Murderbot expects. The result is satisfying thoughtfulness and one of the first times in the series where Murderbot doesn't have to handle the entire situation by itself.

This is one of those conclusions that's fully as satisfying as I was hoping it would be without losing any of the complexity. The tactics and fighting are more of the same (meaning that they're entertaining and full of snark), but Dr. Mensah's interactions with Murderbot now that she's had the time span of two intervening books to think about how to treat it are some of the best parts of the series. The conclusion doesn't answer all of the questions raised by the series (which is a good thing, since I want more), but it's a solid end to the plot arc.

The sequel, a full-length Murderbot novel (hopefully the first of many) titled Network Effect, is due out in May of 2020.

Rating: 9 out of 10

23 February, 2020 04:46AM

February 22, 2020

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

digest 0.6.25: Spookyhash bugfix

And a new version of digest is getting onto CRAN now, and to Debian shortly.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, and spookyhash algorithms) permitting easy comparison of R language objects. It is a fairly widely-used package (currently listed at 889k monthly downloads with 255 direct reverse dependencies and 7340 indirect reverse dependencies) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation.

This release fixes a single issue. Aaron Lun noticed some problems when spookyhash is used in streaming mode. Kendon Bell, who also contributed spookyhash, quickly found the issue, which was a simple oversight. This was worth addressing in a new release, so I pushed 0.6.25.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 February, 2020 11:42PM

hackergotchi for Norbert Preining

Norbert Preining

QOwnNotes for Debian

QOwnNotes is a cross-platform plain text and markdown note taking application. By itself, it wouldn't be something to talk about; we have vim and emacs and everything in between. But QOwnNotes integrates nicely with the Notes application from NextCloud and OwnCloud, and also provides useful NextCloud integration such as old versions of notes, access to deleted files, and watching for changes.

The program is written using Qt and contains, besides language files and desktop entries, only one binary. There is a package in a PPA for Ubuntu, so it was a breeze to package, converting the cdbs packaging from the PPA to debhelper on the way.

Source packages and amd64 binaries for sid/testing and buster are available at

deb https://www.preining.info/debian unstable main
deb-src https://www.preining.info/debian unstable main

and

deb https://www.preining.info/debian buster main
deb-src https://www.preining.info/debian buster main

respectively. The git repository is also available.

Enjoy.

22 February, 2020 10:07PM by Norbert Preining

hackergotchi for Martin Michlmayr

Martin Michlmayr

ledger2beancount 2.0 released

I released version 2.0 of ledger2beancount, a ledger to beancount converter.

Here are the changes in 2.0:

  • Handle comments in account and commodity declarations
  • Handle transactions with a single posting (without bucket)
  • Handle empty metadata values
  • Rewrite Emacs modeline

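To give a flavour of what the conversion involves, here is a simple ledger transaction followed by a hand-written sketch of its beancount equivalent (illustrative only; the real converter handles far more of both syntaxes):

2020/02/22 * Grocery store
    Expenses:Food              10.00 EUR
    Assets:Cash

2020-02-22 * "Grocery store"
  Expenses:Food              10.00 EUR
  Assets:Cash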
You can get ledger2beancount from GitHub.

22 February, 2020 01:28PM by Martin Michlmayr

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSimdJson 0.0.2: First Update!

Following up on the initial RcppSimdJson release, a first update arrived on CRAN yesterday.

RcppSimdJson wraps the fantastic simdjson library by Daniel Lemire, which is truly impressive. Via some very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it manages to parse gigabytes of JSON per second, which is quite mind-boggling. I highly recommend the video of the recent talk by Daniel Lemire at QCon (which was also voted best talk). The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle used per byte parsed.

This release syncs the simdjson headers with upstream, and polishes the build a little by conditioning on actually having a C++17 compiler rather than just suggesting it. The NEWS entry follows.

Changes in version 0.0.2 (2020-02-21)

  • Synchronized with upstream (Dirk in #4 and #5).

  • The R side of validateJSON now globs the file argument, expanding symbols like ~ appropriately.

  • C++ code in validateJSON now conditional on C++17 allowing (incomplete) compilation on lesser systems.

  • New helper function returning value of __cplusplus macro, used in package startup to warn if insufficient compiler used.

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 February, 2020 12:55PM

Russ Allbery

Review: All About Emily

Review: All About Emily, by Connie Willis

Publisher: Subterranean
Copyright: 2011
ISBN: 1-59606-488-9
Format: Kindle
Pages: 96

Claire Havilland is a Broadway star, three-time Tony winner, and the first-person narrator of this story. She is also, at least in her opinion, much too old to star in the revival of Chicago, given that the role would require wearing a leotard and fishnet stockings. But that long-standing argument with her manager was just the warm-up request this time. The actual request was to meet with a Nobel-Prize-winning physicist and robotics engineer who will be the Grand Marshal of the Macy's Thanksgiving Day Parade. Or, more importantly, to meet with the roboticist's niece, Emily, who has a charmingly encyclopedic knowledge of theater and of Claire Havilland's career in particular.

I'll warn that the upcoming discussion of the background of this story is a spoiler for the introductory twist, but you've probably guessed that spoiler anyway.

I feel bad when someone highly recommends something to me and it doesn't click. That's the case with this novella. My mother loved the character dynamics, which, I'll grant, are charming and tug on the heartstrings, particularly if you enjoy watching two people geek at each other about theater. I got stuck on the world-building and then got frustrated with the near-total lack of engagement with the core problem presented by the story.

The social fear around robotics in All About Emily is the old industrialization fear given new form: new, better robots will be able to do jobs better than humans, and thus threaten human livelihoods. (As is depressingly common in stories like this, the assumptions of capitalism are taken for granted and left entirely unquestioned.) Willis's take on this idea is based on All About Eve, the 1950 film in which an ambitious young fan maneuvers her way into becoming the understudy of an aging Broadway star and then tries to replace her. What if even Broadway actresses could be replaced by robots?

As it turns out, the robot in question has a different Broadway role in mind. To give Willis full credit, it's one that plays adroitly with some stereotypes about robots.

Emily and Claire have good chemistry. Their effusive discussions and Emily's delighted commitment to research are fun to read. But the plot rests on two old SF ideas: the social impact of humans being replaced by machines, and the question of whether simulated emotions in robots should be treated as real (a slightly different question than whether they are real). Willis raises both issues and then does nothing with either of them. The result is an ending that hits the expected emotional notes of an equivalent story that raises no social questions, but which gives the SF reader nothing to work with.

Will robots replace humans? Based on this story, the answer seems to be yes. Should they be allowed to? To avoid spoilers, I'll just say that that decision seems to be made on the basis of factors that won't scale, and on experiences that a cynic like me thinks could be easily manipulated.

Should simulated emotions be treated as real? Willis doesn't seem to realize that's a question. Certainly, Claire never seems to give it a moment's thought.

I think All About Emily could have easily been published in the 1960s. It feels like it belongs to another era in which emotional manipulation by computers is either impossible or, at worst, a happy accident. In today's far more cynical time, when we're increasingly aware that large corporations are deeply invested in manipulating our emotions and quite good at building elaborate computer models for how to do so, it struck me as hollow and tone-deaf. The story is very sweet if you can enjoy it on the same level that the characters engage with it, but it is not much help in grappling with the potential for abuse.

Rating: 6 out of 10

22 February, 2020 04:38AM

February 21, 2020

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, January 2020

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In January, 252 work hours were dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

January started calmly, until at the end of the month some LTS contributors met, some for the first time ever, at the Mini-DebCamp preceding FOSDEM in Brussels. While there were no formal LTS events at either gathering, such face-to-face meetings have proven to be very useful for future collaborations!
We currently have 59 LTS sponsors sponsoring 219h each month. Still, as always, we are welcoming new LTS sponsors!

The security tracker currently lists 42 packages with a known CVE and the dla-needed.txt file has 33 packages needing an update.

Thanks to our sponsors

New sponsors are in bold (none this month).


21 February, 2020 05:00PM by Raphaël Hertzog

Andrej Shadura

Follow-up on the train journey to FOSDEM

Here’s a recap of my train journey based on the Twitter thread I kept posting as I travelled.

To FOSDEM…

The departure from Bratislava was as planned:

Ready to depart from Bratislava hl. st.

Half an hour in Vienna was just enough for me to grab some coffee and breakfast and board the train to Frankfurt without a hurry:

Boarding a Deutsche Bahn ICE to Frankfurt am Main

Unfortunately, soon after we left Linz and headed to Passau, the train broke down. Apparently, it powered down and the driver was struggling to reboot it. After more than an hour at Haiding, we finally departed with a huge delay:

Trapped in Haiding near Linz

Since the 18:29 train to Brussels I needed to catch in Frankfurt was the last one that day, I was put up in the Leonardo hotel across the street from Frankfurt Hbf, paid for by Deutsche Bahn, of course. By the time of our arrival in Frankfurt, the delay was 88 minutes.

Hotel room in Frankfurt am Main

Luckily, I didn’t have to convince Deutsche Bahn to let me sleep in the morning, they happily booked me (for free) onto a 10:29 ICE to Brussels so I had an opportunity to have a proper breakfast at the hotel and spend some time at Coffee Fellows at the station.

Guten Morgen Frankfurt
About to depart for Brussels

Fun fact: Aachen is called Cáchy in Czech, apparently as a corruption of an older German form ze Aachen.

Stopping at Aachen

Having met some Debian people on the train, I finally arrived in Brussels, albeit with some delay. This, unfortunately, meant that I didn't make it to Vilvoorde to see a friend, so the regional tickets I had bought online were useless.

Finally, Brussels!

… and back!

The trip home was much better in terms of missed trains, if only a tiny bit more tiring, since I took it in one day.

Leaving Brussels on time

Going to Frankfurt, I spent most of the time in the bistro carriage. Unfortunately, the espresso machine was broken and they didn't have any croissants, but the tea with milk was good enough.

In the bistro carriage

I used the fifty minutes I had in Frankfurt to claim compensation for the delay; the €33 arrived in my bank account the following week.

The ICE train to Wien Hbf is about to depart
Herzlich willkommen in Österreich!

Arrived at Wien Hbf
The last leg

Finally, exactly twelve hours and one minute after the departure, almost home:

Finally home

21 February, 2020 02:09PM by Andrej Shadura

hackergotchi for Norbert Preining

Norbert Preining

Okular update for Debian

The quest for a good tabbed PDF viewer led me to Okular. While Gnome3 has gone the way of “keep it stupid, keep it simple” to appeal to less versed users, KDE has gone in the opposite direction and provides lots of bells and knobs to configure their applications. Not surprisingly, I am tending more and more toward KDE apps and away from the redux stuff of Gnome apps.

Unfortunately, okular in Debian is horribly outdated. The version shipped in unstable is 17.12.2, there is a version 18.04 in experimental, and the latest from upstream git is 19.12.2. Fortunately, and thanks to the Debian maintainers, the packaging of the version in experimental can be adjusted without too much pain to the latest version; see this git repo.

You can find the sources and amd64 packages in my Debian repository:

deb https://www.preining.info/debian unstable main
deb-src https://www.preining.info/debian unstable main

Enjoy.

21 February, 2020 12:17AM by Norbert Preining

February 20, 2020

hackergotchi for Matthew Garrett

Matthew Garrett

What usage restrictions can we place in a free software license?

Growing awareness of the wider social and political impact of software development has led to efforts to write licenses that prevent software being used to engage in acts that are seen as socially harmful, with the Hippocratic License being perhaps the most discussed example (although the JSON license's requirement that the software be used for good, not evil, is arguably an earlier version of the theme). The problem with these licenses is that they're pretty much universally considered to fall outside the definition of free software or open source licenses due to their restrictions on use, and there's a whole bunch of people who have very strong feelings that this is a very important thing. There's also the more fundamental underlying point that it's hard to write a license like this where everyone agrees on whether a specific thing is bad or not (eg, while many people working on a project may feel that it's reasonable to prohibit the software being used to support drone strikes, others may feel that the project shouldn't have a position on the use of the software to support drone strikes and some may even feel that some people should be the victims of drone strikes). This is, it turns out, all quite complicated.

But there is something that many (but not all) people in the free software community agree on - certain restrictions are legitimate if they ultimately provide more freedom. Traditionally this was limited to restrictions on distribution (eg, the GPL requires that your recipient be able to obtain corresponding source code, and for GPLv3 must also be able to obtain the necessary signing keys to be able to replace it in covered devices), but more recently there's been some restrictions that don't require distribution. The best known is probably the clause in the Affero GPL (or AGPL) that requires that users interacting with covered code over a network be able to download the source code, but the Cryptographic Autonomy License (recently approved as an Open Source license) goes further and requires that users be able to obtain their data in order to self-host an equivalent instance.

We can construct examples of where these prevent certain fields of endeavour, but the tradeoff has been deemed worth it - the benefits to user freedom that these licenses provide are greater than the corresponding cost to what you can do. How far can that tradeoff be pushed? So, here's a thought experiment. What if we write a license that's something like the following:

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. All permissions granted by this license must be passed on to all recipients of modified or unmodified versions of this work
2. This work may not be used in any way that impairs any individual's ability to exercise the permissions granted by this license, whether or not they have received a copy of the covered work


This feels like the logical extreme of the argument. Any way you could use the covered work that would restrict someone else's ability to do the same is prohibited. This means that, for example, you couldn't use the software to implement a DRM mechanism that the user couldn't replace (along the lines of GPLv3's anti-Tivoisation clause), but it would also mean that you couldn't use the software to kill someone with a drone (doing so would impair their ability to make use of the software). The net effect is along the lines of the Hippocratic license, but it's framed in a way that is focused on user freedom.

To be clear, I don't think this is a good license - it has a bunch of unfortunate consequences like it being impossible to use covered code in self-defence if doing so would impair your attacker's ability to use the software. I'm not advocating this as a solution to anything. But I am interested in seeing whether the perception of the argument changes when we refocus it on user freedom as opposed to an independent ethical goal.

Thoughts?

Edit:

Rich Felker on Twitter had an interesting thought - if clause 2 above is replaced with:

2. Your rights under this license terminate if you impair any individual's ability to exercise the permissions granted by this license, even if the covered work is not used to do so

how does that change things? My gut feeling is that covering actions that are unrelated to the use of the software might be a reach too far, but it gets away from the idea that it's your use of the software that triggers the clause.


20 February, 2020 12:45AM

February 19, 2020

hackergotchi for Gunnar Wolf

Gunnar Wolf

Made with Creative Commons at FIL Minería

Book presentation!

Again, this message is mostly for people who can be in Mexico City on relatively short notice.

Do you want to get the latest scoop on our translation of Made with Creative Commons? Are you interested in being at a most interesting session presented by the two officials of the Creative Commons Mexico chapter, Irene Soria (@arenita) and Iván Martínez (@protoplasmakid), and myself?

Then… Come to the always great 41 Feria Internacional del Libro del Palacio de Minería! We will have the presentation next Monday (2020.02.24), 12:00, in Auditorio Sotero Prieto (Palacio de Minería).

How to get there? Come on… Don’t you know one of the most iconic and beautiful buildings in our historic center? 😉 Information on getting to Palacio de Minería.

See you all there!

19 February, 2020 08:00AM by Gunnar Wolf

hackergotchi for Kees Cook

Kees Cook

security things in Linux v5.4

Previously: v5.3.

Linux kernel v5.4 was released in late November. The holidays got the best of me, but better late than never! ;) Here are some security-related things I found interesting:

waitid() gains P_PIDFD
Christian Brauner has continued his pidfd work by adding a critical mode to waitid(): P_PIDFD. This makes it possible to reap child processes via a pidfd, and completes the interfaces needed for the bulk of programs performing process lifecycle management. (i.e. a pidfd can come from /proc or clone(), and can be waited on with waitid().)
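Here is a minimal sketch of the combined interface (my own example, not from the patch series; it assumes a v5.4+ kernel and defines the constants by hand in case the libc headers predate them):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>
#include <sys/wait.h>
#include <sys/syscall.h>

#ifndef SYS_pidfd_open
#define SYS_pidfd_open 434   /* syscall added in Linux 5.3 */
#endif
#ifndef P_PIDFD
#define P_PIDFD 3            /* waitid() mode added in Linux 5.4 */
#endif

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); exit(1); }
    if (pid == 0)
        _exit(42);                               /* child exits immediately */

    /* Obtain a pidfd referring to the child. */
    int pidfd = (int)syscall(SYS_pidfd_open, pid, 0);
    if (pidfd < 0) { perror("pidfd_open"); exit(1); }

    /* Reap the child via the pidfd instead of its numeric pid. */
    siginfo_t info;
    if (waitid(P_PIDFD, pidfd, &info, WEXITED) < 0) {
        perror("waitid");
        exit(1);
    }
    printf("child %d exited with status %d\n", info.si_pid, info.si_status);
    return 0;
}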

kernel lockdown
After something on the order of 8 years, Linux can now draw a bright line between “ring 0” (kernel memory) and “uid 0” (highest privilege level in userspace). The “kernel lockdown” feature, which has been an out-of-tree patch series in most Linux distros for almost as many years, attempts to enumerate all the intentional ways (i.e. interfaces not flaws) userspace might be able to read or modify kernel memory (or execute in kernel space), and disable them. While Matthew Garrett made the internal details controllable in a fine-grained way, the basic lockdown LSM can be set to either disabled, “integrity” (kernel memory can be read but not written), or “confidentiality” (no kernel memory reads or writes). Beyond closing the many holes between userspace and the kernel, if new interfaces are added to the kernel that might violate kernel integrity or confidentiality, now there is a place to put the access control to make everyone happy, and there doesn't need to be a rehashing of the age-old fight between “but root has full kernel access” vs “not in some system configurations”.
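For reference, on a v5.4 kernel built with the lockdown LSM, the mode can be picked on the kernel command line and inspected at runtime (values and paths as I understand them; check your kernel's documentation):

lockdown=integrity
cat /sys/kernel/security/lockdown    (prints e.g.: none [integrity] confidentiality)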

tagged memory relaxed syscall ABI
Andrey Konovalov (with Catalin Marinas and others) introduced a way to enable a “relaxed” tagged memory syscall ABI in the kernel. This means programs running on hardware that supports memory tags (or “versioning”, or “coloring”) in the upper (non-VMA) bits of a pointer address can use these addresses with the kernel without things going crazy. This is effectively teaching the kernel to ignore these high bits in places where they make no sense (i.e. mathematical comparisons) and keeping them in place where they have meaning (i.e. pointer dereferences).

As an example, if a userspace memory allocator had returned the address 0x0f00000010000000 (VMA address 0x10000000, with, say, a “high bits” tag of 0x0f), and a program used this range during a syscall that ultimately called copy_from_user() on it, the initial range check would fail if the tag bits were left in place: “that’s not a userspace address; it is greater than TASK_SIZE (0x0000800000000000)!”, so they are stripped for that check. During the actual copy into kernel memory, the tag is left in place so that when the hardware dereferences the pointer, the pointer tag can be checked against the expected tag assigned to referenced memory region. If there is a mismatch, the hardware will trigger the memory tagging protection.
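Here's a rough userspace illustration of that stripping step (my own sketch, not the kernel's code; it simply assumes the tag occupies the top byte, as with arm64 TBI):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Clear the top byte, where the tag lives. */
static uint64_t untagged_addr(uint64_t addr)
{
    return addr & ~(0xffULL << 56);
}

int main(void)
{
    uint64_t tagged = 0x0f00000010000000ULL;   /* VMA 0x10000000, tag 0x0f */

    /* The TASK_SIZE-style range check would use the stripped address... */
    printf("untagged: 0x%016" PRIx64 "\n", untagged_addr(tagged));
    /* ...while the actual dereference keeps the tag for the hardware to check. */
    printf("tagged:   0x%016" PRIx64 "\n", tagged);
    return 0;
}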

Right now programs running on Sparc M7 CPUs with ADI (Application Data Integrity) can use this for hardware tagged memory, ARMv8 CPUs can use TBI (Top Byte Ignore) for software memory tagging, and eventually there will be ARMv8.5-A CPUs with MTE (Memory Tagging Extension).

boot entropy improvement
Thomas Gleixner got fed up with poor boot-time entropy and trolled Linus into coming up with a reasonable way to add entropy on modern CPUs, taking advantage of timing noise, cycle counter jitter, and perhaps even the variability of speculative execution. This means that there shouldn’t be mysterious multi-second (or multi-minute!) hangs at boot when some systems don’t have enough entropy to service getrandom() syscalls from systemd or the like.

userspace writes to swap files blocked
From the department of “how did this go unnoticed for so long?”, Darrick J. Wong fixed the kernel to not allow writes from userspace to active swap files. Without this, it was possible for a user (usually root) with write access to a swap file to modify its contents, thereby changing memory contents of a process once it got paged back in. While root normally could just use CAP_SYS_PTRACE to modify a running process directly, this was a loophole that allowed lesser-privileged users (e.g. anyone in the “disk” group) without the needed capabilities to still bypass ptrace restrictions.

limit strscpy() sizes to INT_MAX
Generally speaking, if a size variable ends up larger than INT_MAX, some calculation somewhere has overflowed. And even if not, it’s probably going to hit code somewhere nearby that won’t deal well with the result. As already done in the VFS core and vsprintf(), I added a check to strscpy() to reject sizes larger than INT_MAX.
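As a userspace approximation of the semantics (my own sketch, not the kernel's actual implementation):

#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>

/* strscpy()-like copy: returns the copied length, or -E2BIG on
 * truncation -- and now also on absurdly large size arguments. */
static long checked_strscpy(char *dest, const char *src, size_t count)
{
    if (count == 0 || count > (size_t)INT_MAX)
        return -E2BIG;                  /* the new sanity check */

    size_t len = strnlen(src, count);
    if (len == count) {                 /* src doesn't fit in dest */
        memcpy(dest, src, count - 1);
        dest[count - 1] = '\0';
        return -E2BIG;
    }
    memcpy(dest, src, len + 1);         /* fits, NUL included */
    return (long)len;
}

int main(void)
{
    char buf[8];
    printf("%ld\n", checked_strscpy(buf, "hello", sizeof buf));        /* 5 */
    printf("%ld\n", checked_strscpy(buf, "far too long", sizeof buf)); /* -7 (-E2BIG) */
    return 0;
}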

ld.gold support removed
Thomas Gleixner removed support for the gold linker. While this isn’t providing a direct security benefit, ld.gold has been a constant source of weird bugs. Specifically where I’ve noticed, it had been a pain while developing KASLR, and has more recently been causing problems while stabilizing building the kernel with Clang. Having this linker support removed makes things much easier going forward. There are enough weird bugs to fix in Clang and ld.lld. ;)

Intel TSX disabled
Given the use of Intel’s Transactional Synchronization Extensions (TSX) CPU feature by attackers to exploit speculation flaws, Pawan Gupta disabled the feature by default on CPUs that support disabling TSX.
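For completeness, v5.4 also gained a command-line switch for this (on CPUs where TSX can be disabled), so the default can be overridden either way; the recognized values, as I understand them, are:

tsx=on
tsx=off
tsx=auto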

That’s all I have for this version. Let me know if I missed anything. :) Next up is Linux v5.5!

© 2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

19 February, 2020 12:37AM by kees

February 18, 2020

hackergotchi for Daniel Silverstone

Daniel Silverstone

Subplot volunteers? (Acceptance testing tool)

Note: This is a repost from Lars' blog made to widen the reach and hopefully find the right interested parties.


Would you be willing to try Subplot for acceptance testing for one of your real projects, and give us feedback? We're looking for two volunteers.

given a project
when it uses Subplot
then it is successful

Subplot is a tool for capturing and automatically verifying the acceptance criteria for a software project or a system, in a way that's understood by all stakeholders.

In a software project there is always more than one stakeholder. Even in a project one writes for oneself, there are two stakeholders: oneself, and that malicious cretin oneself-in-the-future. More importantly, though, there are typically stakeholders such as end users, sysadmins, clients, software architects, developers, and testers. They all need to understand what the software should do, and when it's in an acceptable state to be put into use: in other words, what the acceptance criteria are.

Crucially, all stakeholders should understand the acceptance criteria the same way, and also how to verify they are met. In an ideal situation, all verification is automated, and happens very frequently.

There are various tools for this, from generic documentation tooling (word processors, text editors, markup languages, etc) to test automation (Cucumber, Selenium, etc). On the one hand, documenting acceptance criteria in a way that all stakeholders understand is crucial: otherwise the end users are at risk of getting something that's not useful to help them, and the project is a waste of everyone's time and money. On the other hand, automating the verification that acceptance criteria are met is also crucial: otherwise it's done manually, which is slow, costly, and error prone, which increases the risk of project failure.

Subplot aims to solve this by an approach that combines documentation tooling with automated verification.

  • The stakeholders in a project jointly produce a document that captures all relevant acceptance criteria and also describes how they can be verified automatically, using scenarios. The document is written using Markdown.

  • The developer stakeholders produce code to implement the steps in the scenarios. The Subplot approach allows the step implementations to be done in a highly cohesive, de-coupled manner, making such code usually quite simple. (Test code should be your best code.)

  • Subplot's "docgen" program produces a typeset version as PDF or HTML. This is meant to be easily comprehensible by all stakeholders.

  • Subplot's "codegen" program produces a test program in the language used by the developer stakeholders. This test program can be run to verify that acceptance criteria are met.

Subplot started in late 2018, and was initially called Fable. It is based on the yarn tool for the same purpose, from 2013. Yarn has been in active use all its life, if not popular outside a small circle. Subplot improves on yarn by improving document generation, markup, and decoupling of concerns. Subplot is not compatible with yarn.

Subplot is developed by Lars Wirzenius and Daniel Silverstone as a hobby project. It is free software, implemented in Rust, developed on Debian, and uses Pandoc and LaTeX for typesetting. The code is hosted on gitlab.com. Subplot verifies its own acceptance criteria. It is alpha level software.

We're looking for one or two volunteers to try Subplot on real projects of their own, and give us feedback. We want to make Subplot good for its purpose, also for people other than us. If you'd be willing to give it a try, start with the Subplot website, then tell us you're using Subplot. We're happy to respond to questions from the first two volunteers, and from others, time permitting. (The reality of life and time constraints is that we can't commit to supporting more people at this time.)

We'd love your feedback, whether you use Subplot or not.

18 February, 2020 08:24PM by Daniel Silverstone

hackergotchi for Mike Gabriel

Mike Gabriel

MATE 1.24 landed in Debian unstable

Last week, Martin Wimpress (from Ubuntu MATE) and I did a 2.5-day packaging sprint and after that I bundle-uploaded all MATE 1.24 related components to Debian unstable. Thus, MATE 1.24 landed in Debian unstable only four days after the upstream release. I think this was the fastest version bump of MATE in Debian ever.

Packages should have been built by now for most of the 22 architectures supported by Debian. The current/latest build status can be viewed on the DDPO page of the Debian+Ubuntu MATE Packaging Team [1].

Please also refer to the MATE 1.24 upstream release notes for details on what's new and what's changed [2].

Credits

One big thanks goes to Martin Wimpress. Martin and I worked on all the related packages hand in hand. Only this teamwork made this very fast upload possible. Martin especially found the fix for a flaw in Python Caja that caused all Python3-based Caja extensions to fail in Caja 1.24 / Python Caja 1.24. Well done!

Another big thanks goes to the MATE upstream team. You again did an awesome job, folks. Much, much appreciated.

Last but not least, a big thanks goes to Svante Signell for providing Debian architecture specific patches for Debian's non-Linux distributions (GNU/Hurd, GNU/kFreeBSD). We will wait until all MATE 1.24 packages have initially migrated to Debian testing and then upload his fixes as a follow-up. As in the past, MATE shall be available on as many Debian architectures as possible (ideally: all of them). Saying this, all Debian porters are invited to send us patches if they see components of MATE Desktop fail on not-so-common architectures.

References

light+love,
Mike Gabriel (aka sunweaver)

18 February, 2020 10:03AM by sunweaver

hackergotchi for Keith Packard

Keith Packard

more-iterative-splines

Slightly Better Iterative Spline Decomposition

My colleague Bart Massey (who is a CS professor at Portland State University) reviewed my iterative spline algorithm article and had an insightful comment — we don't just want any spline decomposition which is flat enough, what we really want is a decomposition for which every line segment is barely within the specified flatness value.

My initial approach was to keep halving the length of the spline segment until it was flat enough. This definitely generates a decomposition which is flat enough everywhere, but some of the segments will be shorter than they need to be, by as much as a factor of two.

As we'll be taking the resulting spline and doing a lot more computation with each segment, it makes sense to spend a bit more time finding a decomposition with fewer segments.

The Initial Search

Here's how the first post searched for a 'flat enough' spline section:

t = 1.0f;

/* Iterate until s1 is flat */
do {
    t = t/2.0f;
    _de_casteljau(s, s1, s2, t);
} while (!_is_flat(s1));

Bisection Method

What we want to do is find an approximate solution for the function:

flatness(t) = tolerance

We'll use the Bisection method to find the value of t for which the flatness is no larger than our target tolerance, but is at least as large as tolerance - ε, for some reasonably small ε.

float       hi = 1.0f;
float       lo = 0.0f;

/* Search for an initial section of the spline which
 * is flat, but not too flat
 */
for (;;) {

    /* Average the lo and hi values for our
     * next estimate
     */
    float t = (hi + lo) / 2.0f;

    /* Split the spline at the target location
     */
    _de_casteljau(s, s1, s2, t);

    /* Compute the flatness and see if s1 is flat
     * enough
     */
    float flat = _flatness(s1);

    if (flat <= SCALE_FLAT(SNEK_DRAW_TOLERANCE)) {

        /* Stop looking when s1 is close
         * enough to the target tolerance
         */
        if (flat >= SCALE_FLAT(SNEK_DRAW_TOLERANCE - SNEK_FLAT_TOLERANCE))
            break;

        /* Flat: t is the new lower interval bound */
        lo = t;
    } else {

        /* Not flat: t is the new upper interval bound */
        hi =  t;
    }
}

This searches for a place to split the spline where the initial portion is flat but not too flat. I set SNEK_FLAT_TOLERANCE to 0.01, so we'll pick segments which have flatness between 0.49 and 0.50.

The benefit from the search is pretty easy to understand by looking at the number of points generated compared with the number of _de_casteljau and _flatness calls:

Search Calls Points
Simple 150 33
Bisect 229 25

And here's an image comparing the two:

A Closed Form Approach?

Bart also suggests attempting to find an analytical solution to decompose the spline. What we need to do is take the flatness function and find the split which makes it equal to the desired flatness. If the spline control points are a, b, c, and d, then the flatness function is:

ux = (3×b.x - 2×a.x - d.x)²
uy = (3×b.y - 2×a.y - d.y)²
vx = (3×c.x - 2×d.x - a.x)²
vy = (3×c.y - 2×d.y - a.y)²

flat = max(ux, vx) + max(uy, vy)

When the spline is split into two pieces, all of the control points for the new splines are determined by the original control points and the 't' value which sets where the split happens. What we want is to find the 't' value which makes the flat value equal to the desired tolerance. Given that the binary search runs De Casteljau and the flatness function almost 10 times for each generated point, there's a lot of opportunity to go faster with a closed form solution.

Update: Fancier Method Found!

Bart points me at two papers:

  1. Flattening quadratic Béziers by Raph Levien
  2. Precise Flattening of Cubic Bézier Segments by Thomas F. Hain, Athar L. Ahmad, and David D. Langan

Levien's paper offers a great solution for quadratic Béziers by directly computing the minimum set of line segments necessary to approximate within a specified flatness. However, it doesn't generalize to cubic Béziers.

Hain, Ahmad and Langan do provide a directly computed decomposition of a cubic Bézier. This is done by constructing a parabolic approximation to the first portion of the spline and finding a 't' value which produces the desired flatness. There are a pile of special cases to deal with when there isn't a good enough parabolic approximation. But, overall computational cost is lower than a straightforward binary decomposition, plus there's no recursion required.

This second algorithm has the same characteristics as my Bisection method as the last segment may have any flatness from zero through the specified tolerance; Levien's solution is neater in that it generates line segments of similar flatness across the whole spline.

Current Implementation

/*
 * Copyright © 2020 Keith Packard <keithp@keithp.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA.
 */

#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <math.h>

typedef float point_t[2];
typedef point_t spline_t[4];

uint64_t num_flats;
uint64_t num_points;

#define SNEK_DRAW_TOLERANCE 0.5f
#define SNEK_FLAT_TOLERANCE 0.01f

/*
 * This actually returns flatness² * 16,
 * so we need to compare against scaled values
 * using the SCALE_FLAT macro
 */
static float
_flatness(spline_t spline)
{
    /*
     * This computes the maximum deviation of the spline from a
     * straight line between the end points.
     *
     * From https://hcklbrrfnn.files.wordpress.com/2012/08/bez.pdf
     */
    float ux = 3.0f * spline[1][0] - 2.0f * spline[0][0] - spline[3][0];
    float uy = 3.0f * spline[1][1] - 2.0f * spline[0][1] - spline[3][1];
    float vx = 3.0f * spline[2][0] - 2.0f * spline[3][0] - spline[0][0];
    float vy = 3.0f * spline[2][1] - 2.0f * spline[3][1] - spline[0][1];

    ux *= ux;
    uy *= uy;
    vx *= vx;
    vy *= vy;
    if (ux < vx)
        ux = vx;
    if (uy < vy)
        uy = vy;
    ++num_flats;

    /*
     * If we wanted to return the true flatness, we'd use:
     *
     * return sqrtf((ux + uy)/16.0f)
     */
    return ux + uy;
}

/* Convert constants to values usable with _flatness() */
#define SCALE_FLAT(f)   ((f) * (f) * 16.0f)

/*
 * Linear interpolate from a to b using distance t (0 <= t <= 1)
 */
static void
_lerp (point_t a, point_t b, point_t r, float t)
{
    int i;
    for (i = 0; i < 2; i++)
        r[i] = a[i]*(1.0f - t) + b[i]*t;
}

/*
 * Split 's' into two splines at distance t (0 <= t <= 1)
 */
static void
_de_casteljau(spline_t s, spline_t s1, spline_t s2, float t)
{
    point_t first[3];
    point_t second[2];
    int i;

    for (i = 0; i < 3; i++)
        _lerp(s[i], s[i+1], first[i], t);

    for (i = 0; i < 2; i++)
        _lerp(first[i], first[i+1], second[i], t);

    _lerp(second[0], second[1], s1[3], t);

    for (i = 0; i < 2; i++) {
        s1[0][i] = s[0][i];
        s1[1][i] = first[0][i];
        s1[2][i] = second[0][i];

        s2[0][i] = s1[3][i];
        s2[1][i] = second[1][i];
        s2[2][i] = first[2][i];
        s2[3][i] = s[3][i];
    }
}

/*
 * Decompose 's' into straight lines which are
 * within SNEK_DRAW_TOLERANCE of the spline
 */
static void
_spline_decompose(void (*draw)(float x, float y), spline_t s)
{
    /* Start at the beginning of the spline. */
    (*draw)(s[0][0], s[0][1]);

    /* Split the spline until it is flat enough */
    while (_flatness(s) > SCALE_FLAT(SNEK_DRAW_TOLERANCE)) {
        spline_t    s1, s2;
        float       hi = 1.0f;
        float       lo = 0.0f;

        /* Search for an initial section of the spline which
         * is flat, but not too flat
         */
        for (;;) {

            /* Average the lo and hi values for our
             * next estimate
             */
            float t = (hi + lo) / 2.0f;

            /* Split the spline at the target location
             */
            _de_casteljau(s, s1, s2, t);

            /* Compute the flatness and see if s1 is flat
             * enough
             */
            float flat = _flatness(s1);

            if (flat <= SCALE_FLAT(SNEK_DRAW_TOLERANCE)) {

                /* Stop looking when s1 is close
                 * enough to the target tolerance
                 */
                if (flat >= SCALE_FLAT(SNEK_DRAW_TOLERANCE - SNEK_FLAT_TOLERANCE))
                    break;

                /* Flat: t is the new lower interval bound */
                lo = t;
            } else {

                /* Not flat: t is the new upper interval bound */
                hi =  t;
            }
        }

        /* Draw to the end of s1 */
        (*draw)(s1[3][0], s1[3][1]);

        /* Replace s with s2 */
        memcpy(&s[0], &s2[0], sizeof (spline_t));
    }

    /* S is now flat enough, so draw to the end */
    (*draw)(s[3][0], s[3][1]);
}

void draw(float x, float y)
{
    ++num_points;
    printf("%8g, %8g\n", x, y);
}

int main(int argc, char **argv)
{
    spline_t spline = {
        { 0.0f, 0.0f },
        { 0.0f, 256.0f },
        { 256.0f, -256.0f },
        { 256.0f, 0.0f }
    };
    _spline_decompose(draw, spline);
    fprintf(stderr, "flats %lu points %lu\n", num_flats, num_points);
    return 0;
}

18 February, 2020 07:41AM

February 17, 2020

hackergotchi for Ulrike Uhlig

Ulrike Uhlig

Reasons for job burnout and what motivates people in their job

Burnout comes in many colours and flavours.

Often, burnout is conceived as a weakness of the person experiencing it: "they can't work under stress", "they lack organizational skills", "they are currently going through grief or a break up, that's why they can't keep up" — you've heard it all before, right?

But what if job burnout would actually be an indicator for a toxic work environment? Or for a toxic work setup?

I had read quite a bit of literature trying to explain burnout before stumbling upon the work of Christina Maslach. She has researched burnout for thirty years and is best known for her work on occupational burnout. While in the 1990s she observed burnout mostly in caregiver professions, in recent years we can see an increase of burnout in many other fields, such as the tech industry. Maslach outlines in one of her talks what this might be due to.

More interesting to me is the question of why job burnout occurs at all. High workload is only one out of six factors that increase the risk of burnout, according to Christina Maslach and her team.

Factors increasing job burnout

  1. Workload. This could be demand overload, lots of different tasks, lots of context switching, unclear expectations, having several part time jobs, lack of resources, lack of work force, etc.
  2. Lack of control. Absence of agency. Absence of the possibility to make decisions. Impossibility to act on one's own account.
  3. Insufficient reward. Here, we are not solely talking about financial reward, but also about gratitude, recognition, visibility, and celebration of accomplishments.
  4. Lack of community. Remote work, asynchronous communication, poor communication skills, isolation in working on tasks, few/no in-person meetings, lack of organizational caring.
  5. Absence of fairness. Invisible hierarchies, lack of (fair) decision making processes, back channel decision making, financial or other rewards unfairly distributed.
  6. Value conflicts. This could be over-emphasizing return on investment, making unethical requests, not respecting colleagues' boundaries, the lack of organizational vision, or poor leadership.

Interestingly, it is possible to improve one area of risk, and see improvements in all the other areas.

What motivates people?

So, what is it that motivates people, what makes them like their work?
Here, Maslach comes up with another interesting list:

  • Autonomy. This could mean, for example, to trust colleagues to work on tasks autonomously. To let colleagues make their own decisions on how to implement a feature as long as it corresponds to the code writing guidelines. The responsibility for the task should be transferred along with the task. People need to be allowed to make mistakes (and fix them). Autonomy also means saying goodbye to the expectation that colleagues do everything exactly like we would do it. Instead, we can learn to trust in collective intelligence to come up with different solutions.
  • Feeling of belonging. This one could mean to use synchronous communication whenever possible. To favor in-person meetings. To celebrate achievements. To make collective decisions whenever the outcome affects the collective (or part of it). To have lunch together. To have lunch together and not talk about work.
  • Competence. Having a working feedback process. Valuing each other's competences. Having the possibility to evolve in the workplace. Having the possibility to get training, to try new setups, new methods, or new tools. Having the possibility to increase one's competences, possibly with the financial backing of the workplace.
  • Positive emotions. Encouraging people to take breaks. Making sure work plans also include downtime. Encouraging people to take at least 5 weeks of vacation per year. Allowing people to have (paid) time off. Practicing gratitude. Acknowledging and celebrating achievements. Giving appreciation.
  • Psychological safety. Learning to communicate with kindness. Practicing active listening. Having meetings facilitated. Condemning harassment, personal insults, sexism, racism, fascism. Condemning the silencing of people. Having a way to report code of ethics/conduct violations. Making sure that people who experience problems or need to share something are not isolated.
  • Fairness. How about exploring inclusive leadership models? Making invisible hierarchies visible (see the concept of rank). Being aware of rank. Having clear and transparent decision-making processes. Rewarding people equally. Making sure invisible unpaid work is not always done by the same people.
  • Meaning. Are the issues that we work on meaningful per se? Do they contribute anything to the world, or to the common good? Making sure that the tasks or roles of other colleagues are not belittled. Meaning can also be given by putting tasks into perspective, for example by having developers attend conferences where they can meet users and get feedback on their work. Making sure we don't forget why we wanted to do a job in the first place. Getting familiar with the concept of bullshit jobs.

In this list, the words written in bold are what we could call "Needs". The descriptions behind them are what we could call "Strategies". There are always many different strategies to fulfill a need; I've only outlined some of them. I'm sure you can come up with others; please don't hesitate to share them with me.

17 February, 2020 11:00PM

hackergotchi for Holger Levsen

Holger Levsen

20200217-SnowCamp

SnowCamp 2020

This is just a late reminder that there are still some seats available for SnowCamp, taking place at the end of this week and during the whole weekend somewhere in the Italian mountains.

I believe it will be a really nice opportunity to hack on Debian things, and thus I hope there won't be empty seats, though at the moment there are.

The venue is reachable by train and Debian will be covering the cost of accommodation, so you just have to cover transportation and meals.

The event starts in three days, so hurry up, and whatever your plans are, change them!

If you have any further questions, join #suncamp (yes!) on irc.debian.org.

17 February, 2020 07:56PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Amiga floppy recovery project scope

This is the eighth part in a series of blog posts. The previous post was First successful Amiga disk-dumping session. The whole series is available here: Amiga.

The main goal of my Amiga project is to read the data from my old floppy disks. After a bit of a hiatus (and some gentle encouragement from friends at FOSDEM) I'm nearly done, with 150 of 200 disks attempted so far. Ultimately I intend to get rid of the disks to free up space in my house, and probably the Amiga too. In the meantime, what could I do with it?

Gotek floppy emulator balanced on the Amiga

The most immediately obvious improvement would be the housing of the emulated floppy drive. My Gotek adaptor is unceremoniously balanced on top of the case. Housing it within the A500 would be much neater. I might try to follow this guide, which requires no case modifications and no 3D printed brackets, but instead of soldering new push-buttons, add a separate OLED display and rotary encoder (knob) in a separate housing, such as this 3D-printed wedge-shaped mount on Thingiverse. I do wonder if some kind of side-mounted solution might be better, so the top casing could be removed without having to re-route the wires each time.

3D printed OLED mount, from Amibay

Next would be improving the video output. My A520 video modulator developed problems that are most likely caused by leaking or blown capacitors. At the moment, I have a choice of B&W RF out, or using a 30 year old Philips CRT monitor. The latter is too big to comfortably fit on my main desk, and the blue channel has started to fail. Learning the skills to fix the A520 could be useful as the same could happen to the Amiga itself. Alternatively replacements are very cheap on the second hand market. Or I could look at a 3rd-party equivalent like the RGB4ALL. I have tried a direct, passive socket adaptor on the off-chance my LCD TV supported 15kHz, but alas, it appears it doesn't. This list of monitors known to support 15kHz is very short, so sourcing one is not likely to be easy or cheap. It's possible to buy sophisticated "Flicker Fixers/Scan Doublers" that enable the use of any external display, but they're neither cheap nor common.

My original "tank" Amiga mouse (pictured above) is developing problems with the left mouse button. Replacing the switch looks simple (in this Youtube video) but will require me to invest in a soldering iron, multimeter and related equipment (not necessarily a bad thing). It might be easier to buy a different, more comfortable old serial mouse.

Once those are out of the way, it might be interesting to explore aspects of the system that I didn't touch on as a child: how do you program the thing? I don't remember ever writing any Amiga BASIC, although I had several doomed attempts to use "game makers" like AMOS or SEUCK. What programming language were the commercial games written in? Pure assembly? The 68k is supposed to have a pleasant instruction set for this. Was there ever a practically useful C compiler for the Amiga? I never networked my Amiga. I never played around with music sampling or trackers.

There's something oddly satisfying about the idea of taking a 30 year old computer and making it into a useful machine in the modern era. I could consider more involved hardware upgrades. The Amiga enthusiast community is old and the fans are very passionate. I've discovered a lot of incredible enhancements that fans have built to improve their machines, right up to FPGA-powered CPU replacements that can run several times faster than the fastest original m68ks, and also offer digital video out, hundreds of MB of RAM, modern storage options, etc. To give an idea, check out Epsilon's Amiga Blog, which outlines some of the improvements they've made to their fleet of machines.

This is a deep rabbit hole, and I'm not sure I can afford the time (or the money!) to explore it at the moment. It will certainly not rise above my more pressing responsibilities. But we'll see how things go.

17 February, 2020 04:05PM

February 16, 2020


hackergotchi for Ben Armstrong

Ben Armstrong

Introducing Dronefly, a Discord bot for naturalists

In the past few years, since first leaving Debian as a free software developer in 2016, I’ve taken up some new hobbies, or more accurately, renewed my interest in some old ones.

Screenshot from Dronefly bot tutorial

During that hiatus, I also quietly un-retired from Debian, anticipating there would be some way to contribute to the project in these new areas of interest. That’s still an idea looking for the right opportunity to present itself, not to mention the available time to get involved again.

With age comes an increasing clamor of complaints from your body when you have a sedentary job in front of a screen, and hobbies that rarely take you away from it. You can't just plunk down in front of a screen and do computer stuff non-stop & just bounce back again at the start of each new day. So in the past several years, getting outside more started to improve my well-being and address those complaints. That revived an old interest of mine: nature photography. That, in turn, landed me at iNaturalist, re-ignited my childhood love of learning about the natural world, & hooked me on a regular habit of making observations & uploading them to iNat ever since.

Second, back in the late nineties, I wrote a little library loans renewal reminder project in Python. Python was a pleasure to work with, but that project never took off and soon was forgotten. Now once again, decades later, Python is a delight to be writing in, with its focus on writing readable code & backed by a strong culture of education.

Where Python came to bear on this new hobby was when the naturalists on the iNaturalist Discord server became a part of my life. Last spring, I stumbled upon this group & started hanging out. On this platform, we share what we are finding, we talk about those findings, and we challenge each other to get better at it. It wasn’t long before the idea to write some code to access the iNaturalist platform directly from our conversations started to take shape.

Now, ideally, what happened next would have been for an open platform, but this is where the community is. In many ways, too, other chat platforms (like irc) are not as capable as Discord of supporting the image-rich chat experience we enjoy. Thus, it seemed that's where the code had to be. Dronefly, an open source Python bot for naturalists built on the Red DiscordBot framework, was born in the summer of 2019.

Dronefly is still alpha-stage software, but in the short space of six months it has grown to roughly 3k lines of code and is used by hundreds of users across 9 different Discord servers. It includes some innovative features requested by our users, like the related command to discover the nearest common ancestor of one or more named taxa, and the map command to easily access a range map on the platform for all the named taxa. So far as I know, no equivalent features exist yet on the iNat website or apps for mobile. Commands like these put iNat data directly at users' fingertips in chat, improving understanding of the material with minimal interruption to the flow of conversation.

This tutorial gives an overview of Dronefly’s features. If you’re intrigued, please look me up on the iNaturalist Discord server following the invite from the tutorial. You can try out the bot there, and I’d be happy to talk to you about our work. Even if this is not your thing, do have a look at iNaturalist itself. Perhaps, like me, you’ll find in this platform a fun, rewarding, & socially significant outlet that gets you outside more, with all the benefits that go along with that.

That’s what has been keeping me busy lately. I hope all my Debian friends are well & finding joy in what you’re doing. Keep up the good work!

16 February, 2020 04:51PM by Ben Armstrong

February 15, 2020

Russell Coker

DisplayPort and 4K

The Problem

Video playback looks better with a higher scan rate. A lot of content that was designed for TV (EG almost all historical documentaries) is going to be 25Hz interlaced (UK and Australia) or 30Hz interlaced (US). If you view that on a low refresh rate progressive scan display (EG a modern display at 30Hz) then my observation is that it looks a bit strange. Things that move seem to jump a bit and it’s distracting.

Getting HDMI to work with 4K resolution at a refresh rate higher than 30Hz seems difficult.

What HDMI Can Do

According to the HDMI Wikipedia page [1], HDMI 1.3–1.4b (introduced in June 2006) supports 30Hz refresh at 4K resolution, and if you use 4:2:0 chroma subsampling (see the Chroma Subsampling Wikipedia page [2]) you can do 60Hz or 75Hz on HDMI 1.3–1.4b. Basically, for colour, 4:2:0 means half the horizontal and half the vertical resolution while giving the same resolution for monochrome. For video that apparently works well (4:2:0 is standard for Blu-ray) and for games it might be OK, but for text (my primary use of computers) it would suck.

So I need support for HDMI 2.0 (introduced in September 2013) on the video card and monitor to do 4K at 60Hz. Apparently none of the combinations of video card and HDMI cable I use for Linux support that.

HDMI Cables

The Wikipedia page alleges that you need either a "Premium High Speed HDMI Cable" or an "Ultra High Speed HDMI Cable" for 4K resolution at 60Hz refresh rate. My problems probably aren't related to the cable, as my testing has shown that a cheap "High Speed HDMI Cable" can work at 60Hz with 4K resolution with the right combination of video card, monitor, and drivers. A Windows 10 system I maintain has a Samsung 4K monitor and an NVidia GT630 video card running 4K resolution at 60Hz (according to Windows). The NVidia GT630 card is one that I tried on two Linux systems at 4K resolution and it causes random system crashes on both; it seems like a nice card for Windows but not for Linux.

Apparently the HDMI devices test the cable quality and use whatever speed seems to work (the cable isn’t identified to the devices). The prices at a local store are $3.98 for “high speed”, $19.88 for “premium high speed”, and $39.78 for “ultra high speed”. It seems that trying a “high speed” cable first before buying an expensive cable would make sense, especially for short cables which are likely to be less susceptible to noise.

What DisplayPort Can Do

According to the DisplayPort Wikipedia page [3] versions 1.2–1.2a (introduced in January 2010) support HBR2 which on a “Standard DisplayPort Cable” (which probably means almost all DisplayPort cables that are in use nowadays) allows 60Hz and 75Hz 4K resolution.

Comparing HDMI and DisplayPort

In summary to get 4K at 60Hz you need 2010 era DisplayPort or 2013 era HDMI. Apparently some video cards that I currently run for 4K (which were all bought new within the last 2 years) are somewhere between a 2010 and 2013 level of technology.
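
Here's a rough back-of-the-envelope calculation of the raw pixel data rates involved (a sketch for illustration only: it ignores blanking intervals, so real link requirements are somewhat higher, and it uses the approximate usable data rates after 8b/10b encoding):

#!/usr/bin/perl
use strict;
use warnings;

# Approximate usable data rates after 8b/10b encoding, in Gbps.
my %link_gbps = (
    "HDMI 1.3-1.4b"        => 8.16,
    "HDMI 2.0"             => 14.4,
    "DisplayPort 1.2 HBR2" => 17.28,
);

# 4:4:4 colour is 24 bits per pixel, 4:2:0 averages 12 bits per pixel.
for my $hz (30, 60) {
    for my $bpp (24, 12) {
        my $gbps = 3840 * 2160 * $hz * $bpp / 1e9;
        my @ok = grep { $link_gbps{$_} >= $gbps } sort keys %link_gbps;
        printf "4K \@%dHz, %2d bpp: %5.2f Gbps raw; fits on: %s\n",
            $hz, $bpp, $gbps, (@ok ? join(", ", @ok) : "none of these");
    }
}

The 60Hz 4:4:4 case needs about 12Gbps of pixel data, which is why HDMI 1.3–1.4b can only manage 4K at 60Hz with 4:2:0 subsampling, while HDMI 2.0 and DisplayPort HBR2 handle it with room to spare.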

Also, my testing (and reading of review sites) shows that it's common for video cards sold in the last 5 years or so not to support HDMI resolutions above FullHD, which means they would be HDMI version 1.1 at best. HDMI 1.2 was introduced in August 2005 and supports 1440p at 30Hz. PCIe was introduced in 2003 so there really shouldn't be many PCIe video cards that don't support HDMI 1.2. I have about 8 different PCIe video cards in my spare parts pile that don't support HDMI resolutions higher than FullHD, so it seems that such a limitation is common.

The End Result

For my own workstation I plugged a DisplayPort cable between the monitor and video card and a Linux window appeared (from KDE I think) offering me some choices about what to do. I chose to switch to the "new monitor" on DisplayPort, and that defaulted to 60Hz. After that change TV shows on Netflix and Amazon Prime both look better. So it's a good result.

As an aside, DisplayPort cables are easier to scrounge, as HDMI cables get taken by non-computer people for use with their TVs.

15 February, 2020 11:00PM by etbe

hackergotchi for Keith Packard

Keith Packard

iterative-splines

Decomposing Splines Without Recursion

To make graphics usable in Snek, I need to avoid using a lot of memory, especially on the stack, as there's no stack overflow checking on most embedded systems. Today, I worked on how to draw splines with a reasonable number of line segments without requiring any intermediate storage. Here are the results of this work:

The Usual Method

The usual method I've used to convert a spline into a sequence of line segments is to split the spline in half using De Casteljau's algorithm, recursively, until the spline can be approximated by a straight line within a defined tolerance.

Here's an example from twin:

static void
_twin_spline_decompose (twin_path_t   *path,
                        twin_spline_t *spline,
                        twin_dfixed_t  tolerance_squared)
{
    if (_twin_spline_error_squared (spline) <= tolerance_squared)
    {
        _twin_path_sdraw (path, spline->a.x, spline->a.y);
    }
    else
    {
        twin_spline_t s1, s2;

        _de_casteljau (spline, &s1, &s2);
        _twin_spline_decompose (path, &s1, tolerance_squared);
        _twin_spline_decompose (path, &s2, tolerance_squared);
    }
}

The _de_casteljau function splits the spline at the midpoint:

static void
_lerp_half (twin_spoint_t *a, twin_spoint_t *b, twin_spoint_t *result)
{
    result->x = a->x + ((b->x - a->x) >> 1);
    result->y = a->y + ((b->y - a->y) >> 1);
}

static void
_de_casteljau (twin_spline_t *spline, twin_spline_t *s1, twin_spline_t *s2)
{
    twin_spoint_t ab, bc, cd;
    twin_spoint_t abbc, bccd;
    twin_spoint_t final;

    _lerp_half (&spline->a, &spline->b, &ab);
    _lerp_half (&spline->b, &spline->c, &bc);
    _lerp_half (&spline->c, &spline->d, &cd);
    _lerp_half (&ab, &bc, &abbc);
    _lerp_half (&bc, &cd, &bccd);
    _lerp_half (&abbc, &bccd, &final);

    s1->a = spline->a;
    s1->b = ab;
    s1->c = abbc;
    s1->d = final;

    s2->a = final;
    s2->b = bccd;
    s2->c = cd;
    s2->d = spline->d;
}

This is certainly straightforward, but suffers from an obvious flaw — there's unbounded recursion. With two splines in each stack frame, each containing eight coordinates, the stack will grow rapidly; four levels of recursion will consume space for more than 64 coordinates. This can easily overflow the stack of a tiny machine.

De Casteljau Splits At Any Point

De Casteljau's algorithm is not limited to splitting splines at the midpoint. You can supply an arbitrary position t, 0 < t < 1, and you will end up with two splines which, drawn together, exactly match the original spline. I use 1/2 in the above version because it provides a reasonable guess as to how an arbitrary spline might be decomposed efficiently. You can use any value and the decomposition will still work, it will just change the recursion depth along various portions of the spline.

Iterative Left-most Spline Decomposition

What our binary decomposition does is to pick points t₀ … tₙ such that the splines t₀..t₁ through tₙ₋₁..tₙ are all 'flat'. It does this by recursively bisecting the spline, storing two intermediate splines on the stack at each level. If we look at just how the first, or 'left-most', spline is generated, that can be represented as an iterative process. At each step in the iteration, we split the spline in half:

S' = _de_casteljau(s, 1/2)

We can re-write this using the broader capabilities of the De Casteljau algorithm by splitting the original spline at decreasing points along it:

S[n] = _de_casteljau(s0, (1/2)ⁿ)

Now recall that the De Casteljau algorithm generates two splines, not just one. One describes the spline from 0..(1/2)ⁿ, the second the spline from (1/2)ⁿ..1. This gives us an iterative approach to generating a sequence of 'flat' splines for the whole original spline:

while S is not flat:
    n = 1
    do
        S_left, S_right = _de_casteljau(S, (1/2)ⁿ)
        n = n + 1
    until S_left is flat
    result ← S_left
    S = S_right
result ← S

We've added an inner loop that wasn't needed in the original algorithm, and we're introducing some cumulative errors as we step around the spline, but we don't use any additional memory at all.

Final Code

Here's the full implementation:

/*
 * Copyright © 2020 Keith Packard <keithp@keithp.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA.
 */

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef float point_t[2];
typedef point_t spline_t[4];

#define SNEK_DRAW_TOLERANCE 0.5f

/* Is this spline flat within the defined tolerance */
static bool
_is_flat(spline_t spline)
{
    /*
     * This computes the maximum deviation of the spline from a
     * straight line between the end points.
     *
     * From https://hcklbrrfnn.files.wordpress.com/2012/08/bez.pdf
     */
    float ux = 3.0f * spline[1][0] - 2.0f * spline[0][0] - spline[3][0];
    float uy = 3.0f * spline[1][1] - 2.0f * spline[0][1] - spline[3][1];
    float vx = 3.0f * spline[2][0] - 2.0f * spline[3][0] - spline[0][0];
    float vy = 3.0f * spline[2][1] - 2.0f * spline[3][1] - spline[0][1];

    ux *= ux;
    uy *= uy;
    vx *= vx;
    vy *= vy;
    if (ux < vx)
        ux = vx;
    if (uy < vy)
        uy = vy;
    return (ux + uy <= 16.0f * SNEK_DRAW_TOLERANCE * SNEK_DRAW_TOLERANCE);
}

static void
_lerp (point_t a, point_t b, point_t r, float t)
{
    int i;
    for (i = 0; i < 2; i++)
        r[i] = a[i]*(1.0f - t) + b[i]*t;
}

static void
_de_casteljau(spline_t s, spline_t s1, spline_t s2, float t)
{
    point_t first[3];
    point_t second[2];
    int i;

    for (i = 0; i < 3; i++)
        _lerp(s[i], s[i+1], first[i], t);

    for (i = 0; i < 2; i++)
        _lerp(first[i], first[i+1], second[i], t);

    _lerp(second[0], second[1], s1[3], t);

    for (i = 0; i < 2; i++) {
        s1[0][i] = s[0][i];
        s1[1][i] = first[0][i];
        s1[2][i] = second[0][i];

        s2[0][i] = s1[3][i];
        s2[1][i] = second[1][i];
        s2[2][i] = first[2][i];
        s2[3][i] = s[3][i];
    }
}

static void
_spline_decompose(void (*draw)(float x, float y), spline_t s)
{
    float       t;
    spline_t    s1, s2;

    (*draw)(s[0][0], s[0][1]);

    /* If s is flat, we're done */
    while (!_is_flat(s)) {
        t = 1.0f;

        /* Iterate until s1 is flat */
        do {
            t = t/2.0f;
            _de_casteljau(s, s1, s2, t);
        } while (!_is_flat(s1));

        /* Draw to the end of s1 */
        (*draw)(s1[3][0], s1[3][1]);

        /* Replace s with s2 */
        memcpy(&s[0], &s2[0], sizeof (spline_t));
    }
    (*draw)(s[3][0], s[3][1]);
}

void draw(float x, float y)
{
    printf("%8g, %8g\n", x, y);
}

int main(int argc, char **argv)
{
    spline_t spline = {
        { 0.0f, 0.0f },
        { 0.0f, 256.0f },
        { 256.0f, -256.0f },
        { 256.0f, 0.0f }
    };
    _spline_decompose(draw, spline);
    return 0;
}

15 February, 2020 05:55AM

Russell Coker

Self Assessment

Background Knowledge

The Dunning Kruger Effect [1] is something everyone should read about. It’s the effect where people who are bad at something rate themselves higher than they deserve because their inability to notice their own mistakes prevents improvement, while people who are good at something rate themselves lower than they deserve because noticing all their mistakes is what allows them to improve.

Noticing all your mistakes all the time isn’t great (see Impostor Syndrome [2] for where this leads).

Erik Dietrich wrote an insightful article “How Developers Stop Learning: Rise of the Expert Beginner” [3] which I recommend that everyone reads. It is about how some people get stuck at a medium level of proficiency and find it impossible to unlearn bad practices which prevent them from achieving higher levels of skill.

What I’m Concerned About

A significant problem in large parts of the computer industry is that it’s not easy to compare various skills. In the sport of bowling (which Erik uses as an example) it’s easy to compare your score against people anywhere in the world, if you score 250 and people in another city score 280 then they are more skilled than you. If I design an IT project that’s 2 months late on delivery and someone else designs a project that’s only 1 month late are they more skilled than me? That isn’t enough information to know. I’m using the number of months late as an arbitrary metric of assessing projects, IT projects tend to run late and while delivery time might not be the best metric it’s something that can be measured (note that I am slightly joking about measuring IT projects by how late they are).

If the last project I personally controlled was 2 months late and I’m about to finish a project 1 month late does that mean I’ve increased my skills? I probably can’t assess this accurately as there are so many variables. The Impostor Syndrome factor might lead me to think that the second project was easier, or I might get egotistical and think I’m really great, or maybe both at the same time.

This is one of many resources recommending timely feedback for education [4]. It says "Feedback needs to be timely" and "It needs to be given while there is still time for the learners to act on it and to monitor and adjust their own learning". For basic programming tasks such as debugging a crashing program the feedback is reasonably quick. For longer term tasks, like assessing whether the choice of technologies for a project was good, the feedback cycle is almost impossibly long. If I used product A for a year long project, does it seem easier than product B because it is easier or because I've just got used to its quirks? Did I make a mistake at the start of a year long project, and if so do I remember why I made that choice I now regret?

Skills that Should be Easy to Compare

One would imagine that martial arts is a field where people have very realistic understanding of their own skills, a few minutes of contest in a ring, octagon, or dojo should show how your skills compare to others. But a YouTube search for “no touch knockout” or “chi” shows that there are more than a few “martial artists” who think that they can knock someone out without physical contact – with just telepathy or something. George Dillman [5] is one example of someone who had some real fighting skills until he convinced himself that he could use mental powers to knock people out. From watching YouTube videos it appears that such people convince the members of their dojo of their powers, and those people then faint on demand “proving” their mental powers.

The process of converting an entire dojo into believers in chi seems similar to the process of converting a software development team into “expert beginners”, except that martial art skills should be much easier to assess.

Is it ever possible to assess any skills if people trying to compare martial art skills often do it so badly?

Conclusion

It seems that any situation where one person is the undisputed expert has a risk of the "chi" problem if the expert doesn't regularly meet peers to learn new techniques. If someone like George Dillman or one of the "expert beginners" that Erik Dietrich refers to were to regularly meet other people with similar skills and accept feedback from them, they would be much less likely to become a "chi" master or "expert beginner". For the computer industry, meetup.com seems the best solution to this: whatever your IT skills are, you can find a meetup where you can meet people with more skills than you in some area.

Here’s one of many guides to overcoming Imposter Syndrome [5]. Actually succeeding in following the advice of such web pages is not going to be easy.

I wonder if getting a realistic appraisal of your own skills is even generally useful. Maybe the best thing is to just recognise enough things that you are doing wrong to be able to improve and to recognise enough things that you do well to have the confidence to do things without hesitation.

15 February, 2020 03:57AM by etbe

February 14, 2020

hackergotchi for Anisa Kuci

Anisa Kuci

Outreachy post 4 - Career opportunities

As mentioned in my last blog posts, Outreachy is very interesting and I got to learn a lot already. Two months have already passed by quickly and there is still one month left for me to continue working and learning.

As I imagine all the other interns are thinking now, I am also thinking about what is going to be the next step for me. After such an interesting experience as this internship, thinking about the next steps is not that simple.

I have been contributing to Free Software projects for quite some years now. I have been part of the only FLOSS community in my country for many years and I grew up together with the community, advocating free software in and around Albania.

I have contributed to many projects, including Mozilla, OpenStreetMap, Debian, GNOME, Wikimedia projects etc. So, I am sure, the FLOSS world is definitely the right place for me to be. I have helped communities grow and I am very enthusiastic about it.

I have grown up and evolved as a person through contributing to all the projects mentioned above. I have gained knowledge that I would not have had a chance to acquire if it were not for the "sharing knowledge" ideology that is so strong in the FLOSS environment.

Through organizing big and small events, from 300-people conferences to 30-people bug squashing parties to 5-people strategy workshops, I have been able to develop skills because the community trusted me with responsibility in event organizing even before I was able to prove myself. I have been supported by great mentors who helped me learn on the job and left me with practical knowledge that I am happy to continue applying in the FLOSS community. I am thinking about formalizing my education in the marketing or communication areas, to gain some academic background and further strengthen these practical skills.

During Outreachy I have learned to use the bash command line much better. I have learned LaTeX, as it was one of the tools that I needed to work on the fundraising materials. I have also improved a lot at using git commands and feel much more confident now. I have worked a lot on fundraising while also learning Python very intensively, and programming is definitely a skill that I would love to deepen.

I know that foreign languages are something that I enjoy, as I speak English, Italian, Greek and of course my native language Albanian, but lately I have learned that programming languages can be as much fun as natural languages, and I am keen on learning more of both.

I love working with people, so I hope in the future I will be able to continue working in environments where you interact with a diverse set of people.

14 February, 2020 12:21PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSimdJson 0.0.1 now on CRAN!

A fun weekend-morning project, namely wrapping the outstanding simdjson library by Daniel Lemire (with contributions by Geoff Langdale, John Keiser and many others) into something callable from R via a new package RcppSimdJson, led to a first tweet on January 20, a reference to the brand new github repo, and a CRAN upload a few days later—and then two weeks of nothingness.

Well, a little more than nothing as Daniel is an excellent “upstream” to work with who promptly incorporated two changes that arose from preparing the CRAN upload. So we did that. But CRAN being as busy and swamped as they are we needed to wait. The ten days one is warned about. And then some more. So yesterday I did a cheeky bit of “bartering” as Kurt wanted a favour with an updated digest version so I hinted that some reciprocity would be appreciated. And lo and behold he admitted RcppSimdJson to CRAN. So there it is now!

We have some upstream changes already in git, but I will wait a few days to let a week pass before uploading the now synced upstream code. Anybody who wants it sooner knows where to get it on GitHub.

simdjson is a gem. Via some very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it manages to parse gigabytes of JSON per second, which is quite mind-boggling. I highly recommend the video of the recent talk by Daniel Lemire at QCon (which was also voted best talk).

The NEWS entry (from a since-added NEWS file) for the initial RcppSimdJson upload follows.

Changes in version 0.0.1 (2020-01-24)

  • Initial CRAN upload of first version

  • Comment-out use of stdout (now updated upstream)

  • Deactivate use of computed GOTOs for compiler compliance and CRAN Policy via #define

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 February, 2020 03:00AM

February 13, 2020

hackergotchi for Jonathan Carter

Jonathan Carter

Initial experiments with the Loongson Pi 2K

Recently, Loongson made some Pi 2K boards available to Debian developers and Aron Xu was kind enough to bring me one to FOSDEM earlier this month. It’s a MIPS64-based board with 2GB RAM, 2 gigabit ethernet cards, an m.2 (SATA) disk slot and a whole bunch more i/o. More details about the board itself are available on the Debian wiki; here is a quick board tour from there:

In my previous blog post I still had the protective wrapping on the acrylic case. Here it is all peeled off and polished, after Holger pointed that out to me on IRC. I’ll admit I kind of liked the earthy feel that the protective covers had, but this is nice too.

The reason why I wanted this board is that I don’t have access to any MIPS64 hardware whatsoever, and it can be really useful for getting Calamares to run properly on MIPS64 on Debian. Calamares itself builds fine on this platform, but calamares-settings-debian will only work on amd64 and i386 right now (where it will install either grub-efi or grub-pc depending on which mode you booted in; otherwise it will crash during installation). I already have lots of plans for the Bullseye release cycle (and even for Calamares specifically), so I’m not sure if I’ll get there, but I’d like to get support for mips64 and arm64 into calamares-settings-debian for the bullseye release. I think it’s mostly just a case of detecting the platforms properly and installing/configuring the right bootloaders. Hopefully it’s that simple.

In the meantime, I decided to get to know this machine a bit better. I’m curious how it could be useful to me otherwise. All its expansion ports definitely seems interesting. First I plugged it into my power meter to check what power consumption looks like. According to this, it typically uses between 7.5W and 9W and about 8.5W on average.

I initially tried it out on an old Sun monitor that I salvaged from a recycling heap. It wasn’t working anymore, but my anonymous friend replaced its power supply and its CFL backlight with an LED backlight, and now it’s a really nice 4:3 monitor for my vintage computers. On a side note, if you’re into electronics, follow his YouTube channel where you can see him repair things. Unfortunately the board doesn’t like this screen by default (just a black screen when xorg starts); I didn’t check whether it was just an xorg configuration issue or a hardware limitation, but I moved it to an old 720p TV that I usually use for my mini collection and it displayed fine there. I thought I’d mention it in case someone tries this board and wonders why they just see a black screen after it boots.

I was curious whether these Ethernet ports could realistically do anything more than 100mbps (sometimes they go on a bus that maxes out way before gigabit does), so I installed iperf3 and gave it a shot. This went through 2 switches that have some existing traffic on them, but the ~85MB/s I got on my first test completely satisfied me that these ports are plenty fast enough.

Since I first saw the board, I was curious about the PCIe slot. I attached an older NVidia card (one that still runs fine with the free Nouveau driver), attached some external power to the card, and booted it all up…

The card powers on and the fan enthusiastically spins up, but sadly the card is not detected on the Loongson board. I think you need some PC BIOS equivalent stuff to poke the card in the right places so that it boots up properly.

Disk performance is great, as can be expected with the SSD it has on board. It’s significantly better than the extremely slow flash you typically get on development boards.

I was starting to get curious about whether Calamares would run on this, so I went ahead and installed it along with calamares-settings-debian. I wasn’t even sure it would start up, but lo and behold, it did. This is quite possibly the first time Calamares has ever started up on a MIPS64 machine. It started up in Chinese, since I haven’t changed the language settings yet in Xfce.

I was curious whether Calamares would start up on the framebuffer. Linux framebuffer support can be really flaky on platforms with weird/incomplete Linux drivers. I ran ‘calamares -platform linuxfb’ from a virtual terminal and it just worked.

This is all very promising and makes me a lot more eager to get it all working properly and to generate a nice image that you can use Calamares with to install Debian on a MIPS64 board. Unfortunately, at least for now, this board still needs its own kernel, so it would need its own unique installation image. Hopefully all the special bits will make it into the mainline Linux kernel before too long. Graphics performance wasn’t good, but I noticed that they have some drivers on GitHub that I haven’t tried yet; that’s an experiment for another evening.

Updates:

  • Price: A few people asked about the price, so I asked Aron if he could share some pricing information. I got this one for free; it’s an unreleased demo model. At least two models based on this one might be released: a smaller board with fewer pinouts for about €100, and the current demo version for about $200 (CNY 1399), so the final version might cost somewhere in that ballpark too. These aren’t any kind of final prices, and I don’t represent Loongson in any capacity, but at least this should give you some idea of what it would cost.
  • More boards: Not all Debian Developers who requested a board have received theirs yet; Aron said that more boards should become available by March/April.

13 February, 2020 08:29PM by jonathan

hackergotchi for Romain Perier

Romain Perier

Meetup Debian Toulouse

Hi there !

My company, Viveris, is opening its offices to host a Debian Meetup in Toulouse this summer (June 5th or June 12th).

Everyone is welcome at this event. We're currently looking for volunteers to present demos, lightning talks or full talks (following the talks, any kind of hacking session is possible, like bug triaging, coding sprints, etc.).

Any kind of topic is welcome.

See the announcement (in French) for more details.

13 February, 2020 06:50PM by Romain Perier (noreply@blogger.com)

February 12, 2020

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

digest 0.6.24: Some more refinements

Another new version of digest arrived on CRAN (and also on Debian) earlier today.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, and spookyhash algorithms) permitting easy comparison of R language objects. It is a fairly widely-used package (currently listed at 889k monthly downloads with 255 direct reverse dependencies and 7340 indirect reverse dependencies) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation.

This release comes a few months after the previous release. It contains a few contributed fixes, some of which prepare for R 4.0.0 in its current development. This includes a testing change for the matrix/array class, and corrects the registration of the PMurHash routine as pointed out by Tomas Kalibera and Kurt Hornik (who also kindly reminded me to finally upload this, as I had made the fix already in December). Moreover, Will Landau sped up one operation affecting his popular drake pipeline toolkit. Lastly, Thierry Onkelinx corrected one more aspect related to sha1.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 February, 2020 11:17PM

hackergotchi for Paulo Henrique de Lima Santana

Paulo Henrique de Lima Santana

Bits from MiniDebCamp Brussels and FOSDEM 2020

I traveled to Brussels from January 28th to February 6th to join MiniDebCamp and FOSDEM 2020. It was my second trip to Brussels; I was there in 2019 for the Video Team Sprint and FOSDEM.

MiniDebCamp took place at Hackerspace Brussels (HSBXL) for 3 days (January 29-31). My initial idea was to travel on the 27th and arrive in Brussels on the 28th, rest, and go to MiniDebCamp on the first day, but I bought a ticket leaving Brazil on the 28th because it was cheaper.

Trip from Curitiba to Brussels

I left Curitiba on the 28th at 13:20 and arrived in São Paulo at 14:30. The flight from São Paulo to Munich departed at 18h, and after 12 hours I arrived there at 10h (local time). The flight was 30 minutes late because we had to wait for airport staff to remove ice from the ground. I was worried because my flight to Brussels would depart at 10:25 and I still had to get through immigration.

Bruxelas

After walking a lot, I arrived at the immigration desk (there was no line), got my passport stamped, walked a lot again, took a train, and arrived at my gate; that flight was late too, so everything was going well. I departed Munich at 10:40 and arrived in Brussels on the 29th at 12h.

I went from the airport to the Hostel Galia by bus, train and another bus to check in and leave my luggage. On the way I had lunch at “Station Brussel Noord” because I was really hungry, and I arrived at the hostel at 15h.

My reservation was in a shared bedroom, and when I arrived there I met Marcos, a Brazilian guy from Brasília who was there for an international Magic card competition. He was in Brussels for the first time and was a little lost about what to do in the city. I invited him to go downtown to look for a cellphone store, because we needed to buy SIM cards. I wanted to buy from Base, and the hostel front desk people told us to go to the store on Rue Neuve. I showed the Grand-Place to Marcos, and after we bought SIM cards we went to Primark because he needed to buy a towel. By then it was night, so we decided to buy food and have dinner at the hostel. I gave up on going to HSBXL because I was tired, and I thought it was not a good idea to go there for the first time at night.

MiniDebCamp day 1

On Thursday (30th) morning I went to HSBXL. I walked from the hostel to “Gare du Midi”, and after walking from one side to the other, I finally found the bus stop. I got off the bus at the fourth stop, in front of the hackerspace building. It was a little hard to find the right entrance, but I managed it. I arrived at the HSBXL room, talked to the other DDs there, and found an empty table for my laptop. Other DDs kept arriving throughout the day.

Bruxelas

I read and answered e-mails and went out for a walk in Anderlecht to get to know the city and look for a place to have lunch, because I didn’t want to eat a sandwich at the restaurant in the building. I stopped at Lidl and Aldi stores to buy some food to eat later, and had lunch at a Turkish restaurant, where the food was very good. After that, I decided to walk a little more to visit the Jean-Claude Van Damme statue and take some photos :-)

Bruxelas

Back at HSBXL, my main interest at MiniDebCamp was joining the DebConf Video Team sprint to learn how to set up a voctomix and gateway machine to be used at MiniDebConf Maceió 2020. I asked Nicolas some questions about that, and he suggested I do a fresh installation on the Video Team machine using Buster.

Bruxelas

I installed Buster, and using the USB installer and ansible playbooks it was possible to set up the machine as Voctotest. I had already done this setup at home using a simple machine without a Blackmagic card or a camera. From that point, I didn’t know what to do, so Nicolas came and started to set up the machine, first as Voctomix and then as Gateway, while I watched and learned. After a while, everything worked perfectly with a camera.

In the evening the group ordered some pizzas to eat with beers sold by HSBXL. I was celebrating too, because during the day I had received messages and a call from Rentcars: I was hired by them! Before the trip I had been to an interview at Rentcars in the morning, and I got the positive answer while I was in Brussels.

Bruxelas

Before I left the hackerspace, I received the door codes to open HSBXL early the next day. Some days before MiniDebCamp, Holger had asked if someone could open the room on Friday morning, and I had answered that I could. I left at 22h and went back to the hostel to sleep.

MiniDebCamp day 2

On Friday I arrived at HSBXL at 9h, opened the room, and took some photos of the empty space. It is amazing how spaces like that can be used in Europe. Last year I was at MiniDebConf Hamburg at Dock Europe. I miss this kind of building and hackerspace in Curitiba.

Bruxelas

I installed and set up the Video Team machine again, but this time I was alone, following what Nicolas had done before. And everything worked perfectly again. Nicolas asked me to create a new ansible playbook combining voctomix and gateway to make installation easier, send it as an MR, and test it.

I went out to have lunch at the same restaurant as the day before and discovered there was a Leonidas factory outlet in front of HSBXL, meaning I could buy Belgian chocolates cheaply. I went there and bought a box with 1.5kg of chocolates.

Bruxelas

When I came back to HSBXL, I started to test the new ansible playbook. The test took longer than I expected, and at the end of the day Nicolas needed to take the equipment away. It was really great to get hands-on with the real equipment used by the Video Team. I learned a lot!

To celebrate the end of MiniDebCamp, we had sponsored free beer! I have to say I drank too much, and getting back to the hostel that night was complicated :-)

Bruxelas

A complete report from DebConf Video Team can be read here.

Many thanks to Nicolas Dandrimont for teaching me Video Team stuff, to Kyle Robbertze for setting up the Video Sprint, to Holger Levsen for organizing MiniDebCamp, and to the HSBXL people for hosting us there.

FOSDEM day 1

FOSDEM 2020 took place at the ULB on February 1st and 2nd. On the first day I took a train and heard a group of Brazilians talking in Portuguese; they were going to FOSDEM too. I arrived around 9:30 and went to the Debian booth, because I had volunteered to help and I had brought t-shirts from Brazil to sell. It was madness, with people buying Debian stuff.

Bruxelas

After a while I had to leave the booth because I had volunteered to film the talks in the Janson auditorium from 11h to 13h. I had done this job last year and decided to do it again because it is a way to help the event, and they gave me a t-shirt and a free meal ticket that I exchanged for two sandwiches :-)

Bruxelas

After lunch, I walked around the booths, got some stickers, talked with people, and drank some beers from the OpenSUSE booth until the end of the day. I left FOSDEM and went to the hostel to drop my bag, and then went to the Debian dinner organized by Marco d’Itri at Chezleon.

Bruxelas

The dinner was great, with 25 very nice Debian people. Afterwards we ate waffles, and some of us went to Delirium, but I decided to go to the hostel to sleep.

FOSDEM day 2

On the second and last day I arrived around 9h, spent some time at the Debian booth, and went to the Janson auditorium to help again from 10h to 13h.

Bruxelas

I got the free meal ticket, and after lunch I walked around, visited booths, and went to the Community devroom to watch talks. The first was “Recognising Burnout” by Andrew Hutchings, and listening to him I believe I had burnout symptoms while organizing DebConf19. The second was “How Does Innersource Impact on the Future of Upstream Contributions?” by Bradley Kuhn. Both talks were great.

After the end of FOSDEM, we went as a group to have dinner at a restaurant near the ULB. We had a great time together. After the dinner we took the same train and took a group photo.

Bruxelas

Two days to enjoy Brussels

With MiniDebCamp and FOSDEM over, I had Monday and Tuesday free before returning to Brazil on Wednesday. I wanted to join Config Management Camp in Ghent, but decided to stay in Brussels and visit some places. I visited:

Bruxelas

  • Carrefour - to buy beers to bring to Brazil :-)

Bruxelas

Last day and returning to Brazil

On Wednesday (5th) I woke up early to finish packing and check out. I left the hostel and took a bus, a train and another bus to Brussels Airport. My flight to Frankfurt departed at 15:05, arriving there at 15:55. I had thought about visiting the city, since I had to wait for 6 hours and I had read it was possible to look around in that time, but I was very tired and decided to stay at the airport.

I walked to my gate, got through immigration to get my passport stamped, and waited until 22:05, when my flight departed to São Paulo. After 12 hours flying, I arrived in São Paulo at 6h (local time). In São Paulo, when arriving on an international flight, we must collect all our luggage and go through customs. After I dropped my luggage with the domestic airline, I went to the gate to wait for my flight to Curitiba.

The flight was supposed to depart at 8:30 but was 20 minutes late. I arrived in Curitiba at 10h, took an Uber, and finally I was home.

Last words

I wrote a diary (in Portuguese) covering each of my days in Brussels. It can be read starting here.

All my photos are here

Many thanks to Debian for sponsoring my trip to Brussels, and to DPL Sam Hartman for approving it. It’s a unique opportunity to go to Europe to meet and work with a lot of DDs, and to participate in a very important worldwide free software event.

12 February, 2020 10:00AM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Announcing miniDebConf Montreal 2020 -- August 6th to August 9th 2020

This is a guest post by the miniDebConf Montreal 2020 orga team on pollo's blog.

Dear Debianites,

We are happy to announce miniDebConf Montreal 2020! The event will take place in Montreal, at Concordia University's John Molson School of Business from August 6th to August 9th 2020. Anybody interested in Debian development is welcome.

Following the announcement of the DebConf20 location, our desire to participate became incompatible with our commitment to the Boycott, Divestment and Sanctions (BDS) campaign launched by Palestinian civil society in 2005. Hence, many active Montreal-based Debian developers, along with a number of other Debian developers, have decided not to travel to Israel in August 2020 for DebConf20.

Nevertheless, recognizing the importance of DebConf for the health of both the developer community and the project as a whole, we decided to organize a miniDebConf just prior to DebConf20, in the hope that fellow developers who might have otherwise skipped DebConf entirely this year will join us instead. Fellow developers who decide to travel to both events are of course most welcome.

Registration is open

Registration is open now, and free, so go add your name and details on the Debian wiki.

We'll accept registrations until July 25th, but don't wait too long before making your travel plans! Finding reasonable accommodation in Montreal during the summer can be hard if you don't plan in advance.

We have you covered with lots of attendee information already.

Sponsors wanted

We're looking for sponsors willing to help making this event possible. Information on sponsorship tiers can be found here.

Get in touch

We gather on the #debian-quebec on irc.debian.org and the debian-dug-quebec@lists.debian.org list.

12 February, 2020 05:00AM by The miniDebConf Montreal 2020 orga team

hackergotchi for Norbert Preining

Norbert Preining

MuPDF, QPDFView and other Debian updates

Update 2020-02-24: The default Debian packages for mupdf and pymupdf have been updated to the current version (or newer), and thus I have removed those packages from my repo. Thanks to the maintainers for updating! Qpdfview is still outdated, though.

For those interested, I have updated mupdf (1.16.1), pymupdf (1.16.10), and qpdfview (current bzr sources) to the latest versions and added them to my local Debian apt repository:

deb https://www.preining.info/debian unstable main
deb-src https://www.preining.info/debian unstable main

QPDFView now has the Fitz (MuPDF) backend available.

At the same time I have updated Elixir to 1.10.1. All packages are available as source and amd64 binaries. Information on the other apt repositories available here can be found in this post.

Enjoy.

12 February, 2020 03:03AM by Norbert Preining

February 11, 2020

hackergotchi for Sean Whitton

Sean Whitton

Traditional Perl 5 classes and objects

Last summer I read chromatic’s Modern Perl, and was recommended to default to using Moo or Moose to define classes, rather than writing code to bless things into objecthood myself. At the time the project I was working on needed to avoid any dependencies outside of the Perl core, so I made a mental note of the advice, but didn’t learn how to use Moo or Moose. I do remember feeling like I was typing out a lot of boilerplate, and wishing I could use Moo or Moose to reduce that.

In recent weeks I’ve been working on a Perl distribution which can freely use non-core dependencies from CPAN, and so right from the start I used Moo to define my classes. It seemed like a no-brainer because it’s more declarative; it didn’t seem like there could be any disadvantages.

At one point, when writing a new class, I got stuck. I needed to call one of the object’s methods immediately after instantiation of the object. BUILDARGS is, roughly, the constructor for Moo/Moose classes, so I started there, but you don’t have access to the new object during BUILDARGS, so you can’t simply call its methods on it. So what I needed to do was change my design around so as to be more conformant to the Moo/Moose view of the world, such that the work of the method call could get done at the right time. I mustn’t have been in a frame of mind for that sort of thinking at the time because what I ended up doing was dropping Moo from the package and writing a constructor which called the method on the new object, after blessing the hash, but before returning a hashref to the caller.

This was my first experience of having the call to bless() not be the last line of my constructor, and I believe that this simple dislocation significantly improved my grip on core Perl 5 classes and objects: the point is that they’re not declarative—they’re collections of functionality to operate on encapsulated data, where the instantiation of that data, too, is a piece of functionality. I had been thinking about classes too declaratively, and this is why writing out constructors and accessors felt like boilerplate. Now writing those out feels like carefully setting down precisely what functionality for operating on the encapsulated data I want to expose. I also find core Perl 5 OO quite elegant (in fact I find pretty much everything about Perl 5 highly elegant, except of course for its dereferencing syntax; not sure why this opinion is so unpopular).
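
To illustrate the shape of that constructor, here is a minimal sketch (the class and method names are invented for the example, not taken from my actual distribution):

package Inn;

use strict;
use warnings;
use Carp;

sub new {
    my ($class, %args) = @_;
    croak "missing required 'name' argument"
        unless defined $args{name};

    # Bless the hash first...
    my $self = bless { name => $args{name}, guests => [] }, $class;

    # ...then call methods on the new object before returning it;
    # bless() doesn't have to be the constructor's last line.
    $self->_register;

    return $self;
}

# A method the constructor itself needs to call (hypothetical).
sub _register {
    my ($self) = @_;
    $self->{registered} = 1;
    return;
}

# An accessor, written out deliberately rather than generated.
sub name {
    my ($self) = @_;
    return $self->{name};
}

1;

Instantiation here is just another piece of functionality: new() assembles the encapsulated data, blesses it, and is then free to use the object like any other caller would.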

I then came across the Cor proposal and followed a link to this recent talk criticising Moo/Moose. The speaker, Tadeusz Sośnierz, argues that Moo/Moose implicitly encourages you to have an accessor for each and every piece of the encapsulated data in your class, which is bad OO. Sośnierz pointed out that if you take care to avoid generating all these accessors, while still having Moo/Moose store the arguments to the constructor provided by the user in the right places, you end up back with a new kind of boilerplate, which is Moo/Moose-specific, and arguably worse than what’s involved in defining core Perl 5 classes. So, he asks, if we are going to take care to avoid generating too many accessors, and thereby end up with boilerplate, what are we getting out of using Moo/Moose over just core Perl 5 OO? There is some functionality for typechecking and method signatures, and we have the ability to use roles instead of multiple inheritance.
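
To make the criticism concrete, here is a hedged sketch of my own (not from the talk): the first class happily generates a public read-write accessor for what is really internal state, while the second avoids that only by reaching for Moo-specific knobs like init_arg:

package Local::Tally::Leaky;

use Moo;

# What Moo implicitly encourages: a public rw accessor for state
# that callers arguably should not be able to touch.
has count => (is => 'rw', default => 0);

package Local::Tally::Guarded;

use Moo;

# Avoiding the public accessor brings its own, Moo-specific
# boilerplate: a "private" attribute still initialised from the
# caller's 'count' argument.
has _count => (is => 'ro', init_arg => 'count', default => 0);

sub increment { $_[0]->{_count}++ }

1;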

After watching Sośnierz’s talk, I have been rethinking whether I should follow Modern Perl’s advice to default to using Moo/Moose to define new classes, because I want to avoid the problem of too many accessors. Considering the advantages of Moo/Moose that Sośnierz ends up with at the end of his talk: I find the way that Perl provides parameters to subroutines and methods intuitive and flexible, and don’t see the need to build typechecking into that process—just throw some exceptions with croak() if the types aren’t right, before getting on with the business logic of the subroutine or method. Roles are a different matter. These are certainly an improvement on multiple inheritance. But there is Role::Tiny that you can use instead of Moo/Moose, as sketched below.
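
For reference, composing a role with Role::Tiny onto a plain blessed-hash class looks something like this (a sketch with made-up package names):

package Local::Role::Describable;

use Role::Tiny;

# Consuming classes must provide a name() method.
requires 'name';

sub describe {
    my $self = shift;
    return 'This is ' . $self->name;
}

package Local::Widget;

use strict;
use warnings;
use Role::Tiny::With;

with 'Local::Role::Describable';

sub new  { my ($class, %args) = @_; return bless {%args}, $class }
sub name { return $_[0]->{name} }

1;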

So for the time being it seems I should go back to blessing hashes, and that I should also get to grips with Role::Tiny. I don’t have a lot of experience with OO design, so can certainly imagine changing my mind about things like Perlish typechecking and subroutine signatures (I also don’t understand, yet, why some people find the convention of prefixing private methods and attributes with an underscore not to be sufficient—Cor wants to add attribute and method privacy to Perl). However, it seems sensible to avoid using things like Moo/Moose until I can be very clear in my own mind about what advantages using them gets me. Bad OO with Moo/Moose seems worse than occasionally simplistic, occasionally tedious, but correct OO with the Perl 5 core.

11 February, 2020 04:23PM

hackergotchi for Paulo Henrique de Lima Santana

Paulo Henrique de Lima Santana

My free software activities in January 2020

Hello, this is my first monthly report about activities in Debian and Free Software in general.

Since the end of DebConf19 in July 2019 I had been avoiding working on Debian stuff because the event was too stressful for me. For months I felt discouraged from contributing to the project, until December.

Packages

In December I watched two new video tutorial series from João Eriberto about:

  • Debian Packaging - using git and gbp, parts 1, 2, 3, 4, 5 and 6
  • Debian Packaging with docker, parts 1 and 2

Since then, I have decided to update my packages using gbp and docker, and it has been great. In December and January I worked on the following packages.

I did QA Uploads of:

I adopted and packaged new release of:

  • ddir 2019.0505, closing bugs #903093 and #920066.

I packaged new releases of:

I packaged new upstream versions of:

I backported to buster-backports:

I packaged:

MiniDebConf Maceió 2020

I helped to edit the MiniDebConf Maceió 2020 website.

I wrote the sponsorship brochure and sent it to some Brazilian companies.

I sent a message with a call for activities to national and international mailing lists.

I sent a post to Debian Micronews.

FLISOL 2020

I sent a message to the UFPR Education Director asking if we could use the Campus Rebouças auditorium to hold FLISOL there in April, but he declined. We are still looking for a venue for FLISOL.

DevOps

I started to study DevOps culture and, for that, I have watched a lot of videos from LINUXtips.

And I read the book “Docker para desenvolvedores”, written by Rafael Gomes.

MiniDebCamp in Brussels

I traveled to Brussels to join MiniDebCamp on January 29-31 and FOSDEM on February 1-2.

At MiniDebCamp my main interest was joining the DebConf Video Team sprint to learn how to set up a voctomix and gateway machine to be used at MiniDebConf Maceió 2020. I was able to set up the Video Team machine, installing Buster and using Ansible playbooks. It was a very nice opportunity to learn how to do that.

A complete report from DebConf Video Team can be read here.

I wrote a diary (in Portuguese) describing each of my days in Brussels. It can be read starting here. I intend to write more in English about MiniDebCamp and FOSDEM in a specific post.

Many thanks to Debian for sponsoring my trip to Brussels. It was a unique opportunity to go to Europe to meet and work with a lot of DDs.

Misc

I submitted an MR to the DebConf20 website fixing some texts.

I joined the WordPress Meetup.

I joined a live streaming from Comunidade Debian Brasil to talk about MiniDebConf Maceió 2020.

I watched an interesting video, “Who is afraid of Debian Sid”, from the debxp channel.

I deleted the Agenda de eventos de Software Livre e Código Aberto because I wasn’t receiving events to add there and had no free time to publicize it.

I started to write the list of FLOSS events for 2020 that I have kept on my website for many years.

Finally, I have been watching videos from DebConf19. So far, I have seen these great talks:

  • Bastidores Debian - entenda como a distribuição funciona
  • Benefícios de uma comunidade local de contribuidores FLOSS
  • Caninos Loucos: a plataforma nacional de Single Board Computers para IoT
  • Como obter ajuda de forma eficiente sobre Debian
  • Comunidades: o bom o ruim e o maravilhoso
  • O Projeto Debian quer você!
  • A newbie’s perspective towards Debian
  • Bits from the DPL
  • I’m (a bit) sick of maintaining piuparts.debian.org (mostly) alone, please help

That’s all folks!

11 February, 2020 10:00AM

February 10, 2020

hackergotchi for Markus Koschany

Markus Koschany

My Free Software Activities in January 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in February) that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • Again Reiner Herrmann did a very good job updating some of the most famous FOSS games in Debian. I reviewed and sponsored supertux 0.6.1.1, supertuxkart 1.1 and love 11.3, as well as several updates to fix build failures with the latest version of scons in Debian. Reiner Herrmann, Moritz Mühlenhoff and Phil Wyett contributed patches to fix release critical bugs in netpanzer, boswars, btanks, and xboxdrv.
  • I packaged new upstream versions of minetest 5.1.1, empire 1.15 and bullet 2.89.
  • I backported freeciv 2.6.1 to buster-backports and
  • applied a patch by Asher Gordon to fix a teleporter bug in berusky2. He also submitted another patch to address even more bugs and I hope to review and upload a new revision soon.

Debian Java

Misc

  • As the maintainer I requested the removal of pyblosxom, a web blog engine written in Python 2. Unfortunately pyblosxom is no longer actively maintained and the port to Python 3 has never been finished. I thought it would be better to remove the package now since we have a couple of good alternatives like Hugo or Jekyll.
  • I packaged new upstream versions of wabt and privacybadger.

Debian LTS

This was my 47th month as a paid contributor and I have been paid to work 15 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-2065-1. Issued a security update for apache-log4j1.2 fixing 1 CVE.
  • DLA-2077-1. Issued a security update for tomcat7 fixing 2 CVEs.
  • DLA-2078-1. Issued a security update for libxmlrpc3-java fixing 1 CVE.
  • DLA-2097-1. Issued a security update for ppp fixing 1 CVE.
  • DLA-2098-1. Issued a security update for ipmitool fixing 1 CVE.
  • DLA-2099-1. Issued a security update for checkstyle fixing 1 CVE.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my twentieth month and I have been paid to work 10 hours on ELTS.

  • ELA-208-1. Issued a security update for tomcat7 fixing 2 CVEs.
  • ELA-209-1. Issued a security update for linux fixing 41 CVEs.
  • Investigated CVE-2019-17023 in nss which is needed to build and run OpenJDK 7. I found that the vulnerability did not affect this version of nss because of the incomplete and experimental support for TLS 1.3.

Thanks for reading and see you next time.

10 February, 2020 10:57PM by apo

Ruby Team

Ruby Team Sprint 2020 in Paris - Day Five - We’ve brok^done it

On our last day we met like every day before, working on our packages, fixing and uploading them. The transitions went on. Antonio, Utkarsh, Lucas, Deivid, and Cédric took some time to examine the gem2deb bug reports. We uploaded the last missing Kali Ruby package. And we had our last discussion, covering the future of the team and an evaluation of the sprint:

Last discussion round of the Ruby Team Sprint 2020 in Paris

As a result:

  • We will examine ways to find leaf packages.
  • We plan to organize another sprint next year right before the release freeze, probably again around FOSDEM time. We are leaning towards Berlin but will explore the available locations and the costs.
  • We will have monthly IRC meetings.

We think the sprint was a success. Some stuff got (intentionally and less intentionally) broken on the way. And also a lot of stuff got fixed. Eventually we took a big step towards a successful Ruby 2.7 transition.

So we want to thank

  • the Debian project and our DPL Sam for sponsoring the event,
  • Offensive Security for sponsoring the event too,
  • Sorbonne Université and LPSM for hosting us,
  • Cédric Boutillier for organizing the sprint and kindly hosting us,
  • and really everyone who attended, making this a success: Antonio, Abhijith, Georg, Utkarsh, Balu, Praveen, Sruthi, Marc, Lucas, Cédric, Sebastien, Deivid, Daniel.
Group photo of the attendees of the Ruby Team Sprint 2020 in Paris; from the left in the back: Antonio, Abhijith, Georg, Utkarsh, Balu, Praveen, Sruthi, Josy. And in the front: Marc, Lucas, Cédric, Sebastien, Deivid, Daniel.

In the evening we finally closed the venue which hosted us for 5 days, cleaned up, and went for a last beer together (at least for now). Some of us will stay in Paris a few days longer and finally get to see the city.

Eiffel Tower, Paris (February 2020)

Goodbye Paris and safe travels to everyone. It was a pleasure.

10 February, 2020 10:45PM by Daniel Leidert (dleidert@debian.org)

Utkarsh Gupta

Debian Activities for January 2020

Here’s my (fourth) monthly update about the activities I’ve done in Debian this January.

Debian LTS

This was my fourth month as a Debian LTS paid contributor.
I was assigned 23.75 hours and worked on the following things:

CVE Fixes and Announcements:

  • Issued DLA 2060-1, fixing CVE-2020-5504, for phpmyadmin.
    Details here:

    In phpMyAdmin 4 before 4.9.4 and 5 before 5.0.1, SQL injection exists in the user accounts page. A malicious user could inject custom SQL in place of their own username when creating queries to this page. An attacker must have a valid MySQL account to access the server.

    For Debian 8 “Jessie”, this problem has been fixed in version 4:4.2.12-2+deb8u8.
    Furthermore, worked on preparing the security update for Stretch and Buster with the original maintainer.

  • Issued DLA 2063-1, fixing CVE-2019-3467 for debian-lan-config.
    Details here:

    In debian-lan-config < 0.26, configured too permissive ACLs for the Kerberos admin server allowed password changes for other Kerberos user principals.

    For Debian 8 “Jessie”, this problem has been fixed in version 0.19+deb8u2.

  • Issued DLA 2070-1, fixing CVE-2019-16779, for ruby-excon.
    Details here:

    In RubyGem excon before 0.71.0, there was a race condition around persistent connections, where a connection which is interrupted (such as by a timeout) would leave data on the socket. Subsequent requests would then read this data, returning content from the previous response.

    For Debian 8 “Jessie”, this problem has been fixed in version 0.33.0-2+deb8u1.
    Furthermore, sent a patch to the Security team for Stretch and Buster.

    P.S. this backporting took the most time and effort this month.

  • Issued DLA 2090-1, fixing CVE-2020-7039, for qemu.
    Details here:

    tcp_emu in tcp_subr.c in libslirp 4.1.0, as used in QEMU 4.2.0, mismanages memory, as demonstrated by IRC DCC commands in EMU_IRC. This can cause a heap-based buffer overflow or other out-of-bounds access which can lead to a DoS or potential execute arbitrary code.

    For Debian 8 “Jessie”, this problem has been fixed in version 1:2.1+dfsg-12+deb8u13.

Miscellaneous:

  • Triaged samba, cacti, storebackup, and qemu.

  • Checked with upstream of ruby-rack for their CVE fix which induces regression.

  • Worked a bit on ruby-rack-cors but couldn’t complete because of Amsterdam -> Brussels travel. Thanks to Brian for completing it \o/


Debian Uploads

This was a great month! MiniDebCamp -> FOSDEM -> Ruby Sprints. Blog post soon :D
In any case, in the month of January, I did the following work:

New Version:

  • ruby-haml-rails ~ 2.0.1-1 (to unstable).

Source-Only and Other Uploads:

  • golang-github-zyedidia-pty ~ 1.1.1+git20180126.3036466-3 (to unstable).
  • ruby-benchmark-suite ~ 1.0.0+git.20130122.5bded6-3 (to unstable).
  • golang-github-robertkrimen-otto ~ 0.0+git20180617.15f95af-2~bpo10+1 (to buster-backports).
  • golang-github-zyedidia-pty ~ 1.1.1+git20180126.3036466-3~bpo10+1 (to buster-backports).
  • golang-github-mitchellh-go-homedir ~ 1.1.0-1~bpo10+1 (to buster-backports).
  • golang-golang-x-sys ~ 0.0+git20190726.fc99dfb-1~bpo10+1 (to buster-backports).
  • golang-github-mattn-go-isatty ~ 0.0.8-2~bpo10+1 (to buster-backports).
  • golang-github-mattn-go-runewidth ~ 0.0.7-1~bpo10+1 (to buster-backports).
  • golang-github-dustin-go-humanize ~ 1.0.0-1~bpo10+1 (to buster-backports).
  • golang-github-blang-semver ~ 3.6.1-1~bpo10+1 (to buster-backports).
  • golang-github-flynn-json5 ~ 0.0+git20160717.7620272-2~bpo10+1 (to buster-backports).
  • golang-github-zyedidia-terminal ~ 0.0+git20180726.533c623-2~bpo10+1 (to buster-backports).
  • golang-github-go-errors-errors ~ 1.0.1-3~bpo10+1 (to buster-backports).
  • python-debianbts ~ 3.0.2~bpo10+1 (to buster-backports).

Bug Fixes:

  • #945232 for ruby-benchmark-suite.
  • #946904 for ruby-excon (CVE-2019-16779).

Reviews and Sponsored Uploads:

  • phpmyadmin for William Desportes.

Miscellaneous:

  • Outreachy mentoring for GitLab project for Sakshi Sangwan.
  • Raised various MRs upstream to sync Debian’s package version with GitLab’s upstream.

One exciting blog post coming very soon.

Until next time.
:wq for today.

10 February, 2020 09:50PM

Reproducible Builds

Reproducible Builds in January 2020

Welcome to the January 2020 report from the Reproducible Builds project. In our reports we outline the most important things that we have been up to. In this month’s report, we cover:

  • Upstream news & event coverage: Reproducing the Telegram messenger, etc.
  • Software development: Updates and improvements to our tooling
  • Distribution work: More work in Debian, openSUSE & friends
  • Misc news: From our mailing list & how to get in touch etc.

What are reproducible builds?

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

If you are interested in contributing, please visit the Contribute page on our website.


Upstream news & event coverage

The Telegram messaging application has documented full instructions for verifying that its original source code is exactly the same code that is used to build the versions available on the Apple App Store and Google Play.

Reproducible builds were mentioned in a panel on Software Distribution with Sam Hartman, Richard Fontana, & Eben Moglen at the Software Freedom Law Center’s 15th Anniversary Fall Conference (at ~35m21s).

Vagrant Cascadian will present a talk at SCALE 18x in Pasadena, California on March 8th titled There and Back Again, Reproducibly.

Matt Graeber (@mattifestation) posted on Twitter that:

If you weren’t aware of the reason Portable Executable timestamps in Win 10 binaries were nonsensical, Raymond’s post explains the reason: to support reproducible builds.

… referencing an article by Raymond Chen from January 2018 which, amongst other things, mentions:

One of the changes to the Windows engineering system begun in Windows 10 is the move toward reproducible builds.

Jan Nieuwenhuizen announced the release of GNU Mes 0.22. Vagrant Cascadian subsequently uploaded this version to Debian which produced a bit-for-bit identical mescc-mes-static binary with the mes-rb5 package in GNU Guix.

Software development

diffoscope

diffoscope is our in-depth and content-aware diff-like utility that can locate and diagnose reproducibility issues. It is run countless times a day on our testing infrastructure and is essential for identifying fixes and causes of nondeterministic behaviour.

This month, diffoscope versions 135 and 136 were uploaded to Debian unstable by Chris Lamb. He also made the following changes to diffoscope itself, including:

  • New features:

    • Support external difference tools such as Meld, etc. similar to git-difftool(1). (#87)
    • Extract resources.arsc files as well as classes.dex from Android .apk files to ensure that we show the differences there. (#27)
    • Fall back to the regular .zip container format for .apk files if apktool is not available. [][][][]
    • Drop --max-report-size-child and --max-diff-block-lines-parent, which had been scheduled for removal since January 2018. []
    • Append a comment to a difference if we fall back to a less-informative container format because we are missing a tool. [][]
  • Bug fixes:

    • No longer raise a KeyError exception if we request an invalid member from a directory container. []
  • Documentation/workflow improvements:

    • Clarify that “install X” in various outputs actually refers to system-level packages. []
    • Add a note to the Contributing documentation to suggest enabling concurrency when running the tests locally. []
    • Include the CONTRIBUTING.md file in the PyPI.org release. [][]
  • Logging improvements:

    • Log a debug-level message if we cannot open a file as container due to a missing tool to assist in diagnosing issues. []
    • Correct a debug message related to compare_meta calls to quote the arguments correctly. []
    • Add the current PATH environment variable to the Normalising locale... debug-level message. []
    • Print the Starting diffoscope $VERSION line as the first line of the log as we are, well, starting diffoscope. []
    • If we don’t know the HTML output name, don’t emit an enigmatically truncated “HTML output for” debug message. []
  • Tests:

    • Don’t exhaustively output the entire HTML report when testing the regression for #875281; parsing the JSON and pruning the tree should be enough. (#84)
    • Refresh and update the fixtures for the .ico tests to match the latest version of Imagemagick in Debian unstable. []
  • Code improvements:

    • Add a .git-blame-ignore-revs file to improve the output of git-blame(1) by ignoring large changes when introducing the Black source code reformatter and update the CONTRIBUTING.md guide on how to optionally use it locally. []
    • Add a noqa line to avoid a false-positive Flake8 “unused import” warning. []
    • Move logo.svg to under the doc/ directory [] and make setup.py executable [].
    • Tidy diffoscope.main’s configure method. [][][][]
    • Drop an assertion that is guaranteed by parallel if conditional [] and an unused “Difference” import from the APK comparator. []
    • Turn down the “volume” for a recommendation in a comment. []
    • Rename the diffoscope.locale module to diffoscope.environ as we are modifying things beyond just the locale (eg. calling tzset, etc.) []
    • Factor-out the generation of foo not available in path comment messages into the exception that raises them [] and factor out running all of our many zipinfo calls into a new method [].
  • trydiffoscope is the web-based version of diffoscope. This month, Chris Lamb fixed the PyPI.org release by adding the trydiffoscope script itself to the MANIFEST file and performing another release cycle. []

In addition, Marc Herbert adjusted the cbfstool tests to search for expected keywords in the output, rather than specific output [], fixed a misplaced debugging line [] and added a “Testing” section to the CONTRIBUTING.rst [] file. Vagrant Cascadian updated to diffoscope 135 in GNU Guix.

reprotest

reprotest is our end-user tool that builds the same source code twice in widely differing environments and then checks the binaries produced by each build for any differences. This month, versions 0.7.11 and 0.7.12 were uploaded to Debian unstable by Holger Levsen. Iñaki Malerba improved the version test to split on the + character [] and Ross Vandegrift updated the code to allow the user to override timeouts from the surrounding environment [].

Holger Levsen also made the following additional changes:

  • Drop the short timeout and use the install timeout instead. (#897442)
  • Use “real” reStructuredText comments instead of using the raw directive. []
  • Update the PyPI classifier to express we are using Python 3.7 now. []

Other tools

  • disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues. This month, Chris Lamb fixed an issue by ignoring the return values of fsyncdir to ensure (for example) dpkg(1) can “flush” /var/lib/dpkg correctly [] and merged a change from Helmut Grohne to use the build architecture’s version of pkg-config to permit cross-architecture builds [].

  • strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. This month, version 1.6.3-2 was uploaded to Debian unstable by Holger Levsen to bump the Standards-Version. []

Upstream development

The Reproducible Builds project detects, dissects and attempts to fix as many unreproducible packages as possible. Naturally, we endeavour to send all of our patches upstream. This month, we wrote another large number of such patches, including:


Distribution work

openSUSE

In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update and submitted the following bugs and patches:

Many Python packages were updated to avoid writing .pyc files with an embedded random path, including jupyter-jupyter-wysiwyg, jupyter-jupyterlab-latex, python-PsyLab, python-hupper, python-ipyevents (don’t rewrite .zip file), python-ipyleaflet, python-jupyter-require, python-jupyter_kernel_test, python-nbdime (do not rewrite .zip, avoid time-based .pyc), python-nbinteract, python-plaster, python-pythreejs, python-sidecar & tensorflow (use pip install --no-compile).

Debian

There was yet more progress towards making the Debian Installer images reproducible. Following on from last month’s efforts, Chris Lamb requested a status update on the Debian bug in question.

Daniel Schepler posted to the debian-devel mailing list to ask whether “running dpkg-buildpackage manually from the command line” is supported, particularly after cases where extra packages installed while the package was built either resulted in a failed build or even in broken packages (eg. #948522, #887902, etc.). Our .buildinfo files could be one solution to this, as they record the environment at the time of the package build.

Holger disabled scheduling of packages from the “oldstable” stretch release on tests.reproducible-builds.org. This is the first time since stretch’s existence that we are no longer testing this release.

OpenJDK, a free and open-source implementation of the Java Platform, was updated in Debian to incorporate a number of patches from Emmanuel Bourg, including:

  • Make the generated character data source files reproducible. (#933339)
  • Make the generated module-info.java files reproducible. (#933342)
  • Make the generated copyright headers reproducible. (#933349)
  • Make the build user reproducible. (#933373)

83 reviews of Debian packages were added, 32 were updated and 96 were removed this month, adding to our knowledge about identified issues. Many issue types were updated by Chris Lamb, including timestamp_in_casacore_tables, random_identifiers_in_epub_files_generated_by_asciidoc, nondeterministic_ordering_in_casacore_tables, captures_build_path_in_golang_compiler, captures_build_path_via_haskell_adddependentfile & png_generated_by_plantuml_captures_kernel_version_and_builddate.

Lastly, Mattia Rizzolo altered the permissions and shared the notes.git repository which underpins the aforementioned package classifications with the entire “Debian” group on Salsa, therefore giving all DDs write access to it. This is an attempt to invite more direct contributions instead of merge requests.

Other distributions

The FreeBSD Project Tweeted that:

Reproducible builds are turned on by default for -RELEASE []

… which targets the next released version of this distribution (view revision). Daniel Ebdrup followed up to note that this option:

Used to be turned on in -CURRENT when it was being tested, but it has been turned off now that there’s another branch where it’s used, whereas -CURRENT has more need to have the revision printed in uname (which is one of the things that make a build unreproducible). []

For Alpine Linux, Holger Levsen disabled the builders run by the Reproducible Builds project as our patch to the abuild utility (see December’s report) doesn’t apply anymore and thus all builds have become unreproducible again. Subsequent to this, a patch was merged upstream. []

In GNU Guix, on January 14th, Konrad Hinsen posted a blog post entitled Reproducible computations with Guix which, amongst other things, remarks that:

The [guix time-machine command] actually downloads the specified version of Guix and passes it the rest of the command line. You are running the same code again. Even bugs in Guix will be reproduced faithfully!

The Yocto Project reported that they have reproducible cross-built binaries that are independent of both the underlying host distribution the build is run on and independent of the path used for the build. This is now being continually tested on the Yocto Project’s automated infrastructure to ensure this state is maintained in the future.

Project website & documentation

There was more work performed on our website this month, including:

In addition, Arnout Engelen added a Scala programming language example for the SOURCE_DATE_EPOCH environment variable [], David del Amo updated the link to the Software Freedom Conservancy to remove some double parentheses [] and Peter Wu added a Debian example for the -ffile-prefix-map argument to support Clang version 10 [].
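
For readers unfamiliar with it, SOURCE_DATE_EPOCH is the convention by which build tools are handed a fixed, externally supplied timestamp instead of the current time. A typical invocation (an illustrative sketch, not tied to any particular project) looks like:

# Pin embedded timestamps to the time of the last git commit, so
# that rebuilding the same commit yields identical binaries.
SOURCE_DATE_EPOCH="$(git log -1 --pretty=%ct)"
export SOURCE_DATE_EPOCH
make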

Testing framework

We operate a fully-featured and comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. This month, the following changes were made:

  • Adrian Bunk:
    • Use the et_EE locale/language instead of fr_CH. In Estonian, the z character is sorted between s and t, which is contrary to common incorrect assumptions about the sorting order of ASCII characters. []
    • Add ffile_prefix_map_passed_to_clang to the list of issues filtered as these build failures should be ignored. []
    • Remove the ftbfs_build_depends_not_available_on_amd64 from the list of filtered issues as this specific problem no longer exists. []
  • Holger Levsen:

    • Debian:
      • Always configure apt to ignore expired release files on hosts running in the future. []
      • Create an “oldsuites” page, showing suites we used to test in the past. [][][][][]
      • Schedule more old packages from the buster distribution. []
      • Deal with shell escaping and other options. [][][]
      • Reverse the suite ordering on the packages page. [][]
      • Show bullseye statistics on dashboard page, moving away from buster [] and additionally omit stretch [].
    • F-Droid:
      • Document the increased disk space requirements; we require over 700 GiB now. []
    • Misc:
      • Gracefully deal with umount problems. [][]
      • Run code to show “todo” entries locally. []
      • Use mmdebstrap instead of debootstrap. [][][]
  • Jelle van der Waa (Arch Linux):

    • Set the PACKAGER variable to a valid string to avoid noise in the logging. []
    • Add a link to the Arch Linux-specific package page in the overview table. []
  • Mattia Rizzolo:
    • Fix a hard-coded reference to the current year. []
    • Ignore No server certificate defined warning messages when automatically parsing logfiles. []
  • Vagrant Cascadian special-cased u-boot on the armhf architecture: First, do not build the all architecture as the dependencies are not available on this architecture [] and also pass the --binary-arch argument to pbuilder too [].

The usual node maintenance was performed by Mattia Rizzolo [][], Vagrant Cascadian [][][][] and Holger Levsen.


Misc news

On our mailing list this month:

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can also get in touch with us via:



This month’s report was written by Arnout Engelen, Bernhard M. Wiedemann, Chris Lamb, heinrich5991, Holger Levsen, Jelle van der Waa, Mattia Rizzolo and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

10 February, 2020 05:33PM

hackergotchi for Anisa Kuci

Anisa Kuci

FOSDEM 2020

As many other people, this year I attended FOSDEM.

For the ones that might not be familiar with the name, FOSDEM is the biggest free software developers gathering in Europe, happening every year in Brussels, Belgium.

This year I decided to attend again, as it is an event I really enjoyed the last two times I attended. As I am currently doing my Outreachy internship, I found FOSDEM a very good opportunity to gather some more inspiration. My goal was to come back from this event with ideas and motivation that would help during the last phases of my internship, as I need to work on documentation and best practices for fundraising. I also wanted to meet the people I have worked with so far on Outreachy, discuss organizational topics with them in person, and ask for advice.

As FOSDEM is quite big, I met many Debian community members again and received very nice feedback on my Outreachy work. During the weekend I spent a little bit of time at the Debian booth, where I tried to help, as all the people there were already busy and the Debian booth at FOSDEM is really crowded. I understand why: I couldn’t resist buying some Debian merchandise myself. I also felt proud to see the results of my work on the fundraising materials for DebConf20, as the fundraising brochure I worked on and the “freshly baked” stickers were available at the booth to promote the next DebConf.

FOSDEM 2020 - Debian booth merchandise

During the weekend I also volunteered to help at the GNOME booth, which was quite crowded as well. This is not the first time I have contributed to GNOME, and I adapted very quickly to the community, as everyone is very friendly and positive, so it was very enjoyable for me to spend some time there as well. I was also introduced to the GNOME Asia organizing team, and we had a great exchange about our mutual interest in organizing conferences. Thank you for the GNOME Asia keychain!

FOSDEM 2020 - Meeting GNOME Asia

I attended a few talks, and unfortunately missed some others that I was interested in. Luckily the FOSDEM team works hard to record the talks, so they are available online for people who could not make it to the conference or into the talk rooms, which are often full.

FOSDEM 2020 - Attending talks

As I am working on fundraising, the team asked me to be part of a meeting with one of the potential sponsors for DebConf20. We discussed the available sponsorship levels and the perks this specific company would be interested in receiving. This was a good experience for me, as this kind of in-person communication is very important for establishing a good connection with potential partners.

My Outreachy internship finishes soon, and this is also one of the reasons why my mentor supported my attending FOSDEM using the Outreachy stipend. FOSDEM is huge, and you meet hundreds of people within two days, so it is a good opportunity to look for a future job. There is also a job fair booth where companies post job offers. I certainly passed by and picked up some offers that I thought would suit me.

And the cherry on top of the cake at FOSDEM is all the booths distributed across the different buildings. I not only met friends from different communities, but also got to know many new projects that I had not heard of before. And of course, I got some very nice swag. Stickers and other goodies are never too much!

Thank you Debian, Outreachy and SFC for enabling me to attend FOSDEM 2020.

FOSDEM 2020 - Debian booth

10 February, 2020 12:30PM

François Marier

Fedora 31 LXC setup on Ubuntu Bionic 18.04

Similarly to what I wrote for Fedora 29, here is how I was able to create a Fedora 31 LXC container on an Ubuntu 18.04 (bionic) laptop.

Setting up LXC on Ubuntu

First of all, install lxc:

apt install lxc
echo "veth" >> /etc/modules
modprobe veth

turn on bridged networking by putting the following in /etc/sysctl.d/local.conf:

net.ipv4.ip_forward=1

and applying it using:

sysctl -p /etc/sysctl.d/local.conf

Then allow the right traffic in your firewall (/etc/network/iptables.up.rules in my case):

# LXC containers
-A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.0.3.0/24 -j ACCEPT
-A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 239.255.255.250 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT

and apply these changes:

iptables-apply

before restarting the lxc networking:

systemctl restart lxc-net.service

Create the container

Once that's in place, you can finally create the Fedora 31 container:

lxc-create -n fedora31 -t download -- -d fedora -r 31 -a amd64

To see a list of all distros available with the download template:

lxc-create -n foo --template=download -- --list

Once the container has been created, disable AppArmor for it by adding the following line to its configuration file (typically /var/lib/lxc/fedora31/config):

lxc.apparmor.profile = unconfined

since the AppArmor profile isn't working at the moment.

Logging in as root

Starting the container in one window:

lxc-start -n fedora31 -F

and attaching to a console:

lxc-attach -n fedora31

to set a root password:

passwd

Logging in as an unprivileged user via ssh

While logged into the console, I tried to install ssh:

$ dnf install openssh-server
Cannot create temporary file - mkstemp: No such file or directory

but it failed because TMPDIR is set to a non-existent directory:

$ echo $TMPDIR
/tmp/user/0

I found a fix and ran the following:

TMPDIR=/tmp dnf install openssh-server

then started the ssh service:

systemctl start sshd.service

Then I installed a few other packages as root:

dnf install vim sudo man

and created an unprivileged user with sudo access:

adduser francois -G wheel
passwd francois

I set this in /etc/ssh/sshd_config:

GSSAPIAuthentication no

to prevent slow ssh logins.

Now log in as that user from the console and add an ssh public key:

mkdir .ssh
chmod 700 .ssh
echo "<your public key>" > .ssh/authorized_keys
chmod 644 .ssh/authorized_keys

You can now log in via ssh. The IP address to use can be seen in the output of:

lxc-ls --fancy

10 February, 2020 05:30AM

February 09, 2020

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.9.850.1.0

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 685 other packages on CRAN.

A new upstream release 9.850.1 of Armadillo was just released. And as some will undoubtedly notice, Conrad opted for an increment of 50 rather than 100. We wrapped this up as version 0.9.850.1.0, having prepared a full (github-only) tarball and the release candidate 9.850.rc1 a few days ago. Both the release candidate and the release got the full reverse depends treatment, and no issues were found.

Changes in the new release below.

Changes in RcppArmadillo version 0.9.850.1.0 (2020-02-09)

  • Upgraded to Armadillo release 9.850.1 (Pyrocumulus Wrath)

    • faster handling of compound expressions by diagmat()

    • expanded .save() and .load() to handle CSV files with headers via csv_name(filename,header) specification

    • added log_normpdf()

    • added .is_zero()

    • added quantile()

  • The sparse matrix test using scipy, if available, is now simplified thanks to recently added reticulate conversions.
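
As a small, hedged illustration of how one of the newly added functions can be used from R via Rcpp attributes (the function name armaQuantile is my own, not part of the package):

// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>

// Illustrative only: expose Armadillo's new quantile() to R.
// From R, compile with Rcpp::sourceCpp("quantiles.cpp") and call
// e.g. armaQuantile(rnorm(100), c(0.25, 0.5, 0.75)).
// [[Rcpp::export]]
arma::vec armaQuantile(const arma::vec& x, const arma::vec& probs) {
    return arma::quantile(x, probs);
}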

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

09 February, 2020 10:29PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

23.97 fps

In the series “X11 really isn't that great”:

xrandr(1) will show you 23.974 Hz modes:

   1920x1080     60.00    50.00    59.94    30.00    25.00    24.00*   29.97    23.98

But you cannot select them:

sesse@morgental:~$ xrandr -r 23.98
Rate 23.98 Hz not available for this size

I finally dug into the source code, and... the configuration setting is using a “simplified” library that can only deal with integer rates.

The simplest workaround (I guess save for using a GUI; this is my HTPC, and I manage it remotely) is to find the mode number directly:

sesse@morgental:~$ xrandr --verbose
[...]
  1920x1080 (0x5a) 74.176MHz +HSync +VSync
        h: width  1920 start 2558 end 2602 total 2750 skew    0 clock  26.97KHz
        v: height 1080 start 1084 end 1089 total 1125           clock  23.98Hz
[...]
sesse@morgental:~$ xrandr --output HDMI-2 --mode 0x5a

And tada, welcome to 2005:

   1920x1080     60.00    50.00    59.94    30.00    25.00    24.00    29.97    23.98*

Gah.

09 February, 2020 09:45PM

Andrew Cater

CD release time post 3 20200209 1653 - Release for Oldstable / Stretch / 9.12 has happened

Just tidying up on the last of my tests. The release to the mirrors for 9.12 has happened.

Much sterling work by  amacater, Isy, RattusRattus, Sledge and Schweer (in alphabetical order). This one seemed to have turned up fewer bugs.

Lots of fun, the usual bouts of problems between chair and keyboard and a feeling for me of something worthwhile to share in.

My private CD mirror is now up to date with 10.3 and a few machines here have already been upgraded :) On a day when the weather is cold, wet, windy and rainy, it's been a good excuse to sit in the warm with a laptop.

Here's to the next one, then.

09 February, 2020 05:05PM by Andrew Cater (noreply@blogger.com)

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Apt Offline 1.8.2

apt-offline 1.8.2

I am pleased to announce the release of apt-offline version 1.8.2

This release has many bug fixes, along with a fix for a long-standing signature validation issue.

2017 - The year of realization and change

Back in 2017, a bug was reported that apt-offline did not validate the apt meta Packages files. apt-offline was only doing signature validation of the Release files, but did no validation of the apt meta Packages files, whose checksums are listed in the Release files. This validation was completely missing from apt-offline, which gave the user the wrong impression that validation was in place.
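
In other words, the missing step was checking the downloaded meta files against the checksums listed in the (signature-verified) Release file. A simplified sketch of that check, purely illustrative and not apt-offline's actual code, could look like:

import hashlib

# Illustrative sketch: once the Release file's GPG signature has been
# verified, check a downloaded meta file against the SHA256 checksum
# the Release file lists for it.
def checksum_matches(release_path, meta_name, meta_path):
    expected = None
    with open(release_path) as release:
        for line in release:
            fields = line.split()
            # Checksum entries look like "<sum> <size> <path>"; the
            # SHA256 section comes after MD5Sum, so keep the last match.
            if len(fields) == 3 and fields[2] == meta_name:
                expected = fields[0]
    if expected is None:
        raise ValueError("%s is not listed in the Release file" % meta_name)
    with open(meta_path, "rb") as meta:
        actual = hashlib.sha256(meta.read()).hexdigest()
    return actual == expected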

I had hoped to fix this issue soon after it was reported, in time for it to be part of the next Debian stable release. But that never happened. On the contrary, I think 2 stable releases happened in between. And now it is 2020.

2017 was a year to spend a large chunk of my time on real life issues, for good. I realized that it is important to always give precedence to personal life, fix issues, set realistic priorities, spend time taking in the happenings around you, get life rolling smoothly, and then come back to work. This helps one sustain in the longer run. Otherwise, with no sense of self, everything can fall apart catastrophically.

From that phase, I learned many things. I now have much more respect for people who have really been successful at committing a large amount of their time to a volunteer project like Debian. Having gone through the time crunch phase myself, I can only imagine how fellow DDs manage their time, sustainably, over the years. There are many folks I have seen active for more than a decade, and they still rock.

1.8.2 release

Because the apt meta validation was a major issue, I have decided to run through the workflow and explain how apt-offline reacts to invalid, tampered data. Below are konsole captures, with output snipped where not very relevant.


apt-offline ‘set’ operation

The usual first step on the offline box is to generate a file with all relevant details about repositories and packages. This step generates the set.uris file that needs to be transferred to the online machine. In the following example, it is being run with the defaults, which is to gather the necessary information for the 'update' and 'upgrade' operations.

rrs@priyasi:/var/tmp/Debian-Build/Result$ sudo apt-offline set /tmp/set.uris
Gathering details needed for 'update' operation
Gathering details needed for 'upgrade' operation
16:33 ♒ ༐ ☺ 😄


apt-offline ‘get’ operation

The ‘get’ operation should run on most machines where Python is available. The example below shows the usual output as the required data is downloaded, based on the information in the set.uris file generated in the previous step on the ‘offline’ machine.

One item to pay attention to in this step is some of the errors that are reported. Not all repository admins enable all the apt metadata available on their mirrors. This is commonly seen for localization related files. Similarly, not all compression types are available on all repository servers. Some may only host .xz based meta files while others may have .gz ones. So, for apt-offline, which has to bridge the gap of the offline <=> online setup, there is more work.

For compression types, apt-offline cycles through the known list of types. Only if the return is still a 404 after cycling through all the known compression types do we error out.
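
A simplified sketch of that cycling logic (illustrative only, not the actual apt-offline source) could look like:

import urllib.error
import urllib.request

# Try each known compression suffix in turn; only give up with an
# error once every variant has returned a 404.
KNOWN_SUFFIXES = [".xz", ".gz", ".bz2", ".lzma"]

def fetch_meta_file(base_url):
    for suffix in KNOWN_SUFFIXES:
        try:
            with urllib.request.urlopen(base_url + suffix) as response:
                return suffix, response.read()
        except urllib.error.HTTPError as err:
            if err.code != 404:
                raise  # a non-404 failure is a real problem
    raise FileNotFoundError("no compression variant of %s found" % base_url)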

Similarly, for localization related meta files, we do the same cycling. But in addition to that, there is the possibility that the repository admin may not have enabled the localization data to be served at all. In that case, apt-offline will ultimately report an error.

And that is what is shown below. Because these errors do not break the functionality, I treat them as non-fatal.

rrs@priyasi:/var/tmp/Debian-Build/Result$ apt-offline get /tmp/set-trimmed.uris --bundle /tmp/set.zip --threads 5

Fetching APT Data

WARNING: If you are on a slow connection, it is good to
WARNING: limit the number of threads to a low number like 2.
WARNING: Else higher number of threads executed could cause
WARNING: network congestion and timeouts.

Downloading http://deb.debian.org/debian/dists/testing/Release.gpg                                                             
Downloading http://deb.debian.org/debian/dists/testing/Release                                                             
Downloading http://deb.debian.org/debian/dists/testing/InRelease                                                             
Downloading http://deb.debian.org/debian/dists/unstable/Release.gpg                                                             
Downloading http://deb.debian.org/debian/dists/unstable/Release                                                             
http://deb.debian.org/debian/dists/unstable/Release.gpg done                                                             
Downloading http://deb.debian.org/debian/dists/unstable/InRelease                                                             
http://deb.debian.org/debian/dists/testing/Release.gpg done                                                             
Downloading http://deb.debian.org/debian/dists/experimental/Release.gpg                                                             
http://deb.debian.org/debian/dists/unstable/Release done                                                             
Downloading http://deb.debian.org/debian/dists/experimental/Release                                                             
http://deb.debian.org/debian/dists/testing/InRelease done                                                             
Downloading http://deb.debian.org/debian/dists/experimental/InRelease                                                             
http://deb.debian.org/debian/dists/testing/Release done                                                             
Downloading http://deb.debian.org/debian/dists/testing/main/source/Sources.xz                                                             
http://deb.debian.org/debian/dists/unstable/InRelease done                                                             
Downloading http://deb.debian.org/debian/dists/testing/non-free/source/Sources.xz                                                             
http://deb.debian.org/debian/dists/experimental/Release.gpg done                                                             
Downloading http://deb.debian.org/debian/dists/testing/contrib/source/Sources.xz                                                             
http://deb.debian.org/debian/dists/experimental/InRelease done                                                             
Downloading http://deb.debian.org/debian/dists/testing/main/binary-amd64/Packages.xz                                                             
http://deb.debian.org/debian/dists/experimental/Release done                                                             
Downloading http://deb.debian.org/debian/dists/testing/main/binary-i386/Packages.xz                                                             
http://deb.debian.org/debian/dists/testing/contrib/source/Sources.xz done                                                             
Downloading http://deb.debian.org/debian/dists/testing/main/binary-all/Packages.xz                                                             
http://deb.debian.org/debian/dists/testing/non-free/source/Sources.xz done                                                             
Downloading http://deb.debian.org/debian/dists/testing/main/i18n/Translation-en_IN.xz                                                             
ERROR: Giving up on URL http://deb.debian.org/debian/dists/testing/main/i18n/Translation-en_IN.lzma
Downloading http://deb.debian.org/debian/dists/testing/main/i18n/Translation-en.xz                                                             
http://deb.debian.org/debian/dists/testing/main/binary-all/Packages.xz done                                                             
Downloading http://deb.debian.org/debian/dists/testing/main/i18n/Translation-en_US.xz                                                             
http://deb.debian.org/debian/dists/testing/main/source/Sources.xz done                                                             
Downloading http://deb.debian.org/debian/dists/testing/main/Contents-amd64.xz                                                             
http://deb.debian.org/debian/dists/testing/main/i18n/Translation-en.bz2 done                                                             
ERROR: Giving up on URL http://deb.debian.org/debian/dists/testing/main/i18n/Translation-en_US.lzma
Downloading http://deb.debian.org/debian/dists/testing/main/Contents-i386.xz                                                             
Downloading http://deb.debian.org/debian/dists/testing/main/Contents-all.xz                                                             
ERROR: Giving up on URL http://deb.debian.org/debian/dists/testing/main/Contents-all.lzma
Downloading http://deb.debian.org/debian/dists/testing/non-free/binary-amd64/Packages.xz
http://deb.debian.org/debian/dists/testing/non-free/binary-amd64/Packages.xz done                                                             
Downloading http://deb.debian.org/debian/dists/testing/non-free/binary-i386/Packages.xz
http://deb.debian.org/debian/dists/testing/non-free/binary-i386/Packages.xz done                                                             
Downloading http://deb.debian.org/debian/dists/testing/non-free/binary-all/Packages.xz
http://deb.debian.org/debian/dists/testing/non-free/binary-all/Packages.xz done                                                             
Downloading http://deb.debian.org/debian/dists/testing/non-free/i18n/Translation-en_IN.xz
ERROR: Giving up on URL http://deb.debian.org/debian/dists/testing/non-free/i18n/Translation-en_IN.lzma
Downloading http://deb.debian.org/debian/dists/testing/non-free/i18n/Translation-en.xz
http://deb.debian.org/debian/dists/testing/non-free/i18n/Translation-en.bz2 done                                                             
Downloading http://deb.debian.org/debian/dists/testing/non-free/i18n/Translation-en_US.xz
http://deb.debian.org/debian/dists/testing/main/binary-i386/Packages.xz done                                                             
Downloading http://deb.debian.org/debian/dists/testing/non-free/Contents-amd64.xz                                                             
ERROR: Giving up on URL http://deb.debian.org/debian/dists/testing/non-free/i18n/Translation-en_US.lzma
Downloading http://deb.debian.org/debian/dists/testing/non-free/Contents-i386.xz                                                             
http://deb.debian.org/debian/dists/testing/non-free/Contents-i386.gz done                                                             
Downloading http://deb.debian.org/debian/dists/testing/non-free/Contents-all.xz                                                             
http://deb.debian.org/debian/dists/testing/non-free/Contents-amd64.gz done                                                             
http://deb.debian.org/debian/dists/testing/main/binary-amd64/Packages.xz done                                                             
Downloading http://deb.debian.org/debian/dists/testing/contrib/binary-amd64/Packages.xz
Downloading http://deb.debian.org/debian/dists/testing/contrib/binary-i386/Packages.xz
http://deb.debian.org/debian/dists/testing/contrib/binary-amd64/Packages.xz done                                                             
Downloading http://deb.debian.org/debian/dists/testing/contrib/binary-all/Packages.xz                                                             
http://deb.debian.org/debian/dists/testing/contrib/binary-i386/Packages.xz done                                                             
http://deb.debian.org/debian/dists/testing/contrib/binary-all/Packages.xz done                                                             
Downloading http://deb.debian.org/debian/dists/testing/contrib/i18n/Translation-en_IN.xz
Downloading http://deb.debian.org/debian/dists/testing/contrib/i18n/Translation-en.xz                                                             
ERROR: Giving up on URL http://deb.debian.org/debian/dists/testing/non-free/Contents-all.lzma
Downloading http://deb.debian.org/debian/dists/testing/contrib/i18n/Translation-en_US.xz
http://deb.debian.org/debian/dists/testing/contrib/i18n/Translation-en.bz2 done                                                             
Downloading http://deb.debian.org/debian/dists/testing/contrib/Contents-amd64.xz                                                             
ERROR: Giving up on URL http://deb.debian.org/debian/dists/testing/contrib/i18n/Translation-en_IN.lzma
Downloading http://deb.debian.org/debian/dists/testing/contrib/Contents-i386.xz                                                             
ERROR: Giving up on URL http://deb.debian.org/debian/dists/testing/contrib/i18n/Translation-en_US.lzma
Downloading http://deb.debian.org/debian/dists/testing/contrib/Contents-all.xz                                                             
http://deb.debian.org/debian/dists/testing/contrib/Contents-amd64.gz done                                                             
http://deb.debian.org/debian/dists/testing/contrib/Contents-i386.gz done                                                             
ERROR: Giving up on URL http://deb.debian.org/debian/dists/testing/contrib/Contents-all.lzma
http://deb.debian.org/debian/dists/testing/main/Contents-i386.gz done                                                             
http://deb.debian.org/debian/dists/testing/main/Contents-amd64.gz done                                                             
 81 /  81 items: [##############################] 100.0% of 101 MiB
Downloaded data to /tmp/set.zip
ERROR: Some items failed to download. Downloaded data may be incomplete
ERROR: Please run in verbose mode to see details about failed items


16:38 ♒ ༐ ☚ 😟 => 100


Back to the offline machine

Now we have all the data downloaded in set.zip and transferred back to the offline machine. Let’s look into it.

First, let’s unpack the archive file.

rrs@priyasi:/var/tmp/Debian-Build/Result$ cd  /tmp/
16:39 ♒ ༐ ☺ 😄
rrs@priyasi:/tmp$ mkdir set-folder
16:39 ♒ ༐ ☺ 😄
rrs@priyasi:/tmp$ cd set-folder/
16:39 ♒ ༐ ☺ 😄
rrs@priyasi:/tmp/set-folder$ unzip ../set.zip 
Archive:  ../set.zip
  inflating: deb.debian.org_debian_dists_unstable_Release.gpg  
  inflating: deb.debian.org_debian_dists_testing_Release.gpg  
  inflating: deb.debian.org_debian_dists_unstable_Release  
  inflating: deb.debian.org_debian_dists_testing_InRelease  
  inflating: deb.debian.org_debian_dists_testing_Release  
  inflating: deb.debian.org_debian_dists_unstable_InRelease  
  inflating: deb.debian.org_debian_dists_experimental_Release.gpg  
  inflating: deb.debian.org_debian_dists_experimental_InRelease  
  inflating: deb.debian.org_debian_dists_experimental_Release  
  inflating: deb.debian.org_debian_dists_testing_contrib_source_Sources.xz  
  inflating: deb.debian.org_debian_dists_testing_non-free_source_Sources.xz  
  inflating: deb.debian.org_debian_dists_testing_main_binary-all_Packages.xz  
  inflating: deb.debian.org_debian_dists_testing_main_source_Sources.xz  
  inflating: deb.debian.org_debian_dists_testing_main_i18n_Translation-en.bz2  
  inflating: deb.debian.org_debian_dists_testing_non-free_binary-amd64_Packages.xz  
  inflating: deb.debian.org_debian_dists_testing_non-free_binary-i386_Packages.xz  
  inflating: deb.debian.org_debian_dists_testing_non-free_binary-all_Packages.xz  
  inflating: deb.debian.org_debian_dists_testing_non-free_i18n_Translation-en.bz2  
  inflating: deb.debian.org_debian_dists_testing_main_binary-i386_Packages.xz  
  inflating: deb.debian.org_debian_dists_testing_non-free_Contents-i386.gz  
  inflating: deb.debian.org_debian_dists_testing_non-free_Contents-amd64.gz  
  inflating: deb.debian.org_debian_dists_testing_main_binary-amd64_Packages.xz  
  inflating: deb.debian.org_debian_dists_testing_contrib_binary-amd64_Packages.xz  
  inflating: deb.debian.org_debian_dists_testing_contrib_binary-i386_Packages.xz  
  inflating: deb.debian.org_debian_dists_testing_contrib_binary-all_Packages.xz  
  inflating: deb.debian.org_debian_dists_testing_contrib_i18n_Translation-en.bz2  
  inflating: deb.debian.org_debian_dists_testing_contrib_Contents-amd64.gz  
  inflating: deb.debian.org_debian_dists_testing_contrib_Contents-i386.gz  
  inflating: deb.debian.org_debian_dists_testing_main_Contents-i386.gz  
  inflating: deb.debian.org_debian_dists_testing_main_Contents-amd64.gz  
16:39 ♒ ༐  ☺ 😄    
rrs@priyasi:/tmp/set-folder$ ls
deb.debian.org_debian_dists_experimental_InRelease                    deb.debian.org_debian_dists_testing_main_Contents-i386.gz
deb.debian.org_debian_dists_experimental_Release                      deb.debian.org_debian_dists_testing_main_i18n_Translation-en.bz2
deb.debian.org_debian_dists_experimental_Release.gpg                  deb.debian.org_debian_dists_testing_main_source_Sources.xz
deb.debian.org_debian_dists_testing_contrib_binary-all_Packages.xz    deb.debian.org_debian_dists_testing_non-free_binary-all_Packages.xz
deb.debian.org_debian_dists_testing_contrib_binary-amd64_Packages.xz  deb.debian.org_debian_dists_testing_non-free_binary-amd64_Packages.xz
deb.debian.org_debian_dists_testing_contrib_binary-i386_Packages.xz   deb.debian.org_debian_dists_testing_non-free_binary-i386_Packages.xz
deb.debian.org_debian_dists_testing_contrib_Contents-amd64.gz         deb.debian.org_debian_dists_testing_non-free_Contents-amd64.gz
deb.debian.org_debian_dists_testing_contrib_Contents-i386.gz          deb.debian.org_debian_dists_testing_non-free_Contents-i386.gz
deb.debian.org_debian_dists_testing_contrib_i18n_Translation-en.bz2   deb.debian.org_debian_dists_testing_non-free_i18n_Translation-en.bz2
deb.debian.org_debian_dists_testing_contrib_source_Sources.xz         deb.debian.org_debian_dists_testing_non-free_source_Sources.xz
deb.debian.org_debian_dists_testing_InRelease                         deb.debian.org_debian_dists_testing_Release
deb.debian.org_debian_dists_testing_main_binary-all_Packages.xz       deb.debian.org_debian_dists_testing_Release.gpg
deb.debian.org_debian_dists_testing_main_binary-amd64_Packages.xz     deb.debian.org_debian_dists_unstable_InRelease
deb.debian.org_debian_dists_testing_main_binary-i386_Packages.xz      deb.debian.org_debian_dists_unstable_Release
deb.debian.org_debian_dists_testing_main_Contents-amd64.gz            deb.debian.org_debian_dists_unstable_Release.gpg
16:39 ♒ ༐  ☺ 😄    


Tamper apt package meta files

Now let’s tamper with one of the downloaded files to see how apt-offline deals with it.

rrs@priyasi:/tmp/set-folder$ echo 112312312321 >> deb.debian.org_debian_dists_testing_non-free_source_Sources.xz
16:40 ♒ ༐  ☺ 😄    


Install tampered apt package meta files

So in this step, we tell apt-offline to install the downloaded files, which also include the tampered file. The output you see below is standard and reports everything as having succeeded.

But note that the tampered file is not in the list of synced files. It is simply missing from the list.

rrs@priyasi:/tmp/set-folder$ sudo apt-offline install .
Proceeding with installation
gpgv: Signature made Friday 07 February 2020 01:55:24 PM IST
gpgv:                using RSA key 16E90B3FDF65EDE3AA7F323C04EE7237B7D453EC
gpgv: Good signature from "Debian Archive Automatic Signing Key (9/stretch) <ftpmaster@debian.org>"
gpgv: Signature made Friday 07 February 2020 01:55:43 PM IST
gpgv:                using RSA key 0146DC6D4A0B2914BDED34DB648ACFD622F3D138
gpgv: Good signature from "Debian Archive Automatic Signing Key (10/buster) <ftpmaster@debian.org>"
gpgv: Signature made Friday 07 February 2020 01:56:44 PM IST
gpgv:                using RSA key 16E90B3FDF65EDE3AA7F323C04EE7237B7D453EC
gpgv: Good signature from "Debian Archive Automatic Signing Key (9/stretch) <ftpmaster@debian.org>"
gpgv: Signature made Friday 07 February 2020 01:56:45 PM IST
gpgv:                using RSA key 0146DC6D4A0B2914BDED34DB648ACFD622F3D138
gpgv: Good signature from "Debian Archive Automatic Signing Key (10/buster) <ftpmaster@debian.org>"
gpgv: Signature made Friday 07 February 2020 01:56:58 PM IST
gpgv:                using RSA key 16E90B3FDF65EDE3AA7F323C04EE7237B7D453EC
gpgv: Good signature from "Debian Archive Automatic Signing Key (9/stretch) <ftpmaster@debian.org>"
gpgv: Signature made Friday 07 February 2020 01:56:58 PM IST
gpgv:                using RSA key 0146DC6D4A0B2914BDED34DB648ACFD622F3D138
gpgv: Good signature from "Debian Archive Automatic Signing Key (10/buster) <ftpmaster@debian.org>"
deb.debian.org_debian_dists_testing_contrib_Contents-amd64.gz synced.
deb.debian.org_debian_dists_testing_contrib_Contents-i386.gz synced.
deb.debian.org_debian_dists_testing_contrib_binary-all_Packages.xz synced.
deb.debian.org_debian_dists_testing_contrib_binary-amd64_Packages.xz synced.
deb.debian.org_debian_dists_testing_contrib_binary-i386_Packages.xz synced.
deb.debian.org_debian_dists_testing_contrib_i18n_Translation-en.bz2 synced.
deb.debian.org_debian_dists_testing_contrib_source_Sources.xz synced.
deb.debian.org_debian_dists_testing_main_Contents-amd64.gz synced.
deb.debian.org_debian_dists_testing_main_Contents-i386.gz synced.
deb.debian.org_debian_dists_testing_main_binary-all_Packages.xz synced.
deb.debian.org_debian_dists_testing_main_binary-amd64_Packages.xz synced.
deb.debian.org_debian_dists_testing_main_binary-i386_Packages.xz synced.
deb.debian.org_debian_dists_testing_main_i18n_Translation-en.bz2 synced.
deb.debian.org_debian_dists_testing_main_source_Sources.xz synced.
deb.debian.org_debian_dists_testing_non-free_Contents-amd64.gz synced.
deb.debian.org_debian_dists_testing_non-free_Contents-i386.gz synced.
deb.debian.org_debian_dists_testing_non-free_binary-all_Packages.xz synced.
deb.debian.org_debian_dists_testing_non-free_binary-amd64_Packages.xz synced.
deb.debian.org_debian_dists_testing_non-free_binary-i386_Packages.xz synced.
deb.debian.org_debian_dists_testing_non-free_i18n_Translation-en.bz2 synced.
16:41 ♒ ༐  ☺ 😄    
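
As an aside: before reaching for verbose mode below, you can also spot the discarded file by diffing the directory listing against the reported “synced” lines. A rough sketch (uses bash process substitution; install.log is a hypothetical file holding the captured output of the install run):

# List files present in the set-folder that were never reported as "synced"
comm -23 <(ls -1 | sort) <(awk '/ synced\.$/ {print $1}' install.log | sort)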


Install tampered apt package meta files with the --verbose switch

So, in the above example, apt-offline discarded the tampered file, and the final exit status of the command still indicated success. Now, let’s run the same command with the --verbose switch. Below is the output.

Notice the highlighted line below, where it reports that the file is tampered with and does not match the checksum.

rrs@priyasi:/tmp/set-folder$ sudo apt-offline install . --verbose
VERBOSE: Namespace(allow_unauthenticated=False, func=<function installer at 0x7f6a6c7c54d0>, install='.', install_simulate=False, install_src_path=None, skip_bug_reports=False, skip_changelog=False, strict_deb_check=False, verbose=True)
VERBOSE: No changelog available
Proceeding with installation
VERBOSE: {}
VERBOSE: Great!!! No bugs found for all the packages that were downloaded.

VERBOSE: APT Signature verification path is: ['/etc/apt/trusted.gpg.d/', '/etc/apt/trusted.gpg']
VERBOSE: Adding /etc/apt/trusted.gpg.d/debian-archive-jessie-automatic.gpg to the apt-offline keyring
VERBOSE: Adding /etc/apt/trusted.gpg.d/debian-archive-jessie-security-automatic.gpg to the apt-offline keyring
VERBOSE: Adding /etc/apt/trusted.gpg.d/debian-archive-jessie-stable.gpg to the apt-offline keyring
VERBOSE: Adding /etc/apt/trusted.gpg.d/debian-archive-stretch-automatic.gpg to the apt-offline keyring
VERBOSE: Adding /etc/apt/trusted.gpg.d/debian-archive-stretch-security-automatic.gpg to the apt-offline keyring
VERBOSE: Adding /etc/apt/trusted.gpg.d/debian-archive-stretch-stable.gpg to the apt-offline keyring
VERBOSE: Adding /etc/apt/trusted.gpg.d/debian-archive-buster-automatic.gpg to the apt-offline keyring
VERBOSE: Adding /etc/apt/trusted.gpg.d/debian-archive-buster-security-automatic.gpg to the apt-offline keyring

.....snipped.....

VERBOSE: localFile ./deb.debian.org_debian_dists_testing_non-free_Contents-amd64.gz Integrity with checksum 024957d30be2acbb9e66c9802f825115d32437420300a2b28ab60ae4ecb76fcf matches
VERBOSE: localFile ./deb.debian.org_debian_dists_testing_non-free_Contents-i386.gz Integrity with checksum 5266d2f3ea41c4e988e71b4bbe58dd1178a23ce1ed50908c73a0cb39201136e3 matches
VERBOSE: localFile ./deb.debian.org_debian_dists_testing_non-free_binary-all_Packages.xz Integrity with checksum 9f0f3aa5560452d45f82c5121ea844c68e641c8fbb56ef69d570c641b6cce662 matches
VERBOSE: localFile ./deb.debian.org_debian_dists_testing_non-free_binary-amd64_Packages.xz Integrity with checksum 811f7752a13dfcbd748478dda267fb810c52fc14769d2d5c7871c75e35350d66 matches
VERBOSE: localFile ./deb.debian.org_debian_dists_testing_non-free_binary-i386_Packages.xz Integrity with checksum 7df3512b5da7258613774921023d68c71858d89fddafd694e2dfd19cef54314b matches
VERBOSE: localFile ./deb.debian.org_debian_dists_testing_non-free_i18n_Translation-en.bz2 Integrity with checksum 1bf3cd0cff6fadf1a74280912c3229374344cd6c347d2f533b001843d84b236d matches
VERBOSE: localFile ./deb.debian.org_debian_dists_testing_non-free_source_Sources.xz integrity doesn't match to checksum a94589ab3c204bb4d710d72ea21abac8007b14e5c5dacbe43be07c51ba5f0a0a
VERBOSE: Synchronized file to /var/lib/apt/lists/deb.debian.org_debian_dists_testing_contrib_Contents-amd64
VERBOSE: /var/lib/apt/lists/deb.debian.org_debian_dists_testing_contrib_Contents-amd64 file synced to APT.
deb.debian.org_debian_dists_testing_contrib_Contents-amd64.gz synced.
VERBOSE: Synchronized file to /var/lib/apt/lists/deb.debian.org_debian_dists_testing_contrib_Contents-i386
VERBOSE: /var/lib/apt/lists/deb.debian.org_debian_dists_testing_contrib_Contents-i386 file synced to APT.
deb.debian.org_debian_dists_testing_contrib_Contents-i386.gz synced.
VERBOSE: Synchronized file to /var/lib/apt/lists/deb.debian.org_debian_dists_testing_contrib_binary-all_Packages
VERBOSE: /var/lib/apt/lists/deb.debian.org_debian_dists_testing_contrib_binary-all_Packages file synced to APT.
deb.debian.org_debian_dists_testing_contrib_binary-all_Packages.xz synced.
VERBOSE: Synchronized file to /var/lib/apt/lists/deb.debian.org_debian_dists_testing_contrib_binary-amd64_Packages
VERBOSE: /var/lib/apt/lists/deb.debian.org_debian_dists_testing_contrib_binary-amd64_Packages file synced to APT.
deb.debian.org_debian_dists_testing_contrib_binary-amd64_Packages.xz synced.
VERBOSE: Synchronized file to /var/lib/apt/lists/deb.debian.org_debian_dists_testing_contrib_binary-i386_Packages
VERBOSE: /var/lib/apt/lists/deb.debian.org_debian_dists_testing_contrib_binary-i386_Packages file synced to APT.
deb.debian.org_debian_dists_testing_contrib_binary-i386_Packages.xz synced.

.....snipped.....

16:42 ♒ ༐  ☺ 😄

This is pretty much the validation required and performed by apt-offline for the apt meta (Packages) files.
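
If you ever want to reproduce this validation by hand, the signed Release file carries the expected SHA256 sums for every index file. A minimal sketch using file names from the listing above (the awk filter is just one way to pull the matching entry out of the SHA256 section):

# Compute the checksum of a downloaded index file...
sha256sum deb.debian.org_debian_dists_testing_main_source_Sources.xz
# ...and compare it against the entry recorded in the signed Release file
awk '/^SHA256:/ {s=1} s && /main\/source\/Sources.xz/' deb.debian.org_debian_dists_testing_Release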

Please do file bug reports if you think the overall exit status of apt-offline under such scenarios should be different from what it currently is.

For the tampered meta Packages files:

  • Should the visual representation be different?
  • Should an error be printed?
  • What about the exit status?

Similarly, for the ‘get’ operation:

  • Should we do something different for non-existent localization files on the repository server?
  • Is there a different way to go through the supported list of compression types for meta files?


Now the deb file examples

apt-offline allows a user to easily install a new package and all its dependencies on the offline machine. The workflow below demonstrates this, and also goes through tampering with the .deb files to see how apt-offline/apt deals with it.

In the example below, a user wants to install the gnome-todo package, which has a couple of dependencies, on the offline machine.

rrs@priyasi:/tmp/set-folder$ sudo apt install gnome-todo
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  gnome-todo-common libgnome-todo libpeas-1.0-0 libpeas-common
The following NEW packages will be installed:
  gnome-todo gnome-todo-common libgnome-todo libpeas-1.0-0 libpeas-common
0 upgraded, 5 newly installed, 0 to remove and 1 not upgraded.
Need to get 784 kB of archives.
After this operation, 2,337 kB of additional disk space will be used.
Do you want to continue? [Y/n] n
Abort.
16:47 ♒ ༐   ☚ 😟=> 1  


apt-offline ‘set’ operation

The command below generates a (signature) file, which includes all details about the requested package and its dependencies.

rrs@priyasi:/tmp/set-folder$ sudo apt-offline set /tmp/gnome-todo.uris --install-packages gnome-todo
Gathering installation details for package ['gnome-todo']
16:48 ♒ ༐  ☺ 😄    


apt-offline ‘get’ operation

Below is the usual step to be performed on the online machine with the generated gnome-todo.uris signature file.

rrs@priyasi:/tmp/set-folder$ apt-offline get /tmp/gnome-todo.uris --download-dir /tmp/gnome-todo --bug-reports --threads 3

Fetching APT Data

WARNING: If you are on a slow connection, it is good to
WARNING: limit the number of threads to a low number like 2.
WARNING: Else higher number of threads executed could cause
WARNING: network congestion and timeouts.

Downloading libpeas-common - 187 KiB                                                             
Downloading libpeas-1.0-0 - 196 KiB                                                             
Downloading gnome-todo-common - 228 KiB                                                             
libpeas-common done                                                             
Fetching bug report for libpeas-common                                                            
libpeas-1.0-0 done                                                             
Fetching bug report for libpeas-1.0-0                                                            
gnome-todo-common done                                                             
Fetching bug report for gnome-todo-common                                                            
Fetched bug report for libpeas-common                                                            
Downloading libgnome-todo - 6 KiB                                                             
libgnome-todo done                                                             
Fetching bug report for libgnome-todo                                                            
Fetched bug report for gnome-todo-common                                                            
Downloading gnome-todo - 146 KiB                                                             
gnome-todo done                                                             
Fetching bug report for gnome-todo                                                            
Fetched bug report for libpeas-1.0-0                                                            
Fetched bug report for libgnome-todo                                                            
Fetched bug report for gnome-todo                                                            
  5 /   5 items: [##############################] 100.0% of 765 KiB
Downloaded data to /tmp/gnome-todo
16:49 ♒ ༐  ☺ 😄    


The --strict-deb-check option

This new option was introduced for the ‘install’ command in the 1.8.2 release. The default behavior of apt-offline is to not do strict checks on the .deb files.

Note: apt-offline itself will not do any checksum validation for the .deb files; that validation is delegated completely to apt.

rrs@priyasi:/tmp/gnome-todo$ sudo apt-offline install -h
usage: apt-offline install [-h] [--verbose] [--simulate]
                           [--install-src-path INSTALL_SRC_PATH]
                           [--skip-bug-reports] [--skip-changelog]
                           [--allow-unauthenticated] [--strict-deb-check]
                           apt-offline-download.zip | apt-offline-download/

positional arguments:
  apt-offline-download.zip | apt-offline-download/
                        Install apt-offline data, a bundle file or a directory

optional arguments:
  -h, --help            show this help message and exit
  --verbose             Enable verbose messages
  --simulate            Just simulate. Very helpful when debugging
  --install-src-path INSTALL_SRC_PATH
                        Install src packages to specified path.
  --skip-bug-reports    Skip the bug report check
  --skip-changelog      Skip display of changelog
  --allow-unauthenticated
                        Ignore apt gpg signatures mismatch
  --strict-deb-check    Perform strict checksum validaton for downloaded .deb
                        files
16:50 ♒ ༐  ☺ 😄    

and from the manpage:

       --strict-deb-check
                 With this option enabled, apt-offline delegates .deb package checksum validation to apt. While the .debs are already available, they are stored in the temporary apt cache, where apt validates their checksums before considering them for further processing. Note: this does have the caveat that apt may need network availability even though it doesn't download anything over the network. It does invoke the download routines, realizes that the payload is already available, and then proceeds with checksum validation.

                 The default behavior is to not do strict checksum validation for .deb files. Instead, apt-offline copies the .deb files to apt's download location. apt still does size validation of the available .deb files and discards them in case there is a mismatch.
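
In practice, “apt’s download location” is the archive cache, so after a default (non-strict) install run you can confirm where the synced .debs ended up. A quick check (the path is apt’s standard default):

# The synced .deb files are copied into apt's package cache
ls -l /var/cache/apt/archives/*.deb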


Non-tampered file with the default option, i.e. no strict deb checking

Before we proceed with the example of checksum verification for .deb files, let’s do a pristine run with the downloaded files, without any tampering.

rrs@priyasi:/tmp/gnome-todo$ sudo apt-offline install .
Proceeding with installation


Following are the list of bugs present.
822525  gnome-todo      : gnome-todo: Memory leak while loading local and remote lists
853114  gnome-todo      : no longer loads caldav lists
883961  libgnome-todo   : libgnome-todo: Not actually a library
829470  libpeas-1.0-0   : libpeas: Python Plugin Broken
(Y) Yes. Proceed with installation
(N) No, Abort.
(R) Redisplay the list of bugs.
(Bug Number) Display the bug report from the Offline Bug Reports.
(?) Display this help message.
What would you like to do next:  (y, N, ?)y
gnome-todo_3.28.1-5_amd64.deb file synced.
libgnome-todo_3.28.1-5_amd64.deb file synced.
gnome-todo-common_3.28.1-5_all.deb file synced.
libpeas-1.0-0_1.22.0-5_amd64.deb file synced.
libpeas-common_1.22.0-5_all.deb file synced.
16:51 ♒ ༐  ☺ 😄    
rrs@priyasi:/tmp/gnome-todo$ sudo apt install gnome-todo
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  gnome-todo-common libgnome-todo libpeas-1.0-0 libpeas-common
The following NEW packages will be installed:
  gnome-todo gnome-todo-common libgnome-todo libpeas-1.0-0 libpeas-common
0 upgraded, 5 newly installed, 0 to remove and 1 not upgraded.
Need to get 0 B/784 kB of archives.
After this operation, 2,337 kB of additional disk space will be used.
Do you want to continue? [Y/n] n
Abort.
16:51 ♒ ༐   ☚ 😟=> 1  

In the above example, everything is clean and all of apt’s requirements are satisfied.


Non-tampered file with strict deb checking

Here’s one more example, where we invoke the non-default --strict-deb-check option.

Everything remains the same, but apt gives a prompt implying that it needs to download the payload from the web. In reality, if you just proceed with yes, nothing gets downloaded.

Note: It is not really possible to show that in a still capture, and I’m too lazy to make an animation of it.

rrs@priyasi:/tmp/gnome-todo$ sudo apt-offline install . --strict-deb-check 
Proceeding with installation


Following are the list of bugs present.
822525  gnome-todo      : gnome-todo: Memory leak while loading local and remote lists
853114  gnome-todo      : no longer loads caldav lists
883961  libgnome-todo   : libgnome-todo: Not actually a library
829470  libpeas-1.0-0   : libpeas: Python Plugin Broken
(Y) Yes. Proceed with installation
(N) No, Abort.
(R) Redisplay the list of bugs.
(Bug Number) Display the bug report from the Offline Bug Reports.
(?) Display this help message.
What would you like to do next:  (y, N, ?)y
gnome-todo_3.28.1-5_amd64.deb file synced.
libgnome-todo_3.28.1-5_amd64.deb file synced.
gnome-todo-common_3.28.1-5_all.deb file synced.
libpeas-1.0-0_1.22.0-5_amd64.deb file synced.
libpeas-common_1.22.0-5_all.deb file synced.
16:52 ♒ ༐  ☺ 😄    
rrs@priyasi:/tmp/gnome-todo$ sudo apt install gnome-todo
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  gnome-todo-common libgnome-todo libpeas-1.0-0 libpeas-common
The following NEW packages will be installed:
  gnome-todo gnome-todo-common libgnome-todo libpeas-1.0-0 libpeas-common
0 upgraded, 5 newly installed, 0 to remove and 1 not upgraded.
Need to get 784 kB of archives.
After this operation, 2,337 kB of additional disk space will be used.
Do you want to continue? [Y/n] 
Get:1 http://deb.debian.org/debian testing/main amd64 libpeas-common all 1.22.0-5 [192 kB]
Get:2 http://deb.debian.org/debian testing/main amd64 libpeas-1.0-0 amd64 1.22.0-5 [201 kB]
Get:3 http://deb.debian.org/debian testing/main amd64 gnome-todo-common all 3.28.1-5 [234 kB]
Get:4 http://deb.debian.org/debian testing/main amd64 libgnome-todo amd64 3.28.1-5 [6,260 B]
Get:5 http://deb.debian.org/debian testing/main amd64 gnome-todo amd64 3.28.1-5 [150 kB]
Retrieving bug reports... Done
Parsing Found/Fixed information... Done
.....snipped.....
16:53 ♒ ༐  ☺ 😄    

To sum it up, this one is an odd case: nothing is downloaded for the debs, but the network needs to be available for this routine to run. If, say, the network is unavailable, apt complains. I haven’t checked in detail, but apt does invoke some network code.

But no payload is downloaded. apt just validates and realizes that all the to-be-downloaded data is intact and available.
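
If you want to observe this yourself, apt’s acquire debugging is one way to watch the HTTP traffic; this is my suggestion rather than part of the original workflow:

# Print apt's HTTP acquire activity; in the strict-deb-check case the
# requests should complete without transferring any package payload
sudo apt -o Debug::Acquire::http=true install gnome-todo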


Tamper the .deb file

Now, let’s really tamper one of the .deb files.

rrs@priyasi:/tmp/gnome-todo$ echo fasdfadsfasdfasdfasd >> gnome-todo_3.28.1-5_amd64.deb
16:54 ♒ ༐  ☺ 😄    

rrs@priyasi:/tmp/gnome-todo$ sudo apt clean
16:54 ♒ ༐  ☺ 😄    


Install tampered file with --strict-deb-check

So we tampered with one of the .deb files, gnome-todo_3.28.1-5_amd64.deb, and now ask apt-offline to run its ‘install’ operation along with the new --strict-deb-check option.

rrs@priyasi:/tmp/gnome-todo$ sudo apt-offline install . --strict-deb-check 
Proceeding with installation


Following are the list of bugs present.
822525  gnome-todo      : gnome-todo: Memory leak while loading local and remote lists
853114  gnome-todo      : no longer loads caldav lists
883961  libgnome-todo   : libgnome-todo: Not actually a library
829470  libpeas-1.0-0   : libpeas: Python Plugin Broken
(Y) Yes. Proceed with installation
(N) No, Abort.
(R) Redisplay the list of bugs.
(Bug Number) Display the bug report from the Offline Bug Reports.
(?) Display this help message.
What would you like to do next:  (y, N, ?)y
gnome-todo_3.28.1-5_amd64.deb file synced.
libgnome-todo_3.28.1-5_amd64.deb file synced.
gnome-todo-common_3.28.1-5_all.deb file synced.
libpeas-1.0-0_1.22.0-5_amd64.deb file synced.
libpeas-common_1.22.0-5_all.deb file synced.
16:54 ♒ ༐  ☺ 😄    

rrs@priyasi:/tmp/gnome-todo$ sudo apt install gnome-todo
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  gnome-todo-common libgnome-todo libpeas-1.0-0 libpeas-common
The following NEW packages will be installed:
  gnome-todo gnome-todo-common libgnome-todo libpeas-1.0-0 libpeas-common
0 upgraded, 5 newly installed, 0 to remove and 1 not upgraded.
Need to get 784 kB of archives.
After this operation, 2,337 kB of additional disk space will be used.
Do you want to continue? [Y/n] 
Get:1 http://deb.debian.org/debian testing/main amd64 libpeas-common all 1.22.0-5 [192 kB]
Get:2 http://deb.debian.org/debian testing/main amd64 libpeas-1.0-0 amd64 1.22.0-5 [201 kB]
Get:3 http://deb.debian.org/debian testing/main amd64 gnome-todo-common all 3.28.1-5 [234 kB]
Get:4 http://deb.debian.org/debian testing/main amd64 libgnome-todo amd64 3.28.1-5 [6,260 B]
Get:5 http://deb.debian.org/debian testing/main amd64 gnome-todo amd64 3.28.1-5 [150 kB]
Fetched 150 kB in 1s (141 kB/s)     
Retrieving bug reports... Done
Parsing Found/Fixed information... Done

16:55 ♒ ༐   ☚ 😟=> 100  

Pay attention to the downloaded data: only 150 kB, for the gnome-todo package, which was the one tampered with. Even though apt stated that it needed to download 784 kB of data, it actually downloaded only 150 kB. All the data had already been downloaded by apt-offline, but we had tampered with one of the files, which resulted in it being re-downloaded.


Tampered file with no --strict-deb-check

Now, let’s do one more run with the default behavior of apt-offline, i.e. without the --strict-deb-check option. This results in apt (internally) detecting the tampering and prompting the user that the (tampered) file needs to be downloaded.

rrs@priyasi:/tmp/gnome-todo$ sudo apt-offline install .
Proceeding with installation


Following are the list of bugs present.
822525  gnome-todo      : gnome-todo: Memory leak while loading local and remote lists
853114  gnome-todo      : no longer loads caldav lists
883961  libgnome-todo   : libgnome-todo: Not actually a library
829470  libpeas-1.0-0   : libpeas: Python Plugin Broken
(Y) Yes. Proceed with installation
(N) No, Abort.
(R) Redisplay the list of bugs.
(Bug Number) Display the bug report from the Offline Bug Reports.
(?) Display this help message.
What would you like to do next:  (y, N, ?)y
gnome-todo_3.28.1-5_amd64.deb file synced.
libgnome-todo_3.28.1-5_amd64.deb file synced.
gnome-todo-common_3.28.1-5_all.deb file synced.
libpeas-1.0-0_1.22.0-5_amd64.deb file synced.
libpeas-common_1.22.0-5_all.deb file synced.
16:56 ♒ ༐  ☺ 😄    
rrs@priyasi:/tmp/gnome-todo$ sudo apt^C
16:56 ♒ ༐   ☚ 😟=> 130  
rrs@priyasi:/tmp/gnome-todo$ sudo apt install gnome-todo
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  gnome-todo-common libgnome-todo libpeas-1.0-0 libpeas-common
The following NEW packages will be installed:
  gnome-todo gnome-todo-common libgnome-todo libpeas-1.0-0 libpeas-common
0 upgraded, 5 newly installed, 0 to remove and 1 not upgraded.
Need to get 150 kB/784 kB of archives.
After this operation, 2,337 kB of additional disk space will be used.
Do you want to continue? [Y/n] 
Get:1 http://deb.debian.org/debian testing/main amd64 gnome-todo amd64 3.28.1-5 [150 kB]
Fetched 150 kB in 0s (448 kB/s)     
Retrieving bug reports... Done
Parsing Found/Fixed information... Done
.....snipped......
16:57 ♒ ༐  ☺ 😄    

Notice the highlighted line, which gives a less confusing, realistic summary of what needs to be done. In this case, apt is telling the user that 150 kB of data needs to be downloaded, which indeed is the case.


Resources

  • Tarball and Zip archive for apt-offline are available here
  • Packages should be available in Debian.
  • Development for apt-offline is currently hosted here

09 February, 2020 02:22PM by Ritesh Raj Sarraf (rrs@researchut.com)

February 08, 2020

Andrew Cater

CD release time post 2 20200208 2152 - lots of install CD/DVD/BD image testing going on

It's been a busy day. Lots of installs, a couple of bugs found, mostly going very well. The main 10.3 Debian update has been pushed to the mirror network and I've updated two of my machines so far. All good.

Hofstadter's Law states that "It always takes longer than you think, even taking account of Hofstadter's Law" (the Douglas Hofstadter who wrote Gödel, Escher, Bach), and these long days always disappear into the memory until the next time.

There's also an update planned for the images for oldstable - so we'll be busy for a fair while yet.

UPDATE: The release looks good: some CDs for the smaller arches remain to be built but it's done for now.

Oldstable testing will happen tomorrow.

08 February, 2020 11:23PM by Andrew Cater (noreply@blogger.com)

hackergotchi for Iain R. Learmonth

Iain R. Learmonth

From socket(2) to .onion with pf(4)

I’ve been rebuilding my IRC bouncer setup and as part of this process I’ve decided to connect to IRC via onion services where possible. This setup isn’t intended to provide anonymity, as once I’m connected I’m going to identify to NickServ anyway. I guess it provides a little protection in that my IP address shouldn’t be visible in the gap between connecting and a cloak activating, but there are so many other ways that my identity could leak.

You might wonder why I even bothered if not for anonymity. There are two reasons:

  1. to learn more about tor(1) and pf(4), and
  2. to figure out how to get non-proxy aware software to talk to onion services.

I would often find examples of socat, torsocks, etc., but none of them seemed to fit with my goal of wanting to use an onion service as if it were just another host on the Internet. By this I mean, with a socket(AF_INET, SOCK_STREAM) that didn’t also affect my ability to connect to other Internet hosts.

Onion services don’t have IP addresses. They have names that look like DNS names but that are not actually in DNS. So the first problem here is going to be that we’re not going to be able to give an onion address to the kernel; it wants an IP address. In my setup I chose 10.10.10.0/24 as a subnet that will have IP addresses that, when connected to, will actually connect to onion services.

In the torrc file you can use MapAddress to encode these mappings, for example:

MapAddress 10.10.10.10 ajnvpgl6prmkb7yktvue6im5wiedlz2w32uhcwaamdiecdrfpwwgnlqd.onion # Freenode
MapAddress 10.10.10.11 dtlbunzs5b7s5sl775quwezleyeplxzicdoh3cnhm7feolxmkfd42nqd.onion # Hackint
MapAddress 10.10.10.12 awwqg2ishrohngue.onion # 2600net - broken(?)
MapAddress 10.10.10.13 darksci3bfoka7tw.onion # darkscience
MapAddress 10.10.10.14 akeyxc6hie26nlfylwiuyuf3a4tdwt4os7wiz3fsafijpvbgrkrzx2qd.onion # Indymedia

Now when tor(1) is asked to connect to 10.10.10.10 it will map this to the address of Freenode’s onion service, and connect to that instead. The next part of the problem is allowing tor(1) to receive these requests from a non-proxy aware application, in my case ZNC. This setup will also need a network interface to act as the interface to tor(1). A loopback interface will suffice and it’s not necessary to add an IP address to it:

# ifconfig lo1 up

pf is OpenBSD’s firewall, which can also perform some other related functions. One such function is called divert-to. (Unfortunately, there is also divert-packet, which is completely unrelated.) tor(1) supports receiving packets that have been processed by a divert-to rule, and this is often used for routing all traffic from a network through the Tor network. This arrangement is known as a “transparent proxy” because the application is unaware that anything is going on.

In my setup, I’m only routing traffic for specific onion services via the Tor network, but the same concepts are used.

In the torrc:

TransPort 127.0.0.1:1338
TransProxyType pf-divert

In pf.conf(5):

pass in quick on lo1 inet proto tcp all divert-to 127.0.0.1 port 1338
pass out inet proto tcp to 10.10.10.0/24 route-to lo1

and that’s it! I’m now able to connect to 10.10.10.10 from ZNC and pf will divert the traffic to tor.
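
Any other non-proxy-aware TCP client gets the same treatment. For a quick smoke test, something like nc(1) works (the port here is an assumption; use whatever the service listens on):

# A plain TCP connection to the mapped address; pf diverts it to tor,
# which connects to the onion service behind it
nc -v 10.10.10.10 6697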

On names and TLS certificates: I’m using TLS to connect to the onion services, but I’m not validating the certificates. I’ve already verified the server identities, because they hold the key for the onion service; the reason I’m using TLS is that I’m presenting a client certificate to the servers (CertFP) to log in to NickServ. The TLS is there for the server’s benefit, while the onion service authentication is for my benefit. You could add entries to your /etc/hosts file with mappings from irc.freenode.org to 10.10.10.10, but it seemed like a bit of a fragile arrangement. If pf or tor stop functioning correctly, then no connection is made; but if the /etc/hosts file were to be rewritten, you’d then connect over the Internet with TLS verification effectively disabled, because you were relying on the onion service for authentication, which you’d no longer be using.
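
For illustration only, the fragile /etc/hosts arrangement argued against above would look something like this:

# /etc/hosts entry mapping an IRC hostname to the Tor-mapped address;
# shown only to illustrate the approach this post advises against
10.10.10.10   irc.freenode.org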

On types of transparent proxy: There are a few different types of transparent proxy supported by tor. pf-divert seemed like the most appropriate one to use in my case. It’s possible that the natd(8) “protocol” referred to in the NATDPort torrc option is actually talking about divert(4) sockets, which are supported in OpenBSD, and that’s another option, but it’s not clear which would be the preferred way to do it. If I had more time I’d dig into which methods are useful and which are redundant, as removing code is often a good thing to do.

08 February, 2020 07:35PM

Sylvain Beucler

Escoria - point-and-click system for the Godot engine

Escoria, the point-and-click system for the Godot game engine, is now working again with the latest Godot (3.2).

Godot is a general-purpose game engine. It comes with an extensive graphic editor with skeleton and animation support, and can create all sorts of games and mini-games, making it an interesting choice for point-and-clicks.

The Escoria point-and-click template notably provides a dialog system and the Esc language to write the story and interactions. It was developed for the crowdfunded Dog Mendonça and Pizzaboy game and later released as free software. A community is developing the next version, but the released version had become incompatible with the current Godot engine. So I upgraded the game template as well as the Escoria in Daïza tutorial game to Godot 3.2. Enjoy!

HTML5 support is still lacking, so I might get a compulsive need to fix it in the future ;)

08 February, 2020 04:32PM

Andrew Cater

As has become traditional - another blog post round about CD release time

Just waiting to start: Isy, Sledge, RattusRattus and Schweer are all standing by on IRC.

Debian 10.3 update is out - the CDs will begin being built and tested this weekend. Would much rather be in Cambridge, as ever :)

Also dotting between various machines doing general updates and stuff: this is being written from my newer laptop and is the first update from this one.
As ever, it should be fun.

08 February, 2020 12:55PM by Andrew Cater (noreply@blogger.com)

Rebuilding a mirror - software mirroring of Linux distributions for fun

... and also because sometimes it's just easier to have a local cache

I've a mirror machine sitting next door: it's a private mirror of CentOS, Debian, Debian CDs, EPEL [extra packages for Enterprise Linux from Fedora], Ubuntu releases, and Ubuntu CDs. I've run this intermittently for a few years for myself: it's grown recently because I added CentOS and EPEL.

A 6TB LVM volume is now at about 62% full, so there's plenty of space.

Layout

My machine has these mirrors placed under /srv. The mirrors directory is root-owned; the individual directories are owned by a non-root user, which also runs the scripts. So you end up with /srv/mirrors/debian or /srv/mirrors/ubuntureleases, for example, where the final directory is owned by a mirror user, and the mirror user owns the scripts and the crontab that runs them.
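
A minimal sketch of bootstrapping that layout (the "mirror" user name is an example, not something ftpsync prescribes):

# Root-owned mirrors directory; per-distribution directories owned by
# the unprivileged user that runs the sync scripts and crontab
sudo mkdir -p /srv/mirrors/debian /srv/mirrors/ubuntureleases
sudo useradd -r mirror
sudo chown mirror: /srv/mirrors/debian /srv/mirrors/ubuntureleases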

Package used for mirroring

The ftpsync Debian package is pretty much all you need: each individual download source has its own ftpsync configuration file, which sits under /etc/ftpsync.

The configuration files are essentially identical for Debian and Ubuntu - the only difference is in the source from which you mirror.

Configuring the ftpsync.conf script

The changes needed from the example configuration file in /usr/share/doc/ftpsync are minimal and straightforward: edit the TO directory and the source from which you mirror.

My ftpsync-debian.conf has the following lines changed from the supplied config file, for example.

TO="/srv/mirrors/debian/"

RSYNC_HOST="debian.hands.com"
RSYNC_PATH="debian"

Calling and usage:


Calling is simple: ftpsync sync:archive:debian, which references the ftpsync-debian.conf above. Each ftpsync call must have a corresponding configuration file, or the ftpsync script complains.

Usage is from the mirror user's crontab - excerpt below:

13 03 * * * /usr/bin/ftpsync sync:archive:debian
37 03 * * * /usr/bin/ftpsync sync:archive:centos


Hope this helps someone: will add a couple of bits on configuring Apache to serve this in another post.

[EDIT - changed rsync host listed above at request of jcristau]





08 February, 2020 12:48PM by Andrew Cater (noreply@blogger.com)

February 07, 2020

hackergotchi for Joey Hess

Joey Hess

arduino-copilot combinators

My framework for programming Arduinos in Haskell has two major improvements this week. It's feeling like I'm laying the keystone on this project. It's all about the combinators now.

Sketch combinators

Consider this arduino-copilot program, that does something unless a pause button is pushed:

paused <- input pin3
pin4 =: foo @: not paused
v <- input a1
pin5 =: bar v @: sometimes && not paused

The pause button has to be checked everywhere, and there's a risk of forgetting to check it, resulting in unexpected behavior. It would be nice to be able to factor that out somehow. Also, notice that it inputs from a1 all the time, but won't use that input when pause is pushed. It would be nice to be able to avoid that unnecessary work.

The new whenB combinator solves all of that:

paused <- input pin3
whenB (not paused) $ do
    pin4 =: foo
    v <- input a1
    pin5 =: bar v @: sometimes

All whenB does is take a Behavior Bool and use it to control whether a Sketch runs. It was not easy to implement, given the constraints of the Copilot DSL, but it's working. And once I had whenB, I was able to leverage RebindableSyntax to allow if then else expressions to choose between Sketches, as well as between Streams.

Now it's easy to start by writing a Sketch that describes a simple behavior, like turnRight or goForward, and glue those together in a straightforward way to make a more complex Sketch, like a line-following robot:

ll <- leftLineSensed
rl <- rightLineSensed
if ll && rl
    then stop
    else if ll
        then turnLeft
        else if rl
            then turnRight
            else goForward

(Full line following robot example here)

TypedBehavior combinators

I've complained before that the Copilot DSL limits Stream to basic C data types, and so programming with it felt like I was not able to leverage the type checker as much as I'd hoped to when writing Haskell, to e.g. keep different units of measurement separated.

Well, I found a way around that problem. All it needed was phantom types, and some combinators to lift Copilot DSL expressions.

For example, a Sketch that controls a hot water heater certainly wants to indicate clearly that temperatures are in C not F, and PSI is another important unit. So define some empty types for those units:

data PSI
data Celsius

Using those as the phantom type parameters for TypedBehavior, some important values can be defined:

maxSafePSI :: TypedBehavior PSI Float
maxSafePSI = TypedBehavior (constant 45)

maxWaterTemp :: TypedBehavior Celsius Float
maxWaterTemp = TypedBehavior (constant 35)

And functions like this to convert raw ADC readings into our units:

adcToCelsius :: Behavior Float -> TypedBehavior Celsius Float
adcToCelsius v = TypedBehavior $ v * (constant 200 / constant 1024)

And then we can make functions that take these TypedBehaviors and run Copilot DSL expressions on the Stream contained within them, producing Behaviors suitable for being connected up to pins:

isSafePSI :: TypedBehavior PSI Float -> Behavior Bool
isSafePSI p = liftB2 (<) p maxSafePSI

isSafeTemp :: TypedBehavior Celsius Float -> Behavior Bool
isSafeTemp t = liftB2 (<) t maxSafePSI

(Full water heater example here)

BTW, did you notice the mistake on the last line of code above? No worries; the type checker will, so it will blow up at compile time, and not at runtime.

    • Couldn't match type ‘PSI’ with ‘Celsius’
      Expected type: TypedBehavior Celsius Float
        Actual type: TypedBehavior PSI Float

The liftB2 combinator was all I needed to add to support that. There's also a liftB, and there could be liftB3 etc. (Could it be generalized to a single lift function that supports multiple arities? I don't know yet.) It would be good to have more types than just phantom types; I particularly miss Maybe; but this does go a long way.

So you can have a good amount of type safety while using Copilot to program your Arduino, and you can mix both FRP style and imperative style as you like. Enjoy!


This work was sponsored by Trenton Cronholm and Jake Vosloo on Patreon.

07 February, 2020 11:23PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

3D printing

For Christmas, my great-great-grand-boss bought the Red Hat Newcastle office a 3D Printer1. I have next to no experience of designing or printing 3D objects, but I was keen to learn.

I thought a good way to both learn and mark my progress would be to create an initial, simple object; print it; refine the design and repeat the process, leaving me with a collection of objects that gradually increase in sophistication. I decided (not terribly originally, as it turns out) to model a little toy castle.

For the very first iteration, I wanted something very simple and abstract, in order to test the tooling. I installed OpenSCAD, which was already packaged for Debian. I was pleasantly surprised to learn that one defines objects in OpenSCAD via code. The language is a functional one that reminded me of building Doom maps in WadC. Next, the object needs to be post-processed in a "Slicer", which converts a model specification into something that can structurally stand up (by adding lattices, temporary supports, etc.), sorts out scaling to real-world dimensions, and emits instructions that a 3D printer can follow (I think: precise head-movement instructions, or similar). A colleague2 helped me with this part (using Cura, I think).

The printed castle: four tall oblongs, joined by four shorter ones.

And here it is! Not much to look at. Let's see where I can take it.


  1. A Creality Ender 3
  2. Said colleague has printed some far more interesting—and useful— things, including smart card holders and little red fedora milk-toppers so we know which milk in the communal fridge belongs to us.

07 February, 2020 09:29PM

Ruby Team

Ruby Team Sprint 2020 in Paris - Day Three

Day three of our sprint was dominated by hacking. In the morning an archive-wide rebuild against Ruby 2.7 had finished, so the list of packages in need of a fix for the upcoming transition got longer. Still, we found/made some time for a key exchange in the afternoon, in which even some local university attendees participated. Further, Georg gave a short talk on how keysigning works using caff, and on the current situation of keyservers, specifically keys.openpgp.org and hockeypuck. (The traditional SKS network plans to migrate to this software within this year.)

Regarding Salsa, Antonio was able to fix gem2deb so our extension packages finally build reproducibly (yeah!). The decision to disable the piuparts job on Salsa was discussed again. The tool provides major functionality when it comes to preventing “toxic” uploads, but these issues occur only on quite rare occasions. We think the decision to enable the piuparts job only for critical packages, or on a case-by-case basis, is a sensible approach. But we would of course prefer to not have to make this decision just to go easy on Salsa’s resources.

Regarding the complex packaging situation of gitlab and the high likelihood of breaking it by uploading new major releases, we decided to upload new major versions to Experimental only, and to enable a subset of gitlab’s tests to discover breakages more easily.

Some leaf packages have been found during our sprint days. This led to the question of how to identify candidates for archive removal. It seems there is no tool to check the whole archive for packages without any reverse dependencies (Depends, Suggests, Recommends, and Build-Depends). The reverse-depends tool can do this for one package and would need to be run against all team packages; a rough loop is sketched below. Also, we would like to identify packages which have low popcon values and few reverse dependencies, and which could be replaced by more recent packages actively maintained by an upstream. We decided to pick up this question again in our last day’s discussion.
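
A rough sketch of that loop (team-packages.txt is a hypothetical input file with one package name per line; the reverse-depends tool ships in the ubuntu-dev-tools package):

# Check each team package for runtime and build-time reverse dependencies
while read -r pkg; do
    echo "== $pkg =="
    reverse-depends "$pkg"        # runtime reverse dependencies
    reverse-depends -b "$pkg"     # build-time reverse dependencies
done < team-packages.txt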

07 February, 2020 01:50PM by Daniel Leidert (dleidert@debian.org)

Ruby Team Sprint 2020 in Paris - Day Four

On day four the transition to Ruby 2.7 and Rails 6 went on. Minor transitions took place too, for example the upload of ruby-faraday 1.0, or the upload of bundler 2.1 featuring the (first) contributions by bundler’s upstream maintainer Deivid (yeah!). Further, Red Hat’s (and Debian’s) Marc Dequènes (Duck) joined us.

We are proud to report that updating and/or uploading the Kali packages is almost done. Most are in NEW or have already been accepted.

The Release team was contacted to start the Ruby 2.7 transition and we already have a transition page. However, the Python 3.8 one is ongoing (almost finished) and the Release team does not want overlaps. So hopefully we can upload ruby-defaults to Debian Unstable soon.

In the evening we got together for a well-earned collective drink at Brewberry Bar and dinner, joined by local Debian colleague Nicolas Dandrimont (olasd).

Group photo of the Ruby Team at Brewberry Bar (Paris 2020)

The evening ended at Paris’ famous (but heavily damaged) Notre-Dame cathedral.

07 February, 2020 01:50PM by Daniel Leidert (dleidert@debian.org)

Ruby Team Sprint 2020 in Paris - Day One

The Ruby Team Sprint 2020 in Paris started. A dozen of us joined, some of them arriving just after attending MiniDebCamp right before FOSDEM and FOSDEM itself.

Group photo of the Ruby Team Sprint 2020 in Paris

Day One consisted of setting up at the venue, Campus 4 of the Sorbonne University, as well as collecting and discussing our tasks for the next days, and starting the work. The main goals so far for the sprint are:

  • Update packages in preparation for the Ruby 2.7 transition
  • Update packages for the Rails 6 transition
  • Fix several testing migration issues
  • Improve the team’s tools and workflow
    • Optimize usage of Salsa CI and reduce workload for Salsa service
    • Prevent breakages by minor transitions
  • Fix team specific issues
    • Remove the interpreter dependency from libraries
    • Handle rubygems integration warnings (working together with Deivid Rodríguez, upstream rubygems maintainer, who kindly agreed to join the sprint).
    • Optimize and improve installed gemspecs
  • Reduce the differences for Ruby packages on Kali Linux

There are more items on the list which will be discussed during the following days.

At the end of Day One there is already a two-digit number of both packages uploaded and bugs approached and fixed, and we managed to go through half of the topics that required discussion.

We hope to be able to keep up the good work and finish on Friday with a lot of our goals reached, taking a big step ahead.

07 February, 2020 01:50PM by Daniel Leidert (dleidert@debian.org)