February 24, 2024

Niels Thykier

Language Server for Debian: Spellchecking

This is my third update on writing a language server for Debian packaging files, which aims at providing a better developer experience for Debian packagers.

Let's go over what I have done since the last report.

Semantic token support

I have added support for what the Language Server Protocol (LSP) calls semantic tokens. These are used to give the editor insight into the tokens of interest for users. Allegedly, this is what editors would use for syntax highlighting as well.

Unfortunately, eglot (emacs) does not support semantic tokens, so I was not able to test this. There is a 3-year-old PR for supporting it, with the last update being ~3 months ago, basically saying "Please sign the Copyright Assignment". I pinged the GitHub issue in the hope that it will get unstuck.

For good measure, I also checked if I could try it via neovim. Before installing, I read the neovim docs, which helpfully listed the supported features. Sadly, I did not spot semantic tokens among those and parked the idea there.

That was a bit of a bummer, but I left the feature in for now. If you have an LSP capable editor that supports semantic tokens, let me know how it works for you! :)

Spellchecking

Finally, I implemented something Otto was missing! :)

This started with Paul Wise reminding me that there are Python bindings for the hunspell spellchecker. This enabled me to get started with a quick prototype that spellchecked the Description fields in debian/control. I also added spellchecking of comments while I was at it.

The spellchecker runs with the standard en_US dictionary from hunspell-en-us, which does not have a lot of technical terms in it, much less any of the Debian-specific slang. I spent considerable time providing a "built-in" wordlist of technical and Debian-specific slang to overcome this. I also made a "wordlist" of known Debian people that the spellchecker did not recognise. Said wordlist is fairly short as a proof of concept, and I fully expect it to be community maintained if the language server becomes a success.
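
For the curious, the Python bindings make this setup quite small. Here is a rough sketch of the approach (not the actual debputy code; the wordlist filename is made up for the example):

import hunspell

# Standard US English dictionary from the hunspell-en-us package.
checker = hunspell.HunSpell(
    "/usr/share/hunspell/en_US.dic",
    "/usr/share/hunspell/en_US.aff",
)

# Teach this session the technical and Debian-specific slang from a
# "built-in" wordlist (the filename is made up for this example).
with open("debian-wordlist.txt", encoding="utf-8") as wordlist:
    for word in wordlist:
        checker.add(word.strip())


def spellcheck(text):
    """Yield (word, suggestions) for each word hunspell does not know."""
    for word in text.split():
        if word and not checker.spell(word):
            yield word, checker.suggest(word)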

My second problem was performance. I had suspected that spellchecking would not be the fastest thing in the world. Therefore, I added a very small language server for debian/changelog, which only supports spellchecking the textual part. Even for a small changelog of 1000 lines, the spellchecking takes about 5 seconds, which confirmed my suspicion. With every change you make, the existing diagnostics hang around for 5 seconds before being updated. Notably, in emacs, it seems that diagnostics get translated into an absolute character offset, so all diagnostics after the change get misplaced for every character you type.

Now, there is little I could do to speed up hunspell. But I can, as always, cheat. The way diagnostics work in the LSP is that the server listens for a set of notifications like "document opened" or "document changed". In response to that, the server can start its diagnostics scanning of the document and eventually publish all the diagnostics to the editor. The spec is quite clear that the server owns the diagnostics and that the diagnostics are sent as a "notification" (that is, fire-and-forget). Accordingly, there is nothing that prevents the server from publishing diagnostics multiple times for a single trigger. The only requirement is that the server publishes the accumulated diagnostics in every publish (that is, no delta updates).

Leveraging this, I had the language server for debian/changelog scan the document and publish once for approximately every 25 typos (diagnostics) spotted. This means you quickly get your first result, which also clears the obsolete diagnostics. Thereafter, you get frequent updates for the remainder of the document if you do not make any further changes. That is, up to a predefined maximum number of typos, so we do not overload the client for longer changelogs. If you make any changes, it resets and starts over.
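
To illustrate the trick, here is a simplified sketch (not the actual debputy code; spellcheck_line is an assumed helper) of a handler that re-publishes the accumulated diagnostics every time a batch threshold is crossed:

BATCH_SIZE = 25  # publish roughly every 25 diagnostics


def scan_and_publish(ls, uri, lines):
    """Publish accumulated diagnostics in batches (simplified sketch)."""
    diagnostics = []
    published = 0
    for line_no, line in enumerate(lines):
        diagnostics.extend(spellcheck_line(line_no, line))  # assumed helper
        if len(diagnostics) - published >= BATCH_SIZE:
            # Each publish replaces the previous set on the editor side,
            # so we always send everything accumulated so far.
            ls.publish_diagnostics(uri, list(diagnostics))
            published = len(diagnostics)
    # Final publish with the complete set for the document.
    ls.publish_diagnostics(uri, diagnostics)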

The only bit missing was dealing with concurrency. By default, a pygls language server is single-threaded, and it is not great if the language server hangs for 5 seconds every time you type anything. Fortunately, pygls has built-in support for asyncio and threaded handlers. For now, I went with an async handler that awaits after each line and set up some manual detection to stop an obsolete diagnostics run. This means the server will fairly quickly abandon an obsolete run.
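
As a sketch of what such an async handler can look like with pygls (simplified; attribute names may differ slightly between pygls versions, and spellcheck_line is again an assumed helper):

import asyncio


async def spellcheck_document(ls, uri):
    """Spellcheck a document, abandoning the run once it becomes obsolete."""
    doc = ls.workspace.get_document(uri)
    started_version = doc.version
    diagnostics = []
    for line_no, line in enumerate(doc.lines):
        # Yield to the event loop so newly arrived edits get processed.
        await asyncio.sleep(0)
        if ls.workspace.get_document(uri).version != started_version:
            # A newer edit arrived; this run is obsolete, so give up.
            return
        diagnostics.extend(spellcheck_line(line_no, line))  # assumed helper
    ls.publish_diagnostics(uri, diagnostics)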

Also, as a side-effect of working on the spellchecking, I fixed multiple typos in the changelog of debputy. :)

Follow up on the "What next?" from my previous update

In my previous update, I mentioned I had to finish up my python-debian changes to support getting the location of a token in a deb822 file. That was done, the MR is now filed, and is pending review. Hopefully, it will be merged and uploaded soon. :)

I also submitted my proposal for a different way of handling relationship substvars to debian-devel. So far, it seems to have received only positive feedback. I hope it stays that way and we will have this feature soon. Guillem proposed to move some of this into dpkg, which might delay my plans a bit. However, it might be for the better in the long run, so I will wait a bit to see what happens on that front. :)

As noted above, I managed to add debian/changelog as a supported format for the language server. Even if it only does spellchecking and trimming of trailing newlines on save, it technically is a new format and therefore crosses that item off my list. :D

Unfortunately, I did not manage to write a linter variant that does not involve using an LSP-capable editor. So that is still pending. Instead, I submitted an MR against elpa-dpkg-dev-el to have it recognize all the fields that the debian/control LSP knows about at this time to offset the lack of semantic token support in eglot.

From here...

My sprinting on this topic will soon come to an end, so I have to be a bit more careful now with which tasks I open!

I think I will narrow my focus to providing a batch linting interface. Ideally with an auto-fix for some of the more mechanical issues, where there is little doubt about the answer.

Additionally, I think the spellchecking will need a bit more maturing. My current code still trips on naming patterns that are "clearly" verbatim or code references, like things written in CamelCase or SCREAMING_SNAKE_CASE. That gets annoying really quickly. It also trips on a lot of commands like dpkg-gencontrol, but that is harder to fix, since it could have been a real word. I think those will have to be fixed by people putting quotes around the commands. Maybe the most popular ones will end up in the wordlist.
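
For the CamelCase and SCREAMING_SNAKE_CASE part, the fix is probably a couple of "looks like code" heuristics applied before a token is handed to the spellchecker. A rough sketch of what I have in mind (the exact patterns are up for debate):

import re

# Tokens matching these patterns are "clearly" code references and
# should be skipped rather than spellchecked.
CAMEL_CASE = re.compile(r"^[A-Z][a-z0-9]+(?:[A-Z][a-z0-9]+)+$")
SCREAMING_SNAKE_CASE = re.compile(r"^[A-Z0-9]+(?:_[A-Z0-9]+)+$")


def looks_like_code(token):
    return bool(CAMEL_CASE.match(token) or SCREAMING_SNAKE_CASE.match(token))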

Beyond that, I will play it by ear if I have any time left. :)

24 February, 2024 08:45AM by Niels Thykier

February 23, 2024


Gunnar Wolf

10 things software developers should learn about learning

This post is a review for Computing Reviews of 10 things software developers should learn about learning, an article published in Communications of the ACM.

As software developers, we understand the detailed workings of the different components of our computer systems. And–probably due to how computers were presented since their appearance as “digital brains” in the 1940s–we sometimes believe we can transpose that knowledge to how our biological brains work, be it as learners or as problem solvers. This article aims at making the reader understand several mechanisms related to how learning and problem solving actually work in our brains. It focuses on helping expert developers convey knowledge to new learners, as well as learners who need to get up to speed and “start coding.” The article’s narrative revolves around software developers, but much of what it presents can be applied to different problem domains.

The article takes this mission through ten points, with roughly the same space given to each of them, starting with wrong assumptions many people have about the similarities between computers and our brains. The first section, “Human Memory Is Not Made of Bits,” explains the brain processes of remembering as a way of strengthening the force of a memory (“reconsolidation”) and the role of activation in related network pathways. The second section, “Human Memory Is Composed of One Limited and One Unlimited System,” goes on to explain the organization of memories in the brain between long-term memory (functionally limitless, permanent storage) and working memory (storing little amounts of information used for solving a problem at hand). However, the focus soon shifts to how experience in knowledge leads to different ways of using the same concepts, the importance of going from abstract to concrete knowledge applications and back, and the role of skills repetition over time.

Toward the end of the article, the focus shifts from the mechanical act of learning to expertise. Section 6, “The Internet Has Not Made Learning Obsolete,” emphasizes that problem solving is not just putting together the pieces of a puzzle; searching online for solutions to a problem does not activate the neural pathways that would get fired up otherwise. The final sections tackle the differences that expertise brings to play when teaching or training a newcomer: the same tools that help the beginner’s productivity as “training wheels” will often hamper the expert user’s as their knowledge has become automated.

The article is written with a very informal and easy-to-read tone and vocabulary, and brings forward several issues that might seem like common sense but do ring bells when it comes to my own experiences both as a software developer and as a teacher. The article closes by suggesting several books that further expand on the issues it brings forward. While I could not identify a single focus or thesis with which to characterize this article, the several points it makes will likely help readers better understand (and bring forward to consciousness) mental processes often taken for granted, and consider often-overlooked aspects when transmitting knowledge to newcomers.

23 February, 2024 01:56AM

Reproducible Builds (diffoscope)

diffoscope 258 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 258. This version includes the following changes:

[ Chris Lamb ]
* Use the 7zip package (over p7zip-full) after package transition.
  (Closes: #1063559)
* Update debian/tests/control.

[ Vagrant Cascadian ]
* Fix a typo in the package name field (!) within debian/changelog.

You can find out more by visiting the project homepage.

23 February, 2024 12:00AM

February 21, 2024

Niels Thykier

Expanding on the Language Server (LSP) support for debian/control

I have spent some more time on improving my language server for debian/control. Today, I managed to provide the following features:

  • The X- style prefixes for field names are now understood and handled. This means the language server now considers XC-Package-Type the same as Package-Type.

  • More diagnostics:

    • Fields without values now trigger an error marker
    • Duplicated fields now trigger an error marker
    • Fields used in the wrong paragraph now trigger an error marker
    • Typos in field names or values now trigger a warning marker. For field names, X- style prefixes are stripped before typo detection is done.
    • The value of the Section field is now validated against a dataset of known sections and trigger a warning marker if not known.
  • The "on-save trim end of line whitespace" now works. I had a logic bug in the server side code that made it submit "no change" edits to the editor.

  • The language server now provides "hover" documentation for field names. There is a small screenshot of this below. Sadly, emacs does not support markdown or, if it does, it does not announce the support for markdown. For now, all the documentation is always written in markdown format and the language server will tag it as either markdown or plaintext depending on the announced support. (A small sketch of such a hover handler follows after this list.)

  • The language server now provides quick fixes for some of the more trivial problems such as deprecated fields or typos of fields and values.

  • Added more known fields including the XS-Autobuild field for non-free packages along with a link to the relevant devref section in its hover doc.
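
As a small sketch of what the hover part can look like with pygls (simplified and not the actual debputy code; the helpers for finding the field under the cursor and for checking the client's announced markdown support are assumed):

from pygls.server import LanguageServer
from lsprotocol.types import (
    TEXT_DOCUMENT_HOVER,
    Hover,
    HoverParams,
    MarkupContent,
    MarkupKind,
)

server = LanguageServer("debputy-sketch", "v0")


@server.feature(TEXT_DOCUMENT_HOVER)
def hover(params: HoverParams):
    field_doc = field_documentation_at(params)  # assumed helper
    if field_doc is None:
        return None
    # The documentation itself is written in markdown; tag it as markdown
    # only when the client announced support for it, else as plaintext.
    kind = (
        MarkupKind.Markdown
        if client_supports_markdown(server)  # assumed helper
        else MarkupKind.PlainText
    )
    return Hover(contents=MarkupContent(kind=kind, value=field_doc))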

This covers basically all my known omissions from the last update except spellchecking of the Description field.

An image of emacs showing documentation for the Provides field from the language server.

Spellchecking

Personally, I feel spellchecking would be a very welcome addition to the current feature set. However, reviewing my options, it seems that most of the spellchecking Python libraries out there are not packaged for Debian, or at least not under the names I assumed they would be.

The alternative is to pipe the spellchecking to another program like aspell list. I did not test this fully, but aspell list does seem to do some input buffering that I cannot easily disable (at least not in the shell). Either way, the logic for this will not be trivial, and aspell list does not seem to include the corrections either. So best case, you would get typo markers but no suggestions for what you should have typed. Not ideal.

Additionally, I am also concerned with the performance for this feature. For d/control, it will be a trivial matter in practice. However, I would be reusing this for d/changelog which is 99% free text with plenty of room for typos. For a regular linter, some slowness is acceptable as it is basically a batch tool. However, for a language server, this potentially translates into latency for your edits and that gets annoying.

While it is definitely on my long term todo list, I am a bit afraid that it can easily become a time sink. Admittedly, this does annoy me, because I wanted to cross off at least one of Otto's requested features soon.

On wrap-and-sort support

The other obvious request from Otto would be to automate wrap-and-sort formatting. Here, the problem is that "we" in Debian do not agree on the one true formatting of debian/control. In fact, I am fairly certain we do not even agree on whether we should all use wrap-and-sort. This implies we need a style configuration.

However, if we have a style configuration per person, then you get style "ping-pong" for packages where the co-maintainers do not all have the same style configuration. Additionally, it is very likely that you are a member of multiple packaging teams or groups that all have their own unique style. Ergo, only having a personal config file is doomed to fail.

The only "sane" option here that I can think of is to have or support "per package" style configuration. Something that would be committed to git, so the tooling would automatically pick up the configuration. Obviously, that is not fun for large packaging teams where you have to maintain one file per package if you want a consistent style across all packages. But it beats "style ping-pong" any day of the week.

Note that I am perfectly open to having a personal configuration file as a fallback for when the "per package" configuration file is absent.

The second problem is the question of which format to use and what to name this file. Since file formats and naming have never been controversial at all, this will obviously be the easy part of this problem. But the file should be parsable by both wrap-and-sort and the language server, so you get the same result regardless of which tool you use. If we do not ensure this, then we still have the style ping-pong problem, as people use different tools.

This also seems like a time sink with no end. So, what next then...?

What next?

On the language server front, I will have a look at its support for providing semantic hints to the editors that might be used for syntax highlighting. While I think most common Debian editors have built-in syntax highlighting already, I would like this language server to stand on its own. I would like us to be in a situation where we do not have to implement yet another editor extension for Debian packaging files. At least not for editors that support the LSP spec.

On a different front, I have an idea for how we could go about relationship-related substvars. It is not directly related to this language server, except that I got triggered by the language server "missing" a diagnostic reminding people to add the magic Depends: ${misc:Depends}[, ${shlibs:Depends}] boilerplate. The magic boilerplate that you have to write even though we really should just fix this at the tooling level instead. Energy permitting, I will formulate a proposal for that and send it to debian-devel.

Beyond that, I think I might start adding support for another file. I also need to wrap up my python-debian branch, so I can get the position support into Debian soon, which would remove one papercut for using this language server.

Finally, it might be interesting to see if I can extract a "batch-linter" version of the diagnostics and related quickfix features. If nothing else, the "linter" variant would enable many of you to get a "mini-Lintian" without having to do a package build first.

21 February, 2024 10:00PM by Niels Thykier

February 20, 2024

Language Server (LSP) support for debian/control

About a month ago, Otto Kekäläinen asked for editor extensions for debian related files on the debian-devel mailing list. In that thread, I concluded that what we were missing was a "Language Server" (LSP) for our packaging files.

Last week, I started a prototype of such an LSP for the debian/control file as a starting point, based on the pygls library. The initial prototype worked, and I could do very basic diagnostics plus completion suggestions for field names.

Current features

I got 4 basic features implemented, though I have only been able to test two of them in emacs.

  • Diagnostics or linting of basic issues.
  • Completion suggestions for all known field names that I could think of and values for some fields.
  • Folding ranges (untested). This feature enables the editor to "fold" multiple lines. It is often used with multi-line comments and that is the feature currently supported.
  • On save, trim trailing whitespace at the end of lines (untested). Might not be registered correctly on the server end.

Despite its very limited feature set, I feel editing debian/control in emacs is now a much more pleasant experience.

Coming back to the features that Otto requested, the above covers a grand total of zero. Sorry, Otto. It is not you, it is me.

Completion suggestions

For completion, all known fields are completed. Place the cursor at the start of the line or in a partially written out field name and trigger the completion in your editor. In my case, I can type R-R-R and trigger the completion, and the editor will automatically replace it with Rules-Requires-Root as the only applicable match. Your mileage may vary, since I delegate most of the filtering to the editor, meaning the editor has the final say about whether your input matches anything.

The only filtering done on the server side is that the server prunes out fields already used in the paragraph, so you are not presented with the option to repeat an already used field, which would be an error. Admittedly, not an error the language server detects at the moment, but other tools will.

When completing a field that only has one non-default value, such as Essential, which can be either no (the default, but you should not use it) or yes, the completion suggestion will complete the field along with its value.

This is mostly only applicable for "yes/no" fields such as Essential and Protected. But it does also trigger for Package-Type at the moment.

As for completing values, here the language server can complete the value for simple fields such as "yes/no" fields, Multi-Arch, Package-Type and Priority. I intend to add support for Section as well - maybe also Architecture.
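
For the curious, a field-name completion handler in pygls looks roughly like the sketch below (simplified, with a made-up shortlist of fields; the real server knows far more fields and values):

from pygls.server import LanguageServer
from lsprotocol.types import (
    TEXT_DOCUMENT_COMPLETION,
    CompletionItem,
    CompletionList,
    CompletionParams,
)

server = LanguageServer("debputy-sketch", "v0")

# A made-up shortlist for the example; the real list is much longer.
KNOWN_FIELDS = ["Package", "Architecture", "Multi-Arch", "Rules-Requires-Root"]


@server.feature(TEXT_DOCUMENT_COMPLETION)
def complete_field_names(params: CompletionParams) -> CompletionList:
    # The editor does the final filtering against what the user typed;
    # the server mostly just offers all the known field names.
    items = [CompletionItem(label=f"{name}: ") for name in KNOWN_FIELDS]
    return CompletionList(is_incomplete=False, items=items)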

Diagnostics

On the diagnostic front, I have added multiple diagnostics:

  • An error marker for syntax errors.
  • An error marker for missing a mandatory field like Package or Architecture. This also includes Standards-Version, which is admittedly mandatory by policy rather than because tooling falls apart without it.
  • An error marker for adding Multi-Arch: same to an Architecture: all package.
  • Error marker for providing an unknown value to a field with a set of known values. As an example, writing foo in Multi-Arch would trigger this one.
  • Warning marker for using deprecated fields such as DM-Upload-Allowed, or when setting a field to its default value for fields like Essential. The latter rule only applies to selected fields and notably Multi-Arch: no does not trigger a warning.
  • Info level marker if a field like Priority duplicates the value of the Source paragraph.

Notable omissions at this time:

  • No errors are raised if a field does not have a value.
  • No errors are raised if a field is duplicated inside a paragraph.
  • No errors are raised if a field is used in the wrong paragraph.
  • No spellchecking of the Description field.
  • No understanding that Foo and X[CBS]-Foo are related. As an example, XC-Package-Type is completely ignored despite being the old name for Package-Type.
  • Quick fixes to solve these problems... :)

Trying it out

If you want to try, it is sadly a bit more involved due to things not being uploaded or merged yet. Also, be advised that I will regularly rebase my git branches as I revise the code.

The setup:

  • Build and install the deb of the main branch of pygls from https://salsa.debian.org/debian/pygls The package is in NEW and hopefully this step will soon just be a regular apt install.
  • Build and install the deb of the rts-locatable branch of my python-debian fork from https://salsa.debian.org/nthykier/python-debian There is a draft MR of it as well on the main repo.
  • Build and install the deb of the lsp-support branch of debputy from https://salsa.debian.org/debian/debputy
  • Configure your editor to run debputy lsp debian/control as the language server for debian/control. This depends on your editor. I figured out how to do it for emacs (see below). I also found a guide for neovim at https://neovim.io/doc/user/lsp. Note that debputy can be run from any directory here. The debian/control is a reference to the file format and not a concrete file in this case.

Obviously, the setup should get easier over time. The first three bullet points should eventually get resolved by merges and uploads, meaning you end up with an apt install command instead of them.

For the editor part, I would obviously love it if we can add snippets for editors to make them automatically pick up the language server when the relevant file is installed.

Using the debputy LSP in emacs

The guide I found so far relies on eglot. The guide below assumes you have the elpa-dpkg-dev-el package installed for the debian-control-mode. Though it should be a trivial matter to replace debian-control-mode with a different mode if you use a different mode for your debian/control file.

In your emacs init file (such as ~/.emacs or ~/.emacs.d/init.el), you add the following blob.

(with-eval-after-load 'eglot
    (add-to-list 'eglot-server-programs
        '(debian-control-mode . ("debputy" "lsp" "debian/control"))))

Once you open the debian/control file in emacs, you can type M-x eglot to activate the language server. Not sure why that manual step is needed; if someone knows how to automate it such that eglot activates automatically on opening debian/control, please let me know.

For testing completions, I often have to manually activate them (with C-M-i or M-x complete-symbol). Though, it is a bit unclear to me whether this is an emacs setting that I have not toggled or something I need to do on the language server side.

From here

As next steps, I will probably look into fixing some of the "known missing" items under diagnostics. The quick fix would be a considerable improvement to assisting users.

In the not so distant future, I will probably start to look at supporting other files such as debian/changelog or look into supporting configuration, so I can cover formatting features like wrap-and-sort.

I am also very much open to how we can provide integrations for this feature into editors by default. I will probably create a separate binary package for specifically this feature that pulls all relevant dependencies that would be able to provide editor integrations as well.

20 February, 2024 03:45PM by Niels Thykier

Language Server (LSP) support for debian/control

Work done:

  • [X] No errors are raised if a field does not have a value.
  • [X] No errors are raised if a field is duplicated inside a paragraph.
  • [X] No errors are raised if a field is used in the wrong paragraph.
  • [ ] No spellchecking of the Description field.
  • [X] No understanding that Foo and X[CBS]-Foo are related. As an example, XC-Package-Type is completely ignored despite being the old name for Package-Type.
  • [X] Fixed the on-save trim end of line whitespace. Bug in the server end.
  • [X] Hover text for field names

20 February, 2024 03:45PM by Niels Thykier


Jonathan Dowland

Propaganda — A Secret Wish

How can I not have done one of these for Propaganda already?

Propaganda: A Secret Wish, and 12"s of Duel and p:Machinery

Propaganda/A Secret Wish is criminally underrated. There seem to be a zillion variants of each track, which keeps completionists busy. Of the variants of Jewel/Duel/etc., I'm fond of the 03:10, almost instrumental mix of Jewel; preferring the lyrics to be exclusive to the more radio-friendly Duel (04:42); I don't need them conflated (Jewel 06:21); but there are further depths I've yet to explore (the Do Well cassette mix, the 20:07 The First Cut / Duel / Jewel (Cut Rough) / Wonder / Bejewelled mega-mix...)

I recently watched The Fall of the House of Usher, which I think has Poe lodged in my brain, which is how this album popped back into my consciousness this morning, with the opening lines of Dream within a Dream.

But are they Goth?

20 February, 2024 09:26AM

February 19, 2024


Matthew Garrett

Debugging an odd inability to stream video

We have a cabin out in the forest, and when I say "out in the forest" I mean "in a national forest subject to regulation by the US Forest Service" which means there's an extremely thick book describing the things we're allowed to do and (somewhat longer) not allowed to do. It's also down in the bottom of a valley surrounded by tall trees (the whole "forest" bit). There used to be AT&T copper but all that infrastructure burned down in a big fire back in 2021 and AT&T no longer supply new copper links, and Starlink isn't viable because of the whole "bottom of a valley surrounded by tall trees" thing along with regulations that prohibit us from putting up a big pole with a dish on top. Thankfully there's LTE towers nearby, so I'm simply using cellular data. Unfortunately my provider rate limits connections to video streaming services in order to push them down to roughly SD resolution. The easy workaround is just to VPN back to somewhere else, which in my case is just a Wireguard link back to San Francisco.

This worked perfectly for most things, but some streaming services simply wouldn't work at all. Attempting to load the video would just spin forever. Running tcpdump at the local end of the VPN endpoint showed a connection being established, some packets being exchanged, and then… nothing. The remote service appeared to just stop sending packets. Tcpdumping the remote end of the VPN showed the same thing. It wasn't until I looked at the traffic on the VPN endpoint's external interface that things began to become clear.

This probably needs some background. Most network infrastructure has a maximum allowable packet size, which is referred to as the Maximum Transmission Unit or MTU. For ethernet this defaults to 1500 bytes, and these days most links are able to handle packets of at least this size, so it's pretty typical to just assume that you'll be able to send a 1500 byte packet. But what's important to remember is that that doesn't mean you have 1500 bytes of packet payload - that 1500 bytes includes whatever protocol level headers are on there. For TCP/IP you're typically looking at spending around 40 bytes on the headers, leaving somewhere around 1460 bytes of usable payload. And if you're using a VPN, things get annoying. In this case the original packet becomes the payload of a new packet, which means it needs another set of TCP (or UDP) and IP headers, and probably also some VPN header. This still all needs to fit inside the MTU of the link the VPN packet is being sent over, so if the MTU of that is 1500, the effective MTU of the VPN interface has to be lower. For Wireguard, this works out to an effective MTU of 1420 bytes. That means simply sending a 1500 byte packet over a Wireguard (or any other VPN) link won't work - adding the additional headers gives you a total packet size of over 1500 bytes, and that won't fit into the underlying link's MTU of 1500.

And yet, things work. But how? Faced with a packet that's too big to fit into a link, there are two choices - break the packet up into multiple smaller packets ("fragmentation") or tell whoever's sending the packet to send smaller packets. Fragmentation seems like the obvious answer, so I'd encourage you to read Valerie Aurora's article on how fragmentation is more complicated than you think. tl;dr - if you can avoid fragmentation then you're going to have a better life. You can explicitly indicate that you don't want your packets to be fragmented by setting the Don't Fragment bit in your IP header, and then when your packet hits a link where your packet exceeds the link MTU it'll send back a packet telling the remote that it's too big, what the actual MTU is, and the remote will resend a smaller packet. This avoids all the hassle of handling fragments in exchange for the cost of a retransmit the first time the MTU is exceeded. It also typically works these days, which wasn't always the case - people had a nasty habit of dropping the ICMP packets telling the remote that the packet was too big, which broke everything.

What I saw when I tcpdumped on the remote VPN endpoint's external interface was that the connection was getting established, and then a 1500 byte packet would arrive (this is kind of the behaviour you'd expect for video - the connection handshaking involves a bunch of relatively small packets, and then once you start sending the video stream itself you start sending packets that are as large as possible in order to minimise overhead). This 1500 byte packet wouldn't fit down the Wireguard link, so the endpoint sent back an ICMP packet to the remote telling it to send smaller packets. The remote should then have sent a new, smaller packet - instead, about a second after sending the first 1500 byte packet, it sent that same 1500 byte packet. This is consistent with it ignoring the ICMP notification and just behaving as if the packet had been dropped.

All the services that were failing were failing in identical ways, and all were using Fastly as their CDN. I complained about this on social media and then somehow ended up in contact with the engineering team responsible for this sort of thing - I sent them a packet dump of the failure, they were able to reproduce it, and it got fixed. Hurray!

(Between me identifying the problem and it getting fixed I was able to work around it. The TCP header includes a Maximum Segment Size (MSS) field, which indicates the maximum size of the payload for this connection. iptables allows you to rewrite this, so on the VPN endpoint I simply rewrote the MSS to be small enough that the packets would fit inside the Wireguard MTU. This isn't a complete fix since it's done at the TCP level rather than the IP level - so any large UDP packets would still end up breaking)
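
For reference, this kind of MSS rewriting is typically done with iptables' TCPMSS target, along these lines (wg0 is an illustrative interface name):

# Clamp the MSS of TCP connections forwarded over the Wireguard link so the
# packets fit within the path MTU (or use --set-mss with an explicit value).
iptables -t mangle -A FORWARD -o wg0 -p tcp --tcp-flags SYN,RST SYN \
         -j TCPMSS --clamp-mss-to-pmtu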

I've no idea what the underlying issue was, and at the client end the failure was entirely opaque: the remote simply stopped sending me packets. The only reason I was able to debug this at all was because I controlled the other end of the VPN as well, and even then I wouldn't have been able to do anything about it other than being in the fortuitous situation of someone able to do something about it seeing my post. How many people go through their lives dealing with things just being broken and having no idea why, and how do we fix that?

(Edit: thanks to this comment, it sounds like the underlying issue was a kernel bug that Fastly developed a fix for - under certain configurations, the kernel fails to associate the MTU update with the egress interface and so it continues sending overly large packets)


19 February, 2024 10:30PM

Valhalla's Things

Jeans, step one

Posted on February 19, 2024
Tags: madeof:atoms, craft:sewing, FreeSoftWear

CW for body size change mentions

A woman wearing a pair of tight jeans.

Just like the corset, I also needed a new pair of jeans.

Back when my body size changed drastically, of course my jeans no longer fit. While I was waiting for my size to stabilize I kept wearing them with a somewhat tight belt, but it was ugly and somewhat uncomfortable.

When I had stopped changing a lot I tried to buy new ones in the same model, and found out that I was too thin for the menswear jeans of that shop. I could have gone back to wearing women’s jeans, but I didn’t want to have to deal with the crappy fabric and short pockets, so I basically spent a few years wearing mostly skirts, and oversized jeans when I really needed trousers.

Meanwhile, I had drafted a jeans pattern for my SO, which we had planned to make in technical fabric, but ended up being made in a cotton-wool mystery mix for winter and in linen-cotton for summer, and the technical fabric version was no longer needed (yay for natural fibres!)

It was clear what the solution to my jeans problems would have been, I just had to stop getting distracted by other projects and draft a new pattern using a womanswear block instead of a menswear one.

Which, in January 2024 I finally did, and I believe it took a bit less time than the previous one, even if it had all of the same fiddly pieces.

I already had a cut of the same cotton-linen I had used for my SO, except in black, and used it to make the pair this post is about.

The parametric pattern is of course online, as #FreeSoftWear, at the usual place. This time it was faster, since I didn’t have to write step-by-step instructions, as they are exactly the same as the other pattern.

Same as above, from the back, with the crotch seam pulling a bit. A faint decoration can be seen on the pockets, with the line art version of the logo seen on this blog.

Making also went smoothly, and the result was fitting. Very fitting. A bit too fitting, and the standard bum adjustment of the back was just enough for what apparently still qualifies as a big bum, so I adjusted the pattern to be able to add a custom amount of ease in a few places.

But at least I had a pair of jeans-shaped trousers that fit!

Except, at 200 g/m² I can’t say that fabric is the proper weight for a pair of trousers, and I may have looked around online1 for some denim, and, well, it’s 2024, so my no-fabric-buy 2023 has not been broken, right?

Let us just say that there may be other jeans-related posts in the near future.


  1. I had already asked years ago for denim at my local fabric shops, but they don’t have the proper, sturdy, type I was looking for.↩︎

19 February, 2024 12:00AM

February 18, 2024

Iustin Pop

New skis ⛷️ , new fun!

As I wrote a bit back, I had a really, really bad fourth quarter in 2023. As the new year approached, and we were getting ready to go on a ski trip, I wasn't even sure if and how much I'd be able to ski.

And I felt so out of it that I didn't even buy a ski pass for the whole week, just bought one day to see if a) I still like it, and b) my knee can deal with it. And, of course, it was good.

It was good enough that I ended up skiing the entire week, and my knee got better during the week. WTH?! I don’t understand this anymore, but it was good. Good enough that this early year trip put me back on track and I started doing sports again.

But the main point is that during this ski week, and talking to the teacher, I realised that my ski equipment is getting a bit old. I bought everything roughly ten years ago, and while it still holds up OK, my ski skills have improved since then. I said to myself, 10 years is a good run; I'll replace the skis this year, the boots & helmet next year, etc.

I didn’t expect much from new skis - I mean, yes, better skis, but what does “better” mean? Well, once I’ve read enough forum posts, apparently the skis I selected are “that good”, which to me meant they’re not bad.

Oh my, how wrong I was! Double, triple wrong! Rather than fighting with the skis, it's enough to think about what I want to do, and the skis do it. I felt OK-ish, maybe 10% limited by my previous skis, but the new skis are really good and also I know that I'm just at 30% or so of the new skis - so room to grow. For now, I am able to ski faster, longer, and I feel less tired than before. I've actually compared and I can do twice the distance in a day and feel slightly less tired at the end. I've moved from "this black is cool but a bit difficult, I'll do another run later in the day when I've recovered" to "how cool, this black is quite empty of people, let's stay here for 2-3 more rounds".

The skis are new, and I haven’t used them on all the places I’m familiar with - but the upgrade is huge. The people on the ski forum were actually not exaggerating, I realise now. Stöckli++, and they’re also made in Switzerland. Can’t wait to get back to Saas Fee and to the couple of slopes that were killing me before, to see how they feel now.

So, very happy with this choice. I’d be even happier if my legs were less damaged, but well, you can’t win them all.

And not last, the skis are also very cool looking 😉

18 February, 2024 06:00AM

Russell Coker

Release Years

In 2008 I wrote about the idea of having a scheduled release for Debian and other distributions as Mark Shuttleworth had proposed [1]. I still believe that Mark’s original idea for synchronised release dates of Linux distributions (or at least synchronised feature sets) is a good one but unfortunately it didn’t take off.

Having been using Ubuntu a bit recently I've found the version numbering system to be really good. Ubuntu version 16.04 was released in April 2016, its support ended 5 years later in about April 2021, so any commonly available computers from 2020 should run it well and versions of applications released in about 2017 should run on it. If I have to support a Debian 10 (Buster) system I need to start with a web search to discover when it was released (July 2019). That suggests that applications packaged for Ubuntu 18.04 are likely to run on it.

If we had the same numbering system for Debian and Ubuntu then it would be easier to compare versions. Debian 19.06 would be obviously similar to Ubuntu 18.04, or we could plan for the future and call it Debian 2019.

Then it would be ideal if hardware vendors did the same thing (as car manufacturers have been doing for a long time). Which versions of Ubuntu and Debian would run well on a Dell PowerEdge R750? It takes a little searching to discover that the R750 was released in 2021, but if they called it a PowerEdge 2021R70 then it would be quite obvious that Ubuntu 2022.04 would run well on it and that Ubuntu 2020.04 probably has a kernel update with all the hardware supported.

One of the benefits for the car industry in naming model years is that it drives the purchase of a new car. A 2015 car probably isn’t going to impress anyone and you will know that it is missing some of the features in more recent models. It would be easier to get management to sign off on replacing old computers if they had 2015 on the front, trying to estimate hidden costs of support and lost productivity of using old computers is hard but saying “it’s a 2015 model and way out of date” is easy.

There is a history of using dates as software versions. The “Reference Policy” for SE Linux [2] (which is used for Debian) has releases based on date. During the Debian development process I upload policy to Debian based on the upstream Git and use the same version numbering scheme, which is more convenient than the “append git date to last full release” system that some maintainers are forced to use. The users can get an idea of how much the Debian/Unstable policy has diverged from the last full release by looking at the dates. Also an observer might see the short difference between release dates of SE Linux policy and Debian release freeze dates as an indication that I beg the upstream maintainers to make a new release just before each Debian freeze – which is exactly what I do.

When I took over the Portslave [3] program I made releases based on date because there were several forks with different version numbering schemes so my options were to just choose a higher number (which is OK initially but doesn’t scale if there are more forks) or use a date and have people know that the recent date is the most recent version. The fact that the latest release of Portslave is version 2010.04.19 shows that I have not been maintaining it in recent years (due to lack of hardware as well as lack of interest), so if someone wants to take over the project I would be happy to help them do so!

I don’t expect people at Dell and other hardware vendors to take much notice of my ideas (I have tweeted them photographic evidence of a problem with no good response). But hopefully this will start a discussion in the free software community.

18 February, 2024 04:00AM by etbe

February 17, 2024

James Valleroy

snac2: a minimalist ActivityPub server in Debian

snac2, currently available in Debian testing and unstable, is described by its upstream as “A simple, minimalistic ActivityPub instance written in portable C.” It provides an ActivityPub server with a bare-bones web interface. It does not use JavaScript or require a database.

Basic forms for creating a new post, or following someone

ActivityPub is the protocol for federated social networks that is implemented by Mastodon, Pleroma, and other similar server software. Federated social networks are most often used for “micro-blogging”, or making many small posts. You can decide to follow another user (or bot) to see their posts, even if they happen to be on a different server (as long as the server software is compatible with the ActivityPub standard).

The timeline shows posts from accounts that you follow

In addition, snac2 has preliminary support for the Mastodon Client API. This allows basic support for mobile apps that support Mastodon, but you should expect that many features are not available yet.

If you are interested in running a minimalist ActivityPub server on Debian, please try out snac2, and report any bugs that you find.

17 February, 2024 12:58PM by James Valleroy

February 16, 2024

Melissa Wen

Keep an eye out: We are preparing the 2024 Linux Display Next Hackfest!

Igalia is preparing the 2024 Linux Display Next Hackfest and we are thrilled to announce that this year’s hackfest will take place from May 14th to 16th at our HQ in A Coruña, Spain.

This unconference-style event aims to bring together the most relevant players in the Linux display community to tackle current challenges and chart the future of the display stack.

Key goals for the hackfest include:

  • Releasing the power of collaboration: We’ll work to remove bottlenecks and pave the way for smoother, more performant displays.
  • Problem-solving powerhouse: Brainstorming sessions and collaborative coding will target issues like HDR, color management, variable refresh rates, and more.
  • Building on past commitments: Let’s solidify the progress made in recent years and push the boundaries even further.

The hackfest fosters an intimate and focused environment to brainstorm, hack, and design solutions alongside fellow display experts. Participants will dive into discussions, tinker with code, and contribute to shaping the future of the Linux display stack.

More details are available on the official website.

Stay tuned! Keep an eye out for more information, mark your calendars and start prepping your hacking gear.

16 February, 2024 05:25PM


David Bremner

Generating ikiwiki markdown from org

My web pages are (still) in ikiwiki, but lately I have started authoring things like assignments and lectures in org-mode so that I can have some literate programming facilities. There is org-mode export built-in, but it just exports source blocks as examples (i.e. unhighlighted verbatim). I added a custom exporter to mark up source blocks in a way ikiwiki can understand. Luckily this is not too hard the second time.

(with-eval-after-load "ox-md"
  (org-export-define-derived-backend 'ik 'md
    :translate-alist '((src-block . ik-src-block))
    :menu-entry '(?m 1 ((?i "ikiwiki" ik-export-to-ikiwiki)))))

(defun ik-normalize-language  (str)
  (cond
   ((string-equal str "plait") "racket")
   ((string-equal str "smol") "racket")
   (t str)))

(defun ik-src-block (src-block contents info)
  "Transcode a SRC-BLOCK element from Org to ikiwiki markdown.
         CONTENTS is nil.  INFO is a plist used as a communication
         channel."
  (let* ((body  (org-element-property :value src-block))
         (lang  (ik-normalize-language (org-element-property :language src-block))))
    ;; Wrap the source block in an ikiwiki format directive so the
    ;; highlight plugin can mark it up.
    (format "[[!format %s \"\"\"\n%s\"\"\"]]" lang body)))

(defun ik-export-to-ikiwiki
    (&optional async subtreep visible-only body-only ext-plist)
  "Export current buffer as an ikiwiki markdown file.
    See org-md-export-to-markdown for full docs"
  (require 'ox)
  (interactive)
  (let ((file (org-export-output-file-name ".mdwn" subtreep)))
    (org-export-to-file 'ik file
      async subtreep visible-only body-only ext-plist)))

16 February, 2024 04:01PM


Mike Gabriel

Debian Edu 12 - Call for Testing

This is a call for testing of Debian Edu based on Debian bookworm. With the Debian 12.5 point release all required packages have landed in the Debian Edu ISO images that allow you to install a Debian Edu system based on Debian 12.

ISO Image Downloads

You can find the Blu-ray Disc ISO image (use for main server installation) at: http://cdimage.debian.org/cdimage/release/current/amd64/iso-bd/debian-ed...

For standalone workstation installations or installations on an already up-and-running Debian Edu site, please use the netinst ISO image: http://cdimage.debian.org/cdimage/release/current/amd64/iso-cd/debian-ed...

Quick Start HowTo

For testing Debian Edu 12, set up e.g. LXD or libVirt and install (at least) three virtual machines. In your virtualization software prepare an internal network where the VMs can reach one another without needing access to your local network.

The three VMs:

  • set up a gateway VM (no DHCP service) at 10.0.0.1/8 (e.g. OPNsense, pfSense, Debian Edu Router, etc.), with two NICs: one uplink, one NIC in the internal network
  • install the Debian Edu main server from the ISO image on another VM, with one NIC in the internal network
  • then boot a 3rd VM via PXE and install your first workstation, with one NIC in the internal network

Happy testing!

Further Readings

Overall installation profile concept of Debian Edu:
https://wiki.debian.org/DebianEdu/BeforeGettingStarted

Debian Edu 12 manual:
https://jenkins.debian.net/userContent/debian-edu-doc/

Debian Edu 12 status page:
https://wiki.debian.org/DebianEdu/Status/Bookworm

16 February, 2024 07:09AM by sunweaver

February 13, 2024


Steinar H. Gunderson

Random Postgres wishlist

In no particular order, things it would be neat if Postgres had (some of these I've implemented myself earlier, though not in Postgres):

  • A JIT that is not based on LLVM (using an AOT compiler framework for JIT is just not a salvageable choice; witness the absolute dearth of really successful LLVM-based JITs out there).
  • Interesting orders that understand functional dependencies.
  • Cross-join correlation statistics.
  • Better combination of multi-column statistics (the standard way in academia seems to be maximum entropy).
  • Longer-term, some built-in really solid column store or at least multi-row handling, for larger OLAP jobs.
  • Ability to hold an in-memory sample, also for OLAP stuff.
  • Ditching GEQO for something more modern (there are tons of choices depending on how you want the optimizer to evolve, pick your poison).

I'm sure everyone would want something like multimaster clustering, but I honestly don't care for myself. :-) I only have one server anyway.

13 February, 2024 10:16PM

Arturo Borrero González

Back to the Wikimedia Foundation!

Wikimedia Foundation logo

In October 2023, I departed from the Wikimedia Foundation, the non-profit organization behind well-known projects like Wikipedia and others, to join Spryker.

However, in January 2024 Spryker conducted a round of layoffs reportedly due to budget and business reasons. I was among those affected, being let go just three months after joining the company.

Fortunately, the Wikimedia Cloud Services team, where I previously worked, was still seeking to backfill my position, so I reached out to them. They graciously welcomed me back as a Senior Site Reliability Engineer, in the same team and position as before.

Although this three-month career “detour” wasn’t the outcome I initially envisioned, I found it to be a valuable experience. During this time, I gained knowledge in a new tech stack, based on AWS, and discovered new engineering methodologies. Additionally, I had the opportunity to meet some wonderful individuals. I believe I have emerged stronger from this experience.

Returning to the Wikimedia Foundation is truly motivating. I feel privileged to be part of this mature organization, its community and movement, with its inspiring mission and values.

In addition, I’m hoping that this also means I can once again dedicate a bit more attention to my FLOSS activities, such as my duties within the Debian project.

My email address is back online: aborrero@wikimedia.org. You can find me again in the IRC libera.chat server, in the usual wikimedia channels, nick arturo.

13 February, 2024 09:30AM


Matthew Palmer

Not all TLDs are Created Equal

In light of the recent cancellation of the queer.af domain registration by the Taliban, the fragile and difficult nature of country-code top-level domains (ccTLDs) has once again been comprehensively demonstrated. Since many people may not be aware of the risks, I thought I’d give a solid explainer of the whole situation, and explain why you should, in general, not have anything to do with domains which are registered under ccTLDs.

Top-level What-Now?

A top-level domain (TLD) is the last part of a domain name (the collection of words, separated by periods, after the https:// in your web browser’s location bar). It’s the “com” in example.com, or the “af” in queer.af.

There are two kinds of TLDs: country-code TLDs (ccTLDs) and generic TLDs (gTLDs). Despite all being TLDs, they’re very different beasts under the hood.

What’s the Difference?

Generic TLDs are what most organisations and individuals register their domains under: old-school technobabble like “com”, “net”, or “org”, historical oddities like “gov”, and the new-fangled world of words like “tech”, “social”, and “bank”. These gTLDs are all regulated under a set of rules created and administered by ICANN (the “Internet Corporation for Assigned Names and Numbers”), which try to ensure that things aren’t a complete wild-west, limiting things like price hikes (well, sometimes, anyway), and providing means for disputes over names1.

Country-code TLDs, in contrast, are all two letters long2, and are given out to countries to do with as they please. While ICANN kinda-sorta has something to do with ccTLDs (in the sense that it makes them exist on the Internet), it has no authority to control how a ccTLD is managed. If a country decides to raise prices by 100x, or cancel all registrations that were made on the 12th of the month, there’s nothing anyone can do about it.

If that sounds bad, that’s because it is. Also, it’s not a theoretical problem – the Taliban deciding to assert its bigotry over the little corner of the Internet namespace it has taken control of is far from the first time that ccTLDs have caused grief.

Shifting Sands

The queer.af cancellation is interesting because, at the time the domain was reportedly registered, 2018, Afghanistan had what one might describe as, at least, a different political climate. Since then, of course, things have changed, and the new bosses have decided to get a bit more active.

Those running queer.af seem to have seen the writing on the wall, and were planning on moving to another, less fraught, domain, but hadn’t completed that move when the Taliban came knocking.

The Curious Case of Brexit

When the United Kingdom decided to leave the European Union, it fell foul of the EU’s rules for the registration of domains under the “eu” ccTLD3. To register (and maintain) a domain name ending in .eu, you have to be a resident of the EU. When the UK ceased to be part of the EU, residents of the UK were no longer EU residents.

Cue much unhappiness, wailing, and gnashing of teeth when this was pointed out to Britons. Some decided to give up their domains, and move to other parts of the Internet, while others managed to hold onto them by various legal sleight-of-hand (like having an EU company maintain the registration on their behalf).

In any event, all very unpleasant for everyone involved.

Geopolitics… on the Internet?!?

After Russia invaded Ukraine in February 2022, the Ukrainian Vice Prime Minister asked ICANN to suspend ccTLDs associated with Russia. While ICANN said that it wasn’t going to do that, because it wouldn’t do anything useful, some domain registrars (the companies you pay to register domain names) ceased to deal in Russian ccTLDs, and some websites restricted links to domains with Russian ccTLDs.

Whether or not you agree with the sort of activism implied by these actions, the fact remains that even the actions of a government that aren’t directly related to the Internet can have grave consequences for your domain name if it’s registered under a ccTLD. I don’t think any gTLD operator will be invading a neighbouring country any time soon.

Money, Money, Money, Must Be Funny

When you register a domain name, you pay a registration fee to a registrar, who does administrative gubbins and causes you to be able to control the domain name in the DNS. However, you don’t “own” that domain name4 – you’re only renting it. When the registration period comes to an end, you have to renew the domain name, or you’ll cease to be able to control it.

Given that a domain name is typically your “brand” or “identity” online, the chances are you’d prefer to keep it over time, because moving to a new domain name is a massive pain, having to tell all your customers or users that now you’re somewhere else, plus having to accept the risk of someone registering the domain name you used to have and capturing your traffic… it’s all a gigantic hassle.

For gTLDs, ICANN has various rules around price increases and bait-and-switch pricing that tries to keep a lid on the worst excesses of registries. While there are any number of reasonable criticisms of the rules, and the Internet community has to stay on their toes to keep ICANN from totally succumbing to regulatory capture, at least in the gTLD space there’s some degree of control over price gouging.

On the other hand, ccTLDs have no effective controls over their pricing. For example, in 2008 the Seychelles increased the price of .sc domain names from US$25 to US$75. No reason, no warning, just “pay up”.

Who Is Even Getting That Money?

A closely related concern about ccTLDs is that some of the “cool” ones are assigned to countries that are… not great.

The poster child for this is almost certainly Libya, which has the ccTLD “ly”. While Libya was being run by a terrorist-supporting extremist, companies thought it was a great idea to have domain names that ended in .ly. These domain registrations weren’t (and aren’t) cheap, and it’s hard to imagine that at least some of that money wasn’t going to benefit the Gaddafi regime.

Similarly, the British Indian Ocean Territory, which has the “io” ccTLD, was created in a colonialist piece of chicanery that expelled thousands of native Chagossians from Diego Garcia. Money from the registration of .io domains doesn’t go to the (former) residents of the Chagos islands, instead it gets paid to the UK government.

Again, I’m not trying to suggest that all gTLD operators are wonderful people, but it’s not particularly likely that the direct beneficiaries of the operation of a gTLD stole an island chain and evicted the residents.

Are ccTLDs Ever Useful?

The answer to that question is an unqualified “maybe”. I certainly don’t think it’s a good idea to register a domain under a ccTLD for “vanity” purposes: because it makes a word, is the same as a file extension you like, or because it looks cool.

Those ccTLDs that clearly represent and are associated with a particular country are more likely to be OK, because there is less impetus for the registry to try a naked cash grab. Unfortunately, ccTLD registries have a disconcerting habit of changing their minds on whether they serve their geographic locality, such as when auDA decided to declare open season in the .au namespace some years ago. Essentially, while a ccTLD may have geographic connotations now, there’s not a lot of guarantee that it won’t fall victim to scope creep in the future.

Finally, it might be somewhat safer to register under a ccTLD if you live in the location involved. At least then you might have a better idea of whether your domain is likely to get pulled out from underneath you. Unfortunately, as the .eu example shows, living somewhere today is no guarantee you’ll still be living there tomorrow, even if you don’t move house.

In short, I’d suggest sticking to gTLDs. They’re at least lower risk than ccTLDs.

“+1, Helpful”

If you’ve found this post informative, why not buy me a refreshing beverage? My typing fingers (both of them) thank you in advance for your generosity.


Footnotes

  1. don’t make the mistake of thinking that I approve of ICANN or how it operates; it’s an omnishambles of poor governance and incomprehensible decision-making. 

  2. corresponding roughly, though not precisely (because everything has to be complicated, because humans are complicated), to the entries in the ISO standard for “Codes for the representation of names of countries and their subdivisions”, ISO 3166. 

  3. yes, the EU is not a country; it’s part of the “roughly, though not precisely” caveat mentioned previously. 

  4. despite what domain registrars try very hard to imply, without falling foul of deceptive advertising regulations. 

13 February, 2024 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

February 12, 2024

Andrew Cater

Lessons from (and for) colleagues - and, implicitly, how NOT to get on

I have had excellent colleagues both at my day job and, especially, in Debian over the last thirty-odd years. Several have attempted to give me good advice - others have been exemplars. People retire: sadly, people die. What impression do you want to leave behind when you leave here?

Belatedly, I've come to realise that obduracy, sheer bloody mindedness, force of will and obstinacy will only get you so far. The following began very much as a tongue in cheek private memo to myself a good few years ago. I showed it to a colleague who suggested at the time that I should share it to a wider audience.

SOME ADVICE YOU MAY BENEFIT FROM

Personal conduct

  • Never argue with someone you believe to be arguing idiotically - a dispassionate bystander may have difficulty telling who's who.
  • You can't make yourself seem reasonable by behaving unreasonably
  • It does not matter how correct your point of view is if you get people's backs up
  • They may all be #####, @@@@@, %%%% and ******* - saying so out loud doesn't help improve matters and may make you seem intemperate.

Working with others

  • Be the change you want to be and behave the way you want others to behave in order to achieve the desired outcome.
  • You can demolish someone's argument constructively and add weight to good points rather than tearing down their ideas and hard work and being ultra-critical and negative - no-one likes to be told "You know - you've got a REALLY ugly baby there"
  • It's easier to work with someone than to work against them and have to apologise repeatedly.
  • Even when you're outstanding and superlative, even you had to learn it all once. Be generous to help others learn: you shouldn't have to teach too many times if you teach correctly once and take time in doing so.

Getting the message across

  • Stop: think: write: review: (peer review if necessary): publish.
  • Clarity is all: just because you understand it doesn't mean anyone else will.
  • It does not matter how correct your point of view is if you put it across badly.
  • If you're giving advice: make sure it is:
    • Considered
    • Constructive
    • Correct as far as you can (and)
    • Refers to other people who may be able to help
  • Say thank you promptly if someone helps you and be prepared to give full credit where credit's due.

Work is like that

  • You may not know all the answers or even have the whole picture - consult, take advice - LISTEN TO THE ADVICE
  • Sometimes the right answer is not the immediately correct answer
  • Corollary: Sometimes the right answer for the business is not your suggested/preferred outcome
  • Corollary: Just because you can do it like that in the real world doesn't mean that you can do it that way inside the business. [This realisation is INTENSELY frustrating but you have to learn to deal with it]
  • DON'T ALWAYS DO IT YOURSELF - Attempt to fix the system, sometimes allow the corporate monster to fail - then do it yourself and fix it. It is always easier and tempting to work round the system and Just Flaming Do It but it doesn't solve problems in the longer term and may create more problems and ill-feeling than it solves.

    [Worked out by Andy Cater for himself after many years of fighting the system as a misguided missile - though he will freely admit that he doesn't always follow them as often as he should :) ]

12 February, 2024 10:13PM by Andrew Cater (noreply@blogger.com)

Emanuele Rocca

Enabling Kernel Settings in Debian

This time it’s about enabling new kernel config options in the official Debian kernel packages. A few dependencies are needed to run the various scripts used by the Debian kernel folks, as well as to build the kernel itself:

apt install git gpg python3-debian python3-dacite
apt build-dep linux

With that in place, fetch the linux and kernel-team repos:

git clone --depth 1 https://salsa.debian.org/kernel-team/linux
git clone --depth 1 https://salsa.debian.org/kernel-team/kernel-team

So far you’ve only got the Debian-specific bits. Fetch the actual kernel sources now. In the likely case that you’re building a stable kernel, run the following from within the linux directory:

debian/bin/genorig.py https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git

Use the torvalds repo if you’re building an RC version instead:

debian/bin/genorig.py https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Now generate the upstream tarball as well as debian/control. The first command will take a bit, and the second command will fail: but that’s success — just as the output says.

debian/rules orig
debian/rules debian/control

Now generate patched sources with:

debian/rules source

Time to edit the kernel config and enable/disable whatever setting you want to change. Take a look around the files under debian/config/ to see where your changes should go. If it’s a setting shared among multiple architectures, that may be debian/config/config. For x86-specific things, the file is debian/config/amd64/config; on aarch64 it is debian/config/arm64/config. If in doubt, you could try asking in #debian-kernel on IRC.
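
For instance, a minimal sketch of enabling a single option for the amd64 configuration (CONFIG_WERROR is just an illustrative choice here, not something this article requires):

# Hypothetical example: append an option to the amd64 config file.
echo 'CONFIG_WERROR=y' >> debian/config/amd64/config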

It may look like you need to figure out where exactly in the file the setting should be placed. That is not the case. There’s a helpful script fixing things up for you:

../kernel-team/utils/kconfigeditor2/process.py .

The above will fail if you forgot to run debian/rules source. The debian/build/source_rt/Kconfig file is needed by the script:

Traceback (most recent call last):
  File "/tmp/linux/../kernel-team/utils/kconfigeditor2/process.py", line 19, in __init__
    menu = fs_menu[featureset or 'none']
           ~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
KeyError: 'rt'

During handling of the above exception, another exception occurred:
[...]
FileNotFoundError: [Errno 2] No such file or directory: './debian/build/source_rt/Kconfig'

If that happens, run:

debian/rules source

Now process.py should work fine and fix your config file.

Excellent, now the config is updated and we’re ready to build the kernel. Off we go:

export MAKEFLAGS=-j$(nproc)
export DEB_BUILD_PROFILES='pkg.linux.nokerneldbg pkg.linux.nokerneldbginfo pkg.linux.notools nodoc'
dpkg-buildpackage -b -nc -uc
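
Once the build finishes, the binary packages end up in the parent directory. A hedged sketch of installing the freshly built image (the exact file name depends on the kernel version and flavour you built):

# Install the resulting kernel image package; adjust the glob to match
# the version/flavour produced by the build above.
sudo dpkg -i ../linux-image-*.deb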

12 February, 2024 07:52AM by Emanuele Rocca (ema@linux.it)

hackergotchi for Gunnar Wolf

Gunnar Wolf

Heads up! A miniDebConf is approaching in Santa Fe, Argentina

I realize it’s a bit late to start publicly organizing this, but… better late than never 😉 I’m happy that some Debian people I have directly contacted have already expressed interest. So, let’s make this public!

For all interested people who are reasonably close to central Argentina, or can be persuaded to come here in a month’s time… You are all welcome!

It seems I managed to convince my good friend Martín Bayo (some Debian people will remember him, as he was present in DebConf19 in Curitiba, Brazil) to get some facilities for us to have a nice Debian get-together in Central Argentina.

Where?

We will meet at APUL — Asociación de Personal no-docente de la Universidad Nacional del Litoral, in downtown Santa Fe, Argentina.

When?

Saturday, 2024.03.09. It is quite likely we can get some spaces for continuing over Sunday if there is demand.

What are we planning?

We have little time for planning… but we want to have a space for Debian-related outreach (so, please think about a topic or two you’d like to share with a general, not-too-technical, free-software-interested audience). Please tell me by mail (gwolf@debian.org) about any ideas you might have.

We also want to have a general hacklab-style area to hang out, work a bit in our projects, and spend a good time together.

Logistics

I have briefly commented about this with our dear and always mighty DPL, and Debian will support Debian-related people interested in attending; please check personally with me for specifics on how to handle this case by case. My intention is to cover costs for travel, accommodation (one or two nights) and food for whoever is interested in coming over.

More information

As I don’t want to direct people to keep an eye on my blog post for updates, I’ll copy this information (and keep it updated!) at the Debian Wiki / DebianEvents / ar / 2024 / MiniDebConf / Santa Fe — please refer to that page!

Contact

Codes of Conduct

DebConf and Debian Code of Conduct apply.

See the DebConf Code of Conduct and the Debian Code of Conduct.

Registration

Registration is free, but needed. See the separate Registration page.

Talks

Please, send your proposal to gwolf@debian.org

12 February, 2024 12:03AM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Monthly report about Debian Long Term Support, January 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In January, 16 contributors were paid to work on Debian LTS; their reports are available below:

  • Abhijith PA did 14.0h (out of 7.0h assigned and 7.0h from previous period).
  • Bastien Roucariès did 22.0h (out of 16.0h assigned and 6.0h from previous period).
  • Ben Hutchings did 14.5h (out of 8.0h assigned and 16.0h from previous period), thus carrying over 9.5h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 10.0h (out of 10.0h assigned).
  • Emilio Pozuelo Monfort did 10.0h (out of 14.75h assigned and 27.0h from previous period), thus carrying over 31.75h to the next month.
  • Guilhem Moulin did 9.75h (out of 25.0h assigned), thus carrying over 15.25h to the next month.
  • Holger Levsen did 3.5h (out of 12.0h assigned), thus carrying over 8.5h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Roberto C. Sánchez did 8.75h (out of 9.5h assigned and 2.5h from previous period), thus carrying over 3.25h to the next month.
  • Santiago Ruano Rincón did 13.5h (out of 8.25h assigned and 7.75h from previous period), thus carrying over 2.5h to the next month.
  • Sean Whitton did 0.5h (out of 0.25h assigned and 5.75h from previous period), thus carrying over 5.5h to the next month.
  • Sylvain Beucler did 9.5h (out of 23.25h assigned and 18.5h from previous period), thus carrying over 32.25h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 12.0h (out of 10.25h assigned and 1.75h from previous period).
  • Utkarsh Gupta did 8.5h (out of 35.75h assigned), thus carrying over 24.75h to the next month.

Evolution of the situation

In January, we released 25 DLAs.

A variety of particularly notable packages were updated during January. Among those updates were the Linux kernel (both versions 5.10 and 4.19), mariadb-10.3, openjdk-11, firefox-esr, and thunderbird.

In addition to the many other LTS package updates which were released in January, LTS contributors continue their efforts to make impactful contributions both within the Debian community and beyond.

Thanks to our sponsors

Sponsors that joined recently are in bold.

12 February, 2024 12:00AM by Roberto C. Sánchez

Reproducible Builds (diffoscope)

diffoscope 257 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 257. This version includes the following changes:

[ James Addison ]
* Parse the header and hunksize of diffs strictly before parsing the context
  below. (Closes: reproducible-builds/diffoscope#363)
* Reformat code to comply with the latest version of Black (24.1.1).

[ Chris Lamb ]
* Expand the previous changelog entry to include the CVE number that was
  subsequently assigned.
* Bump the minimum Black requirement to run the "Black clean" test and make
  test_zip.py Black clean.

You can find out more by visiting the project homepage.

12 February, 2024 12:00AM

February 11, 2024

hackergotchi for Marco d'Itri

Marco d'Itri

Extending access to the systemd RuntimeDirectory with a POSIX ACL

inn2 uses ephemeral UNIX domain sockets in /run/news/ to communicate with the ctlinnd program. Since the directory is only writeable by the "news" user, other unprivileged users are not able to use the command.

I solved this by extending the inn2.service systemd unit with a drop-in file which uses setfacl to give access to my user "md" to the RuntimeDirectory created by systemd. This is the content of /etc/systemd/system/inn2.service.d/md-ctlinnd.conf:

[Service]
# innd will change the permissions of /run/news/ when started: without
# creating it now with mode 0775 then that will change the ACL mask.
RuntimeDirectoryMode=0775
# allow user md to run ctlinnd(8), which creates sockets in /run/news/
ExecStartPost=/usr/bin/setfacl --modify user:md:rwx $RUNTIME_DIRECTORY

The non-obvious issue here is that the innd daemon on startup will change the directory permissions in a way which sets a more restrictive (non group-writeable) ACL mask, and this would make the newly created user ACL ineffective. The solution is to create the directory group-writeable from the start.
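
A minimal sketch of applying and verifying the change, assuming the drop-in above is installed (the unit and directory names are the ones from this post):

# Reload systemd so it sees the new drop-in, restart the unit, then
# inspect the resulting ACL (run as root).
systemctl daemon-reload
systemctl restart inn2.service
getfacl /run/news    # expect a "user:md:rwx" entry and a mask of "rwx"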

(Beware: this creates a trivial privilege escalation from md to news.)

11 February, 2024 04:12PM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Adding a new date entry using GAS on Google Docs.

Adding a new date entry using GAS on Google Docs. When I am writing a diary, or having a weekly meeting, it is sometimes tedious to add today's date every time. I can automate such tasks with GAS. There's an onOpen method that gets called every time a doc is opened, and you can use that to add the latest date whenever the most recent entry does not match today's date.

11 February, 2024 12:15AM by Junichi Uekawa

hackergotchi for Freexian Collaborators

Freexian Collaborators

Debian Contributions: Upcoming Improvements to Salsa CI, /usr-move, and more! (by Utkarsh Gupta)

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Upcoming Improvements to Salsa CI, by Santiago Ruano Rincón

Santiago started picking up the work done by Outreachy intern Enock Kashada (a big thanks to him!) to solve some long-standing issues in Salsa CI. Currently, the first job in a Salsa CI pipeline is the extract-source job, used to produce a debianized source tree of the project. This job was introduced to make it possible to build the project on different architectures in the subsequent build jobs. However, that extract-source approach is sub-optimal: not only does it increase the execution time of the pipeline by some minutes, but projects whose source tree is too large are also unable to use the pipeline. The debianized source tree is passed as an artifact to the build jobs, and for those large projects the size of their source tree exceeds Salsa’s limits. This specific issue is documented as issue #195, and the proposed solution is to get rid of the extract-source job, relying on sbuild in the build job itself (see issue #296).

Switching to sbuild would also help to improve the build source job, solving issues such as #187 and #298.

The current work-in-progress is very preliminary, but it has already been possible to run the build (amd64), build-i386 and build-source jobs using sbuild with the unshare mode. The image on the right shows a pipeline that builds grep. All the test jobs use the artifacts of the new build job. There is a lot of work remaining, mainly making the integration with ccache work. This change could break some things, so it will also be important to test how the new pipeline works with complex projects.

Also, thanks to Emmanuel Arias, we are proposing a Google Summer of Code 2024 project to improve Salsa CI. As part of the ongoing work in preparation for the GSoC 2024 project, Santiago has proposed a merge request to make it more efficient for contributors to test their changes on the Salsa CI pipeline.

/usr-move, by Helmut Grohne

In January, we sent most of the moving patches for the set of packages involved with debootstrap. Notably missing is glibc, which turns out to be harder than anticipated to handle via dumat, because it has Conflicts between different architectures, which dumat does not analyze.

Patches for diversion mitigations have been updated so that they no longer exhibit any loss.

The main change here is that packages which are being diverted now support the diverting packages in transitioning their diversions. We also supported a few packages with non-trivial changes such as netplan.io. dumat has been enhanced to better support derivatives such as Ubuntu.

Miscellaneous contributions

  • Python 3.12 migration trundles on. Stefano Rivera helped port several new packages to support 3.12.
  • Stefano updated the Sphinx configuration of DebConf Video Team’s documentation, which was broken by Sphinx 7.
  • Stefano published the videos from the Cambridge MiniDebConf to YouTube and PeerTube.
  • DebConf 24 planning has begun, and Stefano & Utkarsh have started work on this.
  • Utkarsh re-sponsored the upload of golang-github-prometheus-community-pgbouncer-exporter for Lena.
  • Colin Watson added Incus support to autopkgtest.
  • Colin discovered Perl::Critic and used it to tidy up some poor practices in several of his packages, including debconf.
  • Colin did some overdue debconf maintenance, mainly around tidying up error message handling in several places (1, 2, 3).
  • Colin figured out how to update the mirror size documentation in debmirror, last updated in 2010. It should now be much easier to keep it up to date regularly.
  • Colin issued a man-db buster update to clean up some irritations due to strict sandboxing.
  • Thorsten Alteholz adopted two more packages, magicfilter and ifhp, for the debian-printing team. Those packages are the last ones of the latest round of adoptions to preserve the old printing protocol within Debian. If you know of other packages that should be retained, please don’t hesitate to contact Thorsten.
  • Enrico participated in /usr-merge discussions with Helmut.
  • Helmut sent patches for 16 cross build failures.
  • Helmut supported Matthias Klose (not affiliated with Freexian) with adding -for-host support to gcc-defaults.
  • Helmut uploaded dput-ng enabling dcut migrate and merging two MRs of Ben Hutchings.
  • Santiago took part in the discussions relating to the EU Cyber Resilience Act (CRA) and the Debian public statement that was published last year. He participated in a meeting with Members of the European Parliament (MEPs), Marcel Kolaja and Karen Melchior, and their teams to clarify some points about the impact of the CRA on Debian and downstream projects, and about the improvements in the last version of the proposed regulation.

11 February, 2024 12:00AM by Utkarsh Gupta

February 10, 2024

Andrew Cater

Debian point releases - updated media for Bullseye (11.9) and Bookworm (12.5) - 2024-02-10

It's been a LONG day: two point releases in a day take of the order of twelve or thirteen hours of fairly solid work from those doing the releases and testing.

Thanks firstly to the main Debian release team for all the initial work.

Thanks to Isy, RattusRattus, Sledge and egw in Cottenham, smcv and Helen closer to the centre of Cambridge, cacin and others who have dropped in and out of IRC and helped testing.

I've been at home but active on IRC - missing the team (and the food) and drinking far too much coffee/eating too many biscuits.

We've found relatively few bugs that we haven't previously noted: it's been a good day. Back again, at some point a couple of months from now to do this all over again.

With luck, I can embed a picture of the Cottenham folk below: it's fun to know _exactly_ where people are because you've been there yourself.



10 February, 2024 10:54PM by Andrew Cater (noreply@blogger.com)

February 09, 2024

Thorsten Alteholz

My Debian Activities in January 2024

FTP master

This month I accepted 333 and rejected 31 packages. The overall number of packages that got accepted was 342.

Hooray, I already accepted package number 30000.

The statistics I get my numbers from started in February 2002. Up to now, 81694 packages have been accepted. Given that I accepted package number 20000 in October 2020, would I be able to accept half of all the packages that ever made it through NEW?

Debian LTS

This was my hundred-and-fifteenth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

During my allocated time I uploaded:

  • [DLA 3726-1] bind9 security update for one CVE to fix a stack exhaustion issue
  • [#1060186] Bookworm PU-bug for libde265; yes, this is a new one.
  • [#1056935] Bullseye PU-bug for libde; yes, this is a new one as well

This month I was finally able to really run the test suite of bind9. I had already wanted to give up on this package, but Santiago encouraged me to proceed. So, here you are: a fixed Buster version. Jessie and Stretch have to wait a bit until the dust has settled.

Last but not least I also did a few days of frontdesk duties.

Debian ELTS

This month was the sixty-sixth ELTS month. During my allocated time I uploaded:

  • [ELA-1031-1] xerces-c security update for one CVE to fix an out-of-bounds access in Jessie and Stretch
  • [ELA-1036-1] jasper security update for one CVE to fix an invalid memory write

This month I also worked on the Jessie and Stretch updates for bind9. The uploads should happen soon. I also started to work on an update for exim4. Last but not least I did a few days of frontdesk duties.

Debian Printing

This month I adopted:

At the moment these packages are the last adoptions to preserve the old printing protocol within Debian. If you know of other packages that should be retained, please don’t hesitate to ask me. But don’t wait too long; I have fun processing RM-bugs :-).

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream version of:

Debian IoT

This month I uploaded new upstream versions of:

  • pyicloud to remove the deprecated dependency python3-future

Other stuff

This month I uploaded new upstream versions of packages, did source uploads for transitions, or uploaded packages to fix one issue or another:

09 February, 2024 03:23PM by alteholz

Abhijith PA

A new (kind of) phone

I was using a refurbished Xiaomi Redmi 6 Pro (codename: sakura). I remember buying this phone to run LineageOS, as I found that this model had support. But by the time I bought it (there was quite a gap between my research and the actual purchase), LineageOS had ended support for this device. The maintainer might’ve ended this port; I don’t blame them. It’s unrewarding work.

Later I found there was a DotOS custom ROM available. I did a week of test runs. There seemed to be a lot of bugs, but they were all minor and I could live with that. Since there was no other official port for the Redmi 6 Pro at the time, I settled on this buggy DotOS nightly build.

The phone was doing fine all these (3+) years, but recently the camera started showing greyish dots in some areas, to the point that I can’t even take a proper photo. The outer body of the phone wasn’t original, as it was refurbished, so the colors started to peel off and it wasn’t aesthetically pleasing. Then there was the location detection issue: it took a while to get a fix, and the battery drained fast. I recently started mapping public transportation routes, and the GPS issue was a problem for me. With all these problems combined, I decided to buy a new phone.

But choosing a phone wasn’t easy. There are far too many options on the market. However, one thing I was quite sure of was that I wouldn’t be buying a brand new phone, but a refurbished one. It is my protest against all these phone manufacturers who have convinced the general public that a mobile phone’s lifespan is only 2 years. Of course, there are a few exceptions, but the majority of players in the Indian market are not among them.

Now that I think about it, I haven’t bought any brand new computing device, be it phone or laptop, since 2015. All are refurbished devices, except for an Orange Pi. My Samsung R439 laptop, which I have already blogged about, is still going strong.

Back to picking a phone. I began by comparing the LineageOS website against an online refurbished store to pick a phone that has official LineageOS support, then searched Reddit for any reported hardware failures.

Google Pixel phones are well-known in the custom ROM community for ease of installation, good hardware and Android security releases. My above claims are from privacyguides.org and XDA-developers. I was convinced to go with this choice. But I have someone at home who is at the age of throwing things, so I didn’t want to risk buying an expensive Pixel phone only to have it end up with a broken screen and edges. Perhaps I will consider buying a Pixel next time. After doing some more research and cross-comparison, I landed on the Redmi Note 9, codename: merlinx.

Based on my previous experience, I knew it was going to be a pain to unlock the bootloader of Xiaomi phones, and I was prepared for that, but this time there was an extra hoop. The phone came with MIUI version 13. A quick search on the XDA forums and Reddit told me that unless I downgraded to version 12 or so, it would be difficult to unlock the bootloader on this device. And performing the downgrade wasn’t exactly easy, as it needs tools like SP Flash etc.

It took some time, but I completed the downgrade process. From there, the rest was a cakewalk, as everything was perfectly documented in the LineageOS wiki. And ta-da, I have a phone running LineageOS. I’ve been using it for some time and honestly I haven’t come across any bugs in my way of using it.

One piece of advice I will give you if you are going to flash a custom ROM: there are plenty of videos on YouTube about unlocking bootloaders and installing ROMs. Please REFRAIN from watching them and experimenting on your phone unless you have first figured out a way to unbrick your device.

09 February, 2024 10:53AM

Reproducible Builds (diffoscope)

diffoscope 256 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 256. This version includes the following changes:

* CVE-2024-25711: Use a deterministic name when extracting content from GPG
  artifacts instead of trusting the value of gpg's --use-embedded-filenames.

  This prevents a potential information disclosure vulnerability that could
  have been exploited by providing a specially-crafted GPG file with an
  embedded filename of, say, "../../.ssh/id_rsa".

  Many thanks to Daniel Kahn Gillmor <dkg@debian.org> for reporting this
  issue and providing feedback.

  (Closes: reproducible-builds/diffoscope#361)

* Temporarily fix support for Python 3.11.8 re. a potential regression
  with the handling of ZIP files. (See reproducible-builds/diffoscope#362)

You can find out more by visiting the project homepage.

09 February, 2024 12:00AM

February 08, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.12.8.0.0 on CRAN: New Upstream, Interface Polish

[Armadillo image]

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1119 other packages on CRAN, downloaded 32.5 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 575 times according to Google Scholar.

This release brings a new (stable) upstream (minor) release Armadillo 12.8.0 prepared by Conrad two days ago. We, as usual, prepared a release candidate which we tested against the over 1100 CRAN packages using RcppArmadillo. This found no issues, which was confirmed by CRAN once we uploaded and so it arrived as a new release today in a fully automated fashion.

We also made a small change that had been prepared in GitHub issue #400: a few internal header files that were cluttering the top level of the include directory have been moved to internal directories. The standard header is of course unaffected, and the set of ‘full / light / lighter / lightest’ headers (matching what we did a while back in Rcpp) also continues to work as one expects. This change was also tested in a full reverse-dependency check in January but had not been released to CRAN yet.

The set of changes since the last CRAN release follows.

Changes in RcppArmadillo version 0.12.8.0.0 (2024-02-06)

  • Upgraded to Armadillo release 12.8.0 (Cortisol Injector)

    • Faster detection of symmetric expressions by pinv() and rank()

    • Expanded shift() to handle sparse matrices

    • Expanded conv_to for more flexible conversions between sparse and dense matrices

    • Added cbrt()

    • More compact representation of integers when saving matrices in CSV format

  • Five non-user facing top-level include files have been removed (#432 closing #400 and building on #395 and #396)

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

08 February, 2024 11:33PM

Sven Hoexter

Use GitHub CLI to List all Repository Secrets

Write it down before I forget about it again:

for x in $(gh api graphql --paginate -f query='query($endCursor:String) { organization(login:"myorg") {
    repositories(first: 100, after: $endCursor, isArchived:false) {
        pageInfo {
            hasNextPage
            endCursor
        }
        nodes {
            name
        }
    }
    }
    }' --jq '.data.organization.repositories.nodes[].name'); do

    secrets=$(gh secret list --json name --jq '.[].name' -R "myorg/${x}" | tr '\n' ',')
    if ! [ -z "${secrets}" ]; then
        echo "${x},${secrets}"
    fi
done

This requests a list of all non-archived repositories in a GitHub org and queries the repository secrets for each. If we find some, we output the repo name and the secrets in a comma-separated list. Not real CSV, but good enough for further processing. I have to admit it's kinda beautiful what you can do with the gh CLI by now. Sadly it seems the secrets are not yet available via GraphQL (or I missed it in the docs), so I just use the gh CLI to do the REST calls.
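
A possible way to use it, assuming the loop above is saved to a script (both the file name and the column formatting below are just illustrative, not part of the original snippet):

# Run the loop and pretty-print the comma-separated output.
bash list-secrets.sh > repo-secrets.csv
column -s, -t < repo-secrets.csv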

08 February, 2024 10:12AM

hackergotchi for Bits from Debian

Bits from Debian

DebConf24 Logo Contest Results

Earlier this month the DebConf team announced the DebConf24 Logo Contest asking aspiring artists, designers, and contributors to submit an image that would represent the host city of Busan, the host nation of South Korea, and promote the next Debian Developer Conference.

The logo contest for DebConf24 received 10 submissions and garnered 354 responses with 3 proposals in particular getting very close to first place. The winning logo received 88 votes, the 2nd favored logo received 87 votes, and the 3rd most favored received 86 votes.

Thank you to Woohee Yang and Junsang Moon for sharing their artistic visions.

A very special Thank You to everyone who took the time to vote for our beautiful new logo!

The DebConf24 Team is proud to share for preview only the winning logo for the 24th Debian Developer Conference:

[DebConf24 Logo Contest Winner]

'sun-seagull-sea' by Woohee Yang

This is a preview copy, other revisions will occur for sizing, print, and media... but we had to share it with you all now. :).

Looking forward to seeing you all at #debconf24 in #Busan, South Korea 2024!

08 February, 2024 05:00AM by Donald Norwood

Reproducible Builds

Reproducible Builds at FOSDEM 2024

Core Reproducible Builds developer Holger Levsen presented at the main track at FOSDEM on Saturday 3rd February this year in Brussels, Belgium. The talk was titled Reproducible Builds: The First Ten Years; its abstract reads:

In this talk Holger ‘h01ger’ Levsen will give an overview about Reproducible Builds: How it started with a small BoF at DebConf13 (and before), then grew from being a Debian effort to something many projects work on together, until in 2021 it was mentioned in an Executive Order of the President of the United States. And of course, the talk will not end there, but rather outline where we are today and where we still need to be going, until Debian stable (and other distros!) will be 100% reproducible, verified by many.

h01ger has been involved in reproducible builds since 2014 and so far has set up automated reproducibility testing for Debian, Fedora, Arch Linux, FreeBSD, NetBSD and coreboot.

More information can be found on FOSDEM’s own page for the talk, including a video recording and slides.


Separate from Holger’s talk, however, there were a number of other talks about reproducible builds at FOSDEM this year:

… and there was even an entire track on Software Bill of Materials.

08 February, 2024 12:00AM

February 07, 2024

Reproducible Builds in January 2024

Welcome to the January 2024 report from the Reproducible Builds project. In these reports we outline the most important things that we have been up to over the past month. If you are interested in contributing to the project, please visit our Contribute page on our website.


“How we executed a critical supply chain attack on PyTorch”

John Stawinski and Adnan Khan published a lengthy blog post detailing how they executed a supply-chain attack against PyTorch, a popular machine learning platform “used by titans like Google, Meta, Boeing, and Lockheed Martin”:

Our exploit path resulted in the ability to upload malicious PyTorch releases to GitHub, upload releases to [Amazon Web Services], potentially add code to the main repository branch, backdoor PyTorch dependencies – the list goes on. In short, it was bad. Quite bad.

The attack pivoted on PyTorch’s use of “self-hosted runners” as well as submitting a pull request to address a trivial typo in the project’s README file to gain access to repository secrets and API keys that could subsequently be used for malicious purposes.


New Arch Linux forensic filesystem tool

On our mailing list this month, long-time Reproducible Builds developer kpcyrd announced a new tool designed to forensically analyse Arch Linux filesystem images.

Called archlinux-userland-fs-cmp, the tool is “supposed to be used from a rescue image (any Linux) with an Arch install mounted to, [for example], /mnt.” Crucially, however, “at no point is any file from the mounted filesystem eval’d or otherwise executed. Parsers are written in a memory safe language.”

More information about the tool can be found on their announcement message, as well as on the tool’s homepage. A GIF of the tool in action is also available.


Issues with our SOURCE_DATE_EPOCH code?

Chris Lamb started a thread on our mailing list summarising some potential problems with the source code snippet the Reproducible Builds project has been using to parse the SOURCE_DATE_EPOCH environment variable:

I’m not 100% sure who originally wrote this code, but it was probably sometime in the ~2015 era, and it must be in a huge number of codebases by now.

Anyway, Alejandro Colomar was working on the shadow security tool and pinged me regarding some potential issues with the code. You can see this conversation here.

Chris ends his message with a request that those with intimate or low-level knowledge of time_t, C types, overflows and the various parsing libraries in the C standard library (etc.) contribute further info.
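
For context, here is a minimal shell sketch of how a build script typically honours SOURCE_DATE_EPOCH; this is a generic illustration of the variable's use, not the C snippet under discussion:

# Use SOURCE_DATE_EPOCH for embedded timestamps, falling back to the
# current time when the variable is unset (GNU date assumed).
BUILD_DATE=$(date --utc --date="@${SOURCE_DATE_EPOCH:-$(date +%s)}" '+%Y-%m-%dT%H:%M:%SZ')
echo "Embedding timestamp: ${BUILD_DATE}"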


Distribution updates

In Debian this month, Roland Clobus posted another detailed update of the status of reproducible ISO images on our mailing list. In particular, Roland helpfully summarised that “all major desktops build reproducibly with bullseye, bookworm, trixie and sid provided they are built for a second time within the same DAK run (i.e. [within] 6 hours)”. Additionally 7 of the 8 bookworm images from the official download link build reproducibly at any later time.

In addition to this, three reviews of Debian packages were added, 17 were updated and 15 were removed this month adding to our knowledge about identified issues.

Elsewhere, Bernhard posted another monthly update for his work elsewhere in openSUSE.


Community updates

A number of improvements were made to our website, including Bernhard M. Wiedemann fixing a number of typos of the term ‘nondeterministic’ [] and Jan Zerebecki adding a substantial and highly welcome section to our page about SOURCE_DATE_EPOCH to document its interaction with distribution rebuilds. [].


diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, such as uploading versions 245 and 255 to Debian, but focused on triaging and/or merging code from other contributors. This included adding support for comparing ‘eXtensible ARchive’ (.XAR/.PKG) files, courtesy of Seth Michael Larson [][], as well as considerable work from Vekhir to fix compatibility between various subtly incompatible versions of the progressbar libraries in Python [][][][]. Thanks!


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen:

  • Debian-related changes:

    • Reduce the number of arm64 architecture workers from 24 to 16. []
    • Use diffoscope from the Debian release being tested again. []
    • Improve the handling when killing unwanted processes [][][] and be more verbose about it, too [].
    • Don’t mark a job as ‘failed’ if process marked as ‘to-be-killed’ is already gone. []
    • Display the architecture of builds that have been running for more than 48 hours. []
    • Reboot arm64 nodes when they hit an OOM (out of memory) state. []
  • Package rescheduling changes:

    • Reduce IRC notifications to ‘1’ when rescheduling due to package status changes. []
    • Correctly set SUDO_USER when rescheduling packages. []
    • Automatically reschedule packages regressing to FTBFS (build failure) or FTBR (build success, but unreproducible). []
  • OpenWrt-related changes:

    • Install the python3-dev and python3-pyelftools packages as they are now needed for the sunxi target. [][]
    • Also install the libpam0g-dev which is needed by some OpenWrt hardware targets. []
  • Misc:

    • As it’s January, set the real_year variable to 2024 [] and bump various copyright years as well [].
    • Fix a large (!) number of spelling mistakes in various scripts. [][][]
    • Prevent Squid and Systemd processes from being killed by the kernel’s OOM killer. []
    • Install the iptables tool everywhere, else our custom rc.local script fails. []
    • Cleanup the /srv/workspace/pbuilder directory on boot. []
    • Automatically restart Squid if it fails. []
    • Limit the execution of chroot-installation jobs to a maximum of 4 concurrent runs. [][]

Significant amounts of node maintenance was performed by Holger Levsen (eg. [][][][][][][] etc.) and Vagrant Cascadian (eg. [][][][][][][][]). Indeed, Vagrant Cascadian handled an extended power outage for the network running the Debian armhf architecture test infrastructure. This provided the incentive to replace the UPS batteries and consolidate infrastructure to reduce future UPS load. []

Elsewhere in our infrastructure, however, Holger Levsen also adjusted the email configuration for @reproducible-builds.org to deal with a new SMTP email attack. []


Upstream patches

The Reproducible Builds project tries to detect, dissect and fix as many (currently) unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Separate to this, Vagrant Cascadian followed up with the relevant maintainers when reproducibility fixes were not included in newly-uploaded versions of the mm-common package in Debian — this was quickly fixed, however. []



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

07 February, 2024 10:16PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

carbon

I got a new work laptop this year: A Thinkpad X1 Carbon (Gen 11). It wasn't the one I wanted: I'd ordered an X1 Nano, which had a footprint very reminiscent to me of my beloved x40.

Never mind! The Carbon is lovely. Despite being ostensibly the same size as the T470s it's replacing, it's significantly more portable, and more capable. The two USB-A ports, as well as the full-size HDMI port, are welcome and useful (over the Nano).

I used to keep notes on setting up Linux on different types of hardware, but I haven't really bothered now for years. Things Just Work. That's good!

My old machine naming schemes are stretched beyond breaking point (and I've re-used my favourite hostname, qusp, one too many times) so I went for something new this time: riffing on Carbon, I settled (for now) on carbyne, a carbon allotrope which is of interest to nanotechnologists. (Seems appropriate.)

07 February, 2024 07:24PM

February 06, 2024

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Montreal's Debian & Stuff - February 2024

New Year, Same Great People! Our Debian User Group met for the first of our 2024 bi-monthly meetings on February 4th and it was loads of fun. Around twelve different people made it this time to Koumbit, where the meeting happened.

As a reminder, our meetings are called "Debian & Stuff" because we want to be as open as possible and welcome people that want to work on "other stuff" than Debian.

Here is what we did:

pollo:

  • tested a laptop that had a defective battery with a known good one (the problem was indeed with the battery :D)
  • renewed his expiring OpenPGP key
  • worked on removing trapperkeeper-scheduler-clojure from the DebCI reject_list
  • helped anarcat with packaging sigal
  • managed to feed most of the people present :)

LeLutin:

  • worked with lavamind to upload new upstream release of smokeping

mjeanson:

  • migrated from lxd to incus on his servers
  • helped anarcat with flashing his AirGradient

lavamind:

  • submitted a bug report on r-cran-rserve (promptly fixed/uploaded by maintainer!)
  • reviewed and uploaded smokeping
  • bug triaged the facter package
  • worked on puppet-agent new upstream version 8.4.0

viashimo:

  • updated puppet-strings from 2.9.0 to 4.2.1
  • reported upstream test failures on puppet-strings with recent versions of mdl

tvaz & tassia:

  • drafted a call & request for funding for the Vanier College FLOSS Club hardware marathon at Eastern Bloc
  • worked on an application to conduct research at Vanier College on Debian usability
  • babysitted :-)

joeDoe:

  • worked on his AirGradient
  • debugged the WiFi and VPN setup on his new laptop

anarcat:

Pictures

I was pretty busy this time around and ended up not taking a lot of pictures. Here's a bad one of the ceiling at Koumbit I took, and a picture by anarcat of the content of his boxes of loot:

[A picture of the ceiling at Koumbit] [The content of anarcat's boxes of loot]

06 February, 2024 07:30PM by Louis-Philippe Véronneau

hackergotchi for Robert McQueen

Robert McQueen

Flathub: Pros and Cons of Direct Uploads

I attended FOSDEM last weekend and had the pleasure to participate in the Flathub / Flatpak BOF on Saturday. A lot of the session was used up by an extensive discussion about the merits (or not) of allowing direct uploads versus building everything centrally on Flathub’s infrastructure, and related concerns such as automated security/dependency scanning.

My original motivation behind the idea was essentially two things. The first was to offer a simpler way forward for applications that use language-specific build tools that resolve and retrieve their own dependencies from the internet. Flathub doesn’t allow network access during builds, and so a lot of manual work and additional tooling is currently needed (see Python and Electron Flatpak guides). And the second was to offer a maybe more familiar flow to developers from other platforms who would just build something and then run another command to upload it to the store, without having to learn the syntax of a new build tool. There were many valid concerns raised in the room, and I think on reflection that this is still worth doing, but might not be as valuable a way forward for Flathub as I had initially hoped.

Of course, for a proprietary application where Flathub never sees the source or where it’s built, whether that binary is uploaded to us or downloaded by us doesn’t change much. But for an FLOSS application, a direct upload driven by the developer causes a regression on a number of fronts. We’re not getting too hung up on the “malicious developer inserts evil code in the binary” case because Flathub already works on the model of verifying the developer and the user makes a decision to trust that app – we don’t review the source after all. But we do lose other things such as our infrastructure building on multiple architectures, and visibility on whether the build environment or upload credentials have been compromised unbeknownst to the developer.

There is now a manual review process for when apps change their metadata such as name, icon, license and permissions – which would apply to any direct uploads as well. It was suggested that if only heavily sandboxed apps (eg no direct filesystem access without proper use of portals) were permitted to make direct uploads, the impact of such concerns might be somewhat mitigated by the sandboxing.

However, it was also pointed out that my go-to example of “Electron app developers can upload to Flathub with one command” was also a bit of a fiction. At present, none of them would pass that stricter sandboxing requirement. Almost all Electron apps run old versions of Chromium with less complete portal support, needing sandbox escapes to function correctly, and Electron (and Chromium’s) sandboxing still needs additional tooling/downstream patching to run inside a Flatpak. Buh-boh.

I think for established projects who already ship their own binaries from their own centralised/trusted infrastructure, and for developers who have understandable sensitivities about binary integrity, such as encryption, password or financial tools, it’s a definite improvement that we’re able to set up direct uploads with such projects with less manual work. There are already quite a few applications – including verified ones – where the build recipe simply fetches a binary built elsewhere and unpacks it, and if this is already done centrally by the developer, repeating the exercise on Flathub’s server adds little value.

However, for the individual developer experience, I think we need to zoom out a bit and think about how to improve this from a tools and infrastructure perspective as we grow Flathub, and as we seek to raise funds from different sources for these improvements. I took notes on everything that was mentioned as a tooling limitation during the BOF, along with a few ideas about how we could improve things, and hope to share these soon as part of an RFP/RFI (Request For Proposals/Request for Information) process. We don’t have funding yet, but if we have some prospective collaborators to help refine the scope and estimate the cost/effort, we can use this to go and pursue funding opportunities.

06 February, 2024 10:57AM by ramcq

February 02, 2024

hackergotchi for Junichi Uekawa

Junichi Uekawa

February already.

February already. I was planning on doing some Debian Rust stuff but then I need to reconstruct my environment.

02 February, 2024 09:42PM by Junichi Uekawa

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in January 2024

02 February, 2024 04:26PM

Scarlett Gately Moore

Some exciting news! Kubuntu: I’m back!!!

It’s official, the Kubuntu Council has hired me part time to work on the 24.04 LTS release, preparation for Plasma 6, and to bring life back into the Distribution. First I want to thank the Kubuntu Council for this opportunity and I plan a long and successful journey together!!!!

My first week (I started midweek):

It has been a busy one! Many meet and greets with the team and other interested parties. I had the chance to chat with Mike from Kubuntu Focus and I have to say I am absolutely amazed with the work they have done, and if you are in the market for a new laptop, you must check these out!!! https://kfocus.org Or if you want to try before you buy you can download the OS! All they ask is for an e-mail, which is completely reasonable. Hosting isn’t free! Besides, you can opt out anytime and they don’t share it with anyone. I look forward to working closely with this project.

We now have a Kubuntu Team in KDE Invent (https://invent.kde.org/teams/distribution-kubuntu); if you would like to join us, please don’t hesitate to ask! I have started a new Wiki, and our first page is the ever important Bug triaging! It is still a WIP, but you can check it out here: https://invent.kde.org/teams/distribution-kubuntu/docs/-/wikis/Bug-Triage-Story-WIP . With that said, I have started the Launchpad work to make tracking our bugs easier by subscribing kubuntu-bugs to all our packages and creating proper projects for those packages missing them.

We have compiled a list of our various documentation links that need updated and Rick Timmis is updating kubuntu.org! Aaron Honeycutt has been busy with the Kubuntu Manual https://github.com/kubuntu-team/kubuntu-manual which is in good shape. We just need to improve our developer story 🙂

I have been working on the rather massive Apparmor bug https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/2046844 with testing the fixes from the ppa and writing profiles for the various KDE packages affected ( pretty much anything that uses webengine ) and making progress there.

My next order of business is staging Frameworks 5.114 with guidance from our super awesome Rik Mills, who has been doing most of the heavy lifting in Kubuntu for many years now. So thank you for that, Rik 🙂

I will also start on our big transition to the Calamares Installer! I do have experience here, so I expect it will be a smooth one.

I am so excited for the future of Kubuntu and the exciting things to come! With that said, the Kubuntu funding is community donation driven. There is enough to pay me part time for a couple of contracts, but it will run out, and a full-time contract would be super awesome. I am reaching out to anyone who enjoys Kubuntu and wants to help with its future: please consider a donation! We are working on more donation options, but for now you can donate through PayPal at https://kubuntu.org/donate/ Thank you!!!!!

02 February, 2024 04:22PM by sgmoore

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in December 2023

02 February, 2024 03:55PM

Ian Jackson

UPS, the Useless Parcel Service; VAT and fees

I recently had the most astonishingly bad experience with UPS, the courier company. They severely damaged my parcels, and were very bad about UK import VAT, ultimately ending up harassing me on autopilot.

The only thing that got their attention was my draft Particulars of Claim for intended legal action.

Surprisingly, I got them to admit in writing that the “disbursement fee” they charge recipients alongside the actual VAT is just something they made up with no legal basis.

What happened

Autumn last year I ordered some furniture from a company in Germany. This was to be shipped by them to me by courier. The supplier chose UPS.

UPS misrouted one of the three parcels to Denmark. When everything arrived, it had been sat on by elephants. The supplier had to replace most of it, with considerable inconvenience and delay to me, and of course a loss to the supplier.

But this post isn’t mostly about that. This post is about VAT. You see, import VAT was due, because of fucking Brexit.

UPS made a complete hash of collecting that VAT. Their computers can’t issue coherent documents, their email helpdesk is completely useless, and their automated debt collection systems run along uninfluenced by any external input.

The crazy, including legal threats and escalating late payment fees, continued even after I paid the VAT discrepancy (which I did despite them not yet having provided any coherent calculation for it).

This kind of behaviour is a very small and mild version of the kind of things British Gas did to Lisa Ferguson, who eventually won substantial damages for harassment, plus £10K of costs.

Having tried asking nicely, and sending stiff letters, I too threatened litigation.

I would have actually started a court claim, but it would have included a claim under the Protection from Harassment Act. Those have to be filed under the “Part 8 procedure”, which involves sending all of the written evidence you’re going to use along with the claim form. Collating all that would be a good deal of work, especially since UPS and ControlAccount didn’t engage with me at all, so I had no idea which things they might actually dispute. So I decided that before issuing proceedings, I’d send them a copy of my draft Particulars of Claim, along with an offer to settle if they would pay me a modest sum and stop being evil robots at me.

Rather than me typing the whole tale in again, you can read the full gory details in the PDF of my draft Particulars of Claim. (I’ve redacted the reference numbers).

Outcome

The draft Particulars finally got their attention. UPS sent me an offer: they agreed to pay me £50, in full and final settlement. That was close enough to my offer that I accepted it. I mostly wanted them to stop, and they do seem to have done so. And I’ve received the £50.

VAT calculation

They also finally included an actual explanation of the VAT calculation. It’s absurd, but it’s not UPS’s absurd:

The clearance was entered initially with estimated import charges of £400.03, consisting of £387.83 VAT, and £12.20 disbursement fee. This original entry regrettably did not include the freight cost for calculating the VAT, and as such when submitted for final entry the VAT value was adjusted to include this and an amended invoice was issued for an additional £39.84.

HMRC calculate the amount against which VAT is raised using the value of goods, insurance and freight, however they also may apply a VAT adjustment figure.

The VAT Adjustment is based on many factors (Incidental costs in regards to a shipment), which includes charge for currency conversion if the invoice does not list values in Sterling, but the main is due to the inland freight from airport of destination to the final delivery point, as this charge varies, for example, from EMA to Edinburgh would be £150, from EMA to Derby would be £1, so each year UPS must supply HMRC with all values incurred for entry build up and they give an average which UPS have to use on the entry build up as the VAT Adjustment.

The correct calculation for the import charges is therefore as follows:

Goods value divided by exchange rate 2,489.53 EUR / 1.1683 = 2,130.89 GBP

Duty: Goods value plus freight (%) 2,130.89 GBP + 5% = 2,237.43 GBP. That total times the duty rate. X 0 % = 0 GBP

VAT: Goods value plus freight (100%) 2,130.89 GBP + 0 = 2,130.89 GBP

That total plus duty and VAT adjustment 2,130.89 GBP + 0 GBP + 7.49 GBP = 2,138.38 GBP. That total times 20% VAT = 427.67 GBP

As detailed above we must confirm that the final VAT charges applied to the shipment were correct, and that no refund of this is therefore due.

This looks very like HMRC-originated nonsense. If only they had put it on the original bills! It’s completely ridiculous that it took four months and near-litigation to obtain it.
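
Purely as a sanity check, here is the same arithmetic replayed in a few lines of Python; the numbers are simply the ones quoted above, and the variable names are mine, not anything UPS or HMRC use:

goods_eur = 2489.53        # invoice value in EUR, as quoted
exchange_rate = 1.1683     # exchange rate used by UPS, as quoted
duty_rate = 0.0            # duty rate applied to this shipment
vat_adjustment = 7.49      # HMRC "VAT adjustment" figure, as quoted
vat_rate = 0.20            # UK import VAT

goods_gbp = goods_eur / exchange_rate         # about 2,130.89 GBP
duty = goods_gbp * 1.05 * duty_rate           # goods plus 5% freight, times 0% duty = 0 GBP
vat_base = goods_gbp + duty + vat_adjustment  # about 2,138.38 GBP
vat = vat_base * vat_rate                     # about 427.68 GBP (427.67 with UPS's rounding)

print(f"VAT base: {vat_base:.2f} GBP, VAT due: {vat:.2f} GBP")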

“Disbursement fee”

One more thing. UPS billed me a £12 “disbursement fee”.

When you import something, there’s often tax to pay. The courier company pays that to the government, and the consignee pays it to the courier. Usually the courier demands it before final delivery, since otherwise they end up having to chase it as a debt.

It is common for parcel companies to add a random fee of their own. As I note in my Particulars, there isn’t any legal basis for this.

In my own offer of settlement I proposed that UPS should:

State under what principle of English law (such as, what enactment or principle of Common Law), you levy the “disbursement fee” (or refund it).

To my surprise they actually responded to this in their own settlement letter. (They didn’t, for example, mention the harassment at all.) They said (emphasis mine):

A disbursement fee is a fee for amounts paid or processed on behalf of a client. It is an established category of charge used by legal firms, amongst other companies, for billing of various ancillary costs which may be incurred in completion of service. Disbursement fees are not covered by a specific law, nor are they legally prohibited.

Regarding UPS’ disbursement fee this is an administrative charge levied for the use of UPS’ deferment account to prepay import charges for clearance through CDS. This charge would therefore be billed to the party that is responsible for the import charges, normally the consignee or receiver of the shipment in question. The disbursement fee as applied is legitimate, and as you have stated is a commonly used and recognised charge throughout the courier industry, and I can confirm that this was charged correctly in this instance.

On UPS’s analysis, they can just make up whatever fee they like. That is clearly not right (and I don’t even need to refer to consumer protection law, which would also make it obviously unlawful).

And, that everyone does it doesn’t make it lawful. There are so many things that are ubiquitous but unlawful, especially nowadays when much of the legal system - especially consumer protection regulators - has been underfunded to beyond the point of collapse.

Next time this comes up I might have a go at getting the fee back. (Obviously I’ll have to pay it first, to get my parcel.)

ParcelForce and Royal Mail

I think this analysis doesn’t apply to ParcelForce and (probably) Royal Mail. I looked into this in 2009, and I found that Parcelforce had been given the ability to write their own private laws: “Schemes” made under section 89 of the Postal Services Act 2000.

This is obviously ridiculous but I think it was the law in 2009. I doubt the intervening governments have fixed it.

Furniture

Oh, yes, the actual furniture. The replacements arrived intact and are great :-).



02 February, 2024 12:38AM

Dirk Eddelbuettel

RQuantLib 0.4.21 on CRAN: Maintenance

A new minor release 0.4.21 of RQuantLib arrived at CRAN this afternoon, and has already been uploaded to Debian as well.

QuantLib is a rather comprehensive free/open-source library for quantitative finance. RQuantLib connects (some parts of) it to the R environment and language, and has been part of CRAN for more than twenty years (!!) as it was one of the first packages I uploaded there.

This release of RQuantLib benefits from some kind attention that Jeroen has been paying to how we build (especially at CRAN) on both macOS and Windows. So the build processes are a little better now, and no internal code changed. QuantLib 1.33 built unchanged.

Changes in RQuantLib version 0.4.21 (2024-02-01)

  • Generalize macOS build to universal build (Jeroen in #179)

  • Generalize Windows build to arm64 (Jeroen in #181)

  • Generalize version string use to support cmake use (Jeroen in #181 fixing #180)

  • Minor update to 'ci.yaml' github action (Dirk)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

02 February, 2024 12:04AM

February 01, 2024

Russ Allbery

Review: System Collapse

Review: System Collapse, by Martha Wells

Series: Murderbot Diaries #7
Publisher: Tordotcom
Copyright: 2023
ISBN: 1-250-82698-5
Format: Kindle
Pages: 245

System Collapse is the second Murderbot novel. Including the novellas, it's the 7th in the series. Unlike Fugitive Telemetry, the previous novella that was out of chronological order, this is the direct sequel to Network Effect. A very direct sequel; it picks up just a few days after the previous novel ended. Needless to say, you should not start here.

I was warned by other people and therefore re-read Network Effect immediately before reading System Collapse. That was an excellent idea, since this novel opens with a large cast, no dramatis personae, not much in the way of a plot summary, and a lot of emotional continuity from the previous novel. I would grumble about this more, like I have in other reviews, but I thoroughly enjoyed re-reading Network Effect and appreciated the excuse.

ART-drone said, “I wouldn’t recommend it. I lack a sense of proportional response. I don’t advise engaging with me on any level.”

Saying much about the plot of this book without spoiling Network Effect and the rest of the series is challenging. Murderbot is suffering from the aftereffects of the events of the previous book more than it expected or would like to admit. It and its humans are in the middle of a complicated multi-way negotiation with some locals, who the corporates are trying to exploit. One of the difficulties in that negotiation is getting people to believe that the corporations are as evil as they actually are, a plot element that has a depressing amount in common with current politics. Meanwhile, Murderbot is trying to keep everyone alive.

I loved Network Effect, but that was primarily for the social dynamics. The planet that was central to the novel was less interesting, so another (short) novel about the same planet was a bit of a disappointment. This does give Wells a chance to show in more detail what Murderbot's new allies have been up to, but there is a lot of speculative exploration and detailed descriptions of underground tunnels that I found less compelling than the relationship dynamics of the previous book. (Murderbot, on the other hand, would much prefer exploring creepy abandoned tunnels to talking about its feelings.)

One of the things this series continues to do incredibly well, though, is take non-human intelligence seriously in a world where the humans mostly don't. It perfectly fills a gap between Star Wars, where neither the humans nor the story take non-human intelligences seriously (hence the creepy slavery vibes as soon as you start paying attention to droids), and the Culture, where both humans and the story do.

The corporates (the bad guys in this series) treat non-human intelligences the way Star Wars treats droids. The good guys treat Murderbot mostly like a strange human, which is better but still wrong, and still don't notice the numerous other machine intelligences. But Wells, as the author, takes all of the non-human characters seriously, which means there are complex and fascinating relationships happening at a level of the story that the human characters are mostly unaware of. I love that Murderbot rarely bothers to explain; if the humans are too blinkered to notice, that's their problem.

About halfway into the story, System Collapse hits its stride, not coincidentally at the point where Murderbot befriends some new computers. The rest of the book is great.

This was not as good as Network Effect. There is a bit less competence porn at the start, and although that's for good in-story reasons I still missed it. Murderbot's redaction of things it doesn't want to talk about got a bit annoying before it finally resolved. And I was not sufficiently interested in this planet to want to spend two novels on it, at least without another major revelation that didn't come. But it's still a Murderbot novel, which means it has the best first-person narrative voice I've ever read, some great moments, and possibly the most compelling and varied presentation of computer intelligence in science fiction at the moment.

There was no feed ID, but AdaCol2 supplied the name Lucia and when I asked it for more info, the gender signifier bb (which didn’t translate) and he/him pronouns. (I asked because the humans would bug me for the information; I was as indifferent to human gender as it was possible to be without being unconscious.)

This is not a series to read out of order, but if you have read this far, you will continue to be entertained. You don't need me to tell you this — nearly everyone reviewing science fiction is saying it — but this series is great and you should read it.

Rating: 8 out of 10

01 February, 2024 06:06AM

Paul Wise

FLOSS Activities January 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

  • OpenStreetMap: fixed a bunch of broken website URLs
  • Debian pass-otp: oathtool safety
  • reportbug: fix crash
  • Debian BTS usertags: fix Python, Ruby, QA, porter, archive, release tags
  • Debian wiki pages: TransitionUploadHook

Issues

Review

  • Debian BTS usertags: changes for the month
  • Debian screenshots:

Administration

  • Debian wiki: unblock IP addresses, approve accounts

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

All work was done on a volunteer basis.

01 February, 2024 05:54AM

January 31, 2024

Dirk Eddelbuettel

dtts 0.1.2 on CRAN: Maintenance

Leonardo and I are happy to announce release 0.1.2, a very minor maintenance release of our dtts package, which has been on CRAN for a little under two years now.

dtts builds upon our nanotime package as well as the beloved data.table to bring high-performance and high-resolution indexing at the nanosecond level to data frames. dtts aims to offer the time-series indexing versatility of xts (and zoo) on top of the immense power of data.table while supporting the highest nanosecond resolution.

This release follows yesterday’s long-awaited release of data.table version 1.15.0, which had been some time in the making as the first new major.minor release since Matt drifted into being less active at the forefront. That release also renamed the one C-level API accessor to data.table (which was added, if memory serves, by Leonardo with our use in mind). So we had to catch up with the renamed identifier; this release does that, and adds a versioned imports statement on data.table.

The short list of changes follows.

Changes in version 0.1.2 (2024-01-31)

  • Update the one exported C-level identifier from data.table following its 1.15.0 release and a renaming

  • Routine continuous integration updates

Courtesy of my CRANberries, there is also a report with diffstat for this release. Questions, comments, issue tickets can be brought to the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

31 January, 2024 11:02PM

Bits from Debian

New Debian Developers and Maintainers (November and December 2023)

The following contributors got their Debian Developer accounts in the last two months:

  • Alexandre Detiste (tchet)
  • Amin Bandali (bandali)
  • Jean-Pierre Giraud (jipege)
  • Timothy Pearson (tpearson)

The following contributor was added as Debian Maintainer in the last two months:

  • Safir Secerovic

Congratulations!

31 January, 2024 03:00PM by Donald Norwood

Valhalla's Things

Macrame Bookbag

Posted on January 31, 2024
Tags: madeof:atoms, craft:macrame

a macrame bag in ~3 mm ecru yarn, with very irregular knots of different types, holding a book with a blue cover. The bottom part has a rigid single layer triangle and a fringe.

In late 2022 I prepared a batch of drawstring backpacks in cotton as reusable wrappers for Christmas gifts; however I didn’t know what cord to use, didn’t want to use paracord, and couldn’t find anything that looked right in the local shops.

With Christmas getting dangerously closer, I visited a craft materials website for unrelated reasons, found out that they sold macrame cords, and panic-bought a few types in the hope that at least one would work for the backpacks.

I got lucky, and my first choice fitted just fine, and I was able to finish the backpacks in time for the holidays.

And then I had a box full of macrame cords in various sizes and types that weren’t the best match for the drawstring in a backpack, and no real use for them.

I don’t think I had ever done macrame, but I have made friendship bracelets in primary school, and a few Friendship Bracelets, But For Real Men So We Call Them Survival Bracelets(TM) more recently, so I didn’t bother reading instructions or tutorials online, I just grabbed the Ashley Book of Knots to refresh myself on the knots used, and decided to make myself a small bag for an A6 book.

I chose one of the thin, ~3 mm cords, Tre Sfere Macramé Barbante, of which there was plenty, so that I could stumble around with no real plan.

A loop of four cords, with a handle made of square knots that keeps it together.

I started by looping 5 m of cord, making iirc 2 rounds of a loop about the right size to go around the book with a bit of ease, then used the ends as filler cords for a handle, wrapped them around the loop and worked square knots all over them to make a handle.

Then I cut the rest of the cord into 40 pieces, each 4 m long, because I had no idea how much I was going to need (spoiler: I successfully got it wrong :D )

I joined the cords to the handle with lark head knots, 20 per side, and then I started knotting without a plan or anything, alternating between hitches and square knots, sometimes close together and sometimes leaving some free cord between them.

And apparently I also completely forgot to take in-progress pictures.

I kept working on this for a few months, knotting a row or two now and then, until the bag was long enough for the book, then I closed the bottom by taking one cord from the front and the corresponding on the back, knotting them together (I don’t remember how) and finally I made a rigid triangle of tight square knots with all of the cords, progressively leaving out a cord from each side, and cutting it in a fringe.

I then measured the remaining cords, and saw that the shortest ones were about a meter long, but the longest ones were up to 3 meters; I could have cut them much shorter at the beginning (and maybe added a couple more cords). The leftovers will be used, in some way.

And then I postponed taking pictures of the finished object for a few months.

The same bag, empty and showing how the sides aren't straight.

Now the result is functional, but I have to admit it is somewhat ugly: not as much for the lack of a pattern (that I think came out quite fine) but because of how irregular the knots are; I’m not confident that the next time I will be happy with their regularity, either, but I hope I will improve, and that’s one important thing.

And the other important thing is: I enjoyed making this, even if I kept interrupting the work, and I think that there may be some other macrame in my future.

31 January, 2024 12:00AM

January 30, 2024

Antoine Beaupré

router archeology: the Soekris net5501

Roadkiller was a Soekris net5501 router I used as my main gateway between 2010 and 2016 (for réseau and téléphone).

It was upgraded to FreeBSD 8.4-p12 (2014-06-06) and pkgng. It was retired in favor of octavia around 2016.

Roughly 10 years later (2024-01-24), I found it in a drawer and, to my surprise, it booted. After wrangling with an RS-232 USB adapter, a null modem cable, and bit rates, I even logged in:

comBIOS ver. 1.33  20070103  Copyright (C) 2000-2007 Soekris Engineering.

net5501

0512 Mbyte Memory                        CPU Geode LX 500 Mhz 

Pri Mas  WDC WD800VE-00HDT0              LBA Xlt 1024-255-63  78 Gbyte

Slot   Vend Dev  ClassRev Cmd  Stat CL LT HT  Base1    Base2   Int 
-------------------------------------------------------------------
0:01:2 1022 2082 10100000 0006 0220 08 00 00 A0000000 00000000 10
0:06:0 1106 3053 02000096 0117 0210 08 40 00 0000E101 A0004000 11
0:07:0 1106 3053 02000096 0117 0210 08 40 00 0000E201 A0004100 05
0:08:0 1106 3053 02000096 0117 0210 08 40 00 0000E301 A0004200 09
0:09:0 1106 3053 02000096 0117 0210 08 40 00 0000E401 A0004300 12
0:20:0 1022 2090 06010003 0009 02A0 08 40 80 00006001 00006101 
0:20:2 1022 209A 01018001 0005 02A0 08 00 00 00000000 00000000 
0:21:0 1022 2094 0C031002 0006 0230 08 00 80 A0005000 00000000 15
0:21:1 1022 2095 0C032002 0006 0230 08 00 00 A0006000 00000000 15

 4 Seconds to automatic boot.   Press Ctrl-P for entering Monitor.

┌─────────────────────────────────────────┐
 │                                         │
 │                                         │      ______
 │                                         │     |  ____| __ ___  ___ 
 │          Welcome to FreeBSD!            │     | |__ | '__/ _ \/ _ \
 │                                         │     |  __|| | |  __/  __/
 │                                         │     | |   | | |    |    |
 │  1. Boot FreeBSD [default]              │     |_|   |_|  \___|\___|
 │  2. Boot FreeBSD with ACPI enabled      │      ____   _____ _____
 │  3. Boot FreeBSD in Safe Mode           │     |  _ \ / ____|  __ \
 │  4. Boot FreeBSD in single user mode    │     | |_) | (___ | |  | |
 │  5. Boot FreeBSD with verbose logging   │     |  _ < \___ \| |  | |
 │  6. Escape to loader prompt             │     | |_) |____) | |__| |
 │  7. Reboot                              │     |     |      |      |
 │                                         │     |____/|_____/|_____/
 │                                         │
 │                                         │
 │                                         │
 │  Select option, [Enter] for default     │
 │  or [Space] to pause timer  5           │
 └─────────────────────────────────────────┘


Copyright (c) 1992-2013 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 8.4-RELEASE-p12 #5: Fri Jun  6 02:43:23 EDT 2014
    root@roadkiller.anarc.at:/usr/obj/usr/src/sys/ROADKILL i386
gcc version 4.2.2 20070831 prerelease [FreeBSD]
Timecounter "i8254" frequency 1193182 Hz quality 0
CPU: Geode(TM) Integrated Processor by AMD PCS (499.90-MHz 586-class CPU)
  Origin = "AuthenticAMD"  Id = 0x5a2  Family = 5  Model = a  Stepping = 2
  Features=0x88a93d<FPU,DE,PSE,TSC,MSR,CX8,SEP,PGE,CMOV,CLFLUSH,MMX>
  AMD Features=0xc0400000<MMX+,3DNow!+,3DNow!>
real memory  = 536870912 (512 MB)
avail memory = 506445824 (482 MB)
kbd1 at kbdmux0
K6-family MTRR support enabled (2 registers)
ACPI Error: A valid RSDP was not found (20101013/tbxfroot-309)
ACPI: Table initialisation failed: AE_NOT_FOUND
ACPI: Try disabling either ACPI or apic support.
cryptosoft0: <software crypto> on motherboard
pcib0 pcibus 0 on motherboard
pci0: <PCI bus> on pcib0
Geode LX: Soekris net5501 comBIOS ver. 1.33 20070103 Copyright (C) 2000-2007
pci0: <encrypt/decrypt, entertainment crypto> at device 1.2 (no driver attached)
vr0: <VIA VT6105M Rhine III 10/100BaseTX> port 0xe100-0xe1ff mem 0xa0004000-0xa00040ff irq 11 at device 6.0 on pci0
vr0: Quirks: 0x2
vr0: Revision: 0x96
miibus0: <MII bus> on vr0
ukphy0: <Generic IEEE 802.3u media interface> PHY 1 on miibus0
ukphy0:  none, 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto, auto-flow
vr0: Ethernet address: 00:00:24:cc:93:44
vr0: [ITHREAD]
vr1: <VIA VT6105M Rhine III 10/100BaseTX> port 0xe200-0xe2ff mem 0xa0004100-0xa00041ff irq 5 at device 7.0 on pci0
vr1: Quirks: 0x2
vr1: Revision: 0x96
miibus1: <MII bus> on vr1
ukphy1: <Generic IEEE 802.3u media interface> PHY 1 on miibus1
ukphy1:  none, 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto, auto-flow
vr1: Ethernet address: 00:00:24:cc:93:45
vr1: [ITHREAD]
vr2: <VIA VT6105M Rhine III 10/100BaseTX> port 0xe300-0xe3ff mem 0xa0004200-0xa00042ff irq 9 at device 8.0 on pci0
vr2: Quirks: 0x2
vr2: Revision: 0x96
miibus2: <MII bus> on vr2
ukphy2: <Generic IEEE 802.3u media interface> PHY 1 on miibus2
ukphy2:  none, 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto, auto-flow
vr2: Ethernet address: 00:00:24:cc:93:46
vr2: [ITHREAD]
vr3: <VIA VT6105M Rhine III 10/100BaseTX> port 0xe400-0xe4ff mem 0xa0004300-0xa00043ff irq 12 at device 9.0 on pci0
vr3: Quirks: 0x2
vr3: Revision: 0x96
miibus3: <MII bus> on vr3
ukphy3: <Generic IEEE 802.3u media interface> PHY 1 on miibus3
ukphy3:  none, 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto, auto-flow
vr3: Ethernet address: 00:00:24:cc:93:47
vr3: [ITHREAD]
isab0: <PCI-ISA bridge> at device 20.0 on pci0
isa0: <ISA bus> on isab0
atapci0: <AMD CS5536 UDMA100 controller> port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0xe000-0xe00f at device 20.2 on pci0
ata0: <ATA channel> at channel 0 on atapci0
ata0: [ITHREAD]
ata1: <ATA channel> at channel 1 on atapci0
ata1: [ITHREAD]
ohci0: <OHCI (generic) USB controller> mem 0xa0005000-0xa0005fff irq 15 at device 21.0 on pci0
ohci0: [ITHREAD]
usbus0 on ohci0
ehci0: <AMD CS5536 (Geode) USB 2.0 controller> mem 0xa0006000-0xa0006fff irq 15 at device 21.1 on pci0
ehci0: [ITHREAD]
usbus1: EHCI version 1.0
usbus1 on ehci0
cpu0 on motherboard
pmtimer0 on isa0
orm0: <ISA Option ROM> at iomem 0xc8000-0xd27ff pnpid ORM0000 on isa0
atkbdc0: <Keyboard controller (i8042)> at port 0x60,0x64 on isa0
atkbd0: <AT Keyboard> irq 1 on atkbdc0
kbd0 at atkbd0
atkbd0: [GIANT-LOCKED]
atkbd0: [ITHREAD]
atrtc0: <AT Real Time Clock> at port 0x70 irq 8 on isa0
ppc0: parallel port not found.
uart0: <16550 or compatible> at port 0x3f8-0x3ff irq 4 flags 0x10 on isa0
uart0: [FILTER]
uart0: console (19200,n,8,1)
uart1: <16550 or compatible> at port 0x2f8-0x2ff irq 3 on isa0
uart1: [FILTER]
Timecounter "TSC" frequency 499903982 Hz quality 800
Timecounters tick every 1.000 msec
IPsec: Initialized Security Association Processing.
usbus0: 12Mbps Full Speed USB v1.0
usbus1: 480Mbps High Speed USB v2.0
ad0: 76319MB <WDC WD800VE-00HDT0 09.07D09> at ata0-master UDMA100 
ugen0.1: <AMD> at usbus0
uhub0: <AMD OHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus0
ugen1.1: <AMD> at usbus1
uhub1: <AMD EHCI root HUB, class 9/0, rev 2.00/1.00, addr 1> on usbus1
GEOM: ad0s1: geometry does not match label (255h,63s != 16h,63s).
uhub0: 4 ports with 4 removable, self powered
Root mount waiting for: usbus1
Root mount waiting for: usbus1
uhub1: 4 ports with 4 removable, self powered
Trying to mount root from ufs:/dev/ad0s1a

The last log rotation is from 2016:

[root@roadkiller /var/log]# stat /var/log/wtmp      
65 61783 -rw-r--r-- 1 root wheel 208219 1056 "Nov  1 05:00:01 2016" "Jan 18 22:29:16 2017" "Jan 18 22:29:16 2017" "Nov  1 05:00:01 2016" 16384 4 0 /var/log/wtmp

Interestingly, I switched between eicat and teksavvy on December 11th. Which year? Who knows!

Dec 11 16:38:40 roadkiller mpd: [eicatL0] LCP: authorization successful
Dec 11 16:41:15 roadkiller mpd: [teksavvyL0] LCP: authorization successful

Never realized those good old logs had an "oh dear forgot the year" issue (that's something like Y2K except just "Y", I guess).

That was probably 2015, because the log dates from 2017, and the last entry is from November of the year after the above:

[root@roadkiller /var/log]# stat mpd.log 
65 47113 -rw-r--r-- 1 root wheel 193008 71939195 "Jan 18 22:39:18 2017" "Jan 18 22:39:59 2017" "Jan 18 22:39:59 2017" "Apr  2 10:41:37 2013" 16384 140640 0 mpd.log

It looks like the system was installed in 2010:

[root@roadkiller /var/log]# stat /
63 2 drwxr-xr-x 21 root wheel 2120 512 "Jan 18 22:34:43 2017" "Jan 18 22:28:12 2017" "Jan 18 22:28:12 2017" "Jul 18 22:25:00 2010" 16384 4 0 /

... so it lived for about 6 years, but still works after almost 14 years, which I find utterly amazing.

Another amazing thing is that there's tuptime installed on that server! That is a piece of software I thought I had discovered later and then sponsored in Debian, but it turns out I was already using it back then!

[root@roadkiller /var]# tuptime 
System startups:        19   since   21:20:16 11/07/15
System shutdowns:       0 ok   -   18 bad
System uptime:          85.93 %   -   1 year, 11 days, 10 hours, 3 minutes and 36 seconds
System downtime:        14.07 %   -   61 days, 15 hours, 22 minutes and 45 seconds
System life:            1 year, 73 days, 1 hour, 26 minutes and 20 seconds

Largest uptime:         122 days, 9 hours, 17 minutes and 6 seconds   from   08:17:56 02/02/16
Shortest uptime:        5 minutes and 4 seconds   from   21:55:00 01/18/17
Average uptime:         19 days, 19 hours, 28 minutes and 37 seconds

Largest downtime:       57 days, 1 hour, 9 minutes and 59 seconds   from   20:45:01 11/22/16
Shortest downtime:      -1 years, 364 days, 23 hours, 58 minutes and 12 seconds   from   22:30:01 01/18/17
Average downtime:       3 days, 5 hours, 51 minutes and 43 seconds

Current uptime:         18 minutes and 23 seconds   since   22:28:13 01/18/17

Actual up/down times:

[root@roadkiller /var]# tuptime -t
No.        Startup Date                                         Uptime       Shutdown Date   End                                                  Downtime

1     21:20:16 11/07/15      1 day, 0 hours, 40 minutes and 12 seconds   22:00:28 11/08/15   BAD                                  2 minutes and 37 seconds
2     22:03:05 11/08/15      1 day, 9 hours, 41 minutes and 57 seconds   07:45:02 11/10/15   BAD                                  3 minutes and 24 seconds
3     07:48:26 11/10/15    20 days, 2 hours, 41 minutes and 34 seconds   10:30:00 11/30/15   BAD                        4 hours, 50 minutes and 21 seconds
4     15:20:21 11/30/15                      19 minutes and 40 seconds   15:40:01 11/30/15   BAD                                   6 minutes and 5 seconds
5     15:46:06 11/30/15                      53 minutes and 55 seconds   16:40:01 11/30/15   BAD                           1 hour, 1 minute and 38 seconds
6     17:41:39 11/30/15     6 days, 16 hours, 3 minutes and 22 seconds   09:45:01 12/07/15   BAD                4 days, 6 hours, 53 minutes and 11 seconds
7     16:38:12 12/11/15   50 days, 17 hours, 56 minutes and 49 seconds   10:35:01 01/31/16   BAD                                 10 minutes and 52 seconds
8     10:45:53 01/31/16     1 day, 21 hours, 28 minutes and 16 seconds   08:14:09 02/02/16   BAD                                  3 minutes and 48 seconds
9     08:17:56 02/02/16    122 days, 9 hours, 17 minutes and 6 seconds   18:35:02 06/03/16   BAD                                 10 minutes and 16 seconds
10    18:45:18 06/03/16   29 days, 17 hours, 14 minutes and 43 seconds   12:00:01 07/03/16   BAD                                 12 minutes and 34 seconds
11    12:12:35 07/03/16   31 days, 17 hours, 17 minutes and 26 seconds   05:30:01 08/04/16   BAD                                 14 minutes and 25 seconds
12    05:44:26 08/04/16     15 days, 1 hour, 55 minutes and 35 seconds   07:40:01 08/19/16   BAD                                  6 minutes and 51 seconds
13    07:46:52 08/19/16     7 days, 5 hours, 23 minutes and 10 seconds   13:10:02 08/26/16   BAD                                  3 minutes and 45 seconds
14    13:13:47 08/26/16   27 days, 21 hours, 36 minutes and 14 seconds   10:50:01 09/23/16   BAD                                  2 minutes and 14 seconds
15    10:52:15 09/23/16   60 days, 10 hours, 52 minutes and 46 seconds   20:45:01 11/22/16   BAD                 57 days, 1 hour, 9 minutes and 59 seconds
16    21:55:00 01/18/17                        5 minutes and 4 seconds   22:00:04 01/18/17   BAD                                 11 minutes and 15 seconds
17    22:11:19 01/18/17                       8 minutes and 42 seconds   22:20:01 01/18/17   BAD                                   1 minute and 20 seconds
18    22:21:21 01/18/17                       8 minutes and 40 seconds   22:30:01 01/18/17   BAD   -1 years, 364 days, 23 hours, 58 minutes and 12 seconds
19    22:28:13 01/18/17                      20 minutes and 17 seconds

The last few entries are actually the tests I'm running now; it seems this machine thinks it's now 2017-01-18 at ~22:00, while it's actually 2024-01-24 at ~12:00 local:

Wed Jan 18 23:05:38 EST 2017

FreeBSD/i386 (roadkiller.anarc.at) (ttyu0)

login: root
Password:
Jan 18 23:07:10 roadkiller login: ROOT LOGIN (root) ON ttyu0
Last login: Wed Jan 18 22:29:16 on ttyu0
Copyright (c) 1992-2013 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.

FreeBSD 8.4-RELEASE-p12 (ROADKILL) #5: Fri Jun  6 02:43:23 EDT 2014

Reminders:
 * commit stuff in /etc
 * reload firewall (in screen!):
    pfctl -f /etc/pf.conf ; sleep 1
 * vim + syn on makes pf.conf more readable
 * monitoring the PPPoE uplink:
   tail -f /var/log/mpd.log

Current problems:

 * sometimes pf doesn't start properly on boot, if pppoe failed to come up, use
   this to resume:

     /etc/rc.d/pf start

   it will kill your shell, but fix NAT (2012-08-10)
 * babel fails to start on boot (2013-06-15):

     babeld -D -g 33123 tap0 vr3

 * DNS often fails, tried messing with unbound.conf (2014-10-05) and updating
   named.root (2016-01-28) and performance tweaks (ee63689)
 * asterisk and mpd4 are deprecated and should be uninstalled when we're sure
   their replacements (voipms + ata and mpd5) are working (2015-01-13)
 * if IPv6 fails, it's because netblocks are not being routed upstream. DHCPcd
   should do this, but doesn't start properly, use this to resume (2015-12-21):

     /usr/local/sbin/dhcpcd -6 --persistent --background --timeout 0 -C resolv.conf ng0

This machine is doomed to be replaced with the new omnia router, Indiegogo
campaign should ship in april 2016: http://igg.me/at/turris-omnia/x

(I really like the motd I left myself there. In theory, I guess this could just start connecting to the internet again if I still had the same PPPoE/ADSL link I had almost a decade ago; obviously, I do not.)

Not sure how the system figured out the 2017 time: the onboard clock itself believes we're in 1980, so clearly the CMOS battery has (understandably) failed:

> ?
comBIOS Monitor Commands

boot [drive][:partition] INT19 Boot
reboot                   cold boot
download                 download a file using XMODEM/CRC
flashupdate              update flash BIOS with downloaded file
time [HH:MM:SS]          show or set time
date [YYYY/MM/DD]        show or set date
d[b|w|d] [adr]           dump memory bytes/words/dwords
e[b|w|d] adr value [...] enter bytes/words/dwords
i[b|w|d] port            input from 8/16/32-bit port
o[b|w|d] port value      output to 8/16/32-bit port
run adr                  execute code at adr
cmosread [adr]           read CMOS RAM data
cmoswrite adr byte [...] write CMOS RAM data
cmoschecksum             update CMOS RAM Checksum
set parameter=value      set system parameter to value
show [parameter]         show one or all system parameters
?/help                   show this help

> show

ConSpeed = 19200
ConLock = Enabled
ConMute = Disabled
BIOSentry = Enabled
PCIROMS = Enabled
PXEBoot = Enabled
FLASH = Primary
BootDelay = 5
FastBoot = Disabled
BootPartition = Disabled
BootDrive = 80 81 F0 FF 
ShowPCI = Enabled
Reset = Hard
CpuSpeed = Default

> time

Current Date and Time is: 1980/01/01 00:56:47

Another bit of archeology: I had documented various outages with my ISP... back in 2003!

[root@roadkiller ~/bin]# cat ppp_stats/downtimes.txt
11/03/2003 18:24:49 218
12/03/2003 09:10:49 118
12/03/2003 10:05:57 680
12/03/2003 10:14:50 106
12/03/2003 10:16:53 6
12/03/2003 10:35:28 146
12/03/2003 10:57:26 393
12/03/2003 11:16:35 5
12/03/2003 11:16:54 11
13/03/2003 06:15:57 18928
13/03/2003 09:43:36 9730
13/03/2003 10:47:10 23
13/03/2003 10:58:35 5
16/03/2003 01:32:36 338
16/03/2003 02:00:33 120
16/03/2003 11:14:31 14007
19/03/2003 00:56:27 11179
19/03/2003 00:56:43 5
19/03/2003 00:56:53 0
19/03/2003 00:56:55 1
19/03/2003 00:57:09 1
19/03/2003 00:57:10 1
19/03/2003 00:57:24 1
19/03/2003 00:57:25 1
19/03/2003 00:57:39 1
19/03/2003 00:57:40 1
19/03/2003 00:57:44 3
19/03/2003 00:57:53 0
19/03/2003 00:57:55 0
19/03/2003 00:58:08 0
19/03/2003 00:58:10 0
19/03/2003 00:58:23 0
19/03/2003 00:58:25 0
19/03/2003 00:58:39 1
19/03/2003 00:58:42 2
19/03/2003 00:58:58 5
19/03/2003 00:59:35 2
19/03/2003 00:59:47 3
19/03/2003 01:00:34 3
19/03/2003 01:00:39 0
19/03/2003 01:00:54 0
19/03/2003 01:01:11 2
19/03/2003 01:01:25 1
19/03/2003 01:01:48 1
19/03/2003 01:02:03 1
19/03/2003 01:02:10 2
19/03/2003 01:02:20 3
19/03/2003 01:02:44 3
19/03/2003 01:03:45 3
19/03/2003 01:04:39 2
19/03/2003 01:05:40 2
19/03/2003 01:06:35 2
19/03/2003 01:07:36 2
19/03/2003 01:08:31 2
19/03/2003 01:08:38 2
19/03/2003 01:10:07 3
19/03/2003 01:11:05 2
19/03/2003 01:12:03 3
19/03/2003 01:13:01 3
19/03/2003 01:13:58 2
19/03/2003 01:14:59 5
19/03/2003 01:15:54 2
19/03/2003 01:16:55 2
19/03/2003 01:17:50 2
19/03/2003 01:18:51 3
19/03/2003 01:19:46 2
19/03/2003 01:20:46 2
19/03/2003 01:21:42 3
19/03/2003 01:22:42 3
19/03/2003 01:23:37 2
19/03/2003 01:24:38 3
19/03/2003 01:25:33 2
19/03/2003 01:26:33 2
19/03/2003 01:27:30 3
19/03/2003 01:28:55 2
19/03/2003 01:29:56 2
19/03/2003 01:30:50 2
19/03/2003 01:31:42 3
19/03/2003 01:32:36 3
19/03/2003 01:33:27 2
19/03/2003 01:34:21 2
19/03/2003 01:35:22 2
19/03/2003 01:36:17 3
19/03/2003 01:37:18 2
19/03/2003 01:38:13 3
19/03/2003 01:39:39 2
19/03/2003 01:40:39 2
19/03/2003 01:41:35 3
19/03/2003 01:42:35 3
19/03/2003 01:43:31 3
19/03/2003 01:44:31 3
19/03/2003 01:45:53 3
19/03/2003 01:46:48 3
19/03/2003 01:47:48 2
19/03/2003 01:48:44 3
19/03/2003 01:49:44 2
19/03/2003 01:50:40 3
19/03/2003 01:51:39 1
19/03/2003 11:04:33 19   
19/03/2003 18:39:36 2833 
19/03/2003 18:54:05 825  
19/03/2003 19:04:00 454  
19/03/2003 19:08:11 210  
19/03/2003 19:41:44 272  
19/03/2003 21:18:41 208  
24/03/2003 04:51:16 6
27/03/2003 04:51:20 5
30/03/2003 04:51:25 5
31/03/2003 08:30:31 255  
03/04/2003 08:30:36 5
06/04/2003 01:16:00 621  
06/04/2003 22:18:08 17   
06/04/2003 22:32:44 13   
09/04/2003 22:33:12 28   
12/04/2003 22:33:17 6
15/04/2003 22:33:22 5
17/04/2003 15:03:43 18   
20/04/2003 15:03:48 5
23/04/2003 15:04:04 16   
23/04/2003 21:08:30 339  
23/04/2003 21:18:08 13   
23/04/2003 23:34:20 253  
26/04/2003 23:34:45 25   
29/04/2003 23:34:49 5
02/05/2003 13:10:01 185  
05/05/2003 13:10:06 5
08/05/2003 13:10:11 5
09/05/2003 14:00:36 63928
09/05/2003 16:58:52 2
11/05/2003 23:08:48 2
14/05/2003 23:08:53 6
17/05/2003 23:08:58 5
20/05/2003 23:09:03 5
23/05/2003 23:09:08 5
26/05/2003 23:09:14 5
29/05/2003 23:00:10 3
29/05/2003 23:03:01 10   
01/06/2003 23:03:05 4
04/06/2003 23:03:10 5
07/06/2003 23:03:38 28   
10/06/2003 23:03:50 12   
13/06/2003 23:03:55 6
14/06/2003 07:42:20 3
14/06/2003 14:37:08 3
15/06/2003 20:08:34 3
18/06/2003 20:08:39 6
21/06/2003 20:08:45 6
22/06/2003 03:05:19 138  
22/06/2003 04:06:28 3
25/06/2003 04:06:58 31   
28/06/2003 04:07:02 4
01/07/2003 04:07:06 4
04/07/2003 04:07:11 5
07/07/2003 04:07:16 5
12/07/2003 04:55:20 6
12/07/2003 19:09:51 1158 
12/07/2003 22:14:49 8025 
15/07/2003 22:14:54 6
16/07/2003 05:43:06 18   
19/07/2003 05:43:12 6
22/07/2003 05:43:17 5
23/07/2003 18:18:55 183  
23/07/2003 18:19:55 9
23/07/2003 18:29:15 158  
23/07/2003 19:48:44 4604 
23/07/2003 20:16:27 3
23/07/2003 20:37:29 1079 
23/07/2003 20:43:12 342  
23/07/2003 22:25:51 6158

Fascinating.

I suspect the (IDE!) hard drive might be failing as I saw two new files created in /var that I didn't remember seeing before:

-rw-r--r--   1 root    wheel        0 Jan 18 22:55 3@T3
-rw-r--r--   1 root    wheel        0 Jan 18 22:55 DY5

So I shut down the machine, possibly for the last time:

Waiting (max 60 seconds) for system process `bufdaemon' to stop...done
Waiting (max 60 seconds) for system process `syncer' to stop...
Syncing disks, vnodes remaining...3 3 0 1 1 0 0 done
All buffers synced.
Uptime: 36m43s
usbus0: Controller shutdown
uhub0: at usbus0, port 1, addr 1 (disconnected)
usbus0: Controller shutdown complete
usbus1: Controller shutdown
uhub1: at usbus1, port 1, addr 1 (disconnected)
usbus1: Controller shutdown complete

The operating system has halted.
Please press any key to reboot.

I'll finally note this was the last FreeBSD server I personally operated. I also used FreeBSD to set up the core routers at Koumbit, but those were replaced with Debian recently as well.

Thanks Soekris, that was some sturdy hardware. Hopefully this new Protectli router will live up to that "decade plus" challenge.

Not sure what the fate of this device will be: I'll bring it to the next Montreal Debian & Stuff to see if anyone's interested; contact me if you can't show up and want this thing.

30 January, 2024 04:20AM

Matthew Palmer

Why Certificate Lifecycle Automation Matters

If you’ve perused the ActivityPub feed of certificates whose keys are known to be compromised, and clicked on the “Show More” button to see the name of the certificate issuer, you may have noticed that some issuers seem to come up again and again. This might make sense – after all, if a CA is issuing a large volume of certificates, they’ll be seen more often in a list of compromised certificates. In an attempt to see if there is anything that we can learn from this data, though, I did a bit of digging, and came up with some illuminating results.

The Procedure

I started off by finding all the unexpired certificates logged in Certificate Transparency (CT) logs that have a key that is in the pwnedkeys database as having been publicly disclosed. From this list of certificates, I removed duplicates by matching up issuer/serial number tuples, and then reduced the set by counting the number of unique certificates by their issuer.

This gave me a list of the issuers of these certificates, which looks a bit like this:

/C=BE/O=GlobalSign nv-sa/CN=AlphaSSL CA - SHA256 - G4
/C=GB/ST=Greater Manchester/L=Salford/O=Sectigo Limited/CN=Sectigo RSA Domain Validation Secure Server CA
/C=GB/ST=Greater Manchester/L=Salford/O=Sectigo Limited/CN=Sectigo RSA Organization Validation Secure Server CA
/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
/C=US/ST=Arizona/L=Scottsdale/O=Starfield Technologies, Inc./OU=http://certs.starfieldtech.com/repository//CN=Starfield Secure Certificate Authority - G2
/C=AT/O=ZeroSSL/CN=ZeroSSL RSA Domain Secure Site CA
/C=BE/O=GlobalSign nv-sa/CN=GlobalSign GCC R3 DV TLS CA 2020

Rather than try to work with raw issuers (because, as Andrew Ayer says, The SSL Certificate Issuer Field is a Lie), I mapped these issuers to the organisations that manage them, and summed the counts for those grouped issuers together.
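
To make the bookkeeping concrete, here is a minimal sketch of that dedupe-and-group step; the function name and the tiny issuer-to-organisation map are hypothetical stand-ins for my actual scripts and the CCADB-derived mapping:

from collections import Counter

# Hypothetical, hand-written stand-in for the real issuer-to-organisation mapping.
ISSUER_TO_ORG = {
    "/C=BE/O=GlobalSign nv-sa/CN=AlphaSSL CA - SHA256 - G4": "GlobalSign",
    "/C=AT/O=ZeroSSL/CN=ZeroSSL RSA Domain Secure Site CA": "ZeroSSL",
}

def compromised_per_org(certs):
    """certs: iterable of (issuer, serial) pairs for certificates whose keys
    appear in the pwnedkeys database."""
    unique = set(certs)                        # dedupe on the issuer/serial tuple
    counts = Counter()
    for issuer, _serial in unique:
        counts[ISSUER_TO_ORG.get(issuer, issuer)] += 1   # group by organisation
    return counts

# Toy example: a duplicate CT log entry collapses to a single compromised certificate.
print(compromised_per_org([
    ("/C=AT/O=ZeroSSL/CN=ZeroSSL RSA Domain Secure Site CA", "01"),
    ("/C=AT/O=ZeroSSL/CN=ZeroSSL RSA Domain Secure Site CA", "01"),
    ("/C=BE/O=GlobalSign nv-sa/CN=AlphaSSL CA - SHA256 - G4", "02"),
]))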

The Data

Lieutenant Commander Data from Star Trek: The Next Generation (insert obligatory "not THAT data" comment here)

The end result of this work is the following table, sorted by the count of certificates which have been compromised by exposing their private key:

Issuer                 Compromised Count
Sectigo                170
ISRG (Let's Encrypt)   161
GoDaddy                141
DigiCert               81
GlobalSign             46
Entrust                3
SSL.com                1

If you’re familiar with the CA ecosystem, you’ll probably recognise that the organisations with large numbers of compromised certificates are also those who issue a lot of certificates. So far, nothing particularly surprising, then.

Let’s look more closely at the relationships, though, to see if we can get more useful insights.

Volume Control

Using the issuance volume report from crt.sh, we can compare issuance volumes to compromise counts, to come up with a “compromise rate”. I’m using the “Unexpired Precertificates” column from the issuance volume report, as I feel that’s the number that best matches the certificate population I’m examining to find compromised certificates. To maintain parity with the previous table, this one is still sorted by the count of certificates that have been compromised.

Issuer                 Issuance Volume   Compromised Count   Compromise Rate
Sectigo                88,323,068        170                 1 in 519,547
ISRG (Let's Encrypt)   315,476,402       161                 1 in 1,959,480
GoDaddy                56,121,429        141                 1 in 398,024
DigiCert               144,713,475       81                  1 in 1,786,586
GlobalSign             1,438,485         46                  1 in 31,271
Entrust                23,166            3                   1 in 7,722
SSL.com                171,816           1                   1 in 171,816

If we now sort this table by compromise rate, we can see which organisations have the most (and least) leakiness going on from their customers:

Issuer                 Issuance Volume   Compromised Count   Compromise Rate
Entrust                23,166            3                   1 in 7,722
GlobalSign             1,438,485         46                  1 in 31,271
SSL.com                171,816           1                   1 in 171,816
GoDaddy                56,121,429        141                 1 in 398,024
Sectigo                88,323,068        170                 1 in 519,547
DigiCert               144,713,475       81                  1 in 1,786,586
ISRG (Let's Encrypt)   315,476,402       161                 1 in 1,959,480

By grouping by order-of-magnitude in the compromise rate, we can identify three “bands”:

  • The Super Leakers: Customers of Entrust and GlobalSign seem to love to lose control of their private keys. For Entrust, at least, though, the small volumes involved make the numbers somewhat untrustworthy. The three compromised certificates could very well belong to just one customer, for instance. I’m not aware of anything that GlobalSign does that would make them such an outlier, either, so I’m inclined to think they just got unlucky with one or two customers, but as CAs don’t include customer IDs in the certificates they issue, it’s not possible to say whether that’s the actual cause or not.

  • The “Regular” Leakers: Customers of SSL.com, GoDaddy, and Sectigo all have compromise rates in the 1-in-hundreds-of-thousands range. Again, the low volumes of SSL.com make the numbers somewhat unreliable, but the other two organisations in this group have large enough numbers that we can rely on that data fairly well, I think.

  • The Low Leakers: Customers of DigiCert and Let’s Encrypt are at least three times less likely than customers of the “regular leakers” to lose control of their private keys. Good for them!

Now we have some useful insights we can think about.

Why Is It So?

Professor Julius Sumner Miller (if you don't know who Professor Julius Sumner Miller is, I highly recommend finding out)

All of the organisations on the list, with the exception of Let’s Encrypt, are what one might term “traditional” CAs. To a first approximation, it’s reasonable to assume that the vast majority of the customers of these traditional CAs probably manage their certificates the same way they have for the past two decades or more. That is, they generate a key and CSR, upload the CSR to the CA to get a certificate, then copy the cert and key… somewhere. Since humans are handling the keys, there’s a higher risk of the humans using either risky practices, or making a mistake, and exposing the private key to the world.

Let’s Encrypt, on the other hand, issues all of its certificates using the ACME (Automatic Certificate Management Environment) protocol, and all of the Let’s Encrypt documentation encourages the use of software tools to generate keys, issue certificates, and install them for use. Given that Let’s Encrypt has 161 compromised certificates currently in the wild, it’s clear that the automation in use is far from perfect, but the significantly lower compromise rate suggests to me that lifecycle automation at least reduces the rate of key compromise, even though it doesn’t eliminate it completely.
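
As a concrete illustration of what taking humans out of the loop looks like (this is a generic example, not anything specific to the analysis above, and the domain is a placeholder), a typical ACME client invocation generates the key, obtains the certificate, and installs it in one step:

# certbot's nginx plugin: key generation, issuance and installation,
# with no manual handling of the private key
certbot --nginx -d example.com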

It is true that all of the organisations in this analysis also provide ACME issuance workflows, should customers desire it. However, the “traditional CA” companies have been around a lot longer than ACME has, and so they probably acquired many of their customers before ACME existed.

Given that it’s incredibly hard to get humans to change the way they do things, once they have a way that “works”, it seems reasonable to assume that most of the certificates issued by these CAs are handled in the same human-centric, error-prone manner they always have been.

If organisations would like to refute this assumption, though, by sharing their data on ACME vs legacy issuance rates, I’m sure we’d all be extremely interested.

Explaining the Outlier

The difference in presumed issuance practices would seem to explain the significant difference in compromise rates between Let’s Encrypt and the other organisations, if it weren’t for one outlier. This is a largely “traditional” CA, with the manual-handling issues that implies, but with a compromise rate close to that of Let’s Encrypt.

We are, of course, talking about DigiCert.

The thing about DigiCert, that doesn’t show up in the raw numbers from crt.sh, is that DigiCert manages the issuance of certificates for several of the biggest “hosted TLS” providers, such as CloudFlare and AWS. When these services obtain a certificate from DigiCert on their customer’s behalf, the private key is kept locked away, and no human can (we hope) get access to the private key. This is supported by the fact that no certificates identifiably issued to either CloudFlare or AWS appear in the set of certificates with compromised keys.

When we ask for “all certificates issued by DigiCert”, we get both the certificates issued to these big providers, which are very good at keeping their keys under control, as well as the certificates issued to everyone else, whose key handling practices may not be quite so stringent.

It’s possible, though not trivial, to account for certificates issued to these “hosted TLS” providers, because the certificates they use are issued from intermediates “branded” to those companies. With the crt.sh psql interface we can run this query to get the total number of unexpired precertificates issued to these managed services:

SELECT SUM(sub.NUM_ISSUED[2] - sub.NUM_EXPIRED[2])
  FROM (
    SELECT ca.name, max(coalesce(coalesce(nullif(trim(cc.SUBORDINATE_CA_OWNER), ''), nullif(trim(cc.CA_OWNER), '')), cc.INCLUDED_CERTIFICATE_OWNER)) as OWNER,
           ca.NUM_ISSUED, ca.NUM_EXPIRED
      FROM ccadb_certificate cc, ca_certificate cac, ca
     WHERE cc.CERTIFICATE_ID = cac.CERTIFICATE_ID
       AND cac.CA_ID = ca.ID
  GROUP BY ca.ID
  ) sub
 WHERE sub.name ILIKE '%Amazon%' OR sub.name ILIKE '%CloudFlare%' AND sub.owner = 'DigiCert';

The number I get from running that query is 104,316,112, which should be subtracted from DigiCert’s total issuance figures to get a more accurate view of what DigiCert’s “regular” customers do with their private keys. When I do this, the compromise rates table, sorted by the compromise rate, looks like this:

Issuer                 Issuance Volume   Compromised Count   Compromise Rate
Entrust                23,166            3                   1 in 7,722
GlobalSign             1,438,485         46                  1 in 31,271
SSL.com                171,816           1                   1 in 171,816
GoDaddy                56,121,429        141                 1 in 398,024
"Regular" DigiCert     40,397,363        81                  1 in 498,732
Sectigo                88,323,068        170                 1 in 519,547
All DigiCert           144,713,475       81                  1 in 1,786,586
ISRG (Let's Encrypt)   315,476,402       161                 1 in 1,959,480

In short, it appears that DigiCert’s regular customers are just as likely as GoDaddy or Sectigo customers to expose their private keys.

What Does It All Mean?

The takeaway from all this is fairly straightforward, and not overly surprising, I believe.

The less humans have to do with certificate issuance, the less likely they are to compromise that certificate by exposing the private key.

While it may not be surprising, it is nice to have some empirical evidence to back up the common wisdom.

Fully-managed TLS providers, such as CloudFlare, AWS Certificate Manager, and whatever Azure’s thing is called, are the platonic ideal of this principle: never give humans any opportunity to expose a private key. I’m not saying you should use one of these providers, but the security approach they have adopted appears to be the optimal one, and should be emulated universally.

The ACME protocol is the next best, in that there are a variety of standardised tools widely available that allow humans to take themselves out of the loop, but it’s still possible for humans to handle (and mistakenly expose) key material if they try hard enough.

Legacy issuance methods, which either cannot be automated, or require custom, per-provider automation to be developed, appear to be at least four times less helpful to the goal of avoiding compromise of the private key associated with a certificate.

Humans Are, Of Course, The Problem

Bender, the robot from Futurama, asking if we'd like to kill all humans (no thanks, Bender, I'm busy tonight)

This observation – that if you don’t let humans near keys, they don’t get leaked – is further supported by considering the biggest issuers by volume who have not issued any certificates whose keys have been compromised: Google Trust Services (fourth largest issuer overall, with 57,084,529 unexpired precertificates), and Microsoft Corporation (sixth largest issuer overall, with 22,852,468 unexpired precertificates). It appears that somewhere between “most” and “basically all” of the certificates these organisations issue are to customers of their public clouds, and my understanding is that the keys for these certificates are managed in the same manner as CloudFlare and AWS – the keys are locked away where humans can’t get to them.

It should, of course, go without saying that if a human can never have access to a private key, it makes it rather difficult for a human to expose it.

More broadly, if you are building something that handles sensitive or secret data, the more you can do to keep humans out of the loop, the better everything will be.

Your Support is Appreciated

If you’d like to see more analysis of how key compromise happens, and the lessons we can learn from examining billions of certificates, please show your support by buying me a refreshing beverage. Trawling CT logs is thirsty work.

Appendix: Methodology Limitations

In the interests of clarity, I feel it’s important to describe ways in which my research might be flawed. Here are the things I know of that may have impacted the accuracy, that I couldn’t feasibly account for.

  • Time Periods: Because time never stops, there are likely to be some slight “mismatches” in the numbers obtained from the various data sources, because they weren’t collected at exactly the same moment.

  • Issuer-to-Organisation Mapping: It’s possible that the way I mapped issuers to organisations doesn’t match exactly with how crt.sh does it, meaning that counts might be skewed. I tried to minimise that by using the same data sources (the CCADB AllCertificates report) that I believe that crt.sh uses for its mapping, but I cannot be certain of a perfect match.

  • Unwarranted Grouping: I’ve drawn some conclusions about the practices of the various organisations based on their general approach to certificate issuance. If a particular subordinate CA that I’ve grouped into the parent organisation is managed in some unusual way, that might cause my conclusions to be erroneous. I was able to fairly easily separate out CloudFlare, AWS, and Azure, but there are almost certainly others that I didn’t spot, because hoo boy there are a lot of intermediate CAs out there.

30 January, 2024 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

January 29, 2024

Steinar H. Gunderson

bcachefs boot tweaks

Following my previous foray into bcachefs-on-/ booting, I whipped up some patches to make multidevice root filesystems boot in Debian:

  • bcachefs-tools: Enable the Rust parts in the Debian package, and upgrade to latest git HEAD.
  • klibc: Add bcachefs detection and UUID extraction to fstype (probably not needed, blkid is used as a fallback and seems to work fine)
  • initramfs-tools: Don't rewrite root=UUID=12345678 back to a /dev device for bcachefs (which would find only an arbitrary device in the filesystem, not all of them); keep the UUID= around so that mount.bcachefs can find everything.
  • grub: Enable support in grub-mkconfig for reading UUIDs from bcachefs, and to generally understand the multi-device /dev syntax.

With all of these applied, and then the 6.7 kernel and the right fstab setups, I can seemingly boot just fine. Still no installer support, of course, but that would be a bit premature.
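
For anyone trying to reproduce this, the setup I am describing looks roughly like the sketch below; the UUID and device names are placeholders, and the important part is that the root filesystem is referenced by its UUID (or by bcachefs's explicit colon-separated multi-device syntax) so that mount.bcachefs can locate every member device:

# /etc/fstab entry, mounting by filesystem UUID (placeholder UUID):
UUID=12345678-1234-1234-1234-123456789abc  /  bcachefs  defaults  0  0

# equivalent entry using the multi-device syntax (placeholder devices):
/dev/sda2:/dev/sdb2  /  bcachefs  defaults  0  0

# on the kernel command line, the corresponding root= setting is kept as a
# UUID rather than rewritten to a single /dev node:
root=UUID=12345678-1234-1234-1234-123456789abc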

There's also a minor quibble in libzstd-dev; if it's fixed, it's possible to take the sid-built bcachefs-tools .deb and install directly on bookworm, which has a value in itself, given that mount.bcachefs is not built in bookworm.

29 January, 2024 05:09PM

Russell Coker

Thinkpad X1 Yoga Gen3

I just bought myself a Thinkpad X1 Yoga Gen3 for $359.10. I have been quite happy with the Thinkpad X1 Carbon Gen5 I’ve had for just over a year (apart from my mistake in buying one with a lost password) [1], and I normally try to get more use out of a computer than that. If I divide total cost by the time that I’ve had it working, that comes out to about $1.30 per day. I would pay more than that for a laptop, and I have paid much more than that for laptops in the past, but I prefer not to. I was initially tempted to buy a new Thinkpad by the prices of high end X1 devices dropping; this new Yoga has 16G of RAM and a 2560*1440 screen – that’s a good upgrade from 8G with 1920*1080. The CPU of my new Thinkpad is a quad core i5-8350U that rates 6226 [2] and is a decent upgrade from the dual core i5-6300U that rates 3239 [3], although that wasn’t a factor as I found the old CPU fast enough.

The Yoga Gen3 has a minimum weight of 1.4Kg and mine might not be the lightest model in the range while the old Carbon weighs 1.14Kg. I can really feel the difference. It’s also slightly larger but fortunately still fits in the pocket of my Scottware jacket.

The higher resolution screen and more RAM were not sufficient to make me want to spend some money. The deciding factor is that, as I’m working on phones with touch screens, it is a benefit to use a laptop with a touch screen so I can do more testing. The Yoga I bought was going cheap because the touch part of the touch screen is broken but the stylus still works; this is apparently a common failure mode of the Yoga.

The Yoga has a brighter screen than the Carbon and seems to have better contrast. I think Lenovo had some newer technology for that generation of laptops or maybe my Carbon is slightly defective in that regard. It’s a hazard of buying second hand that if something basically works but isn’t quite as good as it should be then you will never know.

I’m happy with this purchase, and I recommend that everyone who buys laptops secondhand the way I do only get displays that are 1440p or better. I’ve currently got the Kitty terminal emulator [4] set up with 9 windows that each have 103 or 104 columns and 26 or 28 rows of text. That’s a lot of terminals on a laptop screen!

29 January, 2024 11:23AM by etbe

Russ Allbery

Review: Bluebird

Review: Bluebird, by Ciel Pierlot

Publisher: Angry Robot
Copyright: 2022
ISBN: 0-85766-967-2
Format: Kindle
Pages: 458

Bluebird is a stand-alone far-future science fiction adventure.

Ten thousand years ago, a star fell into the galaxy carrying three factions of humanity. The Ascetics, the Ossuary, and the Pyrites each believe that only their god survived and the other two factions are heretics. Between them, they have conquered the rest of the galaxy and its non-human species. The only thing the factions hate worse than each other are those who attempt to stay outside the faction system.

Rig used to be a Pyrite weapon designer before she set fire to her office and escaped with her greatest invention. Now she's a Nightbird, a member of an outlaw band that tries to help refugees and protect her fellow Kashrini against Pyrite genocide. On her side, she has her girlfriend, an Ascetic librarian; her ship, Bluebird; and her guns, Panache and Pizzazz. And now, perhaps, the mysterious Ginka, a Zazra empath and remarkably capable fighter who helps Rig escape from an ambush by Pyrite soldiers.

Rig wants to stay alive, help her people, and defy the factions. Pyrite wants Rig's secrets and, as leverage, has her sister. What Ginka wants is not entirely clear even to Ginka.

This book is absurd, but I still had fun with it.

It's dangerous for me to compare things to anime given how little anime that I've watched, but Bluebird had that vibe for me: anime, or maybe Japanese RPGs or superhero comics. The storytelling is very visual, combat-oriented, and not particularly realistic. Rig is a pistol sharpshooter and Ginka is the type of undefined deadly acrobatic fighter so often seen in that type of media. In addition to her ship, Rig has a gorgeous hand-maintained racing hoverbike with a beautiful paint job. It's that sort of book.

It's also the sort of book where the characters obey cinematic logic designed to maximize dramatic physical confrontations, even if their actions make no logical sense. There is no facial recognition or screening, and it's bizarrely easy for the protagonists to end up in the same physical location as high-up bad guys. One of the weapon systems that's critical to the plot makes no sense whatsoever. At critical moments, the bad guys behave more like final bosses in a video game, picking up weapons to deal with the protagonists directly instead of using their supposedly vast armies of agents. There is supposedly a whole galaxy full of civilizations with capital worlds covered in planet-spanning cities, but politics barely exist and the faction leaders get directly involved in the plot.

If you are looking for a realistic projection of technology or society, I cannot stress enough that this is not the book that you're looking for. You probably figured that out when I mentioned ten thousand years of war, but that will only be the beginning of the suspension of disbelief problems. You need to turn off your brain and enjoy the action sequences and melodrama.

I'm normally good at that, and I admit I still struggled because the plot logic is such a mismatch with the typical novels I read. There are several points where the characters do something that seems so monumentally dumb that I was sure Pierlot was setting them up for a fall, and then I got wrong-footed because their plan worked fine, or exploded for unrelated reasons. I think this type of story, heavy on dramatic eye-candy and emotional moments with swelling soundtracks, is a lot easier to pull off in visual media where all the pretty pictures distract your brain. In a novel, there's a lot of time to think about the strategy, technology, and government structure, which for this book is not a good idea.

If you can get past that, though, Rig is entertainingly snarky and Ginka, who turns out to be the emotional heart of the book, is an enjoyable character with a real growth arc. Her background is a bit simplistic and the villains are the sort of pure evil that you might expect from this type of cinematic plot, but I cared about the outcome of her story. Some parts of the plot dragged and I think the editing could have been tighter, but there was enough competence porn and banter to pull me through.

I would recommend Bluebird only cautiously, since you're going to need to turn off large portions of your brain and be in the right mood for nonsensically dramatic confrontations, but I don't regret reading it. It's mostly in primary colors and the emotional conflicts are not what anyone would call subtle, but it delivers a character arc and a somewhat satisfying ending.

Content warning: There is a lot of serious physical injury in this book, including surgical maiming. If that's going to bother you, you may want to give this one a pass.

Rating: 6 out of 10

29 January, 2024 02:20AM

Michael Ablassmeier

qmpbackup 0.28

Over the last weekend I had some spare time to improve qmpbackup a little more. The new version:

  • Uses blockdev-backup QMP commands instead of the soon to be deprecated drive-backup.
  • Adds --compress option: target QCOW file data can be compressed.
  • Adds --speed-limit option to limit backup speed throughput.
  • Adds --include-raw option so attached raw devices can be backed up too.
  • Allows backing up virtual machines in paused or pre-running state.
  • Adds backup option copy to allow backing up virtual machines with older qcow image formats that do not support persistent bitmaps.
  • Improved logging output.

and some minor code reworks. Hope it's useful for someone.

29 January, 2024 12:00AM

January 28, 2024

Niels Thykier

Annotating the Debian packaging directory

In my previous blog post Providing online reference documentation for debputy, I made a point about how debhelper documentation was suboptimal on account of being static rather than online. The thing is that debhelper is not alone in this problem space, even if it is a major contributor to the number of packaging files you have to know about.

If we look at the "competition" here such as Fedora and Arch Linux, they tend to only have one packaging file. While most Debian people will tell you a long list of cons about having one packaging file (such as Fedora's spec file being 3+ domain specific languages "mashed" into one file), one major advantage is that there is only "the one packaging file". You only need to remember where to find the documentation for one file, which is great when you are running on wetware with limited storage capacity.

Which means that, as a newbie, you can dedicate fewer mental resources to tracking multiple files and how they interact, and more effort to understanding the "one file" at hand. I started by asking myself: how can we in Debian make the packaging stack more accessible to newcomers? Spoiler alert: I dug myself into a rabbit hole and ended up somewhere other than where I thought I was going.

I started by wanting to scan the debian directory and annotate all files that I could with documentation links. The logic was that if debputy could do that for you, then you could spend more mental effort elsewhere. So I combined debputy's packager provided files detection with a static list of files and I quickly had a good starting point for debputy-based packages.

Adding (non-static) dpkg and debhelper files to the mix

Now, I could have closed the topic here and said "Look, I did debputy files plus a couple of super common files". But I decided to take it a bit further. I added support for handling some dpkg files like packager provided files (such as debian/substvars and debian/symbols). But even then, we all know that debhelper is the big hurdle and a major part of the omission...

In another previous blog post (A new Debian package helper: debputy), I made a point about how debputy could list all auxiliary files while debhelper could not. This was exactly the kind of capability I would need, if this feature was to cover debhelper. Now, I also remarked in that blog post that I was not willing to maintain such a list. Also, I may have ranted about static documentation being unhelpful for debhelper as it excludes third-party provided tooling.

Fortunately, a recent update to dh_assistant had provided some basic plumbing for loading dh sequences. This meant that getting a list of all relevant commands for a source package was a lot easier than it used to be. Once you have a list of commands, it would be possible to check all of them for dh's NOOP PROMISE hints. In these hints, a command can assert that it does nothing if a given pkgfile is not present. This led to the new dh_assistant list-guessed-dh-config-files command that will list all declared pkgfiles and which helpers listed them.

With this combined feature set in place, debputy could call dh_assistant to get a list of pkgfiles, pretend they were packager provided files, and annotate those along with the manpage for the relevant debhelper command. The exciting thing about letting debputy resolve the pkgfiles is that debputy will resolve "named" files automatically (debhelper tools will only do so when --name is passed), so it is much more likely to detect named pkgfiles correctly too. Side note: I am going to ignore the elephant in the room for now, which is dh_installsystemd and its package@.service files and the widespread use of debian/foo.service where there is no package called foo. For the latter case, the "proper" name would be debian/pkg.foo.service.

With the new dh_assistant feature done and added to debputy, debputy could now detect the ubiquitous debian/install file. Excellent. But less great was that the very common debian/docs file was not. Turns out that dh_installdocs cannot be skipped by dh, so it cannot have NOOP PROMISE hints. Meh...

Well, dh_assistant could learn about a new INTROSPECTABLE marker in addition to the NOOP PROMISE and then I could sprinkle that into a few commands. Indeed that worked and meant that debian/postinst (etc.) are now also detectable.

At this point, debputy would be able to identify a wide range of debhelper related configuration files in debian/ and at least associate each of them with one or more commands.

Nice, surely, this would be a good place to stop, right...?

Adding more metadata to the files

The debhelper detected files only had a command name and a manpage URI for that command. It would be nice if we could contextualize this a bit more.

Like, is this file installed into the package as-is (like debian/pam), or is it a file list to be processed (like debian/install)? To make this distinction, I could add the most common debhelper file types to my static list and then merge the result together.

Except, I do not want to maintain a full list in debputy. Fortunately, debputy has a quite extensible plugin infrastructure, so I added a new plugin feature to provide this kind of detail, and now I can outsource the problem! I split my definitions into two and placed the generic ones in the debputy-documentation plugin and moved the debhelper related ones to debhelper-documentation. Additionally, third-party dh addons could provide their own debputy plugin to add context to their configuration files.

So, this gave birth to file categories and configuration features, which described each file on different fronts. As an example, debian/gbp.conf could be tagged as a maint-config to signal that it is not directly related to the package build but more of a tool or style preference file. On the other hand, debian/install and debian/debputy.manifest would both be tagged as a pkg-helper-config. Files like debian/pam were tagged as ppf-file for packager provided file, and so on.

I mentioned configuration features above, and those were added because I have had a beef with debhelper's "standard" configuration file format as read by filearray and filedoublearray. These files are often considered simple to understand, but it is hard to know how a tool will actually read a given file. As an example, consider the following:

  • Will the debhelper tool use filearray, filedoublearray or none of them to read the file? This topic has about 2 bits of entropy.
  • Will the config file be executed if it is marked executable, assuming you are using the right compat level? If it is executable, does dh-exec allow renaming for this file? This topic adds 1 or 2 bits of entropy depending on the context.
  • Will the config file be subject to glob expansions? This topic sounds like a boolean but is a complicated mess. The globs can be handled by debhelper as it parses the file for you; in this case, the globs are applied to every token. However, this is not what dh_install does. Here the last token on each line is supposed to be a directory and therefore not subject to globs, so dh_install does the globbing itself afterwards, but only on part of the tokens. That is about 2 bits of entropy more. Actually, it gets worse...
    • If the file is executed, debhelper will refuse to expand globs in the output of the command, which was a deliberate design choice the original debhelper maintainer made when he introduced the feature in debhelper/8.9.12. Except, the dh_install feature interacts with that design choice and does enable glob expansion in the tool output, because it does so manually after its filedoublearray call.

So these "simple" files have way too many combinations of how they can be interpreted. I figured it would be helpful if debputy could highlight these differences, so I added support for those as well.

Accordingly, debian/install is tagged with multiple tags including dh-executable-config and dh-glob-after-execute. Then, I added a datatable of these tags, so it would be easy for people to look up what they meant.

Ok, this seems like a closed deal, right...?

Context, context, context

However, the dh-executable-config tag, among others, is only applicable in compat 9 or later. It does not seem newbie friendly if you are told that this feature exists, but then have to read in the extended description that it actually does not apply to your package.

This problem seems fixable. Thanks to dh_assistant, it is easy to figure out which compat level the package is using. Then tweak some metadata to enable per compat level rules. With that, tags like dh-executable-config only appear for packages using compat 9 or later.

Also, debputy should be able to tell you where packager provided files like debian/pam are installed. We already have the logic for the packager provided files that debputy supports, and I am already using the debputy engine for detecting the files. If only the plugin provided metadata gave me the install pattern, debputy would be able to tell you where this file goes in the package. Indeed, a bit of tweaking later and setting install-pattern to usr/lib/pam.d/{name}, debputy presented me with the correct install path with the package name replacing the {name} placeholder.

Now, I have been using debian/pam as an example, because debian/pam is installed into usr/lib/pam.d in compat 14, but in earlier compat levels, it was installed into etc/pam.d. Well, I already had an infrastructure for doing compat file tags. Off we go to add install-pattern to the compat level infrastructure, and now changing the compat level would change the path. Great. (Bug warning: The value is off-by-one in the current version of debhelper. This is fixed in git.)

Also, while we are in this install-pattern business, a number of debhelper config files cause files to be installed into a fixed directory. Like debian/docs, which causes files to be installed into /usr/share/doc/{package}. Surely, we can expand that as well and provide that bit of context too... and done. (Bug warning: The code currently does not account for the main documentation package context.)

It is a rather common pattern for people to do debian/foo.in files, because they want custom generation of debian/foo. Which means if you have debian/foo, you get "Oh, let me tell you about debian/foo". Then you rename it to debian/foo.in and the result is "debian/foo.in is a total mystery to me!". That is suboptimal, so let's detect those as well, treat them as if they were the original file, but add a tag saying that they are a generated template and which file we suspect they generate.

Finally, if you use debputy, almost all of the standard debhelper commands are removed from the sequence, since debputy replaces them. It would be weird if these commands still contributed configuration files when they are not actually going to be invoked. This mostly happened naturally due to the way the underlying dh_assistant command works. However, any file mentioned by the debhelper-documentation plugin would unfortunately still appear. So off I went to filter the list of known configuration files against the dh_ commands that dh_assistant thought would be used for this package.

Wrapping it up

I was several layers into this and had to dig myself out. I have ended up with a lot of data and metadata. But it was quite difficult for me to arrange the output in a user friendly manner.

However, all this data did seem like it would be useful for any tool that wants to understand more about the package. So to get out of the rabbit hole, I wrapped all of this into JSON for now, and we now have a debputy tool-support annotate-debian-directory command that might be useful for other tools.

To try it out, you can run the debputy tool-support annotate-debian-directory command in an unpacked source package and inspect the JSON it emits.
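
As a rough sketch of how another tool might consume that output, assuming the command writes a single JSON document to stdout (the "annotations", "path" and "documentation-uris" field names below are illustrative placeholders, not the actual schema):

#!/usr/bin/python3
# Hypothetical consumer of the annotation data. Assumes the command prints a
# JSON document on stdout; the key names used here are placeholders and may
# not match the real output format.
import json
import subprocess

result = subprocess.run(
    ["debputy", "tool-support", "annotate-debian-directory"],
    check=True,
    capture_output=True,
    text=True,
)
data = json.loads(result.stdout)

for entry in data.get("annotations", []):
    path = entry.get("path", "<unknown>")
    docs = entry.get("documentation-uris", [])
    print(f"{path}: {', '.join(docs) if docs else 'no documentation found'}")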

Another day, I will figure out how to structure this output so it is useful for non-machine consumers. Suggestions are welcome. :)

Limitations of the approach

As a closing remark, I should probably remind people that this feature relies heavily on declarative features. These include:

  • When determining which commands are relevant, using Build-Depends: dh-sequence-foo is much more reliable than configuring it via the Turing complete configuration we call debian/rules.
  • When debhelper commands use NOOP promise hints, dh_assistant can "see" the config files listed in those hints, meaning the file will at least be detected. For the new introspectable hint and the debputy plugin, it is probably better to wait until the dust settles a bit before adding any of those.

You can help yourself and others to better results by using the declarative way rather than using debian/rules, which is the bane of all introspection!

28 January, 2024 05:35PM by Niels Thykier

January 26, 2024

Bastian Venthur

Investigating popularity of Python build backends over time

Inspired by a Mastodon post by Françoise Conil, who investigated the current popularity of build backends used in pyproject.toml files, I wanted to investigate how the popularity of build backends used in pyproject.toml files evolved over the years since the introduction of PEP-0517 in 2015.

Getting the data

Tom Forbes provides a huge dataset that contains information about every file within every release uploaded to PyPI. To get the current dataset, we can use:

curl -L --remote-name-all $(curl -L "https://github.com/pypi-data/data/raw/main/links/dataset.txt")

This will download approximately 30GB of parquet files, providing detailed information about each file included in a PyPI upload, including:

  1. project name, version and release date
  2. file path, size and line count
  3. hash of the file

The dataset does not contain the actual files themselves, though; more on that in a moment.

Querying the dataset using duckdb

We can now use duckdb to query the parquet files directly. Let’s look into the schema first:

describe select * from '*.parquet';

┌─────────────────┬─────────────┬─────────┐
│   column_name   │ column_type │  null   │
│     varchar     │   varchar   │ varchar │
├─────────────────┼─────────────┼─────────┤
│ project_name    │ VARCHAR     │ YES     │
│ project_version │ VARCHAR     │ YES     │
│ project_release │ VARCHAR     │ YES     │
│ uploaded_on     │ TIMESTAMP   │ YES     │
│ path            │ VARCHAR     │ YES     │
│ archive_path    │ VARCHAR     │ YES     │
│ size            │ UBIGINT     │ YES     │
│ hash            │ BLOB        │ YES     │
│ skip_reason     │ VARCHAR     │ YES     │
│ lines           │ UBIGINT     │ YES     │
│ repository      │ UINTEGER    │ YES     │
├─────────────────┴─────────────┴─────────┤
│ 11 rows                       6 columns │
└─────────────────────────────────────────┘

From all files mentioned in the dataset, we only care about pyproject.toml files that are in the project’s root directory. Since we’ll still have to download the actual files, we need to get the path and the repository to construct the corresponding URL to the mirror that contains all files in a bunch of huge git repositories. Some files are not available on the mirrors; to skip these, we only take files where the skip_reason is empty. We also care about the timestamp of the upload (uploaded_on) and the hash to avoid processing identical files twice:

select
    path,
    hash,
    uploaded_on,
    repository
from '*.parquet'
where
    skip_reason == '' and
    lower(string_split(path, '/')[-1]) == 'pyproject.toml' and
    len(string_split(path, '/')) == 5
order by uploaded_on desc

This query runs for a few minutes on my laptop and returns ~1.2M rows.
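
If you prefer to stay inside Python, the same query can also be run through the duckdb Python module. A rough sketch, assuming the parquet files sit in the current working directory and pandas is installed for the DataFrame conversion:

import duckdb

# Run the query via the duckdb Python API and materialize the result as a
# pandas DataFrame for further processing.
query = """
select path, hash, uploaded_on, repository
from '*.parquet'
where
    skip_reason == '' and
    lower(string_split(path, '/')[-1]) == 'pyproject.toml' and
    len(string_split(path, '/')) == 5
order by uploaded_on desc
"""
df = duckdb.sql(query).df()
print(len(df), "pyproject.toml files found")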

Getting the actual files

Using the repository and path, we can now construct a URL from which we can fetch the actual file for further processing:

url = f"https://raw.githubusercontent.com/pypi-data/pypi-mirror-{repository}/code/{path}"

We can download the individual pyproject.toml files and parse them to read the build-backend into a dictionary mapping the file-hash to the build backend. Downloads on GitHub are rate-limited, so downloading 1.2M files will take a couple of days. By skipping files with a hash we’ve already processed, we can avoid downloading the same file more than once, cutting the required downloads by circa 50%.
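
A rough sketch of that download-and-parse loop could look like the following (assuming Python 3.11+ for tomllib; rate-limit handling and retries are left out):

import tomllib
import urllib.request

def build_backends(rows):
    """Map file hash -> build backend, fetching each unique file only once.

    `rows` is assumed to be an iterable of (path, hash, uploaded_on,
    repository) tuples, e.g. df.itertuples(index=False) on the duckdb result.
    """
    backends = {}
    for path, file_hash, _uploaded_on, repository in rows:
        if file_hash in backends:
            continue  # identical file already processed
        url = (
            "https://raw.githubusercontent.com/pypi-data/"
            f"pypi-mirror-{repository}/code/{path}"
        )
        with urllib.request.urlopen(url) as response:
            raw = response.read()
        try:
            doc = tomllib.loads(raw.decode("utf-8"))
        except (tomllib.TOMLDecodeError, UnicodeDecodeError):
            continue  # skip malformed files
        backend = doc.get("build-system", {}).get("build-backend") or "unknown"
        # keep only the top-level name (e.g. setuptools.build_meta -> setuptools)
        backends[file_hash] = backend.split(".")[0]
    return backends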

Results

Assuming the data is complete and my analysis is sound, these are the findings:

There is a surprising number of build backends in use, but the number of uploads per build backend decreases quickly, with a long tail of single uploads:

>>> results.backend.value_counts()
backend
setuptools        701550
poetry            380830
hatchling          56917
flit               36223
pdm                11437
maturin             9796
jupyter             1707
mesonpy              625
scikit               556
                   ...
postry                 1
tree                   1
setuptoos              1
neuron                 1
avalon                 1
maturimaturinn         1
jsonpath               1
ha                     1
pyo3                   1
Name: count, Length: 73, dtype: int64

We pick only the top 4 build backends, and group the remaining ones (including PDM and Maturin) into “other” so they are accounted for as well.
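
In pandas terms, that grouping might look roughly like this (a sketch that reuses the hash-to-backend mapping from the earlier download sketch; in the real analysis, results also carries the upload timestamps):

import pandas as pd

# Build a DataFrame from the hash -> backend mapping and collapse everything
# outside the four most common backends into "other".
results = pd.DataFrame({"backend": list(backends.values())})
top = results["backend"].value_counts().nlargest(4).index
results["backend_group"] = results["backend"].where(results["backend"].isin(top), "other")
print(results["backend_group"].value_counts())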

The following plot shows the relative distribution of build backends over time. Each bin represents a time span of 28 days. I chose 28 days to reduce visual clutter. Within each bin, the height of the bars corresponds to the relative proportion of uploads during that time interval:

Relative distribution of build backends over time

Looking at the right side of the plot, we see the current distribution. It confirms Françoise’s findings about the current popularity of build backends:

  • Setuptools: ~50%
  • Poetry: ~33%
  • Hatch: ~10%
  • Flit: ~3%
  • Other: ~4%

Between 2018 and 2020 the graph exhibits significant fluctuations, due to the relatively low number of uploads utilizing pyproject.toml files. During that early period, Flit started as the most popular build backend, but was eventually displaced by Setuptools and Poetry.

Between 2020 and 2022, the overall usage of pyproject.toml files increased significantly. By the end of 2022, the share of Setuptools peaked at 70%.

After 2020, other build backends experienced a gradual rise in popularity. Among these, Hatch emerged as a notable contender, steadily gaining traction and ultimately stabilizing at 10%.

We can also look into the absolute distribution of build backends over time:

Absolute distribution of build backends over time

The plot shows that Setuptools has the strongest growth trajectory, surpassing all other build backends. Poetry and Hatch are growing at a comparable rate, but since Hatch started roughly 4 years after Poetry, it’s lagging behind in popularity. Despite not being among the most widely used backends anymore, Flit maintains a steady and consistent growth pattern, indicating its enduring relevance in the Python packaging landscape.

The script for downloading and analyzing the data can be found in my GitHub repository. It contains the results of the duckdb query (so you don’t have to download the full dataset) and the pickled dictionary mapping the file hashes to the build backends, saving you days of downloading and analyzing the pyproject.toml files yourself.

26 January, 2024 06:00PM by Bastian Venthur

Steinar H. Gunderson

Life with bcachefs

After bcachefs suddenly got merged into mainline in 6.7 (after years and years of development and arguing on LKML), I've been curious; could this be an interesting thing to test out? I gave up btrfs many years ago as just way too slow, and to be honest I've been quite fine with ext4/xfs + LVM + md, but there was something in it that spoke to me nevertheless.

So the last month or so, I've been having /home on my primary machine (server) as bcachefs, with a small 10GB SSD backing. Surprisingly… it's been quite OK? It doesn't really speak all that well with LVM (since it's very much intended as a replacement, you can't do stuff like let one SSD volume back two different HDD volumes), and its erasure coding parts are not ready yet (so I'll still need MD underneath for the RAID-6), but it's been stable. It hasn't eaten any of my files, it hasn't crashed, it hasn't had weird performance hiccups. For something that's still so new in mainline, that's actually pretty good.

There have been bugs. The built-in compression doesn't always manage to compress everything that it should (although it is definitely an improvement over no compression at all!), rebalance hasn't always woken up when it should. There were scary fsck warnings on a mount once. bcachefs migrate (which is one of those things that sound totally bonkers, and I guess is), whose concept I assume was copied from btrfs, didn't work at first for me until I bugfixed it.

And I see sometimes people on #bcache report errors, although it seems they do get timely help (and not just of the sort “blow away your entire filesystem and restore from backups”). You can run this as your primary filesystem right now, but you are definitely an early adopter then, and you have to be ready for some cuts. Parts of online fsck are coming in 6.8, but it's still missing stuff like scrub, the aforementioned erasure coding (RAID that isn't RAID-0 or RAID-1 or a combination thereof), send/receive, and a few other things.

Integration into Debian is also pretty young still. On another computer (my main desktop machine, although that's not my most used one), I now have / on a multi-device bcachefs, and that is… rough. Upstream is increasingly writing things in Rust, which needs some working out. GRUB can't read it (not that I really care about /boot on bcachefs). There's obviously no installer support. But again, it hasn't eaten anything yet.

So if you like living life on the edge, I guess you could give it a spin. But remember those backups :-)

26 January, 2024 04:33PM

Dima Kogan

mrcal 2.4 released!

mrcal 2.4 is out; see the release notes. Once again, this is mostly a bug-fix release en route to the big new features coming in 3.0. The most noteworthy fixes:

  • mrcal can be built with clang. Try it out like this: CC=clang CXX=clang++ make. This opens up some portability improvements, such as making it easier to run on Windows.
  • Full dense stereo pipeline in C.
  • Tools to support more file formats:

    These are experimental. Please let me know if these are or aren't useful.

The portability work was motivated by Matt Morley, who was interested in integrating mrcal into PhotonVision, the toolkit used by students in the FIRST Robotics Competition. Matt completed that work, and mrcal is now a part of PhotonVision 2024.1.2! Thanks, Matt!

I don't know if there will be a mrcal 2.5, but the next interesting release will be mrcal 3.0. The biggest internal rework is complete: the new cross-reprojection uncertainty quantification method is implemented, tested and documented. The results are very promising, but lots needs to happen before we can reliably compute intrinsics without chessboards and produce full SFM solves in mrcal and all the related things.

26 January, 2024 02:07AM by Dima Kogan