December 17, 2025

hackergotchi for Matthew Garrett

Matthew Garrett

How did IRC ping timeouts end up in a lawsuit?

I recently won a lawsuit against Roy and Rianne Schestowitz, the authors and publishers of the Techrights and Tuxmachines websites. The short version of events is that they were subject to an online harassment campaign, which they incorrectly blamed me for. They responded with a large number of defamatory online posts about me, which the judge described as unsubstantiated character assassination and consequently awarded me significant damages. That's not what this post is about, as such. It's about the sole meaningful claim made that tied me to the abuse.

In the defendants' defence and counterclaim[1], 15.27 asserts in part: "The facts linking the Claimant to the sock puppet accounts include, on the IRC network: simultaneous dropped connections to the mjg59_ and elusive_woman accounts. This is so unlikely to be coincidental that the natural inference is that the same person posted under both names." "elusive_woman" here is an account linked to the harassment, and "mjg59_" is me. This is actually a surprisingly interesting claim to make, and it's worth going into in some more detail.

The event in question occurred on the 28th of April, 2023. You can see a line reading *elusive_woman has quit (Ping timeout: 2m30s), followed by one reading *mjg59_ has quit (Ping timeout: 2m30s). The timestamp listed for the first is 09:52, and for the second 09:53. Is that actually simultaneous? We can gain some more information: if you hover over the timestamp links on the right hand side you can see that each link is accurate to the second even if that's not displayed. The first event took place at 09:52:52, and the second at 09:53:03. That's 11 seconds apart, which is clearly not simultaneous, but maybe it's close enough. Figuring out more requires knowing what a "ping timeout" actually means here.

The IRC server in question is running Ergo (link to source code), and the relevant function is handleIdleTimeout(). The logic here is fairly simple - track the time since activity was last seen from the client. If that time exceeds DefaultIdleTimeout (which defaults to 90 seconds) and a ping hasn't been sent yet, send a ping to the client. If a ping has been sent and the idle time exceeds DefaultTotalTimeout (which defaults to 150 seconds), disconnect the client with a "Ping timeout" message. There's no special logic for handling the ping reply - a pong simply counts as any other client activity and resets the "last activity" value and timeout.

What does this mean? Well, for a start, two clients running on the same system will only have simultaneous ping timeouts if their last activity was simultaneous. Let's imagine a machine with two clients, A and B. A sends a message at 02:22:59. B sends a message 2 seconds later, at 02:23:01. The idle timeout for A will fire at 02:24:29, and for B at 02:24:31. A ping is sent for A at 02:24:29 and is responded to immediately - the idle timeout for A is now reset to 02:25:59, 90 seconds later. The machine hosting A and B has its network cable pulled out at 02:24:30. The ping to B is sent at 02:24:31, but receives no reply. A minute later, at 02:25:31, B quits with a "Ping timeout" message. A ping is sent to A at 02:25:59, but receives no reply. A minute later, at 02:26:59, A quits with a "Ping timeout" message. Despite both clients having their network interrupted simultaneously, the ping timeouts occur 88 seconds apart.
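
To make the timing easy to check, here is a small self-contained Go model of the logic described above, replaying exactly this scenario. This is not Ergo's code; the names and structure are invented for illustration, but the two constants match the defaults mentioned earlier.

package main

import (
  "fmt"
  "time"
)

const (
  idleTimeout  = 90 * time.Second  // DefaultIdleTimeout: idle time before a PING is sent
  totalTimeout = 150 * time.Second // DefaultTotalTimeout: idle time before disconnection
)

type client struct {
  name         string
  lastActivity time.Time
  pingSent     bool
  quit         bool
}

func main() {
  at := func(s string) time.Time {
    t, err := time.Parse("15:04:05", s)
    if err != nil {
      panic(err)
    }
    return t
  }
  a := &client{name: "A", lastActivity: at("02:22:59")}
  b := &client{name: "B", lastActivity: at("02:23:01")}
  netDown := at("02:24:30") // the machine hosting both clients loses its network here

  for now := at("02:22:59"); now.Before(at("02:27:30")); now = now.Add(time.Second) {
    for _, c := range []*client{a, b} {
      if c.quit {
        continue
      }
      switch idle := now.Sub(c.lastActivity); {
      case c.pingSent && idle >= totalTimeout:
        fmt.Printf("%s %s has quit (Ping timeout: %s)\n", now.Format("15:04:05"), c.name, totalTimeout)
        c.quit = true
      case !c.pingSent && idle >= idleTimeout:
        if now.Before(netDown) {
          c.lastActivity = now // the client answers the PING; a PONG is just activity
        } else {
          c.pingSent = true // a PING goes out, but no reply will ever come
        }
      }
    }
  }
}

Running this prints B's quit at 02:25:31 and A's at 02:26:59 - the same 88-second gap as above, produced by a single network interruption.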

So, two clients disconnecting with ping timeouts 11 seconds apart is not incompatible with the network connection being interrupted simultaneously - depending on activity, simultaneous network interruption may result in disconnections up to 90 seconds apart. But another way of looking at this is that network interruptions may occur up to 90 seconds apart and generate simultaneous disconnections[2]. Without additional information it's impossible to determine which is the case.

This already casts doubt over the assertion that the disconnection was simultaneous, but if this is unusual enough it's still potentially significant. Unfortunately for the Schestowitzes, even looking just at the elusive_woman account, there were several cases where elusive_woman and another user had a ping timeout within 90 seconds of each other - including one case where elusive_woman and schestowitz[TR] disconnect 40 seconds apart. By the Schestowitzes' argument, it's also a natural inference that elusive_woman and schestowitz[TR] (one of Roy Schestowitz's accounts) are the same person.

We didn't actually need to make this argument, though. In England it's necessary to file a witness statement describing the evidence that you're going to present in advance of the actual court hearing. Despite being warned of the consequences on multiple occasions the Schestowitzes never provided any witness statements, and as a result weren't allowed to provide any evidence in court, which made for a fairly foregone conclusion.

[1] As well as defending themselves against my claim, the Schestowitzes made a counterclaim on the basis that I had engaged in a campaign of harassment against them. This counterclaim failed.

[2] Client A and client B both send messages at 02:22:59. A falls off the network at 02:23:00, has a ping sent at 02:24:29, and has a ping timeout at 02:25:29. B falls off the network at 02:24:28, has a ping sent at 02:24:29, and has a ping timeout at 02:25:29. Simultaneous disconnects despite over a minute of difference in the network interruption.


17 December, 2025 01:17PM

December 16, 2025

Christian Kastner

Simple-PPA, a minimalistic PPA implementation

Today, the Debusine developers launched Debusine repositories, a beta implementation of PPAs. In the announcement, Colin remarks that "[d]iscussions about this have been happening for long enough that people started referring to PPAs for Debian as 'bikesheds'"; a characterization that I'm sure most will agree with.

So it is with great amusement that on this same day, I launch a second PPA implementation for Debian: Simple-PPA.

Simple-PPA was never meant to compete with Debusine, though. In fact, it's entirely the opposite: from discussions at DebConf, I knew that it was only a matter of time until Debusine gained a PPA-like feature, but I needed a stop-gap solution earlier, and with some polish, the Python script that was already doing APT processing for apt.ai.debian.net recently became Simple-PPA.

Consequently, Simple-PPA lacks (and will always lack) all of the features that Debusine offers: there is no auto-building, no CI, nor any other type of QA. It's the simplest possible type of APT repository: you just upload packages, they get imported into an archive, and the archive is exposed via a web server. Under the hood, reprepro does all the heavy lifting.

However, this also means it's trivial to set up. The following is the entire configuration that simple-ppa.debian.net started with:

# simple-ppa.conf

[CORE]
SignWith = 2906D748B7551BC8
ExportDir = /srv/www/simple-ppa
MailFrom: Simple-PPA <admin@simple-ppa.debian.net>
Codenames = sid forky trixie trixie-backports bookworm bookworm-backports
AlsoAllow = forky: unstable
            trixie: unstable
            bookworm: unstable

[simple-ppa-dev]
Label = Simple-PPA's self-hosted development repository
# ckk's key
Uploaders = allow * by key E76004C5CEF0C94C+

[ckk]
Label = Christian Kastner at Simple-PPA
Uploaders = allow * by key E76004C5CEF0C94C+

The CORE section just sets some defaults and sensible rules. Two PPAs are defined, simple-ppa-dev and ckk, which accept packages signed by the key with the ID E76004C5CEF0C94C. These PPAs use the global defaults, but individual PPAs can override Architectures, Suites, and Components, and of course allow an arbitrary number of users.

Users upload to this archive using SFTP (e.g.: with dput-ng). Every 15 minutes, uploads get processed, with ACCEPTED or REJECTED mails sent to the Maintainer address. The APT archive of all PPAs is signed with a single global key.

I myself intend to use Debusine repositories soon, as the autobuilding and the QA tasks Debusine offers are something I need. However, I do still see a niche use case for Simple-PPA: when you need an APT archive, but don't want to do a deep dive into reprepro (which is extremely powerful).

If you'd like to give Simple-PPA a try, head over to simple-ppa.debian.net and follow the instructions for users.

16 December, 2025 09:15PM by Christian Kastner

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Lichess

I wish more pages on the Internet were like Lichess. It's fast. It feels like it only does one thing (even though it's really more like seven or eight)—well, perhaps except for the weird blogs. It does not feel like it's trying to sell me anything; in fact, it feels like it hardly even wants my money. (I've bought two T-shirts from their Spreadshirt, to support them.) It's super-efficient; I've seen their (public) balance sheets, and it feels like it runs off of a shoestring budget. (Take note, Wikimedia Foundation!) And, perhaps most relieving in this day and age, it does not try to grift any AI.

Yes, I know, chess.com is the juggernaut, and has probably done more for chess' popularity than FIDE ever did. But I still go to Lichess every now and then and just click that 2+1 button. (Generally without even logging in, so that I don't feel angry about it when I lose.) Be more like Lichess.

16 December, 2025 06:45PM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Debusine repositories now in beta (by Colin Watson)

We’re happy to announce that Debusine can now be used to maintain APT-compatible add-on package repositories for Debian. This facility is available in public beta to Debian developers and maintainers.

Why?

Debian developers typically put most of their effort towards maintaining the main Debian archive. However, it’s often useful to have other places to work, for various reasons:

  • Developers working on a set of packages might need to check that changes to several of them all work properly together on a real system.
  • Somebody fixing a bug might need to ask affected users to test the fix before uploading it to Debian.
  • Some projects are difficult to package in a way that meets Debian policy, or are too niche to include in Debian, but it’s still useful to distribute them in a packaged form.
  • For some packages, it’s useful to provide multiple upstream versions for multiple Debian releases, even though Debian itself would normally want to keep that to a minimum.

The Ubuntu ecosystem has had PPAs for a long time to meet these sorts of needs, but people working directly on Debian have had to make do with putting things together themselves using something like reprepro or aptly. Discussions about this have been happening for long enough that people started referring to PPAs for Debian as “bikesheds”, and users often find themselves trying to use Ubuntu PPAs on Debian systems and hoping that dependencies will be compatible enough for things to more or less work. This clearly isn’t ideal, and solving it is one of Freexian’s objectives for Debusine.

Developers publishing packages to Debusine repositories can take advantage of all Debusine’s existing facilities, including a battery of QA tests and regression tracking (coming soon). Repositories are signed using per-repository keys held in Debusine’s signing service, and uploads to repositories are built against the current contents of that repository as well as the corresponding base Debian release. All repositories include automatic built-in snapshot capabilities.

Who can use this service?

We’ve set up debusine.debian.net to allow using repositories. All Debian Developers and Debian Maintainers can log in there and publish packages to it. The resulting repositories are public by default.

debusine.debian.net only allows packages with licences that allow distribution by Debian, and it is intended primarily for work that could reasonably end up in Debian; Freexian reserves the right to remove repositories from it.

How can I use it?

If you are a Debian contributor, we’d be very excited to have you try this out, especially if you give us feedback. We have published instructions for developers on using this. Since this is a beta service, you can expect things to change, but we’ll maintain compatibility where we can.

If you’re interested in using this in a commercial setting, please contact Freexian to discuss what we can do for you.

16 December, 2025 12:00AM by Colin Watson

Monthly report about Debian Long Term Support, November 2025 (by Santiago Ruano Rincón)

The Debian LTS Team, funded by [Freexian’s Debian LTS offering](https://www.freexian.com/lts/debian/), is pleased to report its activities for November.

Activity summary

During the month of November, 18 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below).

The team released 33 DLAs fixing 219 CVEs.

The LTS Team kept going with the usual cadence of preparing security updates for Debian 11 “bullseye”, but also for Debian 12 “bookworm”, Debian 13 “trixie” and even Debian unstable. As in previous months, we are pleased to say that there have been multiple contributions of LTS uploads by Debian Fellows outside the regular LTS Team.

Notable security updates:

  • Guilhem Moulin prepared DLA 4365-1 for unbound, a caching DNS resolver, fixing a cache poisoning vulnerability that could lead to domain hijacking.
  • Another update related to DNS software was made by Andreas Henriksson. Andreas completed the work on bind9, released as DLA 4364-1 to fix cache poisoning and Denial of Service (DoS) vulnerabilities.
  • Chris Lamb released DLA 4374-1 to fix a potential arbitrary code execution vulnerability in pdfminer, a tool for extracting information from PDF documents.
  • Ben Hutchings published a regular security update for the linux 6.1 bullseye backport, as DLA 4379-1.
  • A couple of other important recurrent updates were prepared by Emilio Pozuelo, who handled firefox-esr and thunderbird (in collaboration with Christoph Goehre), published as DLA 4370-1 and DLA 4372-1, respectively.

Contributions from fellows outside the LTS Team:

  • Thomas Goirand uploaded a bullseye update for keystone and swift
  • Jeremy Bícha prepared the bullseye update for gst-plugins-base1.0
  • As mentioned above, Christoph Goehre prepared the bullseye update for thunderbird.
  • Mathias Behrle provided feedback about the tryton-server and tryton-sao vulnerabilities that were disclosed last month, and helped to review the bullseye patches for tryton-server.

Other than the regular LTS updates for bullseye, the LTS Team has also contributed updates to the latest Debian releases:

  • Bastien Roucariès prepared a bookworm update for squid, the web proxy cache server.
  • Carlos Henrique Lima Melara filed a bookworm point update request for gdk-pixbuf to fix CVE-2025-7345, a heap buffer overflow vulnerability that could lead to arbitrary code execution.
  • Daniel Leidert prepared bookworm and trixie updates for r-cran-gh to fix CVE-2025-54956, an issue that may expose user credentials in HTTP responses.
  • Along with the bullseye updates for unbound mentioned above, Guilhem helped to prepare the trixie update for unbound.
  • In collaboration with Lukas Märdian, Tobias Frost prepared trixie and bookworm updates for log4cxx, the C++ port of the logging framework for Java.
  • Jochen Sprickerhof prepared a bookworm update for syslog-ng.
  • Utkarsh completed the bookworm update for wordpress, addressing multiple security issues in the popular blogging tool.

Beyond security updates, there has been a significant effort in revamping our documentation, aiming to make the processes more clear and consistent for all the members of the team. This work was mainly carried out by Sylvain, Jochen and Roberto.

We would like to express our gratitude to the sponsors for making the Debian LTS project possible. Also, special thanks to the fellows outside the LTS team for their valuable help.

Individual Debian LTS contributor reports

Thanks to our sponsors


16 December, 2025 12:00AM by Santiago Ruano Rincón

December 15, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

Unique security and privacy threats of large language models — a comprehensive survey

This post is an unpublished review for Unique security and privacy threats of large language models — a comprehensive survey

Much has been written about large language models (LLMs) being a risk to user security and privacy, including the issue that, being trained with datasets whose provenance and licensing are not always clear, they can be tricked into producing bits of data that should not be divulged. I took on reading this article as a means to gain a better understanding of this area. The article completely fulfilled my expectations.

This is a review article, which is not a common format for me to follow: instead of digging deep into a given topic, including an experiment or some way of proving the authors’ claims, a review article will contain a brief explanation and taxonomy of the issues at hand, and a large number of references covering the field. And, at 36 pages and 151 references, that’s exactly what we get.

The article is roughly split into two parts: The first three sections present the issue of security and privacy threats as seen by the authors, as well as the taxonomy within which the review will be performed, and sections 4 through 7 cover the different moments in the life cycle of an LLM (at pre-training, during fine-tuning, when deploying systems that will interact with end-users, and when deploying LLM-based agents), detailing the relevant publications for each. For each of said moments, the authors first explore the nature of the relevant risks, then present relevant attacks, and finally close by outlining countermeasures to said attacks.

The text is accompanied throughout by tables, pipeline diagrams and attack examples that visually guide the reader. While the examples presented are sometimes a bit simplistic, they are a welcome guide and aid to follow the explanations; the explanations for each of the attack models are necessarily not very deep, and I was often left wondering whether I had correctly understood a given topic, or wanting to dig deeper – but this being a review article, it is absolutely understandable.

The authors write easy-to-read prose, and this article fills an important spot in understanding this large, important, and emerging area of LLM-related study.

15 December, 2025 07:30PM

Russ Allbery

Review: Brigands & Breadknives

Review: Brigands & Breadknives, by Travis Baldree

Series: Legends & Lattes #3
Publisher: Tor
Copyright: 2025
ISBN: 1-250-33489-6
Format: Kindle
Pages: 325

Brigands & Breadknives is a secondary-world sword-and-sorcery fantasy and a sequel to both Legends & Lattes and Bookshops & Bonedust. It takes place shortly after Legends & Lattes chronologically, but Fern, the protagonist, was introduced in the Bookshops & Bonedust prequel.

You may have noticed I didn't describe this as cozy fantasy. That is intentional.

When we left Fern at the end of Bookshops & Bonedust, the rattkin was running a bookshop in the town of Murk. As Brigands & Breadknives opens, Fern is moving, for complicated and hard-to-describe personal reasons, to Thune where Viv has her coffee shop. Her plan is to open a new bookstore next door to Legends and Lattes. This is exactly the sort of plot one might expect from this series, and the first few chapters feel like yet another version of the first two novels. Then Fern makes an impulsive and rather inexplicable (even to herself) decision and the plot goes delightfully sideways.

Brigands & Breadknives is not, as Baldree puts it in the afterword, a book about fantasy small-business ownership as the answer to all of life's woes. It is, instead, a sword and sorcery story about a possibly immortal elven bounty hunter, her utterly baffling goblin prisoner, and a rattkin bookseller who becomes their unexpected travel companion for reasons she can't explain. It's a story about a mid-life crisis in a world and with supporting characters that I can only describe as inspired by a T. Kingfisher novel.

Baldree is not Ursula Vernon, of course. This book does not contain paladins or a romance, possibly to the relief of some readers. It's slower, a bit more introspective, and doesn't have as sharp of edges or the casual eerie unsettlingness. But there is a religious order that worships a tentacled space horror for entirely unexpected reasons, pompous and oleaginous talking swords with verbose opinions about everything, a mischievously chaotic orange-haired goblin who quickly became one of my favorite fantasy characters and then kept getting better, and a whole lot of heart. You may see why Kingfisher was my first thought for a comparison point.

Unlike Baldree's previous novels, there is a lot of combat and injury. I think some people will still describe this book as cozy, and I'm not going to argue too strongly because the conflicts are a bit lighter than the sort of rape and murder one would see in a Mercedes Lackey novel. But to me this felt like sword and sorcery in a Dungeons and Dragons universe made more interesting by letting the world-building go feral and a little bit sarcastic. Most of the book is spent traveling, there are a lot of random encounters that build into a connected plot, and some scenes (particularly the defense of the forest village) felt like they could have sold to the Sword and Sorceress anthology series.

Also, this was really good! I liked both Legends & Lattes and Bookshops & Bonedust, maybe a bit more than the prevailing opinion among reviewers since the anachronisms never bothered me, but I wasn't sure whether to dive directly into this book because I was expecting more of the same. This is not more of the same. I think it's clearly better writing and world-building than either of the previous books. It helps that Fern is the protagonist; as much as I like Viv, I think Fern is a more interesting character, and I am glad she got a book of her own.

Baldree takes a big risk on the emotional arc of this book. Fern starts the story in a bad state and makes some decisions to kick off the plot that are difficult to defend. She beats herself up for those decisions for most of the book, deservedly, and parts of that emotional turmoil are difficult to read. Baldree resists the urge to smooth everything over and instead provides a rather raw sense of depression, avoidance, and social anxiety that some readers are going to have to brace themselves for.

I respect the decision to not write the easy series book people probably expected, but I'm not sure Fern's emotional arc quite worked. Baldree is hinting at something that's hard to describe logically, and I'm not sure he was able to draw a clear enough map of Fern's thought process for the reader to understand her catharsis. The "follow your passion" self-help mindset has formed a gravitational singularity in the vicinity of this book's theme, it takes some skillful piloting to avoid being sucked into its event horizon, and I don't think Baldree quite managed to escape it. He made a valiant attempt, though, and it created a far more interesting book than one about safer emotions.

I wanted more of an emotional payoff than I got, but the journey, even with the moments of guilt and anxiety, was so worth it. The world-building is funnier and more interesting than the previous books of the series, and the supporting cast is fantastic. If you bailed on the series but you like sword and sorcery and T. Kingfisher novels, consider returning. You do probably need to read Bookshops & Bonedust first, if you haven't already, since it helps to know the start of Fern's story.

Recommended, and shortcomings aside, much better than I had expected.

Content notes: Bloody sword fights, major injury, some very raw emotions about letting down friends and destroying friendships.

Rating: 8 out of 10

15 December, 2025 03:25AM

December 14, 2025

hackergotchi for Evgeni Golov

Evgeni Golov

Home Assistant, Govee Lights Local, VLANs, Oh my!

We recently bought some Govee Glide Hexa Light Panels, because they have a local LAN API that is well integrated into Home Assistant. Or so we thought.

Our network is not that complicated, but there is a dedicated VLAN for IOT devices. Home Assistant runs in a container (with network=host) on a box in the basement, and that box has a NIC in the IOT VLAN so it can reach devices there easily. So far, this has never been a problem.

Enter the Govee LAN API. Or maybe its Python implementation. Not exactly sure who's to blame here.

The API involves sending JSON over multicast, which the Govee device will answer to.

No devices found on the network

After turning logging for homeassistant.components.govee_light_local to 11, erm debug, we see:

DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] Starting discovery with IP 192.168.42.2
DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] No devices found with IP 192.168.42.2

That's not the IP address in the IOT VLAN!

Turns out the integration recently got support for multiple NICs, but Home Assistant doesn't just use all the interfaces it sees by default.

You need to go to Settings → Network → Network adapter and deselect "Autoconfigure", which will allow you to select individual interfaces.

Once you've done that, you'll see Starting discovery with IP messages for all selected interfaces, and adding Govee Lights Local will work.

14 December, 2025 03:48PM by evgeni

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

BH 1.90.0-1 on CRAN: New Upstream

Boost

Boost is a very large and comprehensive set of (peer-reviewed) libraries for the C++ programming language, containing well over one hundred individual libraries. The BH package provides a sizeable subset of header-only libraries for (easier, no linking required) use by R. It is fairly widely used: the (partial) CRAN mirror logs (aggregated from the cloud mirrors) show over 41.5 million package downloads.

Version 1.90.0 of Boost was released a few days ago following the regular Boost release schedule of April, August and December releases. As before, we packaged it almost immediately and started testing following our annual update cycle which strives to balance being close enough to upstream and not stressing CRAN and the user base too much. The reverse depends check revealed only one really minor issue among the over three hundred direct reverse dependencies. And that issue was addressed yesterday within hours by a truly responsive maintainer (and it helped that a related issue had been addressed months earlier with version 1.89.0). So big thanks to Jean-Romain Roussel for the prompt fix, and to Andrew Johnson for the earlier test with 1.89.0.

As last year with 1.87.0, no new Boost libraries were added to BH so the (considerable) size is more or less unchanged. It led to CRAN doing a manual inspection but as there were no other issues it sailed through and is now in the CRAN repository.

The short NEWS entry follows.

Changes in version 1.90.0-1 (2025-12-13)

  • Upgrade to Boost 1.90.0, patched as usual to comment-out diagnostic suppression messages per the request of CRAN

  • Minor upgrades to continuous integration

Via my CRANberries, there is a diffstat report relative to the previous release. Comments and suggestions about BH are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

14 December, 2025 03:03PM

December 13, 2025

hackergotchi for Junichi Uekawa

Junichi Uekawa

I was wondering if there was some debian thread and noticed maybe something is broken in my mail setup.

I was wondering if there was some debian thread and noticed maybe something is broken in my mail setup. The amount of emails I am receiving seems to be very small.

13 December, 2025 01:41AM by Junichi Uekawa

December 12, 2025

hackergotchi for Freexian Collaborators

Freexian Collaborators

Debian Contributions: Updates about DebConf Video Team Sprint, rebootstrap, SBOM tooling in Debian and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-11

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DebConf Video Team Sprint

The DebConf Video Team records, streams, and publishes talks from DebConf and many miniDebConfs. A lot of the infrastructure development happens during setup for these events, but we also try to organize a sprint once a year to work on infrastructure, when there isn’t a DebConf about to happen. Stefano attended the sprint in Herefordshire this year and wrote up a report.

rebootstrap, by Helmut Grohne

A number of jobs were stuck in architecture-specific failures. gcc-15 and dpkg still occasionally disagree about whether PIE is enabled, and big-endian mipsen needed fixes in systemd. Beyond this, regular uploads of libxml2 and gcc-15 required fixes and rebasing of pending patches.

Earlier, Loongson used rebootstrap to create the initial package set for loong64 and Miao Wang now submitted their changes. Therefore, there is now initial support for suites other than unstable and use with derivatives.

Building the support for Software Bill Of Materials tooling in Debian, by Santiago Ruano Rincón

Vendors of Debian-based products may (and arguably should) be paying attention to evolving requirements in different jurisdictions (such as the CRA, or updates on CISA’s Minimum Elements for a Software Bill of Materials) that require them to make a Software Bill of Materials (SBOM) available for their products. It is important, then, to have tools in Debian that make it easier to produce such SBOMs.

In this context, Santiago continued the work on packaging libraries related to SBOMs. This includes the packaging of the SPDX Python library (python-spdx-tools), and its dependencies rdflib and mkdocs-include-markdown-plugin. System Package Data Exchange (SPDX), defined by ISO/IEC 5962:2021, is an open standard capable of representing systems with software components as SBOMs and other data and security references. SPDX and CycloneDX (whose Python library python3-cyclonedx-lib was packaged by prior efforts this year) are the two main SBOM standards available today.

Miscellaneous contributions

  • Carles improved po-debconf-manager: added checking status of bug reports automatically via python-debianbts; changed some command line options naming or output based on user feedback; finished refactoring user interaction to rich; codebase is now flake8-compliant; added type safety with mypy.
  • Carles, using po-debconf-manager, created 19 bug reports for translations where the merge requests were pending; reviewed and created merge requests for 4 packages.
  • Carles planned a second version of the tool that detects packages that Recommends or Suggests packages which are not in Debian. He is taking ideas from dumat.
  • Carles submitted a pull request to python-unidiff2 (adapted from the original pull request to python-unidiff). He also started preparing a qnetload update.
  • Stefano did miscellaneous python package updates: mkdocs-macros-plugin, python-confuse, python-pip, python-mitogen.
  • Stefano reviewed a beets upload for a new maintainer who is taking it over.
  • Stefano handled some debian.net infrastructure requests.
  • Stefano updated debian.social infrastructure for the “trixie” point release.
  • The update broke jitsi.debian.social; Stefano put some time into debugging it and eventually enlisted upstream assistance, and upstream solved the problem!
  • Stefano worked on some patches for Python that help Debian:
    • GH-139914: The main HP PA-RISC support patch for 3.14.
    • GH-141930: We observed an unhelpful error when failing to write a .pyc file during package installation. We may have fixed the problem, and at least made the error better.
    • GH-141011: Ignore missing ifunc support on HP PA-RISC.
  • Stefano spun up a website for hamburg2026.mini.debconf.org.
  • Raphaël reviewed a merge request updating tracker.debian.org to rely on bootstrap version 5.
  • Emilio coordinated various transitions.
  • Helmut sent patches for 26 cross build failures.
  • Helmut officially handed over the cleanup of the /usr-move transition.
  • Helmut monitored the transition moving libcrypt-dev out of build-essential and bumped the remaining bugs to rc-severity in coordination with the release team.
  • Helmut updated the Build-Profiles patch for debian-policy incorporating feedback from Sean Whitton with a lot of help from Nattie Mayer-Hutchings and Freexian colleagues.
  • Helmut discovered that the way mmdebstrap deals with start-stop-daemon may result in broken output and sent a patch.
  • As a result of armel being removed from “sid”, but not from “forky”, the multiarch hinter broke. Helmut fixed it.
  • Helmut uploaded debvm accepting a patch from Luca Boccassi to fix it for newer systemd.
  • Colin began preparing for the second stage of the OpenSSH GSS-API key exchange package split.
  • Colin caught and fixed a devscripts regression due to it breaking part of Debusine.
  • Colin packaged django-pgtransaction and backported it to “trixie”, since it looks useful for Debusine.
  • Thorsten uploaded the packages lprng, cpdb-backend-cups, cpdb-libs and ippsample to fix some RC bugs as well as other bugs that accumulated over time. He also uploaded cups-filters to all Debian releases to fix three CVEs.

12 December, 2025 12:00AM by Anupa Ann Joseph

December 11, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

#056: Running r-ci with R-devel

Welcome to post 56 in the R4 series.

The recent post #54 reviewed a number of earlier posts on r-ci, our small (but very versatile) runner for continuous integration (CI) with R. The post also introduced the notion of using a container in the ‘matrix’ of jobs defined and running in parallel. The initial motivation was the (still ongoing, and still puzzling) variation in run-times of GitHub Actions. So when running CI and relying on r2u for the ‘fast, easy, reliable: pick all three!’ provision of CRAN packages as Ubuntu binaries, a small amount of time is spent prepping a basic Ubuntu instance with the necessary setup. This can be as fast as maybe 20 to 30 seconds, but it can also stretch to almost two minutes when GitHub is busier or out of sorts for other reasons. When the CI job itself is short, that is a nuisance. We presented relying on a pre-made r2u4ci container that adds just a few commands to the standard r2u container to be complete for CI. And with that setup CI runs tend to be reliably faster.

This situation is still evolving. I have not converted any of my existing CI scripts (apart from a test instance or two), but I keep monitoring the situation. However, this also offered another perspective: why not rely on a different container for a different CI aspect? When discussing the CI approach with Jeff the other day (and helping add CI to his mmap repo), it occurred to me we could also use one of the Rocker containers for R-devel. A minimal change to the underlying run.sh script later, this was accomplished. An example is provided as both a test and an illustration in the repo for package RcppInt64 in its script ci.yaml:

    strategy:
      matrix:
        include:
          - { name: container, os: ubuntu-latest, container: rocker/r2u4ci }
          - { name: r-devel,   os: ubuntu-latest, container: rocker/drd }
          - { name: macos,     os: macos-latest }
          - { name: ubuntu,    os: ubuntu-latest }

    runs-on: ${{ matrix.os }}
    container: ${{ matrix.container }}

This runs both a standard Ubuntu setup (fourth entry) and the alternate just described relying on the container (first entry) along with the (usually commented-out) optional macOS setup (third entry). And line two brings the drd container from Rocker. The CI runner script now checks for a possible Rdevel binary as provided inside drd (along with alias RD) and uses it when present. And that is all that there is: no other change on the user side; tests now run under R-devel. You can see some of the initial runs at the rcppint64 repo actions log. Another example is now also at Jeff’s mmap repo.

It should be noted that this relies on R-devel running packages made with R-release. Every few years this breaks when R needs to break its binary API. If and when that happens this option will be costlier as the R-devel instance will then have to (re-)install its R package dependencies. This can be accommodated easily as a step in the yaml file. And under ‘normal’ circumstances it is not needed.

Having easy access to recent builds of R-devel (the container refreshes weekly on a schedule) with the convenience of r2u gives another option for package testing. I may continue to test locally with R-devel as my primary option, and most likely keep my CI small and lean (usually just one R-release run on Ubuntu), but having another option at GitHub Actions is also a good thing.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

11 December, 2025 06:29PM

December 08, 2025

Thorsten Alteholz

My Debian Activities in November 2025

Debian LTS/ELTS

This was my hundred-thirty-seventh month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian and my eighty-eighth ELTS month. As the LTS- and ELTS-teams have been merged now, there is only one paragraph left for both activities.

During my allocated time I uploaded or worked on:

  • [DLA 4381-1] net-snmp security update to fix two CVEs related to denial of service.
  • [DLA 4382-1] libsdl2 security update to fix one CVE related to a memory leak and a denial of service.
  • [DLA 4380-1] cups-filters security update to fix three CVEs related to out of bounds read or writes or a heap buffer overflow.
  • [ELA-1586-1] cups-filters security update to fix three CVEs in Buster and Stretch, related to out of bounds read or writes or a heap buffer overflow.
  • [libcupsfilters] upload to unstable to fix two CVEs
  • [cups-filters] upload to unstable to fix three CVEs
  • [cups] upload to unstable to fix two CVEs
  • [rlottie] upload to unstable to finally fix three CVEs
  • [rplay] upload to unstable to finally fix one CVE
  • [#1121342] trixie-pu bug for libcupsfilters to fix two CVEs in Trixie.
  • [#1121391] trixie-pu bug for cups-filter to fix three CVEs in Trixie.
  • [#1121392] bookworm-pu bug for cups-filter to fix three CVEs in Bookworm.
  • [#112433] trixie-pu bug for rlottie to finally fix three CVEs in Trixie.
  • [#112437] bookworm-pu bug for rlottie to finally fix three CVEs in Bookworm.

I also attended the monthly LTS/ELTS meeting and did a week of LTS/ELTS frontdesk duties. I also stumbled upon a bug in python3-paramiko, where the parsing of include statements in the ssh_config does not work. Rather annoying, but already fixed in the newest version, which only needs to find its way to my old VM.

Debian Printing

This month I uploaded a new upstream version or a bugfix version of:

I also uploaded cups to Trixie, to fix bug #1109471 related to a configuration problem with the admin panel.

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

  • siril to unstable (sponsored upload).
  • supernovas to unstable (sponsored upload).

Debian IoT

This month I uploaded a new upstream version or a bugfix version of:

Debian Mobcom

This month I uploaded a new upstream version or a bugfix version of:

misc

This month I uploaded a new upstream version or a bugfix version of:

In my fight against outdated RFPs, I closed 30 of them in November.

I started with about 3500 open RFP bugs, and after working six months on this project, I have closed 183 bugs. Of course new bugs appeared, so the overall number of bugs is only down to about 3360.

Though I view this as a successful project, I also have to admit that it is a bit boring to work on this daily. Therefore I close this diary again and will add the closed RFP bugs to my bug logbook now. I also try to close some of these bugs by really uploading some software, probably one package per month.

FTP master

This month I accepted 236 and rejected 16 packages. The overall number of packages that got accepted was 247.

08 December, 2025 03:20PM by alteholz

François Marier

Learning a new programming language with an LLM

I started learning Go this year. First, I picked a Perl project I wanted to rewrite, got a good book and ignored AI tools since I thought they would do nothing but interfere with learning. Eventually though, I decided to experiment a bit and ended up finding a few ways to use AI assistants effectively even when learning something new.

Searching more efficiently

The first use case that worked for me was search. Instead of searching on a traditional search engine and then ending up on Stack Overflow, I could get the answer I was looking for directly in an AI side-window in my editor. Of course, that's bad news for Stack Overflow.

I was however skeptical from the beginning since LLMs make mistakes, sometimes making up function signatures or APIs that don't exist. Therefore I got into the habit of going to the official standard library documentation to double-check suggestions. For example, if the LLM suggests using strings.SplitN, I verify the function signature and behaviour carefully before using it. Basically, "don't trust and do verify."
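
As a concrete example of the kind of detail worth double-checking: the count argument of SplitN is exactly the sort of thing a generated snippet can get subtly wrong, and a throwaway program (nothing project-specific, just a sketch) settles it quickly.

package main

import (
  "fmt"
  "strings"
)

func main() {
  // strings.SplitN(s, sep, n): n bounds how many substrings are returned.
  fmt.Println(strings.SplitN("a,b,c", ",", 2))  // [a b,c]  (remainder stays in the last element)
  fmt.Println(strings.SplitN("a,b,c", ",", -1)) // [a b c]  (no limit, same as strings.Split)
  fmt.Println(strings.SplitN("a,b,c", ",", 0))  // []       (nil slice)
}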

I stuck to the standard library in my project, but if an LLM recommends third-party dependencies for you, make sure they exist and that Socket doesn't flag them as malicious. Research has found that 5-20% of packages suggested by LLMs don't actually exist, making this a real attack vector (dubbed "slopsquatting").

Autocomplete is too distracting

A step I took early on was to disable AI autocomplete in my editor. When learning a new language, you need to develop muscle memory for the syntax. Also, Go is no Java. There's not that much boilerplate to write in general.

I found it quite distracting to see some almost correct code replace my thinking about the next step. I can see how one could go faster with these suggestions, but being a developer is not just about cranking out lines of code as fast as possible, it's also about constantly learning new things (and retaining them).

Asking about idiomatic code

One of the most useful prompts when learning a new language is "Is this the most idiomatic way to do this in Go?". Large language models are good at recognizing patterns and can point out when you're writing code that works but doesn't follow the conventions of the language. This is especially valuable early on when you don't yet have a feel for what "good" code looks like in that language.

It's usually pretty easy (at least for an experienced developer) to tell when the LLM suggestion is actually counterproductive or wrong. If it increases complexity or is harder to read/decode, it's probably not a good idea to do it.

Reviews

One way a new dev gets better is through code review. If you have access to a friend who's an expert in the language you're learning, then you can definitely gain a lot by asking for feedback on your code.

If you don't have access to such a valuable resource, or as a first step before you consult your friend, I found that AI-assisted code reviews can be useful:

  1. Get the model to write the review prompt for you. Describe what you want reviewed and let it generate a detailed prompt.
  2. Feed that prompt to multiple models. They each have different answers and will detect different problems.
  3. Be prepared to ignore 50% of what they recommend. Some suggestions will be stylistic preferences, others will be wrong, or irrelevant.

The value is in the other 50%: the suggestions that make you think about your code differently or catch genuine problems.

Similarly for security reviews:

  • A lot of what they flag will need to be ignored (false positives, or things that don't apply to your threat model).
  • Some of it may highlight areas for improvement that you hadn't considered.
  • Occasionally, they will point out real vulnerabilities.

But always keep in mind that AI chatbots are trained to be people-pleasers and often feel the need to suggest something when nothing was needed.

An unexpected benefit

One side effect of using AI assistants was that having them write the scaffolding for unit tests motivated me to increase my code coverage. Trimming unnecessary test cases and adding missing ones is pretty quick when the grunt work is already done, and I ended up testing more of my code (being a personal project written in my own time) than I might have otherwise.

Learning

In the end, I continue to believe in the value of learning from quality books (I find reading paper-based most effective). In addition, I like to create Anki questions for common mistakes or things I find I have to look up often. Remembering something will always be faster than asking an AI tool.

So my experience this year tells me that LLMs can supplement traditional time-tested learning techniques, but I don't believe they obsolete them.

P.S. I experimented with getting an LLM to ghost-write this post for me from an outline (+ a detailed style guide) and I ended up having to rewrite at least 75% of it. It was largely a waste of time.

08 December, 2025 12:15AM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Debian's /usr-move transition has been completed (by Helmut Grohne)

By now, the /usr-merge is an old transition. Effectively, it turns top-level directories such as /bin into symbolic links pointing below /usr. That way the entire operating system can be contained below the /usr hierarchy enabling e.g. image based update mechanisms. It was first supported in Debian 9, which is no longer in active use at this point (except for users of Freexian’s ELTS offer). When it became mandatory in Debian 12, it wasn’t really done though, because Debian’s package manager was not prepared to handle file system objects being referred to via two different paths. With nobody interested in handling the resulting issues, Freexian stepped in and funded a project led by Helmut Grohne to resolve the remaining issues.

While the initial idea was to enhance the package manager, Debian’s members disagreed. They preferred an approach where files were simply tracked with their physical location while handling the resulting misbehavior of the package manager using package-specific workarounds. This has been recorded in the DEP17 document. During the Debian 13 release cycle, the plan has been implemented. A tool for detecting possible problems was developed specifically for this transition. Since all files are now tracked with their physical location and necessary workarounds have been added, problematic behavior is no longer triggered. An upgrade from Debian 12 to Debian 13 is unlikely to run into aliasing problems as a result.

This whole project probably consumed more than 1500 hours of work from Debian contributors, of which 700 were sponsored by Freexian through the work of Helmut Grohne. What remains is eventually removing the workarounds.

08 December, 2025 12:00AM by Helmut Grohne

December 07, 2025

Vincent Bernat

Compressing embedded files in Go

Go’s embed feature lets you bundle static assets into an executable, but it stores them uncompressed. This wastes space: a web interface with documentation can bloat your binary by dozens of megabytes. A proposal to optionally enable compression was declined because it is difficult to handle all use cases. One solution? Put all the assets into a ZIP archive! 🗜️

Code

The Go standard library includes a module to read and write ZIP archives. It contains a function that turns a ZIP archive into an io/fs.FS structure that can replace embed.FS in most contexts.1

package embed

import (
  "archive/zip"
  "bytes"
  _ "embed"
  "fmt"
  "io/fs"
  "sync"
)

//go:embed data/embed.zip
var embeddedZip []byte

var dataOnce = sync.OnceValue(func() *zip.Reader {
  r, err := zip.NewReader(bytes.NewReader(embeddedZip), int64(len(embeddedZip)))
  if err != nil {
    panic(fmt.Sprintf("cannot read embedded archive: %s", err))
  }
  return r
})

func Data() fs.FS {
  return dataOnce()
}
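
Callers can then treat the returned fs.FS just like the embed.FS it replaces. Here is a minimal sketch of what that could look like; the import path and the path inside the archive are illustrative, since they depend on the module layout and on how embed.zip was built:

package main

import (
  "fmt"
  "io/fs"

  "akvorado/common/embed" // illustrative import path for the package above
)

func main() {
  // fs.ReadFile works on any fs.FS, so the zip-backed filesystem is a
  // drop-in replacement for a directly embedded tree here.
  data, err := fs.ReadFile(embed.Data(), "orchestrator/clickhouse/data/protocols.csv")
  if err != nil {
    panic(err)
  }
  fmt.Printf("read %d bytes\n", len(data))
}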

We can build the embed.zip archive with a rule in a Makefile. We specify the files to embed as dependencies to ensure changes are detected.

common/embed/data/embed.zip: console/data/frontend console/data/docs
common/embed/data/embed.zip: orchestrator/clickhouse/data/protocols.csv 
common/embed/data/embed.zip: orchestrator/clickhouse/data/icmp.csv
common/embed/data/embed.zip: orchestrator/clickhouse/data/asns.csv
common/embed/data/embed.zip:
    mkdir -p common/embed/data && zip --quiet --recurse-paths --filesync $@ $^

The automatic variable $@ is the rule target, while $^ expands to all the dependencies, modified or not.

Space gain

Akvorado, a flow collector written in Go, embeds several static assets:

  • CSV files to translate port numbers, protocols or AS numbers, and
  • HTML, CSS, JS, and image files for the web interface, and
  • the documentation.
Breakdown of the space used by each component (displayed as a treemap) before (left) and after (right) the introduction of embed.zip.

Embedding these assets into a ZIP archive reduced the size of the Akvorado executable by more than 4 MiB:

$ unzip -p common/embed/data/embed.zip | wc -c | numfmt --to=iec
7.3M
$ ll common/embed/data/embed.zip
-rw-r--r-- 1 bernat users 2.9M Dec  7 17:17 common/embed/data/embed.zip

Performance loss

Reading from a compressed archive is not as fast as reading a flat file. A simple benchmark shows it is more than 4× slower. It also allocates some memory.2

goos: linux
goarch: amd64
pkg: akvorado/common/embed
cpu: AMD Ryzen 5 5600X 6-Core Processor
BenchmarkData/compressed-12     2262   526553 ns/op   610 B/op   10 allocs/op
BenchmarkData/uncompressed-12   9482   123175 ns/op     0 B/op    0 allocs/op

Each access to an asset requires a decompression step, as seen in this flame graph:

CPU flame graph comparing the time spent on CPU when reading data from embed.zip (left) versus reading data directly (right). Because the Go testing framework executes the benchmark for uncompressed data 4 times more often, it uses the same horizontal space as the benchmark for compressed data.

While a ZIP archive has an index to quickly find the requested file, seeking inside a compressed file is currently not possible.3 Therefore, the files from a compressed archive do not implement the io.ReaderAt or io.Seeker interfaces, unlike directly embedded files. This prevents some features, like serving partial files or detecting MIME types when serving files over HTTP.
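
The difference is easy to observe with a couple of type assertions; as before, the import path and file name are only illustrative:

package main

import (
  "fmt"
  "io"

  "akvorado/common/embed" // illustrative import path
)

func main() {
  f, err := embed.Data().Open("orchestrator/clickhouse/data/protocols.csv")
  if err != nil {
    panic(err)
  }
  defer f.Close()
  _, seekable := f.(io.Seeker)       // false for a file stored compressed in the archive
  _, randomAccess := f.(io.ReaderAt) // also false, hence no byte ranges or MIME sniffing
  fmt.Println(seekable, randomAccess)
}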


For Akvorado, this is an acceptable compromise to save a few mebibytes from an executable of almost 100 MiB. Next week, I will continue this futile adventure by explaining how I prevented Go from disabling dead code elimination! 🦥


  1. You can safely read multiple files concurrently. However, it does not implement ReadDir() and ReadFile() methods. ↩︎

  2. You could keep frequently accessed assets in memory. This reduces CPU usage and trades cached memory for resident memory. ↩︎

  3. SOZip is a profile that enables fast random access in a compressed file. However, Go’s archive/zip module does not support it. ↩︎

07 December, 2025 11:05PM by Vincent Bernat

Iustin Pop

Yes, still alive!

Yeah, again three months have passed since my last (trivial) post, and I really don’t know where the time has flown.

I suppose the biggest problem was the long summer vacation, which threw me off-track, and then craziness started. Work work work, no time for anything, which kept me fully busy in August, and then “you should travel”.

So mid-September I went on my first business trip since Covid, again to Kirkland, which in itself was awesome. Flew out Sunday, and as I was concerned I was going to lose too much fitness—had a half-marathon planned on the weekend after the return—I ran every morning of the four days I was there. And of course, on the last day, I woke up even earlier (05:30 AM), went out to run before sunrise, intending to do a very simple “run along the road that borders the lake for 2.5K, then back”. And right at the farthest point, a hundred metres before my goal of turning around, I tripped, started falling, and as I was falling, I hit—sideways—a metal pole. I was in a bus station, it was the pole that has the schedule at the top, and I hit it at relatively full speed, right across my left-side ribs. The crash took the entire air out of my lungs, and I don’t remember if I ever felt pain/sensation like that—I was seriously not able to breathe for 20 seconds or so, and I was wondering if I’m going to pass out at this rate.

Only 20 seconds, because my Garmin started howling like a police siren, and the screen was saying something along the lines of: “Incident detected; contacting emergency services in 40…35…” and I was fumbling to cancel that, since a) I wasn’t that bad, b) notifying my wife that I had a crash would have not been a smart idea.

My left leg was scraped in a few places, my left hand pretty badly, or more than just scraped, so my focus was on limping back, and finding a fountain to wash my injuries, which I did, so I kept running with blood dripping down my hand. Fun fun, everything was hurting, I took an Uber for the ~1Km to the office, had many meetings, took another Uber and flew back to Zurich. Seattle → San Francisco → Zürich, I think 14 hours, with my ribs hurting pretty badly. But I got home (Friday afternoon), and was wondering if I can run or not on Saturday.

Saturday comes, I feel pretty OK, so I said let’s try, will stop if the pain is too great. I pick up my number, I go to the start, of course in the last block and not my normal block, and I start running. After 50 metres, I knew this won’t be good enough, but I said, let’s make it to the first kilometre. Then to the first fuelling point, then to the first aid point, at which moment I felt good enough to go to the second one.

Long story short, I ran the whole half marathon, with pain. Every stop for fuelling was mentally hard, as the pain stopped, and I knew I had to start running again, and the pain would resume. In the end, managed to finish: two and a half hours, instead of just two hours, but alive and very happy. Of course, I didn’t know what was waiting for me… Sunday I wake up in heavy pain, and despite painkillers, I was not feeling much better. The following night was terrible, Monday morning I went to the doctor, had X-rays, discussion with a radiologist. “Not really broken, but more than just bruised. See this angle here? Bones don’t have angles normally”. Painkillers, chest/abdomen wrapping, no running! So my attempts to “not lose fitness” put me off running for a couple of weeks.

Then October came, and I was getting better, but work was getting even more crazy. I don’t know where November passed, honestly, and now we’re already in December. I did manage to run, quite well, managed to bike a tiny bit and swim a little, but I’m not in a place where I can keep a regular and consistent schedule.

On the good side, I managed this year, for the first time since Covid, to not get sick. Hey, a sport injury is 100× better than a sickness, like I had in previous years, taking me out for two weeks. But life was crazy enough that I didn’t read some of my email accounts for months, and I’m just now starting to catch up to, well, baseline.

Of course, “the” rib—the lowest one on the left side—is long-healed, or so I thought. After some strength training early this week, I was very sore the next day, and I wanted to test whether my rib is still sore. I touched it at “the point”, and it hurt so badly I couldn’t believe. Two and a half months, and it’s not done-done.

And now it’s just two weeks before Christmas and New Year’s, and that time off will ruin my rhythm again. At least ski vacation is booked, ski service is done, and slowly, work is getting in good enough shape to actually enjoy thinking about vacation.

So, in the end, a very adventurous last third of the year, and that wasn’t even all. As I’m writing this, my right wrist is bandaged and for the past 24 hours it hasn’t hurt too much, but that’s another, and not so interesting, story.

I’ll close with a yay for always being behind/backlogged, but alive and relatively well. My sports injuries are “elective injuries”, so to speak, and I’m very thankful for that. See you in the next post!

07 December, 2025 08:37PM

December 06, 2025

Simon Josefsson

Reproducible Guix Container Images

Around a year ago I wrote about Guix Container Images for GitLab CI/CD and these images have served the community well. Besides continuous use in CI/CD, these Guix container images are used to confirm reproducibility of the source tarball artifacts in the releases of Libtasn1 v4.20, InetUtils v2.6, Libidn2 v2.3.8, Libidn v1.43, SASL v2.2.2, Guile-GnuTLS v5.0.1, and OATH Toolkit v2.6.13. See how all those release announcements mention a Guix commit? That’s the essential supply-chain information about the Guix build environment that allows the artifacts to be re-created. To make sure this is repeatable, the release tarball artifacts are re-created from source code every week in the verify-reproducible-artifacts project, which I wrote about earlier. Guix’s time-travelling feature makes this sustainable to maintain, and it will hopefully remain possible to reproduce the exact same tarball artifacts for years to come.
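To illustrate the time-travelling idea, here is a minimal sketch of re-entering the build environment described by a pinned Guix commit; the commit hash, package list and build steps below are placeholders, not the recipe of any particular release:

# Run a build inside the environment defined by a past Guix commit
# (replace the commit with the one from the release announcement).
guix time-machine --commit=0123456789abcdef0123456789abcdef01234567 -- \
  shell --pure gcc-toolchain make autoconf automake -- \
  sh -c './bootstrap && ./configure && make dist'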

Unfortunately, during the last year Guix was removed from Debian stable. My Guix container images were created from Debian with that Guix package. My setup continued to work since the old stage0 Debian+Guix containers were still available. Such a setup is not sustainable, as there will be bit-rot, and we don’t want to rely forever on old containers which (after the removal of Guix from Debian) could not be reproduced any more. Let this be a reminder of how user-empowering features like Guix’s time travelling are! I have reworked my Guix container image setup, and this post is an update on the current status of this effort.

The first step was to re-engineer Debian container images with Guix. I realized these were useful on their own and warrant a separate project; a more narrowly scoped project will hopefully make it easier to keep them working. Now, instead of apt-get install guix, they use the official Guix guix-install.sh approach. Read more about that effort in the announcement of Debian with Guix.
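As a rough sketch of what that bootstrap step looks like (illustrative only, not the project’s actual recipe; the installer script is interactive, so unattended runs may need answers piped in or other adjustments):

# On a Debian base, install Guix via the upstream installer script
# instead of relying on a Debian guix package.
apt-get update && apt-get install -y wget xz-utils gnupg
wget https://git.savannah.gnu.org/cgit/guix.git/plain/etc/guix-install.sh
chmod +x guix-install.sh
yes | ./guix-install.sh   # accept the prompts; adjust for your environment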

The second step was to reconsider my approach to generating the Guix images. The earlier design had several stages. First, Debian+Guix containers were created. Then from those containers, a pure Guix container was created. Finally, using the pure Guix container, another pure Guix container was created. The idea behind that GCC-like approach was to get to reproducible images that were created from an image that had no Debian left on it. However, I never managed to finish this, partially because I hadn’t realized that every time you build a Guix container image from Guix, you effectively go back in time. When using Guix version X to build a container with Guix on it, it will not put Guix version X into the container but whatever version of Guix is available in its package archive, which will be an earlier version, such as version X-N. I had hoped to overcome this somehow (running a guix pull in newly generated images may work), but never finished this before Guix was removed from Debian.

So what could a better design look like?

For efficiency, I had already started experimenting with generating the final images directly from the Debian+Guix images, and after reproducibility bugs were fixed I was able to get to reproducible images. However, I was still concerned that the Debian container could taint the process somehow, and was also concerned about the implied dependency on non-free software in Debian.

I’ve been using comparative rebuilds on “similar” distributions to confirm artifact reproducibility for my software projects, comparing builds on Trisquel 11 with Ubuntu 22.04, and AlmaLinux 9 with RockyLinux 9, for example. This works surprisingly well. Including one freedom-respecting distribution like Trisquel will detect whether any non-free software has a bearing on the artifacts. Using different architectures, such as amd64 vs arm64, also helps with deeper supply-chain concerns.

My conclusion was that I wanted containers with the same Guix commit for both Trisquel and Ubuntu. Given the similarity with Debian, adapting and launching the Guix on Trisquel/Debian project was straightforward. So we now have Trisquel 11/12 and Ubuntu 22.04/24.04 images with the same Guix on them.

Do you see where the debian-with-guix and guix-on-dpkg projects are leading?

We are now ready to look at the modernized Guix Container Images project. The tags are the same as before:

registry.gitlab.com/debdistutils/guix/container:latest
registry.gitlab.com/debdistutils/guix/container:slim
registry.gitlab.com/debdistutils/guix/container:extra
registry.gitlab.com/debdistutils/guix/container:gash

The method to create them is different. Now there is a “build” job that uses the earlier Guix+Trisquel container (for amd64) or Guix+Debian (for arm64, pending Trisquel arm64 containers). The build job creates the final containers directly. Next, an Ubuntu “reproduce” job is launched that runs the same commands, failing if it cannot generate a bit-by-bit identical container. Then single-arch images are tested (installing/building GNU hello and building libksba) and pushed to the GitLab registry, adding multi-arch images in the process. Finally, the multi-arch containers are tested by building Guile-GnuTLS and, on success, uploaded to the Docker Hub.
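The core of the “reproduce” check can be sketched like this (file names are illustrative, not the pipeline’s actual job definitions):

# Compare the image from the build job with the independently rebuilt one;
# cmp exits non-zero, failing the CI job, if they are not bit-by-bit identical.
sha256sum build/container-image.tar.gz rebuild/container-image.tar.gz
cmp build/container-image.tar.gz rebuild/container-image.tar.gz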

How would you use them? A small way to start the container is like this:

jas@kaka:~$ podman run -it --privileged --entrypoint=/bin/sh registry.gitlab.com/debdistutils/guix/container:latest
sh-5.2# env HOME=/ guix describe # https://issues.guix.gnu.org/74949
  guix 21ce6b3
    repository URL: https://git.guix.gnu.org/guix.git
    branch: master
    commit: 21ce6b392ace4c4d22543abc41bd7c22596cd6d2
sh-5.2# 

The need for --entrypoint=/bin/sh is because Guix’s pack command sets up the entry point differently than most other containers. This could probably be fixed if people want that, and there may be open bug reports about this.

The need for --privileged is more problematic, but is discussed upstream. The above example works fine without it, but running anything more elaborate with guix-daemon installing packages will trigger a fatal error. Speaking of that, here is a snippet of commands that allow you to install Guix packages in the container.

# Populate /etc from the Guix profile and create minimal passwd/group entries
cp -rL /gnu/store/*profile/etc/* /etc/
echo 'root:x:0:0:root:/:/bin/sh' > /etc/passwd
echo 'root:x:0:' > /etc/group
# Create the unprivileged build users that guix-daemon requires
groupadd --system guixbuild
for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
# Start the daemon and authorize the official substitute servers
env LANG=C.UTF-8 guix-daemon --build-users-group=guixbuild &
guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
# Install a package and load the resulting profile
guix install hello
GUIX_PROFILE="/var/guix/profiles/per-user/root/guix-profile"
. "$GUIX_PROFILE/etc/profile"
hello

This could be simplified, but we chose not to hard-code these steps in our containers, because some of them are things that probably shouldn’t be papered over but fixed properly somehow. In some execution environments, you may need to pass --disable-chroot to guix-daemon.

To use the containers to build something in a GitLab pipeline, here is an example snippet:

test-amd64-latest-wget-configure-make-libksba:
  image: registry.gitlab.com/debdistutils/guix/container:latest
  before_script:
  - cp -rL /gnu/store/*profile/etc/* /etc/
  - echo 'root:x:0:0:root:/:/bin/sh' > /etc/passwd
  - echo 'root:x:0:' > /etc/group
  - groupadd --system guixbuild
  - for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
  - export HOME=/
  - env LANG=C.UTF-8 guix-daemon --build-users-group=guixbuild &
  - guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
  - guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
  - guix describe
  - guix install libgpg-error
  - GUIX_PROFILE="//.guix-profile"
  - . "$GUIX_PROFILE/etc/profile"
  script:
  - wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
  - tar xfa libksba-1.6.7.tar.bz2
  - cd libksba-1.6.7
  - ./configure
  - make V=1
  - make check VERBOSE=t V=1

More help is available on the project page for the Guix Container Images.

That’s it for tonight folks, and remember, Happy Hacking!

06 December, 2025 10:22PM by simon

hackergotchi for Jonathan Dowland

Jonathan Dowland

thesis

It's done! It's over! I've graduated, I have the scroll, I'm staring at the eye-watering prices for the official photographer snap, I'm adjusting to post-thesis life.

My PhD thesis revisions have been accepted and my thesis is now available from Newcastle University Library's eThesis repository.

As part of submitting my corrections, I wrote a brief report detailing the changes I made from my thesis at the time of the viva. I also produced a latexdiff marked-up copy of the thesis to visualise the exact changes. In order to shed some light on the post-viva corrections process, at least at my institution, and in the hope that they are some use to someone, I'm sharing those documents:

06 December, 2025 09:41PM

Taavi Väänänen

How to import a new Wikipedia language edition (in hard mode)

I created the latest Wikipedia language edition, the Toki Pona Wikipedia, last month. Unlike most other wikis which start their lives in the Wikimedia Incubator before the full wiki is created, in this case the community had been using a completely external MediaWiki site to build the wiki before it was approved as a "proper" Wikipedia wiki,1 and now that external wiki needed to be imported to the newly created Wikimedia-hosted wiki. (As far as I'm aware, the last and previously only time an external wiki has been imported to a Wikimedia project was in 2013 when Wikitravel was forked as Wikivoyage.)

Creating a Wikimedia wiki these days is actually pretty straightforward, at least compared to what it used to be like a couple of years ago. Today the process mostly involves using a script to generate two configuration changes, one to add the basic configuration for a wiki to operate and another to add the wiki to the list of all wikis that exist, and then running a script to create the wiki database in between deploying those two configuration changes. And then you wait half an hour while the script that tells all Wikidata client wikis about the new wiki runs on one wiki at a time.

The primary technical challenge in importing a third-party wiki is that there's no SUL making sure that a single username maps to the same account on both wikis. This means that the usual strategy of using the functionality I wrote in CentralAuth to manually create local accounts can't be used as is, and so we needed to come up with a new way of matching everyone's contributions to their existing Wikimedia accounts.

(Side note: While the user-facing interface tries to present a single "global" user account that can be used on all public Wikimedia wikis, in reality the account management layer in CentralAuth is mostly just a glue layer to link together individual "local" accounts on each wiki that the user has ever visited. These local accounts have independent user ID numbers — for example I am user #35938993 on the English Wikipedia but #4 on the new Toki Pona Wikipedia — and are what most of MediaWiki code interacts with except for a few features specifically designed with cross-wiki usage in mind. This distinction is also still very much present and visible in the various administrative and anti-abuse workflows.)

The approach we ended up choosing was to re-write the dump file before importing, so that a hypothetical account called $Name would be turned into $Name~wikipesija.org after the import.2 We also created empty user accounts that would take ownership of the edits to be imported, so that we could use the standard account management tools on them later on. MediaWiki supports importing contributions without a local account to attribute them to, but it doesn't seem to be possible to convert an imported actor3 to a regular user later on, which is something we wanted to keep as a possibility, even with the minor downside of creating a few hundred users that'll likely never get touched again.
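A minimal sketch of that dump rewriting step (not the actual script used for the import; it simply appends the suffix to every <username> element in the XML dump):

#!/usr/bin/env python3
# Append a "~wikipesija.org" suffix to every <username> in a MediaWiki XML
# dump read from stdin, writing the rewritten dump to stdout.
import re
import sys

SUFFIX = "~wikipesija.org"
pattern = re.compile(r"<username>(.*?)</username>")

def add_suffix(match: re.Match) -> str:
    return f"<username>{match.group(1)}{SUFFIX}</username>"

for line in sys.stdin:
    sys.stdout.write(pattern.sub(add_suffix, line))

Usage would be something along the lines of python3 rewrite-usernames.py < dump.xml > dump-suffixed.xml.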

We also made two specific decisions: to add the username suffix to everyone, not just to those names that conflicted with existing SUL accounts, and to deal with renaming users that wanted their contributions linked to an existing SUL account only after the import. These choices both reduced complexity, and thus risk, in the import phase, which already had many more unknowns than the rest of the process, and were also the better options ethically: suffixing all names meant we would not imply that those people chose to be Wikimedians with those specific usernames (when in reality it was us choosing to import those edits to the Wikimedia universe), and doing renames using the standard MediaWiki account management tooling meant that it produced the normal public log entries that all other MediaWiki administrative actions create.

With all of the edits imported, the only major thing remaining was doing those merges I mentioned earlier to attribute imported edits to people's existing SUL accounts. Thankfully, the local-account-based system makes this actually pretty simple. Usually CentralAuth prevents renaming individual local accounts that are attached to a global account, but that check can be bypassed with a maintenance script or a privileged enough account. Renaming the user automatically detached it from the previous global account, after which another maintenance script could be used to attach the user to the correct global account.


  1. That external site was a fork of a fork of the original Toki Pona Wikipedia that was closed in 2005. And because cool URIs don't change, we made the URLs that the old Wikipedia was using work again. Try it: https://art-tokipona.wikipedia.org↩︎

  2. wikipesija.org was the domain where the old third-party wiki was hosted on, and ~ was used as a separator character in usernames during the SUL finalization in the early 2010s so using it here felt appropriate as well. ↩︎

  3. An actor is a MediaWiki term and a database table referring to anything that can do edits or logged actions. Usually an actor is a user account or an IP address, but an imported user name in a specific format can also be represented as an actor. ↩︎

06 December, 2025 12:00AM by Taavi Väänänen (taavi@majava.org)

Kathara Sasikumar

My Journey Through the LFX Linux Kernel Mentorship Program

When I first decided to apply for the Linux Foundation’s LFX kernel mentorship program, I knew it would be tough. At the beginning, there were 12 tasks I had to complete to show I understood the basics of kernel development and to get accepted into the program. They helped me understand what I was getting into. Now that the mentorship is almost over, I can say this experience changed how I think about working with the Linux kernel.

06 December, 2025 12:00AM

December 04, 2025

hackergotchi for Colin Watson

Colin Watson

Free software activity in November 2025

My Debian contributions this month were all sponsored by Freexian. I had a bit less time than usual, because Freexian collaborators gathered in Marseille this month for our yearly sprint, doing some planning for next year.

You can also support my work directly via Liberapay or GitHub Sponsors.

OpenSSH

I began preparing for the second stage of the GSS-API key exchange package split (some details have changed since that message). It seems that we’ll need to wait until Ubuntu 26.04 LTS has been released, but that’s close enough that it’s worth making sure we’re ready. This month I just did some packaging cleanups that would otherwise have been annoying to copy, such as removing support for direct upgrades from pre-bookworm. I’m considering some other package rearrangements to make the split easier to manage, but haven’t made any decisions here yet.

This also led me to start on a long-overdue bug triage pass, mainly consisting of applying usertags to lots of our open bugs to sort them by which program they apply to, and also closing a few that have been fixed, since some bugs will eventually need to be reassigned to GSS-API packages and it would be helpful to make them easier to find. At the time of writing, about 30% of the bug list remains to be categorized this way.

Python packaging

I upgraded these packages to new upstream versions:

I packaged django-pgtransaction and backported it to trixie, since we plan to use it in Debusine; and I adopted python-certifi for the Python team.

I fixed or helped to fix several other build/test failures:

I fixed a couple of other bugs:

Other bits and pieces

Code reviews

04 December, 2025 05:55PM by Colin Watson

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in November 2025

04 December, 2025 02:59PM by Ben Hutchings

December 03, 2025

Reproducible Builds

Reproducible Builds in November 2025

Welcome to the report for November 2025 from the Reproducible Builds project!

These monthly reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As always, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

In this report:

  1. “10 years of Reproducible Builds” at SeaGL
  2. Distribution work
  3. Tool development
  4. Website updates
  5. Miscellaneous news
  6. Software Supply Chain Security of Web3
  7. Upstream patches

‘10 years of Reproducible Builds’ at SeaGL 2025

On Friday 8th November, Chris Lamb gave a talk called 10 years of Reproducible Builds at SeaGL in Seattle, WA.

Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free source software, hardware and culture. Chris’ talk:

[…] introduces the concept of reproducible builds, its technical underpinnings and its potentially transformative impact on software security and transparency. It is aimed at developers, security professionals and policy-makers who are concerned with enhancing trust and accountability in our software. It also provides a history of the Reproducible Builds project, which is approximately ten years old. How are we getting on? What have we got left to do? Aren’t all the builds reproducible now?


Distribution work

In Debian this month, Jochen Sprickerhof created a merge request to replace the use of reprotest in Debian’s Salsa Continuous Integration (CI) pipeline with debrebuild. Jochen cites the advantages as being threefold: firstly, that “only one extra build needed”; it “uses the same sbuild and ccache tooling as the normal build”; and it “works for any Debian release”. The merge request was merged by Emmanuel Arias and is now active.

kpcyrd posted to our mailing list announcing the initial release of repro-threshold, which implements an APT transport that “defines a threshold of at least X of my N trusted rebuilders need to confirm they reproduced the binary” before installing Debian packages. “Configuration can be done through a config file, or through a curses-like user interface.”

Holger then merged two commits by Jochen Sprickerhof in order to address a fakeroot-related reproducibility issue in the debian-installer, and Jörg Jaspert deployed a patch by Ivo De Decker for a bug originally filed by Holger in February 2025 related to some Debian packages not being archived on snapshot.debian.org.

Elsewhere, Roland Clobus performed some analysis on the “live” Debian trixie images, which he determined were not reproducible. However, in a follow-up post, Roland happily reports that the issues have been handled. In addition, 145 reviews of Debian packages were added, 12 were updated and 15 were removed this month, adding to our knowledge about identified issues.

Lastly, Jochen Sprickerhof filed a bug announcing their intention to “binary NMU” a very large number of R programming language packages after a reproducibility-related toolchain bug was fixed.


Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.


Julien Malka and Arnout Engelen launched the new hash collection server for NixOS. Aside from improved reporting to help focus reproducible builds efforts within NixOS, it collects build hashes as individually-signed attestations from independent builders, laying the groundwork for further tooling.


Tool development

diffoscope version 307 was uploaded to Debian unstable (as well as version 309). These changes included further work to automatically deploy to PyPI by liaising with the PyPI developers/maintainers (with this experimental feature). [][][]

In addition, reprotest versions 0.7.31 and 0.7.32 were uploaded to Debian unstable by Holger Levsen, who also made the following changes:

  • Do not vary the architecture personality if the kernel is not varied. (Thanks to Raúl Cumplido). []
  • Drop the debian/watch file, as Lintian now flags this as error for ‘native’ Debian packages. [][]
  • Bump Standards-Version to 4.7.2, with no changes needed. []
  • Drop the Rules-Requires-Root header as it is no longer required. []

In addition, Vagrant Cascadian fixed a build failure by removing some extra whitespace from an older changelog entry. []


Website updates

Once again, there were a number of improvements made to our website this month including:


Miscellaneous news


Software Supply Chain Security of Web3

Via our mailing list, Martin Monperrus let us know about their recently-published page on the Software Supply Chain Security of Web3. The abstract of their paper is as follows:

Web3 applications, built on blockchain technology, manage billions of dollars in digital assets through decentralized applications (dApps) and smart contracts. These systems rely on complex software supply chains that introduce significant security vulnerabilities. This paper examines the software supply chain security challenges unique to the Web3 ecosystem, where traditional Web2 software supply chain problems intersect with the immutable and high-stakes nature of blockchain technology. We analyze the threat landscape and propose mitigation strategies to strengthen the security posture of Web3 systems.

Their paper lists reproducible builds as one of the mitigating strategies. A PDF of the full text is available to download.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

03 December, 2025 08:28PM

Michael Ablassmeier

libvirt 11.10 VIR_DOMAIN_BACKUP_BEGIN_PRESERVE_SHUTDOWN_DOMAIN

With libvirt 11.10, a new flag for the backup operation has been introduced: VIR_DOMAIN_BACKUP_BEGIN_PRESERVE_SHUTDOWN_DOMAIN.

According to the documentation “It instructs libvirt to avoid termination of the VM if the guest OS shuts down while the backup is still running. The VM is in that scenario reset and paused instead of terminated allowing the backup to finish. Once the backup finishes the VM process is terminated.”
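For illustration, here is a minimal sketch of requesting a backup with this flag through the libvirt Python bindings (assuming libvirt-python built against libvirt >= 11.10 so that the constant is exposed; the domain name and backup XML are placeholders):

import libvirt

# Illustrative push-mode backup job XML; adjust disks and targets as needed.
BACKUP_XML = """
<domainbackup>
  <disks>
    <disk name='vda' type='file'>
      <target file='/var/backups/vda-backup.qcow2'/>
    </disk>
  </disks>
</domainbackup>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("testvm")
# Keep the VM around (reset and paused) if the guest shuts down mid-backup.
dom.backupBegin(BACKUP_XML, None,
                libvirt.VIR_DOMAIN_BACKUP_BEGIN_PRESERVE_SHUTDOWN_DOMAIN)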

I added support for this flag in virtnbdbackup 2.40.

03 December, 2025 12:00AM

December 02, 2025

Simon Josefsson

Guix on Trisquel & Ubuntu for Reproducible CI/CD Artifacts

Last week I published Guix on Debian container images that prepared for today’s announcement of Guix on Trisquel/Ubuntu container images.

I have published images with reasonably modern Guix for Trisquel 11 aramo, Trisquel 12 ecne, Ubuntu 22.04 and Ubuntu 24.04. The Ubuntu images are available for both amd64 and arm64, but unfortunately Trisquel arm64 containers aren’t available yet so they are only for amd64. Images for ppc64el and riscv64 are work in progress. The currently supported container names:

registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix

Or, if you prefer, guix-on-dpkg is also available on Docker Hub:

docker.io/jas4711/guix-on-dpkg:trisquel11-guix
docker.io/jas4711/guix-on-dpkg:trisquel12-guix
docker.io/jas4711/guix-on-dpkg:ubuntu22.04-guix
docker.io/jas4711/guix-on-dpkg:ubuntu24.04-guix

You may use them as follows. See the guix-on-dpkg README for how to start guix-daemon and install packages.

jas@kaka:~$ podman run -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
root@guix:/# head -1 /etc/os-release 
NAME="Trisquel GNU/Linux"
root@guix:/# guix describe
  guix 136fc8b
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: 136fc8bfe91a64d28b6c54cf8f5930ffe787c16e
root@guix:/# 

You may now be asking yourself: why? Fear not, gentle reader, because having two container images of roughly similar software is a great tool for attempting to build software artifacts reproducibly, and for comparing the results to spot differences. Obviously.

I have been using this pattern to get reproducible tarball artifacts of several software releases for around a year and a half, since libntlm 1.8.

Let’s walk through how to set up a CI/CD pipeline that will build a piece of software in four different jobs, for Trisquel 11/12 and Ubuntu 22.04/24.04. I am in the process of learning Codeberg/Forgejo CI/CD, so I am still using GitLab CI/CD here, but the concepts should be the same regardless of platform. Let’s start by defining a job skeleton:

.guile-gnutls: &guile-gnutls
  before_script:
  - /root/.config/guix/current/bin/guix-daemon --version
  - env LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild $GUIX_DAEMON_ARGS &
  - GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
  - type guix
  - guix --version
  - guix describe
  - time guix install --verbosity=0 wget gcc-toolchain autoconf automake libtool gnutls guile pkg-config
  - time apt-get update
  - time apt-get install -y make git texinfo
  - GUIX_PROFILE="/root/.guix-profile"; . "$GUIX_PROFILE/etc/profile"
  script:
  - git clone https://codeberg.org/guile-gnutls/guile-gnutls.git
  - cd guile-gnutls
  - git checkout v5.0.1
  - ./bootstrap
  - ./configure
  - make V=1
  - make V=1 check VERBOSE=t
  - make V=1 dist
  after_script:
  - mkdir -pv out/$CI_JOB_NAME_SLUG/src
  - mv -v guile-gnutls/*-src.tar.* out/$CI_JOB_NAME_SLUG/src/
  - mv -v guile-gnutls/*.tar.* out/$CI_JOB_NAME_SLUG/
  artifacts:
    paths:
    - out/**

This installs some packages, clones guile-gnutls (it could be any project, that’s just an example), builds it and returns tarball artifacts. The artifacts are the git-archive and make dist tarballs.

Let’s instantiate the skeleton into four jobs, running the Trisquel 11/12 jobs on amd64 and the Ubuntu 22.04/24.04 jobs on arm64 for fun.

guile-gnutls-trisquel11-amd64:
  tags: [ saas-linux-medium-amd64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
  extends: .guile-gnutls

guile-gnutls-ubuntu22.04-arm64:
  tags: [ saas-linux-medium-arm64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
  extends: .guile-gnutls

guile-gnutls-trisquel12-amd64:
  tags: [ saas-linux-medium-amd64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
  extends: .guile-gnutls

guile-gnutls-ubuntu24.04-arm64:
  tags: [ saas-linux-medium-arm64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix
  extends: .guile-gnutls

Running this pipeline will result in artifacts whose reproducibility you want to confirm. Let’s add a pipeline job to do the comparison:

guile-gnutls-compare:
  image: alpine:latest
  needs: [ guile-gnutls-trisquel11-amd64,
           guile-gnutls-trisquel12-amd64,
           guile-gnutls-ubuntu22.04-arm64,
           guile-gnutls-ubuntu24.04-arm64 ]
  script:
  - cd out
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep    -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
  - sha256sum */*.tar.* */*/*.tar.* | grep    -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
  - sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
# Confirm modern git-archive tarball reproducibility
  - cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
# Confirm old git-archive (export-subst but long git describe) tarball reproducibility
  - cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
# Confirm 'make dist' generated tarball reproducibility
  - cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
  - cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz
  artifacts:
    when: always
    paths:
    - ./out/**

Look how beautiful, almost like ASCII art! The commands print SHA256 checksums of the artifacts, sorted in a couple of ways, and then proceed to compare the relevant artifacts. What would the output of such a run be, you may wonder? You can look for yourself in the guix-on-dpkg pipeline but here is the gist of it:

$ cd out
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep    -- -src.tar.
79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-ubuntu22-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-ubuntu24-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-ubuntu22-04-arm64/guile-gnutls-5.0.1.tar.gz
bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-ubuntu24-04-arm64/guile-gnutls-5.0.1.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
      2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
      2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | grep    -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
      2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
      2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
      2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
$ cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
$ cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
$ cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
$ cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz

That’s it for today, but stay tuned for more updates on using Guix in containers, and remember; Happy Hacking!

02 December, 2025 10:01PM by simon

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

duckdb-mlpack 0.0.5: Added kmeans, version helpers, documentation

A new release of the still-recent duckdb extension for mlpack, the C++ header-only library for machine learning, was merged into the duckdb community extensions repo today, and has been updated at its duckdb ‘mlpack’ extension page.

This release 0.0.5 adds one new method: kmeans clustering. We also added two version accessors, one each for mlpack and armadillo. We found during the work on random forests (added in 0.0.4) that the multithreaded random number generation was not quite right in the respective upstream codes. This has by now been corrected in armadillo 15.2.2 as well as the trunk version of mlpack, so if you build with those and set a seed, your forests and classifications will be stable across reruns. We added a second state variable, mlpack_silent, that can be used to suppress even the minimal prediction-quality summary some methods show, and expanded the documentation.

For more details, see the repo for code, issues and more, and the extension page for more about this duckdb community extension.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

02 December, 2025 05:40PM

Birger Schacht

Status update, November 2025

I started this month with a week of vacation which was followed by a small planned surgery and two weeks of sick leave. Nonetheless, I packaged and uploaded new releases of a couple of packages:

  • swayidle updated to version 1.9.0-1
  • swaylock updated to version 1.8.4-1
  • foot updated to version 1.25.0-1
  • swayimg updated to version 4.6-1
  • scdoc updated to version 1.11.4-1
  • wofi updated to version 1.5.1-1
  • xdg-desktop-portal-wlr updated to version 0.8.0-1

Besides that, I reactivated a project I started in summer 2024: debiverse.org. The idea was to have interfaces to Debian bugs and packages that are usable on mobile devices (I know, ludicrous!). Back then I started with Flask and SQLAlchemy, but that soon got out of hand. I have now switched the whole stack to FastAPI and SQLModel, which makes it a lot easier to manage. The upside is that it comes with an API and OpenAPI docs. For the rendered HTML pages I use Jinja2 with Tailwind as the CSS framework. I am currently using udd-mirror as the database backend, which works pretty well (for this single-user project). It would be nice to have some of the data in a faster index, like Typesense or Meilisearch. That way it would be possible to have faceted search or more performant full-text search. But I haven’t found any software packaged in Debian that could provide this.
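To give an idea of why that stack is pleasant to work with, here is a minimal FastAPI + SQLModel sketch (illustrative only, not debiverse’s actual models or endpoints): one model definition doubles as the database table and the response schema, and the endpoint appears in the generated OpenAPI docs automatically.

from typing import Optional

from fastapi import FastAPI
from sqlmodel import Field, Session, SQLModel, create_engine, select

class Bug(SQLModel, table=True):
    # One definition serves as ORM table and API schema.
    id: Optional[int] = Field(default=None, primary_key=True)
    package: str
    title: str

engine = create_engine("sqlite:///bugs.db")
SQLModel.metadata.create_all(engine)
app = FastAPI()

@app.get("/bugs")
def list_bugs(package: Optional[str] = None) -> list[Bug]:
    # Optional ?package=... query parameter filters the results.
    with Session(engine) as session:
        query = select(Bug)
        if package:
            query = query.where(Bug.package == package)
        return list(session.exec(query).all())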

Screenshot of the debiverse bug report list

Screenshot of the debiverse swagger API

02 December, 2025 05:28AM

François Marier

Recovering from a broken update on the Turris Omnia

The recent Turris OS update from 7.2.3 to 9.0.0 took down my WiFi entirely. The wired network still works fine, but wireless is completely broken.

Factory reset

It turns out the Omnia has an extensive (and fast) factory reset / recovery mode via the hardware reset button.

Unfortunately, the factory image didn't work for me, possibly because I don't use the stock WiFi radios anymore.

Rolling back with schnapps

Thanks to the fact that the Omnia uses a btrfs root filesystem, and the liberal use of snapshots around updates, I was able to roll back to the pre-9.0.0 state.

First, I connected to the router using ssh:

ssh root@192.168.1.1

Then I listed the available snapshots:

$ schnapps list
# | Type      | Size        | Date                        | Description
------+-----------+-------------+-----------------------------+------------------------------------
  500 | post      |    15.98MiB | 2025-08-09 11:27:48 -0700   | Automatic post-update snapshot (TurrisOS 7.2.2 - hbs)
  506 | pre       |    17.92MiB | 2025-09-12 03:44:32 -0700   | Automatic pre-update snapshot (TurrisOS 7.2.2 - hbs)
  507 | post      |    17.88MiB | 2025-09-12 03:45:14 -0700   | Automatic post-update snapshot (TurrisOS 7.2.3 - hbs)
  515 | time      |    20.03MiB | 2025-11-02 01:05:01 -0700   | Snapshot created by cron
  516 | time      |    20.05MiB | 2025-11-09 01:05:01 -0800   | Snapshot created by cron
  517 | time      |    20.29MiB | 2025-11-16 01:05:00 -0800   | Snapshot created by cron
  518 | time      |    20.64MiB | 2025-11-23 01:05:01 -0800   | Snapshot created by cron
  519 | time      |    20.83MiB | 2025-11-30 01:05:00 -0800   | Snapshot created by cron
  520 | pre       |    87.91MiB | 2025-11-30 07:41:10 -0800   | Automatic pre-update snapshot (TurrisOS 7.2.3 - hbs)
  521 | post      |   196.32MiB | 2025-11-30 07:48:11 -0800   | Automatic post-update snapshot (TurrisOS 9.0.0 - hbs)
  523 | pre       |     4.44MiB | 2025-11-30 20:47:31 -0800   | Automatic pre-update snapshot
  524 | post      |   224.00KiB | 2025-11-30 20:47:43 -0800   | Automatic post-update snapshot
  525 | rollback  |   224.00KiB | 2025-12-01 04:56:32 +0000   | Rollback to snapshot factory
  526 | pre       |     4.44MiB | 2025-11-30 21:04:19 -0800   | Automatic pre-update snapshot
  527 | post      |   272.00KiB | 2025-11-30 21:04:31 -0800   | Automatic post-update snapshot
  528 | rollback  |   272.00KiB | 2025-12-01 05:13:38 +0000   | Rollback to snapshot factory
  529 | pre       |     4.52MiB | 2025-11-30 21:28:44 -0800   | Automatic pre-update snapshot
  530 | single    |   208.00KiB |                             | 
  531 | rollback  |   224.00KiB | 2025-12-01 05:29:47 +0000   | Rollback to snapshot factory

Finally, I rolled back to the exact state I was on before the 9.0.0 update:

$ schnapps rollback 520
Current state saved as snapshot number 532
Rolled back to snapshot 520

Full wipe

As an aside, it turns out that the factory reset functionality is implemented as a btrfs rollback to a special factory snapshot. This is why it is so fast, but it also means that doing a simple factory reset doesn't wipe the data on your router. If you are planning to sell your device or otherwise dispose of it, you also need to delete all btrfs snapshots.
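With schnapps that would look something like this (a sketch; the snapshot numbers are examples from the listing above, and you should check the schnapps usage output on your Turris OS version):

# List the remaining snapshots, then delete each one by its number
schnapps list
schnapps delete 500
schnapps delete 506
# ...and so on, for every snapshot you want gone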

Conclusion

While this update was very disappointing, especially since it's never happened before with major updates on Turris OS, it made me discover just how great the recovery tools are. It would be pretty tricky to fully brick one of these devices.

02 December, 2025 05:05AM

December 01, 2025

hackergotchi for Guido Günther

Guido Günther

Free Software Activities November 2025

Another short status update of what happened on my side last month. Hand-holding the release machinery for Phosh 0.51.0, but there's more:

See below for details on the above and more:

phosh

  • Better auto brightness (MR)
  • Update CI to forky (MR)
  • Test mobile data connection in CI (MR)
  • Add DebugControl interface (MR)
  • Release 0.51~rc1
  • caffeine prefs: Fix resize when adding intervals (MR)
  • Robustify plugin-prefs screenshot tests (MR)
  • Another build systemd dependency fix (MR)
  • Gesture to tune brightness on lock screen (MR)

phoc

  • Update ci to forky (MR)
  • Exit cleanly on SIGTERM (MR)
  • Release (0.51~rc1, 0.51.0)
  • Fix segfault triggered in alpine CI (MR)
  • Cancel preedit on submit (avoids resubmitted text in e.g. chatty or flare) (MR)

phosh-mobile-settings

  • Test suite robustness (MR)
  • Update CI (MR)
  • Release 0.51~rc1

stevia

xdg-desktop-portal-phosh

  • Release 0.51~rc1, 0.50.0
  • Unbreak nightly builds (MR)
  • Unbreak 32bit builds (MR)
  • Drop C file chooser impl (MR)

pfs

  • pfs-open: Allow to open arbitrary directories and start fixing clippy warnings (MR)
  • More clippy cleanups (MR)
  • Allow to ship schema (MR)
  • Run a smoke test in ci (MR)
  • Implement org.freedesktop.FileManager1 in the demo (MR, MR, MR)
  • dir-view: Don't thumbnail when disabled (MR)

Phrog

  • Fix osk dependencies (MR)

gmobile

  • run-phosh: Allow to run headless (MR)
  • Release 0.5.0 (MR)
  • display-panel: Allow to take screenshots (MR)
  • Add hwdb and udev rules for torch min brightness (MR)

feedbackd

feedbackd-device-themes

libcall-ui

  • Ignore callaudiod deprecations as we otherwise break compilation of downstreams (MR)
  • Same for 0.1.x branch (MR)
  • Release (0.1.5)

wireplumber

  • doc: Fix make run invocation (MR)

Chatty

mobile-broadband-provider-info

Debian

  • stevia: Upload 0.51~rc1, 0.51.0
  • phrog: Use stevia instead of osk-stub (MR)
  • meta-phosh: Modernize dependencies (MR)
  • phosh: Drop osk-stub (MR)
  • phosh: Upload 0.51~rc1
  • phoc: Upload 0.41~rc1
  • p-m-s: Upload 0.51~rc1
  • feedbackd-device-themes: Upload 0.8.7
  • m-b-p-i: Upload 20251101
  • debcargo-conf: Backport ashpd patch (MR)
  • xdg-desktop-portal-phosh: Get it into unstable (MR, MR)

Mobian

  • librem5: Drop exponential brightness (MR)

wlroots

  • input-method-unstable-v2: Fix two protocol issues (MR)

libqrtr-glib

  • Fix transfer annotation to unbreak usage from Python (MR)
  • Move doc build to gi-docgen (MR)

libadwaita-rs

  • Allow None for parent in adw_dialog_choose (MR)

phosh-site

  • Lint tools (MR)
  • Add slideshow to landing page (MR)
  • Add more videos (MR)
  • Fix typos and links (MR)
  • Update nightly details (MR)

bengalos-debs

  • Fix phrog build (MR, MR)
  • Enable arm64 builds (MR)

gtk

  • Drop unused defines (MR)

Reviews

This is not code by me but reviews on other peoples code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • pfs: Create folder support (MR)
  • portal: Create thumbnails via thumbnailer service (MR)
  • phosh: caffeine plugin prefs (MR)
  • phosh: lower torch brightness (MR)
  • phosh: wi-fi hotspot QR code (MR)
  • phosh/caffeine: Close status page when selecting an interval (MR)
  • phosh/caffeine: Use empty state (MR)
  • bengalos-recpipes: prep supporting multiple disk layouts (MR)
  • xdg-p-p: Longer test timeout (MR)
  • p-m-s: Volume slider for media roles (MR)

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 December, 2025 06:52PM

Russ Allbery

Review: Forever and a Day

Review: Forever and a Day, by Haley Cass

Series: Those Who Wait #1.5
Publisher: Haley Cass
Copyright: 2020
ISBN: 979-8-5902-5966-3
Format: Kindle
Pages: 101

Forever and a Day is a coda to Haley Cass's self-published sapphic romance novel Those Who Wait. There is no point in reading it unless you have already read and enjoyed the full book and wanted more of a denouement.

Given that Those Who Wait is a romance novel, it is definitionally not a spoiler to reveal that Sutton and Charlotte ended up together. This novella is seven scenes sketching out the next few years of their lives, interspersed with press clippings and social media commentary. These tie up loose ends, give the characters a bit more time together, throw in one more conflict and resolution, add one more sex scene, and stick a few exclamation points after the happily ever after.

I am the sort of person who likes long denouements in stories, so I'm the target audience for this sort of sequel that's essentially additional chapters to the book. (The funniest version of this I've read is Jacqueline Carey's Saints Astray.) They are usually not great literature, since there are good reasons for not including these chapters in the book. That is exactly what this is: a few more chapters of the characters being happy, entirely forgettable, and of interest only to people who want that.

Cass does try to introduce a bit of a plot via some light family conflict, which was sweet and mostly worked, and some conflict over having children, which was very stereotyped and which I did not enjoy as much. I thought the earlier chapters of this novella were the stronger ones, although I do have to give the characters credit in the later chapters for working through conflict in a mature and fairly reasonable way. It does help, though, when the conflict is entirely resolved by one character being right and the other character being happily wrong. That's character conflict on easy mode.

I was happy to see that Sutton got a career, although as in the novel I wish Cass had put some more effort into describing Sutton's efforts in building that career. The details are maddeningly vague, which admittedly matches the maddeningly vague description of Charlotte's politics but which left me unsatisfied.

Charlotte's political career continues to be pure wish fulfillment in the most utterly superficial and trivialized way, and it bothered me even more in the novella than it did in the novel. We still have absolutely no idea what she stands for, what she wants to accomplish, and why anyone would vote for her, and yet we get endless soft-focus paeans to how wonderful she will be for the country. Her opponents are similarly vague to the point that the stereotypes Cass uses to signal their inferiority to Charlotte are a little suspect.

I'm more critical of this in 2025 than I would have been in 2015 because the last ten years have made clear the amount of damage an absolute refusal to stand for anything except hazy bromides causes, and I probably shouldn't be this annoyed that Cass chose to vaguely gesture towards progressive liberalism without muddying her romance denouement with a concrete political debate. But, just, gah. I found the last chapter intensely annoying, in part because the narrative of that chapter was too cliched and trite to sufficiently distract me from the bad taste of the cotton-candy politics.

Other than that, this was minor, sweet, and forgettable. If you want another few chapters of an already long novel, this delivers exactly what you would expect. If the novel was plenty, nothing about this novella is going to change your mind and you can safely skip it. I really liked the scene between Charlotte and Sutton's mom, though, and I'm glad I read the novella just for that.

Rating: 6 out of 10

01 December, 2025 04:20AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

It's already December.

It's already December. I haven't figured out why suspend/resume no longer works on my workstation.

01 December, 2025 04:06AM by Junichi Uekawa

November 30, 2025

Utkarsh Gupta

FOSS Activites in November 2025

Here’s my monthly but brief update about the activities I’ve done in the FOSS world.

Debian

Whilst I didn’t get a chance to do much, here are a few things that I worked on:

  • Did a few sessions with the new DFSG team to help kickstart things, et al.
  • Assisted a few folks in getting their patches submitted via Salsa.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu

I joined Canonical to work on Ubuntu full-time back in February 2021.

Whilst I can’t give a full, detailed list of things I did, here’s a quick TL;DR of what I did:

  • Successfully released Resolute Snapshot 1!
    • This one was particularly interesting as it was done without the ISO tracker and cdimage access.
    • There are some wrinkles that need ironing out for the next snapshot.
  • Resolute Raccoon is now fully and formally open.
  • Assisted a bunch of folks with my Archive Admin and Release team hats to:
    • review NEW packages for Ubuntu Studio.
    • remove old binaries that are stalling transition and/or migration.
    • LTS requalification of Ubuntu flavours.
    • bootstrapping dotnet-10 packages.
    • removal of openjdk-19 from Jammy, which sparked some interesting discussions.

Debian (E)LTS

This month I have worked 22 hours on Debian Long Term Support (LTS) and on its sister Extended LTS project and did the following things:

  • wordpress: There were multiple vulnerabilities reported in Wordpress, leading to Sent Data & Cross-site Scripting.

    • [bookworm]: Roberto rightly pointed out that the upload to bookworm hadn’t gone through last month, so I re-uploaded wordpress/6.1.9+dfsg1-0+deb12u1 to bookworm-security.
    • This is now released as DSA 6075-1.
  • ruby-rack: There were multiple vulnerabilities reported in Rack, leading to DoS (memory exhaustion) and proxy bypass.

    • [ELTS]: Last month I had backported fixes for CVE-2025-46727 & CVE-2025-32441 to buster and stretch but the other backports were being a bit tricky due to really old versions.
    • I spent a bit more time, but there’s a lot to demystify. Gonna take a bit of a break from this one and come back to it after doing other updates. Might even consider sending an RFH to the list.
  • libwebsockets: Multiple issues were reported in LWS causing denial of service and stack-based buffer overflow.

  • mako: It was found that Mako, a Python template library, was vulnerable to a denial of service attack via crafted regular expressions.

    • [LTS]: For bullseye, these were fixed via 1.1.3+ds1-2+deb11u1. And released as DLA 4393-1.
    • Backporting tests was an interesting exercise as I had to make them compatible with the bullseye version. :)
  • ceph: Affected by CVE-2024-47866, using the argument x-amz-copy-source to put an object and specifying an empty string as its content leads to the RGW daemon crashing, resulting in a DoS attack.

    • [LTS]: Whilst the patch is straightforward, backports are a bit tricky. I’ve prepared the update but would like to reach out to zigo, the maintainer, to make sure nothing regresses.
    • [ELTS]: Same as LTS, I’d like to get a quick review and upload to LTS first before I start staging uploads for ELTS.
  • [LTS] Attended the monthly LTS meeting on IRC. Summary here.

    • It was also followed by a 50-minute post-meeting technical discussion/question session.
  • [E/LTS] Monitored discussions on mailing lists, IRC, and all the documentation updates. Thanks, Sylvain, for a great documentation summary.


Until next time.
:wq for today.

30 November, 2025 05:41AM

Russ Allbery

Review: The Last Soul Among Wolves

Review: The Last Soul Among Wolves, by Melissa Caruso

Series: The Echo Archives #2
Publisher: Orbit
Copyright: August 2025
ISBN: 0-316-30404-2
Format: Kindle
Pages: 355

The Last Soul Among Wolves is urban high fantasy with strong mystery vibes. It is a direct sequel to The Last Hour Between Worlds. You need the previous book for some character setup (and this book would spoil it badly), but you don't have to remember the first book in detail. Only the main plot outcomes are directly relevant and the characters will remind you of those.

Kembrel Thorne is a Hound, the equivalent of a police detective in the medieval-inspired city setting of this series, but this book does not open with an official assignment. Instead, she has been dragged by her childhood friend Jaycel Morningrey as company for a reading of the will of old lady Lovegrace, reclusive owner of a gothic mansion on an island connected to the city by an intermittent sandbar. A surprise reunion with her gang of childhood friends ensues, followed by the revelation that they are all in serious trouble.

Shortly after Kem left the group to become a Hound, the remaining four, plus several other apparently random people, got entangled with a powerful Echo artifact. Now that Lovegrace has died, one of them will inherit the artifact and the ability to make a wish, but only one. The rest will be killed at decreasing intervals until only the winner is left alive.

The Last Hour Between Worlds was fae fantasy built around a problem that was more of a puzzle than a mystery. The Last Soul Among Wolves is closer to a classic mystery: A cast of characters are brought together and semi-isolated in a rural house, they start dying, and it's up to the detective to solve the mystery of their death before it's too late. In this case, the initial mechanism of death is supernatural and not in doubt — the challenge instead is how to stop it from happening again — but Kem's problems quickly become more complicated.

As mystery plots go, this is more thriller than classical despite the setting. There are a few scenes of analyzing clues, but Kem is more likely to use the time-honored protagonist technique of throwing herself into danger and learning what's going on via the villain monologues. As readers of the previous book would expect, Rika Nonesuch is here too, hired by another of Kem's old friends, and the two navigate their personal feelings and the rivalry between their guilds in much the way that they did in the Last Hour Between Worlds. As in the first book, there is a sapphic romance subplot, but it's a very slow burn asexual romance.

The best part of this series continues to be the world-building. The previous book introduced the idea of the Echoes and sent the characters exploring into stranger and stranger depths. This book fleshes out the rules in more detail, creating something that feels partly like a fae realm and partly like high fantasy involving gods, but diverges from both into a logic of its own. The ending satisfyingly passes my test of fantasy mysteries: Resolving the mystery requires understanding and applying the rules of the setting, which are sufficiently strange to create interesting outcomes but coherent enough that the reader doesn't feel like the author is cheating.

There are some hissable villains here, but my favorite part of this book was the way Caruso added a lot of nuance and poignancy to the Echoes rather than showing them only as an uncanny threat. That choice made the world feel deeper and richer. It's not yet clear whether that element is setup for a longer-term series plot, but I hope Caruso will develop the story in that direction.

It felt to me like Caruso is aiming for an ongoing series rather than a multi-volume story with a definite ending. She avoids a full episodic reset — Rika, in particular, gets considerable character development and new complications that bode well for future volumes — but it doesn't feel like the series is building towards an imminent climax. This is not a complaint. I enjoy these characters and this world and will happily keep devouring each new series entry.

If you liked The Last Hour Between Worlds, I think you will like this. It doesn't have the same delight of initial discovery of the great world-building, but the plot is satisfying and a bit more complex and the supporting characters are even better than those in the first book. Once again, Caruso kept me turning the pages, and I'm now looking forward to a third volume. Recommended.

The third book in the series has not yet been announced, but there are indications on social media that it is coming.

Rating: 7 out of 10

30 November, 2025 04:03AM

hackergotchi for Otto Kekäläinen

Otto Kekäläinen

DEP-18: A proposal for Git-based collaboration in Debian

Featured image of post DEP-18: A proposal for Git-based collaboration in Debian

I am a huge fan of Git, as I have witnessed how it has made software development so much more productive compared to the pre-2010s era. I wish all Debian source code were in Git to reap the full benefits.

Git is not perfect, as it requires significant effort to learn properly, and the ecosystem is complex with even more things to learn ranging from cryptographic signatures and commit hooks to Git-assisted code review best practices, ‘forge’ websites and CI systems.

Sure, there is still room to optimize its use, but Git certainly has proven itself and is now the industry standard. Thus, some readers might be surprised to learn that Debian development in 2025 is not actually based on Git. In Debian, the version control is done by the Debian archive itself. Each ‘commit’ is a new upload to the archive, and the ‘commit message’ is the debian/changelog entry. The ‘commit log’ is available at snapshots.debian.org.

In practice, most Debian Developers (people who have the credentials to upload to the Debian archive) do use Git and host their packaging source code on salsa.debian.org – the GitLab instance of Debian. This is, however, based on each DD’s personal preferences. The Debian project does not have any policy requiring that packages be hosted on salsa.debian.org or be in version control at all.

Is collaborative software development possible without git and version control software?

Debian, however, has some peculiarities that may be surprising to people who have grown accustomed to GitHub, GitLab or various company-internal code review systems.

In Debian:

  • The source code of the next upload is not public but resides only on the developer’s laptop.
  • Code contributions are plain patch files, based on the latest revision released in the Debian archive (where the unstable area is equivalent to the main development branch).
  • These patches are submitted by email to a bug tracker that does no validation or testing whatsoever.
  • Developers applying these patches typically have elaborate Mutt or Emacs setups to facilitate fetching patches from email.
  • There is no public staging area, no concept of rebasing patches or withdrawing a patch and replacing it with a better version.
  • The submitter won’t see any progress information until a notification email arrives after a new version has been uploaded to the Debian archive.

This system has served Debian for three decades. It is not broken, but using the package archive just feels… well, archaic.

There is a more efficient way, and indeed the majority of Debian packages have a metadata field Vcs-Git that advertises which version control repository the maintainer uses. However, newcomers to Debian are surprised to notice that not all packages are hosted on salsa.debian.org but at various random places with their own accounts and code submission systems, and there is nothing enforcing or even warning if the code there is out of sync with what was uploaded to Debian. Any Debian Developer can at any time upload a new package version with whatever changes, bypassing the Git repository, even when the package advertises a Git repository. All PGP signed commits, Git tags and other information in the Git repository are just extras currently, as the Debian archive does not enforce or validate anything about them.
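
For reference, the advertisement is just two optional fields in the source stanza of debian/control, along these lines (the repository URL is hypothetical):

Vcs-Git: https://salsa.debian.org/debian/hello.git
Vcs-Browser: https://salsa.debian.org/debian/hello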

This also makes contributing to multiple packages in parallel hard. One can’t just go on salsa.debian.org and fork a bunch of repositories and submit Merge Requests. Currently, the only reliable way is to download source packages from Debian unstable, develop patches on top of them, and send the final version as a plain patch file by email to the Debian bug tracker. To my knowledge, no system exists to facilitate working with the patches in the bug tracker, such as rebasing patches 6 months later to detect if they or equivalent changes were applied or if sending refreshed versions is needed.
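
To make the status quo concrete, here is a minimal sketch of that patch-by-email workflow for a hypothetical package hello (the package name, version and reportbug invocation are illustrative; exact steps vary between contributors):

$ apt source hello                        # needs deb-src entries; fetches the source as released in unstable
$ cp -a hello-2.10 hello-2.10.orig        # keep a pristine copy to diff against
$ # ... edit files under hello-2.10/ ...
$ diff -ur hello-2.10.orig hello-2.10 > fix-something.patch
$ reportbug --attach=fix-something.patch hello   # mail the patch to the Debian bug tracker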

To newcomers in Debian, it is even more surprising that there are packages that are on salsa.debian.org but have the Merge Requests feature disabled. This is often because the maintainer does not want to receive notification emails about new Merge Requests, but rather just emails from bugs.debian.org. This may sound arrogant, but keep in mind that these developers put in the effort to set up their Mutt/Emacs workflow for the existing Debian process, and extending it to work with GitLab notifications is not trivial. There are also purists who want to do everything via the command-line (without having to open a browser, run JavaScript and maintain a live Internet connection), and tools like glab are not convenient enough for the full workflow.

Inefficient ways of working prevent Debian from flourishing

I would claim, based on my personal experiences from the past 10+ years as a Debian Developer, that the lack of high-quality and productive tooling is seriously harming Debian. The current methods of collaboration are cumbersome for aspiring contributors to learn, and suboptimal to use for both new and seasoned contributors.

There are no exit interviews for contributors who left Debian, no comprehensive data on reasons to contribute or stop contributing, nor are there any metrics tracking how many people tried but failed to contribute to Debian. Some data points to support my concerns do exist:

  • The contributor database shows that the number of contributors is growing slower than Debian’s popularity.
  • Most packages are maintained by one person working alone (just pick any package at random and look at the upload history).

Debian should embrace git, but decision-making is slow

Debian is all about community and collaboration. One would assume that Debian prioritized above all making collaboration tools and processes simpler, faster and less error-prone, as it would help both current and future package maintainers. Yet, it isn’t so, due to some reasons unique to Debian.

There is no single company or entity running Debian, and it has managed to operate as a pure meritocracy and do-cracy for over 30 years. This is impressive and admirable. Unfortunately, some of the infrastructure and technical processes are also nearly 30 years old and very difficult to change due to the same reason: the nature of Debian’s distributed decision-making process.

As a software developer and manager with 25+ years of experience, I strongly feel that developing software collaboratively using Git is a major step forward that Debian needs to take, in one form or another, and I hope to see other DDs voice their support if they agree.

Debian Enhancement Proposal 18

Following how consensus is achieved in Debian, I started drafting DEP-18 in 2024, and it is currently awaiting enough thumbs up at https://salsa.debian.org/dep-team/deps/-/merge_requests/21 to get into CANDIDATE status next.

In summary, DEP-18 proposes that everyone keen on collaborating should:

  1. Maintain Debian packaging sources in Git on Salsa.
  2. Use Merge Requests to show your work and to get reviews.
  3. Run Salsa CI before upload.

The principles above are not novel. According to stats at e.g. trends.debian.net and UDD, ~93% of all Debian source packages are already hosted on salsa.debian.org. As of June 1st, 2025, only 1640 source packages remain that are not hosted on Salsa. The purpose of DEP-18 is to state in writing what Debian is currently doing for most packages, and thus express what, among others, new contributors should be learning and doing, so basic collaboration is smooth and free from structural obstacles.
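
For a contributor, the first two recommendations translate to a familiar forge workflow. A minimal sketch, assuming a hypothetical package hello that has already been forked on Salsa (the URLs and branch name are illustrative, and the last line uses GitLab's push options to open the Merge Request):

$ git clone https://salsa.debian.org/your-user/hello.git   # clone your fork
$ cd hello
$ git checkout -b fix-something                            # work on a topic branch
$ # ... edit and commit ...
$ git commit -a -m "Fix something"
$ git push -o merge_request.create origin fix-something    # push and open the Merge Request in one step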

Most packages are also already allowing Merge Requests and using Salsa CI, but there hasn’t been any written recommendation anywhere in Debian to do so. The Debian Policy (v.4.7.2) does not even mention the word “Salsa” a single time. The current process documentation on how to do non-maintainer uploads or salvage packages is all based on uploading packages to the archive, without any consideration of using git-based collaboration such as posting a Merge Request first. Personally, I feel posting a Merge Request would be a better approach, as it would invite collaborators to discuss and provide code reviews. If there are no responses, the submitter can proceed to merge, but compared to direct uploads to the Debian archive, the Merge Request practice at least tries to offer a time and place for discussions and reviews to happen.
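
The third recommendation is also cheap to adopt for repositories that do not have it yet. A minimal sketch, assuming the standard salsa-ci-team pipeline recipe (the project's CI configuration path in the Salsa project settings also needs to point at debian/salsa-ci.yml):

$ cat > debian/salsa-ci.yml <<'EOF'
---
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml
EOF
$ git add debian/salsa-ci.yml
$ git commit -m "Add Salsa CI configuration"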

It could very well be that in the future somebody comes up with a new packaging format that makes upstream source package management easier, or a monorepo with all packages, or some other future structures or processes. Having a DEP to state how to do things now does not prevent people from experimenting and innovating if they intentionally want to do that. The DEP is merely an expression of the minimal common denominators in the packaging workflow that maintainers and contributors should follow, unless they know better.

Transparency and collaboration

Among the DEP-18 recommendations is:

The recommended first step in contributing to a package is to use the built-in “Fork” feature on Salsa. This serves two purposes. Primarily, it allows any contributor to publish their Git branches and submit them as Merge Requests. Additionally, the mere existence of a list of “Forks” enables contributors to discover each other, and in rare cases when the original package is not accepting improvements, collaboration could arise among the contributors and potentially lead to permanent forks in the general meaning. Forking is a fundamental part of the dynamics in open source that helps drive quality and agreement. The ability to fork ultimately serves as the last line of defense of users’ rights. Git supports this by making both temporary and permanent forks easy to create and maintain.

Further, it states:

Debian packaging work should be reasonably transparent and public to allow contributors to participate. A maintainer should push their pending changes to Salsa at regular intervals, so that a potential contributor can discover if a particular change has already been made or a bug has been fixed in version control, and thus avoid duplicate work.

Debian maintainers should make reasonable efforts to publish planned changes as Merge Requests on Salsa, and solicit feedback and reviews. While pushing changes directly on the main Git branch is the fastest workflow, second only to uploading all changes directly to Debian repositories, it is not an inclusive way to develop software. Even packages that are maintained by a single maintainer should at least occasionally publish Merge Requests to allow new contributors to step up and participate.

I think these are key aspects leading to transparency and true open source collaboration. Even though this talks about Salsa — which is based on GitLab — the concepts are universal and will also work on other forges, like Forgejo or GitHub. The point is that sharing work-in-progress on a real-time platform, with CI and other supporting features, empowers and motivates people to iterate on code collaboratively. As an example of an anti-pattern, Oracle MySQL publishes the source code for all their releases and is license-compliant, but as they don’t publish their Git commits in real-time, it does not feel like a real open source project. Non-Oracle employees are not motivated to participate as second-class developers who are kept in the dark. Debian should embrace git and sharing work in real-time, embodying a true open source spirit.

Recommend, not force

Note that the Debian Enhancement Proposals are not binding. Only the Debian Policy and Technical Committee decisions carry that weight. The nature of collaboration is voluntary anyway, so the DEP does not need to force anything on people who don’t want to use salsa.debian.org.

DEP-18 is also not a guide for package maintainers. I have my own views and have written detailed guides in blog articles if you want to read more on, for example, how to do code reviews efficiently.

Within DEP-18, there is plenty of room to work in many different ways, and it does not try to force one single workflow. The goal here is to simply have agreed-upon minimal common denominators among those who are keen to collaborate using salsa.debian.org, not to dictate a complete code submission workflow.

Once we reach this, there will hopefully be less friction in the most basic and recurring collaboration tasks, giving DDs more energy to improve other processes or just invest in having more and newer packages for Debian users to enjoy.

Next steps

In addition to lengthy online discussions on mailing lists and DEP reviews, I also presented on this topic at DebConf 2025 in Brest, France. Unfortunately the recording is not yet up on Peertube.

The feedback has been overwhelmingly positive. However, there are a few loud and very negative voices that cannot be ignored. Maintaining a Linux distribution at the scale and complexity of Debian requires extraordinary talent and dedication, and people doing this kind of work often have strong views, and most of the time for good reasons. We do not want to alienate existing key contributors with new processes, so maximum consensus is desirable.

We also need more data on what the 1000+ current Debian Developers view as a good process to avoid being skewed by a loud minority. If you are a current or aspiring Debian Developer, please add a thumbs up if you think I should continue with this effort (or a thumbs down if not) on the Merge Request that would make DEP-18 have candidate status.

There is also technical work to do. Increased Git use will obviously lead to growing adoption of the new tag2upload feature, which will need to get full git-buildpackage support so it can integrate into salsa.debian.org without turning off Debian packaging security features. The git-buildpackage tool itself also needs various improvements, such as making contributing to multiple different packages with various levels of diligence in debian/gbp.conf maintenance less error-prone.

Eventually, if it starts looking like all Debian packages might get hosted on salsa.debian.org, I would also start building a review.debian.org website to facilitate code review aspects that are unique to Debian, such as tracking Merge Requests across GitLab projects in ways GitLab can’t do, highlighting which submissions need review most urgently, feeding code reviews and approvals into the contributors.debian.org database for better attribution and so forth.

Details on this vision will be in a later blog post, so subscribe to updates!

30 November, 2025 12:00AM

November 29, 2025

hackergotchi for Freexian Collaborators

Freexian Collaborators

Monthly report about Debian Long Term Support, October 2025 (by Roberto C. Sánchez)

The Debian LTS Team, funded by Freexian’s Debian LTS offering, is pleased to report its activities for October.

Activity summary

During the month of October, 21 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below).

The team released 37 DLAs fixing 893 CVEs.

The team has continued in its usual rhythm, preparing and uploading security updates targeting LTS and ELTS, as well as helping with updates to oldstable, stable, testing, and unstable. Additionally, the team received several contributions of LTS uploads from Debian Developers outside the standing LTS Team.

Notable security updates:

  • https-everywhere, prepared by Markus Koschany, deals with a problem created by ownership of the https-rulesets.org domain passing to a malware operator
  • openjdk-17 and openjdk-11, prepared by Emilio Pozuelo Monfort, fixes XML external entity and certificate validation vulnerabilities
  • intel-microcode, prepared by Tobias Frost, fixes a variety of privilege escalation and denial of service vulnerabilities

Notable non-security updates:

  • distro-info-data, prepared by Stefano Rivera, updates information concerning current and upcoming Debian and Ubuntu releases

Contributions from outside the LTS Team:

  • Lukas Märdian, a Debian Developer, provided an update of log4cxx
  • Andrew Ruthven, one of the request-tracker4 maintainers, provided an update of request-tracker4
  • Christoph Goehre, co-maintainer of thunderbird, provided an update of thunderbird

Beyond the typical LTS updates, the team also helped the Debian community more broadly:

  • Guilhem Moulin prepared oldstable/stable updates of libxml2, and an unstable update of libxml2.9
  • Bastien Roucariès prepared oldstable/stable updates of imagemagick
  • Daniel Leidert prepared an oldstable update of python-authlib, oldstable update of libcommons-lang-java and stable update of libcommons-lang3-java
  • Utkarsh Gupta prepared oldstable/stable/testing/unstable updates of ruby-rack

The LTS Team is grateful for the opportunity to contribute to making LTS a high quality offering for sponsors and users. We are also particularly grateful for the collaboration from others outside the team; their contributions are important to the success of the LTS effort.

Individual Debian LTS contributor reports

Thanks to our sponsors

Sponsors that joined recently are in bold.

29 November, 2025 12:00AM by Roberto C. Sánchez

November 28, 2025

hackergotchi for Clint Adams

Clint Adams

monkeying around bitrot

One of the servers to which I SSH ratcheted up its public key requirements and thus the Monkeysphere key I've been using for 15 years stopped working.

Unfortunately, monkeysphere gen-subkey hardcodes RSA keys and if I'm going to be forced to use a new subkey I want mine to be of the 25519 variety. Therefore, to add a subkey by hand:

gpg --expert --edit-key $KEYID

Follow roughly what's in /usr/share/monkeysphere/m/gen_subkey, but change the key type to 11 (ECC (set your own capabilities)), don't bother with Encrypt capability, and pick Curve25519.
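
A minimal sketch of that interactive session (prompts abbreviated; the capability choices assume you want an authentication-capable subkey, which is what the SSH use case needs):

$ gpg --expert --edit-key $KEYID
gpg> addkey
Your selection? 11        # ECC (set your own capabilities)
Your selection? A         # toggle the Authenticate capability on
Your selection? S         # optionally toggle Sign off for an authentication-only subkey
Your selection? Q         # finished selecting capabilities
Your selection? 1         # Curve 25519
Key is valid for? (0) 0   # or whatever expiry you prefer
gpg> save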

monkeysphere subkey-to-ssh-agent and agent-transfer will be all happy with the "ed25519" subkey without any code modifications, and you won't need to rewrite monkeysphere from scratch to use Sequoia for the next 15 years.

Posted on 2025-11-28
Tags:

28 November, 2025 08:56PM

Simon Josefsson

Container Images for Debian with Guix

The debian-with-guix-container project builds and publishes container images of Debian GNU/Linux stable with GNU Guix installed.

The images are like normal Debian stable containers but have the guix tool and a reasonably fresh guix pull.

Supported architectures include amd64 and arm64. The multi-arch container is called:

registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable

It may also be accessed via debian-with-guix at Docker Hub as:

docker.io/jas4711/debian-with-guix:stable

The container images may be used like this:

$ podman run --privileged -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable
root@guix:/# hello
bash: hello: command not found
root@guix:/# guix describe
  guix c9eb69d
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: c9eb69ddbf05e77300b59f49f4bb5aa50cae0892
root@guix:/# LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild &
[1] 21
root@guix:/# GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
root@guix:/# guix describe
Generation 2    Nov 28 2025 10:14:11    (current)
  guix c9eb69d
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: c9eb69ddbf05e77300b59f49f4bb5aa50cae0892
root@guix:/# guix install --verbosity=0 hello
accepted connection from pid 55, user root
The following package will be installed:
   hello 2.12.2

hint: Consider setting the necessary environment variables by running:

     GUIX_PROFILE="/root/.guix-profile"
     . "$GUIX_PROFILE/etc/profile"

Alternately, see `guix package --search-paths -p "/root/.guix-profile"'.

root@guix:/# GUIX_PROFILE="/root/.guix-profile"
root@guix:/# . "$GUIX_PROFILE/etc/profile"
root@guix:/# hello
Hello, world!
root@guix:/# 

Below is an example GitLab pipeline job that demonstrates how to run guix install to install additional dependencies, and then download and build a package that picks up the installed package from the system.

test-wget-configure-make-libksba-amd64:
  image: registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable
  before_script:
  - env LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild $GUIX_DAEMON_ARG &
  - GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
  - guix describe
  - guix install libgpg-error
  - GUIX_PROFILE="/root/.guix-profile"; . "$GUIX_PROFILE/etc/profile"
  - apt-get install --update -y --no-install-recommends build-essential wget ca-certificates bzip2
  script:
  - wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
  - tar xfa libksba-1.6.7.tar.bz2
  - cd libksba-1.6.7
  - ./configure
  - make V=1
  - make check VERBOSE=t V=1

The images were initially created for use in GitLab CI/CD Pipelines but should work for any use.

The images are built in a GitLab CI/CD pipeline, see .gitlab-ci.yml.

The containers are derived from official Debian stable images with Guix installed and a successful run of guix pull, built using buildah invoked from build.sh using image/Containerfile that runs image/setup.sh.

The pipeline also pushes images to the GitLab container registry, and then to Docker Hub.

Guix binaries are downloaded from the Guix binary tarballs project because of upstream download site availability and bandwidth concerns.

Enjoy these images! Hopefully they can help you overcome the loss of Guix in Debian, which used to make it a mere apt-get install guix away.

There are several things that may be improved further. An alternative to using podman --privileged is to use --security-opt seccomp=unconfined --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN, which may be slightly more fine-grained.
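
For example, the podman invocation from the beginning of this post would then become something like this (a sketch based on the flags above):

$ podman run --security-opt seccomp=unconfined \
    --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN \
    -it --hostname guix --rm \
    registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable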

For ppc64el support I ran into an error message that I wasn’t able to resolve:

guix pull: error: while setting up the build environment: cannot set host name: Operation not permitted

For riscv64, I can’t even find a Guix riscv64 binary tarball for download, is there one anywhere?

For arm64 containers, it seems that you need to start guix-daemon with --disable-chroot to get something to work, at least on GitLab.com’s shared runners, otherwise you will get this error message:

guix install: error: clone: Invalid argument

Building the images themselves also requires disabling some security functionality, and I was not able to build images with buildah without providing --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN, otherwise there were errors like this:

guix pull: error: cloning builder process: Operation not permitted
guix pull: error: clone: Operation not permitted
guix pull: error: while setting up the build environment: cannot set loopback interface flags: Operation not permitted

Finally on amd64 it seems --security-opt seccomp=unconfined is necessary, otherwise there is an error message like this, even if you use --disable-chroot:

guix pull: error: while setting up the child process: in phase setPersonality: cannot set personality: Function not implemented

This particular error is discussed upstream, but I think generally these errors suggest that guix-daemon could make its use of such features more optional: if some particular feature is not available, gracefully fall back to another mode of operation, instead of exiting with an error. Of course, it should never fall back to an insecure mode of operation, unless the user requests that.

Happy Hacking!

28 November, 2025 04:32PM by simon

Russell Coker

10gbit and 40gbit Home Networking

Aliexpress has a 4 port 2.5gbit switch with 2*SFP+ sockets for $34.35 delivered [1]. 4 ports isn’t very good for the more common use cases (if you daisy-chain them then only 2 are available for devices) so this is really a device for use with a 10Gbit uplink.

Aliexpress has a pair of SFP+ 10Gbit devices with 1M of copper between them for $15.79 delivered [2]. That page also offers a pair of QSFP+ 40Gbit devices with 1M of copper between them for $27.79 delivered.

They have a dual port SFP+ card for a server with two of the pairs of SFP+ 10gbit devices with copper between them for $32.51 delivered [3].

So you can get a 2.5gbit switch with two 10gbit uplink cables to nearby servers for $66.86 including postage. I don’t need this but it is tempting. I spent $93.78 to get 2.5gbit networking [4] so spending $66.86 to get part of my network to 10gbit isn’t much.

It is $99.81 including postage for a Mellanox 2*40Gbit QSFP+ card and two QSFP+ adaptors with 3M of copper between them [5]. It is $55.81 including postage for the Mellanox card without the cable. So that’s $155.62 for a point to point 40gbit link between systems that are less than 3M apart, that’s affordable for a home lab. As an aside the only NVMe I’ve tested which can deliver such speeds was in a Thinkpad and the Thinkpad entered a thermal throttling state after a few seconds of doing that.

The best price I could see for a 40Gbit switch is $1280 for a L3 Managed switch with 2*40G QSFP+ slot ports, 4*10G SFP+ ports, and 48*2.5G RJ45 ports [6]. That’s quite affordable for the SME market but a bit expensive for home users (although I’m sure that someone on r/homelab has one).

I’m not going to get 40Gbit, that’s well above what I need and while a point to point link is quite affordable I don’t have servers in that range. But I am seriously considering 10Gbit, I get paid to do enough networking stuff that having some hands on experience with 10Gbit could be useful.

For a laptop a 5gbit ethernet USB device is $29.48 including delivery which isn’t too expensive [7]. The faster ones seem to be all Thunderbolt and well over $100, which is disappointing as USB 3.2 can do up to 20Gbit. If I start doing 10gbit over ethernet I’ll get one of those USB devices for testing.

For a single server it’s cheaper and easier to get a 4 port 2.5Gbit ethernet for $55.61 [8].

28 November, 2025 08:13AM by etbe

November 27, 2025

Russ Allbery

Review: A Matter of Execution

Review: A Matter of Execution, by Nicholas & Olivia Atwater

Series: Tales of the Iron Rose #0
Publisher: Starwatch Press
Copyright: 2024
ISBN: 1-998257-08-8
Format: Kindle
Pages: 131

A Matter of Execution is the introductory novella that kicked off the Tales of the Iron Rose series. It is steampunk fantasy with airships. I previously read and reviewed the subsequent novel, Echoes of the Imperium.

As noted in that review, I read the novel first. That was a mistake; this is a much better place to start. A Matter of Execution was clearly intended as the introduction of all of these characters. More importantly, I think reading the novella first would have given me enough affinity with the characters to not mind the worst part of Echoes of the Imperium: the extremely slow first half that seemed filled with the protagonist's impostor syndrome.

A Matter of Execution opens, fittingly, with Captain William Blair, a goblin, former Imperial soldier, Oathbreaker, and series first-person protagonist being carted to his execution. He is not alone; in the same prison wagon is an arrogant (and racist) man named Strahl, the killer of one of the rulers of Lyonesse.

Strahl is rather contemptuous of Blair's claim to be a captain, given that he's both a goblin and an Oathbreaker. Strahl quickly revises that opinion when Blair's crew, somewhat predictably given that he is the series protagonist, creates a daring escape for both of them. The heat of action gives both a chance to gain some respect for the other, which explains why Blair is not only willing to invite Strahl to join his crew, but to go back for Strahl's companion.

Breaking out Strahl's companion will be a more difficult, and surprising, problem.

Nicholas Atwater is a role-playing game GM, something that you will learn in the "about the author" section at the end of this novella but probably will have guessed by then. Even more than Echoes of the Imperium, this novella feels like a (good) write-up of an RPG adventure. A wildly varied cast of characters come together and form a party with a well-defined objective that has some surrounding mysteries and surprises. Each of those characters gets their individual moments to show off their specific skills. Readers with a certain gaming background will know exactly where to insert the Borderlands-style title card with a slightly demented description of each character.

This is not a complaint. You may be able to see the bones of the setup adventure for a long-running campaign, but I like this style of character introduction and the story moves right along. There are a ton of varied characters, some interesting villains and maybe-villains, a rather satisfying heist setup, and some good chemistry and a bit of banter. This is not a deep story — it's clearly an introductory episode for both the characters and the world background — but it's a fun way to spend a few hours.

I think the best part of this series is the world-building. If you have read my review of Echoes of the Imperium, you have unfortunately been mildly spoiled for the revelation in this novella. I don't think it hurt the story that much; you will be able to predict what obvious gaps in the novel backstory the novella is going to fill in, but it's just as enjoyable to see how that happens. But the Atwaters aren't going to drop any of the big world-building bombs in the introductory novella, of course. Instead, you get a gradual introduction to the nature of magic in this world, some of the political setup of the recent war, and a quick introduction to the capabilities of Strahl's mysterious companion.

If you've not yet read this series, I recommend starting here. It's a quick investment to see if you'll be interested. The novel is heavier and slower, and the pacing of the first half isn't great, but the world-building is even better.

If you've already read the novel, this is still worth reading as long as you enjoyed it. You'll have a few moments of "oh, that's how that happened," and it's a fun and fast-moving way to spend a bit more time with the characters.

Followed by Echoes of the Imperium. The back matter of the novella says that The Winds of Fortune is supposedly forthcoming.

Rating: 7 out of 10

27 November, 2025 05:34AM

Russell Coker

PineTime Band

I’ve had a Pine Time for just over 2 years [1]. About a year ago I had a band break and replaced it from a spare PineTime and now I just had another break. Having the band only last one year isn’t that great, but it’s fortunate that the break only affects the inner layer of plastic so there is no risk of the watch suddenly falling off and being broken or lost. The Pine64 web site has a page about this with bad options, one broken link and a few Amazon items that have ridiculous postage [2].

I started writing this post while using the band from a Colmi P80 [3]. I bought one for a relative who wanted the metal band, and the way the Aliexpress seller does it is to sell the package with the plastic band and include the metal band in the package, so I had a spare band. It fits quite well and shows none of the reported problems of the PineTime having insufficient space between the spring bar and the watch. The Colmi band in question is described as “rose gold” but is more like “pinkish beige” and doesn’t match the style of the black PineTime.

I ordered a couple of cheap bands from AliExpress which cost $9.77 and $13.55 including postage while the ones that Pine64 recommend have over $15 postage from Amazon!

The 20mm Silicone Magnetic Buckle Watch Strap Band For Huawei GT2 Smart Watch Connected Bracelet Black Watchband Man [4] cost $13.55 including postage. It has a magnetic unfold mechanism which I find a bit annoying and it doesn’t allow easily changing the length. I don’t think I’ll choose that again. But it basically works and is comfortable.

The 20mm Metal Strap for Huawei Watch GT2 3 Quick Release Stainless Steel Watch Band for Samsung Galaxy Watch Bracelet [5] cost $9.77 including postage. I found this unreasonably difficult to put on and not particularly comfortable. But opinion will vary on that, it is cheap and will appeal to some people’s style.

Conclusion

There are claims that getting a replacement band for a PineTime is difficult. My experience is that every band with a 20mm attachment works as long as it’s designed for a square watch; some of the bands are designed to partly go around a round face and wouldn’t fit. I expect that some bands won’t fit, but I don’t think that it’s enough of a problem to be worried about buying a random band from AliExpress. The incidence of bands not fitting will probably be lower than the incidence of other AliExpress products that don’t do quite what you want (while meeting the legal criteria of doing what they are claimed to do) and end up unused.

I’m now wearing the PineTime with the “Magnetic Buckle Watch Strap Band” and plan to wear it for the next year or so.

27 November, 2025 12:37AM by etbe

Valhalla's Things

PDF Planners 2026

Posted on November 27, 2025
Tags: madeof:atoms, madeof:bits, craft:bookbinding

A few years ago I wrote some planner generating code to make myself a custom planner; in November 2023 I generated a few, and posted them here on the blog, in case somebody was interested in using them.

In 2024 I tried to do the same, and ended up being even later, to the point where I didn’t generate any (uooops).

I did, however, start to write a Makefile to automate the generation (and got stuck on the fact that there wasn’t an easy way to deduce the correct options needed from just the template name); this year, with the same promptness as in 2023 I got back to the Makefile and finished it, so maybe next year I will be able to post them early enough for people to print and bind them? maybe :)

Anyway, these are all of the variants I currently generate, for 2026.

The files with -book in the name have been imposed on A4 paper for a 16-page signature. All of the fonts have been converted to paths, for ease of printing (yes, this means that customizing the font requires running the script, but the alternative also had its drawbacks).

In English:

daily-95×186-en.pdf

blank daily pages, 95 mm × 186 mm;

daily-A5-en.pdf daily-A5-en-book.pdf

blank daily pages, A5;

daily-A6-en.pdf daily-A6-en-book.pdf

blank daily pages, A6;

daily-graph-A5-en.pdf daily-graph-A5-en-book.pdf

graph paper (4 mm) daily pages, A5;

daily-points4mm-A5-en.pdf daily-points4mm-A5-en-book.pdf

pointed paper (4 mm), A5;

daily-ruled-A5-en.pdf daily-ruled-A5-en-book.pdf

ruled paper daily pages, A5;

week_on_two_pages-A6-en.pdf week_on_two_pages-A6-en-book.pdf

weekly planner, one week on two pages, A6;

week_on_one_page-A6-en.pdf week_on_one_page-A6-en-book.pdf

weekly planner, one week per page, A6;

week_on_one_page_dots-A6-en.pdf week_on_one_page_dots-A6-en-book.pdf

weekly planner, one week per page with 4 mm dots, A6;

week_health-A6-en.pdf week_health-A6-en-book.pdf

weekly health tracker, one week per page with 4 mm dots, A6;

month-A6-en.pdf month-A6-en-book.pdf

monthly planner, A6;

And the same planners, in Italian:

daily-95×186-it.pdf

blank daily pages, 95 mm × 186 mm;

daily-A5-it.pdf daily-A5-it-book.pdf

blank daily pages, A5;

daily-A6-it.pdf daily-A6-it-book.pdf

blank daily pages, A6;

daily-graph-A5-it.pdf daily-graph-A5-it-book.pdf

graph paper (4 mm) daily pages, A5;

daily-points4mm-A5-it.pdf daily-points4mm-A5-it-book.pdf

pointed paper (4 mm), A5;

daily-ruled-A5-it.pdf daily-ruled-A5-it-book.pdf

ruled paper daily pages, A5;

week_on_two_pages-A6-it.pdf week_on_two_pages-A6-it-book.pdf

weekly planner, one week on two pages, A6;

week_on_one_page-A6-it.pdf week_on_one_page-A6-it-book.pdf

weekly planner, one week per page, A6;

week_on_one_page_dots-A6-it.pdf week_on_one_page_dots-A6-it-book.pdf

weekly planner, one week per page with 4 mm dots, A6;

week_health-A6-it.pdf week_health-A6-it-book.pdf

weekly health tracker, one week per page with 4 mm dots, A6;

month-A6-it.pdf month-A6-it-book.pdf

monthly planner, A6;

Some of the planners include ephemerids and moon phase data: these have been calculated for the town of Como, and specifically for geo:45.81478,9.07522?z=17, because that’s what everybody needs, right?

If you need the ephemerids for a different location and can’t run the script yourself (it depends on pdfjam, i.e. various GB of LaTeX, and a few python modules such as dateutil, pypdf and jinja2), feel free to ask: unless I receive too many requests to make this sustainable I’ll generate them and add them to this post.

I hereby release all the PDFs linked in this blog post under the CC0 license.

You may notice that I haven’t decided on a license for the code dump repository; again if you need it for something (that is compatible with its unsupported status) other than running it for personal use (for which afaik there is an implicit license) let me know and I’ll push “decide on a license” higher on the stack of things to do :D

Finishing the Makefile meant that I had to add a tiny feature to one of the scripts involved, which required me to add a dependency to pypdf: up to now I have been doing the page manipulations with pdfjam, which is pretty convenient to use, but also uses LaTeX, and apparently not every computer comes with texlive installed (shocking, I know).

If I’m not mistaken, pypdf can do all of the things I’m doing with pdfjam, so maybe for the next year I could convert my script to use that one instead.

But then the planners 2027 will be quick and easy, and I will be able to publish them promptly, right?

27 November, 2025 12:00AM

November 26, 2025

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (September and October 2025)

The following contributors got their Debian Developer accounts in the last two months:

  • Evangelos Ribeiro Tzaras (devrts)
  • Andrea Bolognani (abologna)

The following contributors were added as Debian Maintainers in the last two months:

  • Rylie Pavlik
  • Yuchin Tsai
  • Daniel Markstedt
  • Guido Berhörster
  • Renzo Davoli

Congratulations!

26 November, 2025 04:00PM by Jean-Pierre Giraud

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

tidyCpp 0.0.8 on CRAN: Maintenance

Another maintenance release of the tidyCpp package arrived on CRAN this morning, the first in about two years. The package offers a clean C++ layer (as well as one small C++ helper class) on top of the C API for R, aiming to make use of this robust (if awkward) C API a little easier and more consistent. See the (now updated, see below) vignette for motivating examples.

This release contains mostly internal upkeep of the usual type: refreshing continuous integration, updating links, switching to Authors@R. But as we wrap the C API of R here too, changes made in R-devel this week affected the two reverse-dependency (i.e. “downstream”) packages (of mine) using this. So we commented out the definitions for the five now-hidden accessors so that these downstream packages can build again under R-devel.

The NEWS entry follows.

Changes in tidyCpp version 0.0.8 (2025-11-25)

  • Updated continuous integration setup several times

  • Updated README.md documentation with link to R API site

  • Updated example snippets to use of Protect

  • Updated documentation in defines.h header

  • Updated internals.h header reflecting in R API changes

As it happens, hours after the release at CRAN a helpful issue ticket was opened detailing more than a handful of typos in the vignette. This has been corrected, and I am now exporting the vignette via GitHub Pages, so the motivating examples vignette contains the corrections.

Thanks to my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

26 November, 2025 02:57PM

November 25, 2025

Russell Coker

EDID and my 8K TV

I previously blogged about buying a refurbished Hisense 65u80g 8K TV with the aim of making it a large monitor [1] and about searching for a suitable video card for 8k [2]. After writing the second post I bought an Intel Arc B580 which also did a maximum of 4096*2160 resolution.

This post covers many attempts to try and get the TV to work correctly and it doesn’t have good answers. The best answer might be to not buy Hisense devices but I still lack data.

Attempts to Force 8K

I posted on Lemmy again about this [3] and got a single response, which is OK as it was a good response. They didn’t give me the answer on a silver platter but pointed me in the right direction of EDID [4].

I installed the Debian packages read-edid, wxedid, and edid-decode.

The command “get-edid > out.edid” saves the binary form of the edid to a file. The command “wxedid out.edid” allows graphical analysis of the EDID data. The command “edid-decode out.edid” dumps a plain text representation of the output, and the command “edid-decode out.edid|grep VIC|cut -d: -f2|sort -n” shows an ordered list of video modes; in my case the highest resolution is 4096×2160, which is the highest that Linux had allowed me to set with two different video cards and a selection of different cables (both HDMI and DisplayPort).

xrandr --newmode 7680x4320 1042.63  7680 7984 7760 7824  4320 4353 4323 4328
xrandr --addmode HDMI-3 7680x4320
xrandr --output HDMI-3 --mode 7680x4320

I ran the above commands and got the below error:

xrandr: Configure crtc 0 failed

At this time I don’t know how much of this is due to the video card and how much is due to the TV. The parameters for xrandr came from an LLM because I couldn’t find any Google results on what 8K parameters to use. As an aside, if you have a working 8K TV or monitor connected to a computer, please publish the EDID data, xrandr output, and everything else you can think of.
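
One way to at least get a self-consistent set of timings, rather than guessing, is to generate a CVT modeline with the cvt tool shipped with the X server and feed it to xrandr. This is only a sketch (the mode name just matches the resolution, and HDMI-3 is the output name used above); whether the video card and TV will actually accept such a mode is a separate question:

$ cvt 7680 4320 30         # prints a CVT "Modeline ..." line for 7680x4320 at 30Hz
$ # paste everything after the word Modeline into --newmode:
$ xrandr --newmode "7680x4320_30.00" <pixel clock and timings printed by cvt>
$ xrandr --addmode HDMI-3 "7680x4320_30.00"
$ xrandr --output HDMI-3 --mode "7680x4320_30.00"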

I found a Github repository for EDID data [5] but that didn’t have an entry for my TV and didn’t appear to have any other entry for an 8K device I could use.

Resolution for Web Browsing

I installed a browser on the TV; Chrome and Firefox aren’t available for a TV, and the Play Store program tells you that (but without providing a reason) when you search for them. I tried the site CodeShack What is my Screen Resolution [6] which said that my laptop is 2460*1353 while the laptop display is actually 2560*1440. So apparently I have 100 pixels used for the KDE panel at the left of the screen and 87 pixels used by the Chrome tabs and URL bar – which seems about right. My Note 9 phone reports 384*661 out of its 2960*1440 display so it seems that Chrome on my phone is running web sites at 4/15 of the native resolution and about 16% of the height of the screen is used by the system notification bar, the back/home/tasklist buttons (I choose buttons instead of swipe for navigation in system settings), and the URL bar when I have “Screen zoom” in system settings at 1/4. When I changed “Screen zoom” to 0/4 the claimed resolution changed to 411*717 (2/7 of the native resolution). Font size changes didn’t change the claimed resolution. The claimed “Browser Viewport Size” by CodeShack is 1280*720 which is 1/6 of the real horizontal resolution and slightly more than 1/6 of the vertical resolution; it claims that the Pixel Density is 2* and a screen resolution of 970*540, which seems to imply that the browser is only working at 1920*1080 resolution!

Netflix

When I view Netflix shows using the Netflix app running on the TV it reports “4K”, which doesn’t happen on Linux PCs (as they restrict 4K content to platforms with DRM), and in the “Device” setting it reports “Device Model” as “Hisense_SmartTV 8K FFM”, so the Netflix app knows all about 4K content and knows the text string “8K”.

YouTube

When I view a YouTube video that’s described as being 8K, I don’t get a request for paying for YouTube Premium, which is apparently what happens nowadays when you try to play actual 8K video. I turn on “Stats for Nerds” and one line has “Viewport / Frames 1920×1080*2.00” and another has “Current / Optimal Res 3840×2160@60 / 3840×2160@60” so it seems that the YouTube app is seeing the screen as 4K but choosing to only display FullHD even when I have Quality set to “2160p60 HDR”. It declares the network speed to be over 100mbit most of the time and the lowest it gets is 60mbit while 50mbit is allegedly what’s required for 8K.

I installed a few Android apps to report hardware capabilities and they reported the screen resolution to be 1920*1080.

Have I Been Ripped Off?

It looks like I might have been ripped off by this. I can’t get any app other than Netflix to display 4K content. My PC will only connect to it at 4K. Android apps (including YouTube) regard it as 1920*1080.

The “AI Upscaling” isn’t really that great; in most ways it seems at best equivalent to a 4K TV, and worse than a 4K TV that runs Android apps with an actual 4K display buffer.

Next Steps

The next things I plan to do are to continue attempts to get the TV to do what it’s claimed to be capable of, either an Android app that can display 8K content or a HDMI input of 8K content will do. Running a VNC client on the TV would be an acceptable way of getting an 8K display from a Linux PC.

I need to get a somewhat portable device that can give 8K signal output. Maybe a mini PC with a powerful GPU or maybe one of those ARM boards that’s designed to drive an 8K sign. Then I can hunt for stores that have 8K TVs on display.

It would be nice if someone made a USB device that does 8K video output – NOT a USB-C DisplayPort alternative mode that uses the video hardware on the laptop. Then I could take a laptop to any place that has an 8K display to show and connect my laptop to it.

The one thing I haven’t done yet is testing 8K MP4 files on a USB stick. That’s mainly due to a lack of content and the fact that none of the phone cameras I have access to can do 8K video. I will try displaying 8K PNG and JPEG files from a USB stick.

Most people would give up about now. But I am determined to solve this and buying another large TV isn’t out of the question.

25 November, 2025 07:09AM by etbe

November 24, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppQuantuccia 0.1.3 on CRAN: Micro Maintenance

A minor release of RcppQuantuccia arrived on CRAN moments ago. RcppQuantuccia started from the Quantuccia header-only subset / variant of QuantLib, which it brings to R. This project validated the idea of making the calendaring functionality of QuantLib available in a more compact and standalone project – which we now do with qlcal, which can be seen as a successor package to this earlier package. qlcal tracks QuantLib (releases) closely and provides approximately quarterly updates. Switching to using qlcal is generally recommended.

This release, the first in almost exactly two years, only updates internals (as detailed below). Notably it switches to ‘Authors@R’ to avoid a nag from CRAN on two platforms. The complete list changes for this release follows.

Changes in version 0.1.3 (2025-11-24)

  • A badge URL and link have been updated in README.md

  • The continuous integration script switched first to r-ci-setup and then to the r-ci action with embedded setup

  • The DESCRIPTION file now uses Authors@R

Courtesy of CRANberries, there is also a diffstat report relative to the previous release. More information is on the RcppQuantuccia page. Issues and bugreports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 November, 2025 06:22PM

November 21, 2025

hackergotchi for Daniel Kahn Gillmor

Daniel Kahn Gillmor

Transferring Signal on Android

Transferring a Signal account between two Android devices

I spent far too much time recently trying to get a Signal Private Messenger account to transfer from one device to another.

What I eventually found worked was a very finicky path to enable functioning "Wi-Fi Direct", which I go into below.

I also offer some troubleshooting and recovery-from-failure guidance.

All of this blogpost uses "original device" to refer to the Android pocket supercomputer that already has Signal installed and set up, and "new device" to mean the Android device that doesn't yet have Signal on it.

Why Transfer?

Signal Private Messenger is designed with the expectation that the user has a "primary device", which is either an iPhone or an Android pocket supercomputer.

If you have an existing Signal account, and try to change your primary device by backing up and restoring from backup, it looks to me like Signal will cause your long-term identity keys to be changed. This in turn causes your peers to see a message like "Your safety number with Alice has changed."

These warning messages are the same messages that they would get if an adversary were to take over your account. So it's a good idea to minimize them when there isn't an account takeover — false alarms train people to ignore real alarms.

You can avoid "safety number changed" warnings by using signal's "account transfer" process during setup, at least if you're transferring between two Android devices.

However, my experience was that the transfer between two Android devices was very difficult to get to happen at all. I ran into many errors trying to do this, until I finally found a path that worked.

Dealing with Failure

After each failed attempt at a transfer, my original device's Signal installation would need to be re-registered. Having set a PIN meant that i could re-register the device without needing to receive a text message or phone call.

Set a PIN before you transfer!

Also, after a failure, you need to re-link any "linked device" (i.e. any Signal Desktop or iPad installation). If any message came in during the aborted transfer, the linked device won't get a copy of that message.

Finally, after a failed transfer, i recommend completely uninstalling Signal from the new device, and starting over with a fresh install on the new device.

Permissions

My understanding is that Signal on Android uses Wi-Fi Direct to accomplish the transfer. But to use Wi-Fi Direct, Signal needs to have the right permissions.

On each device:

  • Entirely stop the Signal app
  • Go to Settings » Apps » Signal » Permissions
  • Ensure that the following permissions are all enabled whenever the app is running:
  • Location
  • Nearby Devices
  • Network

Preparing for Wi-Fi Direct

The transfer process depends on "Wi-Fi Direct", which is a bit of a disaster on its own.

I found that if i couldn't get Wi-Fi Direct to work between the two devices, then the Signal transfer was guaranteed to fail.

So, for clearer debugging, i first tried to establish a Wi-Fi Direct link on Android, without Signal being involved at all.

Setting up a Wi-Fi Direct connection directly failed, multiple times, until i found the following combination of steps, to be done on each device:

  • Turn off Bluetooth
  • Ensure Wi-Fi is enabled
  • Disconnect from any Wi-Fi network you are connected to (go to the "Internet" or "Wi-Fi" settings page, long-press on the currently connected network, and choose "Disconnect"). If your device knows how to connect to multiple local Wi-Fi networks, disconnect from each of them in turn until you are in a stable state where Wi-Fi is enabled, but no network is connected.
  • Close to the bottom of the "Internet" or "Wi-Fi" settings page, choose "Network Preferences" and then "Wi-Fi Direct"
  • if there are any entries listed under "Remembered groups", tap them and choose to "Forget this group"
  • If there are Peer devices that say "Invited", tap them and choose to "Cancel invitation"

I found that this configuration is the most likely to enable a successful Wi-Fi Direct connection, where clicking "invite" on one device would pop up an alert on the other asking to accept the connection, and result in a "Connected" state between the two devices.

Actually Transferring

Start with both devices fully powered up and physically close to one another (on the same desk should be fine).

On the new device:

  • Reboot the device, and log into the profile you want to use
  • Enable Internet access via Wi-Fi.
  • Remove any old version of Signal.
  • Install Signal, but DO NOT OPEN IT!
  • Set up the permissions for the Signal app as described above
  • Open Signal, and choose "restore or transfer" -- you still need to be connected to the network at this point.
  • The new device should display a QR code.

On the original device:

  • Reboot the device, and log into the profile that has the Signal account you're looking to transfer
  • Enable Internet access via Wi-Fi, using the same network that the new device is using.
  • Make sure the permissions for Signal are set up as described above
  • Open Signal, and tap the camera button
  • Point the camera at the new device's QR code

Now tap the "continue" choices on both devices until they both display a message that they are searching for each other. You might see the location indicator (a green dot) turn on during this process.

If you see an immediate warning of failure on either device, you probably don't have the permissions set up right.

You might see an alert (a "toast") on one of the devices that the other one is trying to connect. You should click OK on that alert.

In my experience, both devices are likely to get stuck "searching" for each other. Wait for both devices to show Signal's warning that the search has timed out.

At this point, leave Signal open on both devices, and go through all the steps described above to prepare for Wi-Fi Direct. Your Internet access will be disabled.

Now, tap "Try again" in Signal on both devices, pressing the buttons within a few seconds of each other. You should see another alert that one device is trying to connect to the other. Press OK there.

At this point, the transfer should start happening! The old device will indicate what percentage has been transferred, and the new device will indicate how many messages have been transferred.

When this is all done, re-connect to Wi-Fi on the new device.

Temporal gap for Linked Devices

Note that during this process, if new messages are arriving, they will be queuing up for you.

When you reconnect to wi-fi, the queued messages will flow to your new device. But the process of transferring automatically unlinks any linked devices. So if you want to keep your instance of Signal Desktop with as short a gap as possible, you should re-link that installation promptly after the transfer completes.

Clean-up

After all this is done successfully, you probably want to go into the Permissions settings and turn off the Location and Nearby Devices permissions for Signal on both devices.

I recommend also going into Wi-Fi Direct and removing any connected devices and forgetting any existing connections.

Conclusion

This is an abysmally clunky user experience, and I'm glad I don't have to do it often. It would have been much simpler to make a backup and restore from it, but I didn't want to freak out my contacts with a safety number change.

By contrast, when i wanted to extend a DeltaChat account across two devices, the transfer was prompt and entirely painless -- i just had to make sure the devices were on the same network, and then scanned a QR code from one to the other. And there was no temporal gap for any other devices. And i could use Delta on both devices simultaneously until i was convinced that it would work on the new device -- Delta doesn't have the concept of a primary account.

I wish Signal made it that easy! Until it's that easy, i hope the processes described here are useful to someone.

21 November, 2025 05:00AM by Daniel Kahn Gillmor

November 20, 2025

hackergotchi for Bálint Réczey

Bálint Réczey

Think you can’t interpose static binaries with LD_PRELOAD? Think again!

Well, you are right, you can’t. At least not directly. This is well documented in many projects relying on interposing binaries, like faketime.
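
As a minimal illustration of the problem, using libfaketime and the statically linked test binary from the example further below:

$ faketime '2008-12-24 08:15:42' ./test_static_clock_gettime
# prints the real current time: a statically linked binary never runs the
# dynamic linker, so the LD_PRELOAD'ed libfaketime is simply never loaded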

But what if we could write something that would take a static binary, replace at least the direct syscalls with ones going through libc and load it with the dynamic linker? We are in luck, because the excellent QEMU project has a user space emulator! It can be compiled as a dynamically linked executable, honors LD_PRELOAD and uses the host libc’s syscall – well, at least sometimes. Sometimes syscalls just bypass libc.

The missing piece was a way to make QEMU always take the interposable path and call the host libc instead of using an arch-specific assembly routine (`safe_syscall_base`) to construct the syscall and going directly to the kernel. Luckily, this turned out to be doable. A small patch later, QEMU gained a switch that forces all syscalls through libc. Suddenly, our static binaries started looking a lot more dynamic!

$ faketime '2008-12-24 08:15:42'  qemu-x86_64 ./test_static_clock_gettime
2008-12-24 08:15:42.725404654
$ file test_static_clock_gettime 
test_clock_gettime: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, ...

With this in place, Firebuild can finally wrap even those secretive statically linked tools. QEMU runs them, libc catches their syscalls, LD_PRELOAD injects libfirebuild.so, and from there the usual interposition magic happens. The result: previously uncachable build steps can now be traced, cached, and shortcut just like their dynamic friends.

There is one more problem, though: why would the static binaries deep inside the build be run by QEMU in the first place? Firebuild also intercepts the `exec()` calls, and now it rewrites them on the fly whenever the executed binary would be statically linked!

$ firebuild -d comm bash -c ./test_static
...
FIREBUILD: fd 9.1: ({ExecedProcess 161077.1, running, "bash -c ./test_static", fds=[0: {FileFD ofd={FileO
FD #0 type=FD_PIPE_IN r} cloexec=false}, 1: {FileFD ofd={FileOFD #3 type=FD_PIPE_OUT w} {Pipe #0} close_o
n_popen=false cloexec=false}, 2: {FileFD ofd={FileOFD #4 type=FD_PIPE_OUT w} {Pipe #1} close_on_popen=fal
se cloexec=false}, 3: {FileFD NULL} /* times 2 */]})
{
    "[FBBCOMM_TAG]": "exec",
    "file": "test_static",
    "// fd": null,
    "// dirfd": null,
    "arg": [
        "./test_static"
    ],
    "env": [
        "SHELL=/bin/bash",
 ...
        "FB_SOCKET=/tmp/firebuild.cpMn75/socket",
        "_=./test_static"
    ],
    "with_p": false,
    "// path": null,
    "utime_u": 0,
    "stime_u": 1017
}
FIREBUILD: -> proc_ic_msg()  (message_processor.cc:782)  proc={ExecedProcess 161077.1, running, "bash -c 
./test_static", fds=[0: {FileFD ofd={FileOFD #0 type=FD_PIPE_IN r} cloexec=false}, 1: {FileFD ofd={FileOF
D #3 type=FD_PIPE_OUT w} {Pipe #0} close_on_popen=false cloexec=false}, 2: {FileFD ofd={FileOFD #4 type=F
D_PIPE_OUT w} {Pipe #1} close_on_popen=false cloexec=false}, 3: {FileFD NULL} /* times 2 */]}, fd_conn=9.
1, tag=exec, ack_num=0
FIREBUILD:   -> send_fbb()  (utils.cc:292)  conn=9.1, ack_num=0 fd_count=0
Sending message with ancillary fds []:
{
    "[FBBCOMM_TAG]": "rewritten_args",
    "arg": [
        "/usr/bin/qemu-user-interposable",
        "-libc-syscalls",
        "./test_static"
    ],
    "path": "/usr/bin/qemu-user-interposable"
}
...
FIREBUILD: -> accept_ic_conn()  (firebuild.cc:139)  listener=6
...
FIREBUILD: fd 9.2: ({Process NULL})
{
    "[FBBCOMM_TAG]": "scproc_query",
    "pid": 161077,
    "ppid": 161073,
    "cwd": "/home/rbalint/projects/firebuild/test",
    "arg": [
        "/usr/bin/qemu-user-interposable",
        "-libc-syscalls",
        "./test_static"
    ],
    "env_var": [
        "CCACHE_DISABLE=1",
...
        "SHELL=/bin/bash",
        "SHLVL=0",
        "_=./test_static"
    ],
    "umask": "0002",
    "jobserver_fds": [],
    "// jobserver_fifo": null,
    "executable": "/usr/bin/qemu-user-interposable",
    "// executed_path": null,
    "// original_executed_path": null,
    "libs": [
        "/lib/x86_64-linux-gnu/libatomic.so.1",
        "/lib/x86_64-linux-gnu/libc.so.6",
        "/lib/x86_64-linux-gnu/libglib-2.0.so.0",
        "/lib/x86_64-linux-gnu/libm.so.6",
        "/lib/x86_64-linux-gnu/libpcre2-8.so.0",
        "/lib64/ld-linux-x86-64.so.2"
    ],
    "version": "0.8.5.1"
}
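
How can an exec’d binary be recognized as statically linked in the first place? Conceptually it is simple: a dynamically linked ELF executable carries a PT_INTERP program header naming its interpreter, while a static one does not. A stand-alone sketch of such a check (illustrative only, not Firebuild’s actual code, and assuming 64-bit ELF files):

/* is_static.c - report whether an ELF executable is statically linked,
 * by looking for a PT_INTERP program header (64-bit ELF only).
 * Build: gcc -o is_static is_static.c
 * Use:   ./is_static ./test_static
 */
#include <elf.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <elf-executable>\n", argv[0]);
        return 2;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 2;
    }
    Elf64_Ehdr eh;
    if (fread(&eh, sizeof eh, 1, f) != 1 ||
        memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0) {
        fprintf(stderr, "%s: not an ELF file\n", argv[1]);
        fclose(f);
        return 2;
    }
    int has_interp = 0;
    for (int i = 0; i < eh.e_phnum; i++) {
        Elf64_Phdr ph;
        if (fseek(f, eh.e_phoff + (long)i * eh.e_phentsize, SEEK_SET) != 0 ||
            fread(&ph, sizeof ph, 1, f) != 1)
            break;
        if (ph.p_type == PT_INTERP) {
            has_interp = 1;
            break;
        }
    }
    fclose(f);
    printf("%s: %s linked\n", argv[1], has_interp ? "dynamically" : "statically");
    return has_interp ? 1 : 0;
}

A wrapper can use the result of such a check to decide whether to execute the binary directly or to rewrite the argument vector so the binary runs under qemu-user-interposable, as in the rewritten_args message above.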

The QEMU patch has been forwarded to qemu-devel. If it lands, anyone using QEMU user-mode emulation could benefit, not just Firebuild.

For Firebuild users, though, the impact is immediate. Toolchains that mix dynamic and static helpers? Cross-builds that pull in odd little statically linked utilities? Previously “invisible” steps in your builds? All now fair game for caching.

Firebuild 0.8.5 ships this new capability out of the box. Just update, make sure you’re using a patched QEMU, and enjoy the feeling of watching even static binaries fall neatly into place in your cached build graph. Ubuntu users can get the prebuilt patched QEMU packages from the Firebuild PPA already.

Static binaries, welcome to the party!

20 November, 2025 08:56PM by Réczey Bálint

November 19, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

While it is cold-ish season in the Northern Hemisphere...

Last week, our university held a «Mega Vaccination Center». Things cannot be small or regular with my university, ever! According to the official information, ≈31,000 people were given a total of ≈74,000 vaccine doses against influenza, COVID-19, pneumococcal disease and measles (with the specific vaccines for each person selected according to an age profile).

I was a tiny blip in said numbers. One person, three shots. Took me three hours, but I am quite happy to have been among the huge crowd.

Long, long line

(↑ photo credit: La Jornada, 2025.11.14)

Really vaccinated!

And why am I bringing this up? Because I have long been involved in organizing DebConf, the best conference ever, naturally devoted to improving Debian GNU/Linux. And last year, our COVID reaction procedures ended up hurting people we care about. We, as organizers, are taking seriously the task of shaping a humane COVID handling policy that is, at the same time, responsible and respectful towards people who are (reasonably!) afraid of catching the infection. No, COVID did not disappear in 2022, and its effects are not something we can turn a blind eye to.

Next year, DebConf will take place in Santa Fe, Argentina, in July. This means it will be a winter DebConf. And while you can catch COVID (or influenza, or just a bad cold) at any time of year, the odds are a bit higher in winter.

I know not every country still administers free COVID or influenza vaccines to anybody who requests them. And I know that any protection I might have got now will be considerably weaker by July. But I feel it necessary to ask everyone who can get a shot to do so. Most Northern Hemisphere countries will have a vaccination campaign (or at least higher vaccine availability) before winter.

If you plan to attend DebConf (hell… If you plan to attend any massive gathering of people travelling from all over the world to sit at a crowded auditorium) during the next year, please… Act responsibly. For yourself and for those surrounding you. Get vaccinated. It won’t absolutely save you from catching it, but it will reduce the probability. And if you do catch it, you will probably have a much milder version. And thus, you will spread it less during the first days until (and if!) you start developing symptoms.

19 November, 2025 03:59AM

Michael Ablassmeier

building SLES 16 vagrant/libvirt images using guestfs tools

SLES 16 has been released. In the past, SUSE offered ready-built Vagrant images. Unfortunately that’s no longer the case; even with more recent SLES 15 releases the official images were gone.

In the past, it was possible to clone existing projects on the openSUSE Build Service to build the images yourself, but i couldn’t find any templates for SLES 16.

Naturally, there are several ways to build images, and the tooling involves kiwi-ng, the openSUSE Build Service, packer recipes, etc. (existing packer recipes won’t work anymore, as YaST has been replaced by a new installer called Agama). All pretty complicated…

So my current take on creating a vagrant image for SLE16 has been the following:

  • Spin up a QEMU virtual machine
  • Manually install the system, all defaults except for one special setting: in the Network connection details, choose “Edit Binding settings” and set the interface to not bind to a particular MAC address or interface. This will make the system pick whatever network device naming scheme is applied during boot.
  • After the installation has finished, shut down the VM.

Two guestfs-tools can now be used to modify the created qcow2 image:

  • run virt-sysprep on the image to wipe settings that might cause trouble:
 virt-sysprep -a sle16.qcow2
  • create a simple shell script that sets up all vagrant-related settings:
#!/bin/bash
useradd vagrant
mkdir -p /home/vagrant/.ssh/
chmod 0700 /home/vagrant/.ssh/
echo "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIF
o9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9W
hQ== vagrant insecure public key" > /home/vagrant/.ssh/authorized_keys
chmod 0600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant:vagrant /home/vagrant/
# apply recommended ssh settings for vagrant boxes
SSHD_CONFIG=/etc/ssh/sshd_config.d/99-vagrant.conf
if [[ ! -d "$(dirname ${SSHD_CONFIG})" ]]; then
    SSHD_CONFIG=/etc/ssh/sshd_config
    # prepend the settings, so that they take precedence
    echo -e "UseDNS no\nGSSAPIAuthentication no\n$(cat ${SSHD_CONFIG})" > ${SSHD_CONFIG}
else
    echo -e "UseDNS no\nGSSAPIAuthentication no" > ${SSHD_CONFIG}
fi
SUDOERS_LINE="vagrant ALL=(ALL) NOPASSWD: ALL"
if [ -d /etc/sudoers.d ]; then
    echo "$SUDOERS_LINE" >| /etc/sudoers.d/vagrant
    visudo -cf /etc/sudoers.d/vagrant
    chmod 0440 /etc/sudoers.d/vagrant
else
    echo "$SUDOERS_LINE" >> /etc/sudoers
    visudo -cf /etc/sudoers
fi
 
mkdir -p /vagrant
chown -R vagrant:vagrant /vagrant
systemctl enable sshd
  • use virt-customize to upload the script into the qcow image:
 virt-customize -a sle16.qcow2 --upload vagrant.sh:/tmp/vagrant.sh
  • execute the script via:
 virt-customize -a sle16.qcow2 --run-command "/tmp/vagrant.sh"

After this, use create_box.sh from the vagrant-libvirt project to create a box image:

https://github.com/vagrant-libvirt/vagrant-libvirt/blob/main/tools/create_box.sh

and add the image to your environment:

 create_box.sh sle16.qcow2 sle16.box
 vagrant box add --name my/sles16 sle16.box

The resulting box works well within my CI environment, as far as i can tell.

19 November, 2025 12:00AM

November 18, 2025

Sahil Dhiman

Anchors in Life

Just like a ship needs an anchor to stabilize and hold it to port, humans too, I feel, have and require anchors to hold them in life. It could be an emotional anchor, a physical anchor, an anchor that stimulates your curiosity, a family member, a friend or a partner or a spiritual being.

An anchor holds you and helps you stabilize in stormy weather. An anchor can keep you going or stop you from going. An anchor orients you, helps you formulate your values and beliefs.

An anchor could be someone or something or oneself (thanks Saswata for the thought). Writing here is one of my anchors; what’s your anchor?

18 November, 2025 11:33AM

hackergotchi for Daniel Kahn Gillmor

Daniel Kahn Gillmor

App Store Oligopoly

A Call for Public Discussion about App Store Oligopoly

Over on the ACLU's Free Future blog, I just published an article titled Your Smartphone, Their Rules: How App Stores Enable Corporate-Government Censorship.

Free Software users and developers likely already understand the reasons why it matters who controls what tools you have access to. Hopefully this post can help clarify, even to people typically used to common non-free tooling, that there are real world risks to consolidated, proprietary control over computing and communication tools.

Big shout out to the projects out there doing good work in the "pocket supercomputer" space, providing an escape valve for many users and a counter-example to centralized corporate control, including F-Droid, GrapheneOS, and phosh.

The screws are tightening on user freedom, in the very place where most computing is happening today. The smartphone is already far more similar to an ankle monitor than it should be.

Please, publish your own suggestions on creative forms of mutual technical liberation. These are communications tools, so no person can fix the problems alone.

I would love to see a flourishing of non-Android, non-iOS systems in people's pockets, but i also know that, with the market the way it is, that is a long haul. Until that happens, we should also try to keep Android open; check out keepandroidopen.org for more suggestions.

18 November, 2025 05:00AM by Daniel Kahn Gillmor

November 17, 2025

Rodrigo Siqueira

XDC 2025

It has been a long time since I published any update in this space. Since this was a year of colossal changes for me, maybe it is also time for me to make something different with this blog and publish something just for a change — why not start talking about XDC 2025?

This year, I attended XDC 2025 in Vienna as an Igalia developer. I was thrilled to see some faces from people I worked with in the past and people I’m working with now. I had a chance to hang out with some folks I worked with at AMD (Harry, Alex, Leo, Christian, Shashank, and Pierre), many Igalians (Žan, Job, Ricardo, Paulo, Tvrtko, and many others), and finally some developers from Valve. In particular, I met Tímur in person for the first time, even though we have been talking for months about GPU recovery. Speaking of GPU recovery, we held a workshop on this topic together.

The workshop was packed with developers from different companies, which was nice because it brought different angles to the topic. We began our discussion by focusing on job resubmission. Christian started by sharing a brief history of how the AMDGPU driver first handled resubmission and the issues associated with it. After learning from that experience, amdgpu ended up adopting the following approach:

  1. When a job causes a hang, call the driver-specific handler.
  2. Stop the scheduler.
  3. Copy all jobs from the ring buffer, minus the job that caused the issue, to a temporary ring.
  4. Reset the ring buffer.
  5. Copy back the other jobs to the ring buffer.
  6. Resume the scheduler.

Below, you can see one crucial series associated with the amdgpu recovery implementation:

The next topic was a discussion around the replacement of drm_sched_resubmit_jobs(), since this function has been deprecated. Only a few drivers still use it, and they need a replacement. Some ideas were floated about extracting parts of the driver-specific implementations into a generic function. The next day, Philipp Stanner continued this topic in his workshop, DRM GPU Scheduler.

Another crucial topic discussed was improving GPU reset debuggability to narrow down which operations cause a hang (keep in mind that GPU recovery is a medicine, not a cure for the problem). Intel developers shared their strategy of obtaining hints from userspace, which helped them provide a better set of information to append to the devcoredump. AMD could adopt this alongside dumping the IB data into the devcoredump (I am already investigating this).

Finally, we discussed strategies to avoid regressions that reintroduce hang issues. In summary, we have two lines of defense:

  • IGT: At the IGT level, we can have more tests that insert malicious instructions into the ring buffer, forcing the driver into an invalid state and triggering the recovery process.
  • HangTest suite: a tool that simulates some potential hangs using Vulkan. Some tests are already available in this suite, but we should explore more creative combinations to try to trigger hangs.
Lightning talk

This year, as always, XDC was super cool, packed with many engaging presentations, which I highly recommend everyone check out. If you are interested, check the schedule and the presentation recordings available on the X.Org Foundation YouTube page. Anyway, I hope this blog post marks the inauguration of a new era for this site, where I will start posting more content, ranging from updates to tutorials. See you soon.

17 November, 2025 12:00AM

Valhalla's Things

Historically Inaccurate Hemd

Posted on November 17, 2025
Tags: madeof:atoms, craft:sewing

A woman wearing a white shirt with a tall, thick collar with lines of blue embroidery, closed in the front with small buttons; the sleeves are wide and billowing, gathered at the cuffs with more blue embroidery. She's keeping her hands at the waist so that the shirt, which reaches to mid thigh, doesn't look like a shapeless tent from the neck down.

After cartridge pleating and honeycombing, I was still somewhat in the mood for that kind of fabric manipulation, and directing my internet searches in that vague direction, and I stumbled on this: https://katafalk.wordpress.com/2012/06/26/patternmaking-for-the-kampfrau-hemd-chemise/

Now, do I want to ever make myself a 16th century German costume, especially a kampfrau one? No! I’m from lake Como! Those are the enemies who come down the Alps pillaging and bringing the Black Death with them!

Although I have to admit that at times during my day job I have found the idea of leaving everything to go march with the Jägermonsters attractive. You know, the exciting prospect of long days of marching spent knitting sturdy socks, punctuated by the excitement of settling down in camp and having a chance to do lots of laundry. Or something. Sometimes being a programmer will make you think odd things.

Anyway, going back to the topic, no, I didn’t need an historically accurate hemd. But I did need a couple more shirts for daily wear, I did want to try my hand at smocking, and this looked nice, and I was intrigued by the way the shaping of the neck and shoulder worked, and wondered how comfortable it would be.

And so, it had to be done.

I didn’t have any suitable linen, but I did have quite a bit of cotton voile, and since I wasn’t aiming at historical accuracy it looked like a good option for something where a lot of fabric had to go in a small space.

At first I considered making it with a bit less fabric than the one in the blog, but since the voile was quite thin, I kept the original measurements as is, only adapting the sleeve / side seams to my size.

The same woman, from the back. This time the arms are out, so that the big sleeves show better, but the body does look like a tent.

With the pieces being rectangles the width of the fabric, I was able to have at least one side of selvedge on all seams, and took advantage of it by finishing the seams by simply folding the allowances to one side so that the selvedge was on top, and hemstitching them down as I would have done with a folded edge when felling.

Also, at first I wanted to make the smocking in white on white, but then I thought about a few hanks of electric blue floss I had in my stash, and decided to just go with it.

The initial seams were quickly made, then I started the smocking at the neck, and at that time the project went on hold while I got ready to go to DebConf. Then I came back and took some time to get back into a sewing mood, but finally the smocking on the neck was finished, and I could go on with the main sewing, which, as I expected, went decently fast for a handsewing project.

detail of the smocking in progress on the collar, showing the lines of basting thread I used as a reference, and the two in progress zig-zag lines being worked from each side.

While doing the diagonal smocking on the collar I counted the stitches to make each side the same length, which didn’t completely work because the gathers weren’t that regular to start with, and started each line from the two front openings going towards the center back, leaving a triangle of a different size right in the middle. I think overall it worked well enough.

Then there were a few more interruptions, but at last it was ready! just as the weather turned cold-ish and puffy shirts were no longer in season, but it will be there for me next spring.

I did manage to wear it a few times and I have to say that the neck shaping is quite comfortable indeed: it doesn’t pull in odd ways like the classical historically accurate pirate shirt sometimes does, and the heavy gathering at the neck makes it feel padded and soft.

The same shirt belted (which looks nicer); one hand is held out to show that the cuff is a bit too wide and falls down over the hand.

I’m not as happy with the cuffs: the way I did them with just honeycombing means that they don’t need a closure, and after washing and a bit of steaming they lie nicely, but then they tend to relax in a wider shape. The next time I think I’ll leave a slit in the sleeves, possibly make a different type of smocking (depending on whether I have enough fabric) and then line them like the neck so that they are stable.

Because, yes, I think that there will be another time: I have a few more projects before that, and I want to spend maybe another year working from my stash, but then I think I’ll buy some soft linen and make at least another one, maybe with white-on-white smocking so that it will be easier to match with different garments.

17 November, 2025 12:00AM

November 16, 2025

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Game slowrunning

In 2013, I finished Zelda II: The Adventure of Link (on emulator), which I'd first played the summers of 1992 and 1993 (or thereabouts). At ~20 years between first start and first finish, it's a kind of weird opposite of speedrunning, and a personal best for me.

But this weekend, I trounced that record; in 1990 (I think!), we got a 512 kB RAM expansion for the Amiga 500 for the first time, which allowed us to play our warezed copy of Pool of Radiance without understanding much of the story or really reading that much English. And a couple of weeks ago, I realized that I had bought the game on GOG.com in 2018 and not done much about it… and went to finish it.

Pool of Radiance, fighting Thyranthraxus

Due to poor planning on my part, this ended up being a bit of a challenge run, with no stat modification, only five people in the party, no excessive rerolling (only 2–3 for each), no multiclassing, no glitches, no save-states (after finding out they help very little :-) ), very limited NPCs (only story NPCs plus a couple of hirelings immediately killed for items, as opposed to the Amiga runs where we basically had only one PC and the rest top-grade NPCs!) and no Gold Box Companion.

However: Extensive guide use (the Internet is great!), and savescumming. Oh my, so much savescumming.

So that's 35 years from first start to first finish. We'll see when I get to Police Quest I…

16 November, 2025 11:46AM

Russ Allbery

Cumulative haul

I haven't posted a book haul in forever, so lots of stuff stacked up, including a new translation of Bambi that I really should get around to reading.

Nicholas & Olivia Atwater — A Matter of Execution (sff)
Nicholas & Olivia Atwater — Echoes of the Imperium (sff)
Travis Baldree — Brigands & Breadknives (sff)
Elizabeth Bear — The Folded Sky (sff)
Melissa Caruso — The Last Hour Between Worlds (sff)
Melissa Caruso — The Last Soul Among Wolves (sff)
Haley Cass — Forever and a Day (romance)
C.L. Clark — Ambessa: Chosen of the Wolf (sff)
C.L. Clark — Fate's Bane (sff)
C.L. Clark — The Sovereign (sff)
August Clarke — Metal from Heaven (sff)
Erin Elkin — A Little Vice (sff)
Audrey Faye — Alpha (sff)
Emanuele Galletto, et al. — Fabula Ultima: Core Rulebook (rpg)
Emanuele Galletto, et al. — Fabula Ultima: Atlas High Fantasy (rpg)
Emanuele Galletto, et al. — Fabula Ultima: Atlas Techno Fantasy (rpg)
Alix E. Harrow — The Everlasting (sff)
Alix E. Harrow — Starling House (sff)
Antonia Hodgson — The Raven Scholar (sff)
Bel Kaufman — Up the Down Staircase (mainstream)
Guy Gavriel Kay — All the Seas of the World (sff)
N.K. Jemisin & Jamal Campbell — Far Sector (graphic novel)
Mary Robinette Kowal — The Martian Conspiracy (sff)
Matthew Kressel — Space Trucker Jess (sff)
Mark Lawrence — The Book That Held Her Heart (sff)
Yoon Ha Lee — Moonstorm (sff)
Michael Lewis (ed.) — Who Is Government? (non-fiction)
Aidan Moher — Fight, Magic, Items (non-fiction)
Saleha Mohsin — Paper Soldiers (non-fiction)
Ada Palmer — Inventing the Renaissance (non-fiction)
Suzanne Palmer — Driving the Deep (sff)
Suzanne Palmer — The Scavenger Door (sff)
Suzanne Palmer — Ghostdrift (sff)
Terry Pratchett — Where's My Cow (graphic novel)
Felix Salten & Jack Zipes (trans.) — The Original Bambi (classic)
L.M. Sagas — Cascade Failure (sff)
Jenny Schwartz — The House That Walked Between Worlds (sff)
Jenny Schwartz — House in Hiding (sff)
Jenny Schwartz — The House That Fought (sff)
N.D. Stevenson — Scarlet Morning (sff)
Rory Stewart — Politics on the Edge (non-fiction)
Emily Tesh — The Incandescent (sff)
Brian K. Vaughan & Fiona Staples — Saga #1 (graphic novel)
Scott Warren — The Dragon's Banker (sff)
Sarah Wynn-Williams — Careless People (non-fiction)

As usual, I have already read and reviewed a whole bunch of these. More than I had expected, actually, given that I've not had a great reading year this year so far.

I am, finally, almost caught up with reviews, with just one book read and not yet reviewed. And hopefully I'll have lots of time to read for the last month and a half of the year.

16 November, 2025 06:32AM

hackergotchi for Vasudev Kamath

Vasudev Kamath

Moving blog from self hosting to Github Pages

I haven't been blogging as much as I used to. For a while, I've been hosting my blog on a single-core DigitalOcean droplet, which cost me around $7 per month. It also hosted my mail server. Most of the time, the droplet was idle, and I wasn't actively using my personal email much. Since it was self-hosted, I didn't have confidence that it would always be up, so I relied on Gmail as my personal email for everything—from banking to social media.

Now, I feel this cost is just a waste of money, even though it's not huge. So, I decided to move my blog back to GitHub Pages, now published using GitHub Workflows. I've stopped my mail server for the time being, and it won't be reachable for a while. For any personal queries or comments, please reach me on my Personal Gmail.

I'm not sure how active I'll be with blogging again, but I'll try my best to return to my old habit of writing at least a post every week or two. Not because people will read it, but because it gives me the option to explore new things, experiment, and take notes along the way.

16 November, 2025 05:02AM by copyninja

November 15, 2025

Andrew Cater

2025-11-15 17:16 UTC Debian media testing for point release 13.2 of Trixie

*Busy* day in Cambridge. A roomful of people, large numbers of laptops and a lot of parallel installations.

Joined here by Emyr, Chris, Helen and Simon, with Isy doing speech installs from her university accommodation. Two Andys always makes it interesting. Steve providing breakfast, as ever.

We're almost there: the last test install is being repeated to flush out a possible bug. Other release processes are being done in the background.

Thanks again to Steve for hosting and all the hard work that goes into this from everybody.



15 November, 2025 08:39PM by Andrew Cater (noreply@blogger.com)

hackergotchi for Jonathan Dowland

Jonathan Dowland

Zoom R8

When I started looking at synths again, I had a feeling I would want to record from them, and ideally not with a computer. To that end, I also bought a second-hand standalone multitrack recorder, the Zoom R8.

Zoom R8

It's a little desktop console with two inputs, a built-in mic, and 8 sliders for adjusting the playback of 8 (ostensibly) independent tracks. It has a USB port to interface with a computer, and features some onboard effects (delay, reverb, that kind-of thing).

Look a bit closer, and the USB port is mini-USB, which gives away its age (and I'll never get rid of mini-USB cables, will I?). The two inputs are mono, so to capture stereo output from the minilogue-xd I need to tie up both inputs. Also, the 8 tracks are mono, so it's more like a stereo 4-track.

The effects (what little I've played with them) are really pretty cool, and it's great to apply them to a live signal. We had some fun running them over a bass guitar. However, you can only use them at a 44.1 kHz sample rate; if you forgo the effects, the device supports 48 kHz.

I've ended up using it as my main USB interface on my computer; it's great for that. The internal mic ended up being too weak to use for video calls. As a USB interface, my computer can receive the signal from the synth (and I've wasted more time than I care to admit trying to wrestle the Linux audio stack into doing something with that).

It can also run on batteries, which opens up the possibility of recording piano with my daughter, or field recording or suchlike.

Writing this up serves as a reminder to me of why I bought it, and I now intend to spend a little more time using it that way and stop wasting time fighting ALSA/PulseAudio/PipeWire/PortAudio/etc.

15 November, 2025 10:14AM

November 14, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

404 not found

Found this graffiti on the wall behind my house today:

404 not found!

14 November, 2025 07:27PM