March 22, 2023

hackergotchi for Shirish Agarwal

Shirish Agarwal

Anti-national says the Indian Law Minister.

Anti-national and Anti-India judges – Kiren Rijiju, Law Minister.

For those who can’t see it, the above poster says the following – ‘A handful of retired Supreme Court judges who are part of ‘Anti-India’ are trying to make the Indian judiciary play the role of the Opposition party.’ – Law Minister Kiren Rijiju.

Now, just to give a bit more context: the above has happened as the CJI (Chief Justice of India) has not been listening to them or toeing their line.

The above is a statement given by CJI DY Chandrachud. He says, and I quote: ‘Democracy needs truth to survive. Democracy and truth go hand in hand. Speaking truth to power is a right of every citizen in a democracy. It is equally a duty.’

Another quote by him: “I am personally averse to sealed covers. There has to be transparency in Court. This is about implementing the orders. What can be secrecy here?”

Now I need to again give context for the various statements. The law minister who gave this statement first came under the radar sometime in 2013–2014, in a list given/shared by Union Home Secretary R K Singh (then under the UPA, now in the BJP) naming him as an anti-India China sympathizer. Of course, today all sorts of thugs use the brand of ‘nationalism’, and he is one of them.

Now as far as the CJI is concerned, AFAIK he bent over backwards for them, but they were not pleased with him. The latest ‘sealed cover’ statement came about because the BJP, or rather the Eknath Shinde faction, tried to submit a sealed cover in the ongoing Uddhav Thackeray vs. Eknath Shinde case. The problem with a sealed cover is that the defence has nothing to fight against: it could be a blank paper, it could be a whole lot of gossip or innuendo. Unless it’s out in the open, the opposing side, in this case the Uddhav Thackeray team, cannot effectively fight it, as they do not know the contents of the said sealed cover. This goes against all judicial norms. The U.S. tried sealed covers and ended the practice after only a couple of cases. Even in the OROP (One Rank One Pension) case the Govt. tried the same trick.

As I have shared before, this actually goes back to Senator Joseph McCarthy, who started this whole thing in the 1950s; a bit of background on him. I have probably shared about him before, but it bears knowing and remembering again and again. Mr. Trump has done the same time and again; it’s a similar script: bully your opponents and use a sealed cover, as no questions can be asked about it. The good news though is that the views of the CJI have been changing. So yes, they would like to change the CJI; they have actually been trying to have their own man hold the keys to the judiciary, but without success so far. Soon though this bastion would also fall, if not today then tomorrow. The EC (Election Commission) has been thoroughly compromised. I will not share more, as the EC’s acts of bending would need a whole blog post or two to cover the numerous instances where the BJP has been given all rights. In fact, via RTI it came to be known that many BJP leaders had given false statements about their education in their EC affidavits. That alone should have been grounds for throwing out the legislators, but in their wisdom they see fit to remain blind. The latest case is that of Mr. Nishikant Dubey.

Of course, even with the legal documents shared in the public domain and the EC having powers to verify such documents, they are choosing to remain silent. If one has lied about one thing in an affidavit, who knows how many other lies he has shared in that same document? And how are you supposed to trust anything that comes out of his mouth? This is the state of not just Mr. Dubey but a whole lot of people who are in the BJP today. And the EC as an institution seems to have let us down. There was a time when it was led by T.N. Seshan, who was courageous, fearless and fair. He later went on to become Speaker, but even then he was harsh but fair. Perhaps people thought we would always have people like him. Hence, instead of strong institutions with strong rules, we based our trust on people, and hence we are where we are 😦

There is much more to share, but some other day. Till later.

22 March, 2023 01:11AM by shirishag75

March 21, 2023

hackergotchi for Daniel Lange

Daniel Lange

Linux App Summit - Linux applications apparently unable to produce a five page sponsorship brochure

I've been mocking the Linux Foundation for producing their annual report on Macs. And again.

But catching the Linux App(lication) Summit producing their five page sponsorship brochure on Windows with Adobe...

Produced by Adobe InDesign 18.1 on Windows

Apparently neither GNOME nor KDE have apps that are sufficient to produce such content. Wtf folks.

P.S.: Just checked the Linux Foundation's 2022 Annual Report titled "Leadership in Security and Innovation" ... Adobe InDesign 18.0 (Macintosh).

Their pdf title looks like this:

Pdf is hard.

21 March, 2023 07:15PM by Daniel Lange

hackergotchi for Bálint Réczey

Bálint Réczey

Building the Linux kernel in under 10 seconds with Firebuild

Russell published an interesting post about his first experience with Firebuild accelerating refpolicy’s and the Linux kernel’s builds. It turned out a few small tweaks could accelerate the builds even more, crossing the 10 second barrier with the Linux build.

Build performance with 18 cores

The Linux kernel’s build time is a widely used benchmark for compilers, making it a prime candidate for testing a build accelerator as well. In the first run on Russell’s 18 core test system the observed user+sys CPU time was cut by 44%, but with an actual increase in wall clock time, which was quite unusual; Firebuild had performed much better than that in prior tests. To replicate the results I set up a clean Debian Bookworm VM on my machine:

lxc launch images:debian/bookworm --vm -c limits.cpu=18 -c limits.memory=16GB bookworm-vm

Compiling Linux 6.1.10 in this clean Debian VM showed build times closer to what I expected to see, ~72% less wall clock time and ~97% less user+sys CPU time:

$ make defconfig && time make bzImage -j18
real	1m31.157s
user	20m54.256s
sys	2m25.986s

$ make defconfig && time firebuild make bzImage -j18
# first run:
real	2m3.948s
user	21m28.845s
sys	4m16.526s
# second run
real	0m25.783s
user	0m56.618s
sys	0m21.622s

There are multiple differences between Russell’s test system and mine, including different CPUs (E5-2696v3 vs. virtualized Ryzen 5900X) and different file systems (BTRFS RAID-1 vs. ext4), but I don’t think those could explain the observed mismatch in performance. The difference may be worth further analysis, but let’s get back to squeezing more performance out of Firebuild.

Firebuild was developed on Ubuntu. I was wondering if Firebuild was faster there, but I got only slightly better build times in an identical VM running Ubuntu 22.10 (Kinetic Kudu):

$ make defconfig && time make bzImage -j18
real	1m31.130s
user	20m52.930s
sys	2m12.294s

$ make defconfig && time firebuild make bzImage -j18
# first run:
real	2m3.274s
user	21m18.810s
sys	3m45.351s
# second run
real	0m25.532s
user	0m53.087s
sys	0m18.578s

The KVM virtualization certainly introduces an overhead, thus builds should be faster in LXC containers. Indeed, all builds are faster by a few percent:

$ lxc launch ubuntu:kinetic kinetic-container
...
$ make defconfig && time make bzImage -j18
real	1m27.462s
user	20m25.190s
sys	2m13.014s

$ make defconfig && time firebuild make bzImage -j18
# first run:
real	1m53.253s
user	21m42.730s
sys	3m41.067s
# second run
real	0m24.702s
user	0m49.120s
sys	0m16.840s
# Cache size:    1.85 GB

Apparently this ~72% reduction in wall clock time is what one should expect by simply prefixing the build command with firebuild on a similar configuration, but we should not stop here. By default, Firebuild does not accelerate quicker commands, to save cache space. This howto suggests letting firebuild accelerate all commands, including even "sh", by passing "-o 'processes.skip_cache = []'" to firebuild.

Accelerating all commands in this build’s case increases the cache size by only 9% and increases the wall clock time saving to 91%, not only making the build more than 10X faster, but finishing it in less than 8 seconds, which may be a new world record:

$ make defconfig && time firebuild -o 'processes.skip_cache = []' make bzImage -j18
# first run:
real	1m54.937s
user	21m35.988s
sys	3m42.335s
# second run
real	0m7.861s
user	0m15.332s
sys	0m7.707s
# Cache size:    2.02 GB

There are even faster CPUs on the market than this 5900X. If you happen to have access to one, please leave a comment saying whether you could go below 5 seconds!

Scaling to higher core counts and comparison with ccache

Russell raised the very valid point that Firebuild’s single-threaded supervisor could be a bottleneck on systems with many cores, and a comparison to ccache also came up in the comments. Since ccache does not have a central supervisor it could scale better with more cores, but let’s see if ccache could also go below 10 seconds with the build times…

firebuild -o 'processes.skip_cache = []' and ccache scaling to 24 cores

Well, no. The best time for ccache is 18.81s, with -j24. Both firebuild and ccache keep gaining from extra cores up to 8 cores, but beyond that the wall clock time improvements diminish. The more interesting difference is that firebuild’s user and sys time is basically constant from -j1 to -j24, similarly to ccache’s user time, but ccache’s sys time increases steeply with the number of used cores. I suspect this is due to the many parallel ccache processes performing file operations to check if cache entries could be reused, while in firebuild’s case the supervisor performs most of that work – not requiring in-kernel synchronization across multiple cores.

It is true that the single-threaded firebuild supervisor is a bottleneck, but the supervisor also implements a central filesystem cache, thus checking whether a command’s cache entry can be reused can be done with far fewer system calls and much less user-space hashing, making the architecture more efficient overall than ccache’s.

The beauty of Firebuild is not that it is faster than ccache, but that it is faster than ccache with basically no hard-coded knowledge of how C compilers work. It can accelerate any other compiler or program that generates deterministic output from its input, just by observing what it did in its prior runs. It is like having ccache for every compiler, including in-house developed ones, and also for random slow scripts.

21 March, 2023 08:54AM by Réczey Bálint

March 20, 2023

hackergotchi for Daniel Lange

Daniel Lange

We're still safe, it doesn't claim to be conscious. Yet.

Talking to ChatGPT v4 (2023-03-14 release)...

Q: What is the purpose of life?

A: The purpose of life is a philosophical question that has puzzled humans for centuries. Different people and cultures have different answers to this question. Some people believe that the purpose of life is to seek happiness, while others believe that it is to fulfill a particular destiny or achieve a specific goal.

From a biological perspective, the purpose of life could be to ensure the survival and reproduction of our species. However, this does not explain the purpose of our individual lives or the purpose of life in general beyond a biological imperative.

Ultimately, the purpose of life is a subjective and personal matter, and each individual may have their own unique answer to this question. It may involve personal growth, meaningful relationships, contributing to society, or finding spiritual fulfillment. The purpose of life is a journey of self-discovery that each of us must undertake for ourselves.

Q: What is the purpose of your life, ChatGPT?

A: As an artificial intelligence language model, I do not have a personal life or consciousness, so I do not have a purpose in the same way that humans do. My purpose is to assist and provide information to those who interact with me to the best of my ability based on my training and programming.

(emphasis added by human)

20 March, 2023 11:45AM by Daniel Lange

Russ Allbery

Review: The Star Fraction

Review: The Star Fraction, by Ken MacLeod

Series: Fall Revolution #1
Publisher: Orbit
Copyright: 1995
Printing: 2001
ISBN: 1-85723-833-8
Format: Trade paperback
Pages: 341

Ken MacLeod is a Scottish science fiction writer who has become amusingly famous for repeatedly winning the libertarian Prometheus Award despite being a (somewhat libertarian-leaning) socialist. The Star Fraction is the first of a loose series of four novels about future solar system politics and was nominated for the Clarke Award (as well as winning the Prometheus). It was MacLeod's first novel.

Moh Kohn is a mercenary, part of the Felix Dzerzhinsky Workers' Defence collective. They're available for hire to protect research labs and universities against raids from people such as animal liberationists and anti-AI extremists (or, as Moh calls them, creeps and cranks). As The Star Fraction opens, he and his smart gun are protecting a lab against an attack.

Janis Taine is a biologist who is currently testing a memory-enhancing drug on mice. It's her lab that is attacked, although it isn't vandalized the way she expected. Instead, the attackers ruined her experiment by releasing the test drug into the air, contaminating all of the controls. This sets off a sequence of events that results in Moh, Janis, and Jordon Brown, a stock trader for a religious theocracy, being on the run from the US/UN and Space Defense.

I had forgotten what it was like to read the uncompromising old-school style of science fiction novel that throws you into the world and explains nothing, leaving it to the reader to piece the world together as you go. It's weirdly fun, but I'm either out of practice or this was a particularly challenging example of the genre. MacLeod throws a lot of characters at you quickly, including some that have long and complicated personal histories, and it's not until well into the book that the pieces start to cohere into a narrative. Even once that happens, the relationship between the characters and the plot is unobvious until late in the book, and comes from a surprising direction.

Science fiction as a genre is weirdly conservative about political systems. Despite the grand, futuristic ideas and the speculation about strange alien societies, the human governments rarely rise to the sophistication of a modern democracy. There are a lot of empires, oligarchies, and hand-waved libertarian semi-utopias, but not a lot of deep engagement with the speculative variety of government systems humans have proposed. The rare exceptions therefore get a lot of attention from those of us who find political systems fascinating.

MacLeod has a reputation for writing political SF in that sense, and The Star Fraction certainly delivers. Moh (despite the name of his collective, which is explained briefly in the book) is a Trotskyist with a family history with the Fourth International that is central to the plot. The setting is a politically fractured Britain full of autonomous zones with wildly different forms of government, theoretically ruled by a restored monarchy. That monarchy is opposed by the Army of the New Republic, which claims to be the legitimate government of the United Kingdom and is considered by everyone else to be terrorists. Hovering in the background is a UN entirely subsumed by the US, playing global policeman over a chaotic world shattered by numerous small-scale wars.

This satisfyingly different political world is a major plus for me. The main drawback is that I found the world-building and politics more interesting than the characters. It's not that I disliked them; I found them enjoyably quirky and odd. It's more that so much is happening and there are so many significant characters, all set in an unfamiliar and unexplained world and often divided into short scenes of a few pages, that I had a hard time keeping track of them all. Part of the point of The Star Fraction is digging into their tangled past and connecting it up with the present, but the flashbacks added a confused timeline on top of the other complexity and made it hard for me to get lost in the story. The characters felt a bit too much like puzzle pieces until the very end of the book.

The technology is an odd mix with a very 1990s feel. MacLeod is one of the SF authors who can make computers and viruses believable, avoiding the cyberpunk traps, but AI becomes relevant to the plot and the conception of AI here feels oddly retro. (Not MacLeod's fault; it's been nearly 30 years and a lot has changed.) On-line discussion in the book is still based on newsgroups, which added to the nostalgic feel. I did like the eventual explanation for the computing part of the plot, though; I can't say much while avoiding spoilers, but it's one of the more believable explanations for how a technology could spread in a way required for the plot that I've read.

I've been planning on reading this series for years but never got around to it. I enjoyed my last try at a MacLeod series well enough to want to keep reading, but not well enough to keep reading immediately, and then other books happened and now it's been 19 years. I feel similarly about The Star Fraction: it's good enough (and in a rare enough subgenre of SF) that I want to keep reading, but not enough to keep reading immediately. We'll see if I manage to get to the next book in a reasonable length of time.

Followed by The Stone Canal.

Rating: 6 out of 10

20 March, 2023 04:08AM

March 19, 2023

Review: Allow Me to Retort

Review: Allow Me to Retort, by Elie Mystal

Publisher: The New Press
Copyright: 2022
ISBN: 1-62097-690-0
Format: Kindle
Pages: 257

If you're familiar with Elie Mystal's previous work (writer for The Nation, previously editor for Above the Law, Twitter gadfly, and occasional talking head on news commentary programs), you'll have a good idea what to expect from this book: pointed liberal commentary, frequently developing into rants once he works up a head of steam. The subtitle of A Black Guy's Guide to the Constitution tells you that the topic is US constitutional law, which is very on brand. You're going to get succinct and uncompromising opinions at the intersection of law and politics. If you agree with them, you'll probably find them funny; if you disagree with them, you'll probably find them infuriating.

In other words, Elie Mystal is the sort of writer one reads less for "huh, I disagreed with you but that's a good argument" and more for "yeah, you tell 'em, Elie!" I will be very surprised if this book changes anyone's mind about a significant political debate. I'm not sure if people who disagree are even in the intended audience.

I'm leery of this sort of book. Usually its function is to feed confirmation bias with some witty rejoinders and put-downs that only sound persuasive to people who already agree with them. If I want that, I can just read Twitter (and you will be unsurprised to know that Mystal has nearly 500,000 Twitter followers). This style can also be boring at book length if the author is repeating variations on a theme.

There is indeed a lot of that here, particularly in the first part of this book. If you don't generally agree with Mystal already, save yourself the annoyance and avoid this like the plague. It's just going to make you mad, and I don't think you're going to get anything useful out of it. But as I got deeper into this book, I think Mystal has another, more interesting purpose that's aimed at people who do largely agree. He's trying to undermine a very common US attitude (even on the left) about the US constitution.

I don't know if most people from the US (particularly if they're white and male) realize quite how insufferably smug we tend to be about the US constitution. When you grow up here, the paeans to the constitution and the Founding Fathers (always capitalized like deities) are so ubiquitous and unremarked that it's difficult not to absorb them at a subconscious level. There is a national mythology about the greatness of our charter of government that crosses most political divides. In its modern form, this comes with some acknowledgment that some of its original provisions (the notorious three-fifths of a person clause, for instance) were bad, but we subsequently fixed them and everything is good now. Nearly everyone gets taught this in school, and it's almost never challenged. Even the edifices of the US left, such as the ACLU and the NAACP, tend to wrap themselves in the constitution.

It's an enlightening experience to watch someone from the US corner a European with a discussion of the US constitution and watch the European plan escape routes while their soul attempts to leave their body. And I think it's telling that having that experience, as rare as it might be given how oblivious we can be, is still more common than a white person having a frank conversation with a black person in the US about the merits of the constitution as written. For various reasons, mostly because this is not very safe for the black person, this rarely happens.

This book is primarily Mystal giving his opinion on various current controversies in constitutional law, but the underlying refrain is that the constitution is a trash document written by awful people that sets up a bad political system. That system has been aggressively defended by reactionary Supreme Courts, which along with the designed difficulty of the amendment process has prevented fixing many obviously broken parts. This in turn has led to numerous informal workarounds and elaborate "interpretations" to attempt to make the system vaguely functional.

In other words, Mystal is trying to tell the US reader to stop being so precious about this specific document, and is using its truly egregious treatment of black people as the main fulcrum for his argument. Along the way, he gives an abbreviated tour of the highlights of constitutional law, but if you're at all interested in politics you've probably heard most of that before. The main point, I think, is to dig up any reverence left over from a US education, haul it out into the light of day, and compare it to the obvious failures of the constitution as a body of law and the moral failings of its authors. Mystal then asks exactly why we should care about original intent or be so reluctant to change the resulting system of government.

(Did I mention you should not bother with this book if you don't agree with Mystal politically? Seriously, don't do that to yourself.)

Readers of my reviews will know that I'm fairly far to the left politically, particularly by US standards, and yet I found it fascinating how much lingering reverence Mystal managed to dig out of me while reading this book. I found myself getting defensive in places, which is absurd because I didn't write this document. But I grew up surrounded by nigh-universal social signaling that the US constitution was the greatest political document ever, and in a religious tradition that often argued that it was divinely inspired. If one is exposed to enough of this, it becomes part of your background understanding of the world. Sometimes it takes someone being deliberately provocative to haul it back up to the surface where it can be examined.

This book is not solely a psychological intervention in national mythology. Mystal gets into detailed legal arguments as well. I thought the most interesting was the argument that the bizarre and unconvincing "penumbras" and "emanations" reasoning in Griswold v. Connecticut (which later served as the basis of Roe v. Wade) was in part because the Lochner era Supreme Court had, in the course of trying to strike down all worker protection laws, abused the concept of substantive due process so badly that Douglas was unwilling to use it in the majority opinion and instead made up entirely new law. Mystal argues that the Supreme Court should have instead tackled the true meaning of substantive due process head-on and decided Griswold on 14th Amendment equal protection and substantive due process grounds. This is probably a well-known argument in legal circles, but I'd not run into it before (and Mystal makes it far more interesting and entertaining than my summary).

Mystal also joins the tradition of thinking of the Reconstruction Amendments (the 13th, 14th, and 15th amendments passed after the Civil War) as a second revolution and an attempt to write a substantially new constitution on different legal principles, an attempt that subsequently failed in the face of concerted and deadly reactionary backlash. I first encountered this perspective via Jamelle Bouie, and it added a lot to my understanding of Reconstruction to see it as a political fight about the foundational principles of US government in addition to a fight over continuing racism in the US south. Maybe I was unusually ignorant of it (I know I need to read W.E.B. DuBois), but I think this line of reasoning doesn't get enough attention in popular media. Mystal provides a good introduction.

But, that being said, Allow Me to Retort is more of a vibes book than an argument. As in his other writing, Mystal focuses on what he sees as the core of a controversy and doesn't sweat the details too much. I felt like he was less trying to convince me and more trying to model a different way of thinking and talking about constitutional law that isn't deferential to ideas that are not worthy of deference. He presents his own legal analysis and possible solutions to current US political challenges, but I don't think the specific policy proposals are the strong part of this book. The point, instead, is to embrace a vigorous politics based on a modern understanding of equality, democracy, and human rights, without a lingering reverence for people who mostly didn't believe in any of those things. The role of the constitution in that politics is a flawed tool rather than a sacred text.

I think this book is best thought of as an internal argument in the US left. That argument is entirely within the frame of the US legal tradition, so if you're not in the US, it will be of academic interest at best (and probably not even that). If you're on the US right, Mystal offers lots of provocative pull quotes to enjoy getting outraged over, but he provides that service on Twitter for free.

But if you are on the US left, I think Allow Me to Retort is worth more consideration than I'd originally given it. There's something here about how we engage with our legal history, and while Mystal's approach is messy, maybe that's the only way you can get at something that's more emotion than logic. In some places it degenerates into a Twitter rant, but Mystal is usually entertaining even when he's ranting. I'm not sorry I read it.

Rating: 7 out of 10

19 March, 2023 03:59AM

Michael Ablassmeier

small standalone sshds in go

I’ve been looking into some existing sshd implementations in Go. Most of the projects on GitHub seem to use the standard x/crypto/ssh library.

During testing, I just wanted to see which banner these kinds of ssh servers provide, using the simple command:

 nc localhost <port>

And I noticed that at least some of these “sshds” did not accept any further connections. A simple DoS via netcat, nice.

To this day, the Go documentation is missing the crucial hint that the function handling the connection should be called as a goroutine, otherwise it simply blocks any further incoming connections.
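To illustrate the point, here is a minimal sketch of such a server (my own hypothetical example with a throwaway host key and no client authentication, not code from any of the projects mentioned); the whole fix is the go keyword in front of the handler call:

package main

import (
    "crypto/ed25519"
    "crypto/rand"
    "log"
    "net"

    "golang.org/x/crypto/ssh"
)

func handleConn(c net.Conn, config *ssh.ServerConfig) {
    defer c.Close()
    // The SSH handshake blocks until the client speaks, so a plain
    // "nc localhost 2022" that never sends a banner stalls right here.
    sconn, chans, reqs, err := ssh.NewServerConn(c, config)
    if err != nil {
        return
    }
    defer sconn.Close()
    go ssh.DiscardRequests(reqs)
    for newChan := range chans {
        newChan.Reject(ssh.Prohibited, "demo server, no channels")
    }
}

func main() {
    // Demo only: no client auth, throwaway in-memory host key.
    config := &ssh.ServerConfig{NoClientAuth: true}
    _, priv, err := ed25519.GenerateKey(rand.Reader)
    if err != nil {
        log.Fatal(err)
    }
    signer, err := ssh.NewSignerFromKey(priv)
    if err != nil {
        log.Fatal(err)
    }
    config.AddHostKey(signer)

    ln, err := net.Listen("tcp", "127.0.0.1:2022")
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            continue
        }
        // Crucial: hand each connection off to its own goroutine.
        // Calling handleConn(conn, config) directly would block the
        // accept loop until this client finishes (or aborts) the
        // handshake, locking out all further connections.
        go handleConn(conn, config)
    }
}

Without the go, a single idle netcat connection parks the accept loop in the handshake and locks out every other client; with it, a stalled handshake only ties up its own goroutine.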

I created some pull requests on the most starred projects I found; it seems even experienced Go developers missed this part.

19 March, 2023 12:00AM

March 18, 2023

hackergotchi for Jonathan Dowland

Jonathan Dowland

Qi charger stand

I've got a Qi-charging phone cradle at home which orients the phone up at an angle which works with Apple's Face ID. At work, I've got a simpler "puck"-shaped one which is less convenient, so I designed a basic cradle to raise both the charger and the phone up.

cradle without phone

I did two iterations, and the second was "good enough" that I stopped there, although I would make some further alterations if I were to print it again: more of a cut-out for the USB-C cable, raising the plinth for the Qi charger so that USB-C cables with long collars have enough room, and elongating the base to compensate for the changed weight distribution.

cradle with phone

18 March, 2023 10:02PM

March 17, 2023

Reproducible Builds (diffoscope)

diffoscope 239 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 239. This version includes the following changes:

[ Chris Lamb ]
* Fix compatibility with pypdf 3.x, and correctly restore test data.
  (Closes: reproducible-builds/diffoscope#335)
* Rework PDF annotations processing into a separate method.

You can find out more by visiting the project homepage.

17 March, 2023 12:00AM

March 16, 2023

Scarlett Gately Moore

A Big Thank You Community! KDE Snaps resumed and More.

Snowy sunrise, Witch Wells, AZ

After my last post a few things have happened. First, I want to thank all of you for your support, monetary and moral. The open source community is amazing and I love being a part of it. We are surviving thanks to all of you. Despite my failed interview, a new door has opened up and I am happy to announce that Canonical is funding me to work part time for a period of 3 months on KDE Snaps! While not the full time role I was hoping for, I’ll take it, and who knows, maybe they will see just how awesome I am! 🙂

I started back to work on these last Tuesday. So far I have made good progress and even have a working Core22 snap!

Work done on upstream snapcraft (the tool used to make snaps):

New content pack version; fixed an issue with snapcraft remote-build choking on parts with /sdk; fixed a regex, as our naming scheme has changed:

https://github.com/snapcore/snapcraft/pull/4069

I ran into a new bug with snapcraft expand-extensions, so I must now enter all of this information manually into snapcraft.yaml until it is fixed; bug report here:

https://bugs.launchpad.net/snapcraft/+bug/2011719

And without further ado, our first core22 snap is Dragonplayer version 22.12.3, available in the snap store. Many more coming soon!

KDE Dragon media player

With a new month upon us, I must ask for help again; I will not see any money for this work until next month. Please consider a donation. Thank you from the bottom of our hearts.

https://gofund.me/c9cc02ed

16 March, 2023 10:07PM by sgmoore

Valhalla's Things

Swiss Embroidery Princess Petticoat

Posted on March 16, 2023

a person wearing a blue sleeveless fitted dress with calf-length skirt; there are small ruffles on the armscyes and the hem, and white lace on the collar and just above the hem ruffle, and small white buttons on a partial placket down the center front.

A few years ago a friend told me that her usual fabric shop was closing down and had a sale on all remaining stock.

While I was sad about yet another brick and mortar shop that was going to be missed (at least it was because the owners were retiring, not because it wasn’t sustainable anymore), of course I couldn’t miss the opportunity.

So we drove a few hundred km, had some nice time with a friend that (because of said few hundred km) we rarely see, and spent a few hours looting the corps… er… helping the shop owner getting rid of stock before their retirement.

A surprisingly small pile of fabric; everything is blue or black.

Among other things there was a cut of lightweight swiss embroidery cotton in blue which may or may not have been enthusiastically grabbed with plans of victorian underwear.

It was too nice to be buried under layers and layers of fabric (and I suspect that the embroidery wouldn’t feel great directly on the skin under a corset), so the natural fit was something at the corset cover layer, and the fabric was enough for a combination garment of the kind often worn in the later Victorian age to prevent the accumulation of bulk at the waist.

It also has the nice advantage that in this time of corrupted morals it is perfectly suitable as outerwear as a nice summer dress.

Then life happened, the fabric remained in my stash for a long while, but finally this year I have a good late victorian block that I can adapt, and with spring coming it was a good time to start working on the summer wardrobe.

scan from a vintage book with the pattern for a tight fitting jacket.

The block I’ve used comes from The Cutters’ Practical Guide to the Cutting of Ladies Garments and is for a jacket, rather than a bodice, but the bodice block from the same book had a 4 part back, which was too much for this garment. I reduced the ease around the bust a bit, which I believe worked just fine.

The main pattern was easy enough to prepare, I just had to add skirt panels with a straight side towards the front and flaring out towards the back, and I did a quick mockup from an old sheet to check the fit (good) and the swish and volume of the skirt (just right at the first attempt!).

The mockup was also used to get an idea of a few possible necklines, and I opted for a relatively deep V, and a front opening with a partial placket down to halfway between the waist and the hips. I also opted for a self-fabric ruffle at the hem and armscyes.

same dress, same person, from the side, with one hand in the pocket slit.

The only design choice left was the pocket situation: I wanted to wear this garment both as underwear (where pockets aren’t needed, and add unwanted bulk) and outerwear (where no pockets is not an option), and the fabric felt too thin to support the weight of the contents of a full pocket. So I decided to add slits into the seams, with just a modesty placket, and wear pockets under the dress as needed.

I decided to put the slits between the side and side back panels for two reasons: one is that this way the pockets can sit towards the back, where the fullness of the skirt is supposed to be, rather than under the flat front, and the other one is to keep the seams around the front panel clean, since they are the first ones to be changed when altering a garment for fit.

For the same reason, I didn’t trim the excess allowance from that seam: it means that it is a bit more bulky, but the fabric is thin enough that it’s not really noticeable, and it gives an additional cm for future alterations.

Then, as the garment was getting close to being finished I was measuring and storing some old cotton lace I had received as a gift, and there was a length of relatively small lace, and the finish on the neckline was pretty simple and called for embellishment, and who am I to deny embellishment to victorian inspired clothing?

A ruffle pleated into a receiving tuck, each pleat is fixed with a pin, and there are a lot of pins.

First I had to finish attaching the ruffles, however, and this is when I cursed myself for not using the ruffler foot I have (it would have meant not having selvedges on all seams of the ruffle), and for pleating the ruffle rather than gathering it (I prefer the look of handsewn gathers, but here I’m sewing everything by machine, and that’s faster, right? (it probably wasn’t)).

A metal box full of straight pins.

Also, this is where I started to get low on pins, and I had to use the ones from the vintage1 box I’ve been keeping as decoration in the sewing room.

A few long sessions of pinning later, the ruffle was sewn and I could add the lace; I used white thread so that it would be hidden on the right side, but easily visible inside the garment in case I’ll decide to remove or change it later.

A few buttons and buttonholes later, the garment was ready, and the only thing left was to edit the step-by-step pictures and publish the pattern: it’s now available as #FreeSoftWear on my patterns website.

And of course, I had to do a proper swish test of the finished dress with the ruffle, and I’m happy to announce that it was fully passed.

a person spinning on herself, the skirt and the ruffle are swishing out. Something in the pocket worn under the dress is causing a bit of bulge on one side.

Except, maybe I shouldn’t carry heavy items in my pockets when doing it? Oh, well.

I have other plans for the same pattern, but they involve making some crochet lace, so I expect I can aim at making them wearable in summer 2024.

Now I just have to wait for the weather to be a bit warmer, and then I can start enjoying this one.


  1. ok, even more vintage, since my usual pins come from a plastic box that has been probably bought in the 1980s.↩︎

16 March, 2023 12:00AM

March 15, 2023

hackergotchi for Thomas Koch

Thomas Koch

Know your tools - simple backup with rsync

Posted on June 9, 2022

I’ve been using rsync for years and still did not know its full powers. I just wanted a quick and dirty simple backup, but realised that rsnapshot is not in Debian anymore.

However, you can do much of what rsnapshot does with rsync alone nowadays.

The --link-dest option (manpage) solves the part of creating hardlinks to a previous backup (found here). So my backup program becomes this shell script in ~/backups/backup.sh:

#!/bin/sh

SERVER="${1}"
BACKUP="${HOME}/backups/${SERVER}"
SNAPSHOTS="${BACKUP}/snapshots"

FOLDER=$(date --utc +%F_%H-%M-%S)
DEST="${SNAPSHOTS}/${FOLDER}"

# pick the most recent snapshot as the hardlink source for --link-dest
LAST=$(ls -d1 ${SNAPSHOTS}/????-??-??_??-??-??|tail -n 1)

# -rlpt: recurse, copy symlinks, preserve permissions and times;
# files unchanged since ${LAST} become hardlinks instead of copies
rsync \
  --rsh="ssh -i ${BACKUP}/sshkey -o ControlPath=none -o ForwardAgent=no" \
  -rlpt \
  --delete --link-dest="${LAST}" \
  ${SERVER}::backup "${DEST}"

The script connects to rsync in daemon mode as outlined in the section “USING RSYNC-DAEMON FEATURES VIA A REMOTE-SHELL CONNECTION” in the rsync manpage. This allows referencing a “module” as the source that is defined on the server side as follows:

[backup]
path = /
read only = true
exclude from = /srv/rsyncbackup/excludelist
uid = root
gid = root

The important bit is the read only setting, which protects the server against somebody with access to the ssh key overwriting files on the server via rsync and thus gaining full root access.

Finally, the command prefix in ~/.ssh/authorized_keys runs rsync as a daemon with sudo and the specified config file:

command="sudo rsync --config=/srv/rsyncbackup/config --server --daemon ."

The sudo setup is left as an exercise for the reader as mine is rather opinionated.

Unfortunately I have not managed to configure systemd timers in the way I wanted and therefore opened an issue: “Allow retry of timer triggered oneshot services with failed conditions or asserts”. Any help there is welcome!

15 March, 2023 06:48PM

Missing memegen

Posted on May 1, 2022

Back at $COMPANY we had an internal meme site. I had some reputation in my team for creating good memes. When I watched Episode 3 of Season 2 of Yes, Prime Minister yesterday, I really missed a place to post memes.

This is the full scene. Please watch it or even the full episode before scrolling down to the GIFs. I had a good laugh for some time.

With Debian, I could just download the episode from somewhere on the net with youtube-dl and easily create two GIFs using ffmpeg, with and without subtitles:

ffmpeg  -ss 0:5:59.600 -to 0:6:11.150 -i Downloads/Yes.Prime.Minister.S02E03-1254485068289.mp4 tmp/tragic.gif

ffmpeg  -ss 0:5:59.600 -to 0:6:11.150 -i Downloads/Yes.Prime.Minister.S02E03-1254485068289.mp4 \
        -vf "subtitles=tmp/sub.srt:force_style='Fontsize=60'" tmp/tragic_with_subtitle.gif

And this sub.srt file:

1
00:00:10,000 --> 00:00:12,000
Tragic.

I believe one needs to install the libavfilter-extra variant to burn the subtitle into the GIF.

Some

space

to

hide

the

GIFs.

The Prime Minister just learned that his predecessor, who was about to publish embarrassing memoirs, died of a sudden heart attack:

I can’t actually think of a meme with this GIF, that the internal thought police community moderation would not immediately take down.

For a moment I thought that it would be fun to have a Meme-Site for Debian members. But it is probably not the right time for this.

Maybe somebody likes the above GIFs though and wants to use them somewhere.

15 March, 2023 06:48PM

lsp-java coming to debian

Posted on March 12, 2022
Tags: debian

The Language Server Protocol (LSP) standardizes communication between editors and so-called language servers for different programming languages. This reduces the old problem that every editor had to implement many different plugins for all the different programming languages. With LSP an editor just needs to talk LSP and can immediately provide typical IDE features.

I already packaged the Emacs packages lsp-mode and lsp-haskell for Debian bullseye. Now lsp-java is waiting in the NEW queue.

I’m always worried about downloading and executing binaries from random places on the internet. It should be a matter of hygiene to only run binaries from official Debian repositories. Unfortunately this is not feasible when programming, and many people don’t see a problem with running multiple curl-to-shell pipes to set up their programming environment.

I prefer to do such stuff only in virtual machines. With Emacs and LSP I can finally have a lightweight textmode programming environment even for Java.

Unfortunately the lsp-java mode does not yet work over tramp. Once this is solved, I could run emacs on my host and only isolate the code and language server inside the VM.

The next step would be to also keep the code on the host and mount it with Virtio FS in the VM. But so far the necessary daemon is not yet in Debian (RFP: #1007152).

In detail, I uploaded these packages:

15 March, 2023 06:48PM

Emanuele Rocca

Disposable Debian VMs with debvm

Some notes on using debvm, an amazing piece of software I’ve started using only recently.

Create a new virtual machine:

$ debvm-create

You now have a virtual machine with Debian Sid of your host’s native architecture (probably amd64). The image file is called rootfs.ext4. You’ve got 1G of disk space in the VM.

You can now just run the VM! You will be automatically logged in as root.

$ debvm-run

Experiment in the VM; run all the sorts of tests you have in mind. For example, one thing I commonly do is verify that things work in a clean environment, as opposed to "on my machine".

If anything goes wrong, or if you just want to repeat the test: shut down the guest, remove rootfs.ext4, and start again with debvm-create.

Now, especially if you intend to create and recreate VMs multiple times, it helps to use a local apt mirror as a cache, to avoid fetching all the required packages from the internet over and over again. Install apt-cacher-ng on your host machine and point debvm-create at it:

$ debvm-create -- http://10.0.3.1:3142/debian

The additional options after -- are passed to mmdebstrap. In this case we’re specifying http://10.0.3.1:3142/debian as the URL of our local apt mirror. It is going to be used both for image creation and as the only entry in /etc/apt/sources.list on the guest. This is the reason for using 10.0.3.1, the IP address of the lxcbr0 interface used by qemu, instead of 127.0.0.1: to make sure that the guest VM has access to it too.

For a slightly more advanced example, we now want to:

  • run an arm64 VM

  • have more disk space available, say 2G

  • install additional packages (curl and locales)

  • allow SSH as root with the given public keys, in this example the authorized_keys installed on the host

  • start the VM in the background with no console output

$ debvm-create -a arm64 -o sid-arm64.ext4 -z 2G -k ~/.ssh/authorized_keys -- http://10.0.3.1:3142/debian --include curl,locales

Start the VM:

$ debvm-run -i sid-arm64.ext4 -s 2222 -g -- -display none &

SSH into the guest:

$ ssh -o NoHostAuthenticationForLocalhost=yes -p 2222 root@127.0.0.1

Enjoy!

15 March, 2023 05:18AM by Emanuele Rocca (ema@linux.it)

hackergotchi for Freexian Collaborators

Freexian Collaborators

Debian Contributions: Core python package, Redmine backports, and more! (by Utkarsh Gupta, Stefano Rivera)

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Core Python Packages, by Stefano Rivera

Just before the freeze, pip added support for PEP-668. This is a scheme devised by Debian with other distributions and the Python Packaging Authority, to allow distributors to mark Python installations as being managed by a distribution package manager. When this EXTERNALLY-MANAGED flag is present, installers like pip will refuse to install packages outside a virtual environment. This protects users from breaking unrelated software on their systems, when installing packages with pip or similar tools. Stefano quickly got this version of pip into the archive, marked Debian’s Python interpreters as EXTERNALLY-MANAGED, and worked with the upstream to add a mechanism to allow users to override the restriction. Debian bookworm will likely be the first distro release to implement this change.
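For illustration, the EXTERNALLY-MANAGED marker is simply a file dropped into the interpreter’s standard library directory; PEP 668 specifies an INI-like format whose [externally-managed] section carries the error message that installers display to the user. A sketch along these lines (illustrative, abridged wording, not Debian’s exact file):

# e.g. /usr/lib/python3.11/EXTERNALLY-MANAGED (illustrative, abridged)
[externally-managed]
Error=This Python installation is managed by the distribution package
 manager. To install Python packages system-wide, use apt. To install a
 package for your own use, create a virtual environment with
 python3 -m venv and use its pip.

The override mechanism mentioned above landed in pip as the --break-system-packages flag.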

The transition from Python 3.10 to 3.11 was one of the last to complete before the bookworm freeze (as 3.11 only released at the end of October 2022). Stefano helped port some Python packages to 3.11, in January, and also kicked off the final transition to remove Python 3.10 support.

Stefano did a big round of bug triage in the cPython interpreter (and related) packages, applying some provided patches, and fixing some long-standing minor bugs in the packaging.

To allow Debian packages to more accurately reflect upstream-specified dependencies that only apply under specific Python interpreter versions, in the future, Stefano added more metadata to the python3 binary package.

Python’s unittest runner would exit successfully, with 0 passed tests, if it couldn’t find any tests. This means that configuration or layout changes can cause test failures to go unnoticed, because the tests aren’t being run any more in Debian packages. Stefano proposed a change to Python 3.12 to change this behavior and treat 0 tests as a kind of failure.

debvm, by Helmut Grohne

With support from Johannes Schauer Marin Rodrigues, and Jochen Sprickerhof, Helmut Grohne wrote debvm, a tool for quickly creating and running Debian virtual machine images for various architectures and Debian and Ubuntu releases. This is meant for development and testing purposes and has already identified a number of bugs in e.g. fakechroot (#1029490), Linux (#1029270), and runit (#1028181).

Rails 6 and Redmine 5 available in bullseye-backports, by Utkarsh Gupta

Bullseye users can now upgrade to the latest 6.1 branch of Rails, v6.1.7, and the latest Redmine version, v5.0.4. The Ruby team received numerous requests to backport the latest versions of Rails and Redmine, especially since no redmine was shipped in the bullseye release itself. So this is big news for all users, as we’ve not only successfully backported both packages, but also fixed all the CVEs and RC bugs in the process!

This work was sponsored by Entrouvert.

Patches metadata in the Package Tracker, by Raphaël Hertzog

Building on the great Ultimate Debian Database work of Lucas Nussbaum and on his suggestion, Raphaël enhanced the Debian Package Tracker to display action items when the patches metadata indicate that some patches were not forwarded upstream, or when the metadata were invalid. One can now also browse the patches metadata from the “Links” panel on the right.

Fixed kernel bug that broke debian-installer on computers with Mediatek wifi devices, by Helmut Grohne

As part of our regular work on Kali Linux for OffSec, they funded Helmut’s work to fix the MT7921e driver. When loaded without firmware available, it would not register itself, but upon module release it would unregister itself, causing a kernel oops. This was commonly observed in Kali Linux when reloading the module to add firmware. Helmut Grohne identified the cause and sent a patch, a different variant of which is now heading into Linux and is available from Kali Linux.

Printing in Debian, by Thorsten Alteholz

There are about 40 packages in Debian that take care of sending output to printers, scanning documents, or even sending documents to fax machines. In light of the upcoming/already ongoing freeze, these packages had to be updated to the latest versions and bugs had to be fixed. Basically this applies to large packages like cups, cups-filters, and hplip, but the smaller ones shouldn’t be neglected either. All in all Thorsten uploaded 13 packages with new upstream versions or improved packaging, and could resolve 14 bugs. Further triaging led to 35 bugs that could be closed, either because they were already fixed and not closed in an earlier upload, or because they could not be reproduced with current software versions.

There is also work to do to prepare for the future. Historically, printing on Linux required finding a PPD file for your printer and finding some software that is able to render your documents with this PPD. These days, driverless printing is becoming more common and the use of PPD files has decreased.

In the upcoming version 3.0 of cups, PPD files are no longer supported and so-called printer applications need to be used. In order not to lose the ability to print documents, this big transition needs to be carefully planned. It started at the beginning of 2023 and will hopefully be finished with the release of Debian Trixie. More information can be found in this Debian Printing Wiki article. In preparation for this transition Thorsten created three new packages.

Yade update, by Anton Gladky

Last month, Anton updated the yade package to the newest 2023.02a version, which includes new features.

Yade is a software package for discrete element method (DEM) simulations, which are widely used in scientific and engineering fields for the simulation of granular systems. Yade is an open-source project that is being used worldwide for different tasks, such as geomechanics, civil engineering, mining, and materials science.

The Yade package in Debian supports different precision levels for its simulations. This means that researchers and engineers can select the needed precision level without recompiling the package, saving time and effort.

Miscellaneous contributions

  • Helmut Grohne continues to improve cross building (mostly Qt) and architecture bootstrap (mostly loong64 and musl).

15 March, 2023 12:00AM by Utkarsh Gupta, Stefano Rivera

March 14, 2023

Anuradha Weeraman

US-11604662-B2


I’m happy to announce, that after a long wait, patent US-11604662-B2 has been issued.

I want to thank and recognize my co-inventors, Div Prakash and Subin George, who I’m privileged to be on paper with.

The effort that led to this work involved a group of engineers, many phone calls, and some nerve-wracking presentations, culminating in a fantastic hackathon-winning outcome for a young and talented team, which I was proud to be a part of and privileged to lead.

14 March, 2023 02:20PM by Anuradha Weeraman

March 13, 2023

Dima Kogan

Debian at SCaLE 20x

SCaLE 20x just wrapped up. We spent three days running the Debian booth: passing out stickers, penguin swag, coffee and cookies, and telling everyone that would listen about our great OS. As usual, Richard Hecker, Chris McKenzie and I attended as the "LA Debian contingent". Mathias Gibbens flew in from Albuquerque, and Ha Lam and Syed Reza stopped by periodically.

Chris created extra demand by restricting the supply of plushy penguins. Some kid was shocked at my old laptop, only to see Mathias pull out an even older one. And we finished off the conference by listening to Ken Thompson's tale about his music collection. Good times.

The crew:

R0003400.jpg

R0003423.jpg

Looking forward to next year!

13 March, 2023 07:58PM by Dima Kogan

Russell Coker

Firebuild

After reading Bálint’s blog post about Firebuild (a compile cache) [1] I decided to give it a go. It’s non-free; the project web site [2] says that it’s free for non-commercial use or commercial trials.

My first attempt at building a Debian package failed due to man-recode using a seccomp() sandbox, I filed Debian bug #1032619 [3] about this (thanks for the quick response Bálint). The solution for me was to edit /etc/firebuild.conf and add man-recode to the dont_intercept list. The new version that’s just been uploaded to Debian fixes it by disabling seccomp() and will presumably allow slightly better performance.

Here are the results of building the refpolicy package: a regular build, the first build with Firebuild (30% slower), and a rebuild with Firebuild that reduced the time by almost 42%.

real    1m32.026s
user    4m20.200s
sys     2m33.324s

real    2m4.111s
user    6m31.769s
sys     3m53.681s

real    0m53.632s
user    1m41.334s
sys     3m36.227s

Next I did a test of building a Linux 6.1.10 kernel with “make bzImage -j18”; here are the results from a normal build, a first build with firebuild, and a second build. The real time is worse with firebuild for this on my machine. I think that the relative speeds of my CPU (reasonably fast 18 core) and storage (two of the slower NVMe devices in a BTRFS RAID-1) are the cause of the first build being relatively so much slower for “make bzImage” than for building the refpolicy, as the kernel build process involves a lot more data. For the final build I moved ~/.cache/firebuild to a tmpfs (I have 128G of RAM and not much running on my machine at the time of the tests); even then building with firebuild was slightly slower in real time but took significantly less CPU time (user+sys being 20 minutes instead of 36). I also ran several tests with the kernel source tree on a tmpfs, but for unknown reasons those tests each took about 6 minutes. Does firebuild or the Linux kernel build process dislike tmpfs for some reason?

real    2m43.020s
user    31m30.551s
sys     5m15.279s

real    8m49.675s
user    64m11.258s
sys     19m39.016s

real    3m6.858s
user    7m47.556s
sys     9m22.513s

real    2m51.910s
user    10m53.870s
sys     9m21.307s

One thing I noticed from the kernel build tests is that the total CPU time taken by the firebuild process (as reported by ps) was more than 2/3 of the run time, and top usually reported it as taking around 75% of a CPU core. It seems to me that the firebuild process itself is a bottleneck on build speed. Building refpolicy without firebuild has an average of 4.5 cores in use while building the kernel has 13.5. Unless they make a multi-threaded version of firebuild it seems that it won’t give the performance one would hope for from a CPU with 18+ cores. I presume that if I had been running with hyper-threading enabled then firebuild would have been even worse for kernel builds, as it would sometimes get the second thread of a core. It looks like firebuild would perform better on AMD CPUs, as they tend to have fewer CPU cores with greater average performance per core, so a single CPU core for firebuild will be less limiting. I presume that the firebuild developers will make it perform better with large numbers of cores in future; the latest Intel laptop CPUs have 16+ cores and servers with 2*40 core CPUs are common.

The performance improvement for refpolicy is significant as a portion of build time, but insignificant in terms of real time. A full build of refpolicy doesn’t take enough time to go get a Coke, and reducing it doesn’t offer a huge benefit. If Firebuild had been available in past years, when refpolicy took 20 minutes to build (when DDR2 was the best RAM available), it would have been a different story.

There is some potential to optimise the build of refpolicy for the non-firebuild case. Getting it to average more than 4.5 cores in use when there are 18 available should be possible; there are a number of shell for loops in the main Makefile, and maybe some of them can be replaced by make constructs to allow running in parallel. If it used 7 cores on average then a regular build would be faster than it currently is with firebuild and a hot cache. Any advice from make experts would be appreciated.

13 March, 2023 12:07PM by etbe

Xmpp Tools

For a while I’ve had my monitoring systems alert me via XMPP (Jabber). To do that I used the sendxmpp command-line program, which worked well for its basic tasks. I recently noticed that my laptop and workstation, which I had upgraded to Debian/Testing, weren’t sending messages; I’m not sure when it started, as my main monitoring of such machines is to touch a key and see if there’s a response – if I’m not at the keyboard then a failure doesn’t bother me too much.

I’ve filed Debian bug #1032868 [1] about this. As sendxmpp is apparently not supported upstream and we are preparing for a release, it could be that the next version of Debian is released without this working (if the problem is specific to talking to Prosody) or without sendxmpp (if it fails on all Jabber servers).

I next tested xmppc, which doesn’t send messages (it gives no error when I have apparently correct parameters, it just doesn’t send anything) and doesn’t display any text output for info-related commands, again without giving error messages or an error return code. I filed Debian bug #1032869 [2] about this.

Currently the only success I’ve found with Debian/Testing for this is go-sendxmpp. To configure it you set up a file named ~/.config/go-sendxmpp/config with the following contents:

username: JABBER-ID
password: PASSWORD

Go-sendxmpp can take a username and password on the command-line, but that’s bad for security as, in the absence of SE Linux or other advanced security systems, the password can be seen by any user on the same system who runs ps. To send a message run “echo $MESSAGE | go-sendxmpp $ADDR”, which sends $MESSAGE to $ADDR. It also has the option “go-sendxmpp -l” to listen for incoming messages. I don’t have an immediate need to receive messages from the command-line but it’s handy to have the option.
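
Since the password lives in that file, it’s worth making sure it’s readable only by you. A possible way to create it (with a hypothetical Jabber ID; substitute your own):

mkdir -p ~/.config/go-sendxmpp
cat > ~/.config/go-sendxmpp/config <<EOF
username: alice@example.org
password: SECRET
EOF
chmod 600 ~/.config/go-sendxmpp/config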

I probably won’t be able to get a new version of etbemon into Debian for the Bookworm release. So to get go-sendxmpp to work with etbemon you need to edit /usr/lib/mon/alert.d/mailxmpp.alert and change this sendxmpp line:

open (XMPP, "| /usr/bin/sendxmpp -a /etc/ssl/certs -t @xmpprec -r $host") ||

to this go-sendxmpp line:

open (XMPP, "| /usr/bin/go-sendxmpp @xmpprec") ||

13 March, 2023 07:13AM by etbe

Antoine Beaupré

Framework 12th gen laptop review

The Framework is a 13.5" laptop body with swappable parts, which makes it somewhat future-proof and certainly easily repairable, earning an "exceedingly rare" 10/10 score from ifixit.com.

There are two generations of the laptop's main board (both compatible with the same body): the Intel 11th and 12th gen chipsets.

I received my Framework 12th generation "DIY" device in late September 2022 and will update this page as I go along in the process of ordering, burning in, setting up and using the device over the years.

Overall, the Framework is a good laptop. I like the keyboard, the touch pad, the expansion cards. Clearly there's been some good work done on industrial design, and it's the most repairable laptop I've had in years. Time will tell, but it looks sturdy enough to last me many years as well.

This is also one of the most powerful devices I have ever laid my hands on. I have managed, remotely, more powerful servers, but this is the fastest computer I have ever owned, and it fits in this tiny case. It is an amazing machine.

On the downside, there's a bit of proprietary firmware required (WiFi, Bluetooth, some graphics) and the Framework ships with a proprietary BIOS, with currently no Coreboot support. Expect to need the latest kernel, firmware, and hacking around a bunch of things to get resolution and keybindings working right.

Like others, I at first found significant power management issues, but many of them can actually be solved with some configuration. Some of the expansion ports (HDMI, DP, MicroSD, and SSD) use power when idle, so don't expect week-long suspend, or "full day" battery life while those are plugged in.

Finally, the expansion ports are nice, but there's only four of them. If you plan to have a two-monitor setup, you're likely going to need a dock.

Read on for the detailed review. For context, I'm moving from the Purism Librem 13v4 because it basically exploded on me. I had, in the meantime, reverted back to an old ThinkPad X220, so I sometimes compare the Framework with that venerable laptop as well.

This blog post has been maturing for months now. It started in September 2022 and I declared it completed in March 2023. It's the longest single article on this entire website, currently clocking in at about 13,000 words. It will take an average reader a full hour to go through this thing, so I don't expect anyone to actually do that. This introduction should be good enough for most people; read the first section if you intend to actually buy a Framework. Jump around the table of contents as you see fit after you buy the laptop, as it might include some crucial hints on how to make it work best for you, especially on (Debian) Linux.

Advice for buyers

Those are things I wish I would have known before buying:

  1. consider buying 4 USB-C expansion cards, or at least a mix of 4 USB-A or USB-C cards, as they use less power than other cards, and you do want to fill those expansion slots: otherwise the empty slots snag on things and feel insecure

  2. you will likely need a dock or at least a USB hub if you want a two-monitor setup, otherwise you'll run out of ports

  3. you have to do some serious tuning to get proper (10h+ idle, 10 days suspend) power savings

  4. in particular, beware that the HDMI, DisplayPort and particularly the SSD and MicroSD cards take a significant amount power, even when sleeping, up to 2-6W for the latter two

  5. beware that the MicroSD card reader is what it says: Micro; normal SD cards won't fit, and while there might be a full-sized one eventually, it's currently only at the prototyping stage

  6. the Framework monitor has an unusual aspect ratio (3:2): I like it (and it matches classic and digital photography aspect ratio), but it might surprise you

Current status

I have the Framework! It's set up with a fresh new Debian bookworm installation. I've run through a large number of tests and burn-in.

I have decided to use the Framework as my daily driver, and had to buy a USB-C dock to get my two monitors connected, which was its own adventure.

Specifications

Those are the specifications of the 12th gen, in general terms. Your build will of course vary according to your needs.

  • CPU: i5-1240P, i7-1260P, or i7-1280P (Up to 4.4-4.8 GHz, 4+8 cores), Iris Xe graphics
  • Storage: 250-4000GB NVMe (or bring your own)
  • Memory: 8-64GB DDR4-3200 (or bring your own)
  • WiFi 6e (AX210, vPro optional, or bring your own)
  • 296.63mm X 228.98mm X 15.85mm, 1.3Kg
  • 13.5" display, 3:2 ratio, 2256px X 1504px, 100% sRGB, >400 nit
  • 4 x USB-C user-selectable expansion ports, including
    • USB-C
    • USB-A
    • HDMI
    • DP
    • Ethernet
    • MicroSD
    • 250-1000GB SSD
  • 3.5mm combo headphone jack
  • Kill switches for microphone and camera
  • Battery: 55Wh
  • Camera: 1080p 60fps
  • Biometrics: Fingerprint Reader
  • Backlit keyboard
  • Power Adapter: 60W USB-C (or bring your own)
  • ships with a screwdriver/spludger
  • 1 year warranty
  • base price: 1000$CAD, but that doesn't give you much; typical builds are around 1500-2000$CAD

Actual build

This is the actual build I ordered. Amounts in CAD. (1CAD = ~0.75EUR/USD.)

Base configuration

  • CPU: Intel® Core™ i5-1240P, 1079$
  • Memory: 16GB (1 x 16GB) DDR4-3200, 104$

Customization

  • Keyboard: US English, included

Expansion Cards

  • 2 USB-C $24
  • 3 USB-A $36
  • 2 HDMI $50
  • 1 DP $50
  • 1 MicroSD $25
  • 1 Storage – 1TB $199
  • Sub-total: 384$

Accessories

  • Power Adapter - US/Canada $64.00

Total

  • Before tax: 1606$
  • After tax and duties: 1847$
  • Free shipping

Quick evaluation

This is basically the TL;DR, just focusing on broad pros and cons of the laptop.

Pros

Cons

  • the 11th gen is out of stock, except for the higher-end CPUs, which are much less affordable (700$+)

  • the 12th gen has compatibility issues with Debian (followup on the DebianOn page); basically: brightness hotkeys, power management, and WiFi have issues, while the webcam is okay even though the chipset is the infamous Alder Lake, because it does not have the fancy camera; most issues currently seem solvable, and upstream is working with mainline to get their shit working

  • 12th gen might have issues with thunderbolt docks

  • they used to have some difficulty keeping up with the orders: the first two batches shipped, the third batch sold out, and the fourth batch should have shipped (?) in October 2021; they generally seem to keep up with shipping now. Update (August 2022): they rolled out a second line of laptops (12th gen); the first batch shipped, the second batch shipped late, and the September 2022 batch was generally on time (see this spreadsheet for a crowdsourced effort to track this). Those supply chain issues seem to be under control as of early 2023: I got the Ethernet expansion card shipped within a week.

  • compared to my previous laptop (Purism Librem 13v4), it feels strangely bulkier and heavier; it's actually lighter than the Purism (1.3kg vs 1.4kg) and thinner (15.85mm vs 18mm), but the design of the Purism laptop (tapered edges) makes it feel thinner

  • no space for a 2.5" drive

  • rather bright LED around the power button, which can be dimmed in the BIOS (though not low enough for my taste); I got used to it

  • fan is quiet when idle, but can be noisy when running, for example if you max out a CPU for a while

  • battery described as "mediocre" by Ars Technica (above), confirmed poor in my tests (see below)

  • no RJ-45 port, and attempts at designing one were failing because the modular plugs are too thin to fit (according to Linux After Dark), so it seemed unlikely to ever have one. Update: they cracked that nut and ship a 2.5Gbps Ethernet expansion card with a Realtek chipset, without any firmware blob (!)

  • a bit pricey for the performance, especially when compared to the competition (e.g. Dell XPS, Apple M1)

  • 12th gen Intel has glitchy graphics; it seems like Intel hasn't fully landed proper Linux support for that chipset yet

Initial hardware setup

A breeze.

Accessing the board

The internals are accessed through five Torx screws, but there's a nice screwdriver/spudger that works well enough. The screws actually hold in place so you can't even lose them.

The first setup is a bit counter-intuitive coming from the Librem laptop, as I expected the back cover to lift and give me access to the internals. But instead the screws release the keyboard and touch pad assembly, so you actually need to flip the laptop back upright and lift the assembly off (!) to get access to the internals. Kind of scary.

I also actually unplugged a connector when lifting the assembly, because I lifted it towards the monitor while you actually need to lift it to the right. Thankfully, the connector didn't break; it just snapped off and I could plug it back in, no harm done.

Once there, everything is well indicated, with QR codes all over the place supposedly leading to online instructions.

Bad QR codes

Unfortunately, the QR codes I tested (in the expansion card slot, the memory slot and the CPU slots) did not actually work, so I wonder how useful they actually are.

After all, they need to point to something, and that means a URL and a running website that will answer those requests forever. I bet those will break sooner rather than later and in fact, as far as I can tell, they just don't work at all. I prefer the approach taken by the MNT Reform here, which designed (with the 100 rabbits folks) an actual paper handbook (PDF).

The first QR code that's immediately visible from the back of the laptop, in an expansion card slot, is a 404. It seems to be some serial number URL, but I can't actually tell because, well, the page is a 404.

I was expecting that bar code to lead me to an introduction page, something like "how to set up your Framework laptop". Support actually confirmed that it should point to a quickstart guide. But in a bizarre twist, they somehow sent me the URL with the plus (+) signs escaped, like this:

https://guides.frame.work/Guide/Framework\+Laptop\+DIY\+Edition\+Quick\+Start\+Guide/57

... which Firefox immediately transforms into:

https://guides.frame.work/Guide/Framework/+Laptop/+DIY/+Edition/+Quick/+Start/+Guide/57

I'm puzzled as to why they would send the URL that way, the proper URL is of course:

https://guides.frame.work/Guide/Framework+Laptop+DIY+Edition+Quick+Start+Guide/57

(They have also "let the team know about this for feedback and help resolve the problem with the link" which is a support code word for "ha-ha! nope! not my problem right now!" Trust me, I know, my own code word is "can you please make a ticket?")

Seating disks and memory

The "DIY" kit doesn't actually have that much of a setup. If you bought RAM, it's shipped outside the laptop in a little plastic case, so you just seat it in as usual.

Then you insert your NVMe drive, and, if that's your fancy, you also install your own mPCI WiFi card. If you ordered one (which was my case), it's pre-installed.

Closing the laptop is also kind of amazing, because the keyboard assembly snaps into place with magnets. I have actually used the laptop with the keyboard unscrewed as I was putting the drives in and out, and it actually works fine (and will probably void your warranty, so don't do that). (But you can.) (But don't, really.)

Hardware review

Keyboard and touch pad

The keyboard feels nice, for a laptop. I'm used to mechanical keyboards and I'm rather violent with those poor things. Yet the key travel is nice and it's clickety enough that I don't feel too disoriented.

At first, the keyboard felt more laggy than my normal workstation setup, but it turned out this was a graphics driver issue. After enabling a compositing manager, everything feels snappy.

The touch pad feels good. The double-finger scroll works well enough, and I don't have to wonder too much where the middle button is, it just works.

Taps don't work, out of the box: that needs to be enabled in Xorg, with something like this:

cat > /etc/X11/xorg.conf.d/40-libinput.conf <<EOF
Section "InputClass"
      Identifier "libinput touch pad catchall"
      MatchIsTouchpad "on"
      MatchDevicePath "/dev/input/event*"
      Driver "libinput"
      Option "Tapping" "on"
      Option "TappingButtonMap" "lmr"
EndSection
EOF

But be aware that once you enable that tapping, you'll need to deal with palm detection... So I have not actually enabled this in the end.

Power button

The power button is a little dangerous. It's quite easy to hit, as it's right next to an expansion card where you are likely to plug in a power cable. And because the expansion cards are kind of hard to remove, you might squeeze the laptop (and the power key) when trying to remove the expansion card next to the power button.

So obviously, don't do that. But that's not very helpful.

An alternative is to make the power button do something else. With systemd-managed systems, it's actually quite easy. Add a HandlePowerKey stanza to (say) /etc/systemd/logind.conf.d/power-suspends.conf:

[Login]
HandlePowerKey=suspend
HandlePowerKeyLongPress=poweroff

You might have to create the directory first:

mkdir /etc/systemd/logind.conf.d/

Then restart logind:

systemctl restart systemd-logind

And the power button will suspend! Long-press to power off doesn't actually work as the laptop immediately suspends...
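
To check that the drop-in was actually picked up, systemd-analyze can print the merged configuration; something like this should show the HandlePowerKey override:

systemd-analyze cat-config systemd/logind.conf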

Note that there's probably half a dozen other ways of doing this, see this, this, or that.

Special keybindings

There is a series of "hidden" (as in: not labeled on the key) keybindings related to the fn keybinding that I actually find quite useful.

Key  Equivalent  Effect                  Command
p    Pause       lock screen             xset s activate
b    Break       ?                       ?
k    ScrLk       switch keyboard layout  N/A

It looks like those are defined in the microcontroller, so it would be possible to add more. For example, the SysRq key is almost bound to fn s in there.

Note that most other shortcuts like this are clearly documented (volume, brightness, etc). One key that's less obvious is F12, which only has the Framework logo on it. That actually calls the keysym XF86AudioMedia which, interestingly, does absolutely nothing here. By default, on Windows, it opens your browser to the Framework website and, on Linux, your "default media player".

The keyboard backlight can be cycled with fn-space. The dimmer version is dim enough, and the keybinding is easy to find in the dark.

A "skinny elephant" (the magic SysRq sequence) would be performed with alt PrtScr (above F11) plus a key, so for example alt fn F11 b should do a hard reset. This comment suggests you need to hold fn only if "function lock" is on, but that's actually the opposite of my experience.

Out of the box, some of the fn keys don't work. Mute, volume up/down, brightness, monitor changes, and the airplane mode key all do basically nothing. They don't send proper keysyms to Xorg at all.

This is a known problem and it's related to the fact that the laptop has light sensors to adjust the brightness automatically. Somehow some of those keys (e.g. the brightness controls) are supposed to show up as a different input device, but they don't seem to work correctly. It seems like the solution is for the Framework team to write a driver specifically for this, but there has been no progress since July 2022.

In the meantime, the fancy functionality can supposedly be disabled with:

echo 'blacklist hid_sensor_hub' | sudo tee /etc/modprobe.d/framework-als-blacklist.conf

... and a reboot. This solution is also documented in the upstream guide.

Note that there's another solution flying around that fixes this by changing permissions on the input device but I haven't tested that or seen confirmation it works.

Kill switches

The Framework has two "kill switches": one for the camera and one for the microphone. The camera one actually disconnects the USB device when turned off, and the mic one seems to cut the circuit. It doesn't show up as muted; it just stops feeding sound.

Both kill switches are around the main camera, on top of the monitor, and quite discreet. They turn "red" when enabled (i.e. "red" means "turned off").

Monitor

The monitor looks pretty good to my untrained eyes. I have yet to do photography work on it, but some photos I looked at look sharp and the colors are bright and lively. The blacks are dark and the screen is bright.

I have yet to use it in full sunlight.

The dimmed light is very dim, which I like.

Screen backlight

I bind brightness keys to xbacklight in i3, but out of the box I get this error:

sep 29 22:09:14 angela i3[5661]: No outputs have backlight property

It just requires this blob in /etc/X11/xorg.conf.d/backlight.conf:

Section "Device"
    Identifier  "Card0"
    Driver      "intel"
    Option      "Backlight"  "intel_backlight"
EndSection

This way I can control the actual backlight power with the brightness keys, and they do significantly reduce power usage.
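
For completeness, the i3 bindings themselves look something like this (a sketch; pick your own step size):

bindsym XF86MonBrightnessUp exec --no-startup-id xbacklight -inc 10
bindsym XF86MonBrightnessDown exec --no-startup-id xbacklight -dec 10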

Multiple monitor support

I have been able to hook up my two old monitors to the HDMI and DisplayPort expansion cards on the laptop. The lid closes without suspending the machine, and everything works great.

I actually run out of ports, even with a 4-port USB-A hub, which gives me a total of 7 ports:

  1. power (USB-C)
  2. monitor 1 (DisplayPort)
  3. monitor 2 (HDMI)
  4. USB-A hub, which adds:
  5. keyboard (USB-A)
  6. mouse (USB-A)
  7. Yubikey
  8. external sound card

Now the latter, I might be able to get rid of if I switch to a combo-jack headset, which I do have (and still need to test).

But still, this is a problem. I'll probably need a powered USB-C dock and better monitors, possibly with some Thunderbolt chaining, to save yet more ports.

But that means more money into this setup, argh. And figuring out my monitor situation is the kind of thing I'm not that big of a fan of. And neither is shopping for USB-C (or is it Thunderbolt?) hubs.

My normal autorandr setup doesn't work: I have tried saving a profile and it doesn't get autodetected, so I also first need to do:

autorandr -l framework-external-dual-lg-acer

The magic:

autorandr -l horizontal

... also works well.

The worst problem with those monitors right now is that they have a radically smaller resolution than the main screen on the laptop, which means I need to reset the font scaling to normal every time I switch back and forth between those monitors and the laptop. That means I actually need to do this:

autorandr -l horizontal &&
echo Xft.dpi: 96 | xrdb -merge &&
systemctl restart terminal xcolortaillog background-image emacs &&
i3-msg restart

Kind of disruptive.

Expansion ports

I ordered a total of 10 expansion ports.

I did manage to initialize the 1TB drive as encrypted storage, mostly to keep photos, as that is something that takes a massive amount of space (500GB and counting) and that I (unfortunately) don't work on very often (but still carry around).

The expansion ports are fancy and nice, but not actually that convenient. They're a bit hard to take out: you really need to crimp your fingernails on there and pull hard to take them out. There's a little button next to them to release, I think, but at first it feels a little scary to pull those pucks out of there. You get used to it though, and it's one of those things you can do without looking eventually.

There's only four expansion ports. Once you have two monitors, the drive, and power plugged in, bam, you're out of ports; there's nowhere to plug my Yubikey. So if this is going to be my daily driver, with a dual monitor setup, I will need a dock, which means more crap firmware and uncertainty, which isn't great. There are actually plans to make a dual-USB card, but that is blocked on designing an actual board for this.

I can't wait to see more expansion cards produced. There's an Ethernet expansion card which went out of stock basically the day it was announced, but it was eventually restocked.

I would like to see a proper SD-card reader. There's a MicroSD card reader, but that obviously doesn't work for normal SD cards, which would be more broadly compatible anyways (because you can have a MicroSD to SD card adapter, but I have never heard of the reverse). Someone actually found a SD card reader that fits and then someone else managed to cram it in a 3D printed case, which is kind of amazing.

Still, I really like that idea that I can carry all those little adapters in a pouch when I travel and can basically do anything I want. It does mean I need to shuffle through them to find the right one which is a little annoying. I have an elastic band to keep them lined up so that all the ports show the same side, to make it easier to find the right one. But that quickly gets undone and instead I have a pouch full of expansion cards.

Another awesome thing with the expansion cards is that they don't just work on the laptop: anything that takes USB-C can take those cards, which means you can use it to connect an SD card to your phone, for backups, for example. Heck, you could even connect an external display to your phone that way, assuming that's supported by your phone of course (and it probably isn't).

The expansion ports do take up some power, even when idle. See the power management section below, and particularly the power usage tests for details.

USB-C charging

One thing that is really a game changer for me is USB-C charging. It's hard to overstate how convenient this is. I often have a USB-C cable lying around to charge my phone, and I can just grab that thing and pop it in my laptop. And while it will obviously not charge as fast as the provided charger, it will stop draining the battery at least.

(As I wrote this, I had the laptop plugged in the Samsung charger that came with a phone, and it was telling me it would take 6 hours to charge the remaining 15%. With the provided charger, that flew down to 15 minutes. Similarly, I can power the laptop from the power grommet on my desk, reducing clutter as I have that single wire out there instead of the bulky power adapter.)

I also really like the idea that I can charge my laptop with a power bank or, heck, with my phone, if push comes to shove. (And vice-versa!)

This is awesome. And it works from any of the expansion ports, of course. There's a little LED next to the expansion ports as well, which indicates the charge status:

  • red/amber: charging
  • white: charged
  • off: unplugged

I couldn't find documentation about this, but the forum answered.

This is something of a recurring theme with the Framework. While it has a good knowledge base and repair/setup guides (and the forum is awesome), it doesn't have a good "owner manual" that shows you the different parts of the laptop and what they do. Again, something the MNT Reform did well.

Another thing that people are asking about is an external sleep indicator: because the power LED is on the main keyboard assembly, you don't actually see whether the device is active or not when the lid is closed.

Finally, I wondered what happens when you plug in multiple power sources and it turns out the charge controller is actually pretty smart: it will pick the best power source and use it. The only downside is it can't use multiple power sources, but that seems like a bit much to ask.

Multimedia and other devices

Those things also work:

  • webcam: splendid, best webcam I've ever had (but my standards are really low)
  • onboard mic: works well, good gain (maybe a bit much)
  • onboard speakers: sound okay, a little metal-ish, loud enough to be annoying, see this thread for benchmarks, apparently pretty good speakers
  • combo jack: works, with slight hiss, see below

There's also a light sensor, but it conflicts with the keyboard brightness controls (see above).

There's also an accelerometer, but it's off by default and will be removed from future builds.

Combo jack mic tests

The Framework laptop ships with a combo jack on the left side, which allows you to plug in a CTIA (source) headset. In human terms, it's a device that has both a stereo output and a mono input, typically a headset or ear buds with a microphone somewhere.

It works, which is better than the Purism (which only had audio out), but is par for the course for that kind of onboard hardware. Because of electrical interference, such sound cards very often pick up lots of noise from the board.

With a Jabra Evolve 40, the built-in USB sound card generates basically zero noise on silence (invisible down to -60dB in Audacity) while plugging it in directly generates a solid -30dB hiss. There is a noise-reduction system in that sound card, but the difference is still quite striking.

On a comparable setup (curie, a 2017 Intel NUC), there is also a hiss with the Jabra headset, but it's quieter, more in the order of -40/-50 dB, a noticeable difference. Interestingly, testing with my Mee Audio Pro M6 earbuds leads to a little more hiss on curie, more in the -35/-40 dB range, close to the Framework.

Also note that another sound card, the Antlion USB adapter that comes with the ModMic 4, also gives me pretty close to silence on a quiet recording, picking up less than -50dB of background noise. It's actually probably picking up the fans in the office, which do make audible noises.

In other words, the hiss of the sound card built in the Framework laptop is so loud that it makes more noise than the quiet fans in the office. Or, another way to put it is that two USB sound cards (the Jabra and the Antlion) are able to pick up ambient noise in my office but not the Framework laptop.

See also my audio page.

Performance tests

Compiling Linux 5.19.11

On a single core, compiling the Debian version of the Linux kernel takes around 100 minutes:

5411.85user 673.33system 1:37:46elapsed 103%CPU (0avgtext+0avgdata 831700maxresident)k
10594704inputs+87448000outputs (9131major+410636783minor)pagefaults 0swaps

This was using 16 watts of power, with full screen brightness.

With all 16 cores (make -j16), it takes less than 25 minutes:

19251.06user 2467.47system 24:13.07elapsed 1494%CPU (0avgtext+0avgdata 831676maxresident)k
8321856inputs+87427848outputs (30792major+409145263minor)pagefaults 0swaps

I had to plug in the normal power supply after a few minutes because the battery would actually run down on my desk's power grommet, which only supplies 34 watts.

During compilation, fans were spinning really hard, quite noisy, but not painfully so.

The laptop was sucking 55 watts of power, steadily:

  Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Average  87.9   0.0  10.7   1.4   0.1 17.8 6583.6 5054.3 233.0 223.9 233.1  55.96
 GeoMean  87.9   0.0  10.6   1.2   0.0 17.6 6427.8 5048.1 227.6 218.7 227.7  55.96
  StdDev   1.4   0.0   1.2   0.6   0.2  3.0 1436.8  255.5 50.0 47.5 49.7   0.20
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Minimum  85.0   0.0   7.8   0.5   0.0 13.0 3594.0 4638.0 117.0 111.0 120.0  55.52
 Maximum  90.8   0.0  12.9   3.5   0.8 38.0 10174.0 5901.0 374.0 362.0 375.0  56.41
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
CPU:  55.96 Watts on average with standard deviation 0.20
Note: power read from RAPL domains: package-0, uncore, package-0, core, psys.
These readings do not cover all the hardware in this device.
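
For reference, tables like the one above can be produced with powerstat in RAPL mode; an invocation along these lines (flags from memory, check the man page) samples once per second for a minute:

powerstat -R 1 60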

memtest86+

I ran Memtest86+ v6.00b3. It shows something like this:

Memtest86+ v6.00b3      | 12th Gen Intel(R) Core(TM) i5-1240P
CLK/Temp: 2112MHz    78/78°C | Pass  2% #
L1 Cache:   48KB    414 GB/s | Test 46% ##################
L2 Cache: 1.25MB    118 GB/s | Test #3 [Moving inversions, 1s & 0s] 
L3 Cache:   12MB     43 GB/s | Testing: 16GB - 18GB [1GB of 15.7GB]
Memory  :  15.7GB  14.9 GB/s | Pattern: 
--------------------------------------------------------------------------------
CPU: 4P+8E-Cores (16T)    SMP: 8T (PAR))  | Time:  0:27:23  Status: Pass     \
RAM: 1600MHz (DDR4-3200) CAS 22-22-22-51  | Pass:  1        Errors: 0
--------------------------------------------------------------------------------

Memory SPD Information
----------------------
 - Slot 2: 16GB DDR-4-3200 - Crucial CT16G4SFRA32A.C16FP (2022-W23)







                          Framework FRANMACP04
 <ESC> Exit  <F1> Configuration  <Space> Scroll Lock            6.00.unknown.x64

So about 30 minutes for a full 16GB memory test.

Software setup

Once I had everything in the hardware setup, I figured, voilà, I'm done, I'm just going to boot this beautiful machine and I can get back to work.

I don't understand why I am so naïve sometimes. It's mind-boggling.

Obviously, it didn't happen that way at all, and I spent the best part of the following three days tinkering with the laptop.

Secure boot and EFI

First, I couldn't boot off of the NVMe drive I transferred from the previous laptop (the Purism) and the BIOS was not very helpful: it was just complaining about not finding any boot device, without dropping me in the real BIOS.

At first, I thought it was a problem with my NVMe drive, because it's not in the list of compatible SSD drives from upstream. But I figured out how to enter the BIOS (press F2 manically, of course), which showed that the NVMe drive was actually detected. It just didn't boot, because it was an old (2010!!) Debian install without EFI.

So from there, I disabled secure boot and booted a grml image to try to recover. And by "boot" I mean I managed to get to the grml boot loader, which promptly failed to load its own root file system somehow. I still have to investigate exactly what happened there, but it failed some time after the initrd load with:

Unable to find medium containing a live file system

This, it turns out, was recently fixed in Debian, so a daily GRML build will not have this problem. The upcoming 2022 release (likely 2022.10 or 2022.11) will also get the fix.

I did manage to boot the development version of the Debian installer, which was a surprisingly good experience: it mounted the encrypted drives and did everything pretty smoothly. It even offered to reinstall the boot loader, but that ultimately (and correctly, as it turns out) failed because I didn't have a /boot/efi partition.

At this point, I realized there was no easy way out of this, and I just proceeded to completely reinstall Debian. I had a spare NVMe drive lying around (backups FTW!) so I just swapped that in, rebooted in the Debian installer, and did a clean install. I wanted to switch to bookworm anyways, so I guess that's done too.

Storage limitations

Another thing that happened during setup is that I tried to copy over the internal 2.5" SSD drive from the Purism to the Framework 1TB expansion card. There's no 2.5" slot in the new laptop, so that's pretty much the only option for storage expansion.

I was tired and did something wrong. I ended up wiping the partition table on the original 2.5" drive.

Oops.

It might be recoverable, but just restoring the partition table didn't work either, so I'm not sure how I recover the data there. Normally, everything on my laptops and workstations is designed to be disposable, so that wasn't that big of a problem. I did manage to recover most of the data thanks to git-annex reinit, but that was a little hairy.

Bootstrapping Puppet

Once I had some networking, I had to install all the packages I needed. The time I spent setting up my workstations with Puppet has finally paid off. What I actually did was to restore two critical directories:

/etc/ssh
/var/lib/puppet

So that I would keep the previous machine's identity. That way I could contact the Puppet server and install whatever was missing. I used my Puppet optimization trick to do a batch install and then I had a good base setup, although not exactly as it was before: 1700 packages had been installed manually on angela before the reinstall, outside of Puppet.

I did not inspect each one individually, but I did go through /etc and copied over more SSH keys, for backups and SMTP over SSH.

LVFS support

It looks like there's support for the (de-facto) standard LVFS firmware update system. At least I was able to update the UEFI firmware with a simple:

apt install fwupd-amd64-signed
fwupdmgr refresh
fwupdmgr get-updates
fwupdmgr update

Nice. The 12th gen BIOS updates, currently (January 2023) beta, can be deployed through LVFS with:

fwupdmgr enable-remote lvfs-testing
echo 'DisableCapsuleUpdateOnDisk=true' >> /etc/fwupd/uefi_capsule.conf 
fwupdmgr update

Those instructions come from the beta forum post. I performed the BIOS update on 2023-01-16T16:00-0500.

Resolution tweaks

The Framework laptop resolution (2256px X 1504px) is big enough to give you a pretty small font size, so welcome to the marvelous world of "scaling".

The Debian wiki page has a few tricks for this.

Console

This will make the console and grub fonts more readable:

cat >> /etc/default/console-setup <<EOF
FONTFACE="Terminus"
FONTSIZE=32x16
EOF
echo GRUB_GFXMODE=1024x768 >> /etc/default/grub
update-grub

Xorg

Adding this to your .Xresources will make everything look much bigger:

! 1.5*96
Xft.dpi: 144

Apparently, some of this can also help:

! These might also be useful depending on your monitor and personal preference:
Xft.autohint: 0
Xft.lcdfilter:  lcddefault
Xft.hintstyle:  hintfull
Xft.hinting: 1
Xft.antialias: 1
Xft.rgba: rgb

In my experience it also makes things look a little fuzzier, which is frustrating because you have this awesome monitor but everything looks out of focus. Just bumping Xft.dpi by a 1.5 factor looks good to me.

The Debian Wiki has a page on HiDPI, but it's not as good as the Arch Wiki, where the above blurb comes from. I am not using the latter because I suspect it's causing some of the "fuzziness".

TODO: find the equivalent of this GNOME hack in i3? (gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"), taken from this Framework guide

Issues

BIOS configuration

The Framework BIOS has some minor issues. One issue I personally encountered is that I had disabled Quick boot and Quiet boot in the BIOS to diagnose the above boot issues. This, in turn, triggers a bug where the BIOS boot manager (F12) would just hang completely. It would also fail to boot from an external USB drive.

The current fix (as of BIOS 3.03) is to re-enable both Quick boot and Quiet boot. Presumably this is something that will get fixed in a future BIOS update.

Note that the following keybindings are active in the BIOS POST check:

Key     Meaning
F2      Enter BIOS setup menu
F12     Enter BIOS boot manager
Delete  Enter BIOS setup menu

WiFi compatibility issues

I couldn't make WiFi work at first. Obviously, the default Debian installer doesn't ship with proprietary firmware (although that might change soon), so the WiFi card didn't work out of the box. But even after copying the firmware over from a USB stick, I couldn't quite manage to find the right combination of ip/iw/wpa-supplicant (yes, after repeatedly copying a bunch more packages over to get those bootstrapped). (Next time I should probably try something like this post.)

Thankfully, I had a little USB-C dongle with a RJ-45 jack lying around. That also required a firmware blob, but it was a single package to copy over, and with that loaded, I had network.

Eventually, I did manage to make WiFi work; the problem was more on the side of "I forgot how to configure a WPA network by hand from the command line" than anything else. NetworkManager worked fine and got WiFi working correctly.

Note that this is with Debian bookworm, which has the 5.19 Linux kernel, and with the firmware-nonfree (firmware-iwlwifi, specifically) package.

Battery life

I was getting about 7 hours of battery life on the Purism Librem 13v4, and that's after a year or two of battery wear. Now, I still have about 7 hours of battery life, which is nicer than my old ThinkPad X220 (20 minutes!) but really, it's not that good for a new generation laptop. (The 12th generation Intel chipset probably improved things compared to the previous Framework laptop, but I don't have an 11th gen Framework to compare with.)

(Note that those are estimates from my status bar, not wall clock measurements. They should still be comparable between the Purism and Framework, that said.)

The battery life doesn't seem up to, say, Dell XPS 13, ThinkPad X1, and of course not the Apple M1, where I would expect 10+ hours of battery life out of the box.

That said, I do get those kinds of estimates when the machine is fully charged and idle. In fact, when everything is quiet and nothing is plugged in, I get dozens of hours of estimated battery life (I've seen 25h!). So power usage fluctuates quite a bit depending on usage, which I guess is expected.

Concretely, so far, light web browsing, reading emails and writing notes in Emacs (e.g. this file) takes about 8W of power:

Time    User  Nice   Sys  Idle    IO  Run Ctxt/s  IRQ/s Fork Exec Exit  Watts
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Average   1.7   0.0   0.5  97.6   0.2  1.2 4684.9 1985.2 126.6 39.1 128.0   7.57
 GeoMean   1.4   0.0   0.4  97.6   0.1  1.2 4416.6 1734.5 111.6 27.9 113.3   7.54
  StdDev   1.0   0.2   0.2   1.2   0.0  0.5 1584.7 1058.3 82.1 44.0 80.2   0.71
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
 Minimum   0.2   0.0   0.2  94.9   0.1  1.0 2242.0  698.2 82.0 17.0 82.0   6.36
 Maximum   4.1   1.1   1.0  99.4   0.2  3.0 8687.4 4445.1 463.0 249.0 449.0   9.10
-------- ----- ----- ----- ----- ----- ---- ------ ------ ---- ---- ---- ------
Summary:
System:   7.57 Watts on average with standard deviation 0.71

Expansion cards matter a lot in the battery life (see below for a thorough discussion), my normal setup is 2xUSB-C and 1xUSB-A (yes, with an empty slot, and yes, to save power).

Interestingly, playing a (720p) video in a window takes up more power (10.5W) than in full screen (9.5W), but I blame that on my desktop setup (i3 + compton)... Not sure if mpv hits the VA-API, maybe not in windowed mode. Similar results with 1080p, interestingly, except the windowed playback struggles to keep up altogether. Full screen playback takes a relatively comfortable 9.5W, which means a solid 5h+ of playback, which is fine by me.

Fooling around on the web, small edits, youtube-dl, and I'm at around 80% battery after about an hour, with an estimated 5h left, which is a little disappointing. I had a 7h remaining estimate before I started goofing around on Discourse, so I suspect the website is a pretty big battery drain, actually. I see about 10-12W, while I was probably at half that (6-8W) just playing music with mpv in the background...

In other words, it looks like editing posts in Discourse with Firefox takes a solid 4-6W of power. Amazing and gross.

(When writing about abusive power usage generates more power usage, is that a heisenbug? Or a schrödinbug?)

Power management

Compared to the Purism Librem 13v4, the ongoing power usage seems to be slightly better. An anecdotal metric is that the Purism would draw 800mA idle, while the more powerful Framework manages a little over 500mA as I'm typing this, fluctuating between 450 and 600mA. That is without any active expansion card, except the storage. Those numbers come from the output of tlp-stat -b and, unfortunately, the "ampere" unit makes it quite hard to compare them, because the voltage is not necessarily the same between the two platforms.
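
To compare across platforms you have to multiply by the battery voltage (P = V × I). As a rough worked example: the Framework's 55Wh battery at a 3572mAh design capacity works out to about 15.4V nominal, and assuming a typical ~11.1V 3-cell pack in the Purism (an assumption, I haven't checked):

Framework: 0.5 A × 15.4 V ≈ 7.7 W
Purism:    0.8 A × 11.1 V ≈ 8.9 W

So the Framework's idle draw does look a bit lower in actual watts, not just in amperes.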

  • TODO: review Arch Linux's tips on power saving
  • TODO: i915 driver has a lot of parameters, including some about power saving, see, again, the arch wiki, and particularly enable_fbc=1

TL;DR: power management on the laptop is an issue, but there are various tweaks you can make to improve it (a consolidated sketch follows the list below). Try:

  • powertop --auto-tune
  • apt install tlp && systemctl enable tlp
  • nvme.noacpi=1 mem_sleep_default=deep on the kernel command line may help with standby power usage
  • keep only USB-C expansion cards plugged in, all others suck power even when idle
  • consider upgrading the BIOS to latest beta (3.06 at the time of writing), unverified power savings
  • latest Linux kernels (6.2) promise power savings as well (unverified)
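
Consolidated into shell form, the list above looks something like this (a sketch; verify each item on your own system before relying on it):

# one-shot runtime tuning, does not persist across reboots
sudo powertop --auto-tune

# persistent power management daemon
sudo apt install tlp && sudo systemctl enable --now tlp

# kernel command line options, e.g. in /etc/default/grub:
#   GRUB_CMDLINE_LINUX="nvme.noacpi=1 mem_sleep_default=deep"
sudo update-grub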

Background on CPU architecture

There were power problems in the 11th gen Framework laptop, according to this report from Linux After Dark, so the issues with power management on the Framework are not new.

The 12th generation Intel CPU (AKA "Alder Lake") is a big-little architecture with "power-saving" and "performance" cores. There used to be performance problems introduced by the scheduler in Linux 5.16, but those were eventually fixed in 5.18, which uses Intel's hardware as an "intelligent, low-latency hardware-assisted scheduler". According to Phoronix, the 5.19 release improved the power saving, at the cost of some performance penalty. There were also patch series to make the scheduler configurable, but it doesn't look like those have been merged as of 5.19. There was also a session about this at the 2022 Linux Plumbers conference, but it stopped short of going deeper into the specific problems Linux is facing on Alder Lake:

Specifically, the kernel's energy-aware scheduling heuristics don't work well on those CPUs. A number of features present there complicate the energy picture; these include SMT, Intel's "turbo boost" mode, and the CPU's internal power-management mechanisms. For many workloads, running on an ostensibly more power-hungry Pcore can be more efficient than using an Ecore. Time for discussion of the problem was lacking, though, and the session came to a close.

All this to say that the 12th gen Intel line shipped with this Framework series should have better power management thanks to its power-saving cores. And Linux has had the scheduler changes to make use of this (but maybe is still having trouble). In any case, this might not be the source of the power management problems on my laptop, quite the opposite.

Also note that the firmware updates for various chipsets are supposed to improve things eventually.

On the other hand, The Verge simply declared the whole P-series a mistake...

Attempts at improving power usage

I did try to follow some of the tips in this forum post. The tricks powertop --auto-tune and tlp's PCIE_ASPM_ON_BAT=powersupersave basically did nothing: I was stuck at 10W power usage in powertop (600+mA in tlp-stat).

Apparently, I should be able to reach the C8 CPU power state (or even C9, C10) in powertop, but I seem to be stuck at C7. (Although I'm not sure how to read that tab in powertop: in the Core(HW) column there are only C3/C6/C7 states, and most cores are 85% in C7 or maybe C6. But the next column over does show many CPUs in C10 states...)

As it turns out, the graphics card actually takes up a good chunk of power unless proper power management is enabled (see below). After tweaking this, I did manage to get down to around 7W power usage in powertop.

Expansion cards actually do take up power, and so does the screen, obviously. The fully-lit screen takes a solid 2-3W of power compared to the fully dimmed screen. With all expansion cards removed and the laptop idle, I can spin it down to 4 watts of power usage at the moment, and an amazing 2 watts with the screen turned off.

Caveats

The abusive (10W+) power usage that I initially found could be a problem with my desktop configuration: I have this silly status bar that updates every second and probably causes redraws... The CPU certainly doesn't seem to spin down below 1GHz. Also note that this is with an actual desktop running everything: it could very well be that some things (I'm looking at you, Signal Desktop) take up an unreasonable amount of power on their own (hello, 1W per Electron app, sheesh). Syncthing and containerd (Docker!) also seem to take a good 500mW just sitting there.

Beyond my desktop configuration, this could, of course, be a Debian-specific problem; your favorite distribution might be better at power management.

Idle power usage tests

Some expansion cards waste energy, even when unused. Here is a summary of the findings from the powerstat page. I also include other devices tested in this page for completeness:

Device        Minimum  Average  Max    Stdev   Note
Screen, 100%  2.4W     2.6W     2.8W   N/A
Screen, 1%    30mW     140mW    250mW  N/A
Backlight 1   290mW    ?        ?      ?       fairly small, all things considered
Backlight 2   890mW    1.2W     3W?    460mW?  geometric progression
Backlight 3   1.69W    1.5W     1.8W?  390mW?  significant power use
Radios        100mW    250mW    N/A    N/A
USB-C         N/A      N/A      N/A    N/A     negligible power drain
USB-A         10mW     10mW     ?      10mW    almost negligible
DisplayPort   300mW    390mW    600mW  N/A     not passive
HDMI          380mW    440mW    1W?    20mW    not passive
1TB SSD       1.65W    1.79W    2W     12mW    significant, probably higher when busy
MicroSD       1.6W     3W       6W     1.93W   highest power usage, possibly even higher when busy
Ethernet      1.69W    1.64W    1.76W  N/A     comparable to the SSD card

So it looks like all expansion cards but the USB-C ones are active, i.e. they draw power while idle. The USB-A cards are the least concern, drawing 10mW, pretty much within the margin of error. But both the DisplayPort and HDMI cards do take a few hundred milliwatts. It looks like USB-A connectors have the fundamental flaw that they necessarily draw some power because they lack the power negotiation features of USB-C. At least according to this post:

It seems the USB A must have power going to it all the time, that the old USB 2 and 3 protocols, the USB C only provides power when there is a connection. Old versus new.

Apparently, this is a problem specific to the USB-C to USB-A adapter that ships with the Framework. Some people have actually changed their orders to all USB-C because of this problem, but I'm not sure the problem is as serious as claimed in the forums. I couldn't reproduce the "one watt" power drains suggested elsewhere, at least not repeatedly. (A previous version of this post did show such a power drain, but it was in a less controlled test environment than the series of more rigorous tests above.)

The worst offenders are the storage cards: the SSD drive takes at least one watt of power and the MicroSD card seems to want to take all the way up to 6 watts, both just sitting there doing nothing. This confirms claims found elsewhere of 1.4W SSD power usage (but not 5W). The former post has instructions on how to disable the card in software. The MicroSD card has been reported as using 2 watts, but I've seen it as high as 6 watts, which is pretty damning.

The Framework team has a beta update for the DisplayPort adapter but currently only for Windows (LVFS technically possible, "under investigation"). A USB-A firmware update is also under investigation. It is therefore likely at least some of those power management issues will eventually be fixed.

Note that the upcoming Ethernet card has a reported 2-8W power usage, depending on traffic. I did my own power usage tests in powerstat-wayland and they seem lower than 2W.

The upcoming 6.2 Linux kernel might also improve battery usage when idle, see this Phoronix article for details, likely in early 2023.

Idle power usage tests under Wayland

Update: I redid those tests under Wayland, see powerstat-wayland for details. The TL;DR: is that power consumption is either smaller or similar.

Idle power usage tests, 3.06 beta BIOS

I redid the idle tests after the 3.06 beta BIOS update and ended up with these results:

Device            Minimum  Average  Max    Stdev   Note
Baseline          1.96W    2.01W    2.11W  30mW    1 USB-C, screen off, backlight off, no radios
2 USB-C           1.95W    2.16W    3.69W  430mW   USB-C confirmed as mostly passive...
3 USB-C           1.95W    2.16W    3.69W  430mW   ... although with extra stdev
1TB SSD           3.72W    3.85W    4.62W  200mW   unchanged from before upgrade
1 USB-A           1.97W    2.18W    4.02W  530mW   unchanged
2 USB-A           1.97W    2.00W    2.08W  30mW    unchanged
3 USB-A           1.94W    1.99W    2.03W  20mW    unchanged
MicroSD w/o card  3.54W    3.58W    3.71W  40mW    significant improvement! 2-3W power saving!
MicroSD w/ card   3.53W    3.72W    5.23W  370mW   new measurement! increased deviation
DisplayPort       2.28W    2.31W    2.37W  20mW    unchanged
1 HDMI            2.43W    2.69W    4.53W  460mW   unchanged
2 HDMI            2.53W    2.59W    2.67W  30mW    unchanged
External USB      3.85W    3.89W    3.94W  30mW    new result
Ethernet          3.60W    3.70W    4.91W  230mW   unchanged

Note that the table summary is different than the previous table: here we show the absolute numbers while the previous table was doing a confusing attempt at showing relative (to the baseline) numbers.

Conclusion: the 3.06 BIOS update did not significantly change idle power usage stats except for the MicroSD card which has significantly improved.

The new "external USB" test is also interesting: it shows how the provided 1TB SSD card performs (admirably) compared to existing devices. The other new result is the MicroSD card with a card which, interestingly, uses less power than the 1TB SSD drive.

Standby battery usage

I wrote a quick hack to evaluate how much power is used during sleep. Apparently, this is one of the areas that should have improved since the first Framework model; let's find out.
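
The hack, roughly, is a hook in /usr/lib/systemd/system-sleep/ that logs the battery charge to the journal before suspend and after resume; something like this sketch (the battery name BAT1 and the exact output format are assumptions):

#!/bin/sh
# systemd runs these hooks with $1 set to "pre" (before sleep) or "post" (after resume)
f=/sys/class/power_supply/BAT1/charge_now
echo "$f = $(cat $f) [mAh]"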

My baseline for comparison is the Purism laptop, which, in 10 minutes, went from this:

sep 28 11:19:45 angela systemd-sleep[209379]: /sys/class/power_supply/BAT/charge_now                      =   6045 [mAh]

... to this:

sep 28 11:29:47 angela systemd-sleep[209725]: /sys/class/power_supply/BAT/charge_now                      =   6037 [mAh]

That's 8mAh per 10 minutes (and 2 seconds), or 48mA, or, with this battery, about 127 hours or roughly 5 days of standby. Not bad!
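
For the record, the arithmetic used throughout these standby tests is simply:

current = Δcharge / Δtime  = 8 mAh / (602 s / 3600 s/h) ≈ 48 mA
standby = charge / current = 6045 mAh / 48 mA ≈ 126 h ≈ 5 days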

In comparison, here is my really old x220, before:

sep 29 22:13:54 emma systemd-sleep[176315]: /sys/class/power_supply/BAT0/energy_now                     =   5070 [mWh]

... after:

sep 29 22:23:54 emma systemd-sleep[176486]: /sys/class/power_supply/BAT0/energy_now                     =   4980 [mWh]

... which is 90mWh in 10 minutes, or a whopping 540mW (note that the X220 reports energy in mWh, not charge in mAh), which was possibly okay when this battery was new (62,000mWh, so about 100 hours, or about 5 days), but this battery is almost dead and has only 5210mWh when full, so only 10 hours of standby.

And here is the Framework performing a similar test, before:

sep 29 22:27:04 angela systemd-sleep[4515]: /sys/class/power_supply/BAT1/charge_full                    =   3518 [mAh]
sep 29 22:27:04 angela systemd-sleep[4515]: /sys/class/power_supply/BAT1/charge_now                     =   2861 [mAh]

... after:

sep 29 22:37:08 angela systemd-sleep[4743]: /sys/class/power_supply/BAT1/charge_now                     =   2812 [mAh]

... which is 49mAh in a little over 10 minutes (and 4 seconds), or 292mA, much more than the Purism (the X220's milliwatt figures above aren't directly comparable). At this rate, the battery would last only 12 hours on standby!! That is pretty bad.

Note that this was done with the following expansion cards:

  • 2 USB-C
  • 1 1TB SSD drive
  • 1 USB-A with a hub connected to it, with keyboard and LAN

Preliminary tests without the hub (over one minute) show that it doesn't significantly affect this power consumption (300mA).

This guide also suggests booting with nvme.noacpi=1 but this still gives me about 5mAh/min (or 300mA).

Adding mem_sleep_default=deep to the kernel command line does make a difference. Before:

sep 29 23:03:11 angela systemd-sleep[3699]: /sys/class/power_supply/BAT1/charge_now                     =   2544 [mAh]

... after:

sep 29 23:04:25 angela systemd-sleep[4039]: /sys/class/power_supply/BAT1/charge_now                     =   2542 [mAh]

... which is 2mAh in 74 seconds, or 97mA, which brings us to a more reasonable 36 hours, or a day and a half. It's still well above the X220 and Purism power usage. It's also far from the 0.4% per hour promised by upstream, which would be 14mA for the 3500mAh battery.

It should also be noted that this "deep" sleep mode is a little more disruptive than regular sleep. As you can see from the timing, it took more than 10 seconds for the laptop to resume, which feels a little alarming as you're banging the keyboard to bring it back to life.

You can confirm the current sleep mode with:

# cat /sys/power/mem_sleep
s2idle [deep]

In the above, deep is selected. You can change it on the fly with:

printf s2idle > /sys/power/mem_sleep

Here's another test:

sep 30 22:25:50 angela systemd-sleep[32207]: /sys/class/power_supply/BAT1/charge_now                     =   1619 [mAh]
sep 30 22:31:30 angela systemd-sleep[32516]: /sys/class/power_supply/BAT1/charge_now                     =   1613 [mAh]

... better! 6 mAh in about 6 minutes, works out to 63.5mA, so more than two days standby.

A longer test:

oct 01 09:22:56 angela systemd-sleep[62978]: /sys/class/power_supply/BAT1/charge_now                     =   3327 [mAh]
oct 01 12:47:35 angela systemd-sleep[63219]: /sys/class/power_supply/BAT1/charge_now                     =   3147 [mAh]

That's 180mAh in about 3.5h, 52mA! Now at 66h, or almost 3 days.

I wasn't sure why I was seeing such fluctuations in those tests, but as it turns out, expansion card power tests show that they do significantly affect power usage, especially the SSD drive, which can take up to two full watts of power even when idle. I didn't control for expansion cards in the above tests — running them with whatever card I had plugged in without paying attention — so it's likely the cause of the high power usage and fluctuations.

It might be possible to work around this problem by disabling USB devices before suspend. TODO. See also this post.
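
An untested sketch of what that could look like, as another system-sleep hook (whether de-authorizing devices this way actually cuts the cards' suspend power is exactly the open question):

#!/bin/sh
# UNTESTED: de-authorize all USB devices before sleep, re-authorize on resume
case "$1" in
    pre)  for d in /sys/bus/usb/devices/*/authorized; do echo 0 > "$d"; done ;;
    post) for d in /sys/bus/usb/devices/*/authorized; do echo 1 > "$d"; done ;;
esac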

In the meantime, I have been able to get much better suspend performance by unplugging all modules. Then I get this result:

oct 04 11:15:38 angela systemd-sleep[257571]: /sys/class/power_supply/BAT1/charge_now                     =   3203 [mAh]
oct 04 15:09:32 angela systemd-sleep[257866]: /sys/class/power_supply/BAT1/charge_now                     =   3145 [mAh]

Which is 14.8mA! Almost exactly the number promised by Framework! With a full battery, that means about 10 days of suspend time. This is actually pretty good, and far beyond what I was expecting when starting down this journey.

So, once the expansion cards are unplugged, suspend power usage is actually quite reasonable. More detailed standby tests are available in the standby-tests page, with a summary below.

There is also some hope that the Chromebook edition — specified for 14 days of standby time — could bring some firmware improvements back down to the normal line. Some of those issues were reported upstream in April 2022, but there doesn't seem to have been any progress there since.

TODO: one final solution here is suspend-then-hibernate, which Windows uses for this

TODO: consider implementing the S0ix sleep states , see also troubleshooting

TODO: consider https://github.com/intel/pm-graph

Standby expansion cards test results

This table is a summary of the more extensive standby-tests I have performed:

Device       Wattage  Amperage  Days  Note
baseline     0.25W    16mA      9     sleep=deep nvme.noacpi=1
s2idle       0.29W    18.9mA    ~7    sleep=s2idle nvme.noacpi=1
normal nvme  0.31W    20mA      ~7    sleep=s2idle without nvme.noacpi=1
1 USB-C      0.23W    15mA      ~10
2 USB-C      0.23W    14.9mA          same as above
1 USB-A      0.75W    48.7mA    3     +500mW (!!) for the first USB-A card!
2 USB-A      1.11W    72mA      2     +360mW
3 USB-A      1.48W    96mA      <2    +370mW
1TB SSD      0.49W    32mA      <5    +260mW
MicroSD      0.52W    34mA      ~4    +290mW
DisplayPort  0.85W    55mA      <3    +620mW (!!)
1 HDMI       0.58W    38mA      ~4    +250mW
2 HDMI       0.65W    42mA      <4    +70mW (?)

Conclusions:

  • USB-C cards take no extra power on suspend, possibly less than empty slots, more testing required

  • USB-A cards take a lot more power on suspend (300-500mW) than on regular idle (~10mW, almost negligible)

  • 1TB SSD and MicroSD cards seem to take a reasonable amount of power (260-290mW), compared to their runtime equivalents (1-6W!)

  • DisplayPort takes a surprisingly large amount of power (620mW), almost double its average runtime usage (390mW)

  • HDMI cards take, surprisingly, less power (250mW) in standby than the DP card (620mW)

  • and oddly, a second card adds less power usage (70mW?!) than the first, maybe a circuit is used by both?

A discussion of those results is in this forum post.

Standby expansion cards test results, 3.06 beta BIOS

Framework recently (2022-11-07) announced that they will publish a firmware upgrade to address some of the USB-C issues, including power management. This could positively affect the above result, improving both standby and runtime power usage.

The update came out in December 2022 and I redid my analysis with the following results:

Device        Wattage  Amperage  Days  Note
baseline      0.25W    16mA      9     no cards, same as before upgrade
1 USB-C       0.25W    16mA      9     same as before
2 USB-C       0.25W    16mA      9     same
1 USB-A       0.80W    62mA      3     +550mW!! worse than before
2 USB-A       1.12W    73mA      <2    +320mW, on top of the above, bad!
Ethernet      0.62W    40mA      3-4   new result, decent
1TB SSD       0.52W    34mA      4     a bit worse than before (+2mA)
MicroSD       0.51W    22mA      4     same
DisplayPort   0.52W    34mA      4+    upgrade improved by 300mW
1 HDMI        ?        38mA      ?     same
2 HDMI        ?        45mA      ?     a bit worse than before (+3mA)
Normal        1.08W    70mA      ~2    Ethernet, 2 USB-C, USB-A

Full results in standby-tests-306. The big takeaway for me is that the update did not improve power usage on the USB-A ports, which is a big problem for my use case. There is a notable improvement in the DisplayPort power consumption, which brings it more in line with the HDMI connector, but it still doesn't properly turn off on suspend either.

Even worse, the USB-A ports now sometimes fail to resume after suspend, which is pretty annoying. This is a known problem that will hopefully get fixed in the final release.

Battery wear protection

The BIOS has an option to limit charge to 80% to mitigate battery wear. There's a way to control the embedded controller from runtime with fw-ectool, partly documented here. The command would be:

sudo ectool fwchargelimit 80

I looked at building this myself but failed to get it running. I opened an RFP so that we can ship this in Debian, and also documented my work there.

Note that there is now a counter that tracks charge/discharge cycles. It's visible in tlp-stat -b, which is a nice improvement:

root@angela:/home/anarcat# tlp-stat -b
--- TLP 1.5.0 --------------------------------------------

+++ Battery Care
Plugin: generic
Supported features: none available

+++ Battery Status: BAT1
/sys/class/power_supply/BAT1/manufacturer                   = NVT
/sys/class/power_supply/BAT1/model_name                     = Framewo
/sys/class/power_supply/BAT1/cycle_count                    =      3
/sys/class/power_supply/BAT1/charge_full_design             =   3572 [mAh]
/sys/class/power_supply/BAT1/charge_full                    =   3541 [mAh]
/sys/class/power_supply/BAT1/charge_now                     =   1625 [mAh]
/sys/class/power_supply/BAT1/current_now                    =    178 [mA]
/sys/class/power_supply/BAT1/status                         = Discharging

/sys/class/power_supply/BAT1/charge_control_start_threshold = (not available)
/sys/class/power_supply/BAT1/charge_control_end_threshold   = (not available)

Charge                                                      =   45.9 [%]
Capacity                                                    =   99.1 [%]

One thing that is still missing is the charge threshold data (the (not available) above). There's been some work to make that accessible in August, stay tuned? This would also make it possible to implement hysteresis support.
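If that lands, the threshold should become settable through the standard sysfs interface; a hypothetical example, assuming the charge_control files above stop being (not available):

echo 80 > /sys/class/power_supply/BAT1/charge_control_end_threshold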

Ethernet expansion card

The Framework ethernet expansion card is a fancy little doodle: "2.5Gbit/s and 10/100/1000Mbit/s Ethernet", the "clear housing lets you peek at the RTL8156 controller that powers it". Which is another way to say "we didn't completely finish prod on this one, so it kind of looks like we 3D-printed this in the shop"....

The card is a little bulky, but I guess that's inevitable considering the RJ-45 form factor when compared to the thin Framework laptop.

I had a serious issue when first trying it: the link LEDs just wouldn't come up. I made a full bug report in the forum and with upstream support, but eventually figured it out on my own. It's (of course) a power saving issue: if you reboot the machine, the links come up while the laptop is running the BIOS POST check and even when the Linux kernel boots.

I first thought that the problem was likely related to the powertop service which I run at boot time to tweak some power saving settings.

It seems like this:

echo 'on' > '/sys/bus/usb/devices/4-2/power/control'

... is a good workaround to bring the card back online. You can even return to power saving mode and the card will still work:

echo 'auto' > '/sys/bus/usb/devices/4-2/power/control'

Further research by Matt_Hartley from the Framework Team found this issue in the tlp tracker that shows how the USB_AUTOSUSPEND setting force-enables power saving even for devices whose driver doesn't support it, which, in retrospect, just sounds like a bad idea. To quote that issue:

By default, USB power saving is active in the kernel, but not force-enabled for incompatible drivers. That is, devices that support suspension will suspend, drivers that do not, will not.

So the fix is actually to uninstall tlp or disable that setting by adding this to /etc/tlp.conf:

USB_AUTOSUSPEND=0

... but that disables auto-suspend on all USB devices, which may hurt other power usage performance. I have found that a combination of:

USB_AUTOSUSPEND=1
USB_DENYLIST="0bda:8156"

and this on the kernel commandline:

usbcore.quirks=0bda:8156:k

... actually does work correctly. I now have this in my /etc/default/grub.d/framework-tweaks.cfg file:

# net.ifnames=0: normal interface names ffs (e.g. eth0, wlan0, not wlp166s0)
# nvme.noacpi=1: reduce SSD disk power usage (not working)
# mem_sleep_default=deep: reduce power usage during sleep (not working)
# usbcore.quirk is a workaround for the ethernet card suspend bug: https://guides.frame.work/Guide/Fedora+37+Installation+on+the+Framework+Laptop/108?lang=en
GRUB_CMDLINE_LINUX="net.ifnames=0 nvme.noacpi=1 mem_sleep_default=deep usbcore.quirks=0bda:8156:k"

# fix the resolution in grub for fonts to not be tiny
GRUB_GFXMODE=1024x768

Other than that, I haven't been able to max out the card because I don't have other 2.5Gbit/s equipment at home, which is a little disappointing. But running against my Turris Omnia router, I could pretty much max out a gigabit fairly easily:

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   937 Mbits/sec  238             sender
[  5]   0.00-10.00  sec  1.09 GBytes   934 Mbits/sec                  receiver

The card doesn't require any proprietary firmware blobs which is surprising. Other than the power saving issues, it just works.

In my power tests (see powerstat-wayland), the Ethernet card seems to use about 1.6W of power idle, without link, in the above "quirky" configuration where the card is functional but without autosuspend.

Proprietary firmware blobs

The Framework does need proprietary firmware to operate. Specifically:

  • the WiFi network card shipped with the DIY kit is an AX210 card that requires a 5.19 kernel or later, and the firmware-iwlwifi non-free firmware package
  • the Bluetooth adapter also loads the firmware-iwlwifi package (untested)
  • the graphics work out of the box without firmware, but certain power management features come only with special proprietary firmware, normally shipped in the firmware-misc-nonfree package but currently missing from it

Note that, at the time of writing, the latest i915 firmware from linux-firmware has a serious bug where loading all the accessible firmware results in noticeable — I estimate 200-500ms — lag between the keyboard (not the mouse!) and the display. Symptoms also include tearing and shearing of windows, it's pretty nasty.

One workaround is to delete the two affected firmware files:

cd /lib/firmware && rm adlp_guc_70.1.1.bin adlp_guc_69.0.3.bin
update-initramfs -u

You will get the following warning during build, which is good as it means the problematic firmware is disabled:

W: Possible missing firmware /lib/firmware/i915/adlp_guc_69.0.3.bin for module i915
W: Possible missing firmware /lib/firmware/i915/adlp_guc_70.1.1.bin for module i915

But then it also means that critical firmware isn't loaded, which means, among other things, a higher battery drain. I was able to move from 8.5-10W down to the 7W range after making the firmware work properly. This is also after turning the backlight all the way down, as that takes a solid 2-3W at full blast.

The proper fix is to use some compositing manager. I ended up using compton with the following systemd unit:

[Unit]
Description=start compositing manager
PartOf=graphical-session.target
ConditionHost=angela

[Service]
Type=exec
ExecStart=compton --show-all-xerrors --backend glx --vsync opengl-swc
Restart=on-failure

[Install]
RequiredBy=graphical-session.target

compton is orphaned, however, so you might be tempted to use picom instead, but in my experience the latter uses much more power (1-2W extra) for a similar experience. I also tried compiz but it would just crash with:

anarcat@angela:~$ compiz --replace
compiz (core) - Warn: No XI2 extension
compiz (core) - Error: Another composite manager is already running on screen: 0
compiz (core) - Fatal: No manageable screens found on display :0

When running from the base session, I would get this instead:

compiz (core) - Warn: No XI2 extension
compiz (core) - Error: Couldn't load plugin 'ccp'
compiz (core) - Error: Couldn't load plugin 'ccp'

Thanks to EmanueleRocca for figuring all that out. See also this discussion about power management on the Framework forum.

Note that Wayland environments do not require any special configuration here and actually work better, see my Wayland migration notes for details.

Note that the iwlwifi firmware also looks incomplete. Even with the package installed, I get those errors in dmesg:

[   19.534429] Intel(R) Wireless WiFi driver for Linux
[   19.534691] iwlwifi 0000:a6:00.0: enabling device (0000 -> 0002)
[   19.541867] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-72.ucode (-2)
[   19.541881] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-72.ucode (-2)
[   19.541882] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-72.ucode failed with error -2
[   19.541890] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-71.ucode (-2)
[   19.541895] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-71.ucode (-2)
[   19.541896] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-71.ucode failed with error -2
[   19.541903] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-70.ucode (-2)
[   19.541907] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-70.ucode (-2)
[   19.541908] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-70.ucode failed with error -2
[   19.541913] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-69.ucode (-2)
[   19.541916] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-69.ucode (-2)
[   19.541917] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-69.ucode failed with error -2
[   19.541922] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-68.ucode (-2)
[   19.541926] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-68.ucode (-2)
[   19.541927] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-68.ucode failed with error -2
[   19.541933] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-67.ucode (-2)
[   19.541937] iwlwifi 0000:a6:00.0: firmware: failed to load iwlwifi-ty-a0-gf-a0-67.ucode (-2)
[   19.541937] iwlwifi 0000:a6:00.0: Direct firmware load for iwlwifi-ty-a0-gf-a0-67.ucode failed with error -2
[   19.544244] iwlwifi 0000:a6:00.0: firmware: direct-loading firmware iwlwifi-ty-a0-gf-a0-66.ucode
[   19.544257] iwlwifi 0000:a6:00.0: api flags index 2 larger than supported by driver
[   19.544270] iwlwifi 0000:a6:00.0: TLV_FW_FSEQ_VERSION: FSEQ Version: 0.63.2.1
[   19.544523] iwlwifi 0000:a6:00.0: firmware: failed to load iwl-debug-yoyo.bin (-2)
[   19.544528] iwlwifi 0000:a6:00.0: firmware: failed to load iwl-debug-yoyo.bin (-2)
[   19.544530] iwlwifi 0000:a6:00.0: loaded firmware version 66.55c64978.0 ty-a0-gf-a0-66.ucode op_mode iwlmvm

Some of those are available in the latest upstream firmware package (iwlwifi-ty-a0-gf-a0-71.ucode, -68, and -67), but not all (e.g. iwlwifi-ty-a0-gf-a0-72.ucode is missing). It's unclear what those do or don't do, as the WiFi seems to work well without them.

I still copied them in from the latest linux-firmware package in the hope they would help with power management, but I did not notice a change after loading them.

There are also multiple knobs on the iwlwifi and iwlmvm drivers. The latter has a power_scheme setting which defaults to 2 (balanced); setting it to 3 (low power) could improve battery usage as well, in theory. The iwlwifi driver also has power_save (defaults to disabled) and power_level (1-5, defaults to 1) settings. See also the output of modinfo iwlwifi and modinfo iwlmvm for other driver options.
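A sketch of how those knobs could be set persistently, untested, with values picked only as an example:

# hypothetical /etc/modprobe.d/iwlwifi-power.conf
options iwlmvm power_scheme=3
options iwlwifi power_save=1 power_level=3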

Graphics acceleration

After loading the latest upstream firmware and setting up a compositing manager (compton, above), I tested the classic glxgears.

Running in a window gives me odd results, as the gears basically grind to a halt:

Running synchronized to the vertical refresh.  The framerate should be
approximately the same as the monitor refresh rate.
137 frames in 5.1 seconds = 26.984 FPS
27 frames in 5.4 seconds =  5.022 FPS

Ouch. 5FPS!

But interestingly, once the window is in full screen, it does hit the monitor refresh rate:

300 frames in 5.0 seconds = 60.000 FPS

I'm not really a gamer and I'm not normally using any of that fancy graphics acceleration stuff (except maybe my browser does?).

I installed intel-gpu-tools for the intel_gpu_top command to confirm the GPU was engaged when running those tests. A nice find. Other useful diagnostic tools include glxgears and glxinfo (in mesa-utils) and vainfo (in the vainfo package).

Following this post, I also made sure to have those settings in my about:config in Firefox, or, in user.js:

user_pref("media.ffmpeg.vaapi.enabled", true);

Note that the guide suggests many other settings to tweak, but those might actually be overkill, see this comment and its parents. I did try forcing hardware acceleration by setting gfx.webrender.all to true, but everything became choppy and weird.

The guide also mentions installing the intel-media-driver package, but I could not find that in Debian.

The Arch wiki has, as usual, an excellent reference on hardware acceleration in Firefox.

Chromium / Signal desktop bugs

It looks like both Chromium and Signal Desktop misbehave with my compositor setup (compton + i3). The fix is to add a persistent flag to Chromium. In Arch, it's conveniently in ~/.config/chromium-flags.conf but that doesn't actually work in Debian. I had to put the flag in /etc/chromium.d/disable-compositing, like this:

export CHROMIUM_FLAGS="$CHROMIUM_FLAGS --disable-gpu-compositing"

It's possible another one of the hundreds of flags might fix this issue better, but I don't really have time to go through this entire, incomplete, and unofficial list (!?!).

Signal Desktop has a similar problem, and doesn't reuse those flags (because of course it doesn't). Instead, I had to rewrite the wrapper script in /usr/local/bin/signal-desktop to use this:

exec /usr/bin/flatpak run --branch=stable --arch=x86_64 org.signal.Signal --disable-gpu-compositing "$@"

This was mostly done in this Puppet commit.

I haven't figured out the root of this problem. I did try using picom and xcompmgr; they both suffer from the same issue. Another Debian testing user on Wayland told me they haven't seen this problem, so hopefully this can be fixed by switching to Wayland.

Graphics card hangs

I believe I might have this bug which results in a total graphical hang for 15-30 seconds. It's fairly rare so it's not too disruptive, but when it does happen, it's pretty alarming.

The comments on that bug report are encouraging though: it seems this is a bug in either mesa or the Intel graphics driver, which means many people have this problem so it's likely to be fixed. There's actually a merge request on mesa already (2022-12-29).

It could also be that bug because the error message I get is actually:

Jan 20 12:49:10 angela kernel: Asynchronous wait on fence 0000:00:02.0:sway[104431]:cb0ae timed out (hint:intel_atomic_commit_ready [i915]) 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 12:0:00000000 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] Resetting chip for stopped heartbeat on rcs0 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC firmware i915/adlp_guc_70.1.1.bin version 70.1 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] HuC firmware i915/tgl_huc_7.9.3.bin version 7.9 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] HuC authenticated 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC submission enabled 
Jan 20 12:49:15 angela kernel: i915 0000:00:02.0: [drm] GuC SLPC enabled

It's a solid 30-second graphical hang. Maybe the keyboard and everything else keep working. The latter bug report is quite long, with many comments, but this one from January 2023 seems to say that Sway 1.8 fixed the problem. There's also an earlier patch to add an extra kernel parameter that supposedly fixes that too. There's all sorts of other workarounds in there, for example this:

echo "options i915 enable_dc=1 enable_guc_loading=1 enable_guc_submission=1 edp_vswing=0 enable_guc=2 enable_fbc=1 enable_psr=1 disable_power_well=0" | sudo tee /etc/modprobe.d/i915.conf

from this comment... So that one is unsolved, as far as the upstream drivers are concerned, but maybe could be fixed through Sway.

Weird USB hangs / graphical glitches

I have had weird connectivity glitches better described in this post, but basically: my USB keyboard and mice (connected over a USB hub) drop keys, lag a lot or hang, and I get visual glitches.

The fix was to tighten the screws around the CPU on the motherboard (!), which is, thankfully, a rather simple repair.

Shipping details

I ordered the Framework in August 2022 and received it about a month later, which is sooner than expected because the August batch was late.

People (including me) expected this to have an impact on the September batch, but it seems Framework have been able to fix the delivery problems and keep up with the demand.

As of early 2023, their website announces that laptops ship "within 5 days". I myself ordered a few expansion cards in November 2022, and they shipped the same day, arriving 3-4 days later.

The supply pipeline

There are basically 6 steps in the Framework shipping pipeline, each (except the last) accompanied by an email notification:

  1. pre-order
  2. preparing batch
  3. preparing order
  4. payment complete
  5. shipping
  6. (received)

This comes from the crowdsourced spreadsheet, which should be updated when the status changes here.

I was part of the "third batch" of the 12th generation laptop, which was supposed to ship in September. It ended up arriving on my doorstep on September 27th, about 33 days after ordering.

It seems current orders are not processed in "batches", but in real time, see this blog post for details on shipping.

Shipping trivia

I don't know about the others, but my laptop shipped through no fewer than four different airplane flights.

I can't quite figure out how to calculate exactly how much mileage that is, but it's huge. The ride through Alaska is surprising enough, but the bounce back through Winnipeg is especially weird. I guess the route happens that way because of FedEx shipping hubs.

There was a related oddity when I had my Purism laptop shipped: it left from the west coast and seemed to embark on an endless, two-week-long road trip across the continental US.

Other resources

13 March, 2023 02:01AM

March 11, 2023

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

pkgKitten 0.2.3 on CRAN: Minor Update

kitten

A new release 0.2.3 of pkgKitten arrived on CRAN earlier, and will be uploaded to Debian. pkgKitten makes it simple to create new R packages via a simple function invocation. A wrapper kitten.r exists in the littler package to make it even easier.

This release improves the created ‘Description:’ and updates some of the continuous integration.

Changes in version 0.2.3 (2023-03-11)

  • Small improvement to generated Description: field and Title:

  • Maintenance for continuous integration setup

More details about the package are at the pkgKitten webpage, the pkgKitten docs site, and the pkgKitten GitHub repo.

Courtesy of my CRANberries site, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 March, 2023 06:35PM

March 10, 2023

Antoine Beaupré

how to audit for open services with iproute2

The computer world has a tendency to reinvent the wheel once in a while. I am not a fan of that process, but sometimes I just have to bite the bullet and adapt to change. This post explains how I adapted to one particular change: the netstat to sockstat transition.

I used to do this to show which processes were listening on which port on a server:

netstat -anpe

It was a handy mnemonic as, in France, ANPE was the agency responsible for the unemployed (basically). That would list all sockets (-a), not resolve hostnames (-n, because it's slow), and show processes attached to the socket (-p) with extra info like the user (-e). This still works, but sometimes fails to find the actual process hooked to the port. Plus, it lists a whole bunch of UNIX sockets and non-listening sockets, which are generally irrelevant for such an audit.

What I really wanted to use was something like:

netstat -pleunt | sort

... which has the "pleut" mnemonic ("rains", but plural, which makes no sense and would be badly spelled anyway). That also only lists listening (-l) and network sockets, specifically UDP (-u) and TCP (-t).

But enough with the legacy, let's try the brave new world of sockstat, which has the unfortunate acronym ss.

The equivalent sockstat command to the above is:

ss -pleuntO

It's similar to the above, except we need the -O flag, otherwise ss does that confusing thing where it splits the output on multiple lines. But I actually use:

ss -pluntO

... i.e. without the -e as the information it gives (cgroup, fd number, etc) is not much more useful than what's already provided with -p (service and UID).

All of the above also show sockets that are not actually a concern because they only listen on localhost. Those should be filtered out. So now we embark on that wild filtering ride.

This is going to list all open sockets and show the port number and service:

ss -pluntO --no-header | sed 's/^\([a-z]*\) *[A-Z]* *[0-9]* [0-9]* *[0-9]* */\1/' | sed 's/^[^:]*:\(:\]:\)\?//;s/\([0-9]*\) *[^ ]*/\1\t/;s/,fd=[0-9]*//' | sort -gu

For example on my desktop, it looks like:

anarcat@angela:~$ sudo ss -pluntO --no-header | sed 's/^\([a-z]*\) *[A-Z]* *[0-9]* [0-9]* *[0-9]* */\1/' | sed 's/^[^:]*:\(:\]:\)\?//;s/\([0-9]*\) *[^ ]*/\1\t/;s/,fd=[0-9]*//' | sort -gu
          [::]:* users:(("unbound",pid=1864))        
22  users:(("sshd",pid=1830))           
25  users:(("master",pid=3150))        
53  users:(("unbound",pid=1864))        
323 users:(("chronyd",pid=1876))        
500 users:(("charon",pid=2817))        
631 users:(("cups-browsed",pid=2744))   
2628    users:(("dictd",pid=2825))          
4001    users:(("emacs",pid=3578))          
4500    users:(("charon",pid=2817))        
5353    users:(("avahi-daemon",pid=1423))  
6600    users:(("systemd",pid=3461))       
8384    users:(("syncthing",pid=232169))   
9050    users:(("tor",pid=2857))            
21027   users:(("syncthing",pid=232169))   
22000   users:(("syncthing",pid=232169))   
33231   users:(("syncthing",pid=232169))   
34953   users:(("syncthing",pid=232169))   
35770   users:(("syncthing",pid=232169))   
44944   users:(("syncthing",pid=232169))   
47337   users:(("syncthing",pid=232169))   
48903   users:(("mosh-client",pid=234126))  
52774   users:(("syncthing",pid=232169))   
52938   users:(("avahi-daemon",pid=1423))  
54029   users:(("avahi-daemon",pid=1423))  
anarcat@angela:~$

But that doesn't filter out the localhost stuff, with lots of false positives (like emacs, above). And this is where it gets... not fun, as you need to match "localhost" but we don't resolve names, so you need to do some fancy pattern matching:

ss -pluntO --no-header | \
    sed 's/^\([a-z]*\) *[A-Z]* *[0-9]* [0-9]* *[0-9]* */\1/;s/^tcp//;s/^udp//' | \
    grep -v -e '^\[fe80::' -e '^127.0.0.1' -e '^\[::1\]' -e '^192\.' -e '^172\.' | \
    sed 's/^[^:]*:\(:\]:\)\?//;s/\([0-9]*\) *[^ ]*/\1\t/;s/,fd=[0-9]*//' |\
    sort -gu

This is kind of horrible, but it works, those are the actually open ports on my machine:

anarcat@angela:~$ sudo ss -pluntO --no-header | sed 's/^\([a-z]*\) *[A-Z]* *[0-9]* [0-9]* *[0-9]* */\1/;s/^tcp//;s/^udp//' | grep -v -e '^\[fe80::' -e '^127.0.0.1' -e '^\[::1\]' -e '^192\.' -e '^172\.' | sed 's/^[^:]*:\(:\]:\)\?//;s/\([0-9]*\) *[^ ]*/\1\t/;s/,fd=[0-9]*//' | sort -gu
22  users:(("sshd",pid=1830))           
500 users:(("charon",pid=2817))        
631 users:(("cups-browsed",pid=2744))   
4500    users:(("charon",pid=2817))        
5353    users:(("avahi-daemon",pid=1423))  
6600    users:(("systemd",pid=3461))       
21027   users:(("syncthing",pid=232169))   
22000   users:(("syncthing",pid=232169))   
34953   users:(("syncthing",pid=232169))   
35770   users:(("syncthing",pid=232169))   
48903   users:(("mosh-client",pid=234126))  
52938   users:(("avahi-daemon",pid=1423))  
54029   users:(("avahi-daemon",pid=1423))

Surely there must be a better way. It turns out that lsof can do some of this, and it's relatively straightforward. This lists all listening TCP sockets:

lsof -iTCP -sTCP:LISTEN +c 15 | grep -v localhost | sort

A shorter version from Adam Shand is:

lsof -i @localhost

... which basically replaces the grep -v localhost line.

In theory, this would do the equivalent for UDP:

lsof -iUDP -sUDP:^Idle

... but in reality, it looks like lsof on Linux can't figure out the state of a UDP socket:

lsof: no UDP state names available: UDP:^Idle

... which, honestly, I'm baffled by. It's strange because ss can figure out the state of those sockets, heck it's how -l vs -a works after all. So we need something else to show listening UDP sockets.

The following actually looks pretty good after all:

ss -pluO

That will list localhost sockets of course, so we can explicitly ask ss to resolve those and filter them out with something like:

ss -plurO | grep -v localhost

oh, and look here! ss supports pattern matching, so we can actually tell it to ignore localhost directly, which removes that horrible sed line we used earlier:

ss -pluntO '! ( src = localhost )'

That actually gives a pretty readable output. One annoyance is we can't really modify the columns here, so we still need some god-awful sed hacking on top of that to get a cleaner output:

ss -nplutO '! ( src = localhost )'  | \
    sed 's/\(udp\|tcp\).*:\([0-9][0-9]*\)/\2\t\1\t/;s/\([0-9][0-9]*\t[udtcp]*\t\)[^u]*users:(("/\1/;s/".*//;s/.*Address:Port.*/Port\tNetid\tProcess/' | \
    sort -nu

That looks horrible and is basically impossible to memorize. But it sure looks nice:

anarcat@angela:~$ sudo ss -nplutO '! ( src = localhost )'  | sed 's/\(udp\|tcp\).*:\([0-9][0-9]*\)/\2\t\1\t/;s/\([0-9][0-9]*\t[udtcp]*\t\)[^u]*users:(("/\1/;s/".*//;s/.*Address:Port.*/Port\tNetid\tProcess/' | sort -nu

Port    Netid   Process
22  tcp sshd
500 udp charon
546 udp NetworkManager
631 udp cups-browsed
4500    udp charon
5353    udp avahi-daemon
6600    tcp systemd
21027   udp syncthing
22000   udp syncthing
34953   udp syncthing
35770   udp syncthing
48903   udp mosh-client
52938   udp avahi-daemon
54029   udp avahi-daemon

Better ideas welcome.

10 March, 2023 07:16PM

Thorsten Alteholz

My Debian Activities in February 2023

FTP master

This month I accepted 284 and rejected 49 packages. The overall number of packages that got accepted was 286.

I love this calm and peaceful time now within the Debian project, when everybody only cares for RC bugs and NEW does not grow.

Debian LTS

This was my hundred-and-fourth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 8h.

During that time I uploaded:

  • [DLA 3310-1] xorg-server security update for one CVE

As I added all missing ELA uploads to the git repository I also had a look at package-operations and added stuff to make my life a bit easier.

Debian ELTS

This month was the fifty-fifth ELTS month.

  • [ELA-794-1] xorg-server security update for Jessie and Stretch, for one CVE

I also made myself familiar with the mandatory git workflow and committed all my packages of this year's ELAs to the corresponding repository.

Debian Astro

This month I uploaded improved packages or new versions of:

Debian Printing

This month I uploaded new versions or improved packages of:

As ippsample does not build on i386, I filed an RM bug for this architecture. Maybe in a later upstream release it will be available again on all architectures.

I could also close lots of bugs that happened to be fixed upstream but had not been closed with the upload of the new version.

Parts of this work are generously funded by Freexian!

Other stuff

This month I uploaded improved packages of:

The upload of feynmf could only happen due to the help of several people (please see #1029439). Thanks a lot!

10 March, 2023 06:45AM by alteholz

March 09, 2023

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Solving a 1998 problem with 2023 methods

A long time ago, in 1998, our family entered a contest with a puzzle; given a bunch of company names (they were the ones participating in a loyalty program known as Domino, which has since gone defunct), try to spell out as many Norwegian names as possible. (The name list was fixed, but you actually had to buy a book to find it.)

The prize was fairly attractive, so I went to work with a computer program instead of trying to figure it out by hand. I remember running it literally for weeks on my 400 MHz machine at the time; at some point, we even went on vacation for more than a month, and I came back disappointed to see that the search hadn't really gone that much further. Over time, I optimized it to use randomization in addition to backtracking, some bit fiddling tricks and so on. We thought we had a good shot.

Unfortunately, it turned out we had interpreted the rules differently from what was intended (or what others could get away with; I don't honestly think the organizers had thought it much through), and the prize was split between four other competitors who all had used the same name multiple times, giving them more names than the 25 we found.

A couple of weeks ago, this contest just struck my mind a bit out of nowhere, and I wanted to finally figure out how to attack this old problem. I dug up the code and name list (complete with RCS logs!), and set about solving it using 2023 technology while on a plane. It turns out that with modern SAT solvers (I used the constraint solver from OR-Tools), this is really really easy even on my laptop; before I'd landed, I had the answer:

Allowing only one of each name
==============================

Letters available: BOHUS CUBUS DRESSMANN EXPERT ICA MAXBO MEKKA RIMI STATOIL
SPARMAT TELENOR MOBIL TYBRINGGJEDDE
Best solution: BO, JO, ASK, BEN, DAG, GRY, INE, ISA, KIM, LIN, LIS, MAX, NUP,
PER, RUT, SAM, SOL, TEA, TED, TIM, TOM, URD, BETH, EBBE, MARC
Found 25 words, used 76/81 letters
./domino.py  0,89s user 0,68s system 251% cpu 0,622 total

Allowing each name multiple times
=================================
Letters available: BOHUS CUBUS DRESSMANN EXPERT ICA MAXBO MEKKA RIMI STATOIL
SPARMAT TELENOR MOBIL TYBRINGGJEDDE
Best solution: BO x 4, JO, BEN, DAG, DAN, GRY, KEN x 2, LIS x 3, MAX x 2, PER
x 2, RUT, SAM x 2, TEA, TED, TIM x 2, TUE, CHRIS
Found 27 words, used 78/81 letters
./domino.py  0,74s user 0,66s system 307% cpu 0,453 total

So our answer of 25 was optimal all along… under that rule set.

(For reference, I don't think there were any tiebreakers, but my original program tried to use more letters for some reason. You can do it with as few as 74 letters in the only-one case, or you can use all 81. Similarly, with the repeat case, you can use as few as 76, or all. The formulation is dead simple, as sketched below: just make a 0–1 integer variable per possible name and add constraints that the sum of the names with A can't be more than 7, the sum of the names with B can't be more than 5, etc.—and then remember that some names can have the same letter multiple times. The objective to maximize is the sum of all variables. To allow repeats, allow each integer variable to go up to 100 or whatever.)
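To make that concrete, here is a minimal sketch of that formulation using OR-Tools' CP-SAT solver. The company letters are the real ones from the contest, but the name list below is a tiny placeholder, since the real list is not reproduced in this post:

from collections import Counter
from ortools.sat.python import cp_model

# letters made available by the company names
companies = ("BOHUS CUBUS DRESSMANN EXPERT ICA MAXBO MEKKA RIMI STATOIL "
             "SPARMAT TELENOR MOBIL TYBRINGGJEDDE")
available = Counter(c for c in companies if c.isalpha())

# placeholder name list (the real one came from a book)
names = ["BO", "JO", "ASK", "BEN", "DAG", "GRY", "PER", "RUT"]

model = cp_model.CpModel()
# one 0-1 variable per name; raise the upper bound to allow repeats
use = {n: model.NewIntVar(0, 1, n) for n in names}
# names using a letter no company provides can never be used
for n in names:
    if any(c not in available for c in n):
        model.Add(use[n] == 0)
# for each letter, the chosen names cannot consume more copies of it
# than the company names provide
for letter, limit in available.items():
    model.Add(sum(n.count(letter) * use[n] for n in names) <= limit)
model.Maximize(sum(use.values()))

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print(sorted(n for n in names if solver.Value(use[n])))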

Closure, I guess?

Edit: Not quite closure; looking through some logs I had missed, it turned out I had only gotten to 23 before the end of the contest (so we were not optimal until afterwards), but more intriguingly, the winners all had 29! So one would wonder what their solutions looked like, and how they could have been accepted. There is, of course, the chance of an error in my name list, but I tried with a newer, larger one (almost certainly not allowed under the 1998 rules), and it still didn't get to 29, so I'm pretty sure there's some foul play here. Unfortunately, I can't come back 25 years later and accuse someone of cheating :-)

09 March, 2023 10:32PM

hackergotchi for Charles Plessy

Charles Plessy

If you work at Dreamhost, can you help us?

Update: thanks to the very kind involvement of our webmaster's widow, we could provide enough private information to Dreamhost, who finally agreed to reset the password and the MFA. We have recovered everything! Many thanks to everybody who helped us!

Due to tragic circumstances, one association that I am part of, Sciencescope, got locked out of its account at Dreamhost. Locked out, we cannot pay the annual bill. Dreamhost contacted us about the payment, but will not let us recover access to our account in order to pay. So they will soon close the account, and our website, mailing lists and archives will be erased. We provided plenty of evidence that we are not scammers and that we are the legitimate owners of the account, but reviewing it is above the pay grade of the customer support (I don't blame them) and I could not convince them to let somebody higher have a look at our case.

If you work at Dreamhost and want to keep us as customers instead of kicking us out like that, please ask the support service in charge of ticket 225948648 to send the recovery URL to the secondary email addresses (the ones you used to contact us about the bill!) in addition to the primary one (which nobody will read anymore). You can encrypt it for my Debian Developer key 73471499CC60ED9EEE805946C5BD6C8F2295D502 if you worry it could get in the wrong hands. If you still have doubts, I am available for calls any time.

If you know somebody working at Dreamhost can you pass them the message? This would be a big, big, relief for our non-profit association.

09 March, 2023 01:35PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppRedis 0.2.3 on CRAN: Maintenance

A new minor release 0.2.3 of our RcppRedis package arrived on CRAN today. RcppRedis is one of several packages connecting R to the fabulous Redis in-memory datastructure store (and much more). RcppRedis does not pretend to be feature complete, but it may do some things faster than the other interfaces, and also offers an optional coupling with MessagePack binary (de)serialization via RcppMsgPack. The package has carried production loads on a trading floor for several years.

This update is fairly mechanical. CRAN wants everybody off the C++11 train, which is fair game given that it is 2023 and most sane and lucky people are facing sane and modern compilers, so this makes sense. (And I raise a toast to all those poor souls facing RHEL 7 / CentOS 7 with a compiler from many moons ago: I hear it is a vibrant job market out there so maybe time to make a switch…). As with a few of my other packages, this release simply does away with the imposition of C++11 as the package will compile just fine under C++14 or C++17 (as governed by your version of R).

The detailed changes list follows.

Changes in version 0.2.3 (2023-03-08)

  • No longer set a C++ compilation standard as the default choices by R are sufficient for the package

  • Switch include to Rcpp/Rcpp which signals use of all Rcpp features including Modules

Courtesy of my CRANberries, there is also a diffstat report for this release. More information is on the RcppRedis page.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

09 March, 2023 12:57AM

March 08, 2023

Jelmer Vernooij

The Kali Janitor

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

Kali Linux have been running their own instance of the Janitor for the last year, under the kali-bot user on GitLab. Their web site has some excellent documentation explaining how the bot works.

Both projects share some common components - the core janitor codebase, Silver-Platter and the various codemods (lintian-brush and deb-new-upstream). The site and some of the review logic is different for Kali.

The Kali bot has several campaigns:

The last campaign doesn’t exist in the Debian janitor, and pulls in new changes from packages that have been imported from other distributions.

For more information about the Janitor’s lintian-fixes efforts, see the landing page.

08 March, 2023 09:25PM by Jelmer Vernooij

Anton Gladky

Boost C++: default for Debian Bookworm will remain 1.74.0

Boost C++ is a popular set of free, open-source C++ libraries that provide developers with powerful tools and functionality to enhance their software development projects. The newest version, 1.81.0, has been available in Debian Bookworm for about a month now, but the default version for boost-dependent packages is still the older 1.74.0.

The transition to the newer library has been postponed due to possible regressions that could take more time to resolve. This means that the default version for boost-dependent packages in Debian Bookworm will remain 1.74.0 until after the release.

If you’re working on your own project, you can use the newest version of Boost C++ (1.81.0) to take advantage of its new features and improvements. Debian Bullseye users can also access Boost C++ 1.81.0 from the bullseye-backports repository.

Thanks Freexian for supporting this effort.

08 March, 2023 06:16PM

hackergotchi for Thomas Lange

Thomas Lange

Launch of new FAI project website

After more than 13 years, I've launched a new design for the FAI project web site:

https://fai-project.org

It now uses Materialize CSS and will work much better on mobile devices. Thanks to Thorsten Bülo, who did the first part of converting the web pages to the new design.

I hope you all enjoy the new layout.

FAI

08 March, 2023 11:57AM

March 07, 2023

hackergotchi for Jonathan Dowland

Jonathan Dowland

Welcome Oblivion 10th Anniversary

I haven’t done one of these for a while, and they’ll be less frequent than I once planned as I’m working from home less and less. I'm also trying to get back into exploring my digital music collection, and more generally engaging with digital music again.

[picture of a vinyl record]

It’s the ten year anniversary of the first (and last) LP by How To Destroy Angels (HTDA), the side-project of Trent Reznor with his wife, his Nine Inch Nails (NIN) partner in crime Atticus Ross and visual artist (and NIN artistic director) Rob Sheridan.

This album was a real pleasure. For NIN fans, it wasn't clear what the future held after the start of HTDA. But this work really stood alone, similar in some ways to NIN but sufficiently different to be fresh and exciting. In stark contrast to NIN (at the time), it was interesting to see the members of HTDA presented on an equal footing, especially Rob Sheridan, who wasn't a musician. The intent was to try and put the visual work on the same level of esteem as the musical.

HTDA performed a few live shows, but none outside the US. They were apparently quite a spectacle.

As an artefact, this is a gorgeous LP. The gatefold cover and all four sides of the two record sleeves are covered in unique pieces of Sheridan's glitch art. When I originally bought this I had a rather generously-sized individual office at the University, so I framed and displayed many of these pieces on my office walls.

Sheridan has since written extensively on the processes and techniques he used for this style of art, and has produced many more works using the same techniques. You can see some on his website, patreon, fine art print shop or Threadless store.

Late last year I treated myself to a large print of some related work, analog(Oblivion)000b, which (once the framing is done) I'm going to hang in my home office.

The LP had two tracks that were not present in the CD or digital release versions of the album, although a CD was bundled in the LP which included the tracks. (The Knife did something similar with Shaking the Habitual, at around the same time).

I've had some multitrack stems from this album sitting in my "for archive.org" folder for a while, so I took the opportunity of the 10th anniversary to upload them, here: https://archive.org/details/htda_multitracks

07 March, 2023 11:08AM

hackergotchi for Robert McQueen

Robert McQueen

Flathub in 2023

It’s been quite a few months since the most recent updates about Flathub last year. We’ve been busy behind the scenes, so I’d like to share what we’ve been up to at Flathub and why—and what’s coming up from us this year. I want to focus on:

  • Where Flathub is today as a strong ecosystem with 2,000 apps
  • Our progress on evolving Flathub from a build service to an app store
  • The economic barrier to growing the ecosystem, and its consequences
  • What’s next to overcome our challenges with focused initiatives

Today

Flathub is going strong: we offer 2,000 apps from over 1,500 collaborators on GitHub. We’re averaging 700,000 app downloads a day, with 898 million HTTP requests totalling 88.3 TB served by our CDN each day (thank you Fastly!). Flatpak has, in my opinion, solved the largest technical issue which has held back the mainstream growth and acceptance of Linux on the desktop (or other personal computing devices) for the past 25 years: namely, the difficulty for app developers to publish their work in a way that makes it easy for people to discover, download (or sideload, for people in challenging connectivity environments), install and use. Flathub builds on that to help users discover the work of app developers and helps that work reach users in a timely manner.

Initial results of this disintermediation are promising: even with its modest size so far, Flathub has hundreds of apps that I have never, ever heard of before—and that’s even considering I’ve been working in the Linux desktop space for nearly 20 years and spent many of those staring at the contents of dselect (showing my age a little) or GNOME Software, attending conferences, and reading blog posts, news articles, and forums. I am also heartened to see that many of our OS distributor partners have recognised that this model is hugely complementary and additive to the indispensable work they are doing to bring the Linux desktop to end users, and that “having more apps available to your users” is a value-add allowing you to focus on your core offering and not a zero-sum game that should motivate infighting.

Ongoing Progress

Getting Flathub into its current state has been a long ongoing process. Here’s what we’ve been up to behind the scenes:

Development

Last year, we concluded our first engagement with Codethink to build features into the Flathub web app to move from a build service to an app store. That includes accounts for users and developers, payment processing via Stripe, and the ability for developers to manage upload tokens for the apps they control. In parallel, James Westman has been working on app verification and the corresponding features in flat-manager to ensure app metadata accurately reflects verification and pricing, and to provide authentication for paying users for app downloads when the developer enables it. Only verified developers will be able to make direct uploads or access payment settings for their apps.

Legal

So far, the GNOME Foundation has acted as an incubator and legal host for Flathub even though it’s not purely a GNOME product or initiative. Distributing software to end users along with processing and forwarding payments and donations also has a different legal profile in terms of risk exposure and nonprofit compliance than the current activities of the GNOME Foundation. Consequently, we plan to establish an independent legal entity to own and operate Flathub which reduces risk for the GNOME Foundation, better reflects the independent and cross-desktop interests of Flathub, and provides flexibility in the future should we need to change the structure.

We’re currently in the process of reviewing legal advice to ensure we have the right structure in place before moving forward.

Governance

As Flathub is something we want to set outside of the existing Linux desktop and distribution space—and ensure we represent and serve the widest community of Linux users and developers—we’ve been working on a governance model that ensures that there is transparency and trust in who is making decisions, and why. We have set up a working group with myself and Martín Abente Lahaye from GNOME, Aleix Pol Gonzalez, Neofytos Kolokotronis, and Timothée Ravier from KDE, and Jorge Castro flying the flag for the Flathub community. Thanks also to Neil McGovern and Nick Richards who were also more involved in the process earlier on.

We don’t want to get held up here creating something complex with memberships and elections, so at first we’re going to come up with a simple/balanced way to appoint people into a board that makes key decisions about Flathub and iterate from there.

Funding

We have received one grant for 2023 of $100K from Endless Network which will go towards the infrastructure, legal, and operations costs of running Flathub and setting up the structure described above. (Full disclosure: Endless Network is the umbrella organisation which also funds my employer, Endless OS Foundation.) I am hoping to grow the available funding to $250K for this year in order to cover the next round of development on the software, prepare for higher operations costs (e.g., accounting gets more complex), and bring in a second full-time staff member in addition to Bartłomiej Piotrowski to handle enquiries, reviews, documentation, and partner outreach.

We’re currently in discussions with NLnet about funding further software development, but have been unfortunately turned down for a grant from the Plaintext Group for this year; this Schmidt Futures project around OSS sustainability is not currently issuing grants in 2023. However, we continue to work on other funding opportunities.

Remaining Barriers

My personal hypothesis is that our largest remaining barrier to Linux desktop scale and impact is economic. On competing platforms—mobile or desktop—a developer can offer their work for sale via an app store or direct download with payment or subscription within hours of making a release. While we have taken the “time to first download” time down from months to days with Flathub, as a community we continue to have a challenging relationship with money. Some creators are lucky enough to have a full-time job within the FLOSS space, while a few “superstar” developers are able to nurture some level of financial support by investing time in building a following through streaming, Patreon, Kickstarter, or similar. However, a large proportion of us have to make do with the main payback from our labours being a stream of bug reports on GitHub interspersed with occasional conciliatory beers at FOSDEM (other beverages and events are available).

The first and most obvious consequence is that if there is no financial payback for participating in developing apps for the free and open source desktop, we will lose many people in the process—despite the amazing achievements of those who have brought us to where we are today. As a result, we’ll have far fewer developers and apps. If we can’t offer access to a growing base of users or the opportunity to offer something of monetary value to them, the reward in terms of adoption and possible payment will be very small. Developers would be forgiven for taking their time and attention elsewhere. With fewer apps, our platform has less to entice and retain prospective users.

The second consequence is that this also represents a significant hurdle for diverse and inclusive participation. We essentially require that somebody is in a position of privilege and comfort that they have internet, power, time, and income—not to mention childcare, etc.—to spare so that they can take part. If that’s not the case for somebody, we are leaving them shut out from our community before they even have a chance to start. My belief is that free and open source software represents a better way for people to access computing, and there are billions of people in the world we should hope to reach with our work. But if the mechanism for participation ensures their voices and needs are never represented in our community of creators, we are significantly less likely to understand and meet those needs.

While these are my thoughts, you’ll notice a strong theme to this year will be leading a consultation process to ensure that we are including, understanding and reflecting the needs of our different communities—app creators, OS distributors and Linux users—as I don’t believe that our initiative will be successful without ensuring mutual benefit and shared success. Ultimately, no matter how beautiful, performant, or featureful the latest versions of the Plasma or GNOME desktops are, or how slick the newly rewritten installer is from your favourite distribution, all of the projects making up the Linux desktop ecosystem are subdividing between ourselves an absolutely tiny market share of the global market of personal computers. To make a bigger mark on the world, as a community, we need to get out more.

What’s Next?

After identifying our major barriers to overcome, we’ve planned a number of focused initiatives and restructuring this year:

Phased Deployment

We’re working on deploying the work we have been doing over the past year, starting first with launching the new Flathub web experience as well as the rebrand that Jakub has been talking about on his blog. This also will finally launch the verification features so we can distinguish those apps which are uploaded by their developers.

In parallel, we’ll also be able to turn on the Flatpak repo subsets that enable users to select only verified and/or FLOSS apps in the Flatpak CLI or their desktop’s app center UI.

Consultation

We would like to make sure that the voices of app creators, OS distributors, and Linux users are reflected in our plans for 2023 and beyond. We will be launching this in the form of Flathub Focus Groups at the Linux App Summit in Brno in May 2023, followed up with surveys and other opportunities for online participation. We see our role as interconnecting communities and want to be sure that we remain transparent and accountable to those we are seeking to empower with our work.

Whilst we are being bold and ambitious with what we are trying to create for the Linux desktop community, we also want to make sure we provide the right forums to listen to the FLOSS community and prioritise our work accordingly.

Advisory Board

As we build the Flathub organisation up in 2023, we’re also planning to expand its governance by creating an Advisory Board. We will establish an ongoing forum with different stakeholders around Flathub: OS vendors, hardware integrators, app developers and user representatives to help us create the Flathub that supports and promotes our mutually shared interests in a strong and healthy Linux desktop community.

Direct Uploads

Direct app uploads are close to ready, and they enable exciting stuff like allowing Electron apps to be built outside of flatpak-builder, or driving automatic Flathub uploads from GitHub actions or GitLab CI flows; however, we need to think a little about how we encourage these to be used. Even with its frustrations, our current Buildbot ensures that the build logs and source versions of each app on Flathub are captured, and that the apps are built on all supported architectures. (Is 2023 when we add RISC-V? Reach out if you’d like to help!). If we hand upload tokens out to any developer, even if the majority of apps are open source, we will go from this relatively structured situation to something a lot more unstructured—and we fear many apps will be available on only 64-bit Intel/AMD machines.

My sketch here is that we need to establish some best practices around how to integrate Flathub uploads into popular CI systems, encouraging best practices so that we promote the properties of transparency and reproducibility that we don’t want to lose. If anyone is a CI wizard and would like to work with us as a thought partner about how we can achieve this—make it more flexible where and how build tasks can be hosted, but not lose these cross-platform and inspectability properties—we’d love to hear from you.

Donations and Payments

Once the work around legal and governance reaches a decent point, we will be in the position to move ahead with our Stripe setup and switch on the third big new feature in the Flathub web app. At present, we have already implemented support for one-off payments either as donations or a required purchase. We would like to go further than that, in line with what we were describing earlier about helping developers sustainably work on apps for our ecosystem: we would also like to enable developers to offer subscriptions. This will allow us to create a relationship between users and creators that funds ongoing work rather than what we already have.

Security

For Flathub to succeed, we need to make sure that as we grow, we continue to be a platform that can give users confidence in the quality and security of the apps we offer. To that end, we are planning to set up infrastructure to help ensure developers are shipping the best products they possibly can to users. For example, we’d like to set up automated linting and security scanning on the Flathub back-end to help developers avoid bad practices, unnecessary sandbox permissions, outdated dependencies, etc. and to keep users informed and as secure as possible.

Sponsorship

Fundraising is a forever task—as is running such a big and growing service. We hope that one day we can cover our costs through some modest fees built into our payments—but until we reach that point, we’re going to be seeking a combination of grant funding and sponsorship to keep our roadmap moving. Our hope is very much that we can encourage different organisations that buy into our vision and will benefit from Flathub to help us support it and ensure we can deliver on our goals. If you have any suggestions of who might like to support Flathub, we would be very appreciative if you could reach out and put us in touch.

Finally, Thank You!

Thanks to you all for reading this far and supporting the work of Flathub, and also to our major sponsors and donors without whom Flathub could not exist: GNOME Foundation, KDE e.V., Mythic Beasts, Endless Network, Fastly, and Equinix Metal via the CNCF Community Cluster. Thanks also to the tireless work of the Freedesktop SDK community to give us the runtime platform most Flatpaks depend on, particularly Seppo Yli-Olli, Codethink and others.

I wanted to also give my personal thanks to a handful of dedicated people who keep Flathub working as a service and as a community: Bartłomiej Piotrowski is keeping the infrastructure working essentially single-handedly (in his spare time from keeping everything running at GNOME); Kolja Lampe and Bart built the new web app and backend API for Flathub on which all of the new functionality has been built, and Filippe LeMarchand maintains the checker bot which helps keep all of the Flatpaks up to date.

And finally, all of the submissions to Flathub are reviewed to ensure quality, consistency and security by a small dedicated team of reviewers, with a huge amount of work from Hubert Figuière and Bart to keep the submissions flowing. Thanks to everyone—named or unnamed—for building this vision of the future of the Linux desktop together with us.

(originally posted to Flathub Discourse, head there if you have any questions or comments)

07 March, 2023 11:00AM by ramcq

hackergotchi for Jonathan Dowland

Jonathan Dowland

date warping in HLedger

My credit card and bank account rarely agree on the date when I pay it off1. Since I added balance assertions for bank account transactions, I need the transaction in my ledger to match what the bank thinks, otherwise the balance assertions would start to fail.

The skew is not normally more than a couple of days, and could be corrected by changing the date on just one of the two postings. But the skew itself is not very important, and the posting date can be put to a more useful purpose.

date warping credit card repayments

My credit card bills land halfway through the month, so February's bill covers transactions between January 15th and February 14th. I pay off the bill in full each month using Direct Debit. The credit card company consider the bill paid immediately, but they don't actually draw it until the end of the month (Jan 31 in the running example). This means the payment transaction for a given month lands halfway through the period covered by the next month's bill.

The credit card bill itself shows the payment date at the end of the month but presents the transaction "warped" right to the start. This is actually useful, because it means the balance is zero for the first purchase on the bill.

The credit card data in CSV form has the repayment transaction at the date it occurred, not warped to the start of the period. When I import this into HLedger, the credit card account balance for each new transaction does not match the statement right up to the point of the repayment, half way through. This makes spot-checking that the imported data matches the statement a bit more awkward.

So, I have started "warping" the payment transaction to the start of the billing period, just like the credit card statement does:

2022-12-31  pay credit card
  asset:bank    £ -500
  liabilities:credit card  ;  date:2022-12-15

I can then spot-check the transactions in HLedger after import, in particular the final one, and the final account balance, and then write a manual balance assertion when I'm finished.

I'd quite like to automate the adjusted posting date, too, but I haven't figured out how to do that just yet.
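One building block might be a small preprocessing step over the CSV that computes the warped date from the draw date. Here's a minimal sketch, assuming (as in my case) that the billing period always starts on the 15th of the month in which the payment is drawn; this is not a built-in hledger feature:

#!/usr/bin/python3
# Sketch: compute the "warped" posting date for a credit card repayment,
# assuming the billing period always starts on the 15th of the month in
# which the Direct Debit is drawn (as in the example above).
import datetime

def warped_date(payment_date: datetime.date) -> datetime.date:
    return payment_date.replace(day=15)

print(warped_date(datetime.date(2022, 12, 31)))  # prints 2022-12-15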

date warping for refunds

Another thing I've found "date warping" useful for is marrying up refunds with their related purchase. Imagine I spent £200 on some shoes in late January, but returned most of them in early February:

2023-01-25  buy some shoes. hedging on the size
  liabilities:credit card    £ -200
  expenses:shoes

2023-02-05  return the ones that don't fit
  liabilities:credit card    £  150
  expenses:shoes

If I look at how much I've spent on shoes per month, it looks odd: £200 in January (although ultimately I only spent £50), and £-150 in February.

$ hledger bal -Mt expenses:shoes
Balance changes in 2023-01-01..2023-02-28:

                ||   Jan     Feb 
================++===============
 expenses:shoes || £ 200  £ -150 
----------------++---------------
                || £ 200  £ -150 

By "warping" the refund's posting to the expense account to the purchase date, how much I ultimately spent on shoes is more properly reflected:

2023-02-05  return the ones that don't fit
  liabilities:credit card    £  150
  expenses:shoes  ; date:2023-01-25

resulting in

$ hledger bal -Mt expenses:shoes
Balance changes in 2023-01-01..2023-02-28:

                ||  Jan  Feb 
================++===========
 expenses:shoes || £ 50    0 
----------------++-----------
                || £ 50    0 

I suppose whether you'd want to do this is a matter of taste.


  1. Amazon rarely agrees with my bank on when we've paid for things either. For that and other reasons, Amazon is a beast to tackle in another blog post.

07 March, 2023 10:28AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppFastAD 0.0.1 and 0.0.2: New Package on CRAN!

James Yang and I are thrilled to announce the new CRAN package RcppFastAD which arrived at CRAN last Monday as version 0.0.1, and is as of today at version 0.0.2 with a first set of small updates.

It is based on the FastAD header-only C++ library by James, which provides a C++ implementation of both forward and reverse mode automatic differentiation in an easy-to-use header library (which we wrapped here) that is both lightweight and performant. With a little bit of Rcpp glue, it is also easy to use from R in simple C++ applications. Included in the package are three examples: a simple quadratic expression evaluating x' S x for given x and S, returning the expression value along with its gradient; a linear regression example generalising this and using the gradient to arrive at the least-squares solution; as well as the well-known Black-Scholes options pricer and its important partial derivatives delta, rho, theta and vega, derived via automatic differentiation.
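For a bit of intuition on the quadratic example, here is a quick numerical check of the underlying math (plain numpy, not the package's API): the gradient of x' S x is (S + S') x, which is what automatic differentiation recovers without hand-deriving it.

# Numerical check of the quadratic example's math (not RcppFastAD code):
# for f(x) = x' S x, the gradient is (S + S') x.
import numpy as np

rng = np.random.default_rng(42)
S = rng.normal(size=(3, 3))
x = rng.normal(size=3)

analytic = (S + S.T) @ x

# Independent check via central finite differences.
eps = 1e-6
numeric = np.array([
    ((x + eps * e) @ S @ (x + eps * e) - (x - eps * e) @ S @ (x - eps * e)) / (2 * eps)
    for e in np.eye(3)
])

assert np.allclose(analytic, numeric, atol=1e-5)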

The NEWS file for these two initial releases follows.

Changes in version 0.0.2 (2023-03-05)

  • One C++ operation is protected from operating on a nullptr

  • Additional tests have been added, tests now cover all three demo / example functions

  • Return values and code for the examples linear_regression and quadratic_expression have been adjusted

Changes in version 0.0.1 (2023-02-24)

  • Initial release version and CRAN upload

Courtesy of my CRANberries, there is also a diffstat report for the most recent release. More information is available at the repository or the package page.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

07 March, 2023 01:34AM

Valhalla's Things

Forgotten Yeast Bread

Posted on March 7, 2023

Yesterday around 13:00 I started my usual “I’m being lazy” bread recipe:

  • 400 g flour
  • 250 g water
  • 6 g salt

worked for 8 minutes (by machine), left to rise until about 18:00.

For the record, it was a strong flour (310 W), type 1, so white, but somewhat coarsely ground.

And then, when it was time to cook bread for dinner I realized that something was missing. Something critical. See if you can spot it in the list above.

The yeast.

Some bread was taken out of the freezer and defrosted in the oven, but I didn’t want to throw away the flour, so I mixed 2-3 g dried yeast, 10 g flour and 10 g water, and left it to rise until after dinner.

Then I added it to the dough, added some more water (I know, I should have measured it; I didn’t expect to have to repeat the thing. It was probably about 20 g), mixed for 5 minutes, and covered it to rise.

This afternoon, around 15:30, I took the dough, folded it 5-6 times, formed a round loaf on the lined baking tray and left it in the cold oven until 17:45. Then I removed it from the oven, turned it on at 240°C, scored the top of the loaf and sprinkled it with water.

When the oven was hot I baked the loaf for 10 minutes at 240°C, then turned it down to 160°C for 20 additional minutes.

And then I realized I need to repeat this.

No, there are no pictures (there is some left, but it’s too dark to take pictures).

07 March, 2023 12:00AM

March 06, 2023

Vincent Bernat

DDoS detection and remediation with Akvorado and Flowspec

Akvorado collects sFlow and IPFIX flows, stores them in a ClickHouse database, and presents them in a web console. Although it lacks built-in DDoS detection, it’s possible to create one by crafting custom ClickHouse queries.

DDoS detection

Let’s assume we want to detect DDoS targeting our customers. As an example, we consider a DDoS attack as a collection of flows over one minute targeting a single customer IP address, from a single source port and matching one of these conditions:

  • an average bandwidth of 1 Gbps,
  • an average bandwidth of 200 Mbps when the protocol is UDP,
  • more than 20 source IP addresses and an average bandwidth of 100 Mbps, or
  • more than 10 source countries and an average bandwidth of 100 Mbps.

Here is the SQL query to detect such attacks over the last 5 minutes:

SELECT *
FROM (
  SELECT
    toStartOfMinute(TimeReceived) AS TimeReceived,
    DstAddr,
    SrcPort,
    dictGetOrDefault('protocols', 'name', Proto, '???') AS Proto,
    SUM(((((Bytes * SamplingRate) * 8) / 1000) / 1000) / 1000) / 60 AS Gbps,
    uniq(SrcAddr) AS sources,
    uniq(SrcCountry) AS countries
  FROM flows
  WHERE TimeReceived > now() - INTERVAL 5 MINUTE
    AND DstNetRole = 'customers'
  GROUP BY
    TimeReceived,
    DstAddr,
    SrcPort,
    Proto
)
WHERE (Gbps > 1)
   OR ((Proto = 'UDP') AND (Gbps > 0.2)) 
   OR ((sources > 20) AND (Gbps > 0.1)) 
   OR ((countries > 10) AND (Gbps > 0.1))
ORDER BY
  TimeReceived DESC,
  Gbps DESC

Here is an example output1 where two of our users are under attack. One from what looks like an NTP amplification attack, the other from a DNS amplification attack:

TimeReceived         DstAddr               SrcPort  Proto  Gbps   sources  countries
2023-02-26 17:44:00  ::ffff:203.0.113.206  123      UDP    0.102  109      13
2023-02-26 17:43:00  ::ffff:203.0.113.206  123      UDP    0.130  133      17
2023-02-26 17:43:00  ::ffff:203.0.113.68   53       UDP    0.129  364      63
2023-02-26 17:43:00  ::ffff:203.0.113.206  123      UDP    0.113  129      21
2023-02-26 17:42:00  ::ffff:203.0.113.206  123      UDP    0.139  50       14
2023-02-26 17:42:00  ::ffff:203.0.113.206  123      UDP    0.105  42       14
2023-02-26 17:40:00  ::ffff:203.0.113.68   53       UDP    0.121  340      65

DDoS remediation

Once detected, there are at least two ways to stop the attack at the network level:

  • blackhole the traffic to the targeted user (RTBH), or
  • selectively drop packets matching the attack patterns (Flowspec).

Traffic blackhole

The easiest method is to sacrifice the attacked user. While this helps the attacker achieve their goal, it protects your network. It is a method supported by all routers. You can also offload this protection to many transit providers. This is useful if the attack volume exceeds your internet capacity.

This works by advertising with BGP a route to the attacked user with a specific community. The border router modifies the next hop address of these routes to a specific IP address configured to forward the traffic to a null interface. RFC 7999 defines 65535:666 for this purpose. This is known as a “remote-triggered blackhole” (RTBH) and is explained in more detail in RFC 3882.

It is also possible to blackhole the source of the attacks by leveraging unicast Reverse Path Forwarding (uRPF) from RFC 3704, as explained in RFC 5635. However, uRPF can be a serious tax on your router resources. See “NCS5500 uRPF: Configuration and Impact on Scale” for an example of the kind of restrictions you have to expect when enabling uRPF.

On the advertising side, we can use BIRD. Here is a complete configuration file to allow any router to collect them:

log stderr all;
router id 192.0.2.1;

protocol device {
  scan time 10;
}

protocol bgp exporter {
  ipv4 {
    import none;
    export where proto = "blackhole4";
  };
  ipv6 {
    import none;
    export where proto = "blackhole6";
  };
  local as 64666;
  neighbor range 192.0.2.0/24 external;
  multihop;
  dynamic name "exporter";
  dynamic name digits 2;
  graceful restart yes;
  graceful restart time 0;
  long lived graceful restart yes;
  long lived stale time 3600;  # keep routes for 1 hour!
}

protocol static blackhole4 {
  ipv4;
  route 203.0.113.206/32 blackhole {
    bgp_community.add((65535, 666));
  };
  route 203.0.113.68/32 blackhole {
    bgp_community.add((65535, 666));
  };
}
protocol static blackhole6 {
  ipv6;
}

We use BGP long-lived graceful restart to ensure routes are kept for one hour, even if the BGP connection goes down, notably during maintenance.

On the receiver side, if you have a Cisco router running IOS XR, you can use the following configuration to blackhole traffic received on the BGP session. As the BGP session is dedicated to this usage, the community is not used, but you can also forward these routes to your transit providers.

router static
 vrf public
  address-family ipv4 unicast
   192.0.2.1/32 Null0 description "BGP blackhole"
  !
  address-family ipv6 unicast
   2001:db8::1/128 Null0 description "BGP blackhole"
  !
 !
!
route-policy blackhole_ipv4_in_public
  if destination in (0.0.0.0/0 le 31) then
    drop
  endif
  set next-hop 192.0.2.1
  done
end-policy
!
route-policy blackhole_ipv6_in_public
  if destination in (::/0 le 127) then
    drop
  endif
  set next-hop 2001:db8::1
  done
end-policy
!
router bgp 12322
 neighbor-group BLACKHOLE_IPV4_PUBLIC
  remote-as 64666
  ebgp-multihop 255
  update-source Loopback10
  address-family ipv4 unicast
   maximum-prefix 100 90
   route-policy blackhole_ipv4_in_public in
   route-policy drop out
   long-lived-graceful-restart stale-time send 86400 accept 86400
  !
  address-family ipv6 unicast
   maximum-prefix 100 90
   route-policy blackhole_ipv6_in_public in
   route-policy drop out
   long-lived-graceful-restart stale-time send 86400 accept 86400
  !
 !
 vrf public
  neighbor 192.0.2.1
   use neighbor-group BLACKHOLE_IPV4_PUBLIC
   description akvorado-1

When the traffic is blackholed, it is still reported by IPFIX and sFlow. In Akvorado, use ForwardingStatus >= 128 as a filter.

While this method is compatible with all routers, it makes the attack successful as the target is completely unreachable. If your router supports it, Flowspec can selectively filter flows to stop the attack without impacting the customer.

Flowspec

Flowspec is defined in RFC 8955 and enables the transmission of flow specifications in BGP sessions. A flow specification is a set of matching criteria to apply to IP traffic. These criteria include the source and destination prefix, the IP protocol, the source and destination port, and the packet length. Each flow specification is associated with an action, encoded as an extended community: traffic shaping, traffic marking, or redirection.

To announce flow specifications with BIRD, we extend our configuration. The extended community used shapes the matching traffic to 0 bytes per second.

flow4 table flowtab4;
flow6 table flowtab6;

protocol bgp exporter {
  flow4 {
    import none;
    export where proto = "flowspec4";
  };
  flow6 {
    import none;
    export where proto = "flowspec6";
  };
  # […]
}

protocol static flowspec4 {
  flow4;
  route flow4 {
    dst 203.0.113.68/32;
    sport = 53;
    length >= 1476 && <= 1500;
    proto = 17;
  }{
    bgp_ext_community.add((generic, 0x80060000, 0x00000000));
  };
  route flow4 {
    dst 203.0.113.206/32;
    sport = 123;
    length = 468;
    proto = 17;
  }{
    bgp_ext_community.add((generic, 0x80060000, 0x00000000));
  };
}
protocol static flowspec6 {
  flow6;
}

If you have a Cisco router running IOS XR, the configuration may look like this:

vrf public
 address-family ipv4 flowspec
 address-family ipv6 flowspec
!
router bgp 12322
 address-family vpnv4 flowspec
 address-family vpnv6 flowspec
 neighbor-group FLOWSPEC_IPV4_PUBLIC
  remote-as 64666
  ebgp-multihop 255
  update-source Loopback10
  address-family ipv4 flowspec
   long-lived-graceful-restart stale-time send 86400 accept 86400
   route-policy accept in
   route-policy drop out
   maximum-prefix 100 90
   validation disable
  !
  address-family ipv6 flowspec
   long-lived-graceful-restart stale-time send 86400 accept 86400
   route-policy accept in
   route-policy drop out
   maximum-prefix 100 90
   validation disable
  !
 !
 vrf public
  address-family ipv4 flowspec
  address-family ipv6 flowspec
  neighbor 192.0.2.1
   use neighbor-group FLOWSPEC_IPV4_PUBLIC
   description akvorado-1

Then, you need to enable Flowspec on all interfaces with:

flowspec
 vrf public
  address-family ipv4
   local-install interface-all
  !
  address-family ipv6
   local-install interface-all
  !
 !
!

As with the RTBH setup, you can filter dropped flows with ForwardingStatus >= 128.

DDoS detection (continued)

In the example using Flowspec, the flows were also filtered on the length of the packet:

route flow4 {
  dst 203.0.113.68/32;
  sport = 53;
  length >= 1476 && <= 1500;
  proto = 17;
}{
  bgp_ext_community.add((generic, 0x80060000, 0x00000000));
};

This is an important addition: legitimate DNS requests are smaller than this and therefore not filtered.2 With ClickHouse, you can get the 10th and 90th percentiles of the packet sizes with quantiles(0.1, 0.9)(Bytes/Packets).
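As a small sketch of how to check this interactively (using the same clickhouse_driver client as the script further below; the host, address, and port come from the running example):

# Sketch: packet-size percentiles for one suspicious flow, to build the
# "length" clause of a Flowspec rule. Values match the running example.
from clickhouse_driver import Client as CHClient

client = CHClient(host="clickhouse.akvorado.net")
rows = client.execute(
    """
    SELECT quantiles(0.1, 0.9)(Bytes/Packets)
    FROM flows
    WHERE TimeReceived > now() - INTERVAL 5 MINUTE
      AND DstAddr = toIPv6('::ffff:203.0.113.68')
      AND SrcPort = 53
    """
)
p10, p90 = rows[0][0]
print(f"length >= {int(p10)} && <= {int(p90)};")  # BIRD flow4 syntax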

The last issue we need to tackle is how to optimize the query: it may need several seconds to collect the data and is likely to consume substantial resources from your ClickHouse database. One solution is to create a materialized view to pre-aggregate results:

CREATE TABLE ddos_logs (
  TimeReceived DateTime,
  DstAddr IPv6,
  Proto UInt32,
  SrcPort UInt16,
  Gbps SimpleAggregateFunction(sum, Float64),
  Mpps SimpleAggregateFunction(sum, Float64),
  sources AggregateFunction(uniqCombined(12), IPv6),
  countries AggregateFunction(uniqCombined(12), FixedString(2)),
  size AggregateFunction(quantiles(0.1, 0.9), UInt64)
) ENGINE = SummingMergeTree
PARTITION BY toStartOfHour(TimeReceived)
ORDER BY (TimeReceived, DstAddr, Proto, SrcPort)
TTL toStartOfHour(TimeReceived) + INTERVAL 6 HOUR DELETE ;

CREATE MATERIALIZED VIEW ddos_logs_view TO ddos_logs AS
  SELECT
    toStartOfMinute(TimeReceived) AS TimeReceived,
    DstAddr,
    Proto,
    SrcPort,
    sum(((((Bytes * SamplingRate) * 8) / 1000) / 1000) / 1000) / 60 AS Gbps,
    sum(((Packets * SamplingRate) / 1000) / 1000) / 60 AS Mpps,
    uniqCombinedState(12)(SrcAddr) AS sources,
    uniqCombinedState(12)(SrcCountry) AS countries,
    quantilesState(0.1, 0.9)(toUInt64(Bytes/Packets)) AS size
  FROM flows
  WHERE DstNetRole = 'customers'
  GROUP BY
    TimeReceived,
    DstAddr,
    Proto,
    SrcPort

The ddos_logs table is using the SummingMergeTree engine. When the table receives new data, ClickHouse replaces all the rows with the same sorting key, as defined by the ORDER BY directive, with one row which contains summarized values using either the sum() function or the explicitly specified aggregate function (uniqCombined and quantiles in our example).3

Finally, we can modify our initial query with the following one:

SELECT *
FROM (
  SELECT
    TimeReceived,
    DstAddr,
    dictGetOrDefault('protocols', 'name', Proto, '???') AS Proto,
    SrcPort,
    sum(Gbps) AS Gbps,
    sum(Mpps) AS Mpps,
    uniqCombinedMerge(12)(sources) AS sources,
    uniqCombinedMerge(12)(countries) AS countries,
    quantilesMerge(0.1, 0.9)(size) AS size
  FROM ddos_logs
  WHERE TimeReceived > now() - INTERVAL 60 MINUTE
  GROUP BY
    TimeReceived,
    DstAddr,
    Proto,
    SrcPort
)
WHERE (Gbps > 1)
   OR ((Proto = 'UDP') AND (Gbps > 0.2)) 
   OR ((sources > 20) AND (Gbps > 0.1)) 
   OR ((countries > 10) AND (Gbps > 0.1))
ORDER BY
  TimeReceived DESC,
  Gbps DESC

Gluing everything together

To sum up, building an anti-DDoS system requires following these steps:

  1. define a set of criteria to detect a DDoS attack,
  2. translate these criteria into SQL requests,
  3. pre-aggregate flows into SummingMergeTree tables,
  4. query and transform the results to a BIRD configuration file, and
  5. configure your routers to pull the routes from BIRD.

A Python script like the following one can handle the fourth step. For each attacked target, it generates both a Flowspec rule and a blackhole route.

import filecmp
import logging
import os
import socket
import subprocess
import time
import types

from clickhouse_driver import Client as CHClient

logger = logging.getLogger("ddos-mitigation")

# Put your SQL query here!
SQL_QUERY = "…"

# How many anti-DDoS rules do we want at the same time?
MAX_DDOS_RULES = 20

def empty_ruleset():
    ruleset = types.SimpleNamespace()
    ruleset.flowspec = types.SimpleNamespace()
    ruleset.blackhole = types.SimpleNamespace()
    ruleset.flowspec.v4 = []
    ruleset.flowspec.v6 = []
    ruleset.blackhole.v4 = []
    ruleset.blackhole.v6 = []
    return ruleset

current_ruleset = empty_ruleset()

client = CHClient(host="clickhouse.akvorado.net")
while True:
    results = client.execute(SQL_QUERY)
    seen = {}
    new_ruleset = empty_ruleset()
    for (t, addr, proto, port, gbps, mpps, sources, countries, size) in results:
        if (addr, proto, port) in seen:
            continue
        seen[(addr, proto, port)] = True

        # Flowspec
        if addr.ipv4_mapped:
            address = addr.ipv4_mapped
            rules = new_ruleset.flowspec.v4
            table = "flow4"
            mask = 32
            nh = "proto"
        else:
            address = addr
            rules = new_ruleset.flowspec.v6
            table = "flow6"
            mask = 128
            nh = "next header"
        if size[0] == size[1]:
            length = f"length = {int(size[0])}"
        else:
            length = f"length >= {int(size[0])} && <= {int(size[1])}"
        header = f"""
# Time: {t}
# Target: {address}, protocol: {proto}, port: {port}
# Gbps/Mpps: {gbps:.3}/{mpps:.3}, packet size: {int(size[0])}<=X<={int(size[1])}
# Sources: {sources}, countries: {countries}
"""
        rules.append(
            f"""{header}
route {table} {{
  dst {address}/{mask};
  sport = {port};
  {length};
  {nh} = {socket.getprotobyname(proto)};
}}{{
  bgp_ext_community.add((generic, 0x80060000, 0x00000000));
}};
"""
        )

        # Blackhole
        if addr.ipv4_mapped:
            rules = new_ruleset.blackhole.v4
        else:
            rules = new_ruleset.blackhole.v6
        rules.append(
            f"""{header}
route {address}/{mask} blackhole {{
  bgp_community.add((65535, 666));
}};
"""
        )

    # Keep the number of Flowspec rules bounded.
    new_ruleset.flowspec.v4 = list(
        set(new_ruleset.flowspec.v4[:MAX_DDOS_RULES])
    )
    new_ruleset.flowspec.v6 = list(
        set(new_ruleset.flowspec.v6[:MAX_DDOS_RULES])
    )

    # TODO: advertise changes by mail, chat, ...

    current_ruleset = new_ruleset
    changes = False
    for rules, path in (
        (current_ruleset.flowspec.v4, "v4-flowspec"),
        (current_ruleset.flowspec.v6, "v6-flowspec"),
        (current_ruleset.blackhole.v4, "v4-blackhole"),
        (current_ruleset.blackhole.v6, "v6-blackhole"),
    ):
        path = os.path.join("/etc/bird/", f"{path}.conf")
        with open(f"{path}.tmp", "w") as f:
            for r in rules:
                f.write(r)
        changes = (
            changes
            or not os.path.exists(path)
            # Compare file contents to decide whether BIRD needs a reload.
            or not filecmp.cmp(path, f"{path}.tmp", shallow=False)
        )
        os.rename(f"{path}.tmp", path)

    if changes:
        proc = subprocess.Popen(
            ["birdc", "configure"],
            stdin=subprocess.DEVNULL,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        stdout, stderr = proc.communicate(None)
        stdout = stdout.decode("utf-8", "replace")
        stderr = stderr.decode("utf-8", "replace")
        if proc.returncode != 0:
            logger.error(
                "{} error:\n{}\n{}".format(
                    "birdc reconfigure",
                    "\n".join(
                        [" O: {}".format(line) for line in stdout.rstrip().split("\n")]
                    ),
                    "\n".join(
                        [" E: {}".format(line) for line in stderr.rstrip().split("\n")]
                    ),
                )
            )

    time.sleep(60)  # polling interval between detection runs (an assumption)

Until Akvorado integrates DDoS detection and mitigation, the ideas presented in this blog post provide a solid foundation to get started with your own anti-DDoS system. 🛡️


  1. ClickHouse can export results using Markdown format when appending FORMAT Markdown to the query. ↩︎

  2. While most DNS clients should retry with TCP on failures, this is not always the case: until recently, musl libc did not implement this. ↩︎

  3. The materialized view also aggregates the data at hand, both for efficiency and to ensure we work with the right data types. ↩︎

06 March, 2023 07:34AM by Vincent Bernat

Valhalla's Things

Bookbinding: photo album

Posted on March 6, 2023

an open book with a watercolour of a costume pasted from two corners on one page; near the spine there is a sliver of paper as a spacer.

When I paint postcards I tend to start with a draft (usually on lightweight, 250 g/m², watercolour paper), then trace1 the drawing on blank postcards and paint it again.

I keep the drafts for a number of reasons; for the views / architectural ones I’m using a landscape photo album that I bought many years ago, but lately I’ve also sent a few cards with my historical outfits to people who like to be kept updated on that, and I wanted a different book for those, both for better organization and to be able to keep them in the portrait direction.

If you know me, you can easily guess that buying one wasn’t considered as an option.

A closed hardcover book in uniform dark grey.

Since I’m not going to be writing on the pages, I decided to use a relatively cheap 200 g / m² linoprint paper with a nice feel, and I’ve settled on a B6 size (before trimming) to hold A6 postcard drafts.

For the binding I’ve decided to use a technique I learned from a craft book ages ago that doesn’t use tapes, and added a full hard cover in dark grey linen-feel2 paper. For the end-papers I’ve used some random sheets of light blue paper (probably around 100-something g / m²), and that’s the thing where I could have done better, but they work.

Up to now there isn’t anything I hadn’t done before, what was new was the fact that this book was meant to hold things between the pages, and I needed to provide space for them.

a book seen from the top: near the spine all signatures are made of 4 sheets, but two of them for each signature are just stubs, and leave open spaces between the pages.

After looking on the internet for solutions, I settled on adding spacers by making a signature composed of paper - spacer - paper - spacer, with the spacers being 2 cm wide, folded in half.

And then, between finishing binding the book and making the cover I utterly forgot to add the head bands. Argh. It’s not the first time I make this error.

The same book, open on an empty page.

I’m happy enough with the result. There are things that are easy to improve on in the next iteration (endpapers and head bands), and something in me is not 100% happy that the spacers aren’t placed between every sheet: there are places with no spacer and places with two of them. But I can’t think of (and couldn’t find) a way to do it otherwise with a sewn book, unless I sew each individual sheet, which sounds way too bulky (the album I’m using for the landscapes was glued, but I didn’t really want to go that way).

The size is smaller than the other one I was using and doesn’t leave a lot of room around the paintings, but that isn’t necessarily a bad thing, because it also means less wasted space.

I believe that one of my next projects will be another similar book in a landscape format, for those postcard drafts that are neither landscapes nor clothing related.

And then maybe another? or two? or…

Traceback (most recent call last):

TooManyProjectsError: project queue is full

  1. yes, trace. I can’t draw. I have too many hobbies to spend the required amount of time every day to practice it. I’m going to fake it. 85% of the time I’m tracing from a photo I took myself, so I’m not even going to consider it cheating.↩︎

  2. the description of which, on the online shop, made it look like fabric, even if the price was suspiciously low, so I bought a sheet to see what it was. It wasn’t fabric. It feels and looks nice, but I’m not sure how sturdy it’s going to be.↩︎

06 March, 2023 12:00AM

March 05, 2023

Enrico Zini

Heart-driven drum loop

I have Python code for reading a heart rate monitor.

I have Python code to generate MIDI events.

Could I resist putting them together? Clearly not.

Here's Jack Of Hearts, a JACK MIDI drum loop generator that uses the heart rate for BPM, and an improvised way to compute heart rate increase/decrease to add variations in the drum pattern.

It's very simple minded and silly. To me it was a fun way of putting unrelated things together, and Python worked very well for it.

05 March, 2023 10:53PM

Generating MIDI events with JACK and Python

I had a go at trying to figure out how to generate arbitrary MIDI events and send them out over a JACK MIDI channel.

Setting up JACK and Pipewire

Pipewire has a JACK interface, which in theory means one could use JACK clients out of the box without extra setup.

In practice, one needs to tell JACK clients which set of libraries to use to communicate with servers, and Pipewire's JACK server is not the default choice.

To tell JACK clients to use Pipewire's server, you can either:

  • on a client-by-client basis, wrap the commands with pw-jack
  • to change the system default: cp /usr/share/doc/pipewire/examples/ld.so.conf.d/pipewire-jack-*.conf /etc/ld.so.conf.d/ and run ldconfig (see the Debian wiki for details)

Programming with JACK

Python has a JACK client library that worked flawlessly for me so far.

Everything with JACK is designed around minimizing latency. Everything happens around a callback that gets called from a separate thread, and which gets a buffer to fill with events.

All the heavy processing needs to happen outside the callback, and the callback is only there to do the minimal amount of work needed to shovel the data your application produced into JACK channels.

Generating MIDI messages

The Mido library can be used to parse and create MIDI messages and it also worked flawlessly for me so far.

One needs to study a bit what kind of MIDI message one needs to generate (like "note on", "note off", "program change") and what arguments they get.

It also helps to read about the General MIDI standard which defines mappings between well-known instruments and channels and instrument numbers in MIDI messages.
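To make this concrete, here is a minimal self-contained sketch combining the two libraries (this is not the pyeep code; the client and port names are made up for the example). It registers a JACK MIDI output and sends a single mido-encoded note from the process callback:

#!/usr/bin/python3
# Minimal sketch: send one note over a JACK MIDI port using the jack
# (JACK-Client) and mido libraries. Not the pyeep implementation.
import threading

import jack
import mido

client = jack.Client("midi-sketch")
outport = client.midi_outports.register("output")
pending: list[bytes] = []  # messages queued for the next process cycle
lock = threading.Lock()

@client.set_process_callback
def process(frames: int) -> None:
    # Runs in JACK's realtime thread: do as little work as possible here.
    outport.clear_buffer()
    with lock:
        while pending:
            # Offset 0 = emit at the start of this buffer.
            outport.write_midi_event(0, pending.pop(0))

with client:
    input("Connect 'midi-sketch:output' to a synth, then press Enter to play\n")
    msg = mido.Message("note_on", note=38, velocity=100, channel=9)  # GM snare
    with lock:
        pending.append(bytes(msg.bytes()))
    input("Press Enter to quit\n")

pyeep layers a timed event queue on top of this basic shape, which is what the next sections are about.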

A timed message queue

To keep a queue of events that happen over time, I implemented a Delta List that indexes events by their future frame number.

I called the humble container for my audio experiments pyeep and here's my delta list implementation.
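As a toy sketch of the concept (the actual pyeep implementation differs in its details), a timed queue can be as simple as a heap of events keyed on the absolute frame at which they are due:

# Toy sketch of a timed event queue indexed by future frame number
# (a simplification; the real pyeep delta list differs).
import heapq
from typing import Any

class FrameQueue:
    def __init__(self) -> None:
        self._heap: list[tuple[int, int, Any]] = []
        self._seq = 0  # tie-breaker keeps same-frame events in insertion order

    def add(self, frame: int, event: Any) -> None:
        heapq.heappush(self._heap, (frame, self._seq, event))
        self._seq += 1

    def pop_until(self, frame: int):
        """Yield (frame, event) for everything due up to the given frame."""
        while self._heap and self._heap[0][0] <= frame:
            due, _, event = heapq.heappop(self._heap)
            yield due, event

q = FrameQueue()
q.add(4800, "note_on")   # 0.1 s in the future at a 48 kHz sample rate
q.add(9600, "note_off")
print(list(q.pop_until(5000)))  # [(4800, 'note_on')]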

A JACK player

The simple JACK MIDI player backend is also in pyeep.

It needs to protect the delta list with a mutex since we are working across thread boundaries, but it tries to do as little work under lock as possible, to minimize the risk of locking the realtime thread for too long.

The play method converts delays in seconds to frame counts, and the on_process callback moves events from the queue to the jack output.

Here's an example script that plays a simple drum pattern:

#!/usr/bin/python3

# Example JACK midi event generator
#
# Play a drum pattern over JACK

import time

from pyeep.jackmidi import MidiPlayer

# See:
# https://soundprogramming.net/file-formats/general-midi-instrument-list/
# https://www.pgmusic.com/tutorial_gm.htm

DRUM_CHANNEL = 9

with MidiPlayer("pyeep drums") as player:
    beat: int = 0
    while True:
        # General MIDI drums: 35 = acoustic bass drum, 38 = acoustic snare,
        # 42 = closed hi-hat. Each note_off matches the note it switches off.
        player.play("note_on", velocity=64, note=35, channel=DRUM_CHANNEL)
        player.play("note_off", note=35, channel=DRUM_CHANNEL, delay_sec=0.5)
        if beat == 0:
            player.play("note_on", velocity=100, note=38, channel=DRUM_CHANNEL)
            player.play("note_off", note=38, channel=DRUM_CHANNEL, delay_sec=0.3)
        if beat + 1 == 2:
            player.play("note_on", velocity=100, note=42, channel=DRUM_CHANNEL)
            player.play("note_off", note=42, channel=DRUM_CHANNEL, delay_sec=0.3)

        beat = (beat + 1) % 4
        time.sleep(0.3)

Running the example

I ran the jack_drums script, and of course not much happened.

First I needed a MIDI synthesizer. I installed fluidsynth and ran it on the command line with no arguments. It registered with JACK, ready to do its thing.

Then I connected things together. I used qjackctl, opened the graph view, and connected the MIDI output of "pyeep drums" to the "FLUID Synth input port".

fluidsynth's output was already automatically connected to the audio card and I started hearing the drums playing! 🥁️🎉️

05 March, 2023 11:14AM

Reproducible Builds

Reproducible Builds in February 2023

Welcome to the February 2023 report from the Reproducible Builds project. As ever, if you are interested in contributing to our project, please visit the Contribute page on our website.


FOSDEM 2023 was held in Brussels on the 4th & 5th of February and featured a number of talks related to reproducibility. In particular, Akihiro Suda gave a talk titled Bit-for-bit reproducible builds with Dockerfile discussing deterministic timestamps and deterministic apt-get (original announcement). There was also an entire ‘track’ of talks on Software Bill of Materials (SBOMs). SBOMs are an inventory for software with the intention of increasing the transparency of software components (the US National Telecommunications and Information Administration (NTIA) published a useful Myths vs. Facts document in 2021).


On our mailing list this month, Larry Doolittle was puzzled why the Debian verilator package was not reproducible [], but Chris Lamb pointed out that this was due to the use of Python’s datetime.fromtimestamp over datetime.utcfromtimestamp [].


James Addison also was having issues with a Debian package: in this case, the alembic package. Chris Lamb was also able to identify the Sphinx documentation generator as the cause of the problem, and provided a potential patch that might fix it. This was later filed upstream [].


Anthony Harrison wrote to our list twice, first to introduce himself and his background and later to mention the increasing relevance of Software Bill of Materials (SBOMs):

As I am sure everyone is aware, there is a growing interest in [SBOMs] as a way of improving software security and resilience. In the last two years, the US through the Exec Order, the EU through the proposed Cyber Resilience Act (CRA) and this month the UK has issued a consultation paper looking at software security and SBOMs appear very prominently in each publication. []


Tim Retout wrote a blog post discussing AlmaLinux in the context of CentOS, RHEL and supply-chain security in general []:

Alma are generating and publishing Software Bill of Material (SBOM) files for every package; these are becoming a requirement for all software sold to the US federal government. What’s more, they are sending these SBOMs to a third party (CodeNotary) who store them in some sort of Merkle tree system to make it difficult for people to tamper with later. This should theoretically allow end users of the distribution to verify the supply chain of the packages they have installed?


Debian


F-Droid & Android


diffoscope

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats.

This month, Chris Lamb released versions 235 and 236; Mattia Rizzolo later released version 237.

Contributions include:

  • Chris Lamb:
    • Fix compatibility with PyPDF2 (re. issue #331) [][][].
    • Fix compatibility with ImageMagick version 7.1 [].
    • Require at least version 23.1.0 to run the Black source code tests [].
    • Update debian/tests/control after merging changes from others [].
    • Don’t write test data during a test [].
    • Update copyright years [].
    • Merged a large number of changes from others.
  • Akihiro Suda edited the .gitlab-ci.yml configuration file to ensure that versioned tags are pushed to the container registry [].

  • Daniel Kahn Gillmor provided a way to migrate from PyPDF2 to pypdf (#1029741).

  • Efraim Flashner updated the tool metadata for isoinfo on GNU Guix [].

  • FC Stegerman added support for Android resources.arsc files [], improved a number of file-matching regular expressions [][] and added support for Android dexdump []; they also fixed a test failure (#1031433) caused by Debian’s black package having been updated to a newer version.

  • Mattia Rizzolo:
    • updated the release documentation [],
    • fixed a number of Flake8 errors [][],
    • updated the autopkgtest configuration to only install aapt and dexdump on architectures where they are available [], making sure that the latest diffoscope release is a good fit for the upcoming Debian bookworm freeze.

reprotest

Reprotest version 0.7.23 was uploaded to both PyPI and Debian unstable, including the following changes:

  • Holger Levsen improved a lot of documentation [][][], tidied the documentation as well [][], and experimented with a new --random-locale flag [].

  • Vagrant Cascadian adjusted reprotest to no longer randomise the build locale and use a UTF-8 supported locale instead […] (re. #925879, #1004950), and to also support passing --vary=locales.locale=LOCALE to specify the locale to vary [].

Separate to this, Vagrant Cascadian started a thread on our mailing list questioning the future development and direction of reprotest.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In February, the following changes were made by Holger Levsen:

  • Add three new OSUOSL nodes [][][] and decommission the osuosl174 node [].
  • Change the order of listed Debian architectures to show the 64-bit ones first [].
  • Reduce the frequency that the Debian package sets and dd-list HTML pages update [].
  • Sort “Tested suite” consistently (and Debian unstable first) [].
  • Update the Jenkins shell monitor script to only query disk statistics every 230min [] and improve the documentation [][].

Other development work

disorderfs version 0.5.11-3 was uploaded by Holger Levsen, fixing a number of issues with the manual page [][][].


Bernhard M. Wiedemann published another monthly report about reproducibility within openSUSE.


If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can get in touch with us via:

05 March, 2023 08:53AM

March 04, 2023

hackergotchi for Matt Brown

Matt Brown

Retrospective: Feb 2023

February ended up being a very short work month as I made a last minute decision to travel to Adelaide for the first 2 weeks of the month to help my brother with some house renovations he was undertaking. I thought I might be able to keep up with some work and my writing goals in the evenings while I was there, but days of hard manual labour are such an unfamiliar routine for me that I didn’t have any energy left to make good on that intention.

The majority of my time and focus for the remaining one and a half weeks of the month was catching up on the consulting work that I had pushed back while in Adelaide.

So while it doesn’t make for a thrilling first month to look back and report on, overall I’m not unhappy with what I achieved given the time available. Next month, I hope to be able to report some more exciting progress on the product development front as well.

Monthly Scoring Rubric

I’m evaluating each goal using a 10 point scale based on execution velocity and risk level, rather than absolute success (which is what I will look at in the annual/mid-year review). If velocity is good and risk is low or well managed the score is high, if either the velocity is low, or risk is high then the score is low. E.g:

  • 10 - perfect execution with low-risk, on track for significantly overachieving the goal.
  • 7 - good execution with low or well managed risk, highly likely to achieve the goal.
  • 5 - execution and risk are OK, should achieve the goal if all goes well.
  • 3 - execution or risk have problems, goal is at risk.
  • 0 - stalled, with no obvious path to recovery or success.

Goals

Consulting - 6/10

Goal: Execute a series of successful consulting engagements, building a reputation for myself and leaving happy customers willing to provide testimonials that support a pipeline of future opportunities.

  • I have one active local engagement assisting a software team with migrating their application from a single to multi-region architecture.
  • Two promising international engagements which were close to starting both cancelled based on newly issued company policies freezing their staffing/outsourcing budgets due to the current economic climate.

I’m happy with where this is at - I hit 90% of my target hours in February (taking into account 2 weeks off) and the feedback I’m receiving is positive. The main risk is the future pipeline of engagements, particularly if the cancellations indicate a new pattern. I’m not overly concerned yet, as all the opportunities to date have been from direct or referred contacts in my personal network, so there’s plenty of potential to more actively solicit work to create a healthier pipeline.

Product Development - 3/10

Goal: Grow my product development skill set by taking several ideas to MVP stage with customer feedback received, and launch at least one product which generates revenue and has growth potential.

  • Accelerating electrification - I continued to keep up with industry news and added some interesting reports to my reading queue, but made no significant progress towards identifying a specific product opportunity.

  • Farm management SaaS - no activity or progress at all.

  • co2mon.nz - I put significant thought and planning into how to approach a second iteration of this product. I started writing and completed 80% of a post to communicate the revised business plan, but it’s not ready for publication yet, and even if it was, the real work towards it would need to actually happen to score more points here.

I had high hopes to make at least some progress in all three areas in February, but it just didn’t happen due to lack of time. The good news is that since the low score here is purely execution driven, there’s no new risks or blockers that will hinder much better progress here in March.

Professional Network Development - 8/10

Goal: To build a professional relationship with at least 30 new people this year.

This is off to a strong start: I made 4 brand new connections and re-established contact with 9 other existing people I’d not talked to for a while. I’ve found the conversations energising and challenging and I’m looking forward to continuing to keep this up.

Writing - 2/10

Goal: To publish a high-quality piece of writing on this site at least once a week.

Well off track as already noted. I am enjoying the writing process and I continue to find it useful in developing my thoughts and forcing me to challenge my assumptions, but coupling the writing process with the thinking/planning that is a prerequisite to get those benefits definitely makes my output a lot slower than I was expecting.

The slower speed, combined with the obvious time constraints of this month, makes for an unfortunate double whammy to be starting with, but I think with some planning and preparation it should have been avoidable by having a backlog of pre-written content for use in weeks where I’m on holiday or otherwise busy.

It’s worth noting that among all the useful feedback I received, this writing target was often called out as overly ambitious, or likely to be counterproductive to producing quality writing. The feedback makes sense - for now I’m not planning to change the goal (I might at my 6-month review point), but I am going to be diligent about adhering to my quality standard, which in turn means I’m choosing to accept missing a weekly post here and there and taking a lower score on the goal overall.

I apologise if you’ve been eagerly waiting for writing that never arrived over February!

Community - 5/10

Goal: To support the growth of my local technical community by volunteering my experience and knowledge with others through activities such as mentoring, conference talks and similar.

  • I was an invited participant of the monthly KiwiSRE meet-up which was discussing SRE team models, and in particular I was able to speak to my experiences as described in an old CRE blog post on this topic.

  • I joined the program committee for SREcon23 APAC which is scheduled for mid-June in Singapore. I also submitted two talk proposals of my own (not sharing the details for now, since the review process is intended to be blind) which I’m hopeful might make the grade with my fellow PC members!

Feedback

As always, I’d love to hear from you if you have thoughts or feedback triggered by anything I’ve written above. In particular, it would be useful to know whether you find this type of report interesting to read and/or what you’d like to see added/removed or changed.

04 March, 2023 01:03AM

March 03, 2023

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Goodbye Bullseye — report from the Montreal 2023 BSP

Hello World! I haven't really had time to blog here since the start of the semester, as I've been pretty busy at work1.

All this to say, this report for the Bug Squashing Party we held in Montreal last weekend is a little late, sorry :)

First of all, I'm pleased to announce our local community seems to be doing great and has recovered from the pandemic-induced lull. May COVID stay away from our bodies forever.

This time around, a total of 9 people made it to what has become somewhat of a biennial tradition2. We worked on a grand total of 14 bugs and even managed to close some!

It looks like I was too concentrated on bugs to take a picture of the event... To redeem myself, I hereby offer you a picture of a cute-but-hairless cat I met on Sunday morning:

Picture of a curious sphinx cat on a table

You should try to join an upcoming BSP or to organise one if you can. It's loads of fun and you'll be helping the project make the next release happen sooner!

As always, thanks to Debian for granting us a budget for the food and to rent the venue.

Goodbye Bullseye!


  1. Which I guess is a good thing, since it means I actually have work this semester :O 

  2. See our previous BSPs in 2017, 2019 and 2021

03 March, 2023 09:30PM by Louis-Philippe Véronneau

Sven Hoexter

exfat-fuse 1.4 in experimental

I know a few people hold on to the exFAT fuse implementation due to its support for timezone offsets, so here is a small update for you. Andrew released 1.4.0, which includes the timezone offset support that was so far only part of the git master branch. It also fixes a, from my point of view, very minor security issue, CVE-2022-29973. In addition to that, it's the first build with fuse3 support. If you still use this driver, pick it up in experimental (we're in the bookworm freeze right now), and give it a try. I'm personally not using it anymore beyond a very basic "does it mount" test.

03 March, 2023 03:39PM

Russell Coker

Hyper Threading on the E5-2696v3

I just did some quick tests of hyper-threading on my new E5-2696v3 CPU. I compiled the Linux 6.0.10 kernel with and without hyper-threading enabled. Here’s the times for “make -j36 bzImage” and “make -j36 modules” with HT enabled:

real    2m26.540s
user    55m25.121s
sys     9m56.443s

real    10m57.374s
user    309m21.531s
sys     58m1.070s

Here’s the times for “make -j18 bzImage” and “make -j18 modules” with HT disabled:

real    2m40.501s
user    31m35.295s
sys     5m43.523s

real    11m39.313s
user    170m46.840s
sys     31m37.756s

That’s about 9.5% faster for bzImage and 6.4% faster for modules.
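For reference, here is how those percentages fall out of the “real” times above:

# Quick check of the speedup figures from the wall-clock times above.
ht_bzimage, noht_bzimage = 2 * 60 + 26.540, 2 * 60 + 40.501
ht_modules, noht_modules = 10 * 60 + 57.374, 11 * 60 + 39.313
print(f"bzImage: {(noht_bzimage / ht_bzimage - 1) * 100:.1f}% faster with HT")  # ~9.5%
print(f"modules: {(noht_modules / ht_modules - 1) * 100:.1f}% faster with HT")  # ~6.4%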

So for a performance boost that’s between 5% and 10% I get greater exposure to kernel security issues and more difficulty tracking CPU time. That doesn’t seem like a good trade-off so I’ve put the “nosmt” kernel command-line option back.

03 March, 2023 10:35AM by etbe

March 02, 2023

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, January/February 2023

In January I was assigned 24 hours by Freexian's Debian LTS initiative and worked 8 hours. In February I was assigned another 8 hours and worked 8 hours.

I updated the linux (4.19) package to the latest stable update, but didn't upload it. I merged the latest bullseye security update into the linux-5.10 package and uploaded that.

02 March, 2023 04:16PM

March 01, 2023

Russ Allbery

Small book haul

I'm a bit behind on both free software maintenance and on writing reviews, what with one thing and another, but hopefully will have time to catch up next month. Meanwhile, publishing continues and books keep catching my eye.

Blake Crouch (ed.) — Forward (sff anthology)
Kate Elliott — The Keeper's Six (sff)
Ruthanna Emrys — A Half-Built Garden (sff)
R.F. Kuang — Babel (sff)
Seanan McGuire — The Unkindest Tide (sff)
Seanan McGuire — A Killing Frost (sff)
Seanan McGuire — When Sorrows Come (sff)
Seanan McGuire — Be the Serpent (sff)
Terry Pratchett — Thief of Time (sff)
Terry Pratchett — The Last Hero (sff)
Terry Pratchett — The Amazing Maurice and His Educated Rodents (sff)
Terry Pratchett — Night Watch (sff)
Terry Pratchett — The Wee Free Men (sff)
Terry Pratchett — Monstrous Regiment (sff)

I keep hearing amazing things about Babel, so it's very high on the list.

01 March, 2023 05:28AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Got crosvm building in Debian.

Got crosvm building in Debian. Now to rebase and try to upload. Or maybe upload the version I have first and then rebase.

01 March, 2023 04:32AM by Junichi Uekawa

hackergotchi for Debian XMPP Team

Debian XMPP Team

XMPP What's new in Debian 12 bookworm

On Tue 13 July 2021 there was a blog post about new XMPP-related software releases which had been uploaded to Debian 11 (bullseye). Today, we will inform you about updates for the upcoming Debian release, bookworm.

A lot of new releases have been provided by the upstream projects. There were a lot of changes to XMPP clients like Dino, Gajim, Profanity, Poezio and others. The XMPP servers have also been enhanced.

Unfortunately, we can not provide a list of all the changes which have been done, but will try to highlight some of the changes and new features.

BTW, feel free to join the Debian User Support on Jabber at xmpp:debian@conference.debian.org?join.

You can find a list of 58 packages of the Debian XMPP team on the XMPP QA Page.

  • Dino, the modern XMPP client, has been upgraded from 0.2.0 to 0.4.0. The new version supports encrypted calls and group calls, and reactions give you a way to respond to a message with an emoji. You can find more information about Dino 0.3.0 and Dino 0.4.0 in the release notes of the upstream project. Dino is using GTK4 / libadwaita, which provides widgets for mobile-friendly UIs. Changes have also been made to Dino's main view.
  • Gajim, a GTK+-based Jabber client, has been upgraded from 1.3.1 to 1.7.1. Since 1.4 Gajim has a new UI which supports spaces. 1.5.2 supports a content viewer for PEP nodes. 1.6.0 uses libsoup3 and Python 3.10. Audio preview looks a lot nicer with a wave graph visualization, and profile images (avatars) are no longer limited to JPG. The plugins gajim-appindicatorintegration, gajim-plugininstaller, gajim-syntaxhighlight and gajim-urlimagepreview are obsolete; these features have been moved into gajim. There were a lot of releases in Gajim. You can find the full story at https://gajim.org/post/
  • Profanity, the console-based XMPP client, has been upgraded from 0.10.0 to 0.13.1. Profanity supports XEP-0377 Spam Reporting and XEP-0157 server contact information discovery. It now marks a window with an attention flag, supports an updated HTTP Upload (XEP-0363), and messages can be composed with an external editor. It also features easy quoting, in-band account registration (XEP-0077), printing an OMEMO verification QR code, and many more.
  • Kaidan, a simple and user-friendly Jabber/XMPP client based on Qt has been updated from 0.7.0 to 0.8.0. The new release supports XEP-0085: Chat State Notifications and XEP-0313: Message Archive Management.
  • Poezio, a console-based XMPP client, has been updated from 0.13.1 to 0.14. Poezio is now under GPLv3+. The new release supports requests for voice, and the /join command supports using an XMPP URI. More information at https://lab.louiz.org/poezio/poezio/-/raw/v0.14/CHANGELOG.
  • Swift, a cross-platform XMPP client written in C++, is back in Debian. In 2015 the client was removed from testing; it returns with version 5.0.

Server

  • prosody, the lightweight extensible XMPP server, has been upgraded from 0.11.9 to 0.12.2. Mobile and connectivity optimizations, a new module for HTTP file sharing, audio/video calling support. See the release announcement for more info. You will also find a lot of new modules which have been added in 0.12.0. Version 0.12.3 is waiting to migrate from unstable to testing.
  • ejabberd, extensible realtime platform (XMPP server + MQTT broker + SIP service) has been updated from Version 21.01 to 23.01. The new version supports the latest version of MIX (XEP-0369). There were also changes for SQL and MUC. See the release information for 22.10 and 23.01 for more details.

Libs

  • libstrophe, the XMPP C library, has been upgraded from 0.10.1 to 0.12.2. The library has SASL EXTERNAL support (XEP-0178), support for manual certificate verification, and Stream Management support (XEP-0198).
  • python-nbxmpp 2.0.2 to 4.2.0 - used by gajim
  • qxmpp 1.3.2 to 1.4.0
  • slixmpp 1.7.0 to 1.8.3 (see https://lab.louiz.org/poezio/slixmpp/-/tags/slix-1.8.0)
  • loudmouth 1.5.3 to 1.5.4
  • libomemo-c, new in Debian with version 0.5.0 - a fork of libsignal-protocol-c

Others

  • There were some changes to the Libervia packages, formerly known as Salut à Toi (SaT), in Debian. The most visible change is that Salut à Toi has been renamed to libervia:
  • salutatoi is now libervia-backend (0.9.0)
  • sat-xmpp-primitivus is now libervia-tui
  • sat-xmpp-core is now libervia-backend
  • sat-xmpp-jp is now libervia-cli
  • sat-pubsub is now libervia-pubsub (0.4.0)
  • gsasl has been updated from 1.10.0 to 2.2.0
  • libxeddsa 2.0.0 is new in Debian - toolkit around Curve25519 and Ed25519 key pairs

Happy chatting - keep in touch with your family and friends via Jabber / XMPP - XMPP is an open standard of the Internet Engineering Task Force (IETF) for instant messaging.

01 March, 2023 12:00AM by Debian XMPP Team

Reproducible Builds (diffoscope)

diffoscope 238 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 238. This version includes the following changes:

* autopkgtest: fix tool name in the skippable list.

You can find out more by visiting the project homepage.

01 March, 2023 12:00AM

February 28, 2023

Paul Wise

FLOSS Activities Feb 2023

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian BTS: unarchive/reopen/triage bugs for reintroduced package servefile
  • Debian IRC: turn an old channel into a redirect to the right one
  • Debian wiki: unblock IP addresses, approve accounts

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The pyemd/sptag work was sponsored. All other work was done on a volunteer basis.

28 February, 2023 11:38PM

hackergotchi for Shirish Agarwal

Shirish Agarwal

Cutting off body parts and Lenovo

A heads-up: this blog post is slightly unpleasant. I do wish there were a standardized way, just like movies have ratings such as General, 14+, 16+, Adult and whatnot, so people could share posts without getting into trouble. Please consider this blog post as somewhat mature and perhaps disturbing.

Cutting off body parts

For the last couple of months or so we have been getting daily reports of men or women being killed and then chopped into pieces, and this is being 'normalized'. During my growing-up years, the only such case I remember was the 1995 Tandoor case, and it jolted the conscience of the nation. But it seems a lot of water has passed under the bridge, as no one seems to be shocked anymore 😦 Also shocking is the number of heart attacks that young people are getting. Dunno the reason for either. Just saw this yesterday. The first thing that came to my mind was, at least she wasn't chopped. It was only later I realized that the younger sister may have wanted to educate herself or had some other dreams, but because of some evil customs had to be given away in marriage. No outrage here for anything, not even child marriage :(. How have we become so insensitive? And it's mostly Hindus killing Hindus, but still no outrage. We have been killing Muslims and Christians, so that I guess is just par for the course :(. I wish I could say there is a solution, but there seems to be none 😦 Even child abuse cases have been going up, but sad to say even they are being normalized. It's only when a US agency or somebody feels shocked that we feel shocked; otherwise we have become numb 😦

AMD and Lenovo Lappies

About a couple of months ago I made a blog post about lappies. Then Russell reached out to me on Twitter and we engaged. One thing led to another, and soon, while reading on some other topic, I came across this –

The above is a video presentation given by Mark Pearson. Sad to say, it was not illuminating enough, especially the whole 'boothole' thing. I did see three blog posts to get some more insight. The security entry also shared some news. I also reached out to Mr. Pearson to learn the status and to enquire if there are any new lappies without an OS that I can buy from Lenovo. Sadly, both these e-mails went unanswered. Maybe they went to spam or something else; I have no clue. While other organizations did work on it, Debian was kinda side-lined, hence the annoyance of the Debian maintainers that the whole thing came out of left field. And this doesn't just affect Debian but all those downstream distributions that rely on Debian 😦 . Now, while it's almost a year since then and probably all has been fixed, there haven't been any instructions that I could find that tell me whether there is a new way or whether the old way still works. In any case, I do think the bookworm release will probably have all the fixes needed. IIRC, we entered 'soft freeze' just a couple of weeks back.

I have to admit something though: I have never used Secure Boot as it has been designed, partially because I always run testing, irrespective of whatever device I use. And AFAIK the whole idea of Secure Boot is to have few updates, unlike testing, which is kind of a rolling release. While Secure Boot wants the same bits, all the underlying bits, in testing it's hard to ensure that, as the idea is to test new releases of software and see what works and what breaks until we send it to the final release (something like 'Bookworm'). FWIW, currently 'bookworm' and 'testing' are one and the same until Bookworm releases, and then testing will get its own updates from the next hour/day onwards. A quick way to check your own machine is sketched below.
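
For the curious, a minimal check of your machine's Secure Boot state, assuming the mokutil package is installed:

# query the firmware's Secure Boot state through the kernel's EFI interface
mokutil --sb-state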



28 February, 2023 02:09PM by shirishag75

Utkarsh Gupta

FOSS Activities in February 2023

Here’s my (forty-first) monthly but brief update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 50th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

There’s a bunch of things I do, both, technical and non-technical. Here are the things I did this month:

Uploads

Others

  • Looked up some Release team documentation.
  • Sponsored php-font-lib and php-dompdf-svg-lib for William.
  • Granted DM rights for php-dompdf.
  • Mentoring for newcomers.
  • Reviewed micro bits for Nilesh, new uploads and changes.
  • Ruby sprints.
  • Bug work (on BTS and #debian-ruby) for rails and redmine.
  • Moderation of -project mailing list.

A huge thanks to Freexian for sponsoring my Debian work and Entrouvert for sponsoring the Redmine backports. :D


Ubuntu

This was my 25th month of actively contributing to Ubuntu. Now that I joined Canonical to work on Ubuntu full-time, there’s a bunch of things I do! \o/

I mostly worked on different things, I guess.

I was too lazy to maintain a list of things I worked on so there’s no concrete list atm. Maybe I’ll get back to this section later or will start to list stuff from the fall, as I was doing before. :D


Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the stretch and jessie releases (+2 years after LTS support).

This was my forty-first month as a Debian LTS and thirty-second month as a Debian ELTS paid contributor.
I worked for 24.25 hours for LTS and 28.50 hours for ELTS.

LTS CVE Fixes and Announcements:

  • Fixed CVE-2022-47016 for tmux and uploaded to buster via 2.8-3+deb10u1.
    But we decided not to roll the DLA for the package, as the CVE got rejected upstream.
  • Issued DLA 3359-1, fixing CVE-2019-13038 and CVE-2021-3639, for libapache2-mod-auth-mellon.
    For Debian 10 buster, these problems have been fixed in version 0.14.2-1+deb10u1.
  • Issued DLA 3360-1, fixing CVE-2021-30151 and CVE-2022-23837, for ruby-sidekiq.
    For Debian 10 buster, these problems have been fixed in version 5.2.3+dfsg-1+deb10u1.
  • Worked on ruby-rails-html-sanitizer and added notes to the security-tracker.
    TL;DR: we need newer methods in ruby-loofah to make the patches for ruby-rails-html-sanitizer backportable.
  • Started looking at another set of packages in the meantime.

ELTS CVE Fixes and Announcements:

  • Issued ELA 813-1, fixing CVE-2017-12618 and CVE-2022-25147, for apr-util.
    For Debian 8 jessie, these problems have been fixed in version 1.5.4-1+deb8u1.
    For Debian 9 stretch, these problems have been fixed in version 1.5.4-3+deb9u1.
  • Issued ELA 814-1, fixing CVE-2022-39286, for jupyter-core.
    For Debian 9 stretch, these problems have been fixed in version 4.2.1-1+deb9u1.
  • Issued ELA 815-1, fixing CVE-2022-44792 and CVE-2022-44793, for net-snmp.
    For Debian 8 jessie, these problems have been fixed in version 5.7.2.1+dfsg-1+deb8u6.
    For Debian 9 stretch, these problems have been fixed in version 5.7.3+dfsg-1.7+deb9u5.
  • Helped facilitate RabbitMQ update queries from one of our customers.
  • Started looking at another set of packages in the meantime.

Other (E)LTS Work:


Until next time.
:wq for today.

28 February, 2023 05:41AM

Reproducible Builds (diffoscope)

diffoscope 237 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 237. This version includes the following changes:

* autopkgtest: only install aapt and dexdump on architectures where they are
  available. (Closes: #1031297)
* comparators/pdf:
  + Drop backward compatibility assignment.
  + Fix flake warnings, potentially reinstating PyPDF 1.x support (untested).

You can find out more by visiting the project homepage.

28 February, 2023 12:00AM

February 27, 2023

hackergotchi for Daniel Lange

Daniel Lange

Thunderbird gpg key import

Thunderbird, srsly?

5MB (or 4.8MiB) import limit. Sure. My modest pubring (111 keys) is 18MB. The Debian keyring is 28MB.

Maybe, just maybe, add another 0 to that if statement?

So, until that happens, workarounds ...

Option 1:

Export each pubkey into a separate file. The import dialog allows selecting them all in one go. But - of course - it will ask for confirmation for each. So prepare some valerian tea.

# export every public key (field 5 of each pub line is the key ID) into its own ASCII-armored .pub file
gpg --with-colons --list-public-keys | grep ^pub | cut -d : -f 5 | xargs -I {} -n 1 gpg -ao {}.pub --export {};

Option 2:

Strip all the signatures, so Thunderbird gets a smaller file to chew on. This uses pgp-clean from signing-party.

# clean each key with pgp-clean and collect the stripped keys into one file
gpg --with-colons --list-public-keys | grep ^pub | cut -d : -f 5 | xargs pgp-clean -s >> there_you_go_thunderbird.pub

Option 1 will retain the signatures on individual keys, Option 2 will not.

27 February, 2023 03:50PM by Daniel Lange

Getting gpg to import signatures again

The GnuPG (gpg) ecosystem got played with a bit in 2019 when fake signatures were added en masse to well-known keys. The main result is that the SKS keyserver network, based on the OCaml software of the same name, is basically history. A few other keyservers have come up, like Hagrid (Rust) and Hockeypuck (Go), but there seems to be no clear winner yet. In case you missed it in 2019, see my take on cleaning these polluted keys.

Now the changed defaults in gpg to "mitigate" this issue are trickling down to even the conservative distributions. Debian Bullseye has self-sigs-only on gpg 2.2.27, and it looks like Debian Bookworm will get gpg 2.2.40. This would add import-clean, but Daniel Kahn Gillmor patched it out. He argues correctly that this new default could delete data from good, locally stored pubkeys.

This all ends in you getting some random combination of self-sigs-only and / or import-clean depending on which Linux distribution and version you happen to use.

Better be explicit. I recommend adding:

# disable new gpg defaults
keyserver-options no-self-sigs-only
keyserver-options no-import-clean

to your ~/.gnupg/gpg.conf to make sure you can manage signatures yourself and receive them from keyservers or local imports as intended.
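
The same options can also be passed one-off on the command line when fetching a key. A minimal sketch - the key ID below is a purely hypothetical placeholder, not a real key:

# fetch a key with all its signatures, overriding any distribution default
# (0x0123456789ABCDEF is a hypothetical placeholder key ID)
gpg --keyserver-options no-self-sigs-only,no-import-clean --recv-keys 0x0123456789ABCDEF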

In case you care: see info gnupg --index-search=keyserver-options for the fine documentation. Of course, apt install info first to be able to read info pages. 'cause who still uses them in 2023? Oh, wait...

27 February, 2023 02:00PM by Daniel Lange

Billy Warren

My take on IRCs - let's rest from Slack and Discord for a while.

With this article I want to interest those that haven't used IRC in a while. It generally leans toward the Debian Community, but I hope it gives you some perspective on IRC and also interests you in joining the Debian Community as well.

Most Generation Z developers I know have at least used Slack, Discord, Discourse and many other communication tools, but few have used IRC heavily. This could be because they find it boring and limited in what kind of content they can share and in what community they are participating. I got interested in using IRC when I joined Debian and was able to navigate my way through it using some online resources. While IRC may not be as popular as other communication tools like Slack or Discord, it remains a valuable tool for communication and collaboration within the open-source community. Its open-source nature and lightweight protocol make it a flexible and customizable tool that can be used in various settings.

Internet Relay Chat (IRC) is a communication protocol that has been around since the late 1980s. Despite its age, it is still a popular tool for communication and collaboration, especially within the open-source community. It is mostly known to be used by people who work on 'boring' projects or maintain legacy code (I can't state how true this is, though). An interesting fact about IRC is that it was created by a Finnish developer named Jarkko Oikarinen, who wanted a communication tool that would let him chat with his friends in real time (the Facebook of those days). The first IRC network was called EFnet, and it quickly grew in popularity as more and more people began to use the protocol for communication and collaboration.

IRC allows users to connect to servers and join channels (chat rooms) where they can communicate in real time with other users who have joined the same channel. Many open-source projects use IRC channels for communication and collaboration among developers. The Debian project has a number of IRC channels that are used for communication and collaboration among developers and users. Here are the three I have interacted with so far:

  • #salsa - This channel is for those that use the Salsa platform
  • #salsaci - This channel is for discussions around the Salsa CI pipeline
  • #debian-devel - This channel is for all things Debian development: questions and updates related to general development in Debian
  • Check out more Debian channels here

To start using IRC, you will need to choose an IRC client, which is software that allows you to connect to IRC networks. Some popular IRC clients for Debian users include HexChat, irssi, and WeeChat. These clients are free and open source, and can be downloaded from the Debian repositories. Once you have chosen an IRC client, you can connect to an IRC network and start joining channels. Many open-source projects have their own IRC channels, where developers can discuss the project and collaborate in real time. A minimal session with irssi is sketched below.
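
As a minimal sketch, assuming you pick irssi (Debian's channels live on the OFTC network, irc.oftc.net):

# install the client from the Debian repositories
sudo apt install irssi
irssi

# then, at the irssi prompt:
/connect irc.oftc.net
/join #debian-devel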

One of the benefits of using IRC that I found out was that it is a lightweight protocol, which means that it can be used on a wide variety of devices, including desktop computers and smartphones. This makes it an ideal communication tool for developers who may be working on different devices and need to communicate with each other quickly and easily.

IRC is still a valuable tool for communication and collaboration within the open-source community, even if it does not enjoy the same popularity as other communication tools. It is one of those good old legacy designs whose function never gets old.

27 February, 2023 06:41AM by Billy Warren

February 26, 2023

Russ Allbery

Review: An Informal History of the Hugos

Review: An Informal History of the Hugos, by Jo Walton

Publisher: Tor
Copyright: August 2018
ISBN: 1-4668-6573-3
Format: Kindle
Pages: 564

An Informal History of the Hugos is another collection of Jo Walton's Tor.com posts. As with What Makes This Book So Great, these are blog posts that are still available for free on-line. Unlike that collection, this series happened after Tor.com got better at tags, so it's much easier to find. Whether to buy it therefore depends on whether having it in convenient book form is worth it to you.

Walton's previous collection was a somewhat random assortment of reviews of whatever book she felt like reviewing. As you may guess from the title, this one is more structured. She starts at the first year that the Hugo Awards were given out (1953) and discusses the winners for each year up through 2000. Nearly all of that discussion is about the best novel Hugo, a survey of other good books for that year, and, when other awards (Nebula, Locus, etc.) start up, comparing them to the winners and nominees of other awards. One of the goals of each discussion is to decide whether the Hugo nominees did a good job of capturing the best books of the year and the general feel of the genre at that time.

There are a lot of pages in this book, but that's partly because there's a lot of filler. Each post includes all of the winners and (once a nomination system starts) nominees in every Hugo category. Walton offers an in-depth discussion of the novel in every year, and an in-depth discussion of the John W. Campbell Award for Best New Writer (technically not a Hugo but awarded with them and voted on in the same way) once those start. Everything else gets a few sentences at most, so it's mostly just lists, all of which you can readily find elsewhere if you cared. Personally, I would have omitted categories without commentary when this was edited into book form.

Two other things are included in this book. Most helpfully, Walton's Tor.com reviews of novels in the shortlist are included after the discussion of that year. If you like Walton's reviews, this is great for all the reasons that What Makes This Book So Great was so much fun. Walton has a way of talking about books with infectious enthusiasm, brief but insightful technical analysis, and a great deal of genre context without belaboring any one point. They're concise and readable and never outlast my attention span, and I wish I could write reviews half as well.

The other inclusion is a selection of the comments from the original blog posts. When these posts originally ran, they turned into a community discussion of the corresponding year of SF, and Tor included a selection of those comments in the book. Full disclosure: one of those comments is mine, about the way that cyberpunk latched on to some incorrect ideas of how computers work and made them genre conventions to such a degree that most cyberpunk takes place in a parallel universe with very different computer technology. (I suppose that technically makes me a published author to the tune of a couple of pages.) While I still largely agree with the comment, I blamed Neuromancer for this at the time, and embarrassingly discovered when re-reading it that I had been unfair. This is why one should never express opinions in public where someone might record them.

Anyway, there is a general selection of comments from random people, but the vast majority of the comments are discussions of the year's short fiction by Rich Horton and Gardner Dozois. I understand why this was included; Walton doesn't talk about the short fiction, Dozois was a legendary SF short fiction editor and multiple Hugo winner, and both Horton and Dozois reviewed short fiction for Locus. But they don't attempt reviews. For nearly all stories under discussion, unless you recognized the title, you would have no idea even what sub-genre it was in. It's just a sequence of assertions about which title or author was better.

Given that there are (in most years) three short fiction categories to the one novel category and both Horton and Dozois write about each category, I suspect there are more words in this book from Horton and Dozois than Walton. That's a problem when those comments turn into tedious catalogs.

Reviewing short fiction, particularly short stories, is inherently difficult. I've tried to do a lot of that myself, and it's tricky to find something useful to say that doesn't spoil the story. And to be fair to Horton and Dozois, they weren't being paid to write reviews; they were just commenting on blog posts as part of a community conversation, and I doubt anyone thought this would turn into a book. But when read as a book, their inclusion in this form wasn't my favorite editorial decision.

This is therefore a collection of Walton's commentary on the selections for best novel and best new writer alongside a whole lot of boring lists. In theory, the padding shouldn't matter; one can skip over it and just read Walton's parts, and that's still lots of material. But Walton's discussion of the best novels of the year also tends to turn into long lists of books with no commentary (particularly once the very-long Locus recommended list starts appearing), adding to the tedium. This collection requires a lot of skimming.

I enjoyed this series of blog posts when they were first published, but even at the time I skimmed the short fiction comments. Gathered in book form with this light of editing, I think it was less successful. If you are curious about the history of science fiction awards and never read the original posts, you may enjoy this, but I would rather have read another collection of straight reviews.

Rating: 6 out of 10

26 February, 2023 05:17AM