January 17, 2022

Russ Allbery

DocKnot 7.00

The recent 6.01 release of my static web site generator was kind of a buggy mess, which uncovered a bunch of holes in my test suite and immediately turned up problems when I tried to use it to rebuild my actual web site. Most of the problems were Unicode-related; this release hopefully sorts out Unicode properly and handles it consistently.

Other bugs fixed include processing of old-style pointers in a spin input tree, several rather obvious bugs in the new docknot release command, and a few long-standing issues with docknot dist that should make its results more consistent and reliable.

I also got on a roll and finished the Path::Tiny transition in DocKnot, so now (nearly) all paths are internally represented as Path::Tiny objects. This meant changing some APIs, hence the version bump to 7.00.

For anyone who still does a lot of Perl, I highly recommend the Path::Tiny module. If you also write Python, you will be reminded (in a good way) of Python's pathlib module, which I now use whenever possible.
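For anyone who hasn't met either module, here is a minimal pathlib sketch of the paths-as-objects style that Path::Tiny brings to Perl (the file and directory names are placeholders, not anything DocKnot actually uses):

    from pathlib import Path

    # Build paths with the / operator instead of string concatenation.
    config = Path.home() / ".config" / "example" / "config"

    # Test and read files directly from the path object.
    if config.exists():
        text = config.read_text(encoding="utf-8")

    # Walk a tree without os.walk() boilerplate.
    for source in Path("lib").rglob("*.pm"):
        print(source.stem, source.suffix)

Path::Tiny reads much the same way, with path()->child, ->slurp_utf8 and friends in place of the operators.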

You can get the latest version of DocKnot from CPAN or from its distribution page.

17 January, 2022 09:41PM


Wouter Verhelst

Different types of Backups

In my previous post, I explained how I recently set up backups for my home server to be synced using Amazon's services. I received a (correct) comment on that by Iustin Pop which pointed out that while it is reasonably cheap to upload data into Amazon's offering, the reverse -- extracting data -- is not as cheap.

He is right, in that extracting data from S3 Glacier Deep Archive costs over an order of magnitude more than it costs to store it there on a monthly basis -- in my case, I expect to have to pay somewhere in the vicinity of 300-400 USD for a full restore. However, I do not consider this to be a major problem, as these backups are only meant to fulfill the rarer of the two types of backup cases.
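To put "over an order of magnitude" into rough numbers, here is a sketch with illustrative prices. The real S3 Glacier Deep Archive storage, retrieval and egress rates vary by region and retrieval tier, so treat every constant below as an assumption rather than a quote:

    # Assumed, illustrative prices in USD per GB.
    STORAGE_PER_GB_MONTH = 0.00099   # Deep Archive storage
    RETRIEVAL_PER_GB = 0.02          # retrieval fee (assumed tier)
    EGRESS_PER_GB = 0.09             # data transfer out to the internet

    archive_gb = 3000                # hypothetical ~3 TB archive

    monthly = archive_gb * STORAGE_PER_GB_MONTH
    restore = archive_gb * (RETRIEVAL_PER_GB + EGRESS_PER_GB)

    print(f"storage: {monthly:6.2f} USD/month")   # ~  2.97
    print(f"restore: {restore:6.2f} USD, once")   # ~330.00
    print(f"ratio:   {restore / monthly:.0f}x one month's storage")

With those assumptions, a roughly 3 TB archive costs about 3 USD a month to keep but about 330 USD to extract in full -- the 300-400 USD ballpark mentioned above.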

There are two reasons why you should have backups.

The first is the most common one: "oops, I shouldn't have deleted that file". This happens reasonably often; people will occasionally delete or edit a file that they did not mean to, and then they will want to recover their data. At my first job, a significant part of my work was handling recovery requests from users who had accidentally deleted a file that they still needed.

Ideally, backups to handle this type of situation are easily accessible to end users, and are performed reasonably frequently. A system that automatically creates and deletes filesystem snapshots (such as the zfsnap script for ZFS snapshots, which I use on my server) works well. The crucial bit here is to ensure that it is easier to copy an older version of a file than it is to start again from scratch -- if a user must file a support request that may or may not be answered within a day or so, it is likely they will not do so for a file they were working on for only half a day, which means they lose half a day of work in such a case. If, on the other hand, they can just go into the snapshots directory themselves and it takes them all of two minutes to copy their file, then they will also do that for files they only created half an hour ago, so they don't even lose half an hour of work and can get right back to it. This means that backup strategies to mitigate the "oops I lost a file" case ideally do not involve off-site file storage, and instead are performed online.
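To make the "all of two minutes to copy their file" workflow concrete: ZFS exposes every snapshot read-only under the dataset's hidden .zfs/snapshot directory, so self-service recovery is an ordinary copy. A minimal sketch (the pool layout, snapshot name and file path are hypothetical):

    import shutil
    from pathlib import Path

    # Every ZFS snapshot appears read-only under <mountpoint>/.zfs/snapshot/,
    # so "restore yesterday's version" is just a copy, no support ticket needed.
    snapshot_copy = Path("/home/.zfs/snapshot/zfsnap_2022-01-16/alice/report.txt")
    working_copy = Path("/home/alice/report.txt")

    shutil.copy2(snapshot_copy, working_copy)  # copy2 preserves times and mode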

The second case is the much rarer one, but (when required) has the much bigger impact: "oops, the building burned down". Variants of this can involve things like lightning strikes, thieves, earthquakes, and the like; in all cases, the point is that you want to be able to recover all your files, even if every piece of equipment you own is no longer usable.

That being the case, you will first need to replace that equipment, which is not going to be cheap, and it is also not going to be an overnight thing. In order to still be useful after you have lost all your equipment, these backups must be stored off-site, and should preferably be offline backups, too. Since replacing your equipment is going to cost you time and money, it's fine if restoring the backups takes a while -- you can't really restore from backup any time soon anyway. And since you will be unable to create new content for the days it takes to get back on your feet, it's also fine if you lose a few days of content that you will have to re-create.

All in all, the two types of backups have opposing requirements: "oops, I lost a file" backups should be performed often and should be easily available; "oops, I lost my building" backups should not be easily available, and are ideally done less often, so that you don't pay a large amount of money to store your off-site copies.

In my opinion, if you have good "lost my file" backups, then it's also fine if the recovery of your disaster backups is a bit more expensive. You don't expect to ever have to pay for these; you may end up in a situation where you have no choice, and then you'll be happy that the option exists, but as long as you can reasonably pay for the worst-case scenario of a full restore, it's not something you should worry about much.

As such, and given that a full restore from Amazon Storage Gateway is going to be somewhere between 300 and 400 USD for my case -- a price I can afford, although it's not something I want to pay every day -- I don't think it's a major issue that extracting data is significantly more expensive than uploading data.

But of course, this is something everyone should consider for themselves...

17 January, 2022 03:43PM


Matthew Garrett

Boot Guard and PSB have user-hostile defaults

Compromising an OS without it being detectable is hard. Modern operating systems support the imposition of a security policy or the launch of some sort of monitoring agent sufficiently early in boot that even if you compromise the OS, you're probably going to have left some sort of detectable trace[1]. You can avoid this by attacking the lower layers - if you compromise the bootloader then it can just hotpatch a backdoor into the kernel before executing it, for instance.

This is avoided via one of two mechanisms. Measured boot (such as TPM-based Trusted Boot) makes a tamper-proof cryptographic record of what the system booted, with each component in turn creating a measurement of the next component in the boot chain. If a component is tampered with, its measurement will be different. This can be used to either prevent the release of a cryptographic secret if the boot chain is modified (for instance, using the TPM to encrypt the disk encryption key), or can be used to attest the boot state to another device which can tell you whether you're safe or not. The other approach is verified boot (such as UEFI Secure Boot), where each component in the boot chain verifies the next component before executing it. If the verification fails, execution halts.
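The tamper-evidence of the measured-boot approach comes from the TPM's extend operation: a PCR can never be written directly, only updated by hashing its old value together with a new measurement, so a change to any component (or to the order of components) changes the final value. A toy model of the arithmetic, not a real TPM interface:

    import hashlib

    def extend(pcr: bytes, component: bytes) -> bytes:
        """TPM-style PCR extend: new = H(old || H(component))."""
        return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

    pcr = bytes(32)  # PCRs reset to all zeroes
    for component in (b"firmware", b"bootloader", b"kernel"):
        pcr = extend(pcr, component)

    # A secret sealed against this value is released only if every component,
    # in the same order, measures identically on the next boot.
    print(pcr.hex())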

In both cases, each component in the boot chain measures and/or verifies the next. But something needs to be the first link in this chain, and traditionally this was the system firmware. Which means you could tamper with the system firmware and subvert the entire process - either have the firmware patch the bootloader in RAM after measuring or verifying it, or just load a modified bootloader and lie about the measurements or ignore the verification. Attackers had already been targeting the firmware (Hacking Team had something along these lines, although this was pre-secure boot so just dropped a rootkit into the OS), and given a well-implemented measured and verified boot chain, the firmware becomes an even more attractive target.

Intel's Boot Guard and AMD's Platform Secure Boot attempt to solve this problem by moving the validation of the core system firmware to an (approximately) immutable environment. Intel's solution involves the Management Engine, a separate x86 core integrated into the motherboard chipset. The ME's boot ROM verifies a signature on its firmware before executing it, and once the ME is up it verifies that the system firmware's bootblock is signed using a public key that corresponds to a hash blown into one-time programmable fuses in the chipset. What happens next depends on policy - it can either prevent the system from booting, allow the system to boot to recover the firmware but automatically shut it down after a while, or flag the failure but allow the system to boot anyway. Most policies will also involve a measurement of the bootblock being pushed into the TPM.

AMD's Platform Secure Boot is slightly different. Rather than the root of trust living in the motherboard chipset, it's in AMD's Platform Security Processor which is incorporated directly onto the CPU die. Similar to Boot Guard, the PSP has ROM that verifies the PSP's own firmware, and then that firmware verifies the system firmware signature against a set of blown fuses in the CPU. If that fails, system boot is halted. I'm having trouble finding decent technical documentation about PSB, and what I have found doesn't mention measuring anything into the TPM - if this is the case, PSB only implements verified boot, not measured boot.

What's the practical upshot of this? The first is that you can't replace the system firmware with anything that doesn't have a valid signature, which effectively means you're locked into firmware the vendor chooses to sign. This prevents replacing the system firmware with either a replacement implementation (such as Coreboot) or a modified version of the original implementation (such as firmware that disables locking of CPU functionality or removes hardware allowlists). In this respect, enforcing system firmware verification works against the user rather than benefiting them. Of course, it also prevents an attacker from doing the same thing, but while this is a real threat to some users, I think it's hard to say that it's a realistic threat for most users.

The problem is that vendors are shipping with Boot Guard and (increasingly) PSB enabled by default. In the AMD case this causes another problem - because the fuses are in the CPU itself, a CPU that's had PSB enabled is no longer compatible with any motherboards running firmware that wasn't signed with the same key. If a user wants to upgrade their system's CPU, they're effectively unable to sell the old one. But in both scenarios, the user's ability to control what their system is running is reduced.

As I said, the threat that these technologies seek to protect against is real. If you're a large company that handles a lot of sensitive data, you should probably worry about it. If you're a journalist or an activist dealing with governments that have a track record of targeting people like you, it should probably be part of your threat model. But otherwise, the probability of you being hit by a purely userland attack is so ludicrously high compared to you being targeted this way that it's just not a big deal.

I think there's a more reasonable tradeoff than where we've ended up. Tying things like disk encryption secrets to TPM state means that if the system firmware is measured into the TPM prior to being executed, we can at least detect that the firmware has been tampered with. In this case nothing prevents the firmware being modified, there's just a record in your TPM that it's no longer the same as it was when you encrypted the secret. So, here's what I'd suggest:

1) The default behaviour of technologies like Boot Guard or PSB should be to measure the firmware signing key and whether the firmware has a valid signature into PCR 7 (the TPM register that is also used to record which UEFI Secure Boot signing key is used to verify the bootloader).
2) If the PCR 7 value changes, the disk encryption key release will be blocked, and the user will be redirected to a key recovery process (sketched below). This should include remote attestation, allowing the user to be informed that their firmware signing situation has changed.
3) Tooling should be provided to switch the policy from merely measuring to verifying, and users at meaningful risk of firmware-based attacks should be encouraged to make use of this tooling.
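In logic terms, step 2 amounts to something like the following sketch (hypothetical code: a real implementation would rely on the TPM's own sealing policy rather than a comparison in software):

    def release_disk_key(current_pcr7: bytes, sealed_pcr7: bytes,
                         key: bytes) -> bytes:
        if current_pcr7 != sealed_pcr7:
            # The firmware signing situation changed: force key recovery
            # (with remote attestation) instead of releasing the key.
            raise PermissionError("PCR 7 mismatch: enter key recovery")
        return key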

This would allow users to replace their system firmware at will, at the cost of having to re-seal their disk encryption keys against the new TPM measurements. It would provide enough information that, in the (unlikely for most users) scenario that their firmware has actually been modified without their knowledge, they can identify that. And it would allow users who are at high risk to switch to a higher security state, and for hardware that is explicitly intended to be resilient against attacks to have different defaults.

This is frustratingly close to possible with Boot Guard, but I don't think it's quite there. Before you've blown the Boot Guard fuses, the Boot Guard policy can be read out of flash. This means that you can drop a Boot Guard configuration into flash telling the ME to measure the firmware but not prevent it from running. But there are two problems remaining:

1) The measurement is made into PCR 0, and PCR 0 changes every time your firmware is updated. That makes it a bad default for sealing encryption keys.
2) It doesn't look like the policy is measured before being enforced. This means that an attacker can simply reflash modified firmware with a policy that disables measurement and then make a fake measurement that makes it look like the firmware is ok.

Fixing this seems simple enough - the Boot Guard policy should always be measured, and measurements of the policy and the signing key should be made into a PCR other than PCR 0. If an attacker modified the policy, the PCR value would change. If an attacker modified the firmware without modifying the policy, the PCR value would also change. People who are at high risk would run an app that would blow the Boot Guard policy into fuses rather than just relying on the copy in flash, and enable verification as well as measurement. Now if an attacker tampers with the firmware, the system simply refuses to boot and the attacker doesn't get anything.

Things are harder on the AMD side. I can't find any indication that PSB supports measuring the firmware at all, which obviously makes this approach impossible. I'm somewhat surprised by that, and so wouldn't be surprised if it does do a measurement somewhere. If it doesn't, there's a rather more significant problem - if a system has a socketed CPU, and someone has sufficient physical access to replace the firmware, they can just swap out the CPU as well with one that doesn't have PSB enabled. Under normal circumstances the system firmware can detect this and prompt the user, but given that the attacker has just replaced the firmware we can assume that they'd do so with firmware that doesn't decide to tell the user what just happened. In the absence of better documentation, it's extremely hard to say that PSB actually provides meaningful security benefits.

So, overall: I think Boot Guard protects against a real-world attack that matters to a small but important set of targets. I think most of its benefits could be provided in a way that still gave users control over their system firmware, while also permitting high-risk targets to opt-in to stronger guarantees. Based on what's publicly documented about PSB, it's hard to say that it provides real-world security benefits for anyone at present. In both cases, what's actually shipping reduces the control people have over their systems, and should be considered user-hostile.

[1] Assuming that someone's both turning this on and actually looking at the data produced


17 January, 2022 04:37AM

January 16, 2022


Chris Lamb

Favourite films of 2021

In my four most recent posts, I went over the memoirs and biographies, the non-fiction, the fiction and the 'classic' novels that I enjoyed reading the most in 2021. But in the very last of my 2021 roundup posts, I'll be going over some of my favourite movies. (Saying that, these are perhaps less of my 'favourite films' than the ones worth remarking on — after all, nobody needs to hear that The Godfather is a good movie.)

It's probably helpful to remark that I took a self-directed course in film history in 2021, based around the first volume of Roger Ebert's The Great Movies. This collection of 100-odd movie essays aims to “make a tour of the landmarks of the first century of cinema,” and I watched all but a handful before the year was out. I am slowly making my way through volume two in 2022. This tome was tremendously useful, and not simply due to the background context that Ebert added to each film: it also brought me into contact with films I would hardly have encountered through other means. Would I have ever discovered the sly comedy of Trouble in Paradise (1932) or the touching proto-realism of L'Atalante (1934) any other way? It also helped me to 'get around' to watching films I may have put off watching forever — the influential Battleship Potemkin (1925), for instance, and the ur-epic Lawrence of Arabia (1962) spring to mind here.

Choosing a 'worst' film is perhaps more difficult than choosing the best. There are first those that left me completely dry (Ready or Not, Written on the Wind, etc.), and those that were simply poorly executed. And there are those that failed to meet their own high opinions of themselves, such as the 'made for Reddit' Tenet (2020) or the inscrutable Vanilla Sky (2001) — the latter being an almost perfect example of late-20th century cultural exhaustion.

But I must save my most severe judgement for those films where I took a visceral dislike to how their subjects were portrayed. The sexually problematic Sixteen Candles (1984) and the pseudo-Catholic vigilantism of The Boondock Saints (1999) both spring to mind here, the latter of which combines so many things I dislike into such a short running time that I'd need an entire essay to adequately express how much I disliked it.

§

Dogtooth (2009)

A father, a mother, a brother and two sisters live in a large and affluent house behind a very high wall and an always-locked gate. Only the father ever leaves the property, driving to the factory that he happens to own. Dogtooth goes far beyond any allusion to Josef Fritzl's cellar, though, as the children's education is a grotesque parody of home-schooling. Here, the parents deliberately teach their children the wrong meaning of words (e.g. a yellow flower is called a 'zombie'), all of which renders the outside world utterly meaningless and unreadable, and its very existence completely mystifying. It is this creepy strangeness within a 'regular' family unit in Dogtooth that is both socially and epistemically horrific, and I'll say nothing here of its sexual elements.

Despite its cold, inscrutable and deadpan surreality, Dogtooth invites all manner of potential interpretations. Is this film about the artificiality of the nuclear family that the West insists is the benchmark of normality? Or is it, as I prefer to believe, something more visceral altogether: an allegory for the various forms of ontological violence wrought by fascism, as well as a sobering nod towards some of fascism's inherent appeals? (Perhaps it is both. In 1972, French poststructuralists Gilles Deleuze and Félix Guattari wrote Anti-Oedipus, which plays with the idea of the family unit as a metaphor for the authoritarian state.) The Greek-language Dogtooth, elegantly shot, thankfully provides no easy answers.

§

Holy Motors (2012)

There is an infamous scene in Un Chien Andalou, the 1929 film collaboration between Luis Buñuel and famed artist Salvador Dalí. A young woman is cornered in her own apartment by a threatening man, and she reaches for a tennis racquet in self-defence. But the man suddenly picks up two nearby ropes and drags into the frame two large grand pianos... each laden with a dead donkey, a stone tablet, a pumpkin and a bewildered priest.

This bizarre sketch serves as a better introduction to Leos Carax's Holy Motors than any elementary outline of its plot, which ostensibly follows 24 hours in the life of a man who must play a number of extremely diverse roles around Paris... all for no apparent reason. (And is he even a man?) Surrealism as an art movement gets a pretty bad rap these days, and perhaps justifiably so. But Holy Motors and Un Chien Andalou serve as a good reminder that surrealism can be, well, 'good, actually'. And if not quite high art, Holy Motors at least demonstrates that surrealism can still be unnerving and hilariously funny. Indeed, recalling the whimsy of the plot to a close friend, the tears of laughter came unbidden to my eyes once again. ("And then the limousines...!")

Still, it is unclear how Holy Motors truly refreshes surrealism for the twenty-first century. Surrealism was, in part, a reaction to the mechanical and unfeeling brutality of World War I and ultimately sought to release the creative potential of the unconscious mind. Holy Motors cannot be responding to another continental conflagration, and so it appears to me to be some kind of commentary on the roles we exhibit in an era of 'post-postmodernity': a sketch of our age of performative authenticity, perhaps, or an idle doodle on the nature and psychosocial function of work.

Or perhaps not. After all, this film was produced in a time that offers the near-universal availability of mind-altering substances, and this certainly changes the context in which this film was both created and, how can I put it, intended to be watched.

§

Manchester by the Sea (2016)

An absolutely devastating portrayal of a character who is unable to forgive himself and is hesitant to engage with anyone ever again. It features a near-ideal balance between portraying unrecoverable anguish and tender warmth, and is paradoxically grandiose in its subtle intimacy. The mechanics of life led me to watch this lying on a bed in a chain hotel by Heathrow Airport, and if this colourless circumstance blunted the film's emotional impact on me, I am probably thankful for it. Indeed, I find myself reduced in this review to fatuously recalling my favourite interactions instead of providing any real commentary. You could write a whole essay about one particular incident: its surfaces, subtexts and angles... all despite nothing of any substance ever being communicated. Truly stunning.

§

McCabe & Mrs. Miller (1971)

Roger Ebert called this movie “one of the saddest films I have ever seen, filled with a yearning for love and home that will not ever come.” But whilst it is difficult to disagree with his sentiment, Ebert's choice of “sad” is somehow not quite the right word. Indeed, I've long regretted that our dictionaries don't have more nuanced blends of tragedy and sadness; perhaps the Ancient Greeks can loan us some.

Nevertheless, the plot of this film concerns a gambler and a prostitute who become business partners in a new and remote mining town called Presbyterian Church. However, as their town and enterprise booms, it comes to the attention of a large mining corporation that wants to bully or buy its way into the action. What makes this film stand out is not the plot itself, however, but its mood and tone — the town and its inhabitants seem to be thrown together out of raw lumber, covered alternately in mud or frozen ice, and their days (and their personalities) are short and dark in equal measure.

As a brief aside, if you haven't seen a Robert Altman film before, this has all the trappings of being a good introduction. As Ebert went on to observe: “This is not the kind of movie where the characters are introduced. They are all already here.” Furthermore, we can see some of Altman's trademark overlapping conversations, a superb handling of ensemble casts, and a quietly subversive view of the tyranny of 'genre'... and the latter in a time when the appetite for revisionist portrayals of the West was not very strong. All of these 'Altmanian' trademarks can be found in much stronger measures in his later films: in particular, his comedy-drama Nashville (1975) has 24 main characters, and my jejune interpretation of Gosford Park (2001) is that it is purposefully designed to poke fun at those who take a reductionist view of 'genre', or at least at the audience's expectations. (In this case, an Edwardian-era English murder mystery in the style of Agatha Christie, but where no real murder or detection really takes place.)

On the other hand, McCabe & Mrs. Miller is actually a poor introduction to Altman. The story is told in a suitably deliberate and slow tempo, and the two stars of the film are shown thoroughly defrocked of any 'star status', in both the visual and moral dimensions. All of these traits are, however, this film's strength, adding up to a credible, fascinating and riveting portrayal of the old West.

§

Detour (1945)

Detour was filmed in less than a week, and it's difficult to decide — out of the actors and the screenplay — which is its weakest point.... Yet it still somehow seemed to drag me in.

The plot revolves around luckless Al, who is hitchhiking to California. Al gets a lift from a man called Haskell, who quickly falls down dead from a heart attack. Al hurriedly buries the body and takes Haskell's money, car and identification, fearing that the police will believe Al murdered him. An unstable element is soon introduced in the guise of Vera who, through a set of coincidences that stretches credulity, knows that this 'new' Haskell (i.e. Al pretending to be him) is not who he seems. Vera then attaches herself to Al in order to blackmail him, and the world starts to spin out of his control.

It must be understood that none of this is executed very well. Rather, what makes Detour so interesting to watch is that its 'errors' lend a distinctively creepy and unnatural hue to the film. Indeed, in the early twentieth century, Sigmund Freud used the word unheimlich to describe the experience of something that is not simply mysterious, but something creepy in a strangely familiar way. This is almost the perfect description of watching Detour — its eerie nature means that we are not only frequently second-guessed about where the film is going, but are often uncertain whether we are watching the usual objective perspective offered by cinema.

In particular, are all the ham-fisted segues, stilted dialogue and inscrutable character motivations actually a product of Al inventing a story for the viewer? Did he murder Haskell after all, despite the film 'showing' us that Haskell died of natural causes? In other words, are we watching what Al wants us to believe? Regardless of the answers to these questions, the film succeeds precisely because of its accidental or inadvertent choices, so it is an implicit reminder that seeking the director's original intention in any piece of art is a complete mirage. Detour is certainly not a good film, but it just might be a great one. (It is a short film too, and, out of copyright, it is available online for free.)

§

Safe (1995)

Safe is a subtly disturbing film about an upper-middle-class housewife who begins to complain about vague symptoms of illness. Initially claiming that she “doesn't feel right,” Carol starts to have unexplained headaches, a dry cough and nosebleeds, and eventually begins to have trouble breathing. Carol's family doctor treats her concerns with little care, and suggests to her husband that she see a psychiatrist.

Yet Carol's episodes soon escalate. For example, as a 'homemaker' with nothing else to occupy her, Carol orders a new couch for a party. But when the store delivers the wrong one (although it is not altogether clear that they did), Carol has a near breakdown. Unsure where to turn, Carol consults an 'allergist', who tells her she has "Environmental Illness," and she eventually checks herself into a new-age commune filled with alternative therapies.

On the surface, Safe is thus a film about the increasing amount of pesticides and chemicals in our lives, something that was clearly felt far more viscerally in the 1990s. But it is also a film about how the lack of genuine healthcare for women must be seen as a critical factor in the rise of crank medicine. (Indeed, it made for something of an uncomfortable watch during the coronavirus lockdown.) More interestingly, however, Safe gently-yet-critically examines the psychosocial causes that may be aggravating Carol's illnesses, including her vacant marriage, her hollow friends and the 'empty calorie' stimulus of suburbia. None of this should be especially new to anyone: the gendered Victorian term 'hysterical' is all but spoken throughout this film, and perhaps from the very invention of modern medicine, women's symptoms have regularly been minimised or outright dismissed. (Hilary Mantel's 2003 memoir, Giving Up the Ghost, is especially harrowing on this.)

As I noted in opening this review, the film is subtle in its messaging. Just to take one example from many, the sound of the cars is always just a fraction too loud: there's a scene where a group is eating dinner with a road in the background, and the total effect can be seen as representing the toxic fumes of modernity invading our social lives and health. I won't spoil the conclusion of this quietly devastating film, but don't expect a happy ending.

§

The Driver (1978)

Critics grossly misunderstood The Driver when it was first released. They interpreted the cold and unemotional affect of the characters as a lack of developmental depth, instead of as a representation of their dissociation from the society around them. This reading was encouraged by the fact that the principal actors aren't given real names and are instead known simply by their archetypes: 'The Driver', 'The Detective', 'The Player' and so on. This sort of quasi-Jungian erudition is common in many crime films today (Reservoir Dogs, Kill Bill, Layer Cake, Fight Club), so the critics' misconceptions were entirely reasonable in 1978.

The plot of The Driver involves the eponymous Driver, a noted getaway driver for robberies in Los Angeles. His exceptional talent has prevented him from being captured thus far, so the Detective attempts to catch the Driver by offering to pardon another gang if they help convict the Driver via a set-up robbery. To give himself an edge, however, The Driver seeks help from the femme fatale 'Player' in order to mislead the Detective.

If this all sounds eerily familiar, you would not be far wrong. The film was essentially remade by Nicolas Winding Refn as Drive (2011) and by Edgar Wright as Baby Driver (2017). Yet The Driver offers something that these neon-noir variants do not. In particular, the car chases around Los Angeles are some of the most captivating I've seen: they aren't thrilling in the sense of tyre squeals, explosions and flying boxes; rather, the vehicles come across like wild animals hunting one another. This feels especially so when the police are hunting The Driver, which feels less like a low-stakes game of cat and mouse than a pack of feral animals working together — a gang who will tear apart their prey if they find him. In contrast to the undercar neon glow of the Fast & Furious franchise, the urban realism of The Driver's LA metropolis contributes to a sincere feeling of artistic fidelity as well.

To be sure, most of this is present in the truly-excellent Drive, where the chase scenes do really communicate a credible sense of stakes. But the substitution of The Driver's grit with Drive's soft neon tilts it slightly towards that common affliction of crime movies: style over substance. Nevertheless, I can highly recommend watching The Driver and Drive together, as it can tell you a lot about the disconnected socioeconomic practices of the 1980s compared to the 2010s. More than that, however, the pseudo-1980s synthwave soundtrack of Drive captures something crucial to analysing the world of today. In particular, these 'sounds from the past filtered through the present' bring to mind the increasing role of nostalgia for lost futures in the culture of today, where temporality and pop culture references are almost-exclusively citational and commemorational.

§

The Souvenir (2019)

The ostensible outline of this quietly understated film follows a shy but ambitious film student who falls into an emotionally fraught relationship with a charismatic but untrustworthy older man. But that doesn't quite cover the plot at all, for not only is The Souvenir a film about a young artist who is inspired, derailed and ultimately strengthened by a toxic relationship, it is also partly a coming-of-age drama, a subtle portrait of class and, finally, a film about the making of a film.

Still, one of the strokes of genius in this truly heartbreaking movie is that none of these many elements crowds out the others. It never, ever feels rushed. Indeed, there are many scenes where the camera simply 'sits there' and quietly observes what is going on. Other films might smother themselves through references to 18th-century oil paintings, but The Souvenir somehow evades this too. And there's a certain ring of credibility to the story as well, no doubt in part due to the fact it is based on director Joanna Hogg's own experiences at film school. A beautifully observed and multi-layered film; I'll be happy if the sequel is one-half as good.

§

The Wrestler (2008)

Randy 'The Ram' Robinson is long past his prime, but he is still rarin' to go in the local pro-wrestling circuit. Yet after a brutal beating that seriously threatens his health, Randy hangs up his tights and pursues a serious relationship... and even tries to reconnect with his estranged daughter. But Randy can't resist the lure of the ring, and readies himself for a comeback.

The stage is thus set for Darren Aronofsky's The Wrestler, which is essentially about what drives Randy back to the ring. To be sure, Randy derives from wrestling not only much of his money but also his 'fitness', self-image, self-esteem and self-worth. Oh, it's no use insisting that wrestling is fake, for the sport is, needless to say, Randy's identity; it's not for nothing that this film is called The Wrestler.

In a number of ways, Sound of Metal (2019) is both a reaction to (and a quiet remake of) The Wrestler, if only because both movies utilise 'cool' professions to explore such questions of identity. But perhaps simply the moment at which The Wrestler was produced makes it the superior film. Indeed, the role of time feels very important for The Wrestler. In the first instance, time is clearly taking its toll on Randy's body, but I felt it more strongly in the sense that this was very much a pre-2008 film, released on the cliff-edge of the global financial crisis and the concomitant precarity of the 2010s.

Indeed, it is curious to consider that you couldn't make The Wrestler today, although not because the relationship to work has changed in any fundamental way. (Indeed, isn't it somewhat depressing to realise that, setting the pandemic and the 'work from home' trend to one side, we now require even more people to wreck their bodies and mental health to cover their bills?) No, what I mean to say here is that, post-2016, you cannot portray wrestling on-screen without, how can I put it, unwelcome connotations. All of which then reminds me of Minari's notorious red hat...

But I digress. The Wrestler is a grittily stark, darkly humorous look into the life of a desperate man and a sorrowful world, all through one tragic profession.

§

Thief (1981)

Frank is an expert professional safecracker and specialises in high-profile diamond heists. He plans to use his ill-gotten gains to retire from crime and build a life for himself with a wife and kids, so he signs on with a top gangster for one last big score. This, of course, could be the plot to any number of heist movies, but Thief does something different. Similar to The Wrestler and The Driver (see above) and a number of other films that I watched this year, Thief seems to be saying something about our relationship to work and family in modernity and postmodernity.

Indeed, the 'heist film', we are told, is an understudied genre, but part of the pleasure of watching these films is said to arise from how they portray our desired relationship to work. In particular, Frank's desire to pull off that last big job feels less about the money it would bring him than about a displacement from (or proxy for) fulfilling some deep-down desire to have a family or indeed any relationship at all. Because in theory, of course, Frank could enter into a fulfilling long-term relationship right away, without stealing millions of dollars in diamonds... but that's kinda the entire point: Frank needing just one more theft is an excuse to not pursue a relationship and put it off indefinitely in favour of 'work'. (And since his jobs are Federal crimes, it also means Frank cannot put down meaningful roots in a community.) All this is communicated extremely subtly in the justly-lauded lowkey diner scene, by far the best scene in the movie.

The visual aesthetic of Thief is as if you set The Warriors (1979) in a similarly-filthy Chicago, with the Xenophon-inspired plot of The Warriors replaced with an almost deliberate lack of plot development... and the allure of The Warriors' fantastical criminal gangs (with their alluringly well-defined social identities) substituted by a bunch of amoral individuals with no solidarity beyond the immediate moment. A tale of our time, perhaps.

I should warn you that the ending of Thief is famously weak, but this is a gritty, intelligent and strangely credible heist movie before you get there.

§

Uncut Gems (2019)

The most exhausting film I've seen in years; the cinematic equivalent of four cups of double espresso, I didn't even bother trying to sleep after downing Uncut Gems late one night. Directed by the Safdie brothers, it often felt like I was watching two films that had been made at the same time. (Or do I mean two films at 2X speed?)

No, whatever clumsy metaphor you choose to adopt, the unavoidable effect of this film's finely-tuned chaos is an uncompromising and anxiety-inducing piece of cinema. The plot follows Howard as a man lost to his countless vices — mostly gambling with a significant side hustle in adultery, but you get the distinct impression he would be happy with anything that will give him another high. A true junkie's junkie, you might say. You know right from the beginning it's going to end in some kind of disaster, the only question remaining is precisely how and what.

Portrayed by an (almost unrecognisable) Adam Sandler, there's an uncanny sense of distance in the emotional chasm between 'Sandler-as-junkie' and 'Sandler-as-regular-star-of-goofy-comedies'. Yet instead of being distracting and reducing the film's affect, this possibly-deliberate intertextuality somehow adds to the masterfully-controlled mayhem. My heart races just at the memory. Oof.

§

Woman in the Dunes (1964)

I ended up watching three films that feature sand this year: Denis Villeneuve's Dune (2021), Lawrence of Arabia (1962) and Woman in the Dunes. But it is this last 1964 film by Hiroshi Teshigahara that will stick in my mind in the years to come. Sure, there is none of the Medician intrigue of Dune or the Super Panavision-70 of Lawrence of Arabia (or its quasi-orientalist score, itself likely stolen from Anton Bruckner's 6th Symphony), but Woman in the Dunes doesn't have to assert its confidence so boldly, and it reveals the enormity of its plot slowly and deliberately instead. Woman in the Dunes never rushes to get to the film's central dilemma, and it uncovers its terror in little hints and insights, all whilst establishing the daily rhythm of life.

Woman in the Dunes has something of the same uncanny horror as Dogtooth (see above), as well as its broad range of potential interpretations. Both films permit a wide array of readings without resorting to being deliberately obscurantist or just plain random — it is perhaps for this reason that I enjoyed them so much. It is true that asking 'So what does the sand mean?' sounds tediously sophomoric shorn of any context, but the question somehow applies to this thoughtfully self-contained piece of cinema.

§

A Quiet Place (2018)

Although A Quiet Place was not actually one of the best films I saw this year, I'm including it here as it is certainly one of the better 'mainstream' Hollywood franchises I came across. Not only is the film very ably constructed, it engages on a visceral level too: it is rare that I can empathise with the peril of conventional horror movies (I usually prefer to focus on their cultural and political aesthetics), but I did here.

The conceit of this particular post-apocalyptic world is that a family is forced to live in almost complete silence while hiding from creatures that hunt by sound alone. Still, A Quiet Place engages on an intellectual level too, and this probably works in tandem with the pure 'horrific' elements to make it stick in your mind. In particular, and to my mind at least, A Quiet Place is a deeply conservative American film below the surface: it exalts the family structure and a certain kind of sacrifice for your family. (The music often had a passacaglia-like strain too, forming a tombeau for America.) Moreover, you survive in this dystopia by staying quiet — that is to say, by staying stoic — suggesting that in the wake of any conflict that might beset the world, the best thing to do is to keep quiet. Even communicating with your loved ones can be deadly to both of you, so do not emote, acquiesce quietly to your fate, and don't, whatever you do, speak up. (Or join a union.)

I could go on, but A Quiet Place is more than this. It's taut and brief, and despite cinema being an increasingly visual medium, it encourages its audience to develop a new relationship with sound.

16 January, 2022 06:44PM


Wouter Verhelst

Backing up my home server with Bacula and Amazon Storage Gateway

I have a home server.

Initially conceived and sized so I could digitize my (rather sizeable) DVD collection, I started using it for other things; I added a few play VMs on it, started using it as a destination for the deja-dup-based backups of my laptop and the Time Machine-based ones of the various Macs in the house, and used it as the primary location of all the photos I've taken with my cameras over the years (currently taking up somewhere around 500G), as well as those that were taken at our wedding (another 100G). To add to that, I've copied the data that my wife had on various older laptops and external hard drives onto this home server as well, so that we don't lose the data should something happen to one or more of these bits of older hardware.

Needless to say, the server was running full, so a few months ago I replaced the 4x2T hard drives that I originally put in the server with 4x6T ones, and there was much rejoicing.

But then I started considering what I was doing. Originally, the intent was for the server to contain DVD rips of my collection; if I were to lose the server, I could always re-rip the collection and recover that way (unless something happened that caused me to lose both at the same time, of course, but I consider that sufficiently unlikely that I don't want to worry about it). Much of the new data on the server, however, cannot be recovered like that; if the server dies, I lose my photos forever, with no way of recovering them. Obviously that can't be okay.

So I started looking at options to create backups of my data, preferably in ways that make it easily doable for me to automate the backups -- because backups that have to be initiated manually are backups that will be forgotten, and backups that are forgotten are backups that don't exist. So let's not try that.

When I was still self-employed in Belgium and running a consultancy business, I sold a number of lower-end tape libraries for which I then configured bacula, and I preferred a solution that would be similar to that without costing an arm and a leg. I did have a look at a few second-hand tape libraries, but even second hand these are still way outside what I can budget for this kind of thing, so that was out too.

After looking at a few solutions that seemed very hackish and would require quite a bit of handholding (which I don't think is a good idea), I remembered that a few years ago, I had a look at the Amazon Storage Gateway for a customer. This gateway provides a virtual tape library with 10 drives and 3200 slots (half of which are import/export slots) over iSCSI. The idea is that you install the VM on a local machine, you connect it to your Amazon account, you connect your backup software to it over iSCSI, and then it syncs the data that you write to Amazon S3, with the ability to archive data to S3 Glacier or S3 Glacier Deep Archive. I didn't end up using it at the time because it required a VMWare virtualization infrastructure (which I'm not interested in), but I found out that these days, they also provide VM images for Linux KVM-based virtual machines (amongst others), so that changes things significantly.

After making a few calculations, I figured out that for the amount of data that I would need to back up, I would require a monthly budget of somewhere between 10 and 20 USD if the bulk of the data would be on S3 Glacier Deep Archive. This is well within my means, so I gave it a try.
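For a sense of how such a budget maps to capacity, here is a sketch using an assumed Deep Archive price of roughly 0.00099 USD per GB-month (actual rates vary by region, and the gateway additionally bills for its S3-standard upload buffer and for requests):

    DEEP_ARCHIVE_PER_GB_MONTH = 0.00099   # assumed, illustrative price

    for tb in (2, 5, 10, 20):
        usd = tb * 1000 * DEEP_ARCHIVE_PER_GB_MONTH
        print(f"{tb:>2} TB -> {usd:5.2f} USD/month")

    # Roughly 1 USD per TB-month, so a 10-20 USD budget leaves headroom
    # beyond the archived data itself for cache storage and requests.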

The VM's technical requirements state that you need to assign four vCPUs and 16GiB of RAM, which just so happens to be the exact amount of RAM and CPU that my physical home server has. Obviously we can't do that. I tried getting away with 4GiB and 2 vCPUs, but that didn't work; the backup failed out after about 500G out of 2T had been written, due to the VM running out of resources. On the VM's console I found complaints that it required more memory, and I saw it mention something in the vicinity of 7GiB instead, so I decided to try again, this time with 8GiB of RAM rather than 4. This worked, and the backup was successful.

As far as Bacula is concerned, the tape library is just a (very big...) normal tape library, and I got data throughput of about 30MB/s while the VM's upload buffer hadn't yet run full, with things slowing down to pretty much my Internet line speed once it had. With those speeds, Bacula finished the backup successfully in "1 day 6 hours 43 mins 45 secs", although the storage gateway was still uploading things to S3 Glacier for a few hours after that.

All in all, this seems like a viable backup solution for large(r) amounts of data, although I haven't yet tried to perform a restore.

16 January, 2022 09:06AM

Russell Coker

SSD Endurance

I previously wrote about the issue of swap potentially breaking SSDs [1]. My conclusion was that swap wouldn’t be a problem, as on none of the normally-operating systems I run did swap account for any significant fraction of total disk writes. In that post the most writes I could see was 128GB written per day on a 120G Intel SSD (writing the entire device once a day).

My post about swap and SSDs was based on the assumption that you could get many thousands of writes to the entire device, which was incorrect. Here’s a background on the terminology from WD [2]. So in the case of the 120G Intel SSD I was doing over 1 DWPD (Drive Writes Per Day), which is in the middle of the range of SSD capability; Intel doesn’t specify the DWPD or TBW (Terabytes Written) for that device.

The most expensive and high end NVMe device sold by my local computer store is the Samsung 980 Pro which has a warranty of 150TBW for the 250G device and 600TBW for the 1TB device [3]. That means that the system which used to have an Intel SSD would have exceeded the warranty in 3 years if it had a 250G device.

My current workstation has been up for just over 7 days and has averaged 110GB written per day. It has some light VM use and the occasional kernel compile, a fairly typical developer workstation. Its storage is 2*Crucial 1TB NVMe devices in a BTRFS RAID-1; the NVMe devices are the old series of Crucial ones and are rated for 200TBW, which means that they can be expected to last for 5 years under the current load. This isn’t a real problem for me, as the performance of those devices is lower than I hoped for, so I will buy faster ones before they are 5 years old anyway.

My home server (and my wife’s workstation) is averaging 325GB per day on the SSDs used for the RAID-1 BTRFS filesystem that holds root and most frequently-written data (including VMs). The SSDs are 500G Samsung 850 EVOs [4], which are rated at 150TBW and thus have just over a year of expected lifetime. The SSDs are much more than a year old; I think Samsung stopped selling them more than a year ago. Between the 2 SSDs, SMART reports 18 uncorrectable errors and “btrfs device stats” reports 55 errors on one of them. I’m not about to immediately replace them, but it appears that they are well past their prime.
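The lifetime estimates above are simple division: rated endurance over daily writes. A sketch of the arithmetic (note that in a 2-device RAID-1 every block is written to both devices, so each one sees the full daily write volume):

    def years_until_tbw(rated_tbw: float, gb_per_day: float) -> float:
        """Years until the rated Terabytes Written figure is exhausted."""
        return rated_tbw * 1000 / gb_per_day / 365

    # Workstation: Crucial 1TB NVMe rated 200 TBW, ~110 GB written per day.
    print(f"workstation: {years_until_tbw(200, 110):.1f} years")  # ~5.0

    # Home server: 500G Samsung 850 EVO rated 150 TBW, ~325 GB per day.
    print(f"home server: {years_until_tbw(150, 325):.1f} years")  # ~1.3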

The server which runs my blog (among many other things) is averaging over 1TB written per day. It currently has a RAID-1 of hard drives for all storage but its previous incarnation (which probably had about the same amount of writes) had a RAID-1 of “enterprise” SSDs for the most written data. After a few years of running like that (and some time running with someone else’s load before it) the SSDs became extremely slow (sustained writes of 15MB/s) and started getting errors. So that’s a pair of SSDs that were burned out.

Conclusion

The amounts of data being written are steadily increasing. Recent machines with more RAM can decrease storage writes in some situations, but that doesn’t offset the increased use of checksummed and logged filesystems, VMs, databases for local storage, and other things that multiply writes. The amount of writes allowed under warranty isn’t increasing much, and there are new technologies for larger SSD storage that decrease the DWPD rating of the underlying hardware.

For the systems I own it seems that they are all going to exceed the rated TBW for the SSDs before I have other reasons to replace them, and they aren’t particularly high usage systems. A mail server for a large number of users would hit it much earlier.

RAID of SSDs is a really good thing. Replacement of SSDs is something that should be planned for and a way of swapping SSDs to less important uses is also good (my parents have some SSDs that are too small for my current use but which work well for them). Another thing to consider is that if you have a server with spare drive bays you could put some extra SSDs in to spread the wear among a larger RAID-10 array. Instead of having a 2*SSD BTRFS RAID-1 for a server you could have 6*SSD to get a 3* longer lifetime than a regular RAID-1 before the SSDs wear out (BTRFS supports this sort of thing).
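The 3* figure follows from how BTRFS RAID-1 behaves with more than two devices: it always stores exactly 2 copies of each block, spread across the whole array, so each of n devices absorbs roughly 2/n of the total writes. A quick sketch:

    def raid1_lifetime_multiplier(n_devices: int) -> float:
        # BTRFS RAID-1 keeps 2 copies of every block regardless of array
        # size, so per-device write load scales as 2/n of the pool total.
        return n_devices / 2

    for n in (2, 4, 6):
        print(f"{n} SSDs -> {raid1_lifetime_multiplier(n):.0f}x the"
              " wear lifetime of a 2-SSD RAID-1")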

Based on these calculations and the small number of errors I’ve seen on my home server I’ll add a 480G SSD I have lying around to the array to spread the load and keep it running for a while longer.

16 January, 2022 05:33AM by etbe

Russ Allbery

Review: The Brightest Fell

Review: The Brightest Fell, by Seanan McGuire

Series: October Daye #11
Publisher: DAW
Copyright: 2017
ISBN: 0-698-18352-5
Format: Kindle
Pages: 353

This is the eleventh book in the October Daye urban fantasy series, not counting various novellas and side stories. You really cannot start here, particularly given how many ties this book has to the rest of the series.

I would like to claim there's some sort of plan or strategy in how I read long series, but there are just a lot of books to read and then I get distracted and three years have gone by. The advantage of those pauses, at least for writing reviews, is that I return to the series with fresh eyes and more points of comparison. My first thought this time around was "oh, these books aren't that well written, are they," followed shortly thereafter by staying up past midnight reading just one more chapter.

Plot summaries are essentially impossible this deep into a series, when even the names of the involved characters can be a bit of a spoiler. What I can say is that we finally get the long-awaited confrontation between Toby and her mother, although it comes in an unexpected (and unsatisfying) form. This fills in a few of the gaps in Toby's childhood, although there's not much there we didn't already know. It fills in considerably more details about the rest of Toby's family, most notably her pure-blood sister.

The writing is indeed not great. This series is showing some of the signs I've seen in other authors (Mercedes Lackey, for instance) who wrote too many books per year to do each of them justice. I have complained before about McGuire's tendency to reuse the same basic plot structure, and this instance seemed particularly egregious. The book opens with Toby enjoying herself and her found family, feeling like they can finally relax. Then something horrible happens to people she cares about, forcing her to go solve the problem. This in theory requires her to work out some sort of puzzle, but in practice is fairly linear and obvious because, although I love Toby as a character, she can't puzzle her way out of a wet sack. Everything is (mostly) fixed in the end, but there's a high cost to pay, and everyone ends the book with more trauma.

The best books of this series are the ones where McGuire manages to break with this formula. This is not one of them. The plot is literally on magical rails, since The Brightest Fell skips even pretending that Toby is an actual detective (although it establishes that she's apparently still working as one in the human world, a detail that I find baffling) and gives her a plot compass that tells her where to go. I don't really mind this since I read this series for emotional catharsis rather than Toby's ingenuity, but alas that's mostly missing here as well. There is a resolution of sorts, but it's the partial and conditional kind that doesn't include awful people getting their just deserts.

This is also not a good series entry for world-building. McGuire has apparently been dropping hints for this plot back at least as far as Ashes of Honor. I like that sort of long-term texture to series like this, but the unfortunate impact on this book is a lot of revisiting of previous settings and very little in the way of new world-building. The bit with the pixies was very good; I wanted more of that, not the trip to an Ashes of Honor setting to pick up a loose end, or yet another significant scene in Borderland Books.

As an aside, I wish authors would not put real people into their books as characters, even when it's with permission as I'm sure it was here. It's understandable to write a prominent local business into a story as part of the local color (although even then I would rather it not be a significant setting in the story), but having the actual owner and staff show up, even in brief cameos, feels creepy and weird to me. It also comes with some serious risks because real people are not characters under the author's control. (All the content warnings for that link, which is a news story from three years after this book was published.)

So, with all those complaints, why did I stay up late reading just one more chapter? Part of the answer is that McGuire writes very grabby books, at least for me. Toby is a full-speed-ahead character who is constantly making things happen, and although the writing in this book had more than the usual amount of throat-clearing and rehashing of the same internal monologue, the plot still moved along at a reasonable clip. Another part of the answer is that I am all-in on these characters: I like them, I want them to be happy, and I want to know what's going to happen next. It helps that McGuire has slowly added characters over the course of a long series and given most of them a chance to shine. It helps even more that I like all of them as people, and I like the style of banter that McGuire writes. Also, significant screen time for the Luidaeg is never a bad thing.

I think this was the weakest entry in the series in a while. It wrapped up some loose ends that I wasn't that interested in wrapping up, introduced a new conflict that it doesn't resolve, spent a bunch of time with a highly unpleasant character I didn't enjoy reading about, didn't break much new world-building ground, and needed way more faerie court politics. But some of the banter was excellent, the pixies and the Luidaeg were great, and I still care a lot about these characters. I am definitely still reading.

Followed by Night and Silence.

Continuing a pattern from Once Broken Faith, the ebook version of The Brightest Fell includes a bonus novella. (I'm not sure if it's also present in the print version.)

"Of Things Unknown": As is usual for the short fiction in this series, this is a side story from the perspective of someone other than Toby. In this case, that's April O'Leary, first introduced all the way back in A Local Habitation, and the novella focuses on loose ends from that novel. Loose ends are apparently the theme of this book.

This was... fine. I like April, I enjoyed reading a story from her perspective, and I'm always curious to see how Toby looks from the outside. I thought the plot was strained and the resolution a bit too easy and painless, and I was not entirely convinced by April's internal thought processes. It felt like McGuire left some potential for greater plot complications on the table here, and I found it hard to shake the impression that this story was patching an error that McGuire felt she'd made in the much earlier novel. But it was nice to have an unambiguously happy ending after the more conditional ending of the main story. (6)

Rating: 6 out of 10

16 January, 2022 03:06AM

DocKnot 6.01

This release of my static site generator and software release manager finishes incorporating the last piece of my old release script that I was still using: copying a new software release into a software distribution archive tree, updating symlinks, updating the version database used to generate my web pages, and archiving the old version.

I also added a new docknot update-spin command that updates an input tree for the spin static site generator, fixing any deprecations or changes in the input format. Currently, all this does is convert the old-style *.rpod pointer files to new-style *.spin pointers.

This release also has a few other minor bug fixes, including for an embarrassing bug that required docknot spin be run from a package source tree because it tried to load per-package metadata (even though it doesn't use that data).

You can get the latest release from CPAN or from the DocKnot distribution page.

16 January, 2022 01:34AM

January 14, 2022

hackergotchi for Debian Social Team

Debian Social Team

Some site updates

  • Pleroma has been updated to version 2.4.1. We also suffered some downtime on the 11th of January; upgrading to the latest version fixed our issues.
  • Peertube has been upgraded to version 4.0.0.
  • Jitsi Meet has been upgraded to version 2.0.6726.
  • Mjolnir has been upgraded to 1.2.1.
  • Our upgrade to bullseye is complete; we haven’t encountered any problems \o/.

14 January, 2022 06:01PM by highvoltage

hackergotchi for Norbert Preining

Norbert Preining

Future of “my” packages in Debian

After having been (again) demoted (timed perfectly to my round birthday!) based on flimsy arguments, I have been forced to rethink the level of contribution I want to do for Debian. Considering in particular that I have switched my main desktop to dual-boot into Arch Linux (all on the same btrfs fs with subvolumes, great!) and have run Arch now for several days exclusively, I think it is time to review the packages I am somehow responsible for (full list of packages).

After about 20 years in Debian, time to send off quite some stuff that has accumulated over time.

KDE/Plasma, frameworks, Gears, and related packages

All these packages are group maintained, so there is not much to worry about. Furthermore, a few new faces have joined the team and are actively working on the packages, although mostly on Qt6. I guess that with me not taking action, frameworks, gears, and plasma will fall behind over time (frameworks: Debian 5.88 versus current 5.90, gears: Debian 21.08 versus current 21.12, plasma up to date at the moment).

With respect to my packages on OBS, they will probably also go stale over time. Using Arch nowadays I lack the development tools necessary to build Debian packages, and above all, the motivation.

I am sorry for all those who have learned to rely on my OBS packages over the last years, bringing modern and up-to-date KDE/Plasma to Debian/stable; please direct your complaints at the responsible entities in Debian.

Cinnamon

As I have written already here, I have reduced my involvement quite a lot, and nowadays Fabio and Joshua are doing the work. But neither of them is even a DM (AFAIR) and I am the only one doing uploads (I got DM upload permissions for it). I am not sure how long I will continue doing this, which means that in the near future, Cinnamon will also go stale.

TeX related packages

Hilmar has DM upload permissions and is very actively caring for the packages, so I don’t see any source of concern here. New packages will need to find a new uploader, though. With myself also being part of upstream, I can surely help out in the future with difficult problems.

Calibre and related packages

Yokota-san (another DM I have sponsored) has DM upload permissions and is very actively caring for the packages, so here, too, there is not much concern.

Onedrive

This is already badly outdated, and I recommend using the OBS builds which are current and provide binaries for Ubuntu and Debian for various versions.

ROCm

Here fortunately a new generation of developers has taken over maintenance and everything is going smoothly, much better than I could have done. Yay to that!

Qalculate related packages

These are group maintained, but unfortunately nobody else but me has touched the repos for quite some time. I fear that the packages will go stale rather soon.

isync/mbsync

I have recently salvaged this package, and use it daily, but I guess it needs to be orphaned sooner or later.

CafeOBJ

While I am also part of upstream here, I guess it will be orphaned.

Julia

Julia is group maintained, but unfortunately nobody else but me has touched the repo for quite some time, and we are already far behind the normal releases (and julia got removed from testing). It will go stale or be orphaned; I recommend installing upstream binaries.

python-mechanize

Another package that is group maintained in the Python team, but with only me as uploader I guess it will go stale and effectively be orphaned soon.

xxhash

Has already been orphaned.

qpdfview

No upstream development, so not much to do, but will be orphaned, too.


14 January, 2022 02:17AM by Norbert Preining

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp 1.0.8: Updated, Strict Headers

rcpp logo

The Rcpp team is thrilled to share the news of the newest release 1.0.8 of Rcpp which hit CRAN today, and has already been uploaded to Debian as well. Windows and macOS builds should appear at CRAN in the next few days. This release continues with the six-month cycle started with release 1.0.5 in July 2020. As a reminder, interim ‘dev’ or ‘rc’ releases will always be available in the Rcpp drat repo; this cycle there were once again seven (!!) – times two as we also tested the modified header (more below). These rolling releases tend to work just as well, and are also fully tested against all reverse-dependencies.

Rcpp has become the most popular way of enhancing R with C or C++ code. Right now, around 2478 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 242 in BioConductor.

This release finally brings a change we have worked on quite a bit over the last few months. The idea of enforcing the setting of STRICT_R_HEADERS was proposed years ago in 2016 and again in 2018. But making such a change against a widely-deployed code base has repercussions, and we were not ready then. Last April, this was revisited in issue #1158. Over the course of numerous lengthy runs of tests of a changed Rcpp package against (essentially) all reverse-dependencies (i.e. packages which use Rcpp) we identified ninety-four packages in total which needed a change. We provided either a patch we emailed, or a GitHub pull request, to all ninety-four. And we are happy to say that eighty cases were resolved via a new CRAN upload, with seven more having merged the pull request but not yet uploaded.

Hence, we could make the case to CRAN (who were always CC’ed on the monthly ‘nag’ emails we sent to maintainers of packages needing a change) that an upload was warranted. And after a brief period for their checks and inspection, our January 11 release of Rcpp 1.0.8 arrived on CRAN on January 13.

So with that, a big and heartfelt Thank You! to all eighty maintainers for updating their packages to permit this change at the Rcpp end, to CRAN for the extra checking, and to everybody else whom I bugged with the numerous emails and updates to the seemingly never-ending issue #1158. We all got this done, and that is a Good Thing (TM).

Other than the aforementioned change, which will now automatically set STRICT_R_HEADERS (unless one opts out, which remains possible), a number of nice pull requests by several contributors are included in this release:

  • Iñaki generalized use of finalizers for external pointers in #1180
  • Kevin ensured include paths are always quoted in #1189
  • Dirk added new headers to allow a more fine-grained choice of Rcpp feature for faster builds in #1191
  • Travers Ching extended the function signature generator to allow for a default R argument in #1184 and #1187
  • Dirk extended documentation, removed old example code, updated references and refreshed CI setup in several PRs (see below)

The full list of details follows.

Changes in Rcpp release version 1.0.8 (2022-01-11)

  • Changes in Rcpp API:

    • STRICT_R_HEADERS is now enabled by default, see extensive discussion in #1158 closing #898.

    • A new #define allows default setting of finalizer calls for external pointers (Iñaki in #1180 closing #1108).

    • Rcpp:::CxxFlags() now quotes the include path generated, (Kevin in #1189 closing #1188).

    • New header files Rcpp/Light, Rcpp/Lighter, Rcpp/Lightest and default Rcpp/Rcpp for fine-grained access to features (and compilation time) (Dirk #1191 addressing #1168).

  • Changes in Rcpp Attributes:

    • A new option signature allows customization of function signatures (Travers Ching in #1184 and #1187 fixing #1182)
  • Changes in Rcpp Documentation:

    • The Rcpp FAQ has a new entry on how not to grow a vector (Dirk in #1167).

    • Some long-spurious calls to RNGScope have been removed from examples (Dirk in #1173 closing #1172).

    • DOI reference in the bibtex files have been updated per JSS request (Dirk in #1186).

  • Changes in Rcpp Deployment:

    • Some continuous integration components have been updated (Dirk in #1174, #1181, and #1190).
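
As an aside for package maintainers who still need the old behaviour: if I read the discussion in issue #1158 correctly, the opt-out is a preprocessor define, RCPP_NO_STRICT_HEADERS (please verify the exact name against the issue). A minimal sketch:

# Hedged sketch: opt one package out of the strict headers by defining
# RCPP_NO_STRICT_HEADERS (name assumed from issue #1158; verify there)
# in the package's src/Makevars, then rebuild the package.
echo "PKG_CPPFLAGS += -DRCPP_NO_STRICT_HEADERS" >> src/Makevars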

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under rcpp tag at StackOverflow which also allows searching among the (currently) 2822 previous questions.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 January, 2022 01:03AM

Reproducible Builds (diffoscope)

diffoscope 200 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 200. This version includes the following changes:

* Even if a Sphinx .inv inventory file is labelled "The remainder of this
  file is compressed using zlib", it might not actually be. In this case,
  don't traceback, and simply return the original content.
  (Closes: reproducible-builds/diffoscope#299)
* Update "X has been modified after NT_GNU_BUILD_ID has been applied" message
  to, for instance, not duplicating the full filename in the primary
  diffoscope's output.

You can find out more by visiting the project homepage.

14 January, 2022 12:00AM

January 13, 2022

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (November and December 2021)

The following contributors got their Debian Developer accounts in the last two months:

  • Douglas Andrew Torrance (dtorrance)
  • Mark Lee Garrett (lee)

The following contributors were added as Debian Maintainers in the last two months:

  • Lukas Matthias Märdian
  • Paulo Roberto Alves de Oliveira
  • Sergio Almeida Cipriano Junior
  • Julien Lamy
  • Kristian Nielsen
  • Jeremy Paul Arnold Sowden
  • Jussi Tapio Pakkanen
  • Marius Gripsgard
  • Martin Budaj
  • Peymaneh
  • Tommi Petteri Höynälänmaa

Congratulations!

13 January, 2022 04:00PM by Jean-Pierre Giraud

hackergotchi for Daniel Lange

Daniel Lange

January 12, 2022

hackergotchi for Michael Prokop

Michael Prokop

Revisiting 2021


Uhm yeah, so this shirt didn’t age well. :) Mainly to recall what happened, I’m once again revisiting my previous year (previous edition: 2020).

2021 was quite challenging overall. It started with four weeks of distance learning at school. Luckily at least at school things got back to "some kind of normal" afterwards. The lockdowns turned out to be an excellent opportunity for practising Geocaching though, and that’s what I started to do with my family. It’s a great way to grab some fresh air, get to know new areas, and spend time with family and friends – I plan to continue doing this. :)

We bought a family season ticket for Freibäder (open-air baths) in Graz; this turned out to be a great investment – I enjoyed the open air swimming with family, as well as going for swimming laps on my own very much, and plan to do the same in 2022. Due to the lockdowns and the pandemic, the weekly Badminton sessions sadly didn’t really take place, so I pushed towards the above-mentioned outdoor swimming and also some running; with my family we managed to do some cycling, inline skating and even practiced some boulder climbing.

For obvious reasons plenty of concerts I was looking forward to didn’t take place. With my parents we at least managed to attend a concert performance of Puccini’s Tosca with Jonas Kaufmann at Schloßbergbühne Kasematten/Graz, and with the kids we saw "Robin Hood" in Oper Graz and "Pippi Langstrumpf" at the Studiobühne of Oper Graz. The lack of concerts and rehearsals once again (and still) severely impacts my playing the drums, including at HTU BigBand Graz. :-/

Grml-wise we managed to publish release 2021.07, codename JauKerl. Debian-wise we got version 11 AKA bullseye released as the new stable release in August.

For 2021 I planned to and also managed to minimize buying (new) physical stuff, except for books and other reading stuff. Speaking of reading, 2021 was nice — I managed to finish more than 100 books (see “Mein Lesejahr 2021“), and I’d like to keep the reading pace.

Now let’s hope for better times in 2022!

12 January, 2022 05:30PM by mika

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Training apps

I've been using various training apps (and their associated web sites) since 2010 now, forward-porting data to give me twelve years of logs. (My primary migration path has been CardioTrainer → Endomondo → Strava.) However, it strikes me that they're just becoming worse and worse, and I think I've figured out why: What I want is a training site with some social functions, but what companies are creating are social networks. Not social networks about training; just social networks.

To be a bit more concrete: I want something that's essentially a database. I want to quickly search for workouts in a given area and of a given length, and then bring up key information, compare, contrast, get proper graphs (not something where you can't see the difference between 3:00/km and 4:00/km!), and so on. (There's a long, long list of features and bugs I'd like to get fixed, but I won't list them all.)

But Strava is laser-focused on what's happened recently; there's a stream with my own workouts and my friends', just like a social network, and that's pretty much the main focus (and they have not even tried to solve the stream problem; if I have a friend that's too active, I don't see anything else). I'd need to pay a very steep price (roughly $110/year, the same price as a used GPS watch!) to even get a calendar; without that, I need to go back through a super-slow pagination UI, 20 workouts at a time, to even see something older.

Garmin Connect is somewhat better here; at least I can query on length and such. (Not so strange; Garmin is in the business of selling devices, not making social networks.) But it's very oriented around one specific watch brand, and it's far from perfect either. My big issue is that nobody's even trying, it seems. But I guess there's simply no money in that.

12 January, 2022 05:27PM

January 11, 2022

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

ThinkPad AMD Debian

After a hiatus of 6 years, it was nice to be back with the ThinkPad. This blog post briefly touches upon my impressions with the current generation ThinkPad T14 Gen2 AMD variant.

ThinkPad T14 Gen2 AMD

Lenovo

It took 8 weeks to get my hands on the machine. Given the pandemic, restrictions and uncertainties, I'm not sure if I should call it an on-time delivery. This was a CTO (Customise-to-order), so it was nice to get rid of things I really didn't care about or use much. On the other side, it also meant I could save on some power. It also came comparatively cheaper overall.

  • No fingerprint reader
  • No Touch screen

There’s still parts where Lenovo could improve. Or less frustate a customer. I don’t understand why a company would provide a full customization option on their portal, while at the same time, not provide an explicit option to choose the make/model of the hardware one wants. Lenovo deliberately chooses to not show/specify which WiFi adapter one could choose. So, as I suspected, I ended up with a MEDIATEK Corp. Device 7961 wifi adapter.

AMD

For the first time in my computing life, I'm now using AMD at the core. I was pretty frustrated with annoying Intel Graphics bugs, so decided to take the plunge and give AMD/ATI a shot, knowing that the radeon driver does have decent support. So far, on the graphics side of things, I'm glad that things look bright. The stock in-kernel radeon driver has been working perfectly for my needs and I haven't had to tinker even once so far, in my 30 days of use.

On overall system performance, I have not done any benchmarks, nor do I want to. But on the whole, the system performance is smooth.

Power/Thermal

This is where things need more improvement on the AMD side. This AMD laptop draws a terrible amount of power in suspend mode. And it isn't just this machine; the previous T14 Gen1 has similar problems. I'm not sure if this is a generic ThinkPad problem or an AMD-specific problem. But coming from the Dell XPS 13 9370 Intel, this does draw a lot more power. So much so that I chose to use hibernation instead.

Similarly, on the thermal side, this machine doesn't cool down as well as the Dell XPS Intel one. On an idle machine, its temperatures are comparatively higher. Looking at powertop reports, it shows an average consumption of 10 watts even while idle.

I’m hoping these are Linux ingeration issues and that Lenovo/AMD will improve things in the coming months. But given the user feedback on the ThinkPad T14 Gen1 thread, it may just be wishful thinking.

Linux

The overall hardware support has been surprisingly decent. The MediaTek WiFi driver had some glitches but with Linux 5.15+, things have considerably improved. And I hope the trend will continue with forthcoming Linux releases. My previous device driver experience with MediaTek wasn’t good but I took the plunge, considering that in the worst scenario I’d have the option to swap the card.

There’s a lot of marketing about Linux + Intel. But I took a jibe with Linux + AMD. There are glitches but nothing so far that has been a dealbreaker. If anything, I wish Lenovo/AMD would seriously work on the power/thermal issues.

Migration

Other than what’s mentioned above, I haven’t had any serious issues. I may have had some rare occassional hangs but they’ve been so infrequent that I haven’t spent time to investigate those.

Upon receiving the machine, my biggest requirement was how to switch my current workstation from the Dell XPS to the Lenovo ThinkPad. I've been using btrfs for some time now and, over the years, have built my own practice for structuring it: provisioning [sub]volumes based on use case, like keeping separate subvols for cache/temporary data, copy-on-write data, swap, etc. I wish these things could be simplified, either on the btrfs tooling side or with some different tool on top of it.

Below is a filtered list of subvols created over the years that were worth moving to the new machine.

rrs@priyasi:~$ cat btrfs-volume-layout 
ID 550 gen 19166 top level 5 path home/foo/.cache
ID 552 gen 1522688 top level 5 path home/rrs
ID 553 gen 1522688 top level 552 path home/rrs/.cache
ID 555 gen 1426323 top level 552 path home/rrs/rrs-home/Libvirt-Images
ID 618 gen 1522672 top level 5 path var/spool/news
ID 634 gen 1522670 top level 5 path var/tmp
ID 635 gen 1522688 top level 5 path var/log
ID 639 gen 1522226 top level 5 path var/cache
ID 992 gen 1522670 top level 5 path disk-tmp
ID 1018 gen 1522688 top level 552 path home/rrs/NoBackup
ID 1196 gen 1522671 top level 5 path etc
ID 23721 gen 775692 top level 5 path swap
18:54 ♒ ॐ ♅ ♄ ⛢     ☺ 😄    

btrfs send/receive

This did come in handy, but I sorely missed a feature. Maybe it isn't there, or it is and I didn't look closely enough. Over the years, different attributes were set on different subvols, and over time I forget which feature was added where. From a migration point of view, it'd be nice to say, "Take this volume and take it with all its attributes". I didn't find that functionality in send/receive.

There’s get/set-property which I noticed later but by then it was late. So some sort of tooling, ideally something like btrfs migrate or somesuch would be nicer.

In the file system world, we already have nice tools to take care of similar scenarios. Like with rsync, I can request it to carry all file attributes.

Also, IIRC, send/receive works only on ro volumes, so there's more work one needs to do:

  1. create ro vol
  2. send
  3. receive
  4. don’t forget to set rw property
  5. And then somehow find out other properties set on each individual subvols and [re]apply the same on the destination

I wish all of this were condensed into a sub-command, as sketched below.
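
For what it's worth, here is a rough shell sketch of what such a btrfs migrate sub-command could look like for a single subvolume. It is untested, the arguments are placeholders, and it only automates the ro handling; any other per-subvolume attributes would still need btrfs property get/set by hand.

#!/bin/sh
# Hypothetical "btrfs migrate" sketch condensing the five steps above
# for one subvolume; untested, arguments are placeholders.
set -e
src="$1"        # e.g. /home/rrs
dstdir="$2"     # e.g. /media/user/BTRFSROOT
name=$(basename "$src")

btrfs subvolume snapshot -r "$src" "$src.ro"        # 1. create ro snapshot
btrfs send "$src.ro" | btrfs receive "$dstdir"      # 2. send and 3. receive
btrfs property set -ts "$dstdir/$name.ro" ro false  # 4. set rw property again
btrfs property get -ts "$src"                       # 5. list remaining source
                                                    #    properties to re-apply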

For my own sake, for this migration, the steps used were:

user@debian:~$ for volume in `sudo btrfs sub list /media/user/TOSHIBA/Migrate/ | cut -d ' ' -f9 | grep -v ROOTVOL | grep -v etc | grep -v btrbk`; do echo $volume; sudo btrfs send /media/user/TOSHIBA/$volume | sudo btrfs receive /media/user/BTRFSROOT/ ; done
Migrate/snapshot_disk-tmp
At subvol /media/user/TOSHIBA/Migrate/snapshot_disk-tmp
At subvol snapshot_disk-tmp
Migrate/snapshot-home_foo_.cache
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_foo_.cache
At subvol snapshot-home_foo_.cache
Migrate/snapshot-home_rrs
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs
At subvol snapshot-home_rrs
Migrate/snapshot-home_rrs_.cache
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs_.cache
At subvol snapshot-home_rrs_.cache
ERROR: crc32 mismatch in command
Migrate/snapshot-home_rrs_rrs-home_Libvirt-Images
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs_rrs-home_Libvirt-Images
At subvol snapshot-home_rrs_rrs-home_Libvirt-Images
ERROR: crc32 mismatch in command
Migrate/snapshot-var_spool_news
At subvol /media/user/TOSHIBA/Migrate/snapshot-var_spool_news
At subvol snapshot-var_spool_news
Migrate/snapshot-var_lib_machines
At subvol /media/user/TOSHIBA/Migrate/snapshot-var_lib_machines
At subvol snapshot-var_lib_machines
Migrate/snapshot-var_lib_machines_DebianSidTemplate
..... snipped .....

And then, follow up with:

user@debian:~$ for volume in `sudo btrfs sub list /media/user/BTRFSROOT/ | cut -d ' ' -f9`; do echo $volume; sudo btrfs property set -ts /media/user/BTRFSROOT/$volume ro false; done
ROOTVOL
ERROR: Could not open: No such file or directory
etc
snapshot_disk-tmp
snapshot-home_foo_.cache
snapshot-home_rrs
snapshot-var_spool_news
snapshot-var_lib_machines
snapshot-var_lib_machines_DebianSidTemplate
snapshot-var_lib_machines_DebSidArmhf
snapshot-var_lib_machines_DebianJessieTemplate
snapshot-var_tmp
snapshot-var_log
snapshot-var_cache
snapshot-disk-tmp

And then finally, renaming everything to match the proper layout:

user@debian:/media/user/BTRFSROOT$ for x in snapshot*; do vol=$(echo $x | cut -d '-' -f2 | sed -e "s|_|/|g"); echo $x $vol; sudo mv $x $vol; done
snapshot-var_lib_machines var/lib/machines
snapshot-var_lib_machines_Apertisv2020ospackTargetARMHF var/lib/machines/Apertisv2020ospackTargetARMHF
snapshot-var_lib_machines_Apertisv2021ospackTargetARM64 var/lib/machines/Apertisv2021ospackTargetARM64
snapshot-var_lib_machines_Apertisv2022dev3ospackTargetARMHF var/lib/machines/Apertisv2022dev3ospackTargetARMHF
snapshot-var_lib_machines_BusterArm64 var/lib/machines/BusterArm64
snapshot-var_lib_machines_DebianBusterTemplate var/lib/machines/DebianBusterTemplate
snapshot-var_lib_machines_DebianJessieTemplate var/lib/machines/DebianJessieTemplate
snapshot-var_lib_machines_DebianSidTemplate var/lib/machines/DebianSidTemplate
snapshot-var_lib_machines_DebianSidTemplate_var_lib_portables var/lib/machines/DebianSidTemplate/var/lib/portables
snapshot-var_lib_machines_DebSidArm64 var/lib/machines/DebSidArm64
snapshot-var_lib_machines_DebSidArmhf var/lib/machines/DebSidArmhf
snapshot-var_lib_machines_DebSidMips var/lib/machines/DebSidMips
snapshot-var_lib_machines_JenkinsApertis var/lib/machines/JenkinsApertis
snapshot-var_lib_machines_v2019 var/lib/machines/v2019
snapshot-var_lib_machines_v2019LinuxSupport var/lib/machines/v2019LinuxSupport
snapshot-var_lib_machines_v2020 var/lib/machines/v2020
snapshot-var_lib_machines_v2021dev3Slim var/lib/machines/v2021dev3Slim
snapshot-var_lib_machines_v2021dev3SlimTarget var/lib/machines/v2021dev3SlimTarget
snapshot-var_lib_machines_v2022dev2OspackMinimal var/lib/machines/v2022dev2OspackMinimal
snapshot-var_lib_portables var/lib/portables
snapshot-var_log var/log
snapshot-var_spool_news var/spool/news
snapshot-var_tmp var/tmp

snapper

Entirely independent of this, but indirectly related: I use snapper as my snapshotting tool. It worked perfectly on my previous machine. While everything got migrated, the only thing that fell apart was snapper; it just wouldn't start/run properly. The funny thing is that I just removed the snapper configs and reinitialized with the exact same config again, and voila, snapper was happy.
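
In case someone hits the same wall, the reset amounted to something like the following sketch; the config name root and the subvolume path / are assumptions, so substitute your own from /etc/snapper/configs.

# Sketch of the snapper reset; "root" and "/" are assumed placeholders
# for the config name and the subvolume it covers.
snapper -c root delete-config
snapper -c root create-config /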

Conclusion

That was pretty much it. With the above done, I also migrated /boot and then just chrooted in to install the boot loader. At some point I'd like to explore other boot options, but given that that is such a non-essential task, it is low on the list.

The good part was that I booted into my new machine with my exact workstation setup as it was, all the way down to the user cache and the desktop session. So that part was nice.

But I surely think there's room for a better migration experience here; if not directly as btrfs migrate, then maybe as an independent tool. The problem is that such a tool is going to be used only once every few years, so I didn't find the motivation to write one. But this surely would be a good use case for the distribution vendors.

11 January, 2022 02:07PM by Ritesh Raj Sarraf (rrs@researchut.com)

Russ Allbery

Review: Hench

Review: Hench, by Natalie Zina Walschots

Publisher: William Morrow
Copyright: September 2020
ISBN: 0-06-297859-4
Format: Kindle
Pages: 403

Anna Tromedlov is a hench, which means she does boring things for terrible people for money. Supervillains need a lot of labor to keep their bases and criminal organizations running, and they get that labor the same way everyone else does: through temporary agencies. Anna does spreadsheets, preferably from home on her couch.

On-site work was terrifying and she tried to avoid it, but the lure of a long-term contract was too strong. The Electric Eel, despite being a creepy sleazeball, seemed to be a manageable problem. He needed some support at a press conference, which turns out to be code for being a diversity token in front of the camera, but all she should have to do is stand there.

That's how Anna ends up holding the mind control device to the head of the mayor's kid when the superheroes attack, followed shortly by being thrown across the room by Supercollider.

Left with a complex fracture of her leg that will take months to heal, a layoff notice and a fruit basket from Electric Eel's company, and a vaguely menacing hospital conversation with the police (including Supercollider in a transparent disguise) in which it's made clear to her that she is mistaken about Supercollider's hand-print on her thigh, Anna starts wondering just how much damage superheroes have done. The answer, when analyzed using the framework for natural disasters, is astonishingly high. Anna's resulting obsession with adding up the numbers leads to her starting a blog, the Injury Report, with a growing cult following. That, in turn, leads to a new job and a sponsor: the mysterious supervillain Leviathan.

To review this book properly, I need to talk about Watchmen.

One of the things that makes superheroes interesting culturally is the straightforwardness of their foundational appeal. The archetypal superhero story is an id story: an almost pure power fantasy aimed at teenage boys. Like other pulp mass media, they reflect the prevailing cultural myths of the era in which they're told. World War II superheroes are mostly all-American boy scouts who punch Nazis. 1960s superheroes are a more complex mix of outsider misfits with a moral code and sarcastic but earnestly ethical do-gooders. The superhero genre is vast, with numerous reinterpretations, deconstructions, and alternate perspectives, but its ur-story is a good versus evil struggle of individual action, in which exceptional people use their powers for good to defeat nefarious villains.

Watchmen was not the first internal critique of the genre, but it was the one that everyone read in the 1980s and 1990s. It takes direct aim at that moral binary. The superheroes in Watchmen are not paragons of virtue (some of them are truly horrible people), and they have just as much messy entanglement with the world as the rest of us. It was superheroes re-imagined for the post-Vietnam, post-Watergate era, for the end of the Cold War when we were realizing how many lies about morality we had been told. But it still put superheroes and their struggles with morality at the center of the story.

Hench is a superhero story for the modern neoliberal world of reality TV and power inequality in the way that Watchmen was a superhero story for the Iran-Contra era and the end of the Cold War.

Whether our heroes have feet of clay is no longer a question. Today, a better question is whether the official heroes, the ones that are celebrated as triumphs of individual achievement, are anything but clay. Hench doesn't bother asking whether superheroes have fallen short of their ideal; that answer is obvious. What Hench asks instead is a question familiar to those living in a world full of televangelists, climate denialism, manipulative advertising, and Facebook: are superheroes anything more than a self-perpetuating scam? Has the good superheroes supposedly do ever outweighed the collateral damage? Do they care in the slightest about the people they're supposedly protecting? Or is the whole system of superheroes and supervillains a performance for an audience, one that chews up bystanders and spits them out mangled while delivering simplistic and unquestioned official morality?

This sounds like a deeply cynical premise, but Hench is not a cynical book. It is cynical about superheroes, which is not the same thing. The brilliance of Walschots's approach is that Anna has a foot in both worlds. She works for a supervillain and, over the course of the book, gains access to real power within the world of superheroic battles. But she's also an ordinary person with ordinary problems: not enough money, rocky friendships, deep anger at the injustices of the world and the way people like her are discarded, and now a disability and PTSD. Walschots perfectly balances the tension between those worlds and maintains that tension straight to the end of the book. From the supervillain world, Anna draws support, resources, and a mission, but all of the hope, true morality, and heart of this book comes from the ordinary side.

If you had the infrastructure of a supervillain at your disposal, what would you do with it?

Anna's answer is to treat superheroes as a destructive force like climate change, and to do whatever she can to drive them out of the business and thus reduce their impact on the world. The tool she uses for that is psychological warfare: make them so miserable that they'll snap and do something too catastrophic to be covered up. And the raw material for that psychological warfare is data.

That's the foot in the supervillain world. In descriptions of this book, her skills with data are often called her superpower. That's not exactly wrong, but the reason why she gains power and respect is only partly because of her data skills. Anna lives by the morality of the ordinary people world: you look out for your friends, you treat your co-workers with respect as long as they're not assholes, and you try to make life a bit better for the people around you. When Leviathan gives her the opportunity to put together a team, she finds people with skills she admires, funnels work to people who are good at it, and worries about the team dynamics. She treats the other ordinary employees of a supervillain as people, with lives and personalities and emotions and worth. She wins their respect.

Then she uses their combined skills to destroy superhero lives.

I was fascinated by the moral complexity in this book. Anna and her team do villainous things by the morality of the superheroic world (and, honestly, by the morality of most readers), including some things that result in people's deaths. By the end of the book, one could argue that Anna has been driven by revenge into becoming an unusual sort of supervillain. And yet, she treats the people around her so much better than either the heroes or the villains do. Anna is fiercely moral in all the ordinary person ways, and that leads directly to her becoming a villain in the superhero frame. Hench doesn't resolve that conflict; it just leaves it on the page for the reader to ponder.

The best part about this book is that it's absurdly grabby, unpredictable, and full of narrative momentum. Walschots's pacing kept me up past midnight a couple of times and derailed other weekend plans so that I could keep reading. I had no idea where the plot was going even at the 80% mark. The ending is ambiguous and a bit uncomfortable, just like the morality throughout the book, but I liked it the more I thought about it.

One caveat, unfortunately: Hench has some very graphic descriptions of violence and medical procedures, and there's an extended torture sequence with some incredibly gruesome body horror that I thought went on far too long and was unnecessary to the plot. If you're a bit squeamish like I am, there are some places where you'll want to skim, including one sequence that's annoyingly intermixed with important story developments.

Otherwise, though, this is a truly excellent book. It has a memorable protagonist with a great first-person voice, an epic character arc of empowerment and revenge, a timely take on the superhero genre that uses it for sharp critique of neoliberal governance and reality TV morality, a fascinatingly ambiguous and unsettled moral stance, a gripping and unpredictable plot, and some thoroughly enjoyable competence porn. I had put off reading it because I was worried that it would be too cynical or dark, but apart from the unnecessary torture scene, it's not at all. Highly recommended.

Rating: 9 out of 10

11 January, 2022 02:56AM

January 09, 2022

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rblpapi 0.3.13: Some Fixes and Documentation

A new version, now at 0.3.13, of the Rblpapi package just arrived at CRAN. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg (but note that a valid Bloomberg license and installation is required).

This is the thirteenth release since the package first appeared on CRAN in 2016. It comprises PRs from three different contributors (with special thanks once again to Michael Kerber), extends tests and documentation, and extends two function interfaces to control explicitly whether returned lists of length one should be simplified.

The list of changes follows below.

Changes in Rblpapi version 0.3.13 (2022-01-09)

  • Add a test for bds (Michael Kerber in #352)

  • Add simplify argument (and option) to bdh and bds (Dirk in #354 closing #353, #351)

  • Improve documentation for bsearch (John in #357 closing #356)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

09 January, 2022 11:07PM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Grading using the Wacom Intuos S

I've been teaching economics for a few semesters already and, slowly but surely, I'm starting to get the hang of it. Having to deal with teaching remotely hasn't been easy though and I'm really hoping the winter semester will be in-person again.

Although I worked way too much last semester1, I somehow managed to transition to using a graphics tablet. I bought a Wacom Intuos S tablet (model CTL-4100) in late August 2021 and overall, I have been very happy with it. Wacom Canada offers a small discount for teachers and I ended up paying 115 CAD (~90 USD) for the tablet, an overall very reasonable price.

Unsurprisingly, the Wacom support on Linux is very good and my tablet worked out of the box. The only real problem I had was that, by default, the tablet sometimes boots up in Android mode, making it unusable. This is easily solved by pressing down on the pad's first and last buttons for a few seconds, until the LED turns white.

The included stylus came with hard plastic nibs, but I find them too slippery. I eventually purchased hard felt nibs, which increase the friction and makes for a more paper-like experience. They are a little less durable, but I wrote quite a fair bit and still haven't gone through a single one yet.

Learning curve

Learning how to use a graphical tablet took me at least a few weeks! When writing on a sheet of paper, the eyes see what the hand writes directly. This is not the case when using a graphical tablet: you are writing on one surface and seeing the result on your screen, a completely different surface. This dissociation takes a bit of practice to master, but after going through more than 300 pages of notes, it now feels perfectly normal.

Here is a side-by-side comparison of my very average handwriting2:

  1. on paper
  2. using the tablet, the first week
  3. using the tablet, after a couple of months

Comparison of my writing, on paper, using the tablet and using the tablet after a few weeks

I still prefer the result of writing on paper, but I think this is mostly due to me not using the pressure sensitivity feature. The support in xournal wasn't great, but now that I've tried it in xournalpp (more on this below), I think I will be enabling it in the future. The result on paper is also more consistent, but I trust my skills will improve over time.

Pressure sensitivity on vs off

Use case

The first use case I have for the tablet is grading papers. I've been asking my students to submit their papers via Moodle for a few semesters already, but until now, I was grading them using PDF comments. The experience wasn't great3 and was rather slow compared to grading physical copies.

I'm also a somewhat old-school teacher: I refuse to teach using slides. Death by PowerPoint is real. I write on the blackboard a lot4 and I find it much easier to prepare my notes by hand than by typing them, as the end result is closer to what I actually end up writing down on the board.

Writing notes by hand on sheets of paper is a chore too, especially when you revisit the same material regularly. Being able to handwrite digital notes gives me a lot more flexibility and it's been great.

So far, I have been using xournal to write notes and grade papers, and although it is OK, it has a bunch of quirks I dislike. I was waiting for xournalpp to be packaged in Debian, and it now is5! I'm looking forward to using it next semester.

Towards a better computer monitor

I have also been feeling the age of my current computer monitor. I am currently using an old 32" 1080p TV from LG and up until now, I had been able to deal with the drawbacks. The colors are pretty bad and 1080p for such a large display isn't great, but I got used to it.

What I really noticed when I started using my graphics tablet was the input lag. It's bad enough that there's a clear jello effect when writing and it eventually gives me a headache. It's so bad I usually prefer to work on my laptop, which has a nicer but noticeably smaller panel.

I'm currently looking to replace this aging TV6 with something more modern. I have been holding out since I would like to buy something that will last me another 10 years if possible. Sadly, 32" high refresh rate 4K monitors aren't exactly there yet and I haven't found anything matching my criteria. I would probably also need a new GPU, something that is not easy to come by these days.


  1. I worked at two colleges at the same time, teaching 2 different classes (one of which I was giving for the first time...) to 6 groups in total. I averaged more than 60h per week for sure. 

  2. Yes, I only write in small caps. Students love it, as it's much easier to read on the blackboard. 

  3. Although most PDF readers support displaying comments, some of my more clueless students still had trouble seeing them and I had to play tech support more than I wanted. 

  4. Unsurprisingly, my students also love it. One of the most common pieces of feedback I get at the end of the semester is that they hate slides too and are very happy I'm one of the few teachers who writes on the board. 

  5. Many thanks to Barak A. Pearlmutter for maintaining this package. 

  6. It dates back from 2010, when my mom replaced our old CRT by a flat screen. FullHD TVs were getting affordable and I wasn't sad to see our tiny 20-something inches TV go. I eventually ended up with the LG flatscreen a few years later when I moved out in my first apartment and my mom got something better. 

09 January, 2022 09:30PM by Louis-Philippe Véronneau

Russell Coker

Video Conferencing (LCA)

I’ve just done a tech check for my LCA lecture. I had initially planned to do what I had done before and use my phone for recording audio and video and my PC for other stuff. The problem is that I wanted to get an external microphone going, and plugging in a USB microphone turned off the speaker in the phone (it seemed to direct audio to a non-existent USB audio output). I tried using bluetooth headphones with the USB microphone and that didn’t work. Eventually a viable option seemed to be using USB headphones on my PC with the phone for camera and microphone. Then it turned out that my phone (Huawei Mate 10 Pro) didn’t support resolutions higher than VGA with Chrome (it didn’t have the “advanced” settings menu to select resolution); this is probably an issue of Android build features. So the best option is to use a webcam on the PC. I was recommended a Logitech C922, but Officeworks only has the Logitech C920, which is apparently OK.

The free connection test from freeconference.com [1] is good for testing out how your browser works for videoconferencing. It tests each feature separately and is easy to run.

After buying the C920 webcam I found that it sometimes worked and sometimes caused a kernel panic like the following (partial panic log included for the benefit of people Googling this Logitech C920 problem):

[95457.805417] BUG: kernel NULL pointer dereference, address: 0000000000000000
[95457.805424] #PF: supervisor read access in kernel mode
[95457.805426] #PF: error_code(0x0000) - not-present page
[95457.805429] PGD 0 P4D 0 
[95457.805431] Oops: 0000 [#1] SMP PTI
[95457.805435] CPU: 2 PID: 75486 Comm: v4l2src0:src Not tainted 5.15.0-2-amd64 #1  Debian 5.15.5-2
[95457.805438] Hardware name: HP ProLiant ML110 Gen9/ProLiant ML110 Gen9, BIOS P99 02/17/2017
[95457.805440] RIP: 0010:usb_ifnum_to_if+0x3a/0x50 [usbcore]
...
[95457.805481] Call Trace:
[95457.805484]  
[95457.805485]  usb_hcd_alloc_bandwidth+0x23d/0x360 [usbcore]
[95457.805507]  usb_set_interface+0x127/0x350 [usbcore]
[95457.805525]  uvc_video_start_transfer+0x19c/0x4f0 [uvcvideo]
[95457.805532]  uvc_video_start_streaming+0x7b/0xd0 [uvcvideo]
[95457.805538]  uvc_start_streaming+0x2d/0xf0 [uvcvideo]
[95457.805543]  vb2_start_streaming+0x63/0x100 [videobuf2_common]
[95457.805550]  vb2_core_streamon+0x54/0xb0 [videobuf2_common]
[95457.805555]  uvc_queue_streamon+0x2a/0x40 [uvcvideo]
[95457.805560]  uvc_ioctl_streamon+0x3a/0x60 [uvcvideo]
[95457.805566]  __video_do_ioctl+0x39b/0x3d0 [videodev]

It turns out that Ubuntu Launchpad bug #1827452 has great information on this problem [2]. Apparently if the device decides it doesn’t have enough power then it will reconnect and get a different USB bus device number and this often happens when the kernel is initialising it. There’s a race condition in the kernel code in which the code to initialise the device won’t realise that the device has been detached and will dereference a NULL pointer and then mess up other things in USB device management. The end result for me is that all USB devices become unusable in this situation, commands like “lsusb” hang, and a regular shutdown/reboot hangs because it can’t kill the user session because something is blocked on USB.

One of the comments on the Launchpad bug is that a powered USB hub can alleviate the problem while a USB extension cable (which I had been using) can exacerbate it. Officeworks currently advertises only one powered USB hub; it’s described as “USB 3” but also “maximum speed 480 Mbps” (USB 2 speed). So basically they are selling a USB 2 hub for four times the price that USB 2 hubs used to sell for.

When debugging this I used the “cheese” webcam utility program and ran it in a KVM virtual machine. The KVM parameters “-device qemu-xhci -usb -device usb-host,hostbus=1,hostaddr=2” (where 1 and 2 are replaced by the Bus and Device numbers from “lsusb”) allow the USB device to be passed through to the VM. Doing this meant that I didn’t have to reboot my PC every time a webcam test failed.
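
Put together, a minimal invocation looks something like the sketch below; the memory size and disk image path are hypothetical placeholders, and the bus/address values come from your own lsusb output.

# Find the webcam's Bus and Device numbers first:
lsusb | grep -i logitech
# e.g. "Bus 001 Device 002: ID 046d:082d Logitech, Inc. HD Pro Webcam C920"

# Then start the VM with the device passed through (memory size and
# disk image path are hypothetical):
kvm -m 2048 -drive file=debian-test.qcow2,format=qcow2 \
  -device qemu-xhci -usb \
  -device usb-host,hostbus=1,hostaddr=2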

For audio I’m using the Sades Wand gaming headset I wrote about previously [3].

09 January, 2022 07:20AM by etbe

François Marier

Removing an alias/domain from a Let's Encrypt certificate managed by certbot

I recently got an error during a certbot renewal:

Challenge failed for domain echo.fmarier.org
Failed to renew certificate jabber-gw.fmarier.org with error: Some challenges have failed.
The following renewals failed:
  /etc/letsencrypt/live/jabber-gw.fmarier.org/fullchain.pem (failure)
1 renew failure(s), 0 parse failure(s)

due to the fact that I had removed the DNS entry for echo.fmarier.org.

I tried to find a way to remove that name from the certificate before renewing it, but it seems like the only way to do it is to create a new certificate without that alternative name.

First, I looked for the domains included in the certificate:

$ certbot certificates
...
  Certificate Name: jabber-gw.fmarier.org
    Serial Number: 31485424904a33fb2ab43ab174b4b146512
    Key Type: RSA
    Domains: jabber-gw.fmarier.org echo.fmarier.org fmarier.org
    Expiry Date: 2022-01-04 05:28:57+00:00 (VALID: 29 days)
    Certificate Path: /etc/letsencrypt/live/jabber-gw.fmarier.org/fullchain.pem
    Private Key Path: /etc/letsencrypt/live/jabber-gw.fmarier.org/privkey.pem

Then, deleted the existing certificate:

$ certbot delete jabber-gw.fmarier.org

and finally created a new certificate with all other names except for the obsolete one:

$ certbot certonly -d jabber-gw.fmarier.org -d fmarier.org --duplicate
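
To confirm that the obsolete name is really gone, the subject alternative names on the new certificate can be inspected directly; a quick check with openssl, using the Certificate Path shown by certbot certificates earlier:

$ openssl x509 -noout -text \
    -in /etc/letsencrypt/live/jabber-gw.fmarier.org/fullchain.pem | grep DNS:

The echo.fmarier.org entry should no longer appear in the output.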

09 January, 2022 06:00AM

Russ Allbery

Review: Redemptor

Review: Redemptor, by Jordan Ifueko

Series: Raybearer #2
Publisher: Amulet Books
Copyright: 2021
ISBN: 1-68335-720-5
Format: Kindle
Pages: 328

Redemptor is the second half of a duology that started with Raybearer. You could read the first book without the second, but reading the second without the first will not make much sense. I'm going to be a bit elliptical in my plot description since there's a lot of potential for spoilers for the first book.

Tarisai has reached a point of stability and power, but she's also committed herself to a goal, one that will right a great historical and ongoing injustice. She's also now in a position to both notice and potentially correct numerous other injustices in the structure of her society, and plans to start by defending those closest to her. But in the midst of her opening gambit to save someone she believes is unjustly imprisoned, the first murderous undead child appears, attacking both Tarisai's fragile sense of security and her self-esteem and self-worth. Before long, she's drowning in feelings of inadequacy and isolation, and her grand plans for reordering the world have turned into an anxiety loop of self-flagellating burnout.

I so much wanted to like this book. Argh.

I think I see what Ifueko was aiming for, and it's a worthy topic for a novel. In Raybearer, Tarisai got the sort of life that she previously could only imagine, but she's also the sort of person who shoulders massive obligations. Imposter syndrome, anxiety, overwork, and burnout are realistic risks, and are also important topics to write about. There are some nicely subtle touches buried in this story, such as the desire of her chosen family to have her present and happy without entirely understanding why she isn't, and without seeing the urgency that she sees in the world's injustice. The balancing act of being effective without overwhelming oneself is nearly impossible, and Tarisai has very little preparation or knowledgeable support.

But this story is told with the subtlety of a sledgehammer, and in a way that felt forced rather than arising naturally from the characters. If the point of emphasis had been a disagreement with her closest circle over when and how much the world should be changed, I think this would be a better book. In the places where this drives the plot, it is a better book. But Ifueko instead externalizes anxiety and depression in the form of obviously manipulative demonic undead children who (mostly) only Tarisai can see, and it's just way too much. Her reactions are manipulated and sometimes externally imposed in a way that turns what should have been a character vs. self plot into a character vs. character plot in which the protagonist is very obviously making bad decisions and the antagonist is an uninteresting cliche.

The largest problem I had with this book is that I found it thuddingly obvious, in part because the plot felt like it was on narrowly constrained rails to ensure it hit all of the required stops. When the characters didn't want the plot to go somewhere, they're sidelined, written out of the story, or otherwise forcibly overridden. Tarisai has to feel isolated, so all the people who, according to the events of the previous book and the established world-building rules, would not let her be isolated are pushed out of her life. When this breaks the rules of magic in this world, those rules are off-handedly altered. Characters that could have had their own growth arcs after Raybearer become static and less interesting, since there's no room for them in the plot. Instead, we get all new characters, which gives Redemptor a bit of a cast size problem.

Underneath this, there is an occasional flash of great writing. Ifueko chooses to introduce a dozen mostly-new characters to an already large cast and I was still able to mostly keep them straight, which shows real authorial skill. She is very good with short bursts of characterization to make new characters feel fresh and interesting. Even the most irritating of the new characters (Crocodile, whose surprise twist I thought was obvious and predictable) is an interesting archetype to explore in a book about activism and activist burnout. I can see some pieces of a better book here. But I desperately wanted something to surprise me, for Tarisai or one of the other characters to take the plot in some totally unexpected direction the way that Raybearer did. It never happened.

That leads directly to another complaint: I liked Raybearer in part because of the freshness of a different mythological system and a different storytelling tradition than what we typically get in fantasy novels. I was hoping for more of the same in Redemptor, which meant I was disappointed when I got a mix of Christianity and Greek mythology.

As advertised by Raybearer, the central mythological crisis of Redemptor concerns the Underworld. This doesn't happen until about 80% into the book (which is also a bit of a problem; the ending felt rushed given how central it was to the plot), so I can't talk about it in detail without spoiling it. But what I think I can say is that unfortunately the religious connotations of the title are not an accident. Rather than something novel that builds on the excellent idea of the emi-ehran spirit animal, there is a lot of Christ symbolism mixed with an underworld that could have come from an Orpheus retelling. There's nothing inherently wrong with this (although the Christian bits landed poorly for me), but it wasn't what I was hoping for from the mythology of this world.

I rarely talk much about the authors in fiction reviews. I prefer to let books stand on their own without trying too hard to divine the author's original intentions. But here, I think it's worth acknowledging Ifueko's afterword in which she says that writing Redemptor in the middle of a pandemic, major depression, and the George Floyd protests was the most difficult thing she'd ever done. I've seen authors write similar things in afterwords when the effect on the book was minimal or invisible, but I don't think that was the case here. Redemptor is furious, anxious, depressed, and at points despairing, and while it's okay for novels to be all of those things when it's under the author's control, here they felt like emotions that were imposed on the story from outside.

Raybearer was an adventure story about found family and ethics that happened to involve a lot of politics. Redemptor is a story about political activism and governance, but written in a universe whose bones are set up for an adventure story. The mismatch bothered me throughout; not only did these not feel like the right characters to tell this story with, but the politics were too simple, too morally clear-cut, and too amenable to easy solutions for a good political fantasy. Raybearer focused its political attention on colonialism. That's a deep enough topic by itself to support a duology (or more), but Redemptor adds in property rights, land reform, economic and social disparity, unfair magical systems, and a grab bag of other issues, and it overwhelms the plot. There isn't space and time to support solutions with sufficient complexity to satisfyingly address the problems. Ifueko falls back on benevolent dictator solutions, and I understand why, but that's not the path to a satisfying resolution in an overtly political fantasy.

This is the sort of sequel that leaves me wondering if I can recommend reading the first book and not the second, and that makes me sad. Redemptor is not without its occasional flashes of brilliance, but I did not have fun reading this book and I can't recommend the experience. That said, I think this is a book problem, not an author problem; I will happily read Ifueko's next novel, and I suspect it will be much better.

Rating: 5 out of 10

09 January, 2022 03:19AM

hackergotchi for Matthew Garrett

Matthew Garrett

Pluton is not (currently) a threat to software freedom

At CES this week, Lenovo announced that their new Z-series laptops would ship with AMD processors that incorporate Microsoft's Pluton security chip. There's a fair degree of cynicism around whether Microsoft have the interests of the industry as a whole at heart or not, so unsurprisingly people have voiced concerns about Pluton allowing for platform lock-in and future devices no longer booting non-Windows operating systems. Based on what we currently know, I think those concerns are understandable but misplaced.

But first it's helpful to know what Pluton actually is, and that's hard because Microsoft haven't actually provided much in the way of technical detail. The best I've found is a discussion of Pluton in the context of Azure Sphere, Microsoft's IoT security platform. This, in association with the block diagrams on pages 12 and 13 of this slidedeck, suggests that Pluton is a general purpose security processor in a similar vein to Google's Titan chip. It has a relatively low powered CPU core, an RNG, and various hardware cryptography engines - there's nothing terribly surprising here, and it's pretty much the same set of components that you'd find in a standard Trusted Platform Module of the sort shipped in pretty much every modern x86 PC. But unlike Titan, Pluton seems to have been designed with the explicit goal of being incorporated into other chips, rather than being a standalone component. In the Azure Sphere case, we see it directly incorporated into a Mediatek chip. In the Xbox Series devices, it's incorporated into the SoC. And now, we're seeing it arrive on general purpose AMD CPUs.

Microsoft's announcement says that Pluton can be shipped in three configurations: as the Trusted Platform Module; as a security processor used for non-TPM scenarios like platform resiliency; or OEMs can choose to ship with Pluton turned off. What we're likely to see to begin with is the former - Pluton will run firmware that exposes a Trusted Computing Group compatible TPM interface. This is almost identical to the status quo. Microsoft have required that all Windows certified hardware ship with a TPM for years now, but for cost reasons this is often not in the form of a separate hardware component. Instead, both Intel and AMD provide support for running the TPM stack on a component separate from the main execution cores on the system - for Intel, this TPM code runs on the Management Engine integrated into the chipset, and for AMD on the Platform Security Processor that's integrated into the CPU package itself.

So in this respect, Pluton changes very little; the only difference is that the TPM code is running on hardware dedicated to that purpose, rather than alongside other code. Importantly, in this mode Pluton will not do anything unless the system firmware or OS ask it to. Pluton cannot independently block the execution of any other code - it knows nothing about the code the CPU is executing unless explicitly told about it. What the OS can certainly do is ask Pluton to verify a signature before executing code, but the OS could also just verify that signature itself. Windows can already be configured to reject software that doesn't have a valid signature. If Microsoft wanted to enforce that they could just change the default today, there's no need to wait until everyone has hardware with Pluton built-in.

The two things that seem to cause people concern are remote attestation and the fact that Microsoft will be able to ship firmware updates to Pluton via Windows Update. I've written about remote attestation before, so won't go into too many details here, but the short summary is that it's a mechanism that allows your system to prove to a remote site that it booted a specific set of code. What's important to note here is that the TPM (Pluton, in the scenario we're talking about) can't do this on its own - remote attestation can only be triggered with the aid of the operating system. Microsoft's Device Health Attestation is an example of remote attestation in action, and the technology definitely allows remote sites to refuse to grant you access unless you booted a specific set of software. But there are two important things to note here: first, remote attestation cannot prevent you from booting whatever software you want, and second, as evidenced by Microsoft already having a remote attestation product, you don't need Pluton to do this! Remote attestation has been possible since TPMs started shipping over two decades ago.

The other concern is Microsoft having control over the firmware updates. The context here is that TPMs are not magically free of bugs, and sometimes these can have security consequences. One example is Infineon TPMs producing weak RSA keys, a vulnerability that could be rectified by a firmware update to the TPM. Unfortunately these updates had to be issued by the device manufacturer rather than Infineon being able to do so directly. This meant users had to wait for their vendor to get around to shipping an update, something that might not happen at all if the machine was sufficiently old. From a security perspective, being able to ship firmware updates for the TPM without them having to go through the device manufacturer is a huge win.

Microsoft's obviously in a position to ship a firmware update that modifies the TPM's behaviour - there would be no technical barrier to them shipping code that resulted in the TPM just handing out your disk encryption secret on demand. But Microsoft already control the operating system, so they already have your disk encryption secret. There's no need for them to backdoor the TPM to give them something that the TPM's happy to give them anyway. If you don't trust Microsoft then you probably shouldn't be running Windows, and if you're not running Windows Microsoft can't update the firmware on your TPM.

So, as of now, Pluton running firmware that makes it look like a TPM just isn't a terribly interesting change to where we are already. It can't block you running software (either apps or operating systems). It doesn't enable any new privacy concerns. There's no mechanism for Microsoft to forcibly push updates to it if you're not running Windows.

Could this change in future? Potentially. Microsoft mention another use-case for Pluton "as a security processor used for non-TPM scenarios like platform resiliency", but don't go into any more detail. At this point, we don't know the full set of capabilities that Pluton has. Can it DMA? Could it play a role in firmware authentication? There are scenarios where, in theory, a component such as Pluton could be used in ways that would make it more difficult to run arbitrary code. It would be reassuring to hear more about what the non-TPM scenarios are expected to look like and what capabilities Pluton actually has.

But let's not lose sight of something more fundamental here. If Microsoft wanted to block free operating systems from new hardware, they could simply mandate that vendors remove the ability to disable secure boot or modify the key databases. If Microsoft wanted to prevent users from being able to run arbitrary applications, they could just ship an update to Windows that enforced signing requirements. If they want to be hostile to free software, they don't need Pluton to do it.

(Edit: it's been pointed out that I kind of gloss over the fact that remote attestation is a potential threat to free software, as it theoretically allows sites to block access based on which OS you're running. There are various reasons I don't think this is realistic - one is that there's just way too much variability in measurements for it to be practical to write a policy that's strict enough to offer useful guarantees without also blocking a number of legitimate users, and the other is that you can just pass the request through to a machine that is running the appropriate software and have it attest for you. The fact that nobody has actually bothered to use remote attestation for this purpose even though most consumer systems already ship with TPMs suggests that people generally agree with me on that)

comment count unavailable comments

09 January, 2022 12:59AM

January 08, 2022

hackergotchi for Jonathan Dowland

Jonathan Dowland

2021 in Fiction

Cover for *This is How You Lose the Time War*
Cover for *Robot*
Cover for *The Glass Hotel*

Following on from last year's round-up of my reading, here's a look at the fiction I enjoyed in 2021.

I managed to read 42 books in 2021, up from 31 last year. That's partly to do with buying an ereader: 33% of my reading by pages (36% by books) was ebooks. I think this demonstrates that ebooks have mostly complemented paper books for me, rather than replacing them.

My book of the year (although it was published in 2019) was This is How You Lose the Time War by Amal El-Mohtar and Max Gladstone: A short epistolary love story between warring time travellers and quite unlike anything else I've read for a long time. Other notables were The Glass Hotel by Emily St John Mandel and Robot by Adam Wiśniewski-Snerg.

The biggest disappointment for me was The Ministry for the Future by Kim Stanley Robinson (KSR), which I haven't even finished. I love KSR's writing: I've written about him many times on this blog, at least in 2002, 2006 and 2009, and I think I've read every other novel he's published and most of his short stories. But this one was too much of something for me. He's described this novel as the end-point of a particular journey and approach to writing he's taken, which I felt relieved to learn; assuming he writes any more novels (and I really hope that he does), they will likely be in a different "mode".

My "new author discovery" for 2021 was Chris Beckett: I tore through Two Tribes and America City before promptly buying all his other work. He fits roughly into the same bracket as Adam Roberts and Christopher Priest, two of my other favourite authors.

5 of the books I read (12%) were from my "backlog" of already-purchased physical books. I'd like to try to reduce my backlog further, so I hope to push this figure up next year.

I made a small effort to read more diverse authors this year. 24% of the books I read (by both book count and page count) were by women. 15% by page count (19% by book count) were, loosely, by BAME authors. Again I'd like to increase these numbers modestly in 2022.

Unlike 2020, I didn't complete any short story collections in 2021! This is partly because there was only one issue of Interzone published in all of 2021, a double-issue which I haven't yet finished. This is probably a sad data point in terms of Interzone's continued existence, but it's not dead yet.

08 January, 2022 09:32PM

John Goerzen

Make the Internet Yours Again With an Instant Mesh Network

I’m going to lead with the technical punch line, and then explain it:

Yggdrasil Network is an opportunistic mesh that can be deployed privately or as part of a global-scale network. Each node gets a stable IPv6 address (or even an entire /64) that is derived from its public key and is bound to that node as long as the node wants it (of course, it can generate a new keypair anytime) and is valid wherever the node joins the mesh. All traffic is end-to-end encrypted.

Yggdrasil will automatically discover peers on a LAN via broadcast beacons, and requires zero configuration to peer in such a way. It can also run as an overlay network atop the public Internet. Public peers serve as places to join the global network, and since it’s a mesh, if one device on your LAN joins the global network, the others will automatically have visibility on it also, thanks to the mesh routing.

It neatly solves a lot of problems of portability (my ssh sessions stay live as I move networks, for instance), VPN (incoming ports aren’t required since local nodes can connect to a public peer via an outbound connection), security, and so forth.

Now on to the explanation:

The Tyranny of IP rigidity

Every device on the Internet, at one time, had its own globally-unique IP address. This number was its identifier to the world; with an IP address, you can connect to any machine anywhere. Even now, when you connect to a computer to download a webpage or send a message, under the hood, your computer is talking to the other one by IP address.

Only, now it’s hard to get one. The Internet protocol we all grew up with, version 4 (IPv4), didn’t have enough addresses for the explosive growth we’ve seen. Internet providers and IT departments had to use a trick called NAT (Network Address Translation) to give you a sort of fake IP address, so they could put hundreds or thousands of devices behind a single public one. That, plus the mobility of devices — changing IPs whenever they change locations — has meant that a fundamental rule of the old Internet is now broken:

Every participant is an equal peer. (Well, not any more.)

Nowadays, you can’t host your own website from your phone. Or share files from your house. (Without, that is, the use of some third-party service that locks you down and acts as an intermediary.)

Back in the 90s, I worked at a university, and I, like every other employee, had a PC on my desk with an unfirewalled public IP. I installed a webserver, and poof – instant website. Nowadays, running a website from home is just about impossible. You may not have a public IP, and if you do, it likely changes from time to time. And even then, your ISP probably blocks you from running servers on it.

In short, you have to buy your way into the resources to participate on the Internet.

I wrote about these problems in more detail in my article Recovering Our Lost Free Will Online.

Enter Yggdrasil

I already gave away the punch line at the top. But what does all that mean?

  • Every device that participates gets an IP address that is fully live on the Yggdrasil network.
  • You can host a website, or a mail server, or whatever you like with your Yggdrasil IP.
  • Encryption and authentication are smaller (though not nonexistent) worries thanks to the built-in end-to-end encryption.
  • You can travel the globe, and your IP will follow you: onto a plane, from continent to continent, wherever. Yggdrasil will find you.
  • I’ve set up /etc/hosts on my laptop to use the Yggdrasil IPs for other machines on my LAN. Now I can just “ssh foo” and it will work — from home, from a coffee shop, from a 4G tether, wherever. Now, other tools like tinc can do this, obviously. And I could stop there; I could have a completely closed, private Yggdrasil network.

    Or, I can join the global Yggdrasil network. Each device, in addition to accepting peers it finds on the LAN, can also be configured to establish outbound peering connections or accept inbound ones over the Internet. Put a public peer or two in your configuration and you’ve joined the global network. Most people will probably want to do that on every device (because why not?), but you could also do that from just one device on your LAN. Again, there’s no need to explicitly build routes via it; your other machines on the LAN will discover the route’s existence and use it. (A minimal peering configuration sketch follows below.)

    This is one of many projects that are working to democratize and decentralize the Internet. So far, it has been quite successful, growing to over 2000 nodes. It is the direct successor to the earlier cjdns/Hyperboria and BATMAN networks, and aims to be a proof of concept and a viable tool for global expansion.

    Finally, think about how much easier development is when you don’t have to necessarily worry about TLS complexity in every single application. When you don’t have to worry about port forwarding and firewall penetration. It’s what the Internet should be.
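
    To make the peering step concrete, here is a minimal sketch of the relevant part of /etc/yggdrasil.conf. The peer addresses below are placeholders, not real peers; the community maintains a list of public peers at https://github.com/yggdrasil-network/public-peers.

    {
      # One or two public peers are enough to join the global mesh;
      # LAN peers are still found automatically via multicast beacons.
      Peers: [
        "tls://peer1.example.org:443"
        "tcp://peer2.example.net:9001"
      ]
    }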

    08 January, 2022 03:57AM by John Goerzen

    Ayoyimika Ajibade

    Nodejs 16 and Webpack 5 transition in Debian🍥

    What is Debian 🍥 ?

    Debian, also known as Debian GNU/Linux, is a free open-source operating system (OS) currently based on either the Linux kernel or the FreeBSD kernel, developed by the community-supported Debian Project; although efforts are in place to provide Debian for other kernels, primarily the Hurd.

    Fun fact about Debian 😁💃💃

    • Debian was the first Linux distribution to include a package management system for easy installation and removal of software. It was also the first Linux distribution that could be upgraded without requiring reinstallation.

    • To protect your system against “Trojan horses” and other malevolent software, Debian's servers verify that uploaded packages come from their registered Debian maintainers.

    • Debian comes with over 59000 packages as of this writing (precompiled software that is bundled up in a nice format for easy installation on your machine), a package manager (APT), and other utilities that make it possible to manage thousands of packages on thousands of computers as easily as installing a single application. All of it is FREE!

    • Debian is also the basis for many other distributions, most notably Ubuntu

    What is Webpack ?

    Webpack is a static module bundler for modern JavaScript applications. When webpack processes your application, it internally builds a dependency graph from one or more entry points and then combines every module your project needs into one or more bundles, which are static assets to serve your content from

    What is nodejs ?

    Node.js is an open-source, cross-platform runtime environment built on Chrome's V8 JavaScript engine for easily building fast and scalable network applications, as well as for developing server-side applications. Here JavaScript code is no longer limited to the traditional method of running in the web browser.

    What does Transitioning mean in Debian?

    Transitioning is a concept in Debian about maintaining only one version of a library like webpack or nodejs. There is a bottleneck, as other libraries and applications may not support the version we have in Debian, so we have to port that software to the new version. For example, node-mini-css-extract-plugin, node-mermaid and many other packages use webpack. In buster we had webpack4 and in bullseye we want to update it to webpack5. node-mini-css-extract-plugin already supports webpack5, but others like node-mermaid don't support it yet. So either we wait or we help those projects to update their webpack version. Check out this chat between my mentor and a community member on the transitioning of rails6

    Getting Started with Creating or Updating packages in Debian

    To be able to create or maintain packages suitable for uploading to Debian you must be in a sid/unstable environment or distribution. See recommended instructions on how to set up Debian Sid via this link

    See link on how to debianize a new package

    See link for brief steps on how to update a package to its new upstream version. For more detailed content on the whys and hows of updating a package to its new upstream version visit here
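
    As a hedged sketch of that update flow (assuming a git-maintained package; gbp comes from the git-buildpackage package and the version number here is hypothetical), it typically looks something like:

    $ uscan --verbose                         # check the watch file for a new upstream release
    $ gbp import-orig --uscan                 # download and import the new upstream tarball
    $ dch -v 2.0.0-1 "New upstream release."  # start a new changelog entry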

    Note💡 In updating to the new upstream version we have to watch out for breaking changes caused by both minor and major updates. As per https://semver.org, major updates (e.g. if the current version is 2.3.4, then 3.0.0 is a major update) of libraries with versions greater than 1.0.0, and minor updates (e.g. if the current version is 0.10 then 0.11 is a minor update) of libraries with versions less than 1.0.0, can have breaking changes

    The overall flow of webpack5 and nodejs16 transitioning in Debian

    After grasping the fundamental process and flow of how to update a package, you are well on your way to transitioning🚀🚀. Transitioning webpack or nodejs involves building and testing the packages that depend on them (their reverse dependencies) against the new version, which is usually uploaded to the experimental distribution. If the reverse dependencies build and test successfully, both the reverse dependencies and the dependency itself (in this case nodejs or webpack) are uploaded to the unstable/sid distribution for further processing

    The major guidelines to follow while transitioning are

    • Find a list of reverse dependencies to fix (see the sketch after this list)

    • See if new upstream versions of the reverse dependencies are available that support the transitioning version

    • Test whether those new upstream versions of the reverse dependencies actually work with the transitioning version

    • Report bugs found while rebuilding and testing reverse dependencies in Debian

    • Forward bugs found while rebuilding and testing reverse dependencies upstream

    • Fix or update packages and forward patches upstream
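
    As an illustration of the first step above, here is roughly what finding and rebuilding a reverse dependency looks like (a sketch; build-rdeps comes from the devscripts package, and the package name is one mentioned earlier in this post):

    $ build-rdeps webpack                      # list packages that build-depend on webpack
    $ apt-get source node-mini-css-extract-plugin
    $ cd node-mini-css-extract-plugin-*
    $ dpkg-buildpackage -us -uc                # rebuild it against the new webpack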

    After a successful transitioning phase, users of the Debian OS can install either the latest or older versions of these packages via apt, based on their preferences. That means the benefit of more features, bug fixes, and updated security patches from those packages, all made possible by a community of amazing people💗🔥

    08 January, 2022 03:30AM by Ayoyimika Ajibade

    January 07, 2022

    Ingo Juergensmann

    Moving my repositories from Github to Codeberg.org

    Some weeks ago I moved my repositories from Github (evil, Microsoft, blabla) to Codeberg. Codeberg is a non-profit organisation located in Germany. When you really dislike Microsoft products it is somewhat a natural reaction (at least for me) to move away from Github, which was bought by Microsoft, to a more independent service provider for hosting source code. A nice thing about Codeberg is that it also offers a migration tool from Github to Codeberg. Additionally, Codeberg is also on Mastodon. If you are looking for a good service for hosting your git repositories and want to move away from Github as well, please give Codeberg a try.

    So, please update your git settings from https://github.com/ingoj to https://codeberg.org/Windfluechter (or the specific repo).
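
    For a repository you have already cloned, that is just a matter of re-pointing the remote (a sketch; the repository name here is hypothetical):

    $ git remote set-url origin https://codeberg.org/Windfluechter/example-repo.git
    $ git remote -v    # verify the new URL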

    07 January, 2022 10:50PM by ij

    Ayoyimika Ajibade

    Everyone Struggles

    Starting anything new always carries an element of uncertainty, doubt, fear, and a struggle to forge ahead. This has been my situation as an Outreachy intern working on the transition to nodejs16 and webpack5, which is about updating all packages that depend on nodejs14 and webpack4 to work well with the updated versions in the Debian operating system. Juicy right!😋

    As a software developer, struggling to grasp both basic and advanced knowledge of a concept can seem daunting. Much like learning anything new, you can be overwhelmed knowing there is a whole lot of other new concepts, tools, processes and languages linked to what you are currently learning, while you are still struggling to grasp the fundamental idea of the thing itself. But imbued in any struggle to find a solution is where innovation and invention lie, and our learning improves as we dive into fact-finding, form a hypothesis after a series of tests, and ultimately proffer a solution.

    Some of my struggles as an intern with Debian have been a lack of skill in shell scripting, as that is one of the core languages for navigating your way around maintaining packages for Debian. Funnily enough, I also struggle with having just an intermediate knowledge of the JavaScript programming language, since a basic knowledge of JavaScript is arguably necessary for building and testing JavaScript packages in Debian and my core language is Python. The good thing is that the more I keep at it, the faster the struggles reduce.

    Now to the fun part! Having a community of developers who have been through the struggling phase is divine, as they make your learning experience much easier. My mentors and other community members have made learning to package modules for Debian much easier, as all hands are on deck to help out with our challenges. I remember how wonderful it felt when my first contribution got merged; I became more encouraged to update more packages. This helped me a lot in the contribution stage for Debian as I became more familiar with how the system worked. I’m super grateful to my mentors and co-intern as they are always there to assist me.

    How I Navigate my way through my struggles

    I guess the first thing about any challenge is to be aware of it and admit your limitations in a particular area of knowledge, then to move on to creatively seeking solutions by asking for help from those who know the way. Voila! Then comes the part where you take up their solutions, ideas and opinions and make them work for your particular scenario; that is a skill set that all software developers must have.

    Going through documentation has immensely helped me solve my problems much faster and build new knowledge, as I get the fundamental idea of why and how things work. I also try to break each concept down into steps, achieve my goals for each step, then bring the solutions from all the steps together; surfing the internet for solutions also helps hugely.

    Vocabulary terms Used in Debian 🍥

    1. uscan => a tool to identify and download upstream source code from the repository, also compressing it into the required format.

    2. apt => a package manager to manage packages in Debian, similar to pip in python, npm in javascript.

    3. sid/bullseye/buster/stretch => sid is the permanent code name for the unstable distribution; bullseye is the code name for the testing distribution, as it will become the next stable; buster is the code name for the stable distribution, while stretch is the code name for the old stable distribution.

    4. reverse-rebuild => is building all modules that depend on a package in Debian while building the main package.

    5. lintian => A helper tool used to check for inconsistencies and errors in a Debian Package based on Debian standards.

    6. pkg-js-tools => A collection of tools to aid packaging Node modules in Debian.

    7. dpkg-buildpackage => A command to build upstream code in an unclean chroot or environment.

    8. quilt => A patch creation and management automation script. quilt helps manage a series of patches that a Debian package maintainer needs to be applied to upstream source when building the package.

    9. autopkgtest => a script used to test an installed binary package using the source package's tests

    10. RFS => (Request For Sponsorship) Working in the Debian ecosystem includes two roles: either as a Debian Maintainer with restricted rights and privileges, or as a Debian Developer with all rights and privileges, such as uploading to the Debian archive. As a new contributor or a Debian Maintainer (with few rights and privileges) you can file an RFS so that your pull request (PR) can be merged to the Debian archive by a Debian Developer, much like having your contribution accepted 😸

    There are so many terms and tools you have to get accustomed to, but they are easy to understand and use, as plenty of frequently updated wiki documentation is available to guide you through, plus a whole lot of community members you can ask questions of.
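
    A quick sketch of a few of these tools in action (the package name and version are hypothetical):

    $ uscan --verbose                                        # look for a new upstream release
    $ dpkg-buildpackage -us -uc                              # build the package
    $ lintian ../node-foo_1.2.3-1_amd64.changes              # check against Debian standards
    $ autopkgtest ../node-foo_1.2.3-1_amd64.changes -- null  # run the package's tests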

    “strength and growth come only through continuous effort and struggle.” — Napoleon Hill

    07 January, 2022 08:02AM by Ayoyimika Ajibade

    Reproducible Builds (diffoscope)

    diffoscope 199 released

    The diffoscope maintainers are pleased to announce the release of diffoscope version 199. This version includes the following changes:

    [ Chris Lamb ]
    * Support both variants of "odt2txt", including the one provided by unoconv.
      (Closes: reproducible-builds/diffoscope#298)
    
    [ Jelle van der Waa ]
    * Add external tool reference on Arch Linux for xb-tool.
    

    You can find out more by visiting the project homepage.
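
    As a quick reminder of how diffoscope is used day-to-day, a minimal invocation looks something like this (a sketch; the file names are hypothetical):

    $ diffoscope build1.changes build2.changes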

    07 January, 2022 12:00AM

    January 06, 2022

    Jacob Adams

    Linux Hibernation Documentation

    Recently I’ve been curious about how hibernation works on Linux, as it’s an interesting interaction between hardware and software. There are some notes in the Arch wiki and the kernel documentation (as well as some kernel documentation on debugging hibernation and on sleep states more generally), and of course the ACPI Specification.

    The Formal Definition

    ACPI (Advanced Configuration and Power Interface) is, according to the spec, “an architecture-independent power management and configuration framework that forms a subsystem within the host OS” which defines “a hardware register set to define power states.”

    ACPI defines four global system states G0, working/on, G1, sleeping, G2, soft off, and G3, mechanical off1. Within G1 there are 4 sleep states, numbered S1 through S4. There are also S0 and S5, which are equivalent to G0 and G2 respectively2.

    Sleep

    According to the spec, the ACPI S1-S4 states all do the same thing from the operating system’s perspective, but each saves progressively more power, so the operating system is expected to pick the deepest of these states when entering sleep. However, most operating systems3 distinguish between S1-S3, which are typically referred to as sleep or suspend, and S4, which is typically referred to as hibernation.

    S1: CPU Stop and Cache Wipe

    The CPU caches are wiped and then the CPU is stopped, which the spec notes is equivalent to the WBINVD instruction followed by the STPCLK signal on x86. However, nothing is powered off.

    S2: Processor Power off

    The system stops the processor and most system clocks (except the real time clock), then powers off the processor. Upon waking, the processor will not continue what it was doing before, but instead use its reset vector4.

    S3: Suspend/Sleep (Suspend-to-RAM)

    Mostly equivalent to S2, but hardware ensures that only memory and whatever other hardware memory requires are powered.

    S4: Hibernate (Suspend-to-Disk)

    In this state, all hardware is completely powered off and an image of the system is written to disk, to be restored from upon reapplying power. Writing the system image to disk can be handled by the operating system if supported, or by the firmware.

    Linux Sleep States

    Linux has its own set of sleep states which mostly correspond with ACPI states.

    Suspend-to-Idle

    This is a software only sleep that puts all hardware into the lowest power state it can, suspends timekeeping, and freezes userspace processes.

    All userspace and some kernel threads5, except those tagged with PF_NOFREEZE, are frozen before the system enters a sleep state. Frozen tasks are sent to the __refrigerator(), where they set TASK_UNINTERRUPTIBLE and PF_FROZEN and infinitely loop until PF_FROZEN is unset6.

    This prevents these tasks from doing anything during the imaging process. Any userspace process running on a different CPU while the kernel is trying to create a memory image would cause havoc. This is also done because any filesystem changes made during this would be lost and could cause the filesystem and its related in-memory structures to become inconsistent. Also, creating a hibernation image requires about 50% of memory free, so no tasks should be allocating memory, which freezing also prevents.

    Standby

    This is equivalent to ACPI S1.

    Suspend-to-RAM

    This is equivalent to ACPI S3.

    Hibernation

    Hibernation is mostly equivalent to ACPI S4 but does not require S4, only requiring “low-level code for resuming the system to be present for the underlying CPU architecture” according to the Linux sleep state docs.

    To hibernate, everything is stopped and the kernel takes a snapshot of memory. Then, the system writes out the memory image to disk. Finally, the system either enters S4 or turns off completely.

    When the system restores power it boots a new kernel, which looks for a hibernation image and loads it into memory. It then overwrites itself with the hibernation image and jumps to a resume area of the original kernel7. The resumed kernel restores the system to its previous state and resumes all processes.

    Hybrid Suspend

    Hybrid suspend does not correspond to an official ACPI state, but instead is effectively a combination of S3 and S4. The system writes out a hibernation image, but then enters suspend-to-RAM. If the system wakes up from suspend it will discard the hibernation image, but if the system loses power it can safely restore from the hibernation image.
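
    In practice, all of these Linux states are driven through sysfs. Here is a hedged sketch of the interface; exactly which strings appear depends on kernel and hardware support:

    $ cat /sys/power/state
    freeze standby mem disk
    $ echo freeze > /sys/power/state   # suspend-to-idle
    $ echo mem > /sys/power/state      # suspend-to-RAM
    $ cat /sys/power/disk
    [platform] shutdown reboot suspend test_resume
    $ echo suspend > /sys/power/disk   # select hybrid suspend ("suspend to both")
    $ echo disk > /sys/power/state     # hibernate; at boot, the resume= kernel
                                       # parameter tells the new kernel where to
                                       # find the image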

    1. The difference between soft and mechanical off is that mechanical off is “entered and left by a mechanical means (for example, turning off the system’s power through the movement of a large red switch)” 

    2. It’s unclear to me why G and S states overlap like this. I assume this is a relic of an older spec that only had S states, but I have not as yet found any evidence of this. If someone has any information on this, please let me know and I’ll update this footnote. 

    3. Of the operating systems I know of that support ACPI sleep states (I checked Windows, Mac, Linux, and the three BSDs8), only MacOS does not allow the user to deliberately enable hibernation, instead supporting a hybrid suspend it calls safe sleep 

    4. “The reset vector of a processor is the default location where, upon a reset, the processor will go to find the first instruction to execute. In other words, the reset vector is a pointer or address where the processor should always begin its execution. This first instruction typically branches to the system initialization code.” Xiaocong Fan, Real-Time Embedded Systems, 2015 

    5. All kernel threads are tagged with PF_NOFREEZE by default, so they must specifically opt-in to task freezing. 

    6. This is not from the docs, but from kernel/freezer.c which also notes “Refrigerator is place where frozen processes are stored :-).” 

    7. This is the operation that requires “special architecture-specific low-level code”. 

    8. Interestingly NetBSD has a setting to enable hibernation, but does not actually support hibernation 

    06 January, 2022 12:00AM

    January 05, 2022

    Reproducible Builds

    Reproducible Builds in December 2021

    Welcome to the December 2021 report from the Reproducible Builds project! In these reports, we try and summarise what we have been up to over the past month, as well as what else has been occurring in the world of software supply-chain security.

    As a quick recap of what reproducible builds is trying to address, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. As always, if you would like to contribute to the project, please get in touch with us directly or visit the Contribute page on our website.


    Early in December, Julien Voisin blogged about setting up a rebuilderd instance in order to reproduce Tails images. Working on previous work from 2018, Julien has now set up a public-facing instance which is providing build attestations.

    As Julien dryly notes in his post, “Currently, this isn’t really super-useful to anyone, except maybe some Tails developers who want to check that the release manager didn’t backdoor the released image.” Naturally, we would contend — sincerely — that this is indeed useful.


    The secure/anonymous Tor browser now supports reproducible source releases. According to the project’s changelog, version 0.4.7.3-alpha of Tor can now build reproducible tarballs via the make dist-reprod command. This issue was tracked via Tor issue #26299.


    Fabian Keil posted a question to our mailing list this month asking how they might analyse differences in images produced with the FreeBSD and ElectroBSD’s mkimg and makefs commands:

    After rebasing ElectroBSD from FreeBSD stable/11 to stable/12
    I recently noticed that the "memstick" images are unfortunately
    still not 100% reproducible.

    Fabian’s original post generated a short back-and-forth with Chris Lamb regarding how diffoscope might be able to support the particular format of images generated by this command set.


    diffoscope

    diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 195, 196, 197 and 198 to Debian, as well as made the following changes:

    • Support showing Ordering differences only within .dsc field values. []
    • Add support for ‘XMLb’ files. []
    • Also add, for example, /usr/lib/x86_64-linux-gnu to our local binary search path. []
    • Support OCaml versions 4.11, 4.12 and 4.13. []
    • Drop some unnecessary has_same_content_as logging calls. []
    • Replace token variable with an anonymously-named variable instead to remove extra lines. []
    • Don’t use the runtime platform’s native endianness when unpacking .pyc files. This fixes test failures on big-endian machines. []

    Mattia Rizzolo also made a number of changes to diffoscope this month as well, such as:

    • Also recognize GnuCash files as XML. []
    • Support the pgpdump PGP packet visualiser version 0.34. []
    • Ignore the new Lintian tag binary-with-bad-dynamic-table. []
    • Fix the Enhances field in debian/control. []

    Finally, Brent Spillner fixed the version detection for Black ‘uncompromising code formatter’ [], Jelle van der Waa added an external tool reference for Arch Linux [] and Roland Clobus added support for reporting when the GNU_BUILD_ID field has been modified []. Thank you for your contributions!


    Distribution work

    In Debian this month, 70 reviews of packages were added, 27 were updated and 41 were removed, adding to our database of knowledge about specific issues. A number of new issue types were created as well.

    strip-nondeterminism version 1.13.0-1 was uploaded to Debian unstable by Holger Levsen. It included contributions already covered in previous months as well as new ones from Mattia Rizzolo, particularly that the dh_strip_nondeterminism Debian integration interface uses the new get_non_binnmu_date_epoch() utility when available: this is important to ensure that strip-nondeterminism does not break some kinds of binNMUs.


    In the world of openSUSE, however, Bernhard M. Wiedemann posted his monthly reproducible builds status report.


    In NixOS, work towards the longer-term goal of making the graphical installation image reproducible is ongoing. For example, Artturin made the gnome-desktop package reproducible.


    Upstream patches

    The Reproducible Builds project attempts to fix as many currently-unreproducible packages as possible. In December, we wrote a large number of such patches, including:


    Testing framework

    The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:

    • Holger Levsen:

      • Run the Debian scheduler less often. []
      • Fix the name of the Debian ‘testing’ suite name. []
      • Detect builds that are rescheduling due to problems with the diffoscope container. []
      • No longer special-case particular machines having a different /boot partition size. []
      • Automatically fix failed apt-daily and apt-daily-upgrade services [], failed e2scrub_all.service & user@ systemd units [][] as well as ‘generic’ build failures [].
      • Simplify a script to powercycle arm64 architecture nodes hosted at/by codethink.co.uk. []
      • Detect if the udd-mirror.debian.net service is down. []
      • Various miscellaneous node maintenance. [][]
    • Roland Clobus (Debian ‘live’ image generation):

      • If the latest snapshot is not complete yet, try to use the previous snapshot instead. []
      • Minor: whitespace correction + comment correction. []
      • Use unique folders and reports for each Debian version. []
      • Turn off debugging. []
      • Add a better error description for incorrect/missing arguments. []
      • Report non-reproducible issues in Debian sid images. []

    Lastly, Mattia Rizzolo updated the automatic logfile parsing rules in a number of ways (eg. to ignore a warning about the Python setuptools deprecation) [][] and Vagrant Cascadian adjusted the config for the Squid caching proxy on a node. []



    If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

    05 January, 2022 02:44PM

    Jonathan Wiltshire

    Continuing adventures of the mystery cable

    My 4×2 has been in action again, trying to find the remainder of the mystery sometimes-4mm/sometimes-2.5mm/sometimes-1.5mm cable. It finally appeared in the tiniest gap possible between back wall and joist.

    As we had suspected by tracing everything else, the junction is an unfused union of all three cable types with a 230V 32A circuit breaker on one end and a light switch on the other. So in the event of fault current at the kitchen lights, the 1.5mm cable is definitely going to burn out and almost certainly set fire to the flat roof. Delightful.

    It is no longer like this.

    More hunting for the mystery cable (bonus glimpse of one of the new light fittings)
    The 4×2 ceiling removal device in action
    At last, quarry in sight!
    Mystery Cable Junction Box of Fire and Doom ™?
    All the terrifying branches removed and ready to go back into the ceiling for now, until the next room is refurbished

    05 January, 2022 01:39PM by Jonathan

    hackergotchi for Thomas Lange

    Thomas Lange

    FAI.me service now support backports for Debian 11 (bullseye)

    The FAI.me service for creating customized installation and cloud images now supports a backports kernel for the stable release Debian 11 (aka bullseye). If you enable the backports option, you will currently get kernel 5.14. This will help you if you have newer hardware that is not supported by the default kernel 5.10. The backports option is also still available for images using the old Debian 10 (buster) release.

    The URL of the FAI.me service is

    https://fai-project.org/FAIme/

    FAI.me

    05 January, 2022 11:46AM

    January 04, 2022

    Russell Coker

    Terrorists Inspired by Fiction

    The Tom Clancy book Debt of Honor published in August 1994 first introduced the concept of a heavy passenger aircraft being used as a weapon by terrorists against a well defended building. In April 1994 there was an attempt to hijack and deliberately crash FedEx flight 705. It’s possible for a book to be changed 4 months before publication, but it seems unlikely that a significant plot point in a series of books was changed in such a small amount of time, so it’s likely that Tom Clancy got the idea first. There have been other variations on that theme, such as the Yokosuka MXY-7 kamikaze flying bomb (known to the Allies as “Baka”, which is Japanese for idiot). But Tom Clancy seemed to pioneer the idea of a commercial passenger jet being subverted for the purpose of ground attack.

    7 years after Tom Clancy’s book was published, the 9/11 hijackings happened.

    The TV series Black Mirror first aired in 2011, and the first episode was about terrorists kidnapping a princess and demanding that the UK PM perform an indecent act with a pig for her release. While the plot was a little extreme (the entire series is extreme) the basic concept of sexual extortion based on terrorist acts is something that could be done in real life, and if terrorists were inspired by this they are taking longer than expected to do it.

    Most democracies seem to end up with two major parties that are closely matched. Even if a government was strict about not negotiating with terrorists, it seems likely that terrorists demanding that a politician perform an unusual sex act on TV would change things; supporters would be divided into groups that support and oppose negotiating. Discussions wouldn’t be as civil as when the negotiation involves money or freeing prisoners. If an election result was perceived to have been influenced by such terrorism then supporters of the side that lost would claim it to be unfair and reject the result. If the goal of terrorists was to cause chaos then that would be one way of achieving it, and they have had over 10 years to consider this possibility.

    Are we overdue for a terror attack inspired by Black Mirror?

    04 January, 2022 11:00PM by etbe

    Jelmer Vernooij

    Personal Streaming Audio Server

    For a while now, I’ve been looking for a good way to stream music from my home music collection on my phone.

    There are quite a few options for music servers that support streaming. However, Android apps that can stream music from one of those servers tend to be unmaintained, clunky or slow (or more than one of those).

    It is possible to use something that runs in a web browser, but that means no offline caching - and offline caching can be quite convenient in spots without connectivity, such as the Underground or other random bits of London with poor cell coverage.

    Server

    Most music servers today support some form of the subsonic API.

    I’ve tried a couple, with mixed results:

    • supysonic; Python. Slow. Ran into some issues with subsonic clients. No real web UI.
    • gonic; Go. Works well & fast enough. Minimal web UI, i.e. no ability to play music from a browser.
    • airsonic; Java. Last in a chain of (abandoned) forks. More effort to get to work, and resource intensive.

    Eventually, I’ve settled on Navidrome. It’s got a couple of things going for it:

    • Good subsonic implementation that worked with all the Android apps I used it with.
    • Great Web UI for use in a browser

    I run Navidrome in Kubernetes. It’s surprisingly easy to get going. Here’s the deployment I’m using:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
     name: navidrome
    spec:
     replicas: 1
     selector:
       matchLabels:
         app: navidrome
     template:
       metadata:
         labels:
           app: navidrome
       spec:
         containers:
           - name: navidrome
             image: deluan/navidrome:latest
             imagePullPolicy: Always
             resources:
               limits:
                 cpu: ".5"
                 memory: "2Gi"
               requests:
                 cpu: "0.1"
                 memory: "10M"
             ports:
               - containerPort: 4533
             volumeMounts:
               - name: navidrome-data-volume
                 mountPath: /data
               - name: navidrome-music-volume
                 mountPath: /music
             env:
               - name: ND_SCANSCHEDULE
                 value: 1h
               - name: ND_LOGLEVEL
                 value: info
               - name: ND_SESSIONTIMEOUT
                 value: 24h
               - name: ND_BASEURL
                 value: /navidrome
             livenessProbe:
                httpGet:
                  path: /navidrome/app
                  port: 4533
                initialDelaySeconds: 30
                periodSeconds: 3
                timeoutSeconds: 90
         volumes:
            - name: navidrome-data-volume
              hostPath:
               path: /srv/navidrome
               type: Directory
            - name: navidrome-music-volume
              hostPath:
                path: /srv/media/music
                type: Directory
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: navidrome
    spec:
      ports:
        - port: 4533
          name: web
      selector:
        app: navidrome
      type: ClusterIP
    

    At the moment, this deployment is still tied to the machine with my music on it since it relies on hostPath volumes, but I’m planning to move that to ceph in the future.

    I then expose this service on /navidrome on my private domain (here replaced with example.com) using an Ingress:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: navidrome
    spec:
      ingressClassName: nginx
      rules:
      - host: example.com
        http:
          paths:
          - backend:
              service:
                name: navidrome
                port:
                  name: web
            path: /navidrome(/|$)(.*)
            pathType: Prefix
    

    Client

    On the desktop, I usually just use navidrome’s web interface. Clementine’s support for subsonic is also okay. sublime-music is meant to be a music player specifically for Subsonic, but I’ve not really found it stable enough for day-to-day usage.

    There are various Android clients for Subsonic, but I’ve only really considered the Open Source ones that are hosted on F-Droid. Most of those are abandoned, but D-Sub works pretty well - as does my preferred option, Subtracks.

    04 January, 2022 06:00PM by Jelmer Vernooij

    hackergotchi for Jonathan McDowell

    Jonathan McDowell

    Upgrading from a CC2531 to a CC2538 Zigbee coordinator

    Previously I set up a CC2531 as a Zigbee coordinator for my home automation. This has turned out to be a good move, with the 4 gang wireless switch being particularly useful. However the range of the CC2531 is fairly poor; it has a simple PCB antenna. It’s also a very basic device. I set about trying to improve the range and scalability and settled upon a CC2538 + CC2592 device, which features an MMCX antenna connector. This device also has the advantage that it’s ARM based, which I’m hopeful means I might be able to build some firmware myself using a standard GCC toolchain.

    For now I fetched the JetHome firmware from https://github.com/jethome-ru/zigbee-firmware/tree/master/ti/coordinator/cc2538_cc2592 (JH_2538_2592_ZNP_UART_20211222.hex) - while it’s possible to do USB directly with the CC2538 my board doesn’t have those bits so going the external USB UART route is easier.

    The device had some existing firmware on it, so I needed to erase this to force a drop into the boot loader. That means soldering up the JTAG pins and hooking it up to my Bus Pirate for OpenOCD goodness.

    OpenOCD config
    source [find interface/buspirate.cfg]
    
    buspirate_port /dev/ttyUSB1
    buspirate_mode normal
    buspirate_vreg 1
    buspirate_pullup 0
    
    transport select jtag
    
    source [find target/cc2538.cfg]
    
    Steps to erase
    $ telnet localhost 4444
    Trying ::1...
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.
    Open On-Chip Debugger
    > mww 0x400D300C 0x7F800
    > mww 0x400D3008 0x0205
    > shutdown
    shutdown command invoked
    Connection closed by foreign host.
    

    At that point I can switch to the UART connection (on PA0 + PA1) and flash using cc2538-bsl:

    $ git clone https://github.com/JelmerT/cc2538-bsl.git
    $ cc2538-bsl/cc2538-bsl.py -p /dev/ttyUSB1 -e -w -v ~/JH_2538_2592_ZNP_UART_20211222.hex
    Opening port /dev/ttyUSB1, baud 500000
    Reading data from /home/noodles/JH_2538_2592_ZNP_UART_20211222.hex
    Firmware file: Intel Hex
    Connecting to target...
    CC2538 PG2.0: 512KB Flash, 32KB SRAM, CCFG at 0x0027FFD4
    Primary IEEE Address: 00:12:4B:00:22:22:22:22
        Performing mass erase
    Erasing 524288 bytes starting at address 0x00200000
        Erase done
    Writing 524256 bytes starting at address 0x00200000
    Write 232 bytes at 0x0027FEF88
        Write done
    Verifying by comparing CRC32 calculations.
        Verified (match: 0x74f2b0a1)
    

    I then wanted to migrate from the old device to the new without having to repair everything. So I shut down Home Assistant and backed up the CC2531 network information using zigpy-znp (which is already installed for Home Assistant):

    python3 -m zigpy_znp.tools.network_backup /dev/zigbee > cc2531-network.json
    

    I copied the backup to cc2538-network.json and modified the coordinator_ieee to be the new device’s MAC address (rather than end up with 2 devices claiming the same MAC if/when I reuse the CC2531) and did:

    python3 -m zigpy_znp.tools.network_restore --input cc2538-network.json /dev/ttyUSB1
    

    The old CC2531 needed unplugged first, otherwise I got a RuntimeError: Network formation refused, RF environment is likely too noisy. Temporarily unscrew the antenna or shield the coordinator with metal until a network is formed. error.

    After that I updated my udev rules to map the CC2538 to /dev/zigbee and restarted Home Assistant. To my surprise it came up and detected the existing devices without any extra effort on my part. However that resulted in 2 coordinators being shown in the visualisation, with the old one turning up as unk_manufacturer. Fixing that involved editing /etc/homeassistant/.storage/core.device_registry and removing the entry which had the old MAC address, removing the device entry in /etc/homeassistant/.storage/zha.storage for the old MAC and then finally firing up sqlite to modify the Zigbee database:

    $ sqlite3 /etc/homeassistant/zigbee.db
    SQLite version 3.34.1 2021-01-20 14:10:07
    Enter ".help" for usage hints.
    sqlite> DELETE FROM devices_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM endpoints_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM in_clusters_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM neighbors_v6 WHERE ieee = '00:12:4b:00:11:11:11:11' OR device_ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM node_descriptors_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM out_clusters_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> .quit
    

    So far it all seems a bit happier than with the CC2531; I’ve been able to pair a light bulb that was previously detected but would not integrate, which suggests the range is improved.
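
    For reference, the udev mapping mentioned above can be done with a rule along these lines (a sketch only; the vendor/product IDs here are hypothetical and should be taken from udevadm info for the actual USB UART in use):

    # /etc/udev/rules.d/99-zigbee.rules
    SUBSYSTEM=="tty", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", SYMLINK+="zigbee"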

    (This post is another in the set of “things I should write down so I can just grep my own website when I forget what I did to do foo”.)

    04 January, 2022 03:50PM

    Russell Coker

    Big Smart TVs

    Recently a relative who owned a 50″ Plasma TV asked me for advice on getting a new TV. Looking at the options, all the TVs seem to be smart TVs (running Android with built in support for YouTube and Netflix) and most of them seem to be 4K resolution. 4K doesn’t provide much benefit now as most people don’t have Blu-ray players and discs, there aren’t a lot of 4K YouTube videos, and most streaming services don’t offer 4K resolution. But as 4K doesn’t cost much more it doesn’t make sense not to get it.

    I gave my relative a list of good options from Kogan (the Australian company that has the cheapest consumer electronics) and they chose a 65″ 4K Smart TV from Kogan. That only cost $709 plus delivery which is reasonably affordable for something that will presumably last for a long time and be used by many people.

    Netflix in a web browser won’t do more than FullHD resolution unless you use Edge on Windows 10. But Netflix on the smart TV has a row advertising 4K shows, which indicates that 4K is supported. There are some 4K videos on YouTube but not a lot at this time.

    Size

    It turns out that 65″ is very big. It didn’t fit on the table that had been used for the 50″ Plasma TV.

    Rtings.com has a good article about TV size vs distance [1]. According to their calculations if you want to sit 2 meters away from a TV and have a 30 degree field of view (recommended for “mixed” use) then a 45″ TV is ideal.

    According to their calculations on pixel sizes, if you have a FullHD display (or, the common modern case, a FullHD signal displayed on a 4K monitor) that is between 1.8 and 2.5 meters away from you, then a 45″ TV is the largest that will be useful. To take proper advantage of a monitor larger than 45″ at a distance of 2 meters you need a 4K signal. If you have a 4K signal then you can get the best results by having a 45″ monitor less than 1.8 meters away from you. As most TV watching involves fewer than 3 people, it shouldn’t be inconvenient to be less than 1.8 meters away from the TV.
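
    The geometry behind those numbers is easy to sanity-check. Here is a quick back-of-the-envelope calculation (my own sketch, not Rtings’ exact method) for a 16:9 screen:

    import math

    def ideal_diagonal_inches(distance_m, fov_degrees, aspect=(16, 9)):
        """Screen diagonal that fills the given horizontal field of view."""
        width = 2 * distance_m * math.tan(math.radians(fov_degrees) / 2)
        w, h = aspect
        diagonal = width * math.hypot(w, h) / w
        return diagonal / 0.0254  # metres to inches

    print(round(ideal_diagonal_inches(2.0, 30)))  # ~48, in the same ballpark as Rtings' 45"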

    The 65″ TV weighs 21kg according to the specs. That isn’t a huge amount for something small, but for something as large and inconvenient as a 65″ TV it means it’s impossible for one person to move safely. Kogan sells 43″ TVs that weigh 6kg, which is something most adults could move with one hand. I think that a medium size TV that can be easily moved to a convenient location would probably give an equivalent viewing result to an extremely large TV that can’t be moved at all. I currently have a 40″ LCD TV; the only reason I have that is because a friend didn’t need it, and the previous 32″ TV that I used was adequate for my needs. Most of my TV viewing is on a 28″ monitor, which I find adequate for 2 or 3 people. So I generally wouldn’t recommend a 65″ TV for anyone.

    Android for TVs

    Android wasn’t designed for TVs and doesn’t work that well on them. Having buttons on the remote for Netflix and YouTube is handy, but it would be nice if there were programmable buttons for other commonly used apps or a way to switch between the last few apps (like ALT-TAB on a PC).

    One good feature of Android for TV is that it can display a set of rows of shows (similar to the Netflix method of displaying) where each row is from a different app. The apps I’ve installed on that TV which support the row view are Netflix, YouTube, YouTube Music, ABC iView (that’s the Australian ABC), 7plus, 9now, and SBS on Demand. That’s nice; now we just need channel 10’s app to support that to have coverage of all Australian free TV stations in the Android TV interface.

    Conclusion

    It’s a nice TV and it generally works well. Android is OK for TV use but far from great. It is running Android version 9; maybe a newer version of Android works better on TVs.

    It’s too large for reasonable people to use in a home. I’ve seen smaller TVs used for video conferences with 20 people in an office. It’s cheap enough that most people can afford it, but it’s easier and more convenient to have something smaller and lighter.

    04 January, 2022 11:37AM by etbe

    Curiosity Stream

    I have recently signed up for the Curiosity Stream [1] documentary site, which is designed to be like Netflix but for non-fiction content only. The service costs $US15 per annum, or $US52 per annum for 4K (I think the 4K service was about $US120 per annum when I signed up). The extra price for 4K seems excessive; while it is in line with the bandwidth requirements, a large portion of the costs of the service would be user support and running the service reliably, for which 4K makes little difference.

    My aim in subscribing was to get a service like Netflix with new documentary content, as I have watched every documentary I want to watch on Netflix (I think I’ve watched over 1000 hours of Netflix documentaries). So naturally I compare the service to Netflix, and I found that it doesn’t compare well. Curiosity Stream (CS) has no button to skip the intro, and has a problem with using the right arrow to skip forward (it seems to only work once and then I have to use the mouse); this costs me about 30 seconds for each episode, which adds up when watching documentaries at 1.5x speed. The method of controlling the viewing is a little clunky: sometimes the popup menu at the bottom of the screen to control playback doesn’t disappear by itself until you mouse over it, and the space bar doesn’t select pause, instead it selects the last action. CS allows selecting individual episodes for your watch list instead of entire series; this could be useful for some people but I just find it annoying, though it might be good for classroom use. The method of searching for new shows to watch isn’t as good as the Netflix method or the way things are displayed on Android TV (which seems to have an API for multiple video providers to show a list of shows with one row per provider). Some of these things might seem OK if you haven’t used other services, but if you are used to Netflix and Amazon Prime then it will seem clunky.

    The amount of content on Curiosity Stream doesn’t seem that large. I don’t know how to get a full measure of it, but when I search for things I seem to get fewer results than on Netflix. That could be something to do with what I’m searching for.

    In terms of value for money $US15 per annum for the content that CS provides is a good deal. Netflix overall offers better value for home users having fiction as well as non-fiction in large quantities. But for documentary content $US15 per annum is pretty cheap. I recommend signing up for CS, but for most people signing up for Netflix first will be a good option.

    [1] https://curiositystream.com/

    04 January, 2022 05:27AM by etbe

    January 03, 2022

    Paul Wise

    FLOSS Activities December 2021

    Focus

    This month I didn't have any particular focus. I just worked on issues in my info bubble.

    Changes

    Issues

    Review

    • Spam: reported 166 Debian mailing list posts
    • Patches: reviewed libpst upstream patches
    • Debian packages: sponsored nsis, memtest86+
    • Debian wiki: RecentChanges for the month
    • Debian BTS usertags: changes for the month
    • Debian screenshots:

    Administration

    • libpst: setup GitHub presence, migrate from hg to git, requested details from bug reporters
    • plac: cleaned up git repo anomalies
    • Debian BTS: unarchive/reopen/triage bugs for reintroduced packages: stardict, node-carto
    • Debian wiki: unblock IP addresses, approve accounts

    Communication

    • Respond to queries from Debian users and contributors on the mailing lists and IRC

    Sponsors

    The purple-discord, python-plac, sptag, smart-open, libpst, memtest86+, oci-python-sdk work was sponsored. All other work was done on a volunteer basis.

    03 January, 2022 11:35PM

    Ian Jackson

    Debian’s approach to Rust - Dependency handling

    tl;dr: Faithfully following upstream semver, in Debian package dependencies, is a bad idea.

    Introduction

    I have been involved in Debian for a very long time. And I’ve been working with Rust for a few years now. Late last year I had cause to try to work on Rust things within Debian.

    When I did, I found it very difficult. The Debian Rust Team were very helpful. However, the workflow and tooling require very large amounts of manual clerical work - work which it is almost impossible to do correctly since the information required does not exist. I had wanted to package a fairly straightforward program I had written in Rust, partly as a learning exercise. But, unfortunately, after I got stuck in, it looked to me like the effort would be wildly greater than I was prepared for, so I gave up.

    Since then I’ve been thinking about what I learned about how Rust is packaged in Debian. I think I can see how to fix some of the problems. Although I don’t want to go charging in and try to tell everyone how to do things, I felt I ought at least to write up my ideas. Hence this blog post, which may become the first of a series.

    This post is going to be about semver handling. I see problems with other aspects of dependency handling and source code management and traceability as well, and of course if my ideas find favour in principle, there are a lot of details that need to be worked out, including some kind of transition plan.

    How Debian packages Rust, and build vs runtime dependencies

    Today I will be discussing almost entirely build-dependencies; Rust doesn’t (yet?) support dynamic linking, so built Rust binaries don’t have Rusty dependencies.

    However, things are a bit confusing because even the Debian “binary” packages for Rust libraries contain pure source code. So for a Rust library package, “building” the Debian binary package from the Debian source package does not involve running the Rust compiler; it’s just file-copying and format conversion. The library’s Rust dependencies do not need to be installed on the “build” machine for this.

    So I’m mostly going to be talking about Depends fields, which are Debian’s way of talking about runtime dependencies, even though they are used only at build-time. The way this works is that some ultimate leaf package (which is supposed to produce actual executable code) Build-Depends on the libraries it needs, and those Depends on their under-libraries, so that everything needed is installed.

    What do dependencies mean and what are they for anyway?

    In systems where packages declare dependencies on other packages, it generally becomes necessary to support “versioned” dependencies. In all but the most simple systems, this involves an ordering (or similar) on version numbers and a way for a package A to specify that it depends on certain versions of B.

    Both Debian and Rust have this. Rust upstream crates have version numbers and can specify their dependencies according to semver. Debian’s dependency system can represent that.
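
    As a rough illustration, an upstream semver dependency and its Debian rendering might look like this (simplified; the encoding actually generated by the Debian Rust team's tooling carries more detail, such as feature suffixes):

    # Cargo.toml (upstream crate)
    [dependencies]
    serde = "1.0.100"    # semver caret requirement: >=1.0.100, <2.0.0

    # debian/control (Debian binary package, simplified)
    Depends: librust-serde-1+default-dev (>= 1.0.100)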

    So it was natural for the designers of the scheme for packaging Rust code in Debian to simply translate the Rust version dependencies to Debian ones. However, while the two dependency schemes seem equivalent in the abstract, their concrete real-world semantics are totally different.

    These different package management systems have different practices and different meanings for dependencies. (Interestingly, the Python world also has debates about the meaning and proper use of dependency versions.)

    The epistemological problem

    Consider some package A which is known to depend on B. In general, it is not trivial to know which versions of B will be satisfactory. I.e., whether a new B, with potentially-breaking changes, will actually break A.

    Sometimes tooling can be used which calculates this (eg, the Debian shlibdeps system for runtime dependencies) but this is unusual - especially for build-time dependencies. Which versions of B are OK can normally only be discovered by a human consideration of changelogs etc., or by having a computer try particular combinations.

    Few ecosystems with dependencies, in the Free Software community at least, make an attempt to precisely calculate the versions of B that are actually required to build some A. So it turns out that there are three cases for a particular combination of A and B: it is believed to work; it is known not to work; and: it is not known whether it will work.

    And, I am not aware of any dependency system that has an explicit machine-readable representation for the “unknown” state, so that they can say something like “A is known to depend on B; versions of B before v1 are known to break; version v2 is known to work”. (Sometimes statements like that can be found in human-readable docs.)

    That leaves two possibilities for the semantics of a dependency “A depends B, version(s) V..W”:

    • Precise: A will definitely work if B matches V..W.
    • Optimistic: We have no reason to think B breaks with any of V..W.

    At first sight the latter does not seem useful, since how would the package manager find a working combination? Taking Debian as an example, which uses optimistic version dependencies, the answer is as follows: the primary information about which package versions to use is not the dependencies, but which Debian release is being targeted. (Other systems using optimistic version dependencies could use the date of the build, i.e. use only packages that are “current”.)

    The two models compare as follows:

    People involved in version management
        Precise: package developers; downstream developers/users.
        Optimistic: package developers; downstream developers/users; distribution QA and release managers.

    Package developers declare versions V and dependency ranges V..W so that
        Precise: it definitely works.
        Optimistic: a wide range of B can satisfy the declared requirement.

    The principal version data used by the package manager
        Precise: only dependency versions.
        Optimistic: contextual, eg, Releases - set(s) of packages available.

    Version dependencies are for
        Precise: selecting working combinations (out of all that ever existed).
        Optimistic: sequencing (ordering) of updates; QA.

    Expected use pattern by a downstream
        Precise: downstream can combine any declared-good combination.
        Optimistic: use a particular release of the whole system; mixing-and-matching requires additional QA and remedial work.

    Downstreams are protected from breakage by
        Precise: pessimistically updating versions and dependencies whenever anything might go wrong.
        Optimistic: whole-release QA.

    A substantial deployment will typically contain
        Precise: multiple versions of many packages.
        Optimistic: a single version of each package, except where there are actual incompatibilities which are too hard to fix.

    Package updates are driven by
        Precise: top-down - the depending package updates the declared metadata.
        Optimistic: bottom-up - the depended-on package is updated in the repository for the work-in-progress release.

    So, while Rust and Debian have systems that look superficially similar, they contain fundamentally different kinds of information. Simply representing the Rust versions directly into Debian doesn’t work.

    What is currently done by the Debian Rust Team is to manually patch the dependency specifications, to relax them. This is very labour-intensive, and there is little automation supporting either decisionmaking or actually applying the resulting changes.

    What to do

    Desired end goal

    To update a Rust package in Debian that many things depend on, one need simply update that package.

    Debian’s sophisticated build and CI infrastructure will try building all the reverse-dependencies against the new version. Packages that actually fail against the new dependency are flagged as suffering from release-critical problems.

    Debian Rust developers then update those other packages too. If the problems turn out to be too difficult, it is possible to roll back.

    If a problem with a depending package is not resolved in a timely fashion, priority is given to updating core packages, and the depending package falls by the wayside (since it is empirically unmaintainable, given available effort).

    There is no routine manual patching of dependency metadata (or of anything else).

    Radical proposal

    Debian should not precisely follow upstream Rust semver dependency information. Instead, Debian should optimistically try the combinations of packages that we want to have. The resulting breakages will be discovered by automated QA; they will have to be fixed by manual intervention of some kind, but usually, simply updating the depending package will be sufficient.

    This no longer ensures (unlike the upstream Rust scheme) that the result is expected to build and work if the dependencies are satisfied. But as discussed, we don’t really need that property in Debian. More important is the new property we gain: that we are able to mix and match versions that we find work in practice, without a great deal of manual effort.

    Or to put it another way, in Debian we should do as a Rust upstream maintainer does when they do the regular “update dependencies for new semvers” task: we should update everything, see what breaks, and fix those.

    (In theory a Rust upstream package maintainer is supposed to do some additional checks or something. But the practices are not standardised and any checks one does almost never reveal anything untoward, so in practice I think many Rust upstreams just update and see what happens. The Rust upstream community has other mechanisms - often, reactive ones - to deal with any problems. Debian should subscribe to those same information sources, eg RustSec.)

    Nobbling cargo

    Somehow, when cargo is run to build Rust things against these Debian packages, cargo’s dependency system will have to be overridden so that the version of the package that is actually selected by Debian’s package manager is used by cargo without complaint.

    We probably don’t want to change the Rust version numbers of Debian Rust library packages, so this should be done by either presenting cargo with an automatically-massaged Cargo.toml where the dependency version restrictions are relaxed, or by using a modified version of cargo which has special option(s) to relax certain dependencies.
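
    For instance, the massaging could be as crude as rewriting every version requirement to a wildcard. A minimal sketch in Python (using the third-party toml module; real tooling would also need to handle workspaces, target-specific sections, and so on):

    import toml

    def relax(spec):
        # A dependency is either a bare version string or a table with an
        # optional "version" key (plus features, path, etc.).
        if isinstance(spec, str):
            return "*"
        if isinstance(spec, dict) and "version" in spec:
            spec = dict(spec, version="*")
        return spec

    manifest = toml.load("Cargo.toml")
    for section in ("dependencies", "dev-dependencies", "build-dependencies"):
        for name, spec in manifest.get(section, {}).items():
            manifest[section][name] = relax(spec)

    with open("Cargo.toml.relaxed", "w") as f:
        toml.dump(manifest, f)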

    Handling breakage

    Rust packages in Debian should already be provided with autopkgtests so that ci.debian.net will detect build breakages. Build breakages will stop the updated dependency from migrating to the work-in-progress release, Debian testing.
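
    For illustration, the test metadata for a leaf package could be as small as this (purely illustrative, not what the team's tooling actually generates; a real test would also need cargo configured to resolve crates from the Debian-packaged sources rather than the network):

    # debian/tests/control (illustrative)
    Test-Command: cargo test --offline
    Depends: @, cargo
    Restrictions: allow-stderr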

    To resolve this, and allow forward progress, we will usually upload a new version of the dependency containing an appropriate Breaks, and either file an RC bug against the depending package, or update it. This can be done after the upload of the base package.

    Thus, resolution of breakage due to incompatibilities will be done collaboratively within the Debian archive, rather than ad-hoc locally. And it can be done without blocking.

    My proposal prioritises the ability to make progress in the core, over stability and in particular over retaining leaf packages. This is not Debian’s usual approach, but given the Rust ecosystem’s practical attitudes to API design, versioning, etc., I think the instability will be manageable. In practice fixing leaf packages is not usually really that hard, but it’s still work, and the question is what happens if the work doesn’t get done. After all, we always have a shortage of effort - and we probably still will, even if we get rid of the makework clerical work of patching dependency versions everywhere (so that usually no work is needed on depending packages).

    Exceptions to the one-version rule

    There will have to be some packages that we need to keep multiple versions of. We won’t want to update every depending package manually when this happens. Instead, we’ll probably want to set a version number split: rdepends which want version <X will get the old one.

    Details - a sketch

    I’m going to sketch out some of the details of a scheme I think would work. But I haven’t thought this through fully. This is still mostly at the handwaving stage. If my ideas find favour, we’ll have to do some detailed review and consider a whole bunch of edge cases I’m glossing over.

    The dependency specification consists of two halves: the depending .deb’s Depends (or, for a leaf package, Build-Depends), and the base .deb’s Version and perhaps Breaks and Provides.

    Even though libraries vastly outnumber leaf packages, we still want to avoid updating leaf Debian source packages simply to bump dependencies.

    Dependency encoding proposal

    Compared to the existing scheme, I suggest we implement the dependency relaxation by changing the depended-on package, rather than the depending one.

    So we retain roughly the existing semver translation for Depends fields. But we drop all local patching of dependency versions.

    Into every library source package we insert a new Debian-specific metadata file declaring the earliest version that we uploaded. When we translate a library source package to a .deb, the “binary” package build adds Provides for every previous version.
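
    As a sketch of the mechanics (the metadata file name and the helper below are hypothetical; the package naming follows the existing librust convention), the package build could expand the declared range into Provides like this:

    # debian/rust-earliest-version (hypothetical metadata file) contains: 0.9

    def provides_lines(crate, api_versions):
        """Emit Provides for every API version other than the current one.

        api_versions lists the semver-distinct API versions this source has
        spanned, oldest first (per cargo's rules each 0.x minor is its own
        API, while each major >= 1 is a single API).
        """
        return ["librust-%s-%s-dev" % (crate, v) for v in api_versions[:-1]]

    print(", ".join(provides_lines("foo", ["0.9", "0.10", "1"])))
    # -> librust-foo-0.9-dev, librust-foo-0.10-dev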

    The effect is that when one updates a base package, the usual behaviour is to simply try to use it to satisfy everything that depends on that base package. The Debian CI will report the build or test failures of all the depending packages which the API changes broke.

    We will have a choice, then:

    Breakage handling - update broken depending packages individually

    If there are only a few packages that are broken, for each broken dependency, we add an appropriate Breaks to the base binary package. (The version field in the Breaks should be chosen narrowly, so that it is possible to resolve it without changing the major version of the dependency, eg by making a minor source change.)
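
    For example (crate names and versions invented), if the new base package breaks one rdependency at 2.3.1-1, the base binary package might gain:

    Breaks: librust-dependingcrate-2-dev (<< 2.3.1-2~)

    so that uploading a 2.3.1-2 revision of the rdependency, containing only a small source fix, is enough to satisfy the constraint without a new upstream major version.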

    We can then do one of the following:

    • Update the dependency from upstream, to a version which works with the new base. (Assuming there is one.) This should be the usual response.

    • Fix the dependency source code so that it builds and works with the new base package. If this wasn’t just a backport of an upstream change, we should send our fix upstream. (We should prefer to update the whole package rather than backport an API adjustment.)

    • File an RC bug against the dependency (which will eventually trigger autoremoval), or preemptively ask for the Debian release managers to remove the dependency from the work-in-progress release.

    Breakage handling - declare new incompatible API in Debian

    If the API changes are widespread and many dependencies are affected, we should represent this by changing the in-Debian-source-package metadata to arrange for fewer Provides lines to be generated - withdrawing the Provides lines for earlier APIs.

    Hopefully examination of the upstream changelog will show what the main compat break is, and therefore tell us which Provides we still want to retain.

    This is like declaring Breaks for all the rdepends. We should do it if many rdepends are affected.

    Then, for each rdependency, we must choose one of the responses in the bullet points above. In practice this will often be a mass bug filing campaign, or large update campaign.

    Breakage handling - multiple versions

    Sometimes there will be a big API rewrite in some package, and we can’t easily update all of the rdependencies because the upstream ecosystem is fragmented and the work involved in reconciling it all is too substantial.

    When this happens we will bite the bullet and include multiple versions of the base package in Debian. The old version will become a new source package with a version number in its name.

    This is analogous to how key C/C++ libraries are handled.

    Downsides of this scheme

    The first obvious downside is that assembling some arbitrary set of Debian Rust library packages, that satisfy the dependencies declared by Debian, is no longer necessarily going to work. The combinations that Debian has tested - Debian releases - will work, though. And at least, any breakage will affect only people building Rust code using Debian-supplied libraries.

    Another less obvious problem is that because there is no such thing as Build-Breaks (in a Debian binary package), the per-package update scheme may result in no way to declare that a particular library update breaks the build of a particular leaf package. In other words, old source packages might no longer build when exposed to newer versions of their build-dependencies, taken from a newer Debian release. This is a thing that already happens in Debian, with source packages in other languages, though.

    Semver violation

    I am proposing that Debian should routinely compile Rust packages against dependencies in violation of the declared semver, and ship the results to Debian’s millions of users.

    This sounds quite alarming! But I think it will not in fact lead to shipping bad binaries, for the following reasons:

    The Rust community strongly values safety (in a broad sense) in its APIs. An API which is merely capable of insecure (or other seriously bad) use is generally considered to be wrong. For example, such situations are regarded as vulnerabilities by the RustSec project, even if there is no suggestion that any actually-broken caller source code exists, let alone that actually-broken compiled code is likely.

    The Rust community also values alerting programmers to problems. Nontrivial semantic changes to APIs are typically accompanied not merely by a semver bump, but also by changes to names or types, precisely to ensure that broken combinations of code do not compile.

    Or to look at it another way, in Debian we would simply be doing what many Rust upstream developers routinely do: bump the versions of their dependencies, and throw it at the wall and hope it sticks. We can mitigate the risks the same way a Rust upstream maintainer would: when updating a package we should of course review the upstream changelog for any gotchas. We should look at RustSec and other upstream ecosystem tracking and authorship information.

    Difficulties for another day

    As I said, I see some other issues with Rust in Debian.

    • I think the library “feature flag” encoding scheme is unnecessary. I hope to explain this in a future essay.

    • I found Debian’s approach to handling the source code for its Rust packages quite awkward; and, it has some troubling properties. Again, I hope to write about this later.

    • I get the impression that updating rustc in Debian is a very difficult process. I haven’t worked on this myself and I don’t feel qualified to have opinions about it. I hope others are thinking about how to make things easier.

    Thanks all for your attention!




    03 January, 2022 06:35PM

    Thorsten Alteholz

    My Debian Activities in December 2021

    FTP master

    This month I accepted 412 and rejected 44 packages. The overall number of packages that got accepted was 423.

    Debian LTS

    This was my ninetieth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

    This month my all in all workload has been 40h. During that time I did LTS and normal security uploads of:

    • [DLA 2846-1] raptor2 security update for one CVE
    • [DLA 2845-1] libsamplerate security update for one CVE
    • [DLA 2859-1] zziplib security update for one CVE
    • [DLA 2858-1] libzip security update for one CVE
    • [DLA 2869-1] xorg-server security update for three CVEs
    • [#1002912] for graphicsmagick in Buster
    • [debdiff] for sphinxearch/buster to maintainer and sec team
    • [debdiff] for zziplib/buster to maintainer
    • [debdiff] for zziplib/bullseye to maintainer
    • [debdiff] for raptor2/bullseye to maintainer

    I also started to work on libarchive

    Further I worked on packages in NEW on security-master. In order to process such packages faster, I added a notification when work arrives there.

    Last but not least I did some days of frontdesk duties.

    Debian ELTS

    This month was the forty-second ELTS month.

    During my allocated time I uploaded:

    • ELA-527-1 for libsamplerate
    • ELA-528-1 for raptor2
    • ELA-529-1 for ufraw
    • ELA-532-1 for zziplib
    • ELA-534-1 for xorg-server

    Last but not least I did some days of frontdesk duties.

    Debian Astro

    Related to my previous article about fun with telescopes I uploaded new versions or did source uploads for:

    Besides the indi-stuff I also uploaded

    Other stuff

    I celebrated Christmas :-).

    03 January, 2022 12:23PM by alteholz

    hackergotchi for Joachim Breitner

    Joachim Breitner

    Telegram bots in Python made easy

    A while ago I set out to get some teenagers interested in programming, and thought about a good way to achieve that. A way that allows them to get started with very little friction, quickly build something that’s relevant to their current lives, and avoid frustration.

    They were old enough to have their own smartphone, and they were already happily chatting with their friends, using the Telegram messenger. I have already experimented a bit with writing bots for Telegram (e.g. @Umklappbot or @Kaleidogen), and it occurred to me that this might be a good starting point: Chat bot interactions have a very simple data model: message in, response out, all simple text. Much simpler than anything graphical or even web programming. In a way it combines the simplicity of the typical initial programming exercises on the command-line with the impact and relevance of web programming.

    But of course “real” bot programming is still too hard – installing a programming environment, setting up a server, deploying, dealing with access tokens, understanding the Telegram Bot API and mapping it to your programming language.

    The IDE

    So I built a browser-based Python programming environment for Telegram bots that takes care of all of that. You simply write a single Python function, click the “Deploy” button, and the bot is live. That’s it!

    This environment provides a much simpler “API” for the bots: Define a function like the following:

      def private_message(sender, text):
          return "Hello!"

    This gets called upon a message, and if it returns a String, that’s the response. That’s it! Not enough to build any kind of Telegram bot, but sufficient for many fun applications.
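
    As a slightly bigger (and entirely made-up) example in the same single-function style, a toy shopping-list bot might look like this; note that it naively assumes the environment keeps the process, and hence the global list, alive between messages:

    shopping = []

    def private_message(sender, text):
        if text.startswith("add "):
            shopping.append(text[len("add "):])
            return "Added!"
        if text == "list":
            return "\n".join(shopping) or "The list is empty."
        if text == "clear":
            shopping.clear()
            return "Cleared!"
        return "Commands: add <item>, list, clear"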

    A chatbot

    In fact, my nephew and niece used this to build a simple interactive fiction game, where the player says where they are going (“house”, “forest”, “lake”) and thus explores the story, and in the end kills the dragon. And my girlfriend created a shopping list bot that we are using “productively”.

    If you are curious, you can follow the instructions to create your own bot. There you can also find the source code and instructions for hosting your own instance (on Amazon Web Services).

    Help with the project (e.g. improving the sandbox for running untrusted Python code; making the frontend work better) is of course highly appreciated, too. The frontend is written in PureScript, and the backend in Python, building on Amazon Lambda and Amazon DynamoDB.

    03 January, 2022 10:20AM by Joachim Breitner (mail@joachim-breitner.de)

    Birger Schacht

    Introducing carl

    For some time now I wanted to learn Rust, but I either didn’t have the time or couldn’t come up with a nice beginner project. Given that I recently found myself to be without a job and we had another lockdown in the part of the world I happen to live in, I decided to give that idea another go (no pun intended).

    There is apparently a trend to reimplement existing Unix tools in Rust (see exa, a ‘modern replacement for ls’, delta, a syntax highlighting pager for git, diff and grep output, bat, a ‘cat clone with wings’, zellij, a terminal workspace, ripgrep, a line-oriented search tool …). I looked around for what else was out there, but what I wasn’t able to find was an implementation of cal(1) in Rust (maybe I wasn’t looking hard enough; feel free to point out anything I might have overlooked). No cal in Rust, even though a calendar implementation would provide the potential to go over the top with terminal colors, which is also very important when writing reimplementations of older CLI tools! So I started writing and soon had a simple prototype of a cal reimplementation. A couple of weeks later I can now present carl, a cal implementation in Rust:

    default carl output

    The default output of carl is what you would expect from a commandline calendar tool. It prints the days of the current month and highlights the current day. Unlike the cal tools I tried, carl by default also prints days that are in the past in grey. Like cal, it can also print three months if you use the -3 switch, or the whole year if you use the -y switch.

    Colors

    You can use a theme file to change the colors carl uses for various dates. The name of the theme is set in carls configuration file (in XDG_CONFIG_HOME/.carl/config.toml or system-wide in XDG_CONFIG_DIRS/.carl/config.toml) using the theme setting - carl looks for the theme in the file ${themename}.theme in the configuration folders. A custom theme could for example look like this:

    custom carl output

    You can change the foreground color, background color and/or style of a date using a combination of various stylenames like BGRed, Bold and FGCyan. The README lists all possible stylenames. A list of date properties defines which dates the specific style should affect - date properties are, for example, CurrentDate, AfterCurrentDate or FirstDayOfMonth. Again, the README lists the existing date properties, and I’m happy to implement more of them if you have ideas.
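
    To give a rough idea of how these pieces combine (a simplified illustration; see the README for the exact schema), a configuration and theme might look like:

    # XDG_CONFIG_HOME/.carl/config.toml
    theme = "mytheme"

    # XDG_CONFIG_HOME/.carl/mytheme.theme (approximate schema)
    [[dates]]
    properties = ["CurrentDate"]
    style = ["Bold", "BGRed"]

    [[dates]]
    properties = ["FirstDayOfMonth"]
    style = ["FGCyan"]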

    Ical support

    I also added the option to list .ical files in the configuration file. carl can display the dates from the ical file(s) with a separate style. There can either be a global style for all the dates from all the ical files by using the IsEvent date property or a separate style for the dates of a specific *.ical file.

    carl output with ical files

    Using the --agenda commandline switch carl also shows an agenda of dates from the ical files.

    Installation

    I have uploaded carl to crates.io, so you can install carl using cargo: cargo install carl. I also plan to upload it to Debian at some point. If you find bugs or have feature requests, please don’t hesitate to create issues.

    03 January, 2022 08:52AM

    January 02, 2022

    hackergotchi for Ben Hutchings

    Ben Hutchings

    Debian LTS work, December 2021

    In December I was assigned 20 hours of work by Freexian's Debian LTS initiative. I worked 16 hours, and the remaining 4 hours cancelled out my over-work in November.

    I completed an update to the linux (4.9) package and issued DLA 2843-1.

    02 January, 2022 03:58PM

    January 01, 2022

    hackergotchi for Chris Lamb

    Chris Lamb

    Favourite books of 2021: Classics

    In my three most recent posts, I went over the memoirs and biographies, the non-fiction and the fiction I enjoyed in 2021. In the last of my 2021 book-related posts, I'll be going over my favourite classics.

    Of course, the difference between regular fiction and a 'classic' is an ambiguous, arbitrary and often-meaningless distinction: after all, what does it matter if Hemingway's The Old Man and the Sea (from 1951) is a classic or not? The term also smuggles in some of the ethnocentric gatekeeping encapsulated in the term 'Western canon' too. Nevertheless, the label of 'classic' has some utility for me in that it splits up the vast amount of fiction I read in two...

    Books that just missed the cut here include: Oscar Wilde's The Picture of Dorian Gray (moody and hilarious, but I cannot bring myself to include it due to the egregious antisemitism); Tolstoy's The Kreutzer Sonata (so angry! so funny!); and finally Notes from Underground by Fyodor Dostoevsky. Of significant note, though, would be the ghostly The Turn of the Screw by Henry James.

    §

    Heart of Darkness (1899)

    Joseph Conrad

    Heart of Darkness tells the story of Charles Marlow, a sailor who accepts an assignment from a Belgian trading company as a ferry-boat captain in the African interior, and the novella is widely regarded as a critique of European colonial rule in Africa. It was loosely remade by Francis Ford Coppola as Apocalypse Now (1979), and I started this book with the distinct possibility that this superb film adaptation would, for a rare treat, be 'better than the book'. However, Conrad demolished this idea of mine within two chapters, yet also elevated the film to a new level as well.

    This was chiefly due to how observant Conrad was of the universals that make up human nature. Some of his insight pertains to the barbarism of the colonialists, of course, but Conrad applies his shrewd acuity at the smaller level as well. Some of these quotes are justly famous: “Ah! but it was something to have at least a choice of nightmares,” for example, as well as the reference to a fastidiously turned-out colonial administrator who, with unimaginable horrors occurring mere yards from his tent, we learn “he was devoted to his books, which were in applepie order”. (It seems to me to be deliberately unclear whether his devotion arises from gross inhumanity, utter denial or some combination of the two.) Oh, and there's a favourite moment of mine when a character remarks that “It was very fine for a time, but after a bit I did get tired of resting.”

    Tired of resting! Yes, it's difficult to now say something original about a many-layered classic such as this, especially one that has been analysed from so many angles already; from a literary perspective at first, of course, but much later from a critical postcolonial perspective, such as in Chinua Achebe's noted 1975 lecture, An Image of Africa. Indeed, the history of criticism in the twentieth century of Heart of Darkness must surely parallel the social and political developments in the Western world. (On a highly related note, the much-cited non-fiction book King Leopold's Ghost is on my reading list for 2022.)

    I will therefore limit myself to saying that the boat physically falling apart as it journeys deeper into the Congo may be intended to represent that our idea of 'Western civilisation' ceases to function, both morally as well as physically, in this remote environment. And, whilst I'm probably not the first to notice the potential ambiguity, when Marlow lies to Kurtz's 'Intended [wife]' in the closing section in order to save her from being exposed to the truth about Kurtz (surely a metaphor about the ignorance of the West whilst also possibly incorporating some comment on gender?), the Intended replies: “I knew it.” For me, though, it is not beyond doubt that what the Intended 'knows' is that she knew that Marlow would lie to her: in other words, that the alleged ignorance of everyday folk in the colonial homeland is studied and deliberate. Compact and fairly easy-to-read, it is clear that Heart of Darkness rewards even the most rudimentary analysis.

    §

    Rebecca (1938)

    Daphne du Maurier

    Daphne du Maurier creates in Rebecca a credible and suffocating atmosphere in the shape of Manderley, a grand English mansion owned by aristocratic widower Maxim de Winter. Our unnamed narrator (a young woman seemingly naïve in the ways of the world) meets Max in Monte Carlo, and she soon becomes the second Mrs. de Winter. The tale takes a turn to the 'gothic', though, when it becomes apparent that the unemotional Max, as well as potentially Manderley itself, appears to be haunted by the memory of his late first wife, the titular Rebecca.

    Still, Rebecca is less of a story about supernatural ghosts than one about the things that can haunt our minds. For Max, this might be something around guilt; for our narrator, the class-centered fear that she will never fit in. Besides, Rebecca doesn't need an actual ghost when you have Manderley's overbearing housekeeper, Mrs Danvers, surely one of the creepiest characters in all of fiction. Either way, the conflict of a kind between the fears of the protagonists means that they never really connect with each other.

    The most obvious criticism of Rebecca is that the main character is unreasonably weak and cannot quite think or function on her own. (Isn't it curious that the trait of the male 'everyman' is a kind of physical clumsiness yet the female equivalent is shorthanded by being slightly slow?) But the naïvete of Rebecca's narrator makes her easier to relate to in a way, and it also makes the reader far more capable of empathising with her embarrassment. This is demonstrated best whilst she, in one of the best evocations of this particular anxiety I have yet come across, is gingerly creeping around Manderley and trying to avoid running into the butler.

    A surprise of sorts comes in the latter stages of the book, and this particular twist brings us into contact with a female character who is anything but 'credulous'. This revelation might even change your idea of who the main character of this book really is too. (Speaking of amateur literary criticism, I have many fan theories about Rebecca, including that Maxim de Winter's estate manager, Frank Crawley, is actually having an affair with Max, and also that Maxim may have a lot more involvement in Mrs Danvers' final act than he lets on.)

    An easily accessible novel (with a great-but-not-perfect 1940 adaptation by Alfred Hitchcock), Rebecca is a real indulgence.

    §

    A Clockwork Orange (1962)

    Anthony Burgess

    One of Stanley Kubrick's most prominent tricks was to use different visual languages in order to prevent the audience from immediately grasping the underlying story. In his 1975 Barry Lyndon, for instance, the intentionally sluggish pacing and elusive characters require significant digestion to fathom and appreciate, and the luminous and quasi-Renaissance splendour of the cinematography does its part to constantly distract the viewer from the film's greater meaning.

    This is very much the case in Kubrick's A Clockwork Orange as well — whilst it ostensibly appears to be about a Saturnalia of violence, the 'greater meaning' of A Clockwork Orange pertains to the Christian conception of free will; admittedly, a much drier idea to bother making a film around. This is all made much clearer when reading Anthony Burgess' 1962 original novel. Alex became a 'true Christian' through the experimental rehabilitation process, and even offers to literally turn the other cheek at one point. But as Alex had no choice to do so (and can no longer choose to commit violence), he is incapable of making a free moral choice. Thus, is he really a Man?

    Yet whilst the book's central concern is our conception of free will in modern societies, it also appears to be a repudiation of two conservative principles. Firstly, A Clockwork Orange demolishes the idea that 'high art' leads to morally virtuous citizens. After all, if you can do a “bit of the old ultra-violence” whilst listening to the “glorious 9th by old Ludvig van,” then so much for the oft-repeated claims that culture makes you better as a person. (This, at least, I already knew from personal experience.)

    The other repudiation in A Clockwork Orange is in regard to the pervasive idea that the countryside is a refuge from crime and sin. By contrast, we see the gang commit their most horrific violence in rural areas, and, later, Alex is taken to the countryside by his former droogs for a savage beating. Although this doesn't seem to quite fit the novel, this was actually an important point for Burgess to include: otherwise his book could easily be read as a commentary on the corrupting influence of urban spaces, rather than of modernity itself.

    The language of this book cannot escape comment here. Alex narrates most of the book in a language called Nadsat, a fractured slang constructed by Burgess based on Russian and Cockney rhyming slang. (The language is strange for only a few pages, I promise. And note that 'Alex' is a very common Russian name.) Using Nadsat has the effect of making the book feel distinctly alien, but it also prevents it from prematurely aging too. Indeed, it comes as a bit of a shock to realise that A Clockwork Orange was published in 1962, the same year as The Beatles released their first single, Love Me Do.

    I could probably say a whole lot more about this thoroughly engrossing book and its movie adaptation (eg. the meta-textual line in Kubrick's version: “It's funny how the colours of the real world only seem really real when you watch them on a screen...” appears verbatim in the textual original), but I'll leave it there. The book of A Clockwork Orange is not only worth the investment in the language, but is, again, somehow better than the film.

    §

    The Great Gatsby (1925)

    F. Scott Fitzgerald

    I'm actually being a little deceitful by including this book here: I cannot really say that The Great Gatsby was a 'favourite' read of the year, but its literary merit is so undeniable (and my respect for Fitzgerald's achievement is deep enough) that the experience was one of those pleasures you feel at seeing anything done well.

    Here you have a book so rich in symbolic meaning that you could easily confuse the experience with drinking Coke syrup undiluted. And a text that has made the difficulty and complexity of reading character a prominent theme of the novel, as well as a technical concern of the book itself. Yet at all times you have in your mind that The Great Gatsby is first and foremost a book about a man writing a book, and, therefore, about the construction of stories and myths.

    What is the myth being constructed in Gatsby? The usual answer today is that the book is really about the moral virtues of America. Or, rather, the lack thereof. Indeed, as James Boice wrote in 2016:

    Could Wilson have killed Gatsby any other way? Could he have ran him over, or poisoned him, or attacked him with a knife? Not at all—this an American story, the quintessential one, so Gatsby could have only died the quintessential American death.

    The “quintessential American death” is, of course, being killed with a gun.

    Whatever your own analysis, The Great Gatsby is not only magnificently written, but it is captivating to the point where references intrude many months later. For instance, when reading something about Disney's 'princess culture', I was reminded of when Daisy says of her daughter: “I hope she'll be a fool—that's the best thing a girl can be in this world, a beautiful little fool”. Or the billboard with the eyes of 'Doctor T. J. Eckleburg'. Or the fact that the books in Gatsby's library have never been read (so what is 'Owl Eyes' doing there during the party?!). And the only plain room in Gatsby's great house is his bedroom... Okay, fine, I must have been deluding myself: I love this novel.

    01 January, 2022 06:38PM

    Caleb Adepitan

    Everybody Struggles

    I often get intimidated by the brilliance of others by just looking at them from a distance seeing how radiantly they gleam in their knowledge. I often feel like I'm not doing enough or I'm just meant to be a mediocre and there's nothing I can do to be more than that. I often feel like everyone except me is a genius, and are impeccable unlike me.

    But maybe I'm just at a point these people had previously been at, and have now gone past. Maybe it's my time to steer the wheel they once steered, or maybe they aren't even what they seem to me to be and it's all just a misrepresentation cooked up by an imperfect mind. Hmm...it has to be that. A “perfect” portrait painted by an imperfect mind which when found out renders every stroke in the portrait imperfect. It has to be that or all.


    No one starts having it all figured out. A king is born in a day, but a king is not nurtured in a day. We all have something we struggle with; it's just different all across. And the fact that their current struggle isn't my current struggle, and mine not theirs, doesn't make them immune to the system. Most times all that “gleaming in their knowledge” is a collective thing, and you only notice when you take a closer look: that there are more heads to the knowledge and brilliance than you think. We mostly see what people know and seldom see what they don't know. If it were the other way around, we'd see how ridiculous the things these highly esteemed people don't know are. What matters most is that they keep making it through their struggles.

    No man is an island! The sooner we realize this the better. We can always take those things we struggle with to others and work on them collectively, learn from a lot of other experiences, and gleam collectively. It's like a tug of war: when what is on the other end is more than we can pull off, we invite many more hands to help pull it off.

    01 January, 2022 03:36PM by Caleb Adepitan

    Utkarsh Gupta

    FOSS Activites in December 2021

    Here’s my (twenty-seventh) monthly but brief update about the activities I’ve done in the F/L/OSS world.

    Debian

    This was my 36th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

    Just churning through the backlog again this month. Ugh.

    Anyway, I did the following stuff in Debian:

    Uploads and bug fixes:

    • ruby2.7 (2.7.5-1) - New upstream version fixing 3 new CVEs.

    Other $things:

    • Mentoring for newcomers.
    • Moderation of -project mailing list.

    Ubuntu

    This was my 11th month of actively contributing to Ubuntu. Now that I’ve joined Canonical to work on Ubuntu full-time, there’s a bunch of things I do! \o/

    I mostly worked on different things, I guess.

    I was too lazy to maintain a list of things I worked on so there’s no concrete list atm. Maybe I’ll get back to this section later or will start to list stuff from next year onward, as I was doing before. :D


    Debian (E)LTS

    Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

    And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

    This was my twenty-seventh month as a Debian LTS and eighteenth month as a Debian ELTS paid contributor.
    I was assigned 40.00 hours for LTS and 60.00 hours for ELTS and worked on the following things:
    (since I had a 3-week vacation, I wanted to wrap up the things that were pending, and so I worked 20h more for LTS, which I’ll compensate for next month!)

    LTS CVE Fixes and Announcements:

    ELTS CVE Fixes and Announcements:

    • Issued ELA 525-2, fixing CVE-2021-43527, for nss.
    For Debian 8 jessie, these problems have been fixed in version 2:3.26-1+deb8u15.
    • Issued ELA 530-1, for systemd.
      For Debian 8 jessie, these problems have been fixed in version 215-17+deb8u14.
    • Issued ELA 531-1, fixing CVE-2021-41817 and CVE-2021-41819, for ruby2.1.
      For Debian 8 jessie, these problems have been fixed in version 2.1.5-2+deb8u13.
    • Issued ELA 533-1, fixing CVE-2018-12020, for python-gnupg.
      For Debian 8 jessie, these problems have been fixed in version 0.3.6-1+deb8u2.
    • Issued ELA 536-1, fixing CVE-2021-43818, for lxml.
      For Debian 8 jessie, these problems have been fixed.
    • Started working on src:samba for CVE-2020-25717 to CVE-2020-25722 and CVE-2021-23192 for jessie and stretch, both.
    The version difference between the suites is a bit too much for the patch(es) to be easily backported. I’ve talked to Anton to work something out. \o/
    • Found the problem w/ libjdom1-java. Will have to roll the regression upload.
    I’ve prepared the patch but it needs some testing before it can finally be rolled out. Same for stretch.

    Other (E)LTS Work:

    • Front-desk duty from 29-11 to 05-12 and 20-12 to 26-12 for both LTS and ELTS.
    • Triaged ffmpeg, git, gpac, inetutils, mc, modsecurity-crs, node-object-path, php-pear, systemd-cron, node-tar, ruby2.3, gst-plugins-bad0.10, npm, nltk, request-tracker4, ros-ros-comm, mediawiki, ruby2.1, ckeditor, ntfs-3g, tiff, wordpress, and jsoup, udisks2, libgit2, python3.5, python3.4, and openssh.
    • Mark CVE-2021-38171/ffmpeg as postponed for stretch.
    • Mark CVE-2021-40330/git as no-dsa for stretch and jessie.
    • Mark CVE-2020-19481/gpac as ignored for stretch.
    • Mark CVE-2021-40491/inetutils as no-dsa for stretch.
    • Mark CVE-2021-36370/mc as no-dsa for stretch and jessie.
    • Mark CVE-2021-35368/modsecurity-crs as no-dsa for stretch.
    • Mark CVE-2021-23434/node-object-path as end-of-life for stretch.
    • Mark CVE-2021-32610/php-pear as no-dsa for stretch.
    • Mark CVE-2017-9525/systemd-cron as no-dsa for stretch.
    • Mark CVE-2021-37701/node-tar as end-of-life for stretch.
    • Mark CVE-2021-37712/node-tar as end-of-life in stretch.
    • Mark CVE-2021-39201/wordpress as not-affected for jessie.
    • Mark CVE-2020-19143/tiff as not-affected for stretch and jessie.
    • Mark CVE-2021-38562/request-tracker4 as no-dsa for stretch.
    • Mark CVE-2021-37146/ros-ros-comm as no-dsa for stretch.
    • Mark CVE-2021-28965/ruby2.1 as ignored for jessie.
    • Mark CVE-2021-37714/jsoup as ignored for jessie.
    • Mark CVE-2021-41617/openssh as no-dsa for jessie.
    • Auto EOL’ed ardour, nltk, request-tracker4, python-scrapy, webkit2gtk, and linux for jessie.
    • Attended monthly Debian LTS meeting.
    • Answered questions (& discussions) on IRC (#debian-lts and #debian-elts).
    • General and other discussions on LTS private and public mailing list.

    Debian LTS Survey

    I’ve spent 5 hours on the LTS survey on the following bits:

    • Went through the old content on the previous survey.
    • Reviewed the new content - still more work to do.
    • Discussed the survey bits in the team meeting.
    • Partly reviewing the questions of the survey.
    • Walking through the instance to find the doability of the tasks discussed in the meeting.
    • Segregating and staging questions. More work to do here.

    Until next time.
    :wq for today.

    01 January, 2022 05:41AM

    hackergotchi for Junichi Uekawa

    Junichi Uekawa

    Welcome to 2022.

    Welcome to 2022. Hopefully I will be writing more code this year.

    01 January, 2022 03:04AM by Junichi Uekawa

    December 31, 2021

    hackergotchi for Chris Lamb

    Chris Lamb

    Favourite books of 2021: Fiction

    In my two most recent posts, I listed the memoirs and biographies and followed this up with the non-fiction I enjoyed the most in 2021. I'll leave my roundup of 'classic' fiction until tomorrow, but today I'll be going over my favourite fiction.

    Books that just miss the cut here include Kingsley Amis' comic Lucky Jim, Cormac McCarthy's The Road (although see below for McCarthy's Blood Meridian) and the Complete Adventures of Tintin by Hergé, the latter forming an inadvertently incisive portrait of the first half of the 20th century.

    As ever, there were a handful of books that didn't live up to prior expectations. Despite all of the hype, Emily St. John Mandel's post-pandemic dystopia Station Eleven didn't match her superb The Glass Hotel (one of my favourite books of 2020). The same could be said of John le Carré's The Spy Who Came in from the Cold, which felt significantly shallower compared to Tinker, Tailor, Soldier, Spy — again, a favourite of last year. The strangest book (and most difficult to classify at all) was undoubtedly Patrick Süskind's Perfume: The Story of a Murderer, and the fiction book I disliked the most was almost certainly Beartown by Fredrik Backman.

    Two other mild disappointments were actually film adaptations. Specifically, the original source for Vertigo by Pierre Boileau and Thomas Narcejac didn't match Alfred Hitchcock's 1958 masterpiece, and neither did James Sallis' Drive, which was made into a superb 2011 neon-noir directed by Nicolas Winding Refn. These two films thus defy the usual trend and are 'better than the book', but that's a post for another day.

    §

    A Wizard of Earthsea (1971)

    Ursula K. Le Guin

    How did it come to be that Harry Potter is the publishing sensation of the century, yet Ursula K. Le Guin's Earthsea is only a popular cult novel? Indeed, the comparisons and unintentional intertextuality with Harry Potter are entirely unavoidable when reading this book, and, in almost every respect, Ursula K. Le Guin's universe comes out the victor.

    In particular, the wizarding world that Le Guin portrays feels a lot more generous and humble than the class-ridden world of Hogwarts School of Witchcraft and Wizardry. Just to take one example from many, in Earthsea, magic turns out to be nurtured in a bottom-up manner within small village communities, in almost complete contrast to J. K. Rowling's concept of benevolent government departments and NGO-like institutions, which now seems far too New Labour for me. Indeed, imagine an entire world imbued with the kindly benevolence of Dumbledore, and you've got some of the moral palette of Earthsea.

    The gently moralising tone that runs through A Wizard of Earthsea may put some people off:

    Vetch had been three years at the School and soon would be made Sorcerer; he thought no more of performing the lesser arts of magic than a bird thinks of flying. Yet a greater, unlearned skill he possessed, which was the art of kindness.

    Still, these parables aimed directly at the reader are fairly rare, and, for me, remain on the right side of being mawkish or hectoring. I'm thus looking forward to reading the next two books in the series soon.

    §

    Blood Meridian (1985)

    Cormac McCarthy

    Blood Meridian follows a band of American bounty hunters who are roaming the Mexican-American borderlands in the late 1840s. Far from being remotely swashbuckling, though, the group are collecting scalps for money and killing anyone who crosses their path. It is the most unsparing treatment of American genocide and moral depravity I have ever come across, an anti-Western that flouts every convention of the genre. Blood Meridian thus has a family resemblance to that other great anti-Western, Once Upon a Time in the West: after making a number of gun-toting films that venerate the American West (ie. his Dollars Trilogy), Sergio Leone turned his cynical eye to the western.

    Yet my previous paragraph actually euphemises just how violent Blood Meridian is. Indeed, I would need to be a much better writer (indeed, perhaps McCarthy himself) to adequately outline the tone of this book. In a certain sense, you don't so much read this book in the conventional sense as you are forced to witness successive chapters of grotesque violence... all occurring for no obvious reason. It is often said that books 'subvert' a genre and, indeed, I implied as much above. But the term subvert implies a kind of Puck-like mischievousness, or brings to mind court jesters licensed to poke fun at the courtiers. By contrast, however, Blood Meridian isn't funny in the slightest. There isn't animal cruelty per se, but rather wanton negligence of another kind entirely. In fact, recalling a particular passage involving an injured horse makes me feel physically ill.

    McCarthy's prose is at once both baroque in its language and thrifty in its presentation. As Philip Connors wrote back in 2007, McCarthy “has spent forty years writing as if he were trying to expand the Old Testament,” and learning that McCarthy grew up around the Church therefore came as no real surprise. As an example of his textual frugality, I often looked for greater precision in the text, finding myself asking who a particular 'he' is, or to which side of a fight some “two men” belonged. Yet we must always remember that there is no precision to be found in a gunfight, so this infidelity is turned into a virtue. It's not that these are fair fights anyway, or even 'murder': Blood Meridian is just slaughter; pure butchery. Murder is a gross understatement for what this book depicts, and at many points we are grateful that McCarthy spares us precision.

    At others, however, we can be thankful for his exactitude. There is no ambiguity regarding the morality of the puppy-drowning Judge, for example: a Colonel Kurtz who has been given free license over the entire American south. There is, thank God, no danger of Hollywood mythologising him into a badass hero. Indeed, we must all be thankful that it is impossible to film this ultra-violent book; the broader idea of 'adapting' anything in this world is beyond sick.

    An absolutely brutal read; I cannot recommend it highly enough.

    §

    Bodies of Light (2014)

    Sarah Moss

    Bodies of Light is a 2014 book by Glasgow-born Sarah Moss on the stirrings of women's suffrage within an arty clique in nineteenth-century England. Set in the intellectually smoggy cities of Manchester and London, this poignant book follows the studiously intelligent Alethia 'Ally' Moberly who is struggling to gain the acceptance of herself, her mother and the General Medical Council. You can read my full review from July.

    §

    House of Leaves (2000)

    Mark Z. Danielewski

    House of Leaves is a remarkably difficult book to explain. Although the plot refers to a fictional documentary about a family whose house is somehow larger on the inside than the outside, this quotidian horror premise doesn't explain the complex meta-commentary that Danielewski adds on top. For instance, the book contains a large number of pseudo-academic footnotes (many of which contain footnotes themselves), with references to scholarly papers, books, films and other articles. Most of these references are obviously fictional, but it's the kind of book where the joke is that some of them are not. The format, structure and typography of the book is highly unconventional too, with extremely unusual page layouts and styles.

    It's the sort of book and idea that should be a tired gimmick but somehow isn't. This is particularly so when you realise it seems specifically designed to create a fandom around it and to manufacture its own 'cult' status, something that should be extremely tedious. But not only does this not happen, House of Leaves seems to have survived through two exhausting decades of found footage: The Blair Witch Project and Paranormal Activity are, to an admittedly lesser degree, doing much of the same thing as House of Leaves.

    House of Leaves might have its origins in Nabokov's Pale Fire or even Derrida's Glas, but it seems to have more in common with the claustrophobic horror of Cube (1997). And like all of these works, House of Leaves has an extremely strange effect on the reader or viewer, something quite unlike reading a conventional book. It wasn't so much what I got out of the book itself, but how it added a glow to everything else I read, watched or saw at the time. An experience.

    §

    Milkman (2018)

    Anna Burns

    This quietly dazzling novel from Irish author Anna Burns is full of intellectual whimsy and oddball incident. Incongruously set in 1970s Belfast during The Irish Troubles, Milkman's 18-year-old narrator (known only as “middle sister”), is the kind of dreamer who walks down the street with a Victorian-era novel in her hand.

    It's usually an error for a book to specifically mention other books, if only because inviting comparisons to great novels is grossly ill-advised. But it is a credit to Burns' writing that the references here actually add to the text and don't feel like a kind of literary paint-by-numbers. Our humble narrator has a boyfriend of sorts, but the figure who looms the largest in her life is a creepy milkman — an older, married man who's deeply integrated in the paramilitary tribalism. And when gossip about the narrator and the milkman surfaces, the milkman begins to invade her life to a suffocating degree.

    Yet this milkman is not even a milkman at all. Indeed, it's precisely this kind of oblique irony that runs through this daring but darkly compelling book.

    §

    The First Fifteen Lives of Harry August (2014)

    Claire North

    Harry August is born, lives a relatively unremarkable life and finally dies a relatively unremarkable death. Not worth writing a novel about, I suppose. But then Harry finds himself born again in the very same circumstances, and as he grows from infancy into childhood again, he starts to remember his previous lives. This loop naturally drives Harry insane at first, but after finding that suicide doesn't stop the quasi-reincarnation, he becomes somewhat acclimatised to his fate. He prospers much better at school the next time around and is ultimately able to make better decisions about his life, especially when he just happens to know how to stay out of trouble during the Second World War.

    Yet what caught my attention in this 'soft' sci-fi book was not necessarily the book's core idea but rather the way its connotations were so intelligently thought through. Just like in a musical theme and variations, the success of any concept-driven book is far more a product of how the implications of the key idea are played out than how clever the central idea was to begin with. Otherwise, you just have another neat Borges short story: satisfying, to be sure, but in a narrower way.

    From her relatively simple premise, for example, North has divined that if there was a community of people who could remember their past lives, this would actually allow messages and knowledge to be passed backwards and forwards in time. Ah, of course! Indeed, this very mechanism drives the plot: news comes back from the future that the progress of history is being interfered with, and, because of this, the end of the world is slowly coming. Through the lives that follow, Harry sets out to find out who is passing on technology before its time, and work out how to stop them.

    With its gently-moralising romp through the salient historical touchpoints of the twentieth century, I sometimes got a whiff of Forrest Gump. But it must be stressed that this book is far less certain of its 'right-on' liberal credentials than Robert Zemeckis' badly-aged film. And whilst we're on the topic of other media, if you liked the underlying conceit behind Stuart Turton's The Seven Deaths of Evelyn Hardcastle yet didn't enjoy the 'variations' of that particular tale, then I'd definitely give The First Fifteen Lives a try. At the very least, 15 is bigger than 7.

    More seriously, though, The First Fifteen Lives appears to reflect anxieties about technology, particularly around modern technological accelerationism. At no point does it seriously suggest that if we could somehow possess the technology from a decade in the future then our lives would be improved in any meaningful way. Indeed, precisely the opposite is invariably implied. To me, at least, homo sapiens often seems to be merely marking time until we can blow each other up, and destroying the climate whilst sleepwalking into some crisis that might precipitate a thermonuclear genocide sometimes seems to be built into our DNA. In an era of cli-fi fiction and our non-fiction newspaper headlines, to label North's insight as 'prescience' might perhaps be overstating it, but perhaps that is the point: this destructive and negative streak is universal to all periods of our violent, insecure species.

    §

    The Goldfinch (2013)

    Donna Tartt

    After Breaking Bad, the second biggest runaway success of 2014 was probably Donna Tartt's doorstop of a novel, The Goldfinch. Yet upon its release and popular reception, it got a significant number of bad reviews in the literary press — with, of course, an equal number of predictable think pieces claiming this was sour grapes on the part of the cognoscenti. Ah, to be in 2014 again, when our arguments were so much more trivial.

    For the uninitiated, The Goldfinch is a sprawling bildungsroman that centres on Theo Decker, a 13-year-old whose world is turned upside down when a terrorist bomb goes off whilst he is visiting the Metropolitan Museum of Art, killing his mother among other bystanders. Perhaps more importantly, he makes off with a painting in order to fulfil a promise to a dying old man: Carel Fabritius' 1654 masterpiece The Goldfinch. For the next 14 years (and almost 800 pages), the painting becomes the only connection to his lost mother as he's flung, almost entirely rudderless, around the Western world, encountering an array of eccentric characters.

    Whatever the critics claimed, Tartt's near-perfect evocation of scenes, from the everyday to the unimaginable, is difficult to summarise. I wouldn't label it 'cinematic', due to her evocation of the interiority of the characters. Take, for example: “Even the suggestion that my father had close friends conveyed a misunderstanding of his personality that I didn't know how to respond to” — it's precisely this kind of relatable inner subjectivity that cannot be easily conveyed by film, and is likely one of the main reasons why the 2019 film adaptation was such a damp squib. Tartt's writing is definitely not 'impressionistic' either: there are many near-perfect evocations of scenes, even ones we hope we cannot recognise from real life. In particular, some of the drug-taking scenes feel so credibly authentic that I sometimes worried about the author herself.

    Almost eight months on from first reading this novel, what I remember most was what a joy this was to read. I do worry that it won't stand up to a more critical re-reading (the character named Xandra even sounds like the pharmaceuticals she is taking), but I think I'll always treasure the first days I spent with this often-beautiful novel.

    §

    Beyond Black (2005)

    Hilary Mantel

    Published about five years before the hyperfamous Wolf Hall (2009), Hilary Mantel's Beyond Black is a deeply disturbing book about spiritualism and the nature of Hell, somewhat incongruously set in modern-day England. Alison Harte is a middle-aged psychic medium who works the various towns of the London orbital motorway. She is accompanied by her stuffy assistant, Colette, and her spirit guide, Morris, who is invisible to everyone but Alison.

    However, this is no gentle and musk-smelling world of the clairvoyant and mystic, for Alison is plagued by spirits from her past who infiltrate her physical world, becoming stronger and nastier every day. Alison's smiling and rotund persona thus conceals a truly desperate woman: she knows beyond doubt the terrors of the next life, yet must studiously conceal them from her credulous clients.

    Beyond Black would be worth reading for its dark atmosphere alone, but it offers much more than a chilling and creepy tale. Indeed, it is extraordinarily observant as well as unsettlingly funny about a particular tranche of British middle-class life. Still, it is the book's unnerving nature that sticks in the mind, and reading it noticeably changed my mood for days afterwards, and not necessarily for the best.

    §

    The Wall (2019)

    John Lanchester

    The Wall tells the story of a young man called Kavanagh, one of the thousands of Defenders standing guard around a solid fortress that envelops the British Isles. A national service of sorts, it is Kavanagh's job to stop the so-called Others getting in. Lanchester is frank about what his wall provides to those who stand guard: the Defenders of the Wall are conscripted for two years on the Wall, with no exceptions, giving everyone in society a life plan and a story.

    But whilst The Wall is ostensibly about a physical wall, it works even better as a story about the walls in our mind. In fact, the book blends together some of the most important issues of our time: climate change, increasing isolation, Brexit and other widening societal divisions. If you liked P. D. James' The Children of Men you'll undoubtedly recognise much of the same intellectual atmosphere, although the sterility of John Lanchester's dystopia is definitely figurative and textual rather than literal.

    Despite the final chapters perhaps not living up to the world-building of the opening, The Wall features a taut and engrossing narrative, and it rewards even the most cursory glance at its symbolism. I've yet to read something by Lanchester I haven't enjoyed (even his short essay on cheating in sports, for example) and will definitely be reading more from him in 2022.

    §

    The Only Story (2018)

    Julian Barnes

    The Only Story is the story of Paul, a 19-year-old boy who falls in love with 42-year-old Susan, a married woman with two daughters who are about Paul's age. The book begins with how Paul meets Susan in happy (albeit complicated) circumstances, but as the story unfolds, the novel becomes significantly more tragic and moving.

    Whilst the story begins from the first-person perspective, midway through the book it shifts into the second person, and, later, into the third as well. Both of these narrative changes suggested to me an attempt on the part of Paul the narrator (if not Barnes himself) to distance himself emotionally from the events taking place. This effect is a lot more subtle than it sounds, however: far more prominent and devastating is the underlying and deeply moving story about how the relationship ends up. Throughout this touching book, Barnes uses his mastery of language and observation to avoid the saccharine and the maudlin, and ends up with a heart-wrenching and emotive narrative. Without a doubt, this is the saddest book I read this year.

    31 December, 2021 04:39PM

    hackergotchi for Matthew Garrett

    Matthew Garrett

    Update on Linux hibernation support when lockdown is enabled

    Some time back I wrote up a description of my proposed (and implemented) solution for making hibernation work under Linux even within the bounds of the integrity model. It's been a while, so here's an update.

    The first is that localities just aren't an option. It turns out that they're optional in the spec, and TPMs are entirely permitted to say they don't support them. The only time they're likely to work is on platforms that support DRTM implementations like TXT. Most consumer hardware doesn't fall into that category, so we don't get to use that solution. Unfortunate, but, well.

    The second is that I'd ignored an attack vector. If the kernel is configured to restrict access to PCR 23, then yes, an attacker is never able to modify PCR 23 to be in the same state it would be if hibernation were occurring and the key certification data will fail to validate. Unfortunately, an attacker could simply boot into an older kernel that didn't implement the PCR 23 restriction, and could fake things up there (yes, this is getting a bit convoluted, but the entire point here is to make this impossible rather than just awkward). Once PCR 23 was in the correct state, they would then be able to write out a new swap image, boot into a new kernel that supported the secure hibernation solution, and have that resume successfully in the (incorrect) belief that the image was written out in a secure environment.

    This felt like an awkward problem to fix. We need to be able to distinguish between the kernel having modified the PCRs and userland having modified the PCRs, and we need to be able to do this without modifying any kernels that have already been released[1]. The normal approach to determining whether an event occurred in a specific phase of the boot process is to "cap" the PCR - extend it with a known value that indicates a transition between stages of the boot process. Any events that occur before the cap event must have occurred in the previous stage of boot, and since the final PCR value depends on the order of measurements and not just the contents of those measurements, if a PCR is capped before userland runs, userland can't fake the same PCR value afterwards. If Linux capped a PCR before userland started running, we'd be able to place a measurement there before the cap occurred and then prove that that extension occurred before userland had the opportunity to interfere. We could simply place a statement that the kernel supported the PCR 23 restrictions there, and we'd be fine.
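
    To make the capping argument concrete, here is a toy simulation in Python (emphatically not kernel code, and simplified: a real TPM extends a PCR with a caller-supplied digest rather than hashing the data itself). Because each extend folds the previous value into the new one, the final PCR value encodes the order of measurements, so a measurement recorded before the cap cannot be reproduced by extending after it:

        import hashlib

        def extend(pcr, data):
            # Model of a TPM extend: new value = H(old value || H(data))
            return hashlib.sha256(pcr + hashlib.sha256(data).digest()).digest()

        ZERO = b"\x00" * 32  # PCRs are reset to zero at boot

        # Boot A: the kernel records its assertion, then the PCR is capped.
        good = extend(extend(ZERO, b"kernel restricts PCR 23"), b"cap")

        # Boot B: userland tries to fake the same state after the cap.
        fake = extend(extend(ZERO, b"cap"), b"kernel restricts PCR 23")

        assert good != fake  # the final value depends on measurement order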

    Unfortunately Linux doesn't currently do this, and adding support for doing so doesn't fix the problem - if an attacker boots a kernel that doesn't cap a PCR, they can just cap it themselves from userland. So, we're faced with the same problem: booting an older kernel allows the system to be placed in an identical state to the current kernel, and a fake hibernation image can be written out. Solving this required a PCR that was being modified after kernel code was running, but before userland was started, even with existing kernels.

    Thankfully, there is one! PCR 5 is defined as containing measurements related to boot management configuration and data. One of the measurements it contains is the result of the UEFI ExitBootServices() call. ExitBootServices() is called at the transition from the UEFI boot environment to the running OS, and the kernel contains code that executes before it. So, if we measure an assertion regarding whether or not we support restricted access to PCR 23 into PCR 5 before we call ExitBootServices(), this will prevent userspace from spoofing us (because userspace will only be able to extend PCR 5 after the firmware extended PCR 5 in response to ExitBootServices() being called). Obviously this depends on the firmware actually performing the PCR 5 extension when ExitBootServices() is called, but if firmware's out of spec then I don't think there's any real expectation of it being secure enough for any of this to buy you anything anyway.

    My current tree is here, but there's a couple of things I want to do before submitting it, including ensuring that the key material is wiped from RAM after use (otherwise it could potentially be scraped out and used to generate another image afterwards) and, uh, actually making sure this works (I no longer have the machine I was previously using for testing, and switching my other dev machine over to TPM 2 firmware is proving troublesome, so I need to pull another machine out of the stack and reimage it).

    [1] The linear nature of time makes feature development much more frustrating


    31 December, 2021 03:36AM

    Reproducible Builds (diffoscope)

    diffoscope 198 released

    The diffoscope maintainers are pleased to announce the release of diffoscope version 198. This version includes the following changes:

    [ Chris Lamb ]
    * Support showing "Ordering differences only" within .dsc field values.
      (Closes: #1002002, reproducible-builds/diffoscope#297)
    * Support OCaml versions 4.11, 4.12 and 4.13. (Closes: #1002678)
    * Add support for XMLb files. (Closes: reproducible-builds/diffoscope#295)
    * Also add, for example, /usr/lib/x86_64-linux-gnu to our internal PATH.
    
    [ Mattia Rizzolo ]
    * Also recognize "GnuCash file" files as XML.
    

    You can find out more by visiting the project homepage.

    31 December, 2021 12:00AM

    December 30, 2021

    hackergotchi for Chris Lamb

    Chris Lamb

    Favourite books of 2021: Non-fiction

    As a follow-up to yesterday's post listing my favourite memoirs and biographies I read in 2021, today I'll be outlining my favourite works of non-fiction.

    Books that just missed the cut include: The Unusual Suspect by Ben Machell for its thrilleresque narrative of a modern-day Robin Hood (and, if you get to the end, a completely unexpected twist); Paul Fussell's Class: A Guide Through the American Status System as an amusing chaser of sorts to Kate Fox's Watching the English; John Carey's Little History of Poetry for its exhilarating summation of almost four millennia of verse; David Graeber's Debt: The First 5000 Years for numerous historical insights, not least its rejoinder to our dangerously misleading view of ancient barter systems; and, although I didn't treasure everything about it, I won't hesitate to gift Pen Vogler's Scoff to a number of friends over the next year. The weakest book of non-fiction I read this year was undoubtedly Roger Scruton's How to Be a Conservative: I much preferred The Decadent Society by Ross Douthat for my yearly ration of the 'intellectual right'.

    I also very much enjoyed reading a number of classic texts from academic sociology, but they are difficult to recommend or even summarise. These included One-Dimensional Man by Herbert Marcuse, Postmodernism: Or, the Cultural Logic of Late Capitalism by Fredric Jameson and The Protestant Ethic and the Spirit of Capitalism by Max Weber. 'These are heavy books', remarks John Proctor in Arthur Miller's The Crucible...

    All round-up posts for 2021: Memoir/biography, Non-fiction (this post) & Fiction (coming soon).

    §

    Hidden Valley Road (2020)

    Robert Kolker

    A compelling and disturbing account of the Galvin family – six of whom were diagnosed with schizophrenia – which details a journey through the study and misunderstanding of the condition. The story of the Galvin family offers a parallel history of the science of schizophrenia itself, from the era of institutionalisation, lobotomies and the 'schizo mother', to the contemporary search for genetic markers for the disease... all amidst fundamental disagreements about the nature of schizophrenia and, indeed, of all illnesses of the mind. Samples of the Galvins' DNA informed decades of research which, curiously, continues to this day, potentially offering paths to treatment, prediction and even eradication of the disease, although on this last point I fancy that I detect a kind of neo-Victorian hubris that we alone will be the ones to find a cure. Either way, a gentle yet ultimately tragic view of a curiously 'American' family, where the inherent lack of narrative satisfaction brings a frustration and sadness of its own.

    §

    Islands of Abandonment: Life in the Post-Human Landscape (2021)

    Cal Flyn

    In this disarmingly lyrical book, Cal Flyn addresses the twin questions of what happens after humans are gone and how far our damage to nature can be undone. From the forbidden areas of post-war France to the mining regions of Scotland, Islands of Abandonment explores the extraordinary places where humans no longer live in an attempt to give us a glimpse into what happens when mankind's impact on nature is, for one reason or another, forced to stop. Needless to say, if anxieties in this area are not curdling away in your subconscious mind, you are probably in some kind of denial.

    Through a journey into desolate, eerie and ravaged areas in the world, this artfully-written study offers profound insights into human nature, eschewing the usual dry sawdust of Wikipedia trivia. Indeed, I summed it up to a close friend remarking that, through some kind of hilarious administrative error, the book's publisher accidentally dispatched a poet instead of a scientist to write this book. With glimmers of hope within the (mostly) tragic travelogue, Islands of Abandonment is not only a compelling read, but also a fascinating insight into the relationship between Nature and Man.

    §

    The Anatomy of Fascism (2004)

    Robert O. Paxton

    Everyone is absolutely sure they know what fascism is... or at least they feel confident choosing from a buffet of features to suit the political mood. To be sure, this is not a new phenomenon: even as 'early' as 1946, George Orwell complained in Politics and the English Language that “the word Fascism has now no meaning except in so far as it signifies ‘something not desirable’.” Still, it has proved uncommonly hard to define the core nature of fascism and what differentiates it from related political movements. This is still of great significance in the twenty-first century, for the definition ultimately determines where the powerful label of 'fascist' can be applied today.

    Part of the enjoyment of reading this book was having my own cosy definition thoroughly dismantled and replaced with a robust system of abstractions and common themes. This is achieved through a study of the intellectual origins of fascism and how it played out in the streets of Berlin, Rome and Paris. Moreover, unlike Strongmen (see above), fascisms that failed to gain meaningful power are analysed too, including Oswald Mosley's British Union of Fascists. Curiously enough, Paxton's own definition of fascism is left to the final chapter, and by the time you reach it, you get an anti-climactic feeling of it being redundant. Indeed, whatever it actually is, fascism is really not quite like any other 'ism' at all, so to try and classify it like one might be a mistake.

    In his introduction, Paxton warns that many of those infamous images associated with fascism (eg. Hitler in Triumph of the Will, Mussolini speaking from a balcony, etc.) have the ability to “induce facile errors” about the fascist leader and the apparent compliance of the crowd. (Contemporary accounts often record how sceptical the common man was of the leader's political message, even if he was transfixed by the oratorical bombast.) As it happens, I believe I had something of an advantage in reading this via audiobook, completely avoiding the re-absorption of these iconic images. To me, this was an implicit reminder that, however you choose to reduce it to a definition, fascism is undoubtedly the most visual of all political forms, presenting itself to us in vivid and iconic primary images: ranks of disciplined marching youths; coloured-shirted militants beating up members of demonised minorities; the post-war pictures from the concentration camps...

    Still, regardless of how you choose to read it, The Anatomy of Fascism is a powerful book that can teach a great deal about fascism in particular and history in general.

    §

    What Good are the Arts? (2005)

    John Carey

    What Good are the Arts? takes a delightfully sceptical look at the nature of art, and cuts through the sanctimony and cant that inevitably surround it. It begins by revealing the flaws in lofty aesthetic theories and, along the way, debunks the claims that art makes us better people. The arts may certainly bring joy into your life, but by no means do they make you automatically virtuous. Carey also rejects the entire enterprise of separating things into those that are art and those that are not, making a thoroughly convincing case that there is no transcendental category containing so-called 'true' works of art.

    But what is perhaps equally important to what Carey is claiming is the way he does all this. As in, this is an extremely enjoyable book to read, with not only a fine sense of pace and language, but a devilish sense of humour as well. To be clear, What Good are the Arts? is no crotchety monograph: Leo Tolstoy's What Is Art? (1897) is hilarious to read in similar ways, but you can't avoid feeling that its cantankerous tone holds Tolstoy's argument back. By contrast, Carey makes his argument in a playful sort of manner, in a way that made me slightly sad to read other polemics throughout the year. It's definitely not that modern genre of boomer jeremiad about the young, political correctness or, heaven forbid, 'cancel culture'... which, incidentally, made Carey's 2014 memoir, The Unexpected Professor, something of a disappointing follow-up.

    Just for fun, Carey later undermines his own argument by arguing at length for the value of one art in particular. Literature, Carey asserts, is the only art capable of reasoning and the only art with the “ability to criticise.” Perhaps so, and Carey spends a chapter or so contending that fiction has the exclusive power to inspire the mind and move the heart towards practical ends... or at least far better than any work of conceptual art.

    Whilst reading this book I found myself taking down innumerable quotations and laughing at the jokes far more than I disagreed. And the sustained and intellectual style of polemic makes this a pretty strong candidate for my favourite overall book of the year.

    30 December, 2021 10:01PM

    December 29, 2021

    Noah Meyerhans

    When You Could Hear Security Scans

    Have you ever wondered what a security probe of a computer sounded like? I’d guess probably not, because on the face of it that doesn’t make a whole lot of sense. But there was a time when I could very clearly discern the sound of a computer being scanned. It sounded like a small mechanical heart beat: Click-click… click-click… click-click…

    Prior to 2010, I had a computer under my desk with what at the time were not unheard-of properties: Its storage was based on a stack of spinning metal platters (a now-antiquated device known as a “hard drive”), and it had a publicly routable IPv4 address with an unfiltered connection to the Internet. Naturally it ran Linux and an ssh server. As was common in those days, service logging was handled by a syslog daemon. The syslog daemon would sort log messages based on various criteria and record them somewhere. In most simple environments, “somewhere” was simply a file on local storage. When writing to a local file, syslog daemons can be optionally configured to use the fsync() system call to ensure that writes are flushed to disk. Practically speaking, what this meant is that a page of disk-backed memory would be written to the disk as soon as an event occurred that triggered a log message. Because of potential performance implications, fsync() was not typically enabled for most log files. However, due to the more sensitive nature of authentication logs, it was often enabled for /var/log/auth.log.
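
    As an illustration, in the classic sysklogd configuration syntax a sync was requested after every write unless the file name was prefixed with “-” (a purely illustrative sketch; rsyslog later made buffered writes the default):

        # /etc/syslog.conf (sysklogd syntax; illustrative)
        # No leading "-" on the path: flush to disk after every message.
        auth,authpriv.*         /var/log/auth.log
        # Leading "-": writes may sit in the buffer cache for a while.
        mail.*                  -/var/log/mail.log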

    In the first decade of the 2000s, there was a fairly unsophisticated worm loose on the Internet that would probe sshd with some common username/password combinations. The worm would pause for a second or so between login attempts, most likely in an effort to avoid automated security responses. The effect was that a system being probed by this worm would generate a disk write every second, with a very distinct audible signature from the hard drive.

    I think this situation is a fun demonstration of a side-channel data leak. It's primitive and doesn't leak very much information, but it was certainly enough to make some inference about the state of the system in question. Of course, side-channel leakage issues have been a concern for ages, but I like this one for its simplicity. It was something that could be explained and demonstrated easily, even to somebody with a relatively limited understanding of “how computers work”, unlike, for instance, measuring electromagnetic emanations from CPU power management units.

    For a different take on the sounds of a computing infrastructure, Peep (The Network Auralizer) won an award at a USENIX conference long, long ago. I’d love to see a modern deployment of such a system. I’m sure you could build something for your cloud deployment using something like AWS EventBridge or Amazon SQS fairly easily.

    For more on research into actual real-world side-channel attacks, you can read A Survey of Microarchitectural Side-channel Vulnerabilities, Attacks and Defenses in Cryptography or A Survey of Electromagnetic Side-Channel Attacks and Discussion on their Case-Progressing Potential for Digital Forensics.

    29 December, 2021 05:49PM by Noah Meyerhans (frodo+blog@morgul.net)

    hackergotchi for Chris Lamb

    Chris Lamb

    Favourite books of 2021: Memoir/biography

    Just as I did for 2020, I won't publicly disclose exactly how many books I read in 2021, but they evidently provoked enough thoughts that I felt it worth splitting my yearly writeup into separate posts. I will reveal, however, that I got through more books than the previous year, and, like before, I enjoyed the books I read this year even more in comparison as well.

    How much of this is due to refining my own preferences over time, and how much can be ascribed to feeling less pressure to read particular books? It's impossible to say, and the question is complicated further by the fact that I found many of the classics I read well worthy of their entry into the dreaded canon.

    But enough of the throat-clearing. In today's post I'll be looking at my favourite books filed under memoir and biography, in no particular order.

    Books that just missed the cut here include: Bernard Crick's celebrated 1980 biography of George Orwell, if nothing else because it was a pleasure to read; Hilary Mantel's exhilaratingly bitter early memoir, Giving up the Ghost (2003); and Patricia Lockwood's hilarious Priestdaddy (2017). I also had a soft spot for Tim Kreider's We Learn Nothing (2012) as well, despite not knowing anything about the author in advance, likely a sign of good writing. The strangest book in this category I read was definitely Michelle Zauner's Crying in H Mart. Based on a highly-recommended 2018 essay in the New Yorker, its rich broth of genuine yearning for a departed mother made my eyebrows raise numerous times when I encountered inadvertent extra details about Zauner's relationships.

    §

    Beethoven: A Life in Nine Pieces (2020)

    Laura Tunbridge

    Whilst it might immediately present itself as a clickbait conceit, organising an overarching narrative around just nine compositions by Beethoven turns out to be an elegant way of saying something fresh about this grizzled old bear. Some of Beethoven's most famous compositions are naturally included in the nine (eg. the Eroica and the Hammerklavier piano sonata), but the book raises itself above conventional Beethoven fare when it highlights, for instance, his Septet, Op. 20, an early work that is virtually nobody's favourite Beethoven piece today. The insight here is that it was widely popular in its time, “played again and again around Vienna for the rest of his life.” No doubt many contemporary authors can relate to this inability to escape being artistically haunted by an earlier runaway success.

    The easiest way to say something interesting about Beethoven in the twenty-first century is to talk about the myth of Beethoven instead. Or, as Tunbridge implies, perhaps that should really be 'Beethoven' in leaden quotation marks, given how much of what we think we know about the man is a quasi-fictional construction. Take Anton Schindler, Beethoven's first biographer and occasional amanuensis, who destroyed and fabricated details about Beethoven's life, casting himself in a favourable light and exaggerating his influence with the composer. Only a few decades later, the idea of a 'heroic' German was to be politically useful as well; the Anglosphere often needs reminding that Germany did not exist as a nation-state prior to 1871, so it should be unsurprising to us that the late nineteenth century saw a determined attempt to create a uniquely 'German' culture ex nihilo. (And the less we say about Immortal Beloved the better, even though I treasure that film.) Nevertheless, Tunbridge cuts through Beethoven's substantial legacy with surgical precision that not only avoids feeling like it is settling a score, but also does so in a way that is unlikely to completely alienate anyone emotionally dedicated to some already-established idea of the man, or to bring forth the tediously predictable sentiment that Beethoven has 'gone woke'.

    With Alex Ross on the cult of Wagner, it seems that books about the 'myth of X' are somewhat in vogue right now. And this pattern within classical music might fit into some broader trend of deconstruction in popular non-fiction too, especially when we consider the numerous contemporary books on the long hangover of the Civil Rights era (Robin DiAngelo's White Fragility, etc.), the multifarious ghosts of Empire (Akala's Natives, Sathnam Sanghera's Empireland, etc.) or even the 'transmogrification' of George Orwell into myth.

    But regardless of its place in some wider canon, A Life in Nine Pieces is beautifully printed in hardback form (worth acquiring for that very reason alone), and it is one of the rare good books about classical music that can be recommended to both the connoisseur and the layperson alike.

    §

    Sea State (2021)

    Tabitha Lasley

    In her mid-30s and jerking herself out of a terrible relationship, Tabitha Lasley left London and put all her savings into a six-month lease on a flat within a questionable neighbourhood in Aberdeen, Scotland. She left to make good on a lukewarm idea for a book about oil rigs and the kinds of men who work on them: “I wanted to see what men were like with no women around,” she claims. The result is Sea State, a forthright examination of the life of North Sea oil riggers, and an unsparing portrayal of loneliness, masculinity, female desire and the decline of industry in Britain. (It might almost be said that Sea State is an update of a sort to George Orwell's visit to the mines in the North of England.)

    As bracing as the North Sea air, Sea State spoke to me on multiple levels, but I found it additionally interesting to compare and contrast with Julian Barnes' The Man in the Red Coat (see below). Women writers are rarely thought to be using fiction for higher purposes: it is assumed that, unlike men, whatever women commit to paper is confessional without any hint of artfulness. Indeed, it seems to me that the reaction against the decades-old genre of autofiction only really took hold when it became the domain of millennial women. (By contrast, as a 75-year-old male writer with a firmly established reputation in the literary establishment, Julian Barnes is allowed wide latitude in what he does with his sources, and his writing can be imbued with supremely confident airs as a result.) Furthermore, women are rarely allowed metaphor or exaggeration for dramatic effect, and they certainly aren’t permitted to emphasise darker parts in order to explore them... hence some of the transgressive gratification of reading Sea State.

    Sea State is admittedly not a work of autofiction, but the sense that you are reading about an author writing a book is pleasantly unavoidable throughout. It frequently returns to the topic of oil workers who live multiple lives, and Lasley admits to living two lives herself: she may be in love but she's also on assignment, and a lot of the pleasure in this candid and remarkably accessible book lies in the way these states become slowly inseparable.

    §

    Twilight of Democracy (2020)

    Anne Applebaum

    For the uninitiated, Anne Applebaum is a staff writer for The Atlantic magazine who won a Pulitzer Prize for her 2004 book on the Soviet Gulag system. Her latest book, Twilight of Democracy, however, is part memoir and part political analysis, and discusses democratic decline and the rise of right-wing populism. This, according to Applebaum, displays distinctly “authoritarian tendencies,” and who am I to disagree? Applebaum makes her case through three main case studies (Poland, the United Kingdom and the United States), but the book also touches on Hungary as well.

    The strongest feature of this engaging book is that Applebaum's analysis focuses on the intellectual classes and how they provide significant justification for a descent into authoritarianism. This is always an important point to remember, especially as much of the folk understanding of the rise of authoritarian regimes tends to place exaggerated responsibility on the ordinary and everyday citizen: the blame placed on the working class in the Weimar Republic or the scorn heaped upon the 'white trash' of the contemporary Rust Belt, for example.

    Applebaum is uniquely poised to discuss these intellectuals because, well, she actually knows a lot of them personally. Or at least, she used to know them. Indeed, the narrative of the book revolves around two parties she hosted, both in the same house in northwest Poland. The first party, on 31 December 1999, was attended by friends from around the Western world, but most of the guests were Poles from the broad anti-communist alliance. They all agreed about democracy, the rule of law and the route to prosperity whilst toasting in the new millennium. (I found it amusing to realise that War and Peace also starts with a party.) But nearly two decades later, many of the attendees have ended up as supporters of the problematic 'Law and Justice' party which currently governs the country. Applebaum would now “cross the road to avoid them,” and they would do the same to her, let alone exchange pleasantries at a cordial reception. The result of this autobiographical detail is that, by personalising the argument, Applebaum avoids the trap of making too much of a high-minded abstract argument for 'democracy', and additionally makes her book compellingly spicy too.

    Yet the strongest part of this book is also its weakest. By individualising the argument, it often feels that Applebaum is settling a number of personal scores. She might be very well justified in doing this, but at times it feels like the reader has walked in halfway through some personal argument and is being asked to judge who is in the right. Furthermore, Applebaum's account of contemporary British politics sometimes deviates into the cartoonish: nothing was egregiously incorrect in any of her summations, but her explanation of the Brexit referendum result didn't read as completely sound.

    Nevertheless, this is a lively and entertaining book that can be read with profit, even if you disagree with significant portions of it, and its highly personal approach makes it a refreshing change from similar contemporary political analysis (eg. David Runciman's How Democracy Ends) which reaches for a more 'objective' line.

    §

    The Man in the Red Coat (2019)

    Julian Barnes

    As rich as the eponymous red coat that adorns its cover, Julian Barnes' quasi-biography of French gynaecologist Samuel-Jean Pozzi (1846–1918) is at once illuminating, perplexing and downright hilarious. Yet even that short description is rather misleading, for this book evades classification in any number of ways.

    For instance, it is unclear whether, with the biographer's narrative voice so obviously manifest, it is even a biography in the useful sense of the word. After all, doesn't the implied pact between author and reader require the biographer to at least pretend that they are hiding from the reader? Perhaps this is just what happens when an author of very fine fiction turns his hand to non-fiction history, and, if so, it represents a deeper incursion into enemy territory after his 1984 metafictional Flaubert's Parrot. Indeed, upon encountering an intriguing mystery in Pozzi's life crying out for a solution, Barnes baldly turns to the reader, winks and states: “These matters could, of course, be solved in a novel.” Well, quite.

    Perhaps Barnes' broader point is that, given that it's impossible for the author to completely melt into air, why not simply put down your cards and have a bit of fun whilst you're at it? If there's any biography that makes the case for a “rambling and lightly polemical” treatment, then it is this one.

    Speaking of having fun, however, two qualities you do not expect in a typical biography are simply how witty it can be, as well as it having something of the whiff of the thriller about it. A bullet might be mentioned in an early chapter, but given that the name and history of Monsieur Pozzi are not widely known, one is unlikely to learn how he lived his final years until the closing chapters. (Or what happened to that turtle.) Humour is incorporated into the book in two main ways: first, by explicitly citing the various wits of the day (“What is a vice? Merely a taste you don’t share.” etc.), but perhaps more powerful are the gentle ironies, bons mots and observations in Barnes' entirely unflappable prose style, along with the satire implicit in him writing this moreish pseudo-biography to begin with.

    The opening page, with its steadfast refusal to even choose where to begin, is somewhat characteristic of Barnes' method, so if you don't enjoy the first few pages then you are unlikely to like the rest. (Indeed, the whole enterprise may be something of an acquired taste. Like Campari.) For me, though, I was left wryly grinning and often couldn't wait to turn the page. Indeed, at times it reminded me of being at a dinner party with an extremely charming guest at the very peak of his form as a wit and raconteur, delighting the party with his rambling yet well-informed discourse on his topic du jour. A significant book, and a book of significance.

    29 December, 2021 05:46PM

    Thorsten Alteholz

    Fun with Telescopes

    Recently I purchased a small telescope to look at solar spots. When choosing a mount, I checked whether it can be controlled with OSS.

    In this context INDI is mentioned everywhere, and my desired mount was supported. indi and kstars are already part of Debian, so “apt install”, selecting my mount… oh, wait, the menu shows it, but I cannot select it.

    Ok, that was the time when I learned about the difference between indi and indi-3rdparty. The indi package just contains a few drivers; others are available from a different repository. This split has been done either due to different release cycles of the drivers, due to a different OSS license, or just due to binary blobs without source being part of some drivers.

    Fine, there are already packages of the 3rdparty repository available from an Ubuntu PPA, so it should be no problem to add them to Debian as well.

    Some manufacturers freely distribute at least the specification of their API so that others are able to write the corresponding software. Some manufacturers even write their own driver. Examples are:

    • Skywatcher (mounts that can be controlled by the Skywatcher protocol)
    • Shelyak to control some spectrographs
    • Radio Astronomy Supplies’ SpectraCyber hydrogen line spectrometer
    • Vixen, for controlling Vixen Starbook and Vixen Starbook Ten
    • Starlight Express, SX CCDs, SX wheel and SX Active Optics

    A minority actually do not ship binary blobs but distribute the sources of their software. Unfortunately they use licenses that are not compatible with the DFSG, so those packages still need to go to non-free. Examples are:

    • Finger Lakes Instrumentation (FLI), L.L.C.
    • Lunatico Astronomia
    • Astrojolo
    • Astromechanics

    But there also seem to be lots of manufacturers of astronomical accessories, especially cameras, that just distribute some binary blobs to talk to their hardware. This is sad, but at the moment it is just the way it is, and such packages need to go to non-free.

    Luckily their blobs are accompanied by corresponding licenses. At least those manufacturers understand how software licenses work, and packaging their SDKs is straightforward. Examples are:

    • SBIG Astronomical Instruments
    • Moravian Instruments Inc.
    • Player One Astronomy

    However, when looking at the license information of some Ubuntu packages, several of them were distributed under a CC license. This is not a common license for software, so I wanted confirmation that this information is correct.

    Unfortunately most such manufacturers don’t want to disclose their licenses. For whatever reason they distribute their tarballs without any hint, and emails to their support channels are simply ignored. Examples of such bad behaviour are:

    • Altaircam
    • QSI (was bought by Atik)
    • Atik
    • Touptek
    • QHY

    However, the best answer came from Levenhuk support. My question about the license of their SDK was answered with:

    I am afraid we cannot disclose any further information except the software file that is available on our website.

    So strictly speaking nobody is allowed to use their software. I wonder whether such competence also becomes visible in their products. I will never really know, as there are more than enough OSS-friendly manufacturers available.

    Anyway, most of the indi-3rdparty drivers are now available and I got lots of suggestions about hardware I need to buy in the future :-).

    29 December, 2021 12:51PM by alteholz

    December 27, 2021

    hackergotchi for Steinar H. Gunderson

    Steinar H. Gunderson

    USB-C shenanigans

    At some point, my phone stopped taking charge (over USB-C) from one charger, but not the other—it would briefly say “charging”, then drop it, wait a few seconds, and then try again in an infinite loop. However, charging every night on the included charger worked fine, so I wasn't that worried.

    Naturally, I blamed the charger. But the other day, I noticed what was going on: if I flipped the cable, it would charge with absolutely no problems at all! And similarly, if I flipped the cable for the other charger, I'd see the issue. So it was merely the twisting of each cable that made the most natural insertion orientation the correct one for the included charger, and the wrong one for the other. (USB-C has some special logic to detect and deal with this “reversed” polarity. I guess it's somehow broken in my phone.)

    Moral: Unknown.

    27 December, 2021 01:31PM

    Abiola Ajadi

    Outreachy-Everyone Struggles!

    Hello Everyone!

    Three weeks into my internship and it’s been great so far, with awesome mentors. I am currently learning a new language, Ruby, and this is the perfect time to remind myself that everyone struggles! I struggled a bit getting familiar with the codebase and pushing my first merge request during the internship. I won’t say I have a perfect understanding of how everything works, but I am learning.

    What is Debci?

    Debci stands for Debian Continuous Integration. Before I move on, what is continuous integration? According to Atlassian: ‘Continuous integration (CI) is the practice of automating the integration of code changes from multiple contributors into a single software project. It’s a primary DevOps best practice, allowing developers to frequently merge code changes into a central repository where builds and tests then run. Automated tools are used to assert the new code’s correctness before integration.’

    From the official documentation: the Debian continuous integration (debci) is an automated system that coordinates the execution of automated tests against packages in the Debian system.

    Debci exists to make sure packages still work after an update. It does this by testing all of the packages that have tests written for them, to make sure they work and nothing is broken. It has a UI for developers to see whether the tests pass or not.
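
    (As a sketch of what those tests look like: packages declare them in the autopkgtest format, roughly as below. The package name and test body here are made up purely for illustration:)

        # debian/tests/control
        Tests: smoke
        Depends: @

        # debian/tests/smoke -- an executable test script
        #!/bin/sh
        set -e
        mypackage --version   # "mypackage" is a placeholder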

    Progress

    This week I learnt how to write automated tests and how to squash multiple commits into a single commit, as sketched below.
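
    (For the squashing part, the usual workflow is an interactive rebase; the commit count below is only an example:)

        # combine the last three commits into a single commit
        git rebase -i HEAD~3
        # in the editor, keep "pick" on the first commit, change the others
        # to "squash" (or "s"), save, then edit the combined commit message
        git push --force-with-lease   # if the branch was already pushed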

    27 December, 2021 12:00AM by Abiola Ajadi (briannaajadi03@gmail.com)

    December 26, 2021

    Vincent Bernat

    Custom screen saver with XSecureLock

    i3lock is a popular X11 screen lock utility. As far as customization goes, it only allows one to set a background from a PNG file. This limitation is part of the design of i3lock: its primary goal is to keep the screen locked, something difficult enough with X11. Each additional feature would increase the attack surface and move away from this goal.1 Many are frustrated with these limitations and extend i3lock through simple wrapper scripts or by forking it.2 The first solution is usually safe, but the second goes against the spirit of i3lock.

    XSecureLock is a less-known alternative to i3lock. One of the most attractive features of this locker is that it delegates the screen saver feature to another process. This process can be anything as long as it can attach to an existing window provided by XSecureLock, which won’t pass any input to it. XSecureLock will also put a black window below it to ensure the screen stays locked in case of a crash.

    XSecureLock is shipped with a few screen savers, notably one using mpv to display photos or videos, like the Apple TV aerial videos. I have written my own saver using Python and GTK.3 It shows a background image, a clock, and the current weather.4
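
    (To make the mechanism concrete, here is a minimal saver sketch using python-xlib rather than GTK. It assumes the XSCREENSAVER_WINDOW environment variable through which XSecureLock hands savers the window to draw on; everything else is deliberately simplified:)

        import os
        import time

        from Xlib import display

        dpy = display.Display()
        # XSecureLock passes the window to attach to in this variable.
        win = dpy.create_resource_object(
            "window", int(os.environ["XSCREENSAVER_WINDOW"], 0))
        gc = win.create_gc(foreground=dpy.screen().white_pixel,
                           background=dpy.screen().black_pixel)

        while True:  # redraw a rough centred clock once per second
            geom = win.get_geometry()
            win.clear_area(0, 0, geom.width, geom.height)
            win.draw_text(gc, geom.width // 2 - 20, geom.height // 2,
                          time.strftime("%H:%M:%S"))
            dpy.flush()
            time.sleep(1)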

    Custom screen saver for XSecureLock, displaying a clock and the current weather

    I add two patches over XSecureLock:

    • Sleep before mapping screen saver window. This patch prevents a flash of black when starting XSecureLock by waiting a bit for the screen saver to be ready before displaying it. As I am also using a custom dimmer fading to the expected background before locking, the flash of black was quite annoying for me. I have high hopes this patch will be accepted upstream.
    • Do not mess with DPMS/blanking. This patch prevents XSecureLock from blanking the screen. I think this is solely the role of the X11 DPMS extension. This makes the code simpler. I am unsure if this patch would be accepted by upstream.

    XSecureLock also delegates the authentication window to another process, but I was less comfortable providing a custom one as it is a bit more security-sensitive. While basic, the shipped authentication application is fine by me.

    I think people should avoid modifying i3lock code and use XSecureLock instead. I hope this post will help a bit.

    Update (2022-01)

    XScreenSaver can also run arbitrary programs as a screen saver.


    1. See for example this comment or this one explaining the rationale. ↩︎

    2. This Reddit post enumerates many of these alternatives. ↩︎

    3. Using GTK makes it a bit difficult to use some low-level features, like embedding an application into an existing window. However, the high-level features are easier, notably drawing an image and a text with a shadow. ↩︎

    4. Weather is retrieved by another script running on a timer and written to a file. The screen saver watches this file for updates. ↩︎

    26 December, 2021 12:39PM by Vincent Bernat