July 23, 2021

hackergotchi for Evgeni Golov

Evgeni Golov

It's not *always* DNS

Two weeks ago, I had the pleasure of playing with Foreman's Kerberos integration and ironing out a few long-standing kinks.

It all started with a user reminding us that Kerberos authentication is broken when Foreman is deployed on CentOS 8, as mod_auth_kerb is no longer available there. Given mod_auth_kerb hasn't seen a release since 2013, this is quite understandable. Thankfully, there is a replacement available, mod_auth_gssapi. Even better, it's available in CentOS 7 and 8 and in Debian and Ubuntu too!

So I quickly whipped up a PR to completely replace mod_auth_kerb with mod_auth_gssapi in our installer and successfully tested that it still works in CentOS 7 (even if upgrading from a mod_auth_kerb installation) and CentOS 8.

Yay, the issue at hand seemed fixed. But just writing a post about that would've been boring, huh?

Well, and then I dared to test the same on Debian…

Turns out, our installer was using the wrong path to the Apache configuration and the wrong username Apache runs under while trying to set up Kerberos, so it could never have worked. Luckily Ewoud and I were able to fix that too. And yet the installer was still unable to fetch the keytab from my FreeIPA server 😿

Let's dig deeper! To fetch the keytab, the installer does roughly this:

# kinit -k
# ipa-getkeytab -k http.keytab -p HTTP/foreman.example.com

And if one executes that by hand to see the actual error, one gets:

# kinit -k
kinit: Cannot determine realm for host (principal host/foreman@)

Well, yeah, the principal looks kinda weird (no realm), and the interwebs say the following about "kinit: Cannot determine realm for host":

  • Kerberos cannot determine the realm name for the host. (Well, duh, that's what it said?!)
  • Make sure that there is a default realm name, or that the domain name mappings are set up in the Kerberos configuration file (krb5.conf)

And guess what, all of these are perfectly set by ipa-client-install when joining the realm…
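For reference, here is roughly what ipa-client-install leaves behind in /etc/krb5.conf (a trimmed sketch, with the realm and domain replaced by example values):

    [libdefaults]
      default_realm = EXAMPLE.COM
      dns_lookup_realm = true
      dns_lookup_kdc = true

    [domain_realm]
      .example.com = EXAMPLE.COM
      example.com = EXAMPLE.COM

So the realm mapping itself really shouldn't be the problem here.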

But there must be something, right? Looking at the principal in the error, it's missing both the domain of the host and the realm. I was pretty sure that my DNS and config were right, but what about gethostname(2)?

# hostname
foreman

Bingo! Let's see what happens if we force that to be an FQDN.

# hostname foreman.example.com
# kinit -k

NO ERRORS! NICE!

We're doing science here, right? And I still have the CentOS 8 box I had for the previous round of tests. What happens if we set that to have a shortname? Nothing. It keeps working fine. And what about CentOS 7? VMs are cheap. Well, that breaks just like Debian if we force the hostname to be short. Interesting.

Is it a version difference between the systems?

  • Debian 10 has krb5 1.17-3+deb10u1
  • CentOS 7 has krb5 1.15.1-50.el7
  • CentOS 8 has krb5 1.18.2-8.el8

So, something changed in 1.18?

Looking at the krb5 1.18 changelog, the following entry jumps out: Expand single-component hostnames in host-based principal names when DNS canonicalization is not used, adding the system's first DNS search path as a suffix.

Given Debian 11 has krb5 1.18.3-5 (well, testing has, so let's pretend bullseye will too), we can retry the experiment there, and it works with both short and full hostnames. So yeah, it seems krb5 "does the right thing" since 1.18; before that, gethostname(2) must return an FQDN.
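For anyone hitting this on an older krb5, the workaround is simply to make gethostname(2) return the FQDN. A minimal sketch of one way to do that persistently on a systemd-based system (the hostname is of course an example):

# hostnamectl set-hostname foreman.example.com
# kinit -k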

I've documented that for our users and can now sleep a bit better. At least, it wasn't DNS, right?!

Btw, FreeIPA won't be in bullseye, which makes me a bit sad, as that means that Foreman won't be able to automatically join FreeIPA realms if deployed on Debian 11.

23 July, 2021 06:36PM by evgeni

July 22, 2021

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (May and June 2021)

The following contributors got their Debian Developer accounts in the last two months:

  • Timo Röhling (roehling)
  • Patrick Franz (deltaone)
  • Christian Ehrhardt (paelzer)
  • Fabio Augusto De Muzio Tobich (ftobich)
  • Taowa (taowa)
  • Félix Sipma (felix)
  • Étienne Mollier (emollier)
  • Daniel Swarbrick (dswarbrick)
  • Hanno Wagner (wagner)

The following contributors were added as Debian Maintainers in the last two months:

  • Evangelos Ribeiro Tzaras
  • Hugh McMaster

Congratulations!

22 July, 2021 01:45PM by Jean-Pierre Giraud

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSpdlog 0.0.6 on CRAN: New upstream

A new version 0.0.6 of RcppSpdlog is now on CRAN. It contains release 1.9.0 of spdlog, which in turn contains an updated version of fmt.

RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. No R package-side changes were needed or made.

The (minimal) NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.6 (2021-07-21)

  • Upgraded to upstream release spdlog 1.9.0

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 July, 2021 12:31PM

hackergotchi for Charles Plessy

Charles Plessy

Search in Debian's sources

Via my work on the media-types package, I wanted to know which packages were using the media type application/x-xcf, which apparently is not correct (#991158). The https://codesearch.debian.net site gives the answer. (Thanks!)

Moreover, one can create a user key for command-line remote access; here is an example (the file dcs-apikeyHeader-plessy.txt contains x-dcs-apikey: followed by my access key).

curl -X GET "https://codesearch.debian.net/api/v1/searchperpackage?query=application/x-xcf&match_mode=literal" -H @dcs-apikeyHeader-plessy.txt > result.json

The result is serialised in JSON. Here is how I transformed it to make a list of email addresses that I could easily paste in mutt.

cat result.json |
  jq --raw-output '.[]."package"' |
  dd-list --stdin |
  sed -e '/^ /d' -e '/^$/d' -e 's/$/,/' -e 's/^/  /'

22 July, 2021 08:24AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Added memory to ACER Chromebox CXI3 (fizz/sion).

Added memory to ACER Chromebox CXI3 (fizz/sion). Got 2 16GB SO-DIMMs and installed them. I could not find correct information on how to open this box on the internet; the guides seem to describe similar boxes from HP or ASUS, which seem to have a simpler opening procedure. I had to pry out the 4 rubber pieces at the bottom, and then remove the 4 screws. Then I could pry open the front and back panels by applying force where the screws were. In the front panel there are two more, shorter screws that need to be removed; after taking out those two screws (that's 4+2), I could open the box into two pieces. Be careful: they are still connected, I think by an audio cable. After opening you can access the memory modules. Pull open the metal pieces on the left and right hand sides of the memory module so that it rises. Make sure the metal pieces latch closed when you insert the new memory; that signifies the memory is in place. I didn't do that at the beginning and the machine didn't boot. So far so good. No longer using zram.

22 July, 2021 12:53AM by Junichi Uekawa

KVM switch.

KVM switch. I am using an ES-Tune KVM switch to switch between Linux and the ChromeBox. The Linux side seems to be unreliable. Sometimes it complains that the USB cable is bad. Rebooting doesn't fix it, and reconnecting seems to improve the state. Unplugging power from the KVM switch seems to fix the situation sometimes. Could be a KVM switch issue.

22 July, 2021 12:52AM by Junichi Uekawa

July 21, 2021

Molly de Blanc

Updates (2)

I feel like I haven’t had a lot to say about open source or, in general, tech for a while. From another perspective, I have a whole lot of heady things to say about open source and technology and writing about it seems like a questionable use of time when I have so much other writing and reading and job hunting to do. I will briefly share the two ideas I am obsessed with at the moment, and then try to write more about them later.

The Defensible-Charitable-Beneficent Trichotomy

I will just jokingly ha ha no but  seriously maybe jk suggest calling this the de Blanc-West Theory, considering it’s heavily based on ideas from Ben West.

Actions fall into one of the following categories:

Defensible: When an action is defensible, it is permissible, acceptable, or okay. We might not like it, but you can explain why you had to do it and we can’t really object. This could also be considered the “bare minimum.”

Charitable: A charitable action is “better” than a defensible action in that it produces more good, and it goes above and beyond the minimum.

Beneficent: This is a genuinely good action that produces good. It is admirable.

I love J.J. Thomson's Henry Fonda example for this. For a full explanation see section three at this web site. For a summary: imagine that you're sick and the only thing that can cure you is Henry Fonda's cool touch on your fevered brow. It is Defensible for Henry Fonda to do nothing — he doesn't owe you anything in particular. It is Charitable for Henry Fonda, say if he happened to be in the room, to walk across it and touch your forehead. It is Beneficent for Henry Fonda to re-corporealize back into this life and travel to your bedside to soothe your strange illness. P.S. Henry Fonda died in 1982.

I don’t think these ideas are particularly new, but it’s important to think about what we’re doing with technology and its design: are our decisions defensible, charitable, or beneficent? Which should they be? Why?

The Offsetting Harm-Ameliorating Harm-Doing Good Trichotomy

I’ve been doing some research and writing around carbon credits. I owe a lot of thanks to Philip Withnall and Adam Lerner for talking with me through these ideas. Extrapolating from action and policy recommendations, I suggest the following trichotomy:

Offsetting harm is attempting to look at the damage you’ve done and try to make up for it in some capacity. In the context of, e.g., air travel, this would be purchasing carbon credits.

Ameliorating harm is about addressing the particular harm you’ve done. Instead of carbon credits, you would be supporting carbon capture technologies or perhaps giving to or otherwise supporting groups and ecosystems that are being harmed by your air travel.

Doing Good is Doing Good. This would be like not traveling by air and still choosing to help address the harm being caused by carbon emissions.

These ideas are also likely not particularly new, but thinking about technology in this context is also useful, especially as we consider technology in the context of climate change.

21 July, 2021 08:57PM by mollydb

hackergotchi for Sean Whitton

Sean Whitton

Delivering Common Lisp executables using Consfigurator

I realised this week that my recent efforts to improve how Consfigurator makes the fork(2) system call have also created a way to install executables to remote systems which will execute arbitrary Common Lisp code. Distributing precompiled programs using free software implementations of the Common Lisp standard tends to be more of a hassle than with a lot of other high level programming languages. Executables will often be hundreds of megabytes in size even if your codebase is just a few megabytes, because the whole interactive Common Lisp environment gets bundled along with your program’s code. Commercial Common Lisp implementations manage to do better, as I understand it, by knowing how to shake out unused code paths. Consfigurator’s new mechanism uploads only changed source code, which might only be kilobytes in size, and updates the executable on the remote system. So it should be useful for deploying Common Lisp-powered web services, and the like.

Here’s how it works. When you use Consfigurator you define an ASDF system – analogous to a Python package or Perl distribution – called your “consfig”. This defines HOST objects to represent the machines that you’ll use Consfigurator to manage, and any custom properties, functions those properties call, etc. An ASDF system can depend upon other systems; for example, every consfig depends upon Consfigurator itself. When you execute Consfigurator deployments, Consfigurator uploads the source code of any ASDF systems that have changed since you last deployed this host, starts up Lisp on the remote machine, and loads up all the systems. Now the remote Lisp image is in a similarly clean state to when you’ve just started up Lisp on your laptop and loaded up the libraries you’re going to use. Only then are the actual deployment instructions sent on stdin.

What I’ve done this week is insert an extra step for the remote Lisp image in between loading up all the ASDF systems and reading the deployment from stdin: the image calls fork(2) and establishes a pipe to communicate with the child process. The child process can be sent Lisp forms to evaluate, but for each Lisp form it receives it will actually fork again, and have its child process evaluate the form. Thus, going into the deployment, the original remote Lisp image has the capability to have arbitrary Lisp forms evaluated in a context in which all that has happened is that a statically defined set of ASDF systems has been loaded – the child processes never see the full deployment instructions sent on stdin. Further, the child process responsible for actually evaluating the Lisp form received from the first process first forks off another child process and sets up its own control pipe, such that it too has the capability to have arbitrary Lisp forms evaluated in a cleanly loaded context, no matter what else it might put in its memory in the meantime. (Things are set up such that the child processes responsible for actually evaluating the Lisp forms never see the Lisp forms received for evaluation by other child processes, either.)

So suppose now we have an ASDF system :com.silentflame.cool-web-service, and there is a function (start-server PORT) which we should call to start listening for connections. Then we can make our consfig depend upon that ASDF system, and do something like this:

CONSFIG> (deploy-these ((:ssh :user "root") :sbcl) server.example.org
           ;; Set up Apache to proxy requests to our service.
           (apache:https-vhost ...)
           ;; Now apply a property to dump the image.
           (image-dumped "/usr/local/bin/cool-web-service"
                         '(cool-web-service:start-server 1234)))

Consfigurator will: SSH to server.example.org; upload all the ASDF source for your consfig and its dependencies; compile and load that code into a remote SBCL process; call fork(2) and set up the control pipe; receive the applications of APACHE:HTTPS-VHOST and IMAGE-DUMPED shown above from your laptop, on stdin; apply the APACHE:HTTPS-VHOST property to ensure that Apache is proxying connections to port 1234; send a request into the control pipe to have the child process fork again and dump an executable which, when started, will evaluate the form (cool-web-service:start-server 1234). And that form will get evaluated in a pristine Lisp image, where the only meaningful things that have happened is that some ASDF systems have been loaded and a single fork(2) has taken place. You’d probably need to add some other properties to add some mechanism for actually invoking /usr/local/bin/cool-web-service and restarting it when the executable is updated.

(Background: The primary reason why Consfigurator’s remote Lisp images need to call fork(2) is that they need to do things like setuid from root to other accounts and enter chroots without getting stuck in those contexts. Previously we forked right before entering such contexts, but that meant that Consfigurator deployments could never be multithreaded, because it might later be necessary to fork, and you can’t usually do that once you’ve got more than one thread running. So now we fork before doing anything else, so that the parent can then go multithreaded if desired, but can still execute subdeployments in contexts like chroots by sending Lisp forms to evaluate in those contexts into the control pipe.)

21 July, 2021 08:30PM

Antoine Beaupré

Hacking my Kobo Clara HD

I just got a new Kobo ebook reader, a Kobo Clara HD. It's pretty similar to the Glo HD I had, but that one unfortunately died after 5 years, even after I tried replacing the battery.

Quick hardware review

This is a neat little device. It's very similar to the Glo HD, which is a bit disappointing: you'd think they would have improved on the design in the 5+ years since the Glo HD came out. It does have an "amber" night light which is nice, but the bezel is still not level with the display, and the device is still kind of on the thick side. A USB-C (instead of micro-USB) port would have been nice too.

But otherwise, it's pretty slick, and just works. And because the hardware design didn't change, I can still hack at it like a madman, which is really why I bought this thing in the first place.

Hopefully it will last longer than 5 years. Ebook readers should really last for decades, not years, but I guess that's too much to expect from our consumerist, suicidal, extinctionist society.

Configuration hacks

Here are the hacks I did on the device. I had done many more hacks on the Kobo Glo HD, but this time I decided to take an approach that is more streamlined, minimalist and, hopefully, easier for new users than the pile of hacks I was doing before (which I expand on at the end of the article).

SD card replacement

I replaced the SD card. The original card shipped with the Clara HD was 8GB which meant all my books actually fitted on the original, but just barely. The new card is 16GB.

Unfortunately, I did this procedure almost at the end of this guide (right before writing the syncthing scripts, below). Next time, that should be the first thing done so the original SD card acts as a pristine copy of the upstream firmware. So even though this seems like an invasive and difficult procedure, I actually do recommend you do it first.

The process is basically to:

  1. crack open the Kobo case (don't worry, it sounds awful but I've done it often)
  2. take the SD card out
  3. copy it over to a new, larger card (say on your computer)
  4. put the larger card in

This guide has all the details.

Registration bypass hack

This guide (from the same author!) has this awesome trick to bypass the annoying registration step. Basically:

  1. pretend you do not have wifi
  2. mount the device
  3. sqlite3 /media/.../KOBOeReader/.kobo/KoboReader.sqlite
  4. INSERT INTO user(UserID,UserKey) VALUES('1','');
  5. unmount the device

More details in the above guide, again.
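If you prefer a non-interactive one-liner over an sqlite3 session, the same insert can be done like this (same caveats as the guide; the mount path depends on your system):

    sqlite3 /media/.../KOBOeReader/.kobo/KoboReader.sqlite \
      "INSERT INTO user(UserID,UserKey) VALUES('1','');"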

Install koreader

My e-reader of choice is Koreader. It's just that great. I still don't find the general user interface (i.e. the "file browser") as intuitive as the builtin one, but the book reading just feels better. And anyways it's the easiest way to get a shell on the device.

Follow those instructions, particularly the NickelMenu instructions (see also the NickelMenu home page). Yes, you need to install some other thing to start koreader, which doesn't start on its own. NickelMenu is the simplest and best integrated option I have found.

You might also want to install some dictionaries and configure SSH:

  1. mount USB
  2. drop your SSH public key in .../KOBOeReader/.adds/koreader/settings/SSH/authorized_keys
  3. unmount USB
  4. enable SSH in koreader (Gear -> Network -> SSH -> start SSH)
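Once SSH is enabled, you can log into the device over wifi with something like this (a sketch; 2222 is the port KOReader's SSH server uses by default, and the IP address is obviously an example):

    ssh -p 2222 root@192.168.1.50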

Install syncthing

I use Syncthing to copy all my books into the device now. I was previously using Koreader's OPDS support with Calibre's web interface, but that was clunky and annoying, and I'd constantly have to copy books around. Now the entire collection is synchronized.

As a bonus, I can actually synchronise (and backup!) the koreader metadata, since it's stored next to the files. So in theory, this means I could use koreader from multiple devices and have my reading progress sync'd, but I haven't tested that feature just yet.

I chose Syncthing because it's simple, lightweight, supported on Linux and Android, and statically compiles by default which means it's easy to deploy on the Kobo.

Here is how I installed and started Syncthing at first:

  1. Download the latest version for ARM
  2. extract the archive
  3. copy the syncthing binary into .../KOBOeReader/.adds/
  4. login over SSH (see above, sorry)
  5. create the following directory: ~/.config/syncthing/
  6. create the following configuration file:

    <configuration version="18">
        <gui enabled="true" tls="false" debugging="false">
            <address>0.0.0.0:8384</address>
        </gui>
    </configuration>
    
  7. copy a valid ca-certificates.crt file into /etc/ssl/certs/ on the Kobo (otherwise syncthing cannot bootstrap discovery servers)
  8. launch syncthing over SSH: /mnt/onboard/.adds/syncthing

You should now be able to connect to the syncthing GUI through your web browser.

Immediately change the admin password.

Then, figure out how to start it. Here are your options:

  1. on boot (inittab or whatever). downside: power usage.
  2. on wifi (udev hacks). downside: unreliable (see wallabako).
  3. on demand (e.g. nickel menu, koreader terminal shortcuts). downside: kind of clunky in koreader, did not work in nickel menu.
  4. manually, through shell. downside: requires a shell, but then again we already have one through koreader?

What I have done is to write trivial shell scripts (in .../KOBOeReader/scripts) to start syncthing. The first is syncthing-start.sh:

#!/bin/sh

/mnt/onboard/.adds/syncthing serve &

Then syncthing-stop.sh:

#!/bin/sh

/usr/bin/pkill syncthing

This makes those scripts usable from the koreader file browser. Then the folder can be added to the folder shortcuts and a long-hold on the script will allow you to execute it.

Still have to figure out why the Nickel Menu script is not working, but it could simply reuse the above to simplify debugging. This is the script I ended up with, in .../KOBOeReader/.adds/nm/syncthing:

menu_item :main    :Syncthing (toggle)    :cmd_spawn         :exec /mnt/onboard/scripts/syncthing-stop.sh
  chain_success:skip:4
    chain_success                      :cmd_spawn          :exec /mnt/onboard/scripts/syncthing-start.sh
    chain_success                      :dbg_toast          :Started Syncthing server
    chain_failure                      :dbg_toast          :Error starting Syncthing server
    chain_always:skip:-1
  chain_success                        :dbg_toast          :Stopped Syncthing server
menu_item :main    :Syncthing (start)    :cmd_output         :exec /mnt/onboard/scripts/syncthing-start.sh
menu_item :main    :Syncthing (stop)    :cmd_output         :exec /mnt/onboard/scripts/syncthing-stop.sh

It's unclear why this doesn't work: I only get "Error starting Syncthing server" for the toggle, and no output for the (start) action. In either case, syncthing doesn't actually start.

Avoided tasks

This list wouldn't be complete without listing more explicitly the stuff I have done before on the Kobo Glo HD and which I have deliberately decided not to do here because my time is precious:

  • plato install: beautiful project, but koreader is good enough
  • wallabako setup: too much work to maintain, Wallabag articles are too distracting and available on my phone anyways
  • using calibre to transfer books: not working half the time, different file layout than the source, one less Calibre dependency
  • using calibre to generate e-books based on RSS feeds (yes, I did that, and yes, it was pretty bad and almost useless)
  • SSH support: builtin to koreader

Now maybe I'll have time to actually read a book...

21 July, 2021 01:44AM

July 20, 2021

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

pkgKitten 0.2.2 on CRAN: Small Updates

kitten

A new release 0.2.2 of pkgKitten is now on CRAN, and will be uploaded to Debian. pkgKitten makes it simple to create new R packages via a simple function invocation. A wrapper kitten.r exists in the littler package to make it even easier.

This release simply corrects one minor aspect of the optional roxygen2 use, and updates the DESCRIPTION file.

Changes in version 0.2.2 (2021-07-19)

  • Small update to DESCRIPTION

  • Document hello2() argument

More details about the package are at the pkgKitten webpage, the pkgKitten docs site, and the pkgKitten GitHub repo.

Courtesy of my CRANberries site, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

20 July, 2021 10:35PM

Patryk Cisek

Authentication in an Enterprise

I’d like to shed some light on the process of authentication, since it’s a fundamental building block in creating secure tools that need to communicate with other actors over the network. When tools and/or users interact with one another – e.g., through a web browser – both ends of the interaction need a way to make sure they’re communicating with the right party. A bad actor might, for example, create a web page that looks like your bank’s online banking portal.

20 July, 2021 04:47PM by Patryk Cisek (patryk@cisek.emai)

Enrico Zini

Run a webserver for a specific user *only*

I'm creating a program that uses the web browser for its user interface, and I'm reasonably sure I'm not the first person doing this.

Normally such a program would listen on a port on localhost, and tell the browser to connect to it. Bonus points for listening on a randomly allocated free port, so that one does not need to involve some amount of luck to get the program started.

However, using a local port still means that any user on the local machine can connect to it, which is generally a security issue.

A possible solution would be to use AF_UNIX Unix Domain Sockets, which are supported by various web servers, but as far as I understand not currently by browsers. I checked Firefox and Chrome, and they currently seem to fail to even acknowledge the use case.

I'm reasonably sure I'm not the first person doing this, and yes, it's intended as an understatement.

So, dear Lazyweb, is there a way to securely use a browser as a UI for a user's program, without exposing access to the backend to other users in the system?

Access token in the URL

Emanuele Di Giacomo suggests adding an access token to the URL that gets passed to the browser.

This would work to protect access on localhost: even if the application cannot use HTTPS, other users cannot see packets that go through the local interface, so both the access token and the session cookie that one could send afterwards would be protected.
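A minimal shell sketch of that idea (the backend program and its --listen/--token options are made up for illustration; the point is just that the token is generated locally and only ever travels over the loopback interface):

# generate a random token, hand it to the backend, then point the browser at it
token=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
./backend --listen 127.0.0.1:8000 --token "$token" &
xdg-open "http://127.0.0.1:8000/?token=$token"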

Network namespaces

I thought about isolating server and browser in a private network namespace with something like unshare(1), but it seems to require root.

Johannes Schauer Marin Rodrigues wrote to correct that:

It's possible to unshare the network namespace by first unsharing the user namespace and thus becoming root which is possible without being root since #898446 got fixed.

For example you can run this as the normal user:

lxc-usernsexec -- lxc-unshare -s NETWORK -- ip addr

If you don't want to depend on lxc, you can write a wrapper in Perl or Python. I have a Perl implementation of that in mmdebstrap.

Firewalling

Martin Schuster wrote to suggest another option:

I had the same issue. My approach was "weird", but worked: Block /outgoing/ connections to the port, unless the uid is correct. That might be counter-intuitive, but of course all connections /to/ localhost will be done /from/ localhost also.

Something like:

iptables -A OUTPUT -p tcp -d localhost --dport 8123 -m owner --uid-owner joe -j ACCEPT

iptables -A OUTPUT -p tcp -d localhost --dport 8123 -j REJECT

20 July, 2021 10:39AM

July 19, 2021

Antonio Terceiro

Getting help with autopkgtest for your package

If you have been involved in Debian packaging at all in the last few years, you are probably aware that autopkgtest is now an important piece of the Debian release process. Back in 2018, the automated testing migration process started considering autopkgtest test results as part of its decision making.

Since then, this process has received several improvements. For example, during the bullseye freeze, non-key packages with a non-trivial autopkgtest test suite could migrate automatically to testing without their maintainers needing to open unblock requests, provided there was no regression in their autopkgtest (or in those of their reverse dependencies).

Since 2014 when ci.debian.net was first introduced, we have seen an amazing increase in the number of packages in Debian that can be automatically tested. We went from around 100 to 15,000 today. This means not only happier maintainers because their packages get to testing faster, but also improved quality assurance for Debian as a whole.

Chart showing the number of packages tested by ci.debian.net. Starts from close to 0 in 2014, up to 15,000 in 2021. The growth tendency seems to slow down in the last year

However, the growth rate seems to be decreasing. Maybe the low-hanging fruit have all been picked, or maybe we just need to help more people jump on the automated testing bandwagon.

With that said, we would like to encourage and help more maintainers to add autopkgtest to their packages. To that effect, I just created the autopkgtest-help repository on salsa, where we will take help requests from maintainers working on autopkgtest for their packages.

If you want help, please go ahead and create an issue in there. To quote the repository README:

Valid requests:

  • "I want to add autopkgtest to package X. X is a tool that [...] and it works by [...]. How should I approach testing it?"

    It's OK if you have no idea where to start. But at least try to describe your package, what it does and how it works so we can try to help you.

  • "I started writing autopkgtest for X, here is my current work in progress [link]. But I encountered problem Y. How to I move forward?"

    If you already have an autopkgtest but are having trouble making it work as you think it should, you can also ask here.

Invalid requests:

  • "Please write autopkgtest for my package X for me".

    As with anything else in free software, please show appreciation for other people's time, and do your own research first. If you pose your question with enough details (see above) and make it interesting, it may be that whoever answers will write at least a basic structure for you, but as the maintainer you are still the expert in the package and what tests are relevant.

If you ask your question soon, you might get your answer recorded in video: we are going to have a DebConf21 talk next month, where Paul Gevers (elbrus) and I will answer a few autopkgtest questions in video for posterity.

Now, if you have experience enabling autopkgtest for your own packages, please consider watching that repository to help us help our fellow maintainers.
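And to give an idea of how low the barrier can be: for a simple command-line tool, a debian/tests/control as small as this is often enough to get started (a sketch; the command is made up, the fields are standard autopkgtest ones):

    Test-Command: mytool --help
    Depends: @
    Restrictions: superficial

Depends: @ installs all binary packages built from the source, and the superficial restriction marks this as a mere smoke test; replacing it with a test that exercises real functionality is what makes the suite non-trivial in the sense mentioned above.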

19 July, 2021 09:00PM

July 18, 2021

hackergotchi for Shirish Agarwal

Shirish Agarwal

BBI Kenyan Supreme Court, U.P. Population Bill, South Africa, ‘Suli Deals’, IT rules 2021, Sedition Law and Danish Siddiqui’s death.

BBI Kenya and live Supreme Court streaming on YT

The last few weeks have been unrelenting as all sorts of news have been coming in, mostly about the downturn in the economy, Islamophobia in India on the rise, Covid, and electioneering. However, in the last few days, Kenya surpassed India in live-streaming proceedings of a Court of Appeal case about the BBI or Building Bridges Initiative. A background filler article on the topic can be found in the BBC. The live-streaming was done via YT, and if one wants to, one can start from –

https://www.youtube.com/watch?v=JIQzpmVKvro

One can also subscribe to K24TV which took the initiative of sharing the proceedings with people worldwide. If K24TV continues to share SC proceedings of Kenya, that would add to the soft power of Kenya. I will not go into the details of the case as Gautam Bhatia who has been following the goings-on in Kenya is a far better authority on the subject. In fact, just recently he shared about another Kenyan judgment from a trial which can be seen here. He has shared the proceedings and some hot takes on the Twitter thread started by him. Probably after a couple of weeks or more when he has processed what all has happened there, he may also share some nuances although many of his thoughts would probably go to his book on Comparative Constitutional Law which he hopes to publish maybe in 2021/2022 or whenever he can. Such televised proceedings are sure to alleviate the standing of Kenya internationally. There has been a proposal to do similar broadcasts by India but with surveillance built-in, so they know who is watching. The problems with the architecture and the surveillance built-in have been shared by Srinivas Kodali or DigitalDutta quite a few times, but that probably is a story for another day.

Uttar Pradesh Population Control Bill

Hindus comprise 83% of Indian couples with more than two children

The U.P. Population Bill came, and it came with a lot of prejudices. One of the prejudices is the idea that Muslims procreate to have the most children. Yet the data says otherwise, as shared above from the NFHS (National Family Health Survey), which is supposed to carry out surveys every few years and did the last one around 4 years back. The analysis from it has been instrumental not only in preparing graphs like the one above but also in understanding what sort of death toll there must have been in rural India. And as somebody who has had the opportunity to be there in the past, I can vouch that you need to be extremely lucky if something happens to you when you are in a rural area.

Even in places like Bodh Gaya (have been there) where millions of tourists come as it is one of the places not to be missed on the Buddhism tourist circuit, the medical facilities are pretty underwhelming. I am not citing it simply because there are too many such newspaper reports from even before the pandemic, and both the State and the Central Govt. response has been dismal. Just a few months back, they were recalled. There were reports of votes being bought at INR 1000/- (around $14) and a bottle or two of liquor. There used to be a time when election monitoring whether national or state used to be a thing, and you had LTO’s (Long-time Observers) and STO’s (Short-Term Observers) to make sure that the election has been neutral. This has been on the decline in this regime, but that probably is for another time altogether. Although, have to point out the article which I had shared a few months ago on the private healthcare model is flawed especially for rural areas. Instead of going for cheap, telemedicine centers that run some version of a Linux distro. And can provide a variety of services, I know Kerala and Tamil Nadu from South India have experimented in past but such engagements need to be scaled up. This probably will come to know when the next time I visit those places (sadly due to the virus, not anytime soonish.:( ) .

Going back to the original topic, though, I had shared Hans Rosling’s famous Ted talk on population growth which shows that even countries which we would not normally associate with family planning for e.g. the middle-east and Africa have also been falling quite rapidly. Of course, when people have deeply held prejudices, then it is difficult. Even when sharing China as to how they had to let go of their old policy in 2016 as they had the thing for ‘leftover men‘. I also shared the powerful movie So Long my Son. I even shared how in Haryana women were and are trafficked and have been an issue for centuries but as neither suits the RW propaganda, they simply refuse to engage. They are more repulsed by people who publish this news rather than those who are actually practicing it, as that is ‘culture’. There is also teenage pregnancy, female infanticide, sex-selective abortion, etc., etc. It is just all too horrible to contemplate.

Personal anecdote – I know a couple, or they used to be a couple, where the gentleman wanted to have a male child. It was only after they got an autistic child, they got their DNA tested and came to know that the gentleman had a genetic problem. He again forced and had another child, and that too turned out to be autistic. Finally, he left the wife and the children, divorced them and lived with another woman. Almost a decade of the wife’s life was ruined. The wife before marriage was a gifted programmer employed at IBM. This was an arranged marriage. After this, if you are thinking of marrying, apart from doing astrology charts, also look up DNA compatibility charts. Far better than ruining yours or the women’s life. Both the children whom I loved are now in heaven, god bless them 😦

If one wants to, one can read a bit more about the Uttar Pradesh Population bill here. The sad part is that the systems which need fixing, nobody wants to fix. The reason being simple. If you get good health service by public sector, who will go to the private sector. In Europe, AFAIK they have the best medical bang for the money. Even the U.S. looks at Europe and hopes it had the systems that Europe has but that again is probably for another day.

South Africa and India long-lost brothers.

As had shared before, after the 2016 South African Debconf convention, I had been following South Africa. I was happy when FeesMustFall worked and in 2017 the then ANC president Zuma declared it in late 2017. I am sure that people who have been regular visitors to this blog know how my position is on student loans. They also must be knowing that even in U.S. till the 1970s it had free education all the way to be a lawyer and getting a lawyer license. It is only when people like Thurgood Marshall, Martin Luther King Jr., and others from the civil rights movement came out as a major force that the capitalists started imposing fees. They wanted people who could be sold to corporate slavery, and they won. Just last week, Biden took some steps and canceled student loans and is working on steps towards broad debt forgiveness.

Interestingly, NASA has an affirmative diversity program for people from diverse backgrounds, where a couple of UC (Upper Caste) women got the job. While they got the job, the RW (Right-Wing) was overjoyed as they got jobs on ‘merit’. Later, it was found that both the women were the third or fourth generation of immigrants in U.S.

NASA Federal Equal Opportunity Policy Directive NPD 3713 2H

Going back to the original question and topic, while there has been a concerning spate of violence, some calling it the worst sort of violence not witnessed since 1994. The problem, as ascertained in that article, is the same as here in India or elsewhere.

Those, again, who have been on my blog know that ‘merit’ 90% of the time is a function of privilege and there is a vast amount of academic literature which supports that.

If, for a moment, you look at the data that is shared in the graph above which shows that 83% of Hindus and 13% of Muslims have more than 2 children, what does it show, it shows that 83+13 = 96% of the population is living in insecurity. The 5% are the ones who have actually consolidated more power during this regime rule in India. Similarly, from what I understood living in Cape Town for about a month, it is the Dutch ‘Afrikaans’ as they like to call themselves and the immigrants who come from abroad who have enjoyed the fruits of tourism and money and power while the rest of the country is dying due to poverty. It is the same there, it is the same here. Corruption is also rampant in both countries, and the judiciary is virtually absent from both communities in India and SA. Interestingly, South Africa and India have been at loggerheads, but I suspect that is more due to the money and lobbying power by the Dutch. Usually, those who have money power, do get laws and even press on their side, and it is usually the ruling party in power. I cannot help but share about the Gupta brothers and their corruption as I came to know about it in 2016. And as have shared that I’m related to Gupta’s on my mother’s side, not those specific ones but Gupta as a clan. The history of the Gupta dynasty does go back to the 3rd-4th century.

Equally interesting have been Sonali Ranade’s series of articles which she wrote in National Herald, the latest on exports which is actually the key to taking India out of poverty rather than anything else. While in other countries Exporters are given all sort of subsidies, here it is being worked as how to give them less. This was in Economic times hardly a week back 😦

Export incentive schemes being reduced

I can’t imagine the incredible stupidity done by the Finance Minister. And then in an attempt to prove that, they will attempt to present a rosy picture with numbers that have nothing to do with reality.

Interestingly enough, India at one time was a major exporter of apples, especially from Kashmir. Now instead of exporting, we are importing them from Afghanistan as well as Belgium and now even from the UK. Those who might not want to use the Twitter link could use this article. Of course, what India got out of this trade deal is not known. One can see that the UK got the better deal from this. Instead of investing in our own capacity expansion, we are investing in increasing the capacity of others. This is at the time when due to fuel price hike (Central taxes 66%) demand is completely flat. And this is when our own CEA (Chief Economic Adviser) tells us that growth will be at the most 6-7% and that too in 2023-2024 while currently, the inflation rate is around 12%. Is it then any wonder that almost 70% are living on Govt. ration and people in the streets of Kolkata, Assam, and other places have to sell kidneys to make sure they have some money for their kids for tomorrow. Now I have nothing against the UK but trade negotiation is an art. Sadly, this has been going on for the last few years. The politicians in India fool the public by always telling of future trade deals. Sadly, as any businessman knows, once you have compromised, you always have to compromise. And the more you compromise, the more you weaken the hand for any future trade deals. 😦

IIT pupil tries to sell kidney to repay loan, but no takers for Dalit organ.

The above was from yesterday’s Times of India. Just goes to show how much people are suffering. There have been reports in vernacular papers of quite a few people from across regions and communities are doing this so they can live without pain a bit.

Almost all the time, the politicians are saved as only few understand international trade, the diplomacy and the surrounding geopolitics around it. And this sadly, is as much to do with basic education as much as it is to any other factor 😦

Suli Deals

About a month back on the holy day of Ramzan or Ramadan as it is known in the west, which is beloved by Muslims, a couple of Muslim women were targeted and virtually auctioned. Soon, there was a flood and a GitHub repository was created where hundreds of Muslim women, especially those who have a voice and fearlessly talk about their understanding about issues and things, were being virtually auctioned. One week after the FIR was put up, to date none of the people mentioned in the FIR have been arrested. In fact, just yesterday, there was an open letter which was published by livelaw. I have saved a copy on WordPress just in case something does go wrong. Other than the disgust we feel, can’t say much as no action being taken by GOI and police.

IT Rules 2021 and Big Media

After almost a year of sleeping when most activists were screaming hoarsely about how the new IT rules are dangerous for one and all, big media finally woke up a few weeks back and listed a writ petition in Madras High Court of the same. Although to be frank, the real writ petition was filed In February 2021, classical singer, performer T.M. Krishna in Madras High Court. Again, a copy of the writ petition, I have hosted on WordPress. On 23rd June 2021, a group of 13 media outlets and a journalist have challenged the IT Rules, 2021.

The Contention came from Digital News Publishers Association which is made up of the following news companies: ABP Network Private Limited, Amar Ujala Limited, DB Corp Limited, Express Network Pvt Ltd, HT Digital Streams Limited, IE Online Media Services Pvt Ltd, Jagran Prakashan Limited, Lokmat Media Private Limited, NDTV Convergence Limited, TV Today Network Limited, The Malayala Manorama Co (P) Ltd, Times Internet Limited, and Ushodaya Enterprises Private Limited. All the above are heavyweights in the markets where they operate. The reason being simple, when these media organizations came into being, the idea was to have self-regulation, which by and large has worked. Now, the present Govt. wants each news item to be okayed by them before publication. This is nothing but blatant misuse of power and an attempt at censorship. In fact, the Tamil Nadu BJP president himself made a promise of the same. And of course, what is true and what is a lie, only GOI knows and will decide for the rest of the country. If somebody remembers Joseph Goebbels at this stage, it is merely a coincidence. Anyways, 3 days ago Supreme Court on 14th July the Honorable Supreme Court asked the Madras High Court to transfer all the petitions to SC. This, the Madras High Court denied as cited/shared by Meera Emmanuel, a reporter who works with barandbench. The Court says nothing doing, let this happen and then the SC can entertain the motion of doing it that level. At the same time, they would have the benefit of Madras High Court opinion as well. It gave the center two weeks to file a reply. So, either of end-week of July or latest by August first week, we might be able to read the Center’s reply on the same. The SC could do a forceful intervention, but it would lead to similar outrage as has been witnessed in the past when a judge commented that if the SC has to do it all, then why do we need the High Courts, district courts etc. let all the solutions come from SC itself. This was, admittedly, frustration on the part of the judge, but due in part to the needless intervention of SC time and time again. But the concerns had been felt around all the different courts in the country.

Sedition Law

A couple of days ago, the Supreme Court under the guidance of Honorable CJI NV Ramanna, entertained the PIL filed by Maj Gen S G Vombatkere (Retd.) which asked simply that the sedition law which was used in the colonial times by the British to quell dissent by Mahatma Gandhi and Bal Gangadhar Tilak during the Indian freedom struggle. A good background filler article can be found on MSN which tells about some recent cases but more importantly how historically the sedition law was used to quell dissent during India’s Independence. Another article on MSN actually elaborates on the PIL filed by Maj Gen S. G. Vombatkere. Another article on MSN tells how sedition law has been challenged and changed in 10 odd countries. I find it equally sad and equally hilarious that the Indian media whose job is to share news and opinion on this topic is being instead of being shared more by MSN. Although, I would be bereft of my duty if I did not share the editorial on the same topic by the Hindu and Deccan Chronicle. Also, an interesting question to ask is, are there only 10 countries in the world that have sedition laws? AFAIK, there are roughly 200 odd countries as recognized by WTO. If 190 odd countries do not have sedition laws, it also tells a lot about them and a lot about the remaining 10. Also, it came to light that police are still filing laws under sec66A which was declared null and void a few years ago. It was replaced with section 124A if memory serves right and it has more checks and balances.

Danish Siddiqui, Pulitzer award-winning and death in Afghanistan

Before I start with Danish Siddiqui, let me share an anecdote that I think I have shared on the blog years ago about how photojournalists are. Again, those who know me and those who follow me know how much I am mad both about trains and planes (civil aviation). A few months back, I had shared a blog post about some of the biggest railway systems in the world which shows that privatization of Railways doesn’t necessarily lead to up-gradation of services but definitely leads to an increase in tariff/fares. Just had a conversation couple of days ago on Twitter and realized that need to also put a blog post about civil aviation in India and the problems it faces, but I digress.

This was about a gentleman who wanted to take a photo of a particular train coming out of a valley at a certain tunnel at two different heights, one from below and one from above the train. This was several years ago, and while I did share that award-winning photograph then, it probably would take me quite a bit of time and effort to again look it up on my blog and share.

The logistics though were far more interesting and intricate than I had first even thought of. We came around a couple of days before the train was supposed to pass that tunnel and the valley. More than half a dozen or maybe more shots were taken throughout the day by the cameras. The idea was to see how much light was being captured by the cameras and how much exposure was to be given so that the picture isn’t whitened out or is too black.

Weather is the strangest of foes for a photojournalist or even photographers, and the more you are in nature, the more unpredictable it is and can be. We were also at a certain height, so care had to be taken in case light rainfall happens or dew falls, both not good for digital cameras.

And dew is something which will happen regardless of what you want. So while the two days our gentleman cameraman fiddled with the settings to figure out correct exposure settings, we had one other gentleman who was supposed to take the train from an earlier station and apprise us if the train was late or not.

The most ideal time would be at 0600 hrs. When the train would enter the tunnel and come out and the mixture of early morning sun rays, dew, the flowers in the valley, and the train would give a beautiful effect. We could stretch it to maybe 0700 hrs.

Anything after that would just be useless, as it wouldn’t have the same effect. And of all this depended on nature. If the skies were to remain too dark, nothing we could do about it, if the dewdrops didn’t fall it would all be over.

On the day of the shoot, we were told by our compatriot that the train was late by half an hour. We sank a little on hearing that news. Although Photoshop and others can do touch-ups, most professionals like to take as authentic a snap as possible. Everything had been set up to perfection. The wide-angle lenses on both the cameras with protections were set up. The tension you could cut with a knife. While we had a light breakfast, I took a bit more and went in the woods to shit and basically not be there. This was too tensed up for me. Returned an hour to find everybody in a good mood. Apparently, the shoot went well. One of the two captured it for good enough. Now, this is and was in a benign environment where the only foe was the environment. A bad shot would have meant another week in the valley, something which I was not looking forward to. Those who have lived with photographers and photojournalists know how self-involved they can be in their craft, while how grumpy they can be if they had a bad shoot. For those, who don’t know, it is challenging to be friends with such people for a long time. I wish they would scream more at nature and let out the frustrations they have after a bad shoot. But again, this is in a very safe environment.

Now let’s cut to Danish Siddiqui and the kind of photojournalism he followed. He followed a much more riskier sort of photojournalism than the one described above. Krittivas Mukherjee in his Twitter thread shared how reporters in most advanced countries are trained in multiple areas, from risk assessment to how to behave in case you are kidnapped, are in riots, hostage situations, etc. They are also trained in all sorts of medical training from treating gunshot wounds, CPR, and other survival methods. They are supposed to carry medical equipment along with their photography equipment. Sadly, these concepts are unknown in India. And even then they get killed. Sadly, he attributes his death to the ‘thrill’ of taking an exclusive photograph. And the gentleman’s bio reads that he is a diplomat. Talk about tone-deafness 😦

On another completely different level was Karen Hao who was full of empathy as she shared the humility, grace, warmth and kinship she describes in her interaction with the photojournalist. His body of work can be seen via his ted talk in 2020 where he shared a brief collage of his works. Latest, though in a turnaround, the Taliban have claimed no involvement in the death of photojournalist Danish Siddiqui. This could be in part to show the Taliban in a more favorable light as they do and would want to be showcased as progressive, even though they are forcing that all women within a certain age become concubines or marry the fighters and killing the minority Hazaras or doing vile deeds with them. Meanwhile, statements made by Hillary Clinton almost a decade, 12 years ago have come back into circulation which stated how the U.S. itself created the Taliban to thwart the Soviet Union and once that job was finished, forgot all about it. And then in 2001, it landed back in Afghanistan while the real terrorists were Saudi. To date, not all documents of 9/11 are in the public domain. One can find more information of the same here. This is gonna take probably another few years before Saudi Arabia’s whole role in the September 11 attacks will be known.

Last but not the least, came to know about the Pegasus spyware and how many prominent people in some nations were targeted, including in mine India. Will not talk more as it’s already a big blog post and Pegasus revelations need an article on its own.

18 July, 2021 10:13PM by shirishag75

Jamie McClelland

Google and Bitly

It seems I’m the only person on the Internet who didn’t know sending email to Google with bit.ly links will tank your deliverability. To my credit, I’ve been answering deliverability support questions for 16 years and this has never come up.

Until last week.

For some reason, at May First we suddenly had about three percent of our email to Google deferred with this ominous-sounding message:

“Our system has detected that this message is 421-4.7.0 suspicious due to the nature of the content and/or the links within.”

The quantity of email that accounts for just three percent of mail to Google is high, and caused all kinds of monitoring alarms to go off, putting us into a bit of panic.

Eventually we realized all but one of the email messages had bit.ly links.

I’m still not sure whether this issue was caused by a weird and coincidental spike in users sending bit.ly links to Google. Or whether some subtle change in the Google algorithm is responsible. Or some change in our IP address reputation placed greater emphasis on bit.ly links.

In the end it doesn’t really matter - the real point is that until we disrupt this growing monopoly we will all be at the mercy of Google and their algorithms for email deliverability (and much, much more).

18 July, 2021 05:25PM

July 17, 2021

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

ttdo 0.0.8: Micro-tweak

A new (and genuinely) minor release of our ttdo package arrived on CRAN today. The ttdo package extends the most excellent (and very minimal / zero depends) unit testing package tinytest by Mark van der Loo with the very clever and well-done diffobj package by Brodie Gaslam to give us test results with visual diffs (as shown in the screenshot below) which seemingly is so compelling an idea that it eventually got copied by another package…

ttdo screenshot

This release cleans up one microscopic wart of an R warning when installing and byte-compiling the package due to a sprintf call with an unused argument.

And once again, this release gets a #ThankYouCRAN mark as it was processed in a fully automated and intervention-free manner in a matter of minutes.

As usual, the NEWS entry follows.

Changes in ttdo version 0.0.8 (2021-07-17)

  • Expand sprintf template to suppress R warning

CRANberries provides the usual summary of changes to the previous version. Please use the GitHub repo and its issues for any questions.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

17 July, 2021 03:29PM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, June 2021

A Debian LTS logo

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian project funding

In June, we put aside 5775 EUR to fund Debian projects. We are looking forward to receiving more project proposals from various
Debian teams! Learn more about the rationale behind this initiative in this article.

Debian LTS contributors

In June, 12 contributors were paid to work on Debian LTS; their reports are available below:

  • Abhijith PA did 18.0h (out of 14h assigned and 19h from May), thus carrying over 15h to July.
  • Anton Gladky did 12h (out of 12h assigned).
  • Ben Hutchings did 13.25h (out of 14h assigned and 2h from May), thus carrying over 2.75h to July.
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 29h (out of 40h assigned), thus carrying over 11h to July.
  • Holger Levsen‘s work was coordinating/managing the LTS team, he did 3.5h (out of 12h assigned) and gave back 8.5h to the pool.
  • Markus Koschany did 29.75h (out of 30h assigned plus 29.75h from May), thus carrying over 30h for July.
  • Ola Lundqvist did 10h (out of 12h assigned and 4.5h from May), thus carrying over 6.5h to July.
  • Roberto C. Sánchez did 12h (out of 32h assigned), thus carrying over 20h to July.
  • Sylvain Beucler did 30h (out of 30h assigned).
  • Thorsten Alteholz did 30h (out of 30h assigned).
  • Utkarsh Gupta did not report back about their work, so we assume they did nothing (out of 40h assigned), thus carrying over 40h to July.

Evolution of the situation

In June we released 30 DLAs. As already written last month we are looking for a Debian LTS project manager and team coordinator.
Finally, we would like to remark once again that we are constantly looking for new contributors. Please contact Holger if you are interested!

The security tracker currently lists 41 packages with a known CVE and the dla-needed.txt file has 23 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.

17 July, 2021 03:20PM by Raphaël Hertzog

hackergotchi for Andy Simpkins

Andy Simpkins

Dual boot Debian and Windows

Installing a new laptop

‘New’ is a 2nd hand Thinkpad T470p laptop that I intend to dual boot with Windows.
I have been a Debian user for over 20 years. I use Windows at work for the proprietary EDA ‘Altium’, but I have never had a Windows installation on my laptop. This machine will be different – it is the first laptop that I have owned with a GPU sufficient to realistically run Altium. I will try it in a VM later (if that works it will be my preferred choice), but for now I want to try a dual boot system.

So where to start?

Step one Debian wiki…

https://wiki.debian.org/DimentionedDualBoot/Windows

My laptop was purchased from a dealer / refurbisher. This means that they had confirmed that the hardware was functional, wiped it down and then installed a ‘clean’ copy of Windows on the whole system. What it doesn’t mean is that the system was set for UEFI boot and that the EFI partition is set correctly….

I turned on UEFI and made sure that Legacy BIOS mode was disabled.

Next I re-installed Windows, making sure to leave enough disk space for my later Debian install. (If you already have UEFI / Secure Boot enabled then you could skip the reinstall and instead re-size your disk.)

Eeew! Windows now wants to show me adverts. It doesn’t give me the option to never show ads, but at least I could insist that it doesn’t display tailored ads based on the obvious snooping of my web browsing habits – just another reason to use Debian.

Now to install Debian…

I want an encrypted file system, and because I want to dual boot I can’t just follow the guided installation in the Debian installer, so I shall detail what I did here. Indeed I took several attempts at this and eventually asked for help as I had still messed up (I thought I was doing it correctly but had missed out a step).

First the boiler plate DI

  • Download your preferred Debian installation media (I am using Bullseye AMD64 netinst beta), and drop this directly onto a USB memory stick (dd – see the sketch after this list)
  • Put the USB stick in the laptop and select this as the boot device (on my thinkpad the boot device menu is F12)
  • I chose the graphical installation option, but only because it was less key strokes to select
  • Select your preferred Locale
    • UI language (English)
    • Enter your location (United Kingdom)
    • …and keyboard layout (British English)
  • Next DI comes up with a whole host of missing firmware for the detected WiFi – I can safely ignore this as I have a network cable plugged in (select No). If I want to enable WiFi I could choose to add media with the firmware at this stage or add it later.
    • I have a network cable plugged in and DI finds and configures my network setup (IPv6 and v4 with DHCP)
  • I enter a hostname (I chose to name my machines after lizards – this will be called skink)
  • I am asked for a domain name (I have koipond.org.uk configured)
  • You are then asked for some account details
    • I do not enter a root password as I want the root account login disabled
    • But I do provide my details for a user account
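
As mentioned in the first step above, writing the image to the stick is a single dd invocation. A minimal sketch – the image filename and /dev/sdX are assumptions, so check the real device name with lsblk first, because dd will overwrite whatever it is pointed at:

# write the netinst image to the USB stick (run on the machine preparing the stick)
sudo dd if=debian-bullseye-DI-rc2-amd64-netinst.iso of=/dev/sdX bs=4M status=progress conv=fsync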

Now for the interesting bit – Partitioning the disk(s)

Select MANUAL disk partitioning…

I have the following partitions:

/dev/nvmen0p1
1.0MB FREE SPACE
#1 536.9 MB B K ESP
400.0 GB FREE SPACE
#3 16.8 MB Microsoft reserved partition
#4 111.6 GB ntfs Basic data partition
335.4 kB FREE SPACE

  • Create a partition for /boot
    • Select the 400GB free space
    • Create a new partition
    • Enter enough space for /boot (>100MB; I select 500 MB)
    • place this at the beginning of the disk
    • Name it (boot)
    • Use as ext2 – we don’t want journaling here
    • Mount point – /boot
  • Set up encrypted volumes
    • We need to write the new partition table to disk before we can continue
    • Create encrypted volumes
      • select the large remaining area of free space
      • name it (skink)
      • write disk configuration
      • finish
      • let the system overwrite the partition with random data
      • enter a passphrase for the disk
  • Set up LVM (inside the encrypted volume)
    • Select Configure Logical Volume Manager
    • Write changes to disk (we do this a lot)
    • Create volume group
      • Give it a name (VG-Skink)
      • Select the encrypted partition
    • Create logical volume (swap)
      • Select the volume group to use (VG-Skink)
      • Enter a name (LV-Swap)
      • Enter size of swap (32G)
    • Create logical volume (system)
      • Select the volume group to use (VG-Skink)
      • Enter a name (LV-System)
      • Enter size of the system volume (remaining space)
    • Finish

Set use

  • Select your LVM VG for swap
    • Use as: Swap area
    • Done Setting up partition
  • Select your LVM VG for system
    • Use as: Ext4 journaling file system
    • Mount point: / – the root file system
    • Mount options: I select ‘discard’ (the trim function, as this makes a considerable improvement to disk performance and life)

I now have the following partitions:

LVM VG VG-Skink
#1 32 GB f swap swap
LVM VG VG-System
#1 367.5 GB f ext4 /
Encrypted volume
#1 399.5 GB K lvm
/dev/nvmen0p1
1.0MB FREE SPACE
#1 536.9 MB B K ESP
#2 500.2 MB F ext2 /boot
#5 399.2 GB K crypto skink
#3 16.8 MB Microsoft reserved partition
#4 111.6 GB ntfs Basic data partition
335.4 kB FREE SPACE

  • Finish partitioning and write changes to disk
    • Write the changes to disk
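
For the curious, the encrypted LVM layout built above corresponds roughly to the following commands if done by hand after creating the partitions. This is only a sketch, not what DI runs verbatim, and it assumes the disk is /dev/nvme0n1 with the crypto partition as partition 5, plus the volume names used above:

# create the LUKS container and open it as 'skink'
cryptsetup luksFormat /dev/nvme0n1p5
cryptsetup luksOpen /dev/nvme0n1p5 skink
# put LVM inside the encrypted device
pvcreate /dev/mapper/skink
vgcreate VG-Skink /dev/mapper/skink
lvcreate -L 32G -n LV-Swap VG-Skink
lvcreate -l 100%FREE -n LV-System VG-Skink
# create the swap area and the root file system
mkswap /dev/VG-Skink/LV-Swap
mkfs.ext4 /dev/VG-Skink/LV-System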

Boiler plate debian install continues

The system will install a base system

  • Configure package manager – Select nearest mirror (I run a local mirror so select enter information manually)
  • Yes I do want to take part in “popcon” (Debian uses this as a guide to how many instances of each package are installed – I select this for anything other than test installs)
  • Software Selection
    • I will have a desktop environment and I currently use KDE
    • I would like an ssh server to be installed
    • I want the standard system utilities

Sit back and wait for the system to install…

Well that didn’t take very long – damn, this new laptop is quick. I suspect that is the NVMe solid state storage, no longer limited to SATA bus speeds (and even that wasn’t slow).

17 July, 2021 01:58PM by andy

July 16, 2021

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.10.6.0.0 on CRAN: A New Upstream

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 882 other packages on CRAN.

This new release gets us Armadillo 10.6.0 which was released yesterday. We did the usual reverse dependency checks (which came out spotless and clean), and had also just done even fuller checks for Rcpp 1.0.7.

Since the previous RcppArmadillo 0.10.5.0.0 release we made a few interim releases to the drat repo. In general, Conrad is a little more active than we want to be with (monthly or less frequent) CRAN updates, so keep an eye on the drat repo (or follow the GitHub repo) for a higher-frequency cadence. To use the drat repo, use install.packages("RcppArmadillo", repos="https://RcppCore.github.io/drat") or update.packages() with a similar repos argument.

The full set of changes follows. We include the last interim release as well.

Changes in RcppArmadillo version 0.10.6.0.0 (2021-07-16)

  • Upgraded to Armadillo release 10.6.0 (Keep Calm)

    • expanded chol() to optionally use pivoted decomposition

    • expanded vector, matrix and cube constructors to allow element initialisation via fill::value(scalar), eg. mat X(4,5,fill::value(123))

    • faster loading of CSV files when using OpenMP

    • added csv_opts::semicolon option to allow saving/loading of CSV files with semicolon (;) instead of comma (,) as the separator

Changes in RcppArmadillo version 0.10.5.3.0 (2021-07-01)

  • Upgraded to Armadillo release 10.5.3 (Antipodean Fortress)

  • GitHub-only release

  • Extended test coverage with several new tests, added a coverage badge.

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

16 July, 2021 11:16PM

Russell Coker

Thoughts about RAM and Storage Changes

My first Linux system in 1992 was a 386 with 4MB of RAM and a 120MB hard drive which (for some reason I forgot) only was supported by Linux for about 90MB. My first hard drive was 70MB and could do 500KB/s for contiguous IO, my first Linux hard drive was probably a bit faster, maybe 1MB/s. My current Linux workstation has 64G of RAM and 2*1TB NVMe devices that can sustain about 1.1GB/s. The laptop I’m using right now has 8GB of RAM and a 180GB SSD that can do 380MB/s.

My laptop has 2000* the RAM of my first Linux system and maybe 400* the contiguous IO speed. Currently I don’t even run a VM with less than 4GB of RAM, NB I’m not saying that smaller VMs aren’t useful merely that I don’t happen to be using them now. Modern AMD64 CPUs support 2MB “huge pages”. If I used 2MB pages everywhere they would be a smaller proportion of system RAM than the 4KB pages were on my first Linux system!

I am not suggesting using 2MB pages for general systems. For my workstations the majority of processes are using less than 10MB of resident memory, and given the different uses for memory mapped shared objects, memory mapped file IO, malloc(), stack, heap, etc there would be a lot of inefficiency in having 2MB as the limit for all allocations. But as systems worked with 4MB of RAM or less and 4K pages it would surely work to have only 2MB pages with 64GB or more of RAM.
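
On Linux you can check what the kernel is actually doing with huge pages; a quick look, assuming a reasonably recent kernel:

# base huge page size and how many explicit huge pages are reserved
grep -E 'Hugepagesize|HugePages_Total' /proc/meminfo
# whether transparent huge pages are in use (always/madvise/never)
cat /sys/kernel/mm/transparent_hugepage/enabled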

Back in the 90s it seemed ridiculous to me to have 256 byte pages on a 68030 CPU, but 4K pages on a modern AMD64 system is even more ridiculous. Apparently AMD64 supports 1GB pages on some CPUs, that seems ridiculously large but when run on a system with 1TB of RAM that’s comparable to 4K pages on my first Linux system. Currently AWS offers 24TB EC2 instances and the Google Cloud Platform offers 12TB virtual machines. It might even make sense to have the entire OS using 1GB pages for some usage scenarios on such systems, wasting tens of GB of RAM to save TLB thrashing might be a good trade-off.

My personal laptop has 2000* the RAM of my first Linux system and maybe 400* the contiguous IO speed. An employer recently assigned me a Thinkpad Carbon X1 Gen6 with an NVMe device that could sustain 5GB/s until the CPU overheated, that’s 5000* the contiguous IO speed of my first Linux hard drive. My first hard drive had a 28ms average access time and my first Linux hard drive probably was a little better, let’s call it 20ms for the sake of discussion. It’s generally quoted that access times for NVMe are at best 10us, that’s 2000* better than my first Linux hard drive. As seek times are the main factor for swap performance a laptop with 8GB of RAM and a fast NVMe device could be expected to give adequate performance with 2000* the swap of my first Linux system. For the work laptop in question I had 8G of swap and my personal laptop has 6G of swap which is somewhat comparable to the 4MB of swap on my first Linux system in that swap is about equal to RAM size, so I guess my personal laptop is performing better than it can be expected to.

These are just some idle thoughts about hardware changes over the years. Don’t take it as advice for purchasing hardware and don’t take it too seriously in general. Also when writing comments don’t restrict yourself to being overly serious, feel free to run the numbers on what systems with petabytes of Optane might be like, speculate on what NUMA systems in laptops might be like, etc. Go wild.

16 July, 2021 01:23PM by etbe

Jamie McClelland

From Ikiwiki to Hugo

Back in the days of Etch, I converted this blog from Drupal to ikiwiki. I remember being very excited about this brand new concept of static web sites derived from content stored in a version control system.

And now over a decade later I’ve moved to hugo.

I feel some loyalty to ikiwiki and Joey Hess for opening my eyes to the static web site concept. But ultimately I grew tired of splitting my time and energy between learning ikiwiki and hugo, which has been my tool of choice for new projects. When I started getting strange emails that I suspect had something to do with spammers filling out ikiwiki’s commenting registration system, I choose to invest my time in switching to hugo over debugging and really understanding how ikiwiki handles user registration.

I carefully reviewed anarcat’s blog on converting from ikiwiki to hugo and learned about a lot of ikiwiki features I am not using. Wow, it’s times like these that I’m glad I keep it really simple. Based on the various ikiwiki2hugo python scripts I studied, I eventually wrote a far simpler one tailored to my needs.
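
For anyone curious, the core of such a conversion is quite small. Here is a rough shell sketch of the idea (not the script I actually wrote; the posts/ and content/posts/ paths are assumptions): rename each .mdwn file to .md and turn its [[!meta title="..."]] directive into Hugo front matter.

for f in posts/*.mdwn; do
  # pull the title out of the ikiwiki meta directive
  title=$(sed -n 's/.*\[\[!meta title="\([^"]*\)".*/\1/p' "$f" | head -n1)
  out="content/posts/$(basename "$f" .mdwn).md"
  # write Hugo front matter, then the post body minus the meta directives
  { printf -- '---\ntitle: "%s"\n---\n' "$title"
    grep -v '\[\[!meta' "$f"
  } > "$out"
done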

Also, in what could only be called a desperate act of procrastination combined with a touch of self-hatred (it’s been a rough week) I rejected all the commenting options available to me and choose to implement my own in PHP.

What?!?! Why would anyone do such a thing?

I refer you to my previous sentence about desperate procrastination. And also… I know it’s fashionable to hate PHP, but honestly as the first programming language I learned, there is something comforting and familiar about it. And, on a more objective level, I can deploy it easily to just about any hosting provider in the world. I don’t have to maintain a unicorn service or a nodejs service and make special configuration entries in my web configuration. All I have to do is upload the php files and I’m done.

Well, I’m sure I’ll regret this decision.

Special thanks to Alexander Bilz for the anatole hugo theme. I choose it via a nearly random click to avoid the rabbit hole of choosing a theme. And, by luck, it has turned out quite well. I only had to override the commento partial theme page to hijack it for my own commenting system’s use.

16 July, 2021 12:27PM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Bought BEHRINGER U-PHORIA 2-Channel UMC202HD.

Bought BEHRINGER U-PHORIA 2-Channel UMC202HD. My previous Q1002US mixer seemed to have an unreliable right channel bus and I was getting worried. MIDAS preamp, simple to use. Connect it to a Linux box and it works as a USB audio device with two channels of input. It has two inputs and hence it will probably look like a stereo recording. If I connect my XM8500 it turns out to be on the left channel, for example. Looking at the waveforms, the noise floor is lower than the Q1002US and I love it so far. Used my hack yesterday to confirm its behavior.

16 July, 2021 12:30AM by Junichi Uekawa

Reproducible Builds (diffoscope)

diffoscope 178 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 178. This version includes the following changes:

[ Chris Lamb ]
* Don't traceback on an broken symlink in a directory.
  (Closes: reproducible-builds/diffoscope#269)
* Rewrite the calculation of a file's "fuzzy hash" to make the control
  flow cleaner.

[ Balint Reczey ]
* Support .deb package members compressed with the Zstandard algorithm.
  (LP: #1923845)

[ Jean-Romain Garnier ]
* Overhaul the Mach-O executable file comparator.
* Implement tests for the Mach-O comparator.
* Switch to new argument format for the LLVM compiler.
* Fix test_libmix_differences in testsuite for the ELF format.
* Improve macOS compatibility for the Mach-O comparator.
* Add llvm-readobj and llvm-objdump to the internal EXTERNAL_TOOLS data
  structure.

[ Mattia Rizzolo ]
* Invoke gzip(1) with the short option variants to support Busybox's gzip.

You can find out more by visiting the project homepage.

16 July, 2021 12:00AM

July 15, 2021

hackergotchi for Jonathan Dowland

Jonathan Dowland

Small tweaks to `git branch` behaviour

Despite my best efforts, I often end up with a lot of branches in my git repositories, many of which need cleaning up, but even so, many which don't. Two git configuration tweaks make the output of git branch much more useful for me.

Motivational example, default git behaviour:

🍊git branch
  2021-apr-cpu-proposed
  OPENJDK-159-openj9-FROM
  OPENJDK-312-passwd
  OPENJDK-407-dnf-modules-fonts
  create_override_files_in_redhat_189
* develop
  inline-container-yaml
  local-modules
  mdrafiur-pr185-jolokia
  openjdk-containers-1.9
  openjdk-rm-jolokia
  osbs-openjdk
  release
  signing-intent-release
  ubi-1.3-mergedown
  ubi-11-singleton-jdk
  ubi8.2
  update-FROM-lines
  update-for-cct-module-changes-maven-etc

The default sort order is alphabetical, but that's never useful for the repositories I work in. The age of the branch is generally more useful. This particular example isn't that long, but often the number of branches can fill the screen. git can be configured to use columns for branch listings, which I think generally improves readability.

🍊git config --global branch.sort authordate
🍊git config --global column.branch auto

After:

🍊git branch
  update-for-cct-module-changes-maven-etc   signing-intent-release
  openjdk-rm-jolokia                        local-modules
  ubi8.2                                    mdrafiur-pr185-jolokia
  ubi-11-singleton-jdk                      OPENJDK-312-passwd
  ubi-1.3-mergedown                         create_override_files_in_redhat_189
  OPENJDK-159-openj9-FROM                   2021-apr-cpu-proposed
  openjdk-containers-1.9                    OPENJDK-407-dnf-modules-fonts
  inline-container-yaml                     release
  update-FROM-lines                       * develop
  osbs-openjdk

15 July, 2021 08:34AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

July 14, 2021

Pavit Kaur

GSoC: First Phase of Coding Period

Hello there.

I still can’t believe that the first half of GSoC period is almost over. So it’s been about 5 weeks working on the project and that means I have a lot to share about it. So without further ado, let’s get started.

coding-period-1

I will be listing up my work done in the respective tasks.

Task: Migrating Logins to Salsa

The objective of this task was that users could log in to their account on debci using their Debian Salsa account (the collaborative development server for Debian, based on the GitLab software), and this is implemented with the help of OmniAuth, the Ruby authentication framework.

At the beginning of this, I had to discuss quite a few issues with my mentors that I was bumping into, and by the end of it with multiple revisions and discussions, the following was implemented:

  • The previous users' table schema of debci had a username field which mostly contained the users' emails, with some exceptions. To accommodate the Salsa logins, a new uid field was added to the table to store the Salsa uid of the logged-in user, with the username field now storing Salsa usernames. And since Salsa users are free to change their usernames, updating the username in the debci database is also taken care of.

  • For Salsa login, the ruby-omniauth-gitlab strategy has been used and for login in development mode, the developer strategy which comes with ruby-omniauth has been set up.

  • Added a Login Page giving the option to log in using Salsa, plus an additional option to log in in Developer Mode, which is accessible only in the development setup, so that other contributors don’t have to set up dummy Salsa applications to work on debci.

  • Added specs for the new login process. This was an interesting part, as I got the chance to understand RSpec and facilities provided by OmniAuth to mock the authentication for Integration Testing.

  • One blocker that I dealt with was that the Debian release from which packages are pulled for debci has OmniAuth version 1.8, which was not working well with the developer strategy implementation for the application, so to resolve that I made a minor change to the callback API for the developer strategy until that release has a newer version of OmniAuth.

  • Another thing we discussed in one of the meetings was that in the existing database structure, the tests do not have a real reference to the users' table; instead the username is stored directly as a string in the requestor field, so this was fixed as part of this task.

The migration of the existing users’ data for the new logins was handled by my mentor Antonio Terceiro, and with this, our first task is concluded. All these changes are now part of the Debian Continuous Integration platform, and you can find Antonio’s blog post about it here.

This task also allowed me to write my first ever tutorial Tutorial: Integrating OmniAuth with Sinatra Application to help people looking to integrate their ruby application with OmniAuth.

Moving further to the next task in progress.

Task: Adding support for testing security uploads and Debian LTS

The next task I am working on is enabling private tests in debci, to add support for testing security uploads and Debian LTS. Since it’s a bigger task, it is broken down into about 6-7 steps, and till now the following has been done:

  • The schema of jobs' (tests) table is updated to have a boolean field to store whether the job is private or not.

  • The is_private parameter is added to both the API and the Self-Service section, so private tests can be submitted through the API as well as through the GUI form on the web platform.

  • Another thing which came up through discussion in meetings was that a parameter is required to add extra apt sources for getting packages from the security repository; this part is in progress.

So that concludes my work till now. It has been an amazing journey with lots of learning and also the guidance from the wonderful mentors of my project and I am looking forward to more exciting parts ahead.

That’s all for now. See you next time!

14 July, 2021 03:52AM by Pavit Kaur (pavitk1@gmail.com)

July 13, 2021

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Optimization silver bullets

If you work with optimizing code for a while, you'll notice that a fairly common pattern is for people to believe in optimization silver bullets; just one trick that they think is always the solution for whatever woes you may have. It's not that said thing is bad per se, it's just that they keep suggesting the same thing over and over even if that's not actually the issue.

To name some examples: I've seen people suggesting that removing mallocs is always the solution (even if malloc didn't show up on the profile), or that adding likely() and unlikely() everywhere would double the IPC of a complex system (PGO, with near-perfect condition probabilities, gave 5%), or designing a system entirely around minimizing instruction cache pressure (where the system they intended to replace didn't have issues with instruction cache). And I guess we've all seen the people insisting on optimizing their code with -O9, because higher is better, right, and who are the GCC people to compile their own code with -O2 anyway?

I've more or less learned to ignore these people, as long as they don't show up with profiles and microbenchmarks, which they never do. (This is the easiest way to see if people's suggestions are bogeymen or real; if people know what they're doing, they can point to a real profile, and they'll write a stable microbenchmark to show that they've actually fixed the issue and to guard against future regressions.) But there's one silver bullet that always rubs me the wrong way: False sharing.

False sharing is when two unrelated items happen to lie on the same cache line, and they are accessed frequently by different cores. Seemingly, false sharing is just exotic enough that people have heard of it and are proud of that, and then they start being afraid of it everywhere for no good reason. I've seen people writing large incantations to protect against false sharing, presumably blowing the data cache in the process, and then discovered that due to them misunderstanding the compiler, the entire thing had been a no-op for years. It's pretty crazy.
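
Incidentally, this is something that can be measured rather than guessed at: on Linux, perf has a mode specifically for finding contended cache lines. A sketch, assuming a perf build with c2c support and a placeholder workload binary:

# sample loads and stores that touch shared cache lines while the workload runs
perf c2c record -- ./your-program
# report the cache lines with the most cross-CPU HITM events, the classic false-sharing signature
perf c2c report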

That's why I was very happy to finally, after 25 years of multithreaded coding, discover a real case of false sharing in PiStorm; one thread had a local variable made global for some no-longer-relevant debugging reasons, and another thread was making constant writes to a global one in a busy loop. Really a classic, bad case of false sharing. Rune wrote up a patch, and lo and behold, the benchmarks went up!

…by about one percent.

13 July, 2021 02:32PM

hackergotchi for Matthew Garrett

Matthew Garrett

Does free software benefit from ML models being derived works of training data?

Github recently announced Copilot, a machine learning system that makes suggestions for you when you're writing code. It's apparently trained on all public code hosted on Github, which means there's a lot of free software in its training set. Github assert that the output of Copilot belongs to the user, although they admit that it may occasionally produce output that is identical to content from the training set.

Unsurprisingly, this has led to a number of questions along the lines of "If Copilot embeds code that is identical to GPLed training data, is my code now GPLed?". This is extremely understandable, but the underlying issue is actually more general than that. Even code under permissive licenses like BSD requires retention of copyright notices and disclaimers, and failing to include them is just as much a copyright violation as incorporating GPLed code into a work and not abiding by the terms of the GPL is.

But free software licenses only have power to the extent that copyright permits them to. If your code isn't a derived work of GPLed material, you have no obligation to follow the terms of the GPL. Github clearly believe that Copilot's output doesn't count as a derived work as far as US copyright law goes, and as a result the licenses on the training data don't apply to the output. Some people have interpreted this as an attack on free software - Copilot may insert code that's either identical or extremely similar to GPLed code, and claim that there are no license obligations created as a result, effectively allowing the laundering of GPLed code into proprietary software.

I'm completely unqualified to hold a strong opinion on whether Github's legal position is justifiable or not, and right now I'm also not interested in thinking about it too much. What I think is more interesting is what the impact of either position has on free software. Do we benefit more from a future where the output of Copilot (or similar projects) is considered a derived work of the training data, or one where it isn't? Having been involved in a bunch of GPL enforcement activities, it's very easy to think of this as something that weakens the GPL and, as a result, weakens free software. That was my initial reaction, but that's shifted over the past few days.

Let's look at the GNU manifesto, specifically this section:

The fact that the easiest way to copy a program is from one neighbor to another, the fact that a program has both source code and object code which are distinct, and the fact that a program is used rather than read and enjoyed, combine to create a situation in which a person who enforces a copyright is harming society as a whole both materially and spiritually; in which a person should not do so regardless of whether the law enables him to.

The GPL makes use of copyright law to ensure that GPLed work can't be taken from the commons. Anyone who produces a derived work of GPLed code is obliged to provide that work under the same terms. If software weren't copyrightable, the GPL would have no power. But this is the outcome Stallman wanted! The GPL doesn't exist because copyright is good, it exists because software being copyrightable is what enables the concept of proprietary software in the first place.

The powers that the GPL uses to enforce sharing of code are used by the authors of proprietary software to reduce that sharing. They attempt to forbid us from examining their code to determine how it works - they argue that anyone who does so is tainted, unable to contribute similar code to free software projects in case they produce a derived work of the original. Broadly speaking, the further the definition of a derived work reaches, the greater the power of proprietary software authors. If Oracle's argument that APIs are copyrightable had prevailed, it would have been disastrous for free software. If the Apple look and feel suit had established that Microsoft infringed Apple's copyright, we might be living in a future where we had no free software desktop environments.

When we argue for an interpretation of copyright law that enhances the power of the GPL, we're also enhancing the power of giant corporations with a lot of lawyers on hand. So let's look at this another way. If Github's interpretation of copyright law holds, we can train a model on proprietary code and extract concepts without having to worry about being tainted. The proprietary code itself won't enter the commons, but the ideas it embodies will. No more worries about whether you're literally copying the code that implements an algorithm you want to duplicate - simply start typing and let the model remove the risk for you.

There's a reasonable counter argument about equality here. How much GPL-influenced code is going to end up in proprietary projects when compared to the reverse? It's not an easy question to answer, but we should bear in mind that the majority of public repositories on Github aren't under an open source license. Copilot is already claiming to give us access to the concepts embodied in those repositories. Do these provide more value than is given up? I honestly don't know how to measure that. But what I do know is that free software was founded in a belief that software shouldn't be constrained by copyright, and our default stance shouldn't be to argue against the idea that copyright is weaker than we imagined.

(Edit: this post by Julia Reda makes some of the same arguments, but spends some more time focusing on a legal analysis of why having copyright cover the output of Copilot would be a problem)

13 July, 2021 01:57AM

hackergotchi for Debian XMPP Team

Debian XMPP Team

XMPP Novelties in Debian 11 Bullseye

This is not only the Year of the Ox, but also the year of Debian 11, code-named bullseye. The release lies ahead, and the full freeze starts this week. A good opportunity to take a look at what is new in bullseye. In this post, new programs and new software versions related to XMPP, also known as Jabber, are presented. XMPP has existed since 1999 and has a diverse and active developer community. It is a universal communication protocol, used for instant messaging, IoT, WebRTC, and social applications. You will probably encounter some oxen in this post.

  • biboumi, XMPP gateway to connect to IRC servers: 8.3 → 9.0
    The biggest change for users is SASL support: A new field in the Configure ad-hoc command lets you set a password that will be used to authenticate to the nick service, instead of using the cumbersome NickServ method.
    Many more changes are listed in the changelog.
  • Dino, modern XMPP client: 0.0.git20181129 → 0.2.0
    Dino in Debian 10 was practically a technology preview. In Debian 11 it is already a fully usable client, supporting OMEMO encryption, file upload, image preview, message correction and many more features in a clean and beautiful user interface.
  • ejabberd, the extensible realtime platform: 18.12.1 → 21.01.
    Probably the most important improvement for end-users is XEP-0215 support to facilitate modern WebRTC-style audio/video calls. ejabberd also integrates more nicely with systemd (e.g., the watchdog feature if supported, now). Apart from that, a new configuration validator was introduced, which brings a more flexible (but mostly backwards-compatible) syntax. Also, error reporting in case of misconfiguration should be way more helpful, now. As a new authentication backend, JSON Web Tokens (JWT) can be used. In addition to the XMPP and SIP support, ejabberd now includes a full-blown MQTT server. A large number of smaller features has been added, performance was improved in many ways, and several bugs were fixed. See the long list of changes.
  • Gajim, a GTK+-based Jabber client: 1.1.2 → 1.3.1
    The new Debian release brings many improvements. Gajim’s network code has been completely rewritten, which leads to faster connections, better recovery from network loss, and less network related hiccups. Customizing Gajim is now easier than ever. Thanks to the new settings backend and a completely reworked Preferences window, you can adapt Gajim to your needs in just a few seconds.
    Good for newcomers: account creation is now a lot easier with Gajim’s new assistant. The new Profile window gives you many options to tell people more about yourself. You can now easily crop your own profile picture before updating it.
    Group chats actions have been reorganized. It’s now easier to send invitations or change your nickname for example. Gajim also received support for chat markers, which enables you to see how far your contact followed the conversation. But this is by far not everything the new release brings. There are many new and helpful features, such as pasting images from your clipboard directly into the chat or playing voice messages directly from the chat window.
    Read more about the new Gajim release in Debian 11 here.
    Furthermore, three more Gajim plugins are now in Debian: gajim-lengthnotifier, gajim-openpgp for OX 🐂 (XEP-0373: OpenPGP for XMPP) and gajim-syntaxhighlight.
  • NEW Kaidan Simple and user-friendly Jabber/XMPP client 0.7.0
    Kaidan is a simple, user-friendly and modern XMPP chat client. The user interface makes use of Kirigami and QtQuick, while the back-end of Kaidan is entirely written in C++ using Qt and the Qt-based XMPP library QXmpp. Kaidan runs on mobile and desktop systems including Linux, Windows, macOS, Android, Plasma Mobile and Ubuntu Touch.
  • mcabber, small Jabber (XMPP) console client: 1.1.0 → 1.1.2
    A theme for 256 color terminals is now included, the handling of carbon message copies has been improved, and various minor issues have been fixed.
  • Poezio, Console-based XMPP client: 0.12.1 → 0.13.1
    This new release brings many improvements, such as Message Archive (XEP-0313) support, initial support for OMEMO (XEP-0384) through a plugin, HTTP File Upload support, Consistent Color Generation (XEP-0392), and plenty of internal changes and bug fixes. Not all changes in 0.13 and 0.13.1 can be listed, see the CHANGELOG for a more extensive summary.
  • Profanity, the console based XMPP client: 0.6.0 → 0.10.0
    We can not list all changes which have been done, but here are some highlights.
    Support of OMEMO Encryption (XEP-0384). Consistent Color Generation (XEP-0392), be aware of the changes in the command to standardize the names of commands. A clipboard feature has been added. Highlight unread messages with a different color in /wins. Keyboard switch to select the next window with unread messages with alt + a. Support for Last Message Correction (XEP-0308), Allow UTF-8 symbols as OMEMO/OTR/PGP indicator char. Add option to open avatars directly (XEP-0084). Add option to define a theme at startup and some changes to improve themes. Add possibility to easily open URLs. Add experimental OX 🐂 (XEP-0373, XEP-0374) support. Add OMEMO media sharing support, ...
    There is also a Profanity light package in Debian now, the best option for systems with tight limits on resources.
  • Prosody, the lightweight extensible XMPP server: 0.11.2 → 0.11.9
    Upgrading to the latest stable release of Prosody brings a whole load of improvements in the stability, usability and performance departments. It especially improves the performance of websockets, and PEP performance for users with many contacts. It includes interoperability improvements for a range of clients.
  • prosody-modules, community modules and extensions for Prosody: 0.0~hg20190203 → 0.0~hg20210130
    The ever-growing collection of goodies to plug into Prosody has a number of exciting additions, including a suite of modules to handle invite-based account registration, and others for moderating messages in group chats (e.g. for removal of spam/abuse), server-to-server federation over Tor and client authentication using certificates. Many existing community modules received updates as well.
  • Psi, Qt-based XMPP client: 1.3 → 1.5
    The new version contains important bug fixes.
  • salutatoi, multi-frontends, multi-purposes communication tool: 0.7.0a4 → 0.8.0~hg3453
    This version is now fully running on Python 3, and has full OMEMO support (one2one, groups and files). The CLI frontend (jp) has among new commands a "jp file get" one which is comparable to wget with OMEMO support. A file sharing component is included, with HTTP Upload and Jingle support. For a list of other improvements, please consult the changelog.
    Note, that the upstream project has been renamed to "Libervia".
  • NEW sms4you, Personal gateway connecting SMS to XMPP or email 0.0.7
    It runs with a GSM device over ModemManager and uses a lightweight XMPP server or a single email account to handle communication in both directions.
  • NEW xmppc, XMPP Command Line Client 0.1.0
    xmppc is a new command line tool for XMPP. It supports some basic features of XMPP (request your roster, bookmarks, OMEMO Devices and fingerprints). You can send messages with both legacy PGP (XEP-0027) and the new OX 🐂 (XEP-0373: OpenPGP for XMPP).

That's all for now. Enjoy Debian 11 bullseye and Happy Chatting!

13 July, 2021 12:00AM by Debian XMPP Team

July 12, 2021

hackergotchi for Chris Lamb

Chris Lamb

Saint Alethia? On Bodies of Light by Sarah Moss

How are you meant to write about an unfinished emancipation? Bodies of Light is a 2014 book by Glasgow-born Sarah Moss on the stirrings of women's suffrage in an arty clique in nineteenth-century England. Set in the intellectually smoggy cities of Manchester and London, we follow the studious and intelligent Alethia 'Ally' Moberly, who is struggling to gain the acceptance of herself, her mother and the General Medical Council.

'Alethia' may be the Greek goddess of truth, but our Ally is really searching for wisdom. Her strengths are her patience and bookish learning, and she acquires Latin as soon as she learns male doctors will use it to keep women away from the operating theatre. In fact, Ally's acquisition of language becomes a recurring leitmotif: replaying a suggestive dream involving a love interest, for instance, Ally thinks of 'dark, tumbling dreams for which she has a perfectly adequate vocabulary'. There are very few moments of sensuality in the book, and pairing it with Ally's understated wit achieves a wonderful effect.

The amount we learn about a character is adapted for effect as well. There are few psychological insights about Ally's sister, for example, and she thus becomes a fey, mysterious and almost Pre-Raphaelite figure below the surface of a lake to match the artistic movement being portrayed. By contrast, we get almost the complete origin story of Ally's mother, Elizabeth, who also constitutes one of those rare birds in literature: an entirely plausible Christian religious zealot. Nothing Ally does is ever enough for her, but unlike most modern portrayals of this dynamic, neither of them is aware of what is going on, and it is conveyed in a way that is chillingly... benevolent. This was brought home in the annual 'birthday letters' that Elizabeth writes to her daughter:

Last year's letter said that Ally was nervous, emotional and easily swayed, and that she should not allow her behaviour to be guided by feeling but remember always to assert her reason. Mamma would help her with early hours, plain food and plenty of exercise. Ally looks at the letter, plump in its cream envelope. She hopes Mamma wrote it before scolding her yesterday.

§

The book makes the implicit case that it is a far more robust argument against pervasive oppression to portray a character with, say, 'a comfortable house, a kind husband and a healthy child', who is nonetheless still deeply miserable, for reasons they can't quite put their finger on. And when we see Elizabeth perpetuating some generational trauma with her own children, it is telling that this pattern is not short-circuited by an improvement in their material conditions. Rather, it is arrested only by a kind of political consciousness — in Ally's case, her education in a school. In fact, if there is a real hero in Bodies of Light, it is the very concept of female education.

There's genuine shading to the book's ideological villains, despite finding their apotheosis in the jibes about 'plump Tories'. These remarks first stuck out to me as cheap thrills by the author; easy and inexpensive potshots that are unbecoming of the pages around them. But they soon prove themselves to be moments of much-needed humour. Indeed, when passages like this are read in their proper context, the proclamations made by sundry Victorian worthies start to serve as deadpan satire:

We have much evidence that the great majority of your male colleagues regard you as an aberration against nature, a disgusting, unsexed creature and a danger to the public.

Funny as these remarks might be, however, these moments have a subtler and more profound purpose as well. Historical biography always has the risk of allowing readers to believe that the 'issue' has already been solved — hence, perhaps, the enduring appeal of science fiction. But Moss providing these snippets from newspapers 150 years ago should make a clear connection to a near-identical moral panic today.

§

On the other hand, setting your morality tale in the past has the advantage that you can show that progress is possible. And it can also demonstrate how that progress might come about as well. This book makes the argument for collective action and generally repudiates individualisation through ever-fallible martyrs. Ally always needs 'allies' — not only does she rarely work alone, but she is helped in some way by almost everyone around her. This even includes her rather problematic mother, forestalling any simplistic proportioning of blame. (It might be ironic that Bodies of Light came out in 2014, the very same year that Sophia Amoruso popularised the term 'girl boss'.) Early on, Ally's schoolteacher is coded as the primary positive influence on her, but Ally's aunt later inherits this decisive role, continuing Ally's education on cultural issues and what appears to be the Victorian version of 'self-care'. Both the aunt and the schoolteacher are, of course, surrogate mother figures.

After Ally arrives in the cut-throat capital, you often get the impression you are being shown discussions where each of the characters embodies a different school of thought within first-wave feminism. This can often be a fairly tedious device in fiction, the sort of thing you would find in a Sally Rooney novel, Pilgrim's Progress or some other ponderously polemical tract. Yet when Ally appears to 'win' an argument, it is only in the sense that the narrator continues to follow her, implicitly and lightly endorsing her point. Perhaps if I knew my history better, I might be able to associate names with the book's positions, but perhaps it is better (at least for the fiction-reading experience...) that I don't, as the baggage of real-world personalities can often get in the way. I'm reminded here of Regina King's One Night in Miami... (2020), where caricatures of Malcolm X, Muhammad Ali, Jim Brown and Sam Cooke awkwardly replay various arguments within an analogous emancipatory struggle.

Yet none of the above will be the first thing a reader will notice. Each chapter begins with a description of an imaginary painting, providing a title and a date alongside a brief critical exegesis. The artworks serve a different purpose in each chapter: a puzzle to be unlocked, a fear to be confirmed, an unsolved enigma. The inclusion of (artificial) provenances is interesting as well, not simply because they add colour and detail to the chapter to come, but because their very inclusion feels reflective of how we see art today.

Ophelia (1852) by Sir John Everett Millais.

To continue the question this piece began, how should an author conclude a story about an as-yet-unfinished struggle for emancipation? How can they? Moss' approach dares you to believe the ending is saccharine or formulaic, but what else was she meant to turn in — yet another tale of struggle and suffering? After all, Thomas Hardy has already written Tess of the d'Urbervilles. All the same, it still feels slightly unsatisfying to end merely with Ally's muted, uncelebrated success.

Nevertheless, I suspect many readers will dislike the introduction of a husband in the final pages, taking it as a betrayal of the preceding chapters. Yet Moss stops us from seeing the resolution as a Disney-style happy ending. True, Ally's husband turns out to be a rather dashing lighthouse builder, but isn't it Ally herself who is lighting the way in their relationship, warning other women away from running aground on the rocks of mental illness? And Tom feels more like a reflection of Ally's newly acquired self-acceptance than that missing piece she needed all along. We learn at one point that Tom's 'importance to her is frightening' — this is hardly something a Disney princess would say.

In fact, it is easy to argue that a heroic ending for Ally might have been an even more egregious betrayal. The evil of saints is that you can never live up to them, for the concept of a 'saint' embodies an unreachable ideal that no human can begin to copy. By being taken as unimpeachable and uncorrectable as well, saints preclude novel political action, and are therefore undoubtedly agents of reaction. Appreciating historical figures as the (flawed) people that they really were is the first step if you wish to continue — or adapt — their political ideas.

§

I had acquired Bodies of Light after enjoying Moss' Summerwater (2020), which had the dubious honour of being touted as the 'first lockdown novel', despite it being finished before Covid-19. There are countless ways one might contrast the two, so I will limit myself to the sole observation that the strengths of one are perhaps the weaknesses of the other. It's not that Bodies of Light ends with a whimper, of course, as it quietly succeeds in concert with Ally. But by contrast, the tighter arc of Summerwater (which is set during a single day, switches protagonist between chapters, features a closed-off community, etc.) can reach a higher high with its handful of narrative artifices. Summerwater is perhaps like Phil Collins' solo career: 'more satisfying, in a narrower way.'

12 July, 2021 06:10PM

hackergotchi for Daniel Silverstone

Daniel Silverstone

Subplot - First public alpha release

This weekend we (Lars and I) finished our first public alpha release of Subplot. Subplot is a tool for helping you to document your acceptance criteria for a project in such a way that you can also produce a programmatic test suite for the verification criteria. We centre this around the concept of writing a Markdown document about your project, with the option to write Gherkin-like given/when/then scenarios inside which detail the automated verification of the acceptance criteria.

This may sound very similar to Yarn, a concept which Lars, Richard, and I came up with in 2013. Critically back then we were very 'software engineer' focussed and so Yarn was a testing tool which happened to also produce reasonable documentation outputs if you squinted sideways and tried not to think too critically about them. Subplot on the other hand considers the documentation output to be just as important, if not more important, than the test suite output. Yarn was a tool which ran tests embedded in Markdown files, where Subplot is a documentation tool capable of extracting tests from an acceptance document for use in testing your project.

The release we made is the first time we're actively asking other people to try Subplot and see whether the concept is useful to them. Obviously we expect there to be plenty of sharp corners and there's a good amount of functionality yet to implement to make Subplot as useful as we want it to be, but if you find yourself looking at a project and thinking "How do I make sure this is acceptable to the stakeholders without first teaching them how to read my unit tests?" then Subplot may be the tool for you.

While Subplot can be used to produce test suites with functions written in Bash, Python, or Rust, the only language we're supporting as first-class in this release is Python. However I am personally most interested in the Rust opportunity as I see a lot of Rust programs very badly tested from the perspective of 'acceptance' as there is a tendency in Rust projects to focus on unit-type tests. If you are writing something in Rust and want to look at producing some high level acceptance criteria and yet still test in Rust, then please take a look at Subplot, particularly how we test subplotlib itself.

Issues, feature requests, and perhaps most relevantly, code patches, gratefully received. A desire to be actively involved in shaping the second goal of Subplot even more so.

12 July, 2021 05:06PM by Daniel Silverstone

July 10, 2021

hackergotchi for Laura Arjona Reina

Laura Arjona Reina

Android backups with rsync

A quick note to self to remind how I do backups of my Android device with rsync (and adb).

I have followed this guide: How to use rsync over USB on Android with adb

My personal notes:

  • I have Lineage so I have rsync in my Android device already installed
  • I run Debian stable (buster, for now) on my laptop, with adb installed
  • My /sdcard/rsyncd.conf file:

address = 127.0.0.1
port = 1873
uid = 0
gid = 0
[root]
path = /
use chroot = false
read only = false'

  • The command:

adb shell /data/local/tmp/rsync --daemon --no-detach --config=/sdcard/rsyncd.conf --log-file=/proc/self/fd/2

didn't work, produced this message: "@ERROR: protocol startup error" so I ended up doing:

adb shell
rsync --daemon --no-detach --config=/sdcard/rsyncd.conf --log-file=/sdcard/rsync.log
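
Note that for localhost:6010 on the laptop to reach the rsync daemon on the device, a port forward is needed (this step is in the guide linked above); a sketch, assuming the port 1873 from the config file above:

adb forward tcp:6010 tcp:1873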

Then I opened another tab to perform the rsync commands from my laptop:

rsync -av --progress --stats rsync://localhost:6010/root/storage .
rsync -av --progress --stats rsync://localhost:6010/root/data .

Then I saw that rsync was copying the symlinks instead of their contents: /storage/self/primary was a broken link to /mnt/user/0/primary

So I ran again the commands with -LK:

rsync -av --progress --stats -LK rsync://localhost:6010/root/storage .
rsync -av --progress --stats -LK rsync://localhost:6010/root/data .

and now I have a copy of all the files I'm interested. In addition to this, I run an adb backup of the system:

adb backup -f ./adb_backup_apk_shared_all_system.ad -apk -shared -all -system

and I think that's all that I need for the case I want to remove stuff from my phone or some disaster happens.

10 July, 2021 07:19PM by larjona

hackergotchi for Joey Hess

Joey Hess

a bitter pill for Microsoft Copilot

These blackberries are so sweet and just out there in the commons, free for the taking. While picking a gallon this morning, I was thinking about how neat it is that Haskell is not one programming language, but a vast number of related languages. A lot of smart people have, just for fun, thought of ways to write Haskell programs that do different things depending on the extensions that are enabled. (See: Wait, what language is this?)

I've long wished for an AI to put me out of work programming. Or better, that I could collaborate with. Haskell's type checker is the closest I've seen to that but it doesn't understand what I want. I always imagined I'd support citizenship for a full, general AI capable of that. I did not imagine that the first real attempt would be the product of a rent optimisation corporate AI, that throws all our hard work in a hopper, and deploys enough lawyers to muddy the question of whether that violates our copyrights.

Perhaps it's time to think about non-copyright mitigations. Here is an easy way, for Haskell developers. Pick an extension and add code that loops when it's not enabled. Or when it is enabled. Or when the wrong combination of extensions are enabled.

{-# LANGUAGE NumDecimals #-}

main :: IO ()
main = if show(1e1) /= "10" then main else do

I will deploy this mitigation in my code where I consider it appropriate. I will not be making my code do anything worse than looping, but of course this method could be used to make Microsoft Copilot generate code that is as problematic as necessary.

10 July, 2021 02:19PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

drat 0.2.1: Small Tweak

drat user

A new minor release of drat arrived on CRAN overnight. This is a minor update relative to the 0.2.0 release in April. This release will now create an empty file index.html in the top-level (when initRepo() is called), and check for presence of such a file when adding files to a repo (via insertPackage()). This helps to avoid getting ‘404’ results when (perfectly valid) drat repos are checked by accessing the top-level URL, as for example CRAN does when testing if an Additional_repositories entry is reachable. The ‘step-by-step’ vignette had already suggested creating one by hand; this is now done programmatically (and one is present in the repo suggested to fork from too).

drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users because repositories with marked releases are the better way to distribute code. See below for a few custom reference examples.

Because for once it really is as your mother told you: Friends don’t let friends install random git commit snapshots. Properly rolled-up releases it is. Just how CRAN shows us: a model that has demonstrated for two-plus decades how to do this. And you can too: drat is easy to use, documented by six vignettes and just works.

The NEWS file summarises the release as follows:

Changes in drat version 0.2.1 (2021-07-09)

  • Two internal functions now have a note in their documentation stating them as not exported (Dirk in response to #123)

  • Repositories created by initRepo now have a placeholder index.html to not trigger a curl check at CRAN (Dirk)

  • Adding to a repository now checks for a top-level index.html and displays a message if missing (Dirk)

  • The DratStepByStep.Rmd vignette mentions the added index.html file

Courtesy of my CRANberries, there is a comparison to the previous release. More detailed information is on the drat page.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

10 July, 2021 11:02AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Fixed multi-track audio recording web page that was broken for a while.

Fixed the multi-track audio recording web page that was broken for a while. It's here. But I think I wasn't satisfied with the synchronization.

10 July, 2021 09:04AM by Junichi Uekawa

hackergotchi for Sean Whitton

Sean Whitton

Live replacement of provider cloud images with upstream Debian

Tonight I’m provisioning a new virtual machine at Hetzner and I wanted to share how Consfigurator is helping with that. Hetzner have a Debian “buster” image you can start with, as you’d expect, but it comes with things like cloud-init, preconfiguration to use Hetzner’s apt mirror which doesn’t serve source packages(!), and perhaps other things I haven’t discovered. It’s a fine place to begin, but I want all the configuration for this server to be explicit in my Consfigurator consfig, so it is good to start with pristine upstream Debian. I could boot one of Hetzner’s installation ISOs but that’s slow and manual. Consfigurator can replace the OS in the VM’s root filesystem and reboot for me, and we’re ready to go.

Here’s the configuration:

(defhost foo.silentflame.com (:deploy ((:ssh :user "root") :sbcl))
  (os:debian-stable "buster" :amd64)

  ;; Hetzner's Debian 10 image comes with a three-partition layout and boots
  ;; with traditional BIOS.
  (disk:has-volumes
   (physical-disk
    :device-file "/dev/sda" :boots-with '(grub:grub :target "i386-pc")))

  (on-change (installer:cleanly-installed-once
              nil
              ;; This is a specification of the OS Hetzner's image has, so
              ;; Consfigurator knows how to install SBCL and debootstrap(8).
              ;; In this case it's the same Debian release as the replacement.
              '(os:debian-stable "buster" :amd64))

    ;; Clear out the old OS's EFI system partition contents, in case we can
    ;; switch to booting with EFI at some point (if we wanted we could specify
    ;; an additional x86_64-efi target above, and grub-install would get run
    ;; to repopulate /boot/efi, but I don't think Hetzner can boot from it yet).
    (file:directory-does-not-exist "/boot/efi/EFI")

    (apt:installed "linux-image-amd64")
    (installer:bootloaders-installed)

    (fstab:entries-for-volumes
     (disk:volumes
       (mounted-ext4-filesystem :mount-point "/")
       (partition
        (mounted-fat32-filesystem
         :mount-options '("umask=0077") :mount-point "/boot/efi"))))
    (file:lacks-lines "/etc/fstab" "# UNCONFIGURED FSTAB FOR BASE SYSTEM")

    (file:is-copy-of "/etc/resolv.conf" "/old-os/etc/resolv.conf")
    (mount:unmounted-below-and-removed "/old-os"))

  (apt:mirror "http://ftp.de.debian.org/debian")
  (apt:no-pdiffs)
  (apt:standard-sources.list)
  (sshd:installed)
  (as "root" (ssh:authorized-keys +spwsshkey+))
  (sshd:no-passwords)
  (timezone:configured "Etc/UTC")
  (swap:has-swap-file "2G")

  (network:clean-/etc/network/interfaces)
  (network:static "enp1s0" "xxx.xxx.xxx.xxx" "xxx.xxx.1.1" "255.255.255.255"))

and to use it you evaluate this at the REPL:

CONSFIG> (deploy ((:ssh :user "root" :hop "xxx.xxx.xxx.xxx") :sbcl) foo.silentflame.com)

Here the :HOP parameter specifies the IP address of the new machine, as DNS hasn’t been updated yet. Consfigurator installs SBCL and debootstrap(8), prepares a minimal system, replaces the contents of /, gets to work applying the other properties, and then reboots. This gets us a properly populated fstab:

UUID=...            /           ext4    relatime    0   1
PARTUUID=...        /boot/efi   vfat    umask=0077  0   2
/var/lib/swapfile   swap        swap    defaults    0   0

(slightly doctored for more readable alignment)

There’s ordering logic so that the swapfile will end up after whatever filesystem contains it; a UUID is used for ext4 filesystems, but for fat32 filesystems, to be safe, a PARTUUID is used.

The application of (INSTALLER:BOOTLOADERS-INSTALLED) handles calling both update-grub(8) and grub-install(8), relying on the metadata specified about /dev/sda. Next time we execute Consfigurator against the machine, it’ll ignore all the property applications attached to the application of (INSTALLER:CLEANLY-INSTALLED-ONCE) with ON-CHANGE, and just apply everything following that block.

There are a few things I don’t have good solutions for. When you boot Hetzner’s image the primary network interface is eth0, but then for a freshly debootstrapped Debian you get enp1s0, and I haven’t got a good way of knowing what it’ll be (if you know it’ll have the same name, you can use (NETWORK:PRESERVE-STATIC-ONCE) to create a file in /etc/network/interfaces.d based on the current default route and corresponding interface).

Another tricky thing is SSH host keys. It’s easy to use Consfigurator to add host keys to your laptop’s ~/.ssh/known_hosts, but in this case the host key changes back and forth from whatever the Hetzner image has and the newly generated key you get afterwards. One option might be to copy the old host keys out of /old-os before it gets deleted, like how /etc/resolv.conf is copied.
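Something along these lines, reusing the same FILE:IS-COPY-OF property from the configuration above inside the ON-CHANGE block, might do the trick (an untested sketch covering only the ed25519 key pair; ownership and permissions of the copied keys would want double-checking):

(file:is-copy-of "/etc/ssh/ssh_host_ed25519_key" "/old-os/etc/ssh/ssh_host_ed25519_key")
(file:is-copy-of "/etc/ssh/ssh_host_ed25519_key.pub" "/old-os/etc/ssh/ssh_host_ed25519_key.pub")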

This work is based on Propellor’s equivalent functionality. I think my approach to handling /etc/fstab and bootloader installation is an improvement on what Joey does.

10 July, 2021 04:20AM

July 08, 2021

Thorsten Alteholz

My Debian Activities in June 2021

FTP master

This month I accepted 105 and rejected 6 packages. The overall number of packages that got accepted was 111.

Debian LTS

This was my eighty-fourth month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was almost 30h. During that time I did LTS and normal security uploads of:

  • [DLA 2691-1] libgcrypt20 security update for one CVE
  • [DLA 2692-1] bluez security update for two CVEs
  • [DLA 2694-1] tiff security update for two CVEs
  • [DLA 2697-1] fluidsynth security update for one CVE
  • [DLA 2698-1] node-bl security update for one CVE
  • [DLA 2699-1] ipmitool security update for one CVE
  • PU bug #989815 ring/buster for one CVE

I also made further progress on gpac.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the thirty-sixth ELTS month.

During my allocated time I uploaded:

  • ELA-444-1 for libgcrypt20
  • ELA-445-1 for bluez
  • ELA-447-1 for tiff
  • ELA-450-1 for fluidsynth

Last but not least I did some days of frontdesk duties.

Other stuff

In my never-ending golang challenge I again uploaded lots of packages, either to NEW or as source uploads.

08 July, 2021 03:43PM by alteholz

July 07, 2021

Vincent Fourmond

Upcoming features of QSoas and github repository

For the past few years, most of the development has happened behind the scenes in a private repository, and the code has appeared in the public repository only a couple of months before the release, in the release branch. I have now decided to publish the current code of QSoas in the github repository (in the public branch). This way, you can follow and use all the good things that were developed since the last release, and also verify whether any bug you have is still present in the currently developed version !

Upcoming features


This is the occasion to write a bit about some of the features that have been added since the publication of the 3.0 release. Not all of them are polished nor documented yet, but here are a few teasers. The current version in github has:
  • a comprehensive handling of column/row names, which makes it much easier to work with files with named columns (like the output files QSoas produces !);
  • better handling of lists of meta-data, when there is one value of the meta for each segment or each Y column;
  • handling of complex numbers in apply-formula;
  • defining fits using external python code;
  • a command for linear least squares (which has the huge advantage of being exact and not needing any initial parameters);
  • commands to pause in a script or sort datasets in the stack;
  • improvements over previous commands, in particular with eval;
  • ... and more...
Check out the github repository if you want to know more about the new features !

As of now, no official date is planned for the 3.1 release, but this could happen during fall.

About QSoas


QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 3.0. You can download its source code there (or clone from the GitHub repository) and compile it yourself, or buy precompiled versions for MacOS and Windows there.

07 July, 2021 05:38PM by Vincent Fourmond (noreply@blogger.com)

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Insect Camouflage Plant

I was quite impressed by the ability of this insect; yet to be. The way it has camouflaged itself is mesmerizing. I’ll let the video do the talking as this one is going to be difficult to express in words.

07 July, 2021 02:27PM by Ritesh Raj Sarraf (rrs@researchut.com)

July 05, 2021

hackergotchi for Bálint Réczey

Bálint Réczey

Hello zstd compressed .debs in Ubuntu!

When Julian Andres Klode and I added initial Zstandard compression support to Ubuntu’s APT and dpkg in Ubuntu 18.04 LTS, we planned to get the changes accepted into Debian quickly and to make Ubuntu 18.10 the first release where the new compression could speed up package installations and upgrades. Well, it took slightly longer than that.

Since then many other packages have been updated to support zstd compressed packages and read-only compression has been back-ported to the 16.04 Xenial LTS release, too, on Ubuntu’s side. In Debian, zstd support is available now in APT, debootstrap and reprepro (thanks Dimitri!). It is still under review for inclusion in Debian’s dpkg (BTS bug 892664).

Given that there is sufficient archive-wide support for zstd, Ubuntu is switching to zstd compressed packages in Ubuntu 21.10, the current development release. Please welcome hello/2.10-2ubuntu3, the first zstd-compressed Ubuntu package, which will be followed by many others built with dpkg (>= 1.20.9ubuntu2), and enjoy the speed!
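If you want to check for yourself which compression a package uses, listing the members of the downloaded .deb is enough (a quick sketch; the exact file name in the pool may differ):

apt-get download hello
ar t hello_2.10-2ubuntu3_amd64.deb

A data.tar.zst member in the output indicates a zstd-compressed payload.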

05 July, 2021 06:26PM by Réczey Bálint

Petter Reinholdtsen

Six complete translations of The Debian Administrator's Handbook for Buster

I am happy to observe that The Debian Administrator's Handbook is now available in six languages. I am not sure which of these have been completely proofread, but the complete book is available in these languages:

  • English
  • Norwegian Bokmål
  • German
  • Indonesian
  • Brazilian Portuguese
  • Spanish

This is the list of languages more than 70% complete, in other words with not too much left to do:

  • Chinese (Simplified) - 90%
  • French - 79%
  • Italian - 79%
  • Japanese - 77%
  • Arabic (Morocco) - 75%
  • Persian - 71%

I wonder how long it will take to bring these to 100%.

Then there is the list of languages about halfway done:

  • Russian - 63%
  • Swedish - 53%
  • Chinese (Traditional) - 46%
  • Catalan - 45%

Several are off to a good start:

  • Dutch - 26%
  • Vietnamese - 25%
  • Polish - 23%
  • Czech - 22%
  • Turkish - 18%

Finally, there are the ones just getting started:

  • Korean - 4%
  • Croatian - 2%
  • Greek - 2%
  • Danish - 1%
  • Romanian - 1%

If you want to help provide a Debian instruction book in your own language, visit Weblate to contribute to the translations.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

05 July, 2021 05:20PM

hackergotchi for Michael Prokop

Michael Prokop

Debian bullseye: changes in util-linux #newinbullseye

Continuing with #newinbullseye. One package that isn’t new but whose tools are used by many of us is util-linux, which provides many essential system utilities. There is util-linux v2.33.1 in Debian/buster and util-linux v2.36.1 in Debian/bullseye, and as usual there are many new features and options available.

I don’t want to replicate the release notes provided by upstream; instead, make sure to check out the Release highlights sections of the upstream release notes.

Tools that have been taken over from / moved to other packages

Debian’s util-linux source package provides new binary packages: eject (and eject-udeb) and bsdextrautils. The util-linux implementation of /usr/bin/eject is used now, replacing the one previously provided by the eject source package.

Overall, from a util-linux perspective the following shifts took place:

  • col, colcrt, colrm, column: moved from binary package bsdmainutils to bsdextrautils
  • eject: moved to binary package eject
  • hd: moved from binary package bsdmainutils to bsdextrautils
  • hexdump: moved from binary package bsdmainutils to bsdextrautils
  • look: moved from binary package bsdmainutils to bsdextrautils
  • ul: moved from binary package bsdmainutils to bsdextrautils
  • write(.ul): moved from binary package bsdmainutils (named bsd-write) to bsdextrautils

Deprecated / removed tools

Tools that are no longer shipped as of Debian/bullseye:

  • /usr/bin/rename.ul (rename files): use e.g. rename package instead, see #926637 for details regarding the removal
  • /usr/bin/volname (return volume name for a device formatted with an ISO-9660 file system): use blkid -s LABEL -o value $filename instead
  • /usr/lib/eject/dmcrypt-get-device: no replacement available

New tools

Debian’s bsdutils package (which is provided by the util-linux source package) provides a new tool from util-linux:

  • scriptlive: re-execute stdin log by a shell in PTY session

The new tools lsirq + irqtop (to monitor kernel interrupts) sadly didn’t make it into util-linux’s packaging of Debian/bullseye (as without per-CPU data they do not seem mature at this time). The new hardlink tool (to consolidate duplicate files via hardlinks) won’t be shipped, as there’s an existing hardlink package already.

New features/options

agetty + getty:

--show-issue    display issue file and exit

blkdiscard:

--force         disable all checking

blkid:

-D, --no-part-details      don't print info from partition table

blkzone:

Commands:

open         Open a range of zones.
close        Close a range of zones.
finish       Set a range of zones to Full.

Options:

-f, --force            enforce on block devices used by the system

cfdisk:

--lock[=<mode>]      use exclusive device lock (yes, no or nonblock)

dmesg:

--noescape             don't escape unprintable character
-W, --follow-new       wait and print only new messages

fdisk:

-x, --list-details          like --list but with more details
-n, --noauto-pt             don't create default partition table on empty devices
--lock[=<mode>]             use exclusive device lock (yes, no or nonblock)

fstrim:

-I, --listed-in <list>   trim filesystems listed in specified files
--quiet-unsupported      suppress error messages if trim unsupported

lsblk:

Options:

-E, --dedup <column> de-duplicate output by <column> 
                     (for example 'lsblk --dedup WWN' to de-duplicate devices by WWN number, e.g. multi-path devices)
-M, --merge          group parents of sub-trees (usable for RAIDs, Multi-path)
                     see http://karelzak.blogspot.com/2018/11/lsblk-merge.html

New output columns:

FSVER         filesystem version
PARTTYPENAME  partition type name
DAX           dax-capable device
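For example, the new columns can be requested explicitly (a quick illustration on a bullseye system):

lsblk -o NAME,SIZE,FSVER,PARTTYPENAME,DAX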

lscpu:

Options:

-B, --bytes             print sizes in bytes rather than in human readable format
-C, --caches[=<list>]   info about caches in extended readable format
    --output-all        print all available columns for -e, -p or -C

Available output columns for -C:

        ALL-SIZE  size of all system caches
           LEVEL  cache level
            NAME  cache name
        ONE-SIZE  size of one cache
            TYPE  cache type
            WAYS  ways of associativity
    ALLOC-POLICY  allocation policy
    WRITE-POLICY  write policy
        PHY-LINE  number of physical cache line per cache tag
            SETS  number of sets in the cache; set lines has the same cache index
   COHERENCY-SIZE  minimum amount of data in bytes transferred from memory to cache         

lslogins:

--lastlog <path>     set an alternate path for lastlog

lsns:

-t, --type time      namespace type time is also supported now (next to mnt, net, ipc, user, pid, uts, cgroup)

mkswap:

--lock[=<mode>]      use exclusive device lock (yes, no or nonblock)

more:

Options:

-n, --lines <number>  the number of lines per screenful

New long options (in addition to the listed equivalent short options):

  --silent       - equivalent to -d
  --logical      - equivalent to -f
  --no-pause     - equivalent to -l
  --print-over   - equivalent to -c
  --clean-print  - equivalent to -p
  --squeeze      - equivalent to -s
  --plain        - equivalent to -u

mount:

Options:

--target-prefix <path>  specifies path use for all mountpoints

Source:

ID=<id>                 specifies device by udev hardware ID

mountpoint:

--nofollow     do not follow symlink

nsenter:

-T, --time[=<file>]    enter time namespace

script:

-I, --log-in <file>           log stdin to file
-O, --log-out <file>          log stdout to file (default)
-B, --log-io <file>           log stdin and stdout to file
-T, --log-timing <file>       log timing information to file
-m, --logging-format <name>   force to 'classic' or 'advanced' format
-E, --echo <when>             echo input (auto, always or never)

sfdisk:

--disk-id <dev> [<str>]           print or change disk label ID (UUID)
--relocate <oper> <dev>           move partition header
--move-use-fsync                  use fsync after each write when move data
--lock[=<mode>]                   use exclusive device lock (yes, no or nonblock)

unshare:

-T, --time[=<file>]       unshare time namespace
--map-user=<uid>|<name>   map current user to uid (implies --user)
--map-group=<gid>|<name>  map current group to gid (implies --user)
-c, --map-current-user    map current user to itself (implies --user)
--keep-caps               retain capabilities granted in user namespaces
-R, --root=<dir>          run the command with root directory set to <dir>
-w, --wd=<dir>            change working directory to <dir>
-S, --setuid <uid>        set uid in entered namespace
-G, --setgid <gid>        set gid in entered namespace
--monotonic <offset>      set clock monotonic offset (seconds) in time namespaces
--boottime <offset>       set clock boottime offset (seconds) in time namespaces

wipefs:

--lock[=<mode>] use exclusive device lock (yes, no or nonblock)

05 July, 2021 04:22PM by mika

hackergotchi for Jonathan Dowland

Jonathan Dowland

Photos and WhatsApp

I woke up this morning to a lovely little gallery of pictures of our children that my wife had sent me via WhatsApp.

This has become the most common way we interact with family photos. We regularly send and receive photos to and from our families via WhatsApp, which re-compresses them for transit and temporary storage across their network.

The original photos, wherever they are, will be in a very high quality (as you get on most modern cameras) and will be backed up in perfect fidelity to either Apple or Google‘s photo storage solutions. But all of that seems moot, when the most frequent way we engage with the pictures is via a method which compresses so aggressively that you can clearly see the artefacts, even thumbnailed on a phone screen.

I still don’t feel particularly happy with the solution in place for backing up the photos (or even: getting them off the phone). Both Apple and Google make it less than convenient to get them out of their respective walled gardens. I’ve been evaluating the Nextcloud app and a Nextcloud instance on my home NAS as a possible alternative.

05 July, 2021 02:46PM

Russell Coker

Servers and Lockdown

OS security features and server class systems are things that surely belong together. If a program is important enough to buy expensive servers to run it then it’s important enough that you want to have all the OS security features enabled. For such an important program you will also want to have all possible monitoring systems running so you can predict hardware failures etc. Therefore you would expect that you could buy a server, setup the vendor’s management software, configure your Linux kernel with security features such as “lockdown” (a LSM that restricts access to /dev/mem, the iopl() system call, and other dangerous things [1]), and have it run nicely! You will be disappointed if you try doing that on a HP or Dell server though.
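As an aside, the lockdown mode currently in effect can be queried at runtime (assuming securityfs is mounted in the usual place):

cat /sys/kernel/security/lockdown

The active mode is printed in brackets, e.g. "none [integrity] confidentiality".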

HP Problems

[370742.622525] Lockdown: hpasmlited: raw io port access is restricted; see man kernel_lockdown.7

The above message is logged when trying to INSTALL (not even run) the hp-health package from the official HP repository (as documented in my previous blog post about the HP ML-110 Gen9 [2]) with “lockdown=integrity” (the less restrictive lockdown option). Now the HP package in question is in their repository for Debian/Stretch (released in 2017) and the Lockdown LSM was documented by LWN as being released in 2019, so not supporting a Debian/Bullseye feature in Debian/Stretch packages isn’t inherently a bad thing apart from the fact that they haven’t released a new version of that package since. The Stretch package that I am testing now was released in 2019. Also it’s been regarded as best practice to have device drivers for this sort of thing since long before 2017.

# hplog -v

ERROR: Could not open /dev/cpqhealth/cdt.
Please make sure the Health Monitor is started.

Attempting to run the “hplog -v” command (to view the HP hardware log) gives the above error. Strace reveals that it could and did open /dev/cpqhealth/cdt but had problems talking to something (presumably the Health Monitor daemon) over a Unix domain socket. It would be nice if they could at least get the error message right!

Dell Problems

[   13.811165] Lockdown: smbios-sys-info: /dev/mem,kmem,port is restricted; see man kernel_lockdown.7
[   13.820935] Lockdown: smbios-sys-info: raw io port access is restricted; see man kernel_lockdown.7
[   18.118118] Lockdown: dchcfg: raw io port access is restricted; see man kernel_lockdown.7
[   18.127621] Lockdown: dchcfg: /dev/mem,kmem,port is restricted; see man kernel_lockdown.7
[   19.371391] Lockdown: dsm_sa_datamgrd: raw io port access is restricted; see man kernel_lockdown.7
[   19.382147] Lockdown: dsm_sa_datamgrd: /dev/mem,kmem,port is restricted; see man kernel_lockdown.7

Above is a sample of the messages when booting a Dell PowerEdge R710 with “lockdown=integrity” with the srvadmin-omacore package installed from the official Dell repository I describe in my blog post about the Dell PowerEdge R710 [3]. Now that repository is for Ubuntu/Xenial which was released in 2015, but again it was best practice to have device drivers for this many years ago. Also the newest Debian based releases that Dell apparently supports are Ubuntu/Xenial and Debian/Jessie which were both released in 2015.

# omreport system esmlog
Error! No Embedded System Management (ESM) log found on this system.

Above is the result when I try to view the ESM log (the Dell hardware log).

How Long Should Server Support Last?

The Wikipedia List of Dell PowerEdge Servers shows that the R710 is a Generation 11 system. Generation 11 was first released in 2010 and Generation 12 was first released in 2012. Generation 13 was the latest hardware Dell sold in 2015 when they apparently ceased providing newer OS support for Generation 11. Dell currently sells Generation 15 systems and provides more recent support for Generation 14 and Generation 15. I think it’s reasonable to debate whether Dell should support servers for 4 generations. But given that a major selling point of server class systems is that they have long term support I think it would make sense to give better support for this and not drop support when it’s only 2 versions from the latest release! The support for Dell Generation 11 hardware only seems to have lasted for 3 years after Generation 12 was first released. Also it appears that software support for Dell Generation 13 ceased before Generation 14 was released, that sucks for the people who bought Generation 13 when they were new!

HP is currently selling “Gen 10” servers which were first released at the end of 2017. So it appears that HP stopped properly supporting Gen 9 servers as soon as Gen 10 servers were released!

One thing to note about these support times, when the new generation of hardware was officially released the previous generation was still on sale. So while HP Gen 10 servers officially came out in 2017 that doesn’t necessarily mean that someone who wanted to buy a ML-110 Gen10 could actually have done so.

For comparison Red Hat Enterprise Linux has been supported for 4-6 years for every release they made since 2005 and Ubuntu has always had a 5 year LTS support for servers.

How To Do It Properly

The correct way of interfacing with hardware is via a device driver that is supported in the kernel.org tree. That means it goes through the usual kernel source code quality checks which are really good at finding bugs and gives users an assurance that the code won’t cause security problems. Generally nothing about the code from Dell or HP gives me confidence that it should be directly accessing /dev/kmem or raw IO ports without risk of problems.

Once a driver is in the kernel.org tree it will usually stay there forever and not require further effort from the people who submit it. Then it just works for everyone and tends to work with any other kernel features that people use, like LSMs.

If they released the source code to the management programs then it would save them even more effort as they could be maintained by the community.

05 July, 2021 11:17AM by etbe

July 04, 2021

Pavit Kaur

Tutorial: Integrating OmniAuth with Sinatra Application

As part of my GSoC project, my first task includes that user could login into their account on debci using their Debian Salsa account (collaborative development server for Debian based on the GitLab software).

The task is officially completed using the OmniAuth library, and while implementing it I found that the documentation of OmniAuth is quite a mixed bag and more focused on using it with Rails apps. That gave me the idea to write a tutorial for people looking to integrate OmniAuth with a Sinatra application. So here it is.

Now, depending on the provider, OmniAuth requires a specific strategy; these are generally released individually as RubyGems. For this tutorial, I will be using omniauth-gitlab (which I used for Debian Salsa in my project), omniauth-twitter, and the developer strategy, which can be used for projects in development mode and comes with the omniauth gem itself.

For simplicity, I have included all routes and OmniAuth configuration in a single file, app.rb.

Let’s start.

Register your application

This can be done easily – just head over to the provider's (Twitter, Salsa) website, find the option to create a new application and fill in the form. In the callback URL field, you need to append /auth/:provider/callback to whichever URL you used in the website field.

The client-id and client-secret are obtained from the console; they are used in a further step to set up OmniAuth.

Gems

At the top of the file, we require the necessary gems for our project

require 'sinatra'
require 'omniauth'
require 'omniauth-gitlab'
require 'omniauth-twitter'
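
If you manage dependencies with Bundler, the corresponding Gemfile would look something like this (a sketch, version constraints omitted):

source 'https://rubygems.org'

gem 'sinatra'
gem 'omniauth'
gem 'omniauth-gitlab'
gem 'omniauth-twitter'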

Enable sessions

In order for OmniAuth to work and to keep the logged-in user across requests, sessions need to be enabled; once enabled, you have one session hash per user session.

configure do
    set :sessions, true
end

Set up OmniAuth configurations

The OmniAuth::Builder Rack middleware builds up the list of OmniAuth strategies for use in your application:

use OmniAuth::Builder do
    if development?
      provider :developer,
               fields: [:name],
               uid_field: :name
    end
    provider :gitlab, 'CLIENT_ID', 'CLIENT_SECRET',   # from the Salsa application console
             scope: "read_user",
             client_options: {
               site: 'https://salsa.debian.org/api/v4/'
             }
    provider :twitter, 'CLIENT_ID', 'CLIENT_SECRET'   # from the Twitter application console
end

Here, a few things worth noting are the extra options used with the providers.

In developer:

  • fields: specifies the form fields for login in developer mode; by default it has name and email.
  • uid_field: specifies which field's value is used as the uid; by default it is email.

In gitlab:

  • scope: limits the scope of the application; by default the api scope is requested, and it must be allowed in GitLab's application configuration.
  • client_options: specifies the server being used as the client, based on the GitLab software.

Note: In case you want a callback URL other than the default /auth/:provider/callback, it can be specified using the redirect_url option for the gitlab provider; update it accordingly in your application configuration at the provider's console.

Extra configurations

To redirect to the /auth/failure route in case of failure, even in developer mode, the following can be added:

OmniAuth.config.on_failure = proc do |env|
    OmniAuth::FailureEndpoint.new(env).redirect_to_failure
end

By default, OmniAuth will log to STDOUT, but you can configure this. If you don't want OmniAuth to log to STDOUT, the following can be used:

OmniAuth.config.logger.level = Logger::UNKNOWN

Setting up routes

Login route

Starting with the GET /login route, where you can present the available login options:

get '/login' do
    <<~HTML
    <form method='post' action='/auth/gitlab'>
    <input type="hidden" name="authenticity_token" value='#{request.env["rack.session"]["csrf"]}'>
    <button type='submit'>Login with Salsa</button>
    </form>
    <form method='post' action='/auth/twitter'>
    <input type="hidden" name="authenticity_token" value='#{request.env["rack.session"]["csrf"]}'>
    <button type='submit'>Login with Twitter</button>
    </form>
    <form method='post' action='/auth/developer'>
    <input type="hidden" name="authenticity_token" value='#{request.env["rack.session"]["csrf"]}'>
    <button type='submit'>Login with Developer</button>
    </form>
  HTML
end

The /auth/:provider path is created and configured automatically by OmniAuth, so you just need to send the request to that path and the auth process will start.
As of OmniAuth version 2.0, only POST is allowed as a request-phase method by default, and an authenticity_token is required to validate your requests, so make sure to take care of this.

Callback routes:

On successful authentication, OmniAuth returns a hash of information to /auth/:provider/callback in the Rack environment, under the key omniauth.auth. You can use this however you like, for example to create an entry in your database and store the current user in the session params.

get '/auth/:provider/callback' do
    erb "
    <h1>Hello #{request.env['omniauth.auth']['info']['name']}</h1>"
end

post '/auth/developer/callback' do
    erb "
    <h1>Hello #{request.env['omniauth.auth']['info']['name']}</h1>"
end

Here, the POST request method is used for the developer strategy and GET for twitter and gitlab, as that is how they are defined in their respective strategies.

Failure route:

If user authentication fails on the provider side, OmniAuth will catch the response and then redirect the request to the path /auth/failure, passing a corresponding error message in a parameter named message.

get '/auth/failure' do
    halt(403, erb("<h2>Authentication Failed</h2><h4>Reason: </h4><pre>#{params[:message]}</pre>"))
end

Final result

That’s it: our application is all set to be tested, and this is how it works.

Here, I have already logged into my respective accounts, so the page for entering credentials does not show up; if the user is not logged in, they would first be asked to log in.

For complete code, you can check out: OmniAuth with Sinatra Tutorial

So this completes the tutorial. I hope it helps others who are looking to integrate their Ruby Applications with OmniAuth. If you have any feedback, feel free to let me know.

See you next time!

04 July, 2021 12:00AM by Pavit Kaur (pavitk1@gmail.com)

July 03, 2021

François Marier

Zoom WebRTC links

Most people connect to Zoom via a proprietary client which has been on the receiving end of a number of security and privacy issues over the past year, with some experts even describing it as malware.

It's not widely known however that Zoom offers a half-decent WebRTC client which means cross-platform one-click access to a Zoom room or webinar without needing to install any software.

Given a Zoom link such as https://companyname.zoom.us/j/123456789?pwd=letmein, you can use https://zoom.us/wc/join/123456789?pwd=letmein to connect in your browser.

Notice that the pool of Zoom room IDs is global and you can just drop the companyname from the URL.
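If you do this often, a small shell substitution can do the rewrite (just a sketch):

echo 'https://companyname.zoom.us/j/123456789?pwd=letmein' | sed -E 's#https://[^/]+/j/#https://zoom.us/wc/join/#'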

In my experience however, Jitsi has much better performance than Zoom's WebRTC client. For instance, I've never been able to use Zoom successfully on a Raspberry Pi 4 (8GB), but Jitsi works quite well. If you have a say in the choice of conference platform, go with Jitsi instead.

03 July, 2021 08:00PM

July 02, 2021

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, June 2021

In June I was assigned 14 hours of work by Freexian's Debian LTS initiative and carried over 2 hours from earlier months. I worked 13.25 hours and will carry over the remainder.

I finished bringing the linux (Linux 4.9) package up to date, uploaded it, and issued DLA-2689-1.

I also updated the linux-4.19 package based on the version in stable point release 10.10, and issued DLA-2690-1.

Finally, I backported my recent security fixes for klibc, uploaded it and issued DLA-2695-1.

02 July, 2021 10:15PM

July 01, 2021

Vincent Bernat

Upgrading my desktop PC

I built my current desktop PC in 2014. A second SSD was added in 2015. The motherboard and the power supply were replaced after a fault1 in 2016. The memory was upgraded in 2018. A discrete AMD GPU was installed in 2019 to drive two 4K screens. An NVMe disk was added earlier this year to further increase storage performance. This is a testament to the durability of a desktop PC compared to a laptop: it’s upgradeable and you can keep it a long time.

While fine for most usage, the CPU started to become a bottleneck during video conferences.2 So, it was set for an upgrade. The table below summarizes the change. This update cost me about 800 €.

              Before                            After
CPU           Intel i5-4670K @ 3.4 GHz          AMD Ryzen 5 5600X @ 3.7 GHz
CPU fan       Zalman CNPS9900                   Noctua NH-U12S
Motherboard   Asus Z97-PRO Gamer                Asus TUF Gaming B550-PLUS
RAM           2×8 GB + 2×4 GB DDR3 @ 1.6 GHz    2×16 GB DDR4 @ 3.6 GHz
GPU           Asus Radeon PH RX 550 4G M7 (unchanged)
Disks         500 GB Crucial P2 NVMe,
              256 GB Samsung SSD 850,
              256 GB Samsung SSD 840 (unchanged)
PSU           be quiet! Pure Power CM L8 @ 530 W (unchanged)
Case          Antec P100 (unchanged)

According to some benchmark, the new CPU should be 4× faster when all cores are used and 1.5× faster for a single-threaded workload. Compiling an arbitrary3 kernel provides a 3× speedup. Before:

$ lscpu -e
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE    MAXMHZ   MINMHZ
  0    0      0    0 0:0:0:0          yes 3800.0000 800.0000
  1    0      0    1 1:1:1:0          yes 3800.0000 800.0000
  2    0      0    2 2:2:2:0          yes 3800.0000 800.0000
  3    0      0    3 3:3:3:0          yes 3800.0000 800.0000
$ CCACHE_DISABLE=1 =time -f '⌛ %E' make -j$(nproc)
[…]
  OBJCOPY arch/x86/boot/vmlinux.bin
  AS      arch/x86/boot/header.o
  LD      arch/x86/boot/setup.elf
  OBJCOPY arch/x86/boot/setup.bin
  BUILD   arch/x86/boot/bzImage
Kernel: arch/x86/boot/bzImage is ready  (#1)
⌛ 4:54.32

After:

$ lscpu -e
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE    MAXMHZ    MINMHZ
  0    0      0    0 0:0:0:0          yes 5210.3511 2200.0000
  1    0      0    1 1:1:1:0          yes 4650.2920 2200.0000
  2    0      0    2 2:2:2:0          yes 5210.3511 2200.0000
  3    0      0    3 3:3:3:0          yes 5073.0459 2200.0000
  4    0      0    4 4:4:4:0          yes 4932.1279 2200.0000
  5    0      0    5 5:5:5:0          yes 4791.2100 2200.0000
  6    0      0    0 0:0:0:0          yes 5210.3511 2200.0000
  7    0      0    1 1:1:1:0          yes 4650.2920 2200.0000
  8    0      0    2 2:2:2:0          yes 5210.3511 2200.0000
  9    0      0    3 3:3:3:0          yes 5073.0459 2200.0000
 10    0      0    4 4:4:4:0          yes 4932.1279 2200.0000
 11    0      0    5 5:5:5:0          yes 4791.2100 2200.0000
$ CCACHE_DISABLE=1 =time -f '⌛ %E' make -j$(nproc)
[…]
  OBJCOPY arch/x86/boot/vmlinux.bin
  AS      arch/x86/boot/header.o
  LD      arch/x86/boot/setup.elf
  OBJCOPY arch/x86/boot/setup.bin
  BUILD   arch/x86/boot/bzImage
Kernel: arch/x86/boot/bzImage is ready  (#1)
⌛ 1:40.18

Here we go for another seven years!


  1. The original power supply was from an older configuration. It suddenly became unable to reliably start the PC. The motherboard got replaced as it was the first suspect: without load, the power supply was working correctly. ↩︎

  2. On Linux, many programs are unable to leverage hardware acceleration. This is a pity. On a laptop, this can also drain the battery pretty fast. ↩︎

  3. The kernel is configured with make defconfig on commit 15fae3410f1d↩︎

01 July, 2021 07:59PM by Vincent Bernat

Paul Wise

FLOSS Activities June 2021

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

  • Spam: reported 3 Debian bug reports and 135 Debian mailing list posts
  • Debian wiki: RecentChanges for the month
  • Debian BTS usertags: changes for the month
  • Debian screenshots:
    • approved php-horde endless-sky claws-mail memtester
    • rejected python-gdal/weboob-qt (unrelated software)

Administration

  • Debian: restart bacula director
  • Debian wiki: approve accounts

Communication

  • This month I left freenode, an IRC network I had been on for at least 16 years, for reasons that you probably all read about. I think the biggest lesson I take from this situation and ones happening around the same time is that proper governance in peer production projects is absolutely critical.
  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The purple-discord/flower work was sponsored by my employers.

All other work was done on a volunteer basis.

01 July, 2021 01:21AM

June 30, 2021

hackergotchi for Aigars Mahinovs

Aigars Mahinovs

Keeping it as simple as possible

You know that you've had the same server too long when they discontinue the entire class of servers you were using and you need to migrate to a new instance. And you know you've not done anything with that server (and the blog running on it) for too long when you have no idea how that thing is actually working.

It's a good opportunity to start over from scratch, and a good motivation to do the new thing as simply as humanly possible, or even simpler.

So I am switching to a statically generated blog as well. Not sure what took me so long, but thank goodness the tooling has really improved since the last time I looked.

It was as simple as picking Nikola, finding its import_feed plugin, changing the BLOG_RSS_LIMIT in my Django Mezzanine blog to a thousand (to export all posts via RSS/ATOM feed), fixing some bugs in the import_feed plugin, waiting a few minutes for the full feed to generate and to be imported, adjusting the config of the resulting site, pushing that to git and writing a simple shell script to pull that repo periodically and call nikola build on it, as well as config to serve the result via nginx. Done.
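That periodic build script really doesn't need to be more than a couple of lines (a sketch; the path is made up, and it can be run from cron every few minutes):

#!/bin/sh
cd /srv/blog-source && git pull -q && nikola build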

After that creating a new blog post is just nikola new_post and editing it in vim and pushing to git. I prefer Markdown, but it supports all kinds of formats. And the old posts are just stored as HTML. Really simple.

I think I will spend more time fighting with Google to allow me to forward email from my domain to my GMail postbox without it refusing all of it as spam.

30 June, 2021 07:49PM by Aigars Mahinovs

Anton Gladky

2021/06, FLOSS activity

LTS

This is my fourth month of working for LTS. I was assigned 12 hrs and worked all of them.

Released DLAs

  1. DLA 2672-1 imagemagick_6.9.7.4+dfsg-11+deb9u13

    • CVE-2020-27751

      A flaw was found in MagickCore/quantum-export.c. An attacker who submits a crafted file that is processed by ImageMagick could trigger undefined behavior in the form of values outside the range of type unsigned long long as well as a shift exponent that is too large for 64-bit type. This would most likely lead to an impact to application availability, but could potentially cause other problems related to undefined behavior.

    • CVE-2021-20243

      A flaw was found in MagickCore/resize.c. An attacker who submits a crafted file that is processed by ImageMagick could trigger undefined behavior in the form of math division by zero.

    • CVE-2021-20245

      A flaw was found in coders/webp.c. An attacker who submits a crafted file that is processed by ImageMagick could trigger undefined behavior in the form of math division by zero.

    • CVE-2021-20309

      A division by zero in WaveImage() of MagickCore/visual-effects.c may trigger undefined behavior via a crafted image file submitted to an application using ImageMagick.

    • CVE-2021-20312

      An integer overflow in WriteTHUMBNAILImage of coders/thumbnail.c may trigger undefined behavior via a crafted image file that is submitted by an attacker and processed by an application using ImageMagick.

    • CVE-2021-20313

      A potential cipher leak when calculating signatures in TransformSignature is possible.

  2. DLA 2677-1 libwebp_0.5.2-1+deb9u1

    • CVE-2018-25009

      An out-of-bounds read was found in function WebPMuxCreateInternal. The highest threat from this vulnerability is to data confidentiality and to the service availability.

    • CVE-2018-25010

      An out-of-bounds read was found in function ApplyFilter. The highest threat from this vulnerability is to data confidentiality and to the service availability.

    • CVE-2018-25011

      A heap-based buffer overflow was found in PutLE16(). The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability.

    • CVE-2018-25012

      An out-of-bounds read was found in function WebPMuxCreateInternal. The highest threat from this vulnerability is to data confidentiality and to the service availability.

    • CVE-2018-25013

      An out-of-bounds read was found in function ShiftBytes. The highest threat from this vulnerability is to data confidentiality and to the service availability.

    • CVE-2018-25014

      An uninitialized variable is used in function ReadSymbol. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability.

    • CVE-2020-36328

      A heap-based buffer overflow in function WebPDecodeRGBInto is possible due to an invalid check for buffer size. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability.

    • CVE-2020-36329

      A use-after-free was found due to a thread being killed too early. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability.

    • CVE-2020-36330

      An out-of-bounds read was found in function ChunkVerifyAndAssign. The highest threat from this vulnerability is to data confidentiality and to the service availability.

    • CVE-2020-36331

      An out-of-bounds read was found in function ChunkAssignData. The highest threat from this vulnerability is to data confidentiality and to the service availability.

    CVE-2020-36332 was marked as ignored for stretch due to a too disruptive patch for older versions of libwebp.

  3. DLA-2687-1 prosody_0.9.12-2+deb9u3

    • CVE-2021-32917

      The proxy65 component allows open access by default, even if neither of the users has an XMPP account on the local server, allowing unrestricted use of the server’s bandwidth.

    • CVE-2021-32921

      Authentication module does not use a constant-time algorithm for comparing certain secret strings when running under Lua 5.2 or later. This can potentially be used in a timing attack to reveal the contents of secret strings to an attacker.

  4. DLA-2687-2 prosody_0.9.12-2+deb9u4

    Upload prosody_0.9.12-2+deb9u3 introduced a regression in the mod_auth_internal_hashed module. Big thanks to Andre Bianchi for reporting the issue and for testing the update.

    CVE-2021-32918 and CVE-2021-32920 were marked as ignored for stretch: the affected code does not exist in that version of prosody.

LTS-Meeting

I attended the Debian LTS team Jitsi-meeting.

Debian Science Team

openpiv-python

I started to package python-openpiv. The software implements the PIV (Particle Image Velocimetry) method to compare two images and obtain a velocity field.

Other FLOSS activities

Admesh

Admesh is the first package which I adopted, over 10 years ago! Upstream has not been active for a very long time, so I created a github repo back in 2013.

The software helps to manipulate STL-files. STL is the file format for meshes, mostly developed for CAD programs.

This month I decided to clean up the build system. It was switched to CMake. CI was updated: it now compiles the sources under Linux and Windows and runs tests, and AddressSanitizer and UndefinedBehaviourSanitizer were employed. Work is ongoing.

30 June, 2021 05:00PM

Enrico Zini

Systemd containers with unittest

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience.

Unit testing some parts of Transilience, like the apt and systemd actions, or remote Mitogen connections, can really use a containerized system for testing.

To have that, I reused my work on nspawn-runner to build a simple and very fast system of ephemeral containers, with minimal dependencies, based on systemd-nspawn and btrfs snapshots:

Setup

To be able to use systemd-nspawn --ephemeral, the chroots need to be btrfs subvolumes. If you are not running on a btrfs filesystem, you can create one to run the tests, even on a file:

fallocate -l 1.5G testfile
/usr/sbin/mkfs.btrfs testfile
sudo mount -o loop testfile test_chroots/

I created a script to setup the test environment, here is an extract:

mkdir -p test_chroots

cat << EOF > "test_chroots/CACHEDIR.TAG"
Signature: 8a477f597d28d172789f06886806bc55
# chroots used for testing transilience, can be regenerated with make-test-chroot
EOF

btrfs subvolume create test_chroots/buster
eatmydata debootstrap --variant=minbase --include=python3,dbus,systemd buster test_chroots/buster

CACHEDIR.TAG is a nice trick to tell backup software not to bother backing up the contents of this directory, since it can be easily regenerated.

eatmydata is optional, and it speeds up debootstrap quite a bit.

Running unittest with sudo

Here's a simple helper to drop root as soon as possible, and regain it only when needed. Note that it needs $SUDO_UID and $SUDO_GID, which are set by sudo, to know which user to drop into:

class ProcessPrivs:
    """
    Drop root privileges and regain them only when needed
    """
    def __init__(self):
        self.orig_uid, self.orig_euid, self.orig_suid = os.getresuid()
        self.orig_gid, self.orig_egid, self.orig_sgid = os.getresgid()

        if "SUDO_UID" not in os.environ:
            raise RuntimeError("Tests need to be run under sudo")

        self.user_uid = int(os.environ["SUDO_UID"])
        self.user_gid = int(os.environ["SUDO_GID"])

        self.dropped = False

    def drop(self):
        """
        Drop root privileges
        """
        if self.dropped:
            return
        os.setresgid(self.user_gid, self.user_gid, 0)
        os.setresuid(self.user_uid, self.user_uid, 0)
        self.dropped = True

    def regain(self):
        """
        Regain root privileges
        """
        if not self.dropped:
            return
        os.setresuid(self.orig_suid, self.orig_suid, self.user_uid)
        os.setresgid(self.orig_sgid, self.orig_sgid, self.user_gid)
        self.dropped = False

    @contextlib.contextmanager
    def root(self):
        """
        Regain root privileges for the duration of this context manager
        """
        if not self.dropped:
            yield
        else:
            self.regain()
            try:
                yield
            finally:
                self.drop()

    @contextlib.contextmanager
    def user(self):
        """
        Drop root privileges for the duration of this context manager
        """
        if self.dropped:
            yield
        else:
            self.drop()
            try:
                yield
            finally:
                self.regain()


privs = ProcessPrivs()
privs.drop()

As soon as this module is loaded, root privileges are dropped, and can be regained for as little as possible using a handy context manager:

   with privs.root():
       subprocess.run(["systemd-run", ...], check=True, capture_output=True)

Using the chroot from test cases

The infrastructure to set up and spin down ephemeral machines is relatively simple, once one has worked out the nspawn incantations:

class Chroot:
    """
    Manage an ephemeral chroot
    """
    running_chroots: Dict[str, "Chroot"] = {}

    def __init__(self, name: str, chroot_dir: Optional[str] = None):
        self.name = name
        if chroot_dir is None:
            self.chroot_dir = self.get_chroot_dir(name)
        else:
            self.chroot_dir = chroot_dir
        self.machine_name = f"transilience-{uuid.uuid4()}"

    def start(self):
        """
        Start nspawn on this given chroot.

        The systemd-nspawn command is run contained into its own unit using
        systemd-run
        """
        unit_config = [
            'KillMode=mixed',
            'Type=notify',
            'RestartForceExitStatus=133',
            'SuccessExitStatus=133',
            'Slice=machine.slice',
            'Delegate=yes',
            'TasksMax=16384',
            'WatchdogSec=3min',
        ]

        cmd = ["systemd-run"]
        for c in unit_config:
            cmd.append(f"--property={c}")

        cmd.extend((
            "systemd-nspawn",
            "--quiet",
            "--ephemeral",
            f"--directory={self.chroot_dir}",
            f"--machine={self.machine_name}",
            "--boot",
            "--notify-ready=yes"))

        log.info("%s: starting machine using image %s", self.machine_name, self.chroot_dir)

        log.debug("%s: running %s", self.machine_name, " ".join(shlex.quote(c) for c in cmd))
        with privs.root():
            subprocess.run(cmd, check=True, capture_output=True)
        log.debug("%s: started", self.machine_name)
        self.running_chroots[self.machine_name] = self

    def stop(self):
        """
        Stop the running ephemeral containers
        """
        cmd = ["machinectl", "terminate", self.machine_name]
        log.debug("%s: running %s", self.machine_name, " ".join(shlex.quote(c) for c in cmd))
        with privs.root():
            subprocess.run(cmd, check=True, capture_output=True)
        log.debug("%s: stopped", self.machine_name)
        del self.running_chroots[self.machine_name]

    @classmethod
    def create(cls, chroot_name: str) -> "Chroot":
        """
        Start an ephemeral machine from the given master chroot
        """
        res = cls(chroot_name)
        res.start()
        return res

    @classmethod
    def get_chroot_dir(cls, chroot_name: str):
        """
        Locate a master chroot under test_chroots/
        """
        chroot_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "test_chroots", chroot_name))
        if not os.path.isdir(chroot_dir):
            raise RuntimeError(f"{chroot_dir} does not exists or is not a chroot directory")
        return chroot_dir


# We need to use atexit, because unittest won't run
# tearDown/tearDownClass/tearDownModule methods in case of KeyboardInterrupt
# and we need to make sure to terminate the nspawn containers at exit
@atexit.register
def cleanup():
    # Use a list to prevent changing running_chroots during iteration
    for chroot in list(Chroot.running_chroots.values()):
        chroot.stop()

And here's a TestCase mixin that starts a containerized systems and opens a Mitogen connection to it:

class ChrootTestMixin:
    """
    Mixin to run tests over a setns connection to an ephemeral systemd-nspawn
    container running one of the test chroots
    """
    chroot_name = "buster"

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        import mitogen
        from transilience.system import Mitogen
        cls.broker = mitogen.master.Broker()
        cls.router = mitogen.master.Router(cls.broker)
        cls.chroot = Chroot.create(cls.chroot_name)
        with privs.root():
            cls.system = Mitogen(
                    cls.chroot.name, "setns", kind="machinectl",
                    python_path="/usr/bin/python3",
                    container=cls.chroot.machine_name, router=cls.router)

    @classmethod
    def tearDownClass(cls):
        super().tearDownClass()
        cls.system.close()
        cls.broker.shutdown()
        cls.chroot.stop()

Running tests

Once the tests are set up, everything goes on as normal, except one needs to run nose2 with sudo:

sudo nose2-3

Spin up time for containers is pretty fast, and the tests drop root as soon as possible, and only regain it for as little as needed.

Also, dependencies for all this are minimal and available on most systems, and the setup instructions seem pretty straightforward.

30 June, 2021 07:07AM

June 29, 2021

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

New Winter Bicycle

Although I'm about 5 months early and we're in the middle of a terrible heat wave where I live, I've just finished building my new winter bicycle!

I've been riding all year round for about 8 years now and the winters in Montreal are harsh enough that a second winter-ready bicycle is required to have a fun and safe cold season.

I'm saying "new bicycle" because a few months ago I totaled the frame of my previous winter bike. As you can see in the picture below (please disregard the salt crust), I broke the seat tube at the lug.

I didn't notice it at first and while riding I kept hearing a "bang bang" sound coming from the frame when going over bumps. To my horror, I eventually realised the sound was coming from the seat tube hitting the bottom bracket shell!

A large crack in my old bike frame

I'm sad to see this Lejeune French frame go, but it was probably built in the 70's or the early 80's: it had a good life. Another great example of how a high quality steel frame can last a lifetime.

Sparing no expense, I decided to replace it by a brand new Surly Cross-Check frameset. Surly is known for making great bike frames and the Cross-Check is a very versatile model. I'll finally be able to fit my Schwalbe Marathon Winter Plus tires at the front and the back with full length mud guards!!! Hard to ask for more.

A picture of my new winter bike with CX Pro tires

Summer has just started and I'm already looking forward to winter :)

29 June, 2021 08:00PM by Louis-Philippe Véronneau

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Plant Territorial Behavior

This blog post is about my observations of some of the plants in my home garden.

While I am still a n00b on the subject, these notes are my observations and experiences over days, weeks and months. Thankfully, with the ability to take frequent pictures, it has been easy to do an assessment and generate a report of some of these amazing behaviors of plants in an easy timeline order, all thanks to the embedded EXIF data. This has very helpfully allowed me to record my otherwise minor observations in great detail, and to make some sense out of them by correlating the data over time.

It is an emotional experience. You see, plants are amazing. When I plant a sapling, water it, feed it, watch it grow, prune it, medicate it, and whatnot, I build up affection towards it.

Though, at the same time, to me it is a strict relationship, not too attached; as in, it doesn’t hurt to uproot a plant if there is a good reason. But still, I find some sort of association with it.

With plants around, it feels like I have a lot of lives around me. All prospering, communicating, sharing. And communicate they do. What is needed is just the right language to observe and absorb their signals and decipher what they are trying to say.

Devastation

How in this world, when you are caring for your plants, can it transform:

From This

Healthy Mulberry Plant

Healthy Mulberry Plant

Healthy Mulberry Plant

Healthy Mulberry Plant

Healthy Bael Plant

Healthy Bael Plant

To This

Dead Mulberry Plant

Dead Mulberry Plant

Very Sick Bael Plant

Very Sick Bael Plant

With emotions involved, this can be an unpleasant experience.

Bael is a dear plant to me. The plant as a whole has religious values (Shiva). As well, its fruits have lots of health benefits, especially for the intestines. Its leaves have a lot of medicinal properties.

When I planted the Bael, there were a lot of emotions that went along.

On the other hand, the Mulberry is something I put in with a lot of enthusiasm. Mulberries are now rare to find, especially in urban locations. For one, they have a very short shelf life; but more than that, with the way lifestyles are heading, I was always worried whether my children would ever get a chance to see and taste these fruits.

The mulberry that I planted yielded twice: once very soon after I planted it, and a second time just before it died. In fact, it died during its second yield phase.

It was quite saddening to see that happen. It made me wonder why it happened. I had been caring for the plants fairly well. Watering them timely, feeding them the right amount of nutrients. They were getting a good amount of sun.

But still their health was deteriorating. And then the demise of the Mulberry. Many thoughts hit my mind.

I consulted the claimed experts in the domain, the maali, the gardener. I got a very vague answer: there must be termites in the soil. It didn’t make much sense to me. I mean, if there were termites they’d have hit on day one. They wouldn’t sleep for months and then just wake up one fine day and start attacking the roots of particular plants, and not all of them.

I wasn’t convinced by the termite theory; but still, giving the expert the benefit of the doubt, I went with his word.

When my mulberry was dead, I dug up its roots. Looking for proof, to see if there were any termites, I uprooted it. But I couldn’t find any trace of termites. And the plant next to it was perfectly healthy and blossoming. So I was convinced that it wasn’t the termites but something else.

But else what ? I still didn’t have an answer to that.

Thinking

The Corona pandemic had taken hold, and there was a lot to worry about, or nothing at all, if you changed your perspective. With plants around my home, our close engagement with them, and the helplessness that I felt after seeking help from the experts, it was time again to build up some knowledge on the subject.

But how? How do you go about a subject you have not much clue about? A subject which has always been around in the surroundings, but to which I had very seldom dedicated focused thought. To be honest, the initial thought of diving into the subject left me clueless. I had no idea where to begin. But, as has been my past history, I chose to take it up as a curiosity.

I gathered some books and skimmed through a couple of pages. The majority of the books I got hold of were of the DIY and How-to-do-Home-Gardening type. They were a decent introduction for a novice, but my topic of curiosity was different.

Thankfully, with the Internet, and YouTube®️ in particular, a lot of good stuff is available as documentary videos. While going through some, I came across a video which mentioned carnivorous plants. Like, for example, this one.

Carnivore Plant

Carnivore Plant

This got me thinking that something similar could have sealed the fate of my Mulberry plant. But who did it? And how to dig further into this suspicion? And most of all, was that possibility actually the reality, or was I just shooting in the dark?

Beginning

To put things in perspective, here’s how it started. When we moved into our home, the gardener put in a couple of stock plants as part of the property handover. Now, I don’t exactly recollect the name of the plants that came in that stock, neither in English nor in Hindi; but at my neighbor’s place, the plant is still there.

Here are some of the pictures of this beauty. But don’t just go by the looks, as looks can be deceiving.

Dominating Plant

Dominating Plant

Dominating Plant

Dominating Plant

Dominating Plant

Dominating Plant

We hadn’t put any serious thought into the plants we were offered by the gardener. After all, we had never anticipated any mishap either.

Plants we planted

Apart from what was offered by the builder/gardener as part of the property handover, over the next 6 months after we moved in, I planted 3 tree-type plants.

  1. Mulberry
  2. Bael
  3. Rudraksha

The Mulberry, as I have described so far, died a tragic death.

Bael, on the other hand, fought hard. But little did we know that the plant was losing the fight. Our impression was that we must have been given a bad breed of the plant. Or maybe the termite theory had some truth to it.

For the Rudraksha plant, the growth was slow. This was the very first time I had seen a Rudraksha plant, so I had no clue what its growth rate could be, or what to expect out of it. I wasn’t sure if the local climate suited the plant. A quick search showed no objections to the plant in the local climate, but that was it. So my approach was to put in the plant and observe.

Here’s what my Rudraksha plant looked like during the initial days/weeks of its settlement

Rudraksha Plant

Rudraksha Plant

The Hint

Days passed. Not much had progressed in gathering information. The plant’s health was as usual: deteriorating at a slow pace.

One day, thinking of the documentaries I had been watching, it struck me what plants can be:

  • Plants can be Carnivores.
  • Plants can be Aggressive.
  • Plants can be Invaders.
  • Plants can be Territorial.

There are many plants whose aggression can be witnessed with bare human eyes. Like creepers. Some of them are good at spreading tentacles, grabbing onto other plants’ stems and branches and spreading above them. This was my hint from the documentaries. That’s one of the many ways plants establish their dominance. That is what hit my mind: if plants are aggressive out in the open, they should have similar behavior underneath the soil.

I mean, what we see as humans is just a part of the actual plant. In most plants, more than half of the actual plant is usually underneath the soil.

So there’s a high chance of getting more information out if you dig the soil and look at the roots.

The Digging

As I mentioned earlier, I do establish bonds, emotions and attachments. But not much usually comes in the way of curiosity.

To dig further into the theory that the problem was elsewhere, within the plants’ own ecosystem, we needed to pick another subject - another plant.

And the plant we chose was the plant which was planted in the initial offering to us, when we moved into our home. It was the same plant breed which was neighboring all our newly planted trees: Rudraksha, Mulberry and Bael.

If you look closely into the pictures above of these plants, you’ll notice the stem of another plant, the Territorial Dominator, is close-by to these 3 plants. That’s because the gardener put in a good number of them to get his action item complete.

So we chose to dig up and uproot one of those plants to start with. Now, while they may look gentle on the outside, with nice tiny red flowers, these plants were giants underneath. Their roots were huge. It took some sweat to remove them single-handedly.

Dominating Plant Uprooted

Dominating Plant Uprooted

Dominating Plant Uprooted

Dominating Plant Uprooted

Dominating Plant Uprooted

Dominating Plant Uprooted

Dominating Plant Uprooted

Dominating Plant Uprooted

Today is brighter

Bael

I’ll let the pictures do the initial talking today.

Healthy Bael

Healthy Bael

Healthy Bael

Healthy Bael

Healthy Bael

Healthy Bael

Healthy Bael

Healthy Bael

Healthy Bael

Healthy Bael

The above ones are pictures of the same Bael plant, which had struggled to live, for almost 14 months. Back then, this plant was starved of its resources. It was dying a slow death out of starvation. After we uprooted the other dominant species, the Bael has recovered and has regained its charm.

In the pictures above of the Bael plant, you can clearly mark out the difference in its stem. The dark colored one is from its months of struggle, while the bright green is from now where it is well nourished and regained its health.

Mulberry

As for the Mulberry, I couldn’t save it, but I later managed to get another one. It turns out I didn’t take good, full-length pictures of the new mulberry when I planted it. The only picture I have is this:

Second Mulberry Plant

Second Mulberry Plant

I recollect when I brought it home, it was around 1 - 1.5 feet in length.

This is where I have it today: Majestically standing, 12 feet and counting

Second Mulberry Plant 7 feet tall

Second Mulberry Plant 7 feet tall

Rudraksha

Then:

Rudraksha Plant

Rudraksha Plant

And now:

I feel quite happy about it:

Rudraksha Plant

Rudraksha Plant

Rudraksha Plant

Rudraksha Plant

Rudraksha Plant

Rudraksha Plant

All these plants are on the very same soil with the very same care taker. What has changed is my experience and learning.

Plant Co-Existence

Plant co-existence is a difficult topic. My knowledge of plants is very limited in general, and co-existence is tricky, unexplored, and at times invisible (when it happens underneath the soil). So far, what I’ve learnt comes purely from observations, experiences and hints from the documentaries.

There surely are many many plants that co-exist very well. A good example is my Bael plant itself, which is healthily co-sharing its space with 2 other Croton plants. Same goes for the Rudraksha, which has a close-by neighbor in an Adenium and an Allamanda.

The plant world is mesmerizing. How they behave, communicate and many many more signs. There’s so much to observe, learn, explore and document. I hope to have more such observations and experiences to share 🙏

29 June, 2021 05:05PM by Ritesh Raj Sarraf (rrs@researchut.com)

Enrico Zini

Building a Transilience playbook in a zipapp

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience.

Mitogen is a great library, but scarily complicated, and I've been wondering how hard it would be to make alternative connection methods for Transilience.

Here's a wild idea: can I package a whole Transilience playbook, plus dependencies, in a zipapp, then send the zipapp to the machine to be provisioned, and run it locally?

It turns out I can.

Creating the zipapp

This is somewhat hackish, but until I can rely on Python 3.9's improved importlib.resources module, I cannot think of a better way:

    def zipapp(self, target: str, interpreter=None):
        """
        Bundle this playbook into a self-contained zipapp
        """
        import zipapp
        import jinja2
        import transilience
        if interpreter is None:
            interpreter = sys.executable

        if getattr(transilience.__loader__, "archive", None):
            # Recursively iterating module directories requires Python 3.9+
            raise NotImplementedError("Cannot currently create a zipapp from a zipapp")

        with tempfile.TemporaryDirectory() as workdir:
            # Copy transilience
            shutil.copytree(os.path.dirname(__file__), os.path.join(workdir, "transilience"))
            # Copy jinja2
            shutil.copytree(os.path.dirname(jinja2.__file__), os.path.join(workdir, "jinja2"))
            # Copy argv[0] as __main__.py
            shutil.copy(sys.argv[0], os.path.join(workdir, "__main__.py"))
            # Copy argv[0]/roles
            role_dir = os.path.join(os.path.dirname(sys.argv[0]), "roles")
            if os.path.isdir(role_dir):
                shutil.copytree(role_dir, os.path.join(workdir, "roles"))
            # Turn everything into a zipapp
            zipapp.create_archive(workdir, target, interpreter=interpreter, compressed=True)

Since the zipapp contains not just the playbook, the roles, and the roles' assets, but also Transilience and Jinja2, it can run on any system that has a Python 3.7+ interpreter, and nothing else!

I added it to the standard set of playbook command line options, so any Transilience playbook can turn itself into a self-contained zipapp:

$ ./provision --help
usage: provision [-h] [-v] [--debug] [-C] [--local LOCAL]
                 [--ansible-to-python role | --ansible-to-ast role | --zipapp file.pyz]
[...]
  --zipapp file.pyz     bundle this playbook in a self-contained executable
                        python zipapp

Loading assets from the zipapp

I had to create ZipFile varieties of some bits of infrastructure in Transilience, to load templates, files, and Ansible yaml files from zip files.

You can see above a way to detect if a module is loaded from a zipfile: check if the module's __loader__ attribute has an archive attribute.
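
For example, here's an illustrative sketch (not necessarily Transilience's exact code) of reopening the zipapp we are running from in order to read bundled assets:

import zipfile
import transilience

# When running from a zipapp, the zipimport loader knows the archive path
archive_path = getattr(transilience.__loader__, "archive", None)
if archive_path is not None:
    assets = zipfile.ZipFile(archive_path, "r")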

Here's a Jinja2 template loader that looks into a zip:

class ZipLoader(jinja2.BaseLoader):
    def __init__(self, archive: zipfile.ZipFile, root: str):
        self.zipfile = archive
        self.root = root

    def get_source(self, environment: jinja2.Environment, template: str):
        path = os.path.join(self.root, template)
        with self.zipfile.open(path, "r") as fd:
            source = fd.read().decode()
        return source, None, lambda: True
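
As a hypothetical usage sketch (the archive name and template paths below are made up for illustration), one can then build a Jinja2 environment whose templates come straight from inside the zipapp:

import zipfile
import jinja2

archive = zipfile.ZipFile("test.pyz", "r")
env = jinja2.Environment(loader=ZipLoader(archive, "roles/mail/templates"))
template = env.get_template("aliases")
print(template.render(hostname="test"))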

I also created a FileAsset abstract interface to represent a local file, and had Role.lookup_file return an appropriate instance:

    def lookup_file(self, path: str) -> "FileAsset":
        """
        Resolve a pathname inside the place where the role assets are stored.
        Returns a FileAsset for the file.
        """
        if self.role_assets_zipfile is not None:
            return ZipFileAsset(self.role_assets_zipfile, os.path.join(self.role_assets_root, path))
        else:
            return LocalFileAsset(os.path.join(self.role_assets_root, path))
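
The FileAsset classes themselves are not shown in this post; a minimal sketch of what such an interface could look like (an assumption on my part, not the actual Transilience code) is:

from abc import ABC, abstractmethod
from typing import BinaryIO
import zipfile


class FileAsset(ABC):
    """Abstract access to a file in the role assets, wherever it is stored."""
    @abstractmethod
    def open(self) -> BinaryIO:
        ...


class LocalFileAsset(FileAsset):
    def __init__(self, path: str):
        self.path = path

    def open(self) -> BinaryIO:
        # Plain file on disk
        return open(self.path, "rb")


class ZipFileAsset(FileAsset):
    def __init__(self, archive: zipfile.ZipFile, path: str):
        self.archive = archive
        self.path = path

    def open(self) -> BinaryIO:
        # Member of the zipapp archive
        return self.archive.open(self.path, "r")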

An interesting side effect of having smarter local file accessors is that I can cache the contents of small files and transmit them to the remote host together with the other action parameters, saving a potential network round trip for each builtin.copy action that has a small source.
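
As a rough illustration of that optimization (the threshold and function name are assumptions for this sketch, not Transilience's actual API), using the FileAsset interface sketched above:

from typing import Optional

SMALL_FILE_THRESHOLD = 16 * 1024  # bytes; arbitrary cutoff for this example


def maybe_embed(asset: FileAsset) -> Optional[bytes]:
    """
    Return the file contents if they are small enough to ship inline with
    the serialized action parameters, or None to fall back to fetching the
    file from the controller in a separate step.
    """
    with asset.open() as fd:
        data = fd.read(SMALL_FILE_THRESHOLD + 1)
    if len(data) <= SMALL_FILE_THRESHOLD:
        return data
    return None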

The result

The result is kind of fun:

$ time ./provision --zipapp test.pyz

real    0m0.203s
user    0m0.174s
sys 0m0.029s

$ time scp test.pyz root@test:
test.pyz                                                                                                         100%  528KB 388.9KB/s   00:01

real    0m1.576s
user    0m0.010s
sys 0m0.007s

And on the remote:

# time ./test.pyz --local=test
2021-06-29 18:05:41,546 test: [connected 0.000s]
[...]
2021-06-29 18:12:31,555 test: 88 total actions in 0.00ms: 87 unchanged, 0 changed, 1 skipped, 0 failed, 0 not executed.

real    0m0.979s
user    0m0.783s
sys 0m0.172s

Compare with a Mitogen run:

$ time PYTHONPATH=../transilience/ ./provision
2021-06-29 18:13:44 test: [connected 0.427s]
[...]
2021-06-29 18:13:46 test: 88 total actions in 2.50s: 87 unchanged, 0 changed, 1 skipped, 0 failed, 0 not executed.

real    0m2.697s
user    0m0.856s
sys 0m0.042s

From a single test run, not a good benchmark, it's 0.203 + 1.576 + 0.979 = 2.758s with the zipapp and 2.697s with Mitogen. Even if I've been lucky, it's a similar order of magnitude.

What can I use this for?

This was mostly a fun hack.

It could however be the basis for a Fabric-based connector, or a clusterssh-based connector, or for bundling a Transilience playbook into an installation image, or to add a provisioning script to the boot partition of a Raspberry Pi. It looks like an interesting trick to have up one's sleeve.

One could even build an Ansible-based connector(!) in which a simple Ansible playbook, with no facts gathering, is used to build the zipapp, push it to remote systems and run it. That would be the wackiest way of speeding up Ansible, ever!

Next: using Systemd containers with unittest, for Transilience's test suite.

29 June, 2021 03:46PM

Antoine Beaupré

Another syncmaildir crash

So I had another major email crash with my syncmaildir setup. This time I was at least able to confirm the issue, and I still haven't lost mail thanks to backups and sheer luck (again).

The crash

It is not really worth going over the crash in detail, as it's fairly similar to the last one: something bad happened and smd started destroying everything. The hint is that it takes a long time to do what usually takes seconds. It helps that I now have a second monitor showing logs.

I still lost much more mail than the last time. I used to have "301 723 messages", according to notmuch. But then when I ran smd-pull by hand, it was telling me:

95K emails scanned

Oops. You can see notmuch happily noticing the destroyed files on the server:

jun 28 16:33:40 marcos notmuch[28532]: No new mail. Removed 65498 messages. Detected 1699 file renames.
jun 28 16:36:05 marcos notmuch[29746]: No new mail. Removed 68883 messages. Detected 2488 file renames.
jun 28 16:41:40 marcos notmuch[31972]: No new mail. Removed 118295 messages. Detected 3657 file renames.

The final count ended up being 81 042 messages, according to notmuch. A whopping 220 000 mails deleted.

The interesting bit, this time around, is that I caught smd in the act of running two processes in parallel:

jun 28 16:30:09 curie systemd[2845]: Finished pull emails with syncmaildir. 
jun 28 16:30:09 curie systemd[2845]: Starting push emails with syncmaildir... 
jun 28 16:30:09 curie systemd[2845]: Starting pull emails with syncmaildir... 

So clearly that is the source of the bug.

Recovery

Emergency stop on curie:

notmuch dump > notmuch.dump
systemctl --user --now disable smd-pull.service smd-pull.timer smd-push.service smd-push.timer notmuch-new.service notmuch-new.timer

On marcos (the server), guessed the number of messages delivered since the last backup to be 71, just looking at timestamps in the mail log. Made a list:

grep postfix/local /var/log/mail.log | tail -71 > lost-mail

Found postfix queue IDs:

sed 's/.*\]://;s/:.*//' lost-mail > qids

Turn those into message IDs, find those that are missing from the disk (had previously run notmuch new just to be sure it's up to date):

while read qid ; do 
    grep "$qid: message-id" /var/log/mail.log
done < qids  | sed 's/.*message-id=<//;s/>//' | while read msgid; do
    sudo -u anarcat notmuch count --exclude=false id:$msgid | grep -q 0 && echo $msgid
done

Copy this back on curie as missing-msgids and:

$ wc -l missing-msgids 
48 missing-msgids
$ while read msgid ; do notmuch count --exclude=false id:$msgid | grep -q 0 && echo $msgid ; done < missing-msgids
mailman.189.1624881611.23397.nodes-reseaulibre.ca@reseaulibre.ca
AnwMy7rdSpK-N-vt4AiOag@ismtpd0148p1mdw1.sendgrid.net

only two mails missing! whoohoo!

Copy those back onto marcos as really-missing-msgids, and look at the full mail logs to see what they are:

~anarcat/src/koumbit-scripts/mail/postfix-trace --from-file really-missing-msgids2

I actually remembered deleting those, so no mail lost!

Rebuild the list of msgids that were lost, on marcos:

while read qid ; do grep "$qid: message-id" /var/log/mail.log; done < qids  | sed 's/.*message-id=<//;s/>//'

Copy that on curie as lost-mail-msgids, then copy the files over in a test dir:

while read msgid ; do
    notmuch search --output=files --exclude=false "id:$msgid"
done < lost-mail-msgids | sed 's#/home/anarcat/Maildir/##' | rsync -v  --files-from=- /home/anarcat/Maildir/ shell.anarc.at:restore/Maildir-angela/

If that looks about right, on marcos:

find restore/Maildir-angela/ -type f | wc -l

... should match the number of missing mails, roughly.

Copy it in the real spool:

while read msgid ; do
    notmuch search --output=files --exclude=false "id:$msgid"
done < lost-mail-msgids | sed 's#/home/anarcat/Maildir/##' | rsync -v  --files-from=- /home/anarcat/Maildir/ shell.anarc.at:Maildir/

Then on the server, notmuch new should find the new emails, and we shouldn't have any lost mail anymore:

while read qid ; do grep "$qid: message-id" /var/log/mail.log; done < qids  | sed 's/.*message-id=<//;s/>//' | while read msgid; do sudo -u anarcat notmuch count --exclude=false id:$msgid | grep -q 0 && echo $msgid ; done

Then, crucial moment, try to pull the new mails from the backups on curie:

anarcat@curie:~(main)$ smd-pull  -n  --show-tags -v
Found lockfile of a dead instance. Ignored.
Phase 0: handshake
Phase 1: changes detection
    5K emails scanned
   10K emails scanned
   15K emails scanned
   20K emails scanned
   25K emails scanned
   30K emails scanned
   35K emails scanned
   40K emails scanned
   45K emails scanned
   50K emails scanned
Phase 2: synchronization
Phase 3: agreement
default: smd-client@localhost: TAGS: stats::new-mails(49687), del-mails(0), bytes-received(215752279), xdelta-received(3703852)
"smd-pull  -n  --show-tags -v" took 3 mins 39 secs

This brought me back to the state after the backup plus the mails delivered during the day, which means I had to catch up with all my holiday's read emails (1440 mails!), but thankfully I had made a dump of the notmuch database on curie at the start of the procedure, so this actually restored a sane state:

pv notmuch.dump | notmuch restore

Phew!

Workaround

I have filed this as a bug in upstream issue 18. Considering I filed 11 issues and only 3 of those were closed, I'm not holding my breath. I nevertheless filed PR 19 in the hope that this will fix my particular issue, but I'm not even sure this is the right fix...

Fix

At this point, I'm really ready to give up on SMD. It's really, really nice to be able to sync mail over SSH because I don't need to store my IMAP password on disk. But surely there are more reliable syncing mechanisms. I do not remember ever losing that much mail before. At worst, offlineimap would duplicate emails like mad, but never destroy my entire mail spool that way.

As mentioned before, there are other programs that sync mail. I'm looking at:

  • offlineimap3: requires IMAP, used the py2 version in the past, might just still work, first sync painful (IIRC), ways to tunnel over SSH, see comment below
  • isync/mbsync: might be faster, I remember having trouble switching from offlineimap to this, has support for TLS client certs, running over SSH, and generally has good words from multiple Debian and notmuch people
  • getmail: just downloads email, might not be enough
  • nncp: treat the local spool as another mail server, might not be compatible with my "multiple clients" setup
  • doveadm-sync: requires dovecot on both ends, but supports using SSH to sync, will try this next, may have performance problems, see comment below
  • interimap: syncs two IMAP servers, apparently faster than doveadm and offlineimap
  • mail-sync: notify support, happens over any piped transport (e.g. ssh), diff/patch system, requires binary on both ends, mentions UUCP in the manpage, seems awfully complicated to setup, mentions rsmtp which is a nice name for rsendmail

29 June, 2021 03:10PM

Arturo Borrero González

Last couple of talks

Logos

In the last few months I presented several talks. Topics ranged from a round table on free software, to sharing some of my work as SRE in the Cloud Services team at the Wikimedia Foundation. For some of them the videos are now published, so I would like to write a reference here, mostly as a way to collect such events for my own record. Isn’t that what a blog is all about, after all?

Before you continue reading, let me mention that the two talks I’ll reference were given in my native Spanish. The videos are hosted on YouTube and autogenerated subtitles should be available, with doubtful quality though. Also, there was at least one additional private talk that I’m not allowed to comment on here today.

I was invited to participate in a Docker community event called Kroquecon, which was aimed at pushing the spanish-speaking Kubernetes community around the world. The event name is a word play with ‘Kubernetes’, ‘conference’ and ‘croqueta’, typical Spanish food. The talk happened on 2021-04-29, and I was part of a round table about free software, communities and how to join and participate in such projects. I commented on my experience in both the Debian project, Netfilter and my several years in Google Summer of Code (3 as student, 2 as mentor).

Video of the event:


The other event was the CNCF-supported Kubernetes Community Days Spain (KCD Spain). During Kroquecon I was encouraged to submit a talk proposal for this event, to talk about something related to our use of Kubernetes in the Wikimedia Cloud Services team at the Wikimedia Foundation.

The proposal was originally rejected. Then, a couple of weeks before the event itself, I was contacted by the organizers with a greenlight to give it because the other speaker couldn’t make it. My coworker David Caro joined me in the presentation. It was titled “Conoce Wikimedia Toolforge, plataforma basada en Kubernetes” (or “Meet Wikimedia Toolforge, Kubernetes-based platform”).

We explained what Wikimedia Cloud Services is, focusing on Toolforge, and in particular how we use Kubernetes to enable the platform’s most interesting use cases. We covered several interesting topics, including how we handle multi-tenancy, or the problems we had with the etcd & ceph combo. The slides we used are available.

Video of the event:


EOF

29 June, 2021 08:25AM

June 28, 2021

hackergotchi for Shirish Agarwal

Shirish Agarwal

Indian Capital Markets, BSE, NSE

I had been meaning to write on the above topic for almost a couple of months now, but just kept procrastinating about it. Push came to shove when Sucheta Dalal and Debasis Basu shared their understanding, wisdom, and more in the new book called ‘Absolute Power – Inside story of the National Stock Exchange’s amazing success, leading to hubris, regulatory capture and algo scam’. Now, I will not go into the details of the new book here, as I have not bought it yet; but even if I had bought it and shared some of its revelations, it wouldn’t have done justice to either the book or the reader without first knowing some of the background.

Before I jump ahead, I would suggest people read my sort-of introductory blog post on banking history so they know where I’m coming from. I’m going to deviate a bit from Banking, as this is about trade and capital markets, although Banking will come in later on. And I will also be sharing some cultural insights along with the history, so people are aware of why things happened the way they did.

Calicut, Calcutta, Kolkata, one-time major depot around the world

Now, one cannot start any topic about trade without talking about Kolkata. While today, it seems like a bastion of communism, at one time it was one of the major trade depots around the world. Both William Dalrymple and the Chinese have many times mentioned Kolkata as being one of the major centers of trade. This was between the 13th and the late 19th century. A cursory look throws up this article which talks about Kolkata or Calicut as it was known as a major trade depot. There are of course many, many articles and even books which do tell about how Kolkata was a major trade depot. Now between the 13th and 19th century, a lot of changes happened which made Kolkata poorer and shifted trade to Mumbai/Bombay which in those times was nothing but just a port city like many others.

The Rise of the Zamindar

Around the 15th century, when Babur invaded Hindustan, he realized that Hindustan was too big a country to be governed alone. And Hindustan was much broader than independent India today. So he created the title of Zamindars. Interestingly, if you look at the Mughal period, they were much more in tune with Hindustani practices than the British who came later. They used the caste divisions and hierarchy wisely, making sure that the status quo was maintained as far as castes/creed were concerned. While in-fighting with various rulers continued, it was more or less about land and power rather than anything else. When the Britishers came, they co-opted the same arrangement with a minor adjustment. In the earlier system, the zamindars didn’t have the power to be landowners; the Britishers gave them land ownership. A huge percentage of these zamindars, especially in Bengal, were from my own caste, Banias or Baniyas.

The problem, and the solution, for the Britishers had been that this was a large land to control and exploit while the number of British officers and nobles was very small. So they gave a lot of powers to the Banias. The only thing the British insisted on was very high rents from the newly minted Zamindars. The Zamindars in turn used the powers of their personal fiefdoms to give loans at very high interest rates; when the poor were unable to pay the interest, they would take the land, while at the same time slavery was forced on both men and women, many a time with rapes and affairs. While there have been many records shedding light on it, I don’t think it could be portrayed any more powerfully than as enacted by Shabana Azmi in Ankur: The Seedling. Another prominent grouping formed around the same time was the Bhadralok. Now, the Bhadralok, while having all the amenities of belonging to the community, turned a blind eye to the excesses being done by the Zamindars. How much they played a hand in the decimation of Bengal has been a matter of debate, but they did have a hand; that much is not contested.

The Rise of Stock Exchanges

Sadly and interestingly, many people believe, and continue to believe, that stock exchanges are a recent phenomenon. The first stock exchange, though, was the Calcutta Stock Exchange rather than the Bombay Stock Exchange. How valuable Calcutta was to the Britishers in its early years can be gauged from the fact that it was made the capital of India in 1772. In fact, after the Grand Trunk Road (after which there have even been train names in both countries), x number of books have been written about the trade between Calcutta and Peshawar (now in Pakistan). And it was not just limited to trade but also included cultural give-and-take between the two centers. Even today, if you look at YT (YouTube) and look up some interviews of old people, you find many interesting anecdotes of people sharing both culture and trade.

The problem of the 60’s and rise of BSE

After India became independent and the Constitutional debates happened, the new elites understood that there cannot be two power centers that could govern India. On one hand, were the politicians who had come to power on the back of the popular vote, the other was the Zamindars, who more often than not had abused their powers which resulted in widespread poverty. The Britishers are to blame, but so do the middlemen as they became willing enablers to the same system of oppression. Hence, you had the 1951 amendment to the Constitution and the 1956 Zamindari Abolition Act. In fact, you can find much more of an in-depth article both about Zamindars and their final abolition here. Now once Zamindari was gone, there was nothing to replace it with. The Zamindars ousted of their old roles turned and tried to become Industrialists. The problem was that the poor and the downtrodden had already had experiences with the Zamindars. Also, some Industrialists from North and West also came to Bengal but they had no understanding of either the language or the cultural understanding of what had happened in Bengal. And notice that I have not talked about both the famines and the floods that wrecked Bengal since time immemorial and some of the ones which got etched on soul of Bengal and has marks even today 😦

The psyche of the Bengali and the ‘Bhadralok’ has gone through enormous shifts. I have met quite a few and do see the guilt they feel. If one wonders as to how socialist parties are able to hold power in Bengal, look no further than ‘Tarikh‘ which tells and shares with you that even today how many Bengalis still feel somewhat lost.

The Rise of BSE

Now, while Kolkata Stock Exchange had been going down, for multiple reasons other than listed above. From the 1950s onwards Jawaharlal Nehru had this idea of 5-year plans, borrowed from socialist countries such as Russia, China etc. His vision and ambition for the newly minted Indian state were huge, while at the same time he understood we were poor. The loot by East India Company and the Britishers and on top of that the division of wealth with Pakistan even though the majority of Muslims chose and remained with India. Travel on Indian Railways was a risky affair. My grandfather had shared numerous tales where he used to fill money in socks and put the socks on in boots when going between either Delhi – Kolkata or Pune – Kolkata.

Also, as Delhi became the capital (which it unofficially had been for many years), transparency from Kolkata-based firms decreased. Many Kolkata firms were mismanaged and shut down, while Maharashtra, my own state, saw a huge boom in industrialization as well as farming. From the 1960s to the 1990s there were many booms and busts in the stock exchanges, but most were manageable.

While the 60s began on a good note as Goa was finally freed from the Portuguese army and influence, the 1962 war with the Chinese made many a soul question where we went wrong. Jawaharlal Nehru went all over the world to ask for help but had to return home empty-handed. Bollywood showed a world of bell-bottoms and cars and whatnot, while the majority were still trying to figure out how to put two square meals on the table. India suffered one of the worst famines in those times. People had to ration food. Families made do with either one meal or just roti (flatbread) rather than rice. In Bengal, things were much more severe. There were huge milk shortages, so Bengalis were told to cut down on sweets. This enraged the Bengalis as nothing else could.

Note – If one wants to read how bad Indians felt at that time, all one has to read is V.S. Naipaul’s ‘An Area of darkness‘ .

This was also the time when quite a few Indians took their first step out of India. While Air India had just started, the fares were prohibitive. Those who were not well off, either worked on ships or went via passenger or cargo ships to Dubai/Qatar middle-east. Some went to Russia and some even to States. While today’s émigrés want to settle in the west forever and have their children and grandchildren grow up in the West, in the 1960s and 70s the idea was far different. The main purpose for a vast majority was to get jobs and whatnot, save maximum money and send it back to India as a remittance. The idea was to make enough money in 3-5-10 years, come back to India, and then lead a comfortable life.

Sadly, there has hardly been any academic work done in India, at least to my knowledge, to document the sacrifices made by Indians in search of jobs, life, purpose, etc. in the 1960s and 1970s. The 1970s was also when alternative cinema started its journey, with people like Smita Patil and Naseeruddin Shah who portrayed people’s struggles on-screen. Most of them didn’t have commercial success because the movies and the stories were bleak. While the acting was superb, most Indians loved to be captured by fights, car-chases, and whatnot rather than the dreary existence which they had. And the alt cinema forced them to look into the mirror, which was frowned upon both by the masses and the classes. So cinema, which could have been a wake-up call for a lot of Indians, failed. One of the most notable works of that decade, at least to me, was Manthan. 1961 was also marked by the launch of the Economic Times and the Financial Express, which shows that there was some appetite for financial news and understanding.

The 1970s was also a very turbulent time in the corporate sector and stock exchanges. Again, the companies which were listed were run by the very well-off and many of them had been abroad. At the same time, you had fly-by-night operators. One of the happenings which started in this decade is you had corporate wars and hostile takeovers, quite a few of them of which could well have a Web series or two of their own.

This was also a decade marked by huge labor unrest, which again changed the face of Bombay/Mumbai. From the 1950s till the 1970s, Bombay was known for its mills. Large migrant communities from all over India came to Bombay to become the next Bollywood star, and if that didn’t happen, they would get jobs in the mills. Bombay/Mumbai has/had this unique feature that somehow you will make money to make ends meet. Of course, with the pandemic, even that has gone for a toss. Labor unrest was a defining characteristic of that decade. Three movies, Kaala Patthar, Kalyug, and Ankush, give a broad outlook of what happened in that decade. One thing which was present and omnipresent, then and now, is how time and time again we lost our demographic dividend. Again there was an exodus of young people who ventured out to seek fortunes elsewhere. The 1970s and 80s were also famous for the license Raj which they brought in. Just like the Soviets, there were waiting periods for everything. A telephone line meant waiting anywhere from 4 to 8 years. In 1987, when we applied and got a phone within 2-3 months, most of my relatives, from both my mother’s and father’s side, could not believe we paid 0 to get a telephone line. We did pay the telephone guy INR 10/-, which was a somewhat princely sum, when he was installing it; even then they could not believe it, as in Northern India you couldn’t get a phone line even if your number had come up. You had to pay anywhere from INR 500/1000 or more to get a line. This was BSNL, and to reiterate, there were no alternatives at that time.

The 1990s and the Harshad Mehta Scam

The 90s was when I was a teenager. You do all the stupid things for love, lust, whatever. That is also the time you are really introduced to the world of money. During my time, there were only three choices: Sciences, Commerce, and Arts. If History was your favorite subject then you would take Arts, and if it was not, and you were not studious, then you would end up in Commerce. This is how careers were chosen. So I enrolled in Commerce. Because my grandfather and family on my mother’s side were interested in stocks, both as a saving and a compounding tool, I was able to see the Pune Stock Exchange in action one day. The only thing I remember from that day is people shouting loudly, waving various chits. I had no idea that deals of maybe thousands or even lakhs were being struck through those chits. The Pune Stock Exchange had been newly minted. I also participated in a couple of mock stock exchanges and came to understand that one has to be aggressive in order to win. You had to be really loud to be heard over others; you could not afford to be shy. Also, spread your risks. Sadly, nothing about the stock markets was in the syllabus. 1991 was also when we saw the Iraq war and the balance of payments crisis in India, and we didn’t know that the Harshad Mehta scam was around the corner.

Most of the scams in India have been caught because the person behind them was flashy. And this was the reason that he too was caught, by Ms. Sucheta Dalal, a young beat reporter from the Indian Express who had been covering the Indian stock market. Many of her articles were thought-provoking.

Now, a brief bit of background is required before we actually get to the scam. Because of the 1991 balance of payments crisis, the IMF rescued India on the condition that India throw its market open. In the 1980s itself, Rajeev Gandhi had wanted to partially open up India, but both politicians and Industrialists advised him against it, saying we were not ready. On 21st May 1991, Rajeev Gandhi was assassinated by the LTTE. A month later, thanks to the sympathy vote, the Narsimha Rao Govt. took power. For most new Governments there is usually a honeymoon period lasting 6 months or so, until they get settled in their roles, before people start asking tough questions. It was not to be for this Govt.; the problems hit immediately, as they had been building for a few years. Although, in many ways, our economy was better than it is today, the one thing India didn’t do well at that time was managing foreign exchange. Only a few Indians had both the money and the opportunity to go abroad, and the need for electronics was limited. One of the biggest imports then, and still today, is Energy, i.e. Oil. While today it is Oil/Gas and electronics, at that time it was only Oil. The Oil import bill was ballooning while exports were more or less stagnant and mostly comprised of raw materials rather than finished products. Even today, it is largely this: one of the biggest Industrialists in India, Ambani, exports gas/oil, while Adani exports coal. Anyways, the deficit was large enough to trigger a payment crisis, and Narsimha Rao had to throw open the Indian market almost overnight. Some changes became quickly apparent, while others took a long time to come.

Satellite Television and Entry of Foreign Banks

Almost overnight, from 1 channel we became multi-channel. Star TV (Rupert Murdoch) brought us Bold and Beautiful, while CNN broadcast the Iraq War. It was unbelievable for us that we were getting reports of what had happened only 24-48 hours earlier. Fortunately or unfortunately, I was still too much of a teenager to understand the import of what was happening. Even in my college, except for one or two people, neither it nor the economy was a topic for debate or talk. We were basically cocooned in our own little world.

But this was not the case for the rest of India, and especially the banks. The entry of foreign banks was a rude shock to Indian banks. The foreign banks were bringing both technology and sophistication to their offerings, and Indian Banks needed and wanted fast money to show hefty profits. Demand for credit wasn’t much, at least nowhere near the level it is today. At the same time, defaults on credit were nowhere near as high as today. But that will require its own space and article.

To quench the banks’ thirst for hefty profits, enter Harshad Mehta. At that point in time, banks were not permitted at all to invest in the securities/share market. They could only buy Government securities or bonds, which had a coupon rate of say 8-10%, nowhere near enough to satisfy the hefty profits desired by Indian banks. On top of that, the cash was blocked for a long time: most of these Government bonds had maturity dates anywhere between 10-20 years, and some even longer. Now, one loophole in all this was that the banks themselves could not buy these securities; they had to approach a registered broker of the share market who would do these transactions on their behalf. Here is where Mr. Mehta played his game. He shared both legal and illegal ways in which both the bank and he would prosper. While banking at one time was thought to be conservative and somewhat cautious, either because they were afraid that Western private banks would take that pie, or whatever their reasons might be, they agreed to his antics.

To play the game, Harshad Mehta needed lots of cash, which the banks provided him in the guise of buying securities that were never bought, but the amounts were transferred to his account. He actively traded stocks, at the same time made a group, and also made the rumor mill work to his benefit. The share market is largely a reactionary market. It operates on patience, news, and rumor-mill. The effect of his shenanigans was that the price of a stock that was trending at say INR 200 reached the stratospheric height of INR 9000/- without any change in the fundamentals or outlook of the stock. His thirst didn’t remain restricted to stocks but also ventured into the unglamorous world of Govt. securities where he started trading even in them in large quantities. In order to attract new clients, he coveted a fancy lifestyle. The fancy lifestyle was what caught the eye of Sucheta Dalal, and she started investigating the deals he was doing. Being a reporter, she had the advantage of getting many doors to open and get information that otherwise would be under lock and key. On 23rd April 1992, Sucheta Dalal broke the scam.

The Impact

The impact was almost like a shock to the markets. Even today, it can be counted as one of the biggest scams in the Indian market if you adjust it for inflation. I haven’t revealed much of the scam and what happened, simply because Sucheta Dalal and Debasis Basu wrote The Scam for that purpose. How do I shorten a story and experience that took roughly 300-odd pages into one or two paragraphs? It is simply impossible. The impact, though, was severe. The Indian stock market became a bear market for two years. Sucheta Dalal was kicked out of / made to resign from the Indian Express. The thing is simple: all newspapers survive on readership and advertisements, with advertisements coming from companies that were having a golden run, whether justified or not, on the bourses/Stock Exchange. For many companies, having a good number on the stock exchange mattered more than the company fundamentals. There was supposed to be a speedy fast-track court set up for financial crimes, but it worked only for the Harshad Mehta case and still took over 5 years. It led to the creation of the NSE (National Stock Exchange). It also led to the creation of SEBI, on paper one of the most powerful regulators, with a wide range of powers and remit, but on the ground it more often proved to be no more than a glorified postman. And the few times it used its powers, it used them on the wrong people, and people had to go to the courts to get justice. But then this is not about SEBI, nor is this blog post about the NSE. I have anyway shared about Absolute Power above, so will not repeat the link here.

The anecdotal impact was widespread. Our own family broker took the extreme step. For my grandfather on my mother’s side, he was like a second son. The news of his suicide devastated my grandfather quite a bit, which we realized much later when he was diagnosed with Alzheimer’s. Our family stockbroker had been punting, taking lots of cash from the market at very high rates and betting wildly on stocks as the stock market was reaching for the stars; when the market crashed, he was insolvent. How the family survived is a tale in itself. He had got married just a few years earlier and had a cute boy and girl soon after. While today both are grown up, at that time what the wife faced only she knows. There were also quite a few shareholders who took the extreme step. The stock markets in those days were largely based on trust, and even today they are, unless you are into day-trading. So there was always some money left on the table for the share/stockbroker, which would be squared off in the next deal/transaction, where again you would leave something. My grandfather once thought of going over and meeting them, and we went to the lane where their house is; seeing the line of people who had come for recovery of their loans, we turned back with a heavy heart.

There was another taboo that kinda got broken that day: the taboo of acknowledging that the stock market is open to scams. From 1992 to 2021 there has been a cycle of scams. Even now, today, the stock market is at unnatural highs. We know for sure that a lot of hot money is rolling around, a lot of American pension funds etc. Till it works, it will work; then some news or something, and that money will be moved out. Who will be left holding the can? The Indian investors. A few days back, Ambani writes about Adani. Now, while the facts shared are correct, is Adani the only one, the only company to have a small free float in the market? There are probably more than 1/4th or 1/3rd of well-respected companies who may have a similar configuration; the only problem is that it is difficult to know who the proxies are.

Now, if I were to reflect and compare this with either the 1960s or even the 1990s, I don’t find much difference, apart from the fact that the proxy is sitting in Mauritius. At the same time, today you can speculate on almost anything: stocks, commodities, derivatives, foreign exchange, cricket matches etc., the list is endless. Since 2014, the rise in speculation rather than investment has been dramatic, almost stratospheric. Sadly, there are no studies, or even attempts, to document this. How much official and unofficial speculation there is in the market, nobody knows. Money markets have become both fluid and non-transparent. In theory, you have all sorts of regulators, but it is still very much like the Wild West. One thing to note is that even Income Tax had to change and bring in provisions to account for speculative income. So, starting from being totally illegitimate, it has become kind of legal and is part of Income Tax. And if speculation is not wrong, why not make Indian cricket officially a speculative event? That would be honest, and the GOI would get part of the proceeds.

Conclusion

I wish there were some positive conclusion I could draw, but sadly there is not. Just today I read two articles about the ongoing environmental issues in Himachal Pradesh. As I had shared earlier, the last time I visited those places was in 2011, and even at that time I was devastated to see the kind of construction going on. Jogiwara Road, which they showed, used to be flat single ground/first floor dwellings, most of which were restaurants and whatnot. I had seen the water issues both in Himachal and UT (Uttarakhand) back then, and that was when they were making huge dams. In the U.S. they are removing dams, and here we want more dams 😦

28 June, 2021 09:12PM by shirishag75

Russ Allbery

control-archive 1.9.1

This is the set of scripts and configuration files that maintain the Usenet control message archive and newsgroup lists hosted on ftp.isc.org.

This is a data-only update primarily to update the Big Eight control signing key. It also removes some remnant control information about net.* and documents that the hierarchy is abandoned.

You can get the latest release from the control-archive distribution page.

28 June, 2021 02:55AM