March 18, 2025

Petter Reinholdtsen

New theora release 1.2.0beta1 after almost 15 years

A few days ago I discovered that a security problem reported against the theora library last year was still not fixed. Because I was already up to speed on Xiph development, I decided it was time to wrap up a new theora release. This new release was tagged in the Xiph gitlab theora instance Saturday. You can fetch the new release from the Theora home page.
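For reference, building the release should follow the usual autotools dance; here is a sketch, with the download URL and tarball name assumed from Xiph's usual release layout (check the Theora home page for the real links):

$ wget https://downloads.xiph.org/releases/theora/libtheora-1.2.0beta1.tar.gz
$ tar xf libtheora-1.2.0beta1.tar.gz
$ cd libtheora-1.2.0beta1
$ ./configure
$ make && make check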

The list of changes since the 1.2.0alpha1 release, from the CHANGES file in the tarball, looks like this:

libtheora 1.2.0beta1 (2025 March 15)

  • Bumped minor SONAME versions as methods changed constness of arguments.
  • Updated libogg dependency to version 1.3.4 for ogg_uint64_t.
  • Updated doxygen setup.
  • Updated autotools setup and support scripts (#1467 #1800 #1987 #2318 #2320).
  • Added support for RISC OS.
  • Fixed mingw build (#2141).
  • Improved ARM support.
  • Converted SCons setup to work with Python 3.
  • Introduced new configure options --enable-mem-constraint and --enable-gcc-sanitizers.
  • Fixed all known compiler warnings and errors from gcc and clang.
  • Improved examples for stability and correctness.
  • Various speed, bug fixes and code quality improvements.
  • Fixed build problem with Visual Studio (#2317).
  • Avoids undefined bit shift of signed numbers (#2321, #2322).
  • Avoids example encoder crash on bogus audio input (#2305).
  • Fixed musl linking issue with asm enabled (#2287).
  • Fixed some broken clamping in rate control (#2229).
  • Added NULL check of _tc and _setup even for data packets (#2279).
  • Fixed mismatched oc_mb_fill_cmapping11 signature (#2068).
  • Updated the documentation for theora_encode_comment() (#726).
  • Adjusted build to only link libcompat with dump_video (#1587).
  • Corrected an operator precedence error in the visualization code (#1751).
  • Fixed two spelling errors in the comments (#1804).
  • Avoid negative bit shift operation in huffdec.c (CVE-2024-56431).
  • Improved library documentation and specification text.
  • Adjusted library dependencies so libtheoraenc does not depend on libtheoradec.
  • Handle fallout from CVE-2017-14633 in libvorbis, check return value in encoder_example and transcoder_example.

There are a few bugs still being investigated, and my plan is to wrap up a final 1.2.0 release two weekends from now.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

18 March, 2025 07:30AM

Dima Kogan

Eigen macro specializations crashes

There's an issue in the Eigen linear algebra library where linking together objects compiled with different flags causes the resulting binary to crash. Some details are written-up in this mailing list thread.

I just encountered a situation where a large application sometimes crashes for unknown reasons, and needed a method to determine whether this Eigen issue could be the cause. I ended up doing this by using the DWARF data to see if the linked binary contains the different incompatible flavors of malloc / free or not.

I downloaded the small demo program showing the problem. I built it:

CCXXFLAGS=-g make

Here if you run ./main, the bug is triggered, and a crash occurs. I looked at the debug info for the code in question:

for o (main lib.so) {
  echo "======== $o";
  readelf --debug-dump=decodedline $o \
  | awk \
    '$1 ~ /^Memory.h/ {
       if(180 <= $2 && $2 <= 186) {
         have["malloc_glibc"]=1
       }
       if(188 == $2) {
         have["malloc_handmade"]=1
       }
       if(201 <= $2 && $2 <= 204) {
         have["free_glibc"]=1
       }
       if(206 == $2) {
         have["free_handmade"]=1
       }
     }
     END {
       for (var in have) {
         print(var);
       }
     }'
}

It says:

======== main
free_handmade
======== lib.so
malloc_glibc
free_glibc

Here I looked at main and lib.so (the build products from this little demo). In a real case you'd look at every shared library linked into the binary and the binary itself. On my machine /usr/include/eigen3/Eigen/src/Core/util/Memory.h looks like this, starting on line 174:

174 EIGEN_DEVICE_FUNC inline void* aligned_malloc(std::size_t size)
175 {
176   check_that_malloc_is_allowed();
177 
178   void *result;
179   #if (EIGEN_DEFAULT_ALIGN_BYTES==0) || EIGEN_MALLOC_ALREADY_ALIGNED
180 
181     EIGEN_USING_STD(malloc)
182     result = malloc(size);
183 
184     #if EIGEN_DEFAULT_ALIGN_BYTES==16
185     eigen_assert((size<16 || (std::size_t(result)%16)==0) && "System's malloc returned an unaligned pointer. Compile with EIGEN_MALLOC_ALREADY_ALIGNED=0 to fallback to handmade aligned memory allocator.");
186     #endif
187   #else
188     result = handmade_aligned_malloc(size);
189   #endif
190 
191   if(!result && size)
192     throw_std_bad_alloc();
193 
194   return result;
195 }
196 
197 /** \internal Frees memory allocated with aligned_malloc. */
198 EIGEN_DEVICE_FUNC inline void aligned_free(void *ptr)
199 {
200   #if (EIGEN_DEFAULT_ALIGN_BYTES==0) || EIGEN_MALLOC_ALREADY_ALIGNED
201 
202     EIGEN_USING_STD(free)
203     free(ptr);
204 
205   #else
206     handmade_aligned_free(ptr);
207   #endif
208 }

The above awk script looks at the two malloc paths and the two free paths, and we can clearly see that this binary only ever calls malloc_glibc(), but contains both flavors of free(). So this can crash. What we want instead is for the whole executable (shared libraries and all) to contain only one flavor of malloc() and free(); that would guarantee no crashing.

There are more functions in that header that should be instrumented (realloc() for instance), and the different alignment paths should be instrumented similarly (as described in the mailing list thread above), but this shows that the technique works.
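If the mismatch is confirmed, one blunt mitigation (a sketch, assuming every translation unit can be rebuilt) is to pin Eigen's allocation path explicitly so all objects agree; the assertion message quoted from Memory.h above points at this exact knob:

// Define this identically everywhere, BEFORE any Eigen header is included
// (or pass -DEIGEN_MALLOC_ALREADY_ALIGNED=0 in every build):
// 0 forces the handmade path, 1 forces the system malloc()/free() path.
#define EIGEN_MALLOC_ALREADY_ALIGNED 0
#include <Eigen/Core>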

18 March, 2025 03:52AM by Dima Kogan

March 17, 2025


Sergio Talens-Oliag

Configuring forgejo actions

Last week I decided I wanted to try out forgejo actions to build this blog instead of using webhooks, so I looked at the documentation and started playing with it until I had it working as I wanted.

This post describes how I’ve installed and configured a forgejo runner, how I’ve added an oci organization to my instance to build, publish and mirror container images, and how I’ve added a couple of additional organizations (actions and docker for now) to mirror interesting actions.

The changes made to build the site using actions will be documented on a separate post, as I’ll be using this entry to test the new setup on the blog project.

Installing the runner

The first thing I did was install a runner on my server. I decided to use the OCI image installation method, as it seemed to be the easiest and fastest one.

The commands I’ve used to setup the runner are the following:

$ cd /srv
$ git clone https://forgejo.mixinet.net/blogops/forgejo-runner.git
$ cd forgejo-runner
$ sh ./bin/setup-runner.sh

The setup-runner.sh script does multiple things:

  • create a forgejo-runner user and group
  • create the necessary directories for the runner
  • create a .runner file with a predefined secret and the docker label

The setup-runner.sh code is available here.

After running the script the runner has to be registered with the forgejo server; that can be done using the following command:

$ forgejo forgejo-cli actions register --name "$RUNNER_NAME" \
    --secret "$FORGEJO_SECRET"

The RUNNER_NAME variable is defined on the setup-runner.sh script and the FORGEJO_SECRET must match the value used on the .runner file.

Starting it with docker-compose

To launch the runner I’m going to use a docker-compose.yml file that starts two containers, a docker in docker service to run the containers used by the workflow jobs and another one that runs the forgejo-runner itself.

The initial version used a TCP port to communicate with the dockerd server from the runner, but when I tried to build images from a workflow I noticed that the containers launched by the runner would not be able to run another dockerd inside the dind one and, even if they could, it would be computationally expensive.

To avoid the issue I modified the dind service to use a unix socket on a shared volume that can be used by the runner service to communicate with the daemon and also re-shared with the job containers so the dockerd server can be used from them to build images.

Warning:

Using the same docker server to run the jobs and to build images from inside them has security implications, but this instance is for a home server where I am the only user, so I am not worried about it, and this way I can save some resources (in fact, I could use the host docker server directly instead of using a dind service, but in case I want to run other containers on the host I prefer to keep the one used for the runner isolated from it).

For those concerned about sharing the same server, an alternative would be to launch a second dockerd only for the jobs (e.g. actions-dind) using the same approach (the volume with its socket would have to be shared with the runner service so it can be re-shared, but the runner does not need to use it).

The final docker-compose.yaml file is as follows:

services:
  dind:
    image: docker:dind
    container_name: 'dind'
    privileged: 'true'
    command: ['dockerd', '-H', 'unix:///dind/docker.sock', '-G', '$RUNNER_GID']
    restart: 'unless-stopped'
    volumes:
      - ./dind:/dind
  runner:
    image: 'data.forgejo.org/forgejo/runner:6.2.2'
    links:
      - dind
    depends_on:
      dind:
        condition: service_started
    container_name: 'runner'
    environment:
      DOCKER_HOST: 'unix:///dind/docker.sock'
    user: $RUNNER_UID:$RUNNER_GID
    volumes:
      - ./config.yaml:/config.yaml
      - ./data:/data
      - ./dind:/dind
    restart: 'unless-stopped'
    command: '/bin/sh -c "sleep 5; forgejo-runner daemon -c /config.yaml"'

There are multiple things to comment about this file:

  1. The dockerd server is started with the -H unix:///dind/docker.sock flag to use the unix socket to communicate with the daemon instead of using a TCP port (as said, it is faster and allows us to share the socket with the containers started by the runner).
  2. We are running the dockerd daemon with the RUNNER_GID group so the runner can communicate with it (the socket gets that group which is the same used by the runner).
  3. The runner container mounts three volumes: the data directory, the dind folder where docker creates the unix socket and a config.yaml file used by us to change the default runner configuration.

The config.yaml file was originally created using the forgejo-runner:

$ docker run --rm data.forgejo.org/forgejo/runner:6.2.2 \
    forgejo-runner generate-config > config.yaml

The changes to it are minimal: the runner capacity has been increased to 2 (allowing it to run two jobs at the same time), and the /dind/docker.sock value has been added to the valid_volumes key so the containers launched by the runner can mount it when needed; the diff against the default version is as follows:

@@ -13,7 +13,8 @@
   # Where to store the registration result.
   file: .runner
   # Execute how many tasks concurrently at the same time.
-  capacity: 1
+  # STO: Allow 2 concurrent tasks
+  capacity: 2
   # Extra environment variables to run jobs.
   envs:
     A_TEST_ENV_NAME_1: a_test_env_value_1
@@ -87,7 +88,9 @@
   # If you want to allow any volume, please use the following configuration:
   # valid_volumes:
   #   - '**'
-  valid_volumes: []
+  # STO: Allow to mount the /dind/docker.sock on the containers
+  valid_volumes:
+    - /dind/docker.sock
   # overrides the docker client host with the specified one.
   # If "-" or "", an available docker host will automatically be found.
   # If "automount", an available docker host will automatically be found and ...

To start the runner we export the RUNNER_UID and RUNNER_GID variables and call docker-compose up to start the containers on the background:

$ RUNNER_UID="$(id -u forgejo-runner)" RUNNER_GID="$(id -g forgejo-runner)" \
    docker compose up -d

If the server was configured right we are now able to start using actions with this runner.
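As a quick smoke test, a minimal workflow like the following sketch (file name and contents are illustrative) can be committed to any repository of the instance to verify that the runner picks up jobs:

# .forgejo/workflows/hello.yaml
name: hello
on:
  workflow_dispatch:
jobs:
  hello:
    runs-on: docker
    steps:
      - name: Check that the runner is alive
        run: echo "The runner works!"

After pushing it, the workflow can be launched from the repository's actions tab and its output inspected there.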

Preparing the system to run things locally

To avoid unnecessary network traffic we are going to create multiple organizations in our forgejo instance to maintain our own actions and container images and to mirror remote ones.

The rationale behind the mirrors is that they greatly reduce the need to connect to remote servers to download actions and images, which is good for both performance and security.

In fact, we are going to build our own images for some things to install the tools we want without needing to do it over and over again on the workflow jobs.

Mirrored actions

The actions we are mirroring are on the actions and docker organizations; we have created a few of them for now (the mirrors were created using the forgejo web interface, and we have manually disabled all the forgejo modules except the code one for them).

To use our actions by default (i.e., without needing to add the server URL on the uses keyword) we have added the following section to the app.ini file of our forgejo server:

[actions]
ENABLED = true
DEFAULT_ACTIONS_URL = https://forgejo.mixinet.net

Setting up credentials to push images

To be able to push images to the oci organization I’ve created a token with package:write permission for my own user because I’m a member of the organization and I’m authorized to publish packages on it (a different user could be created, but as I said this is for personal use, so there is no need to complicate things for now).

To allow the use of those credentials on the actions I have added a secret (REGISTRY_PASS) and a variable (REGISTRY_USER) to the oci organization to allow the actions to use them.

I’ve also logged my local docker client into the server to be able to push images to the oci group by hand, as that is needed for bootstrapping the system (as I’m using local images on the workflows I need to push them to the server before running the ones that are used to build the images).
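That manual login is just the standard docker login against the forgejo host, using the token as the password (a sketch; the user name is the account that owns the token):

$ docker login forgejo.mixinet.net
Username: my-user
Password: <token with package:write permission>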

Local and mirrored images

Our images will be stored in the packages section of a new organization called oci; inside it we have created two projects that use forgejo actions to keep things in shape:

  • images: contains the source files used to generate our own images and the actions to build, tag and push them to the oci organization group.
  • mirrors: contains a configuration file for the regsync tool to mirror containers and an action to run it.

In the next sections we are going to describe the actions and images we have created and mirrored from those projects.

The oci/images project

The images project is a monorepo that contains the source files for the images we are going to build and a couple of actions.

The image sources are in subdirectories of the repository; to be considered an image, a folder has to contain a Dockerfile that will be used to build the image.

The repository has two workflows:

  • build-image-from-tag: Workflow to build, tag and push an image to the oci organization
  • multi-semantic-release: Workflow to create tags for the images using the multi-semantic-release tool.

As the workflows are already configured to use some of our images we pushed some of them from a checkout of the repository using the following commands:

registry="forgejo.mixinet.net/oci"
for img in alpine-mixinet node-mixinet multi-semantic-release; do
  docker build -t $registry/$img:1.0.0 $img
  docker tag $registry/$img:1.0.0 $registry/$img:latest
  docker push $registry/$img:1.0.0
  docker push $registry/$img:latest
done

In the next subsections we will describe what the workflows do and show their source code.

build-image-from-tag workflow

This workflow uses a docker client to build an image from a tag on the repository with the format image-name-v[0-9].[0-9].[0-9]+.
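For example, to build a hypothetical version 1.0.1 of the alpine-mixinet image by hand, the tag would be created and pushed like this:

$ git tag alpine-mixinet-v1.0.1
$ git push origin alpine-mixinet-v1.0.1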

As the runner is executed inside a container (instead of using lxc) it seemed unreasonable to run another dind container from it; that is why, after some tests, I decided to share the dind service's server socket with the runner container and enabled the option to mount it on the containers launched by the runner when needed (for now I only do it on the build-image-from-tag action).

The action was configured to run on a manual trigger or when new tags with the right format were created, but when the tag is created by multi-semantic-release the tag trigger does not fire for some reason, so now the job runs only on manual triggers and checks on the job itself whether it was launched for a tag with the right format.

The source code of the action is as follows:

name: build-image-from-tag
on:
  workflow_dispatch:
jobs:
  build:
    # Don't build the image if the registry credentials are not set, the ref is not a tag or it doesn't contain '-v'
    if: ${{ vars.REGISTRY_USER != '' && secrets.REGISTRY_PASS != '' && startsWith(github.ref, 'refs/tags/') && contains(github.ref, '-v') }}
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/node-mixinet:latest
      # Mount the dind socket on the container at the default location
      options: -v /dind/docker.sock:/var/run/docker.sock
    steps:
      - name: Extract image name and tag from git and get registry name from env
        id: job_data
        run: |
          echo "::set-output name=img_name::${GITHUB_REF_NAME%%-v*}"
          echo "::set-output name=img_tag::${GITHUB_REF_NAME##*-v}"
          echo "::set-output name=registry::$(
            echo "${{ github.server_url }}" | sed -e 's%https://%%'
          )"
          echo "::set-output name=oci_registry_prefix::$(
            echo "${{ github.server_url }}/oci" | sed -e 's%https://%%'
          )"
      - name: Checkout the repo
        uses: actions/checkout@v4
      - name: Export build dir and Dockerfile
        id: build_data
        run: |
          img="${{ steps.job_data.outputs.img_name }}"
          build_dir="$(pwd)/${img}"
          dockerfile="${build_dir}/Dockerfile"
          if [ -f "$dockerfile" ]; then
            echo "::set-output name=build_dir::$build_dir"
            echo "::set-output name=dockerfile::$dockerfile"
          else
            echo "Couldn't find the Dockerfile for the '$img' image"
            exit 1
          fi
      - name: Login to the Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ steps.job_data.outputs.registry }}
          username: ${{ vars.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_PASS }}
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build and Push
        uses: docker/build-push-action@v6
        with:
          push: true
          tags: |
            ${{ steps.job_data.outputs.oci_registry_prefix }}/${{ steps.job_data.outputs.img_name }}:${{ steps.job_data.outputs.img_tag }}
            ${{ steps.job_data.outputs.oci_registry_prefix }}/${{ steps.job_data.outputs.img_name }}:latest
          context: ${{ steps.build_data.outputs.build_dir }}
          file: ${{ steps.build_data.outputs.dockerfile }}
          build-args: |
            OCI_REGISTRY_PREFIX=${{ steps.job_data.outputs.oci_registry_prefix }}/

Some notes about this code:

  1. The if condition of the build job is not perfect, but it is good enough to avoid wrong uses as long as nobody uses manual tags with the wrong format and expects things to work (it checks if the REGISTRY_USER and REGISTRY_PASS variables are set, if the ref is a tag and if it contains the -v string).
  2. To be able to access the dind socket we mount it on the container using the options key on the container section of the job (this only works if supported by the runner configuration as explained before).
  3. We use the job_data step to get information about the image from the tag and the registry URL from the environment variables; it is executed first because all that information is available without checking out the repository.
  4. We use the build_data step to get the build dir and Dockerfile paths from the repository (right now we are assuming fixed paths and checking that the Dockerfile exists, but in the future we could use a configuration file to get them, if needed).
  5. As we are using a docker daemon that is already running there is no need to use the docker/setup-docker-action to install it.
  6. On the build and push step we pass the OCI_REGISTRY_PREFIX build argument to the Dockerfile to be able to use it on the FROM instruction (we are using it in our images; a sketch follows this list).
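A hypothetical Dockerfile using that build argument could look like the following sketch (an ARG declared before FROM is available to the FROM instruction itself; note that the prefix value we pass already ends with a slash):

ARG OCI_REGISTRY_PREFIX
FROM ${OCI_REGISTRY_PREFIX}alpine-mixinet:latest
# Image-specific instructions follow; apk assumes an alpine-based parent image.
RUN apk add --no-cache curl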

multi-semantic-release workflow

This workflow is used to run the multi-semantic-release tool on pushes to the main branch.

It is configured to create the configuration files on the fly (it prepares things to tag the folders that contain a Dockerfile using a couple of template files available in the repository’s .forgejo directory) and to run the multi-semantic-release tool to create tags and push them to the repository if new versions are to be built.

Initially we assumed that the tag creation pushed by multi-semantic-release would be enough to run the build-image-from-tag action, but as it didn’t work we removed the rule to run the action on tag creation and added code to trigger the action using an API call for the newly created tags (we get them from the output of the multi-semantic-release execution).

The source code of the action is as follows:

name: multi-semantic-release
on:
  push:
    branches:
      - 'main'
jobs:
  multi-semantic-release:
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/multi-semantic-release:latest
    steps:
      - name: Checkout the repo
        uses: actions/checkout@v4
      - name: Generate multi-semantic-release configuration
        shell: sh
        run: |
          # Get the list of images to work with (the folders that have a Dockerfile)
          images="$(for img in */Dockerfile; do dirname "$img"; done)"
          # Generate a values.yaml file for the main packages.json file
          package_json_values_yaml=".package.json-values.yaml"
          echo "images:" >"$package_json_values_yaml"
          for img in $images; do
            echo " - $img" >>"$package_json_values_yaml"
          done
          echo "::group::Generated values.yaml for the project"
          cat "$package_json_values_yaml"
          echo "::endgroup::"
          # Generate the package.json file, validating that it is good json with jq
          tmpl -f "$package_json_values_yaml" ".forgejo/package.json.tmpl" | jq . > "package.json"
          echo "::group::Generated package.json for the project"
          cat "package.json"
          echo "::endgroup::"
          # Remove the temporary values file
          rm -f "$package_json_values_yaml"
          # Generate the package.json file for each image
          for img in $images; do
            tmpl -v "img_name=$img" -v "img_path=$img" ".forgejo/ws-package.json.tmpl" | jq . > "$img/package.json"
            echo "::group::Generated package.json for the '$img' image"
            cat "$img/package.json"
            echo "::endgroup::"
          done
      - name: Run multi-semantic-release
        shell: sh
        run: |
          multi-semantic-release | tee .multi-semantic-release.log
      - name: Trigger builds
        shell: sh
        run: |
          # Get the list of tags published on the previous steps
          tags="$(
            sed -n -e 's/^\[.*\] \[\(.*\)\] .* Published release \([0-9]\+\.[0-9]\+\.[0-9]\+\) on .*$/\1-v\2/p' \
              .multi-semantic-release.log
          )"
          rm -f .multi-semantic-release.log
          if [ "$tags" ]; then
            # Prepare the url for building the images
            workflow="build-image-from-tag.yaml"
            dispatch_url="${{ github.api_url }}/repos/${{ github.repository }}/actions/workflows/$workflow/dispatches"
            echo "$tags" | while read -r tag; do
              echo "Triggering build for tag '$tag'"
              curl \
                -H "Content-Type:application/json" \
                -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
                -d "{\"ref\":\"$tag\"}" "$dispatch_url"
            done
          fi

Notes about this code:

  1. The use of the tmpl tool to process the multi-semantic-release configuration templates comes from previous projects; in this case we could have used a different approach (e.g. envsubst), but we kept it because it keeps things simple and can be useful in the future if we want to do more complex things with the template files.
  2. We use tee to show and dump to a file the output of the multi-semantic-release execution.
  3. We get the list of pushed tags by running sed against the output of the multi-semantic-release execution, and for each one found we use curl to call the forgejo API to trigger the build job; as the call is against the same project we can use the GITHUB_TOKEN generated for the workflow, without creating a user token that has to be shared as a secret.

The .forgejo/package.json.tmpl file is the following one:

{
  "name": "multi-semantic-release",
  "version": "0.0.0-semantically-released",
  "private": true,
  "multi-release": {
    "tagFormat": "${name}-v${version}"
  },
  "workspaces": {{ .images | toJson }}
}

As can be seen, it only needs a list of paths to the images as its argument (the file we generate contains the names and paths, but it could be simplified).

And the .forgejo/ws-package.json.tmpl file is the following one:

{
  "name": "{{ .img_name }}",
  "license": "UNLICENSED",
  "release": {
    "plugins": [
      [
        "@semantic-release/commit-analyzer",
        {
          "preset": "conventionalcommits",
          "releaseRules": [
            { "breaking": true, "release": "major" },
            { "revert": true, "release": "patch" },
            { "type": "feat", "release": "minor" },
            { "type": "fix", "release": "patch" },
            { "type": "perf", "release": "patch" }
          ]
        }
      ],
      [
        "semantic-release-replace-plugin",
        {
          "replacements": [
            {
              "files": [ "{{ .img_path }}/msr.yaml" ],
              "from": "^version:.*$",
              "to": "version: ${nextRelease.version}",
              "allowEmptyPaths": true
            }
          ]
        }
      ],
      [
        "@semantic-release/git",
        {
          "assets": [ "msr.yaml" ],
          "message": "ci(release): {{ .img_name }}-v${nextRelease.version}\n\n${nextRelease.notes}"
        }
      ]
    ],
    "branches": [ "main" ]
  }
}

The oci/mirrors project

The repository contains a template for the configuration file we are going to use with regsync (regsync.envsubst.yml) to mirror images from remote registries, together with a workflow that generates the configuration file from the template and runs the tool.

The initial version of the regsync.envsubst.yml file is prepared to mirror alpine containers from version 3.21 to 3.29 (we explicitly remove version 3.20) and needs the forgejo.mixinet.net/oci/node-mixinet:latest image to run (as explained before it was pushed manually to the server):

version: 1
creds:
  - registry: "$REGISTRY"
    user: "$REGISTRY_USER"
    pass: "$REGISTRY_PASS"
sync:
  - source: alpine
    target: $REGISTRY/oci/alpine
    type: repository
    tags:
      allow:
        - "latest"
        - "3\\.2\\d+"
        - "3\\.2\\d+.\\d+"
      deny:
        - "3\\.20"
        - "3\\.20.\\d+"

mirror workflow

The mirror workflow generates a configuration file from the template, replacing the value of the REGISTRY environment variable (computed by removing the protocol from the server_url), the REGISTRY_USER organization variable and the REGISTRY_PASS secret using the envsubst command, and then runs the regsync tool to mirror the images using the generated file.

The action is configured to run daily and on push events when the regsync.envsubst.yml file is modified on the main branch, and it can also be triggered manually.

The source code of the action is as follows:

.forgejo/workflows/mirror.yaml
name: mirror
on:
  schedule:
    - cron: '@daily'
  push:
    branches:
      - main
    paths:
      - 'regsync.envsubst.yml'
  workflow_dispatch:
jobs:
  mirror:
    if: ${{ vars.REGISTRY_USER != '' && secrets.REGISTRY_PASS != '' }}
    runs-on: docker
    container:
      image: forgejo.mixinet.net/oci/node-mixinet:latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Sync images
        run: |
          REGISTRY="$(echo "${{ github.server_url }}" | sed -e 's%https://%%')" \
          REGISTRY_USER="${{ vars.REGISTRY_USER }}" \
          REGISTRY_PASS="${{ secrets.REGISTRY_PASS }}" \
            envsubst <regsync.envsubst.yml >.regsync.yml
          regsync --config .regsync.yml once
          rm -f .regsync.yml

Conclusion

We have installed a forgejo-runner and configured it to run actions for our own server, and things are working fine.

This approach allows us to have a powerful CI/CD system on a modest home server, something very useful for maintaining personal projects and playing with things without needing SaaS platforms like github or gitlab.

17 March, 2025 07:00PM


Dirk Eddelbuettel

RcppExamples 0.1.10: New factor Example, Other Updates

A new version 0.1.10 of the RcppExamples package is now on CRAN, and marks the first release in five and a half years.

RcppExamples provides a handful of short, concrete working examples showing how to set up basic R data structures in C++. It also provides a simple example for packaging with Rcpp. The examples are (generally fairly) simple; more (and generally longer) examples are at the Rcpp Gallery.

This release brings a bi-directional example of factor conversion, updates the Date example, removes the explicitly stated C++ compilation standard (which CRAN now nags about) and brings a number of small fixes and maintenance changes that accrued since the last release. The NEWS extract follows, with a sketch of the factor idea after it:

Changes in RcppExamples version 0.1.10 (2025-03-17)

  • Simplified DateExample by removing unused API code

  • Added a new FactorExample with conversion to and from character vectors

  • Updated and modernised continuous integrations multiple times

  • Updated a few documentation links

  • Updated build configuration

  • Updated README.md badges and URLs

  • No longer need to set a C++ compilation standard
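As a rough illustration of the factor idea (a sketch, not necessarily the package's actual FactorExample): an R factor is an integer vector of 1-based codes carrying a levels attribute, so converting one to a character vector in Rcpp looks roughly like this (NA handling omitted):

#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
CharacterVector factor_to_character(IntegerVector f) {
    CharacterVector levels = f.attr("levels");
    CharacterVector out(f.size());
    for (int i = 0; i < f.size(); i++)
        out[i] = levels[f[i] - 1];  // factor codes index into levels, 1-based
    return out;
}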

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

17 March, 2025 04:33PM

James Valleroy

What’s New for FreedomBox in Debian 13 “trixie”

FreedomBox is a Debian blend that makes it easier to run your own server. Approximately every two years, there is a new stable release of Debian. This year’s release will be called Debian 13 "trixie".

This post will provide an overview of changes between FreedomBox 23.6 (the version that shipped in Debian 12 "bookworm") and 25.5 (the latest release). Note: Debian 13 "trixie" is not yet released, so things may still change, be added or removed, before the official release.

General

  • A number of translations were updated, including Albanian, Arabic, Belarusian, Bulgarian, Chinese (Simplified Han script), Chinese (Traditional Han script), Czech, Dutch, French, German, Hindi, Japanese, Norwegian Bokmål, Polish, Portuguese, Russian, Spanish, Swedish, Telugu, Turkish, and Ukrainian.
  • Fix cases where a package or service is used by multiple apps, so that disabling or uninstalling one app does not affect the other app.
  • When uninstalling an app, purge the packages, to remove all data and configuration.
  • For configuration files that need to be placed into folders owned by other packages, we now install these files under /usr/share/freedombox/etc/, and create a symbolic link to the other package’s configuration folder. This prevents the files from being lost when other packages are purged (see the sketch after this list).
  • Add an action to re-run the setup process for an app. This can fix many of the possible issues that occur.
  • Various improvements related to the "force upgrade" feature, which handles upgrading packages with conffile prompts.
  • Fix install/uninstall issues for apps that use MySQL database (WordPress, Zoph).
  • Improve handling of file uploads (Backups, Feather Wiki, Kiwix).
  • Switch to Bootstrap 5 front-end framework.
  • Removed I2P app, since the i2p package was removed from Debian.
  • Various user interface changes, including:
    • Add tags for apps, replacing short descriptions. When a tag is clicked, search and filter for one or multiple tags.
    • Organize the System page into sections.
    • Add breadcrumbs for page hierarchy navigation.
    • Add next steps page after initial FreedomBox setup.
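As a sketch of the configuration-symlink approach mentioned above (paths and names are illustrative, not the actual FreedomBox code):

# Ship the file in a FreedomBox-owned location...
install -D -m 0644 freedombox.conf /usr/share/freedombox/etc/otherapp/freedombox.conf
# ...and link it into the other package's configuration directory, so that
# purging that package removes only the symlink, not the file itself.
ln -sfn /usr/share/freedombox/etc/otherapp/freedombox.conf /etc/otherapp/conf.d/freedombox.conf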

Diagnostics

  • Add diagnostic checks to detect common errors.
  • Add diagnostics daily run, with notifications about failures.
  • Add Repair action for failed diagnostics, and option for automatic repairs.

Name Services

  • Move hostname and domain name configuration to Names page.
  • Support multiple static and/or dynamic domains.
  • Use systemd-resolved for DNS resolution.
  • Add options for setting global DNS-over-TLS and DNSSEC preferences.

Networks

  • Add more options for IPv6 configuration method.
  • Overhaul Wi-Fi networks scan page.

Privacy

  • Add option to disable fallback DNS servers.
  • Add option to set the lookup URL to get the public IP address of the FreedomBox.

Users and Groups

  • Delete or move home folder when user is deleted or renamed.
  • When a user is inactivated, also inactivate the user in LDAP.

Deluge

  • This BitTorrent client app should be available once again in Debian 13 "trixie".

Ejabberd

  • Turn on Message Archive Management setting by default, to help various XMPP clients use it.

Feather Wiki

  • Add new app for note taking.
  • This app lives in a single HTML file, which is downloaded from the FreedomBox website.

GitWeb

  • Disable snapshot feature, due to high resource use.
  • Various fixes for repository operations.

GNOME

  • Add new app to provide a graphical desktop environment.
  • Requires a monitor, keyboard, and mouse to be physically connected to the FreedomBox.
  • Not suitable for low-end hardware.

ikiwiki

  • Disable discussion pages by default for new wiki/blog, to avoid spam.

Kiwix

  • Add new app for offline reader of Wikipedia and other sites.

Matrix Synapse

  • Add an option for token-based registration verification, so that users signing up for new accounts will need to provide a token during account registration.

MediaWiki

  • Allow setting the site language code.
  • Increase PHP maximum execution time to 100 seconds.

MiniDLNA

  • Add media directory selection form.

Miniflux

  • Add new app for reading news from RSS/ATOM feeds.

Nextcloud

  • Add new app for file sync and collaboration.
  • Uses a Docker container maintained by the Nextcloud community. The container is downloaded from the FreedomBox container registry.

OpenVPN

  • Renew server/client certificates, and set expiry to 10 years.

Postfix/Dovecot

  • Fix DKIM signing.
  • Show DNS entries for all domains.

Shadowsocks Server

  • Add new app for censorship resistance, separate from Shadowsocks Client app.

SOGo

  • Add new app for groupware (webmail, calendar, tasks, and contacts).
  • Works with Postfix/Dovecot email server app.

TiddlyWiki

  • Add new app for note taking.
  • This app lives in a single HTML file, which is downloaded from the FreedomBox website.

Tor Proxy

  • Add new app for Tor SOCKS proxy, separate from Tor app.

Transmission

  • Allow remote user interfaces to connect.

Conclusion

Over the past two years, FreedomBox has been increasing the number of features and applications available to its users. We have also focused on improving the reliability of the system, detecting unexpected situations, and providing means to return to a known good state. With these improvements, FreedomBox has become a good solution for people with limited time or energy to set up and start running a personal server, at home or in the cloud.

Looking forward, we would like to focus on making more powerful hardware available with FreedomBox pre-installed and ready to be used. This hardware would also support larger storage devices, making it suitable as a NAS or media server. We are also very interested in exploring new features such as atomic updates, which will further enhance the reliability of the system.

17 March, 2025 12:40PM by James Valleroy

Vincent Bernat

Offline PKI using 3 YubiKeys and an ARM single board computer

An offline PKI enhances security by physically isolating the certificate authority from network threats. A YubiKey is a low-cost solution to store a root certificate. You also need an air-gapped environment to operate the root CA.

Offline PKI relying on a set of 3 YubiKeys: 2 for the root CA and 1 for the intermediate CA

This post describes an offline PKI system using the following components:

  • 2 YubiKeys for the root CA (with a 20-year validity),
  • 1 YubiKey for the intermediate CA (with a 5-year validity), and
  • 1 Libre Computer Sweet Potato as an air-gapped SBC.

It is possible to add more YubiKeys as a backup of the root CA if needed. This is not needed for the intermediate CA as you can generate a new one if the current one gets destroyed.

The software part

offline-pki is a small Python application to manage an offline PKI. It relies on yubikey-manager to manage YubiKeys and on cryptography for the cryptographic operations not executed on the YubiKeys. The application makes some opinionated design choices; notably, it is hard-coded to use the NIST P-384 elliptic curve.

The first step is to reset all your YubiKeys:

$ offline-pki yubikey reset
This will reset the connected YubiKey. Are you sure? [y/N]: y
New PIN code:
Repeat for confirmation:
New PUK code:
Repeat for confirmation:
New management key ('.' to generate a random one):
WARNING[pki-yubikey] Using random management key: e8ffdce07a4e3bd5c0d803aa3948a9c36cfb86ed5a2d5cf533e97b088ae9e629
INFO[pki-yubikey]  0: Yubico YubiKey OTP+FIDO+CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[yubikit.management] Device config written
INFO[yubikit.piv] PIV application data reset performed
INFO[yubikit.piv] Management key set
INFO[yubikit.piv] New PUK set
INFO[yubikit.piv] New PIN set
INFO[pki-yubikey] YubiKey reset successful!

Then, generate the root CA and create as many copies as you want:

$ offline-pki certificate root
Management key for Root X:
Plug YubiKey "Root X"...
INFO[pki-yubikey]  0: Yubico YubiKey CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[yubikit.piv] Data written to object slot 0x5fc10a
INFO[yubikit.piv] Certificate written to slot 9C (SIGNATURE), compression=True
INFO[yubikit.piv] Private key imported in slot 9C (SIGNATURE) of type ECCP384
Copy root certificate to another YubiKey? [y/N]: y
Plug YubiKey "Root X"...
INFO[pki-yubikey]  0: Yubico YubiKey CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[yubikit.piv] Data written to object slot 0x5fc10a
INFO[yubikit.piv] Certificate written to slot 9C (SIGNATURE), compression=True
INFO[yubikit.piv] Private key imported in slot 9C (SIGNATURE) of type ECCP384
Copy root certificate to another YubiKey? [y/N]: n

You can inspect the result:

$ offline-pki yubikey info
INFO[pki-yubikey]  0: Yubico YubiKey CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[pki-yubikey] Slot 9C (SIGNATURE):
INFO[pki-yubikey]   Private key type: ECCP384
INFO[pki-yubikey]   Public key:
INFO[pki-yubikey]     Algorithm:  secp384r1
INFO[pki-yubikey]     Issuer:     CN=Root CA
INFO[pki-yubikey]     Subject:    CN=Root CA
INFO[pki-yubikey]     Serial:     1
INFO[pki-yubikey]     Not before: 2024-07-05T18:17:19+00:00
INFO[pki-yubikey]     Not after:  2044-06-30T18:17:19+00:00
INFO[pki-yubikey]     PEM:
-----BEGIN CERTIFICATE-----
MIIBcjCB+aADAgECAgEBMAoGCCqGSM49BAMDMBIxEDAOBgNVBAMMB1Jvb3QgQ0Ew
HhcNMjQwNzA1MTgxNzE5WhcNNDQwNjMwMTgxNzE5WjASMRAwDgYDVQQDDAdSb290
IENBMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAERg3Vir6cpEtB8Vgo5cAyBTkku/4w
kXvhWlYZysz7+YzTcxIInZV6mpw61o8W+XbxZV6H6+3YHsr/IeigkK04/HJPi6+i
zU5WJHeBJMqjj2No54Nsx6ep4OtNBMa/7T9foyMwITAPBgNVHRMBAf8EBTADAQH/
MA4GA1UdDwEB/wQEAwIBhjAKBggqhkjOPQQDAwNoADBlAjEAwYKy/L8leJyiZSnn
xrY8xv8wkB9HL2TEAI6fC7gNc2bsISKFwMkyAwg+mKFKN2w7AjBRCtZKg4DZ2iUo
6c0BTXC9a3/28V5aydZj6rvx0JqbF/Ln5+RQL6wFMLoPIvCIiCU=
-----END CERTIFICATE-----

Then, you can create an intermediate certificate with offline-pki yubikey intermediate and use it to sign any CSR with offline-pki certificate sign. Be careful and inspect the CSR before signing it, as only the subject name can be overridden. Check the documentation for more details. Get the available options using the --help flag.
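As a sketch of the signing round-trip (the openssl invocation is standard; passing the CSR path directly to offline-pki certificate sign is an assumption, so check the documentation for the exact interface):

$ openssl req -new -newkey ec -pkeyopt ec_paramgen_curve:secp384r1 \
    -nodes -keyout example.key -out example.csr -subj "/CN=example.com"
$ offline-pki certificate sign example.csr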

The hardware part

To ensure the operations on the root and intermediate CAs are air-gapped, a cost-efficient solution is to use an ARM64 single board computer. The Libre Computer Sweet Potato SBC is a more open alternative to the well-known Raspberry Pi.[1]

Libre Computer Sweet Potato SBC, powered by the Amlogic S905X SOC

I interact with it through a USB-to-TTL UART converter:

$ tio /dev/ttyUSB0
[16:40:44.546] tio v3.7
[16:40:44.546] Press ctrl-t q to quit
[16:40:44.555] Connected to /dev/ttyUSB0
GXL:BL1:9ac50e:bb16dc;FEAT:ADFC318C:0;POC:1;RCY:0;SPI:0;0.0;CHK:0;
TE: 36574

BL2 Built : 15:21:18, Aug 28 2019. gxl g1bf2b53 - luan.yuan@droid15-sz

set vcck to 1120 mv
set vddee to 1000 mv
Board ID = 4
CPU clk: 1200MHz
[…]

The Nix glue

To bring everything together, I am using Nix with a Flake providing:

  • a package for the offline-pki application, with shell completion,
  • a development shell, including an editable version of the offline-pki application,
  • a NixOS module to set up the offline PKI, resetting the system at each boot,
  • a QEMU image for testing, and
  • an SD card image to be used on the Sweet Potato or an ARM64 SBC.
# Execute the application locally
nix run github:vincentbernat/offline-pki -- --help
# Run the application inside a QEMU VM
nix run github:vincentbernat/offline-pki\#qemu
# Build a SD card for the Sweet Potato or for the Raspberry Pi
nix build --system aarch64-linux github:vincentbernat/offline-pki\#sdcard.potato
nix build --system aarch64-linux github:vincentbernat/offline-pki\#sdcard.generic
# Get a development shell with the application
nix develop github:vincentbernat/offline-pki

  1. The key for the root CA is not generated by the YubiKey. Using an air-gapped computer is all the more important. Put it in a safe with the YubiKeys when done! ↩︎

17 March, 2025 08:12AM by Vincent Bernat

March 16, 2025

Russell Coker

Article Recommendations via FOSS

Google tracking everything we read is bad, particularly since Google abandoned the “don’t be evil” plan and are presumably open to being somewhat evil.

The article recommendations on Chrome on Android are useful and I’d like to be able to get the same quality of recommendations without Google knowing about everything I read. Ideally without anything other than the device I use knowing what interests me.

An ML system to map between sources of news that are of interest should be easy to develop and run on end-user devices. The model could be published, and when given articles you like as input, it could output sites that contain other articles you might like. Then an agent on the end-user system could spider the sites in question and run a local model to determine which articles to present to the user.
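A minimal sketch of the local-scoring idea (illustrative only; a real system would ship a published model rather than plain TF-IDF):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

def recommend(liked_texts, candidate_texts, top_n=5):
    # Build a user profile from liked articles, score candidates against it.
    vec = TfidfVectorizer(stop_words="english")
    matrix = vec.fit_transform(liked_texts + candidate_texts)
    profile = np.asarray(matrix[:len(liked_texts)].mean(axis=0))
    scores = cosine_similarity(profile, matrix[len(liked_texts):])[0]
    ranked = sorted(zip(scores, candidate_texts), reverse=True)
    return [text for _, text in ranked[:top_n]]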

Mapping for hate-following is possible for such a system (Google doesn’t do that); the user could have two separate model runs, one for regular reading and one for hate-following, and determine how much of each type of content to recommend. It could also give negative weight to entries that match the hate criteria.

Some sites with articles (like Medium) give an estimate of reading time. An article recommendation system should have a fixed limit (both in number of articles and in reading time) to support the “I spend half an hour reading during lunch” model rather than doom-scrolling.

For getting news using only FOSS it seems that the best option at the moment is to use the Lemmy FOSS social network, which is like Reddit [1], to recommend articles etc.

The Lemoa client for Lemmy uses GTK [2] but it’s no longer maintained. The Lemonade client for Lemmy is written in Rust [3]. It would be good if one of those was packaged for Debian, preferably one that’s maintained.

16 March, 2025 04:19AM by etbe

March 15, 2025


Bits from Debian

Debian Med Sprint in Berlin

Debian Med sprint in Berlin on 15 and 16 February

The Debian Med team works on software packages that are associated with medicine, pre-clinical research, and life sciences, and makes them available for the Debian distribution. Seven Debian developers and contributors to the team gathered for their annual Sprint, in Berlin, Germany on 15 and 16 February 2025. The purpose of the meeting was to tackle bugs in Debian-Med packages, enhance the quality of the team's packages, and coordinate the efforts of team members overall.

This sprint allowed participants to fix dozens of bugs, including release-critical ones. New upstream versions were uploaded, and the participants took some time to modernize some packages. Additionally, they discussed the long-term goals of the team, prepared a forthcoming invited talk for a conference, and enjoyed working together.

More details on the event and individual agendas/reports can be found at https://wiki.debian.org/Sprints/2025/DebianMed.

15 March, 2025 11:00PM by Pierre Gruet, Jean-Pierre Giraud, Joost van Baal-Ilić

Scarlett Gately Moore

KDE snaps fixed, Thank you for your support

KDE Mascot

Thank you everyone for keeping the lights on for a bit longer. KDE snaps have been restored. I also released 24.12.3! In addition, I have moved “most” snaps to core24. The remaining snaps need newer qt6/kf6, which is a WIP. “The Bad luck girl” has been hit once again with another loss, so with that, I will be reducing my hours on snaps while I consider my options for my future. I am still around, just a bit less.

Thanks again everyone, if you can get me through one more ( lingering broken arm ) surgery I would be forever grateful! https://gofund.me/d5d59582

15 March, 2025 12:52PM by sgmoore

March 14, 2025

Ravi Dwivedi

Libreoffice Conference 2024 in Luxembourg

Last year, I attended the annual LibreOffice Conference in Luxembourg with the help of a generous travel grant by The Document Foundation (TDF). It was a three-day event from the 10th to the 12th of October 2024, with an additional day for community meetup on the 9th.

Luxembourg is a small country in Western Europe (only twice as big as Delhi). After going through an arduous visa process, I reached Luxembourg on the 8th of October. Upon arriving, I took a bus to the city center, where my hotel — Park Inn — was located. All the public transport in Luxembourg was free of cost. It was as if I had stepped into another world. There were separate tracks for cycling and a separate lane for buses, along with good pedestrian infrastructure. In addition, the streets were pretty neat and clean.

Luxembourg's Findel Airport

Separate cycling tracks in Luxembourg

My hotel was 20 km from the conference venue in Belval. However, the commute was convenient thanks to a free train connection; the trains were comfortable, smooth, and scenic, covering the distance in half an hour. The hotel included a breakfast buffet, which recharged us before the conference.

This is what trains look like in Luxembourg

Pre-conference, the 9th of October was reserved for the community meetup. On that day, the community members introduced themselves and their contributions to the LibreOffice project, and it acted as a brainstorming session. I got a lovely conference bag, which contained a T-shirt, a pen and a few stickers. I also met my long-time collaborators Mike, Sophie and Italo from the TDF, with whom I had interacted only remotely till then. Likewise, I met TDF’s sysadmin Guilhem, with whom I had interacted before regarding setting up my LibreOffice mirror.

Conference bag

The conference started on the 10th. There were 5 attendees from India, including me, while most of the attendees were from Europe. The talks were in English. One of the talks that stood out for me was about Luxchat — a chat service run by the Luxembourg government based on the Matrix protocol for the citizens of Luxembourg. I also liked Italo’s talk on why document formats must be freedom-respecting. On the first night, the conference took us to a nice dinner in a restaurant. It offered one more way to socialize with other attendees and explore food at the same time.

One of the slides of Italo's talk

Picture of the hall in which talks were held

On the 11th of October, I went for a walk in the morning with Biswadeep for some sightseeing around our hotel area. As a consequence, I missed the group photo of the conference, which I wanted to be in. Anyway, we enjoyed roaming around the picturesque Luxembourg city. We also sampled a tram ride to return to our hotel.

We encountered such scenic views during our walk

Another view of Luxembourg city area

The conference ended on the 12th with a couple of talks. It gave me an opportunity to meet the global LibreOffice community, connect, and share ideas. It also gave me a peek into the country of Luxembourg and its people, and I had a good experience there. English was widely known, and I had no issues getting by.

Thanks to all the organizers and sponsors of the conference!

14 March, 2025 04:18PM

Dima Kogan

Getting precise timings out of RS-232 output

For uninteresting reasons I need very regular 58Hz pulses coming out of an RS-232 Tx line: the time between each pulse should be as close to 1/58s as possible. I produce each pulse by writing an \xFF byte to the device. The start bit is the only active-voltage bit being sent, and that produces my pulse. I wrote this obvious C program:

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <fcntl.h>
#include <termios.h>
#include <stdint.h>
#include <sys/time.h>

static uint64_t gettimeofday_uint64()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (uint64_t) tv.tv_sec * 1000000ULL + (uint64_t) tv.tv_usec;
}

int main(int argc, char* argv[])
{
    // open the serial device, and make it as raw as possible
    const char* device = "/dev/ttyS0";
    const speed_t baud = B9600;

    int fd = open(device, O_WRONLY|O_NOCTTY);
    tcflush(fd, TCIOFLUSH);

    struct termios options = {.c_iflag = IGNBRK,
                              .c_cflag = CS8 | CREAD | CLOCAL};
    cfsetspeed(&options, baud);
    tcsetattr(fd, TCSANOW, &options);

    const uint64_t T_us = (uint64_t)(1e6 / 58.);

    const uint64_t t0 = gettimeofday_uint64();
    for(int i=0; ; i++)
    {
        const uint64_t t_target = t0 + T_us*i;
        const uint64_t t1       = gettimeofday_uint64();

        if(t_target > t1)
            usleep(t_target - t1);

        write(fd, &((char){'\xff'}), 1);
    }
    return 0;
}

This tries to make sure that each write() call happens at 58Hz. I need these pulses to be regular, so I need to also make sure that the time between each userspace write() and when the edge actually hits the line is as short as possible or, at least, stable.

Potential reasons for timing errors:

  1. The usleep() doesn't wake up exactly when it should. This is subject to the Linux scheduler waking up the trigger process
  2. The write() almost certainly ends up scheduling a helper task to actually write the \xFF to the hardware. This helper task is also subject to the Linux scheduler waking it up.
  3. Whatever the hardware does. RS-232 doesn't give you any guarantees about byte-to-byte timings, so this could be an unfixable source of errors

The scheduler-related questions are observable without any extra hardware, so let's do that first.

I run the ./trigger program, and look at diagnostics while that's running.

I look at some device details:

# ls -lh /dev/ttyS0
crw-rw---- 1 root dialout 4, 64 Mar  6 18:11 /dev/ttyS0

# ls -lh /sys/dev/char/4:64/
total 0
-r--r--r-- 1 root root 4.0K Mar  6 16:51 close_delay
-r--r--r-- 1 root root 4.0K Mar  6 16:51 closing_wait
-rw-r--r-- 1 root root 4.0K Mar  6 16:51 console
-r--r--r-- 1 root root 4.0K Mar  6 16:51 custom_divisor
-r--r--r-- 1 root root 4.0K Mar  6 16:51 dev
lrwxrwxrwx 1 root root    0 Mar  6 16:51 device -> ../../../0000:00:16.3:0.0
-r--r--r-- 1 root root 4.0K Mar  6 16:51 flags
-r--r--r-- 1 root root 4.0K Mar  6 16:51 iomem_base
-r--r--r-- 1 root root 4.0K Mar  6 16:51 iomem_reg_shift
-r--r--r-- 1 root root 4.0K Mar  6 16:51 io_type
-r--r--r-- 1 root root 4.0K Mar  6 16:51 irq
-r--r--r-- 1 root root 4.0K Mar  6 16:51 line
-r--r--r-- 1 root root 4.0K Mar  6 16:51 port
drwxr-xr-x 2 root root    0 Mar  6 16:51 power
-rw-r--r-- 1 root root 4.0K Mar  6 16:51 rx_trig_bytes
lrwxrwxrwx 1 root root    0 Mar  6 16:51 subsystem -> ../../../../../../../class/tty
-r--r--r-- 1 root root 4.0K Mar  6 16:51 type
-r--r--r-- 1 root root 4.0K Mar  6 16:51 uartclk
-rw-r--r-- 1 root root 4.0K Mar  6 16:51 uevent
-r--r--r-- 1 root root 4.0K Mar  6 16:51 xmit_fifo_size

Unsurprisingly, this is a part of the tty subsystem. I don't want to spend the time to really figure out how this works, so let me look at all the tty kernel calls and also at all the kernel tasks scheduled by the trigger process, since I suspect that the actual hardware poke is happening in a helper task. I see this:

# bpftrace -e 'k:*tty* /comm=="trigger"/
               { printf("%d %d %s\n",pid,tid,probe); }
               t:sched:sched_wakeup /comm=="trigger"/
               { printf("switching to %s(%d); current backtrace:", args.comm, args.pid); print(kstack());  }'

...

3397345 3397345 kprobe:tty_ioctl
3397345 3397345 kprobe:tty_check_change
3397345 3397345 kprobe:__tty_check_change
3397345 3397345 kprobe:tty_wait_until_sent
3397345 3397345 kprobe:tty_write
3397345 3397345 kprobe:file_tty_write.isra.0
3397345 3397345 kprobe:tty_ldisc_ref_wait
3397345 3397345 kprobe:n_tty_write
3397345 3397345 kprobe:tty_hung_up_p
switching to kworker/0:1(3400169); current backtrace:
        ttwu_do_activate+268
        ttwu_do_activate+268
        try_to_wake_up+605
        kick_pool+92
        __queue_work.part.0+582
        queue_work_on+101
        rpm_resume+1398
        __pm_runtime_resume+75
        __uart_start+85
        uart_write+150
        n_tty_write+1012
        file_tty_write.isra.0+373
        vfs_write+656
        ksys_write+109
        do_syscall_64+130
        entry_SYSCALL_64_after_hwframe+118

3397345 3397345 kprobe:tty_update_time
3397345 3397345 kprobe:tty_ldisc_deref

... repeated with each pulse ...

Looking at the sources I see that uart_write() calls __uart_start(), which schedules a task to call serial_port_runtime_resume() which eventually calls serial8250_tx_chars(), which calls some low-level functions to actually send the bits.

I look at the time between two of those calls to quantify the scheduler latency:

pulserate=58

sudo zsh -c \
  '( echo "# dt_write_ns dt_task_latency_ns";
     bpftrace -q -e "k:vfs_write /comm==\"trigger\" && arg2==1/
                     {\$t=nsecs(); if(@t0) { @dt_write = \$t-@t0; } @t0=\$t;}
                     k:serial8250_tx_chars /@dt_write/
                     {\$t=nsecs(); printf(\"%d %d\\n\", @dt_write, \$t-@t0);}"
   )' \
| vnl-filter                  \
    --stream -p dt_write_ms="dt_write_ns/1e6 - 1e3/$pulserate",dt_task_latency_ms=dt_task_latency_ns/1e6 \
| feedgnuplot  \
    --stream   \
    --lines    \
    --points   \
    --xlen 200 \
    --vnl      \
    --autolegend \
    --xlabel 'Pulse index' \
    --ylabel 'Latency (ms)'

Here I'm making a realtime plot showing

  • The offset from 58Hz of when each write() call happens. This shows effect #1 from above: how promptly the trigger process wakes up
  • The latency of the helper task. This shows effect #2 above.

The raw data as I tweak things lives here. Initially I see big latency spikes:

timings.scheduler.1.noise.svg

These can be fixed by adjusting the priority of the trigger task. This tells the scheduler to wake that task up first, even if something else is currently using the CPU. I do this:

sudo chrt -p 90 `pidof trigger`

And I get better-looking latencies:

timings.scheduler.2.clean.svg

During some experiments (not in this dataset) I would see high helper-task timing instabilities as well. These could be fixed by prioritizing the helper task. In this kernel (6.12) the helper task is called kworker/N where N is the CPU index. I tie the trigger process to CPU 0, and prioritize all the relevant helpers:

taskset -c 0 ./trigger 58

pgrep -f kworker/0 | while { read pid } { sudo chrt -p 90 $pid }

This fixes the helper-task latency spikes.
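
As a sanity check (a sketch; when chrt is given -p and a pid but no priority, it prints the current policy instead of setting it), one can confirm the new priorities took effect:

sudo chrt -p `pidof trigger`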

OK, so it looks like on the software side we're good to within 0.1ms of the true period. This is in the ballpark of the precision I need; even this might be too high. It's possible to try to push the software to do better: one could look at the kernel sources a bit more, to do smarter things with priorities or to try an -rt kernel. But all this doesn't matter if the serial hardware adds unacceptable delays. Let's look.

I use a Saleae logic analyzer with sigrok. The tool spits out the samples as it gets them, and an awk script finds the edges and reports the timings to give me a realtime plot.

samplerate=500000;
pulserate=58.;
sigrok-cli -c samplerate=$samplerate -O csv --continuous -C D1 \
| mawk -Winteractive  \
    "prev_logic==0 && \$0==1 \
     { 
       iedge = NR;
       if(prev_iedge)
       {
         di = iedge -prev_iedge;
         dt = di/$samplerate;
         print(dt*1000);
       }
       prev_iedge = iedge;
     }
     {
       prev_logic=\$0;
     } " | feedgnuplot --stream --ylabel 'Period (ms)' --equation "1000./$pulserate title \"True ${pulserate}Hz period\""

On the server I was using (physical RS-232 port, ancient 3.something kernel):

timings.hw.serial-server.svg

OK… This is very discrete for some reason, and generally worse than 0.1ms. What about my laptop (physical RS-232 port, recent 6.12 kernel)?

timings.hw.serial-laptop.svg

Not discrete anymore, but not really any more precise. What about using a usb-serial converter? I expect this to be worse.

timings.hw.usbserial.svg

Yeah, looks worse. For my purposes, an accuracy of 0.1ms is marginal, and the hardware adds non-negligible errors. So I cut my losses, and use an external signal generator:

timings.hw.generator.svg

Yeah. That's better, so that's what I use.

14 March, 2025 12:47PM by Dima Kogan

Junichi Uekawa

Filing tax this year was really painful.

Filing tax this year was really painful. But mostly because of my home network: IPv4 over IPv6 was not working correctly. First I swapped the router, which was trying to reinitialize the MAP-E table every time there was a DHCP client reconfiguration and overwhelming the server. Then I changed the DNS configuration to not use IPv4 UDP lookups, which were overwhelming the IPv4 ports. The tax return itself is a painful process; debugging network issues just made everything more painful.

14 March, 2025 01:27AM by Junichi Uekawa

March 11, 2025

Freexian Collaborators

Debian Contributions: Debian.Social administration, DebConf 25 preparations, Fixing Time-based test failure in Python requests package and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-02

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Debian.Social administration, by Stefano Rivera

Over the last year, the Debian.social services outgrew the infrastructure that was supporting them. The matrix bridge in particular was hosted on a cloud instance backed by a large expensive storage volume. Debian.CH rented a new large physical server to host all these instances, earlier this year. Stefano set up Incus on the new physical machine and migrated all the old debian.social LXC Containers, libvirt VMs, and cloud instances into Incus-managed LXC containers.

Stefano set up Prometheus monitoring and alerts for the new infrastructure and a Grafana dashboard. The current stack of debian.social services seems to comfortably fit on the new machine, with good room to grow.

DebConf 25, by Santiago Ruano Rincón and Stefano Rivera

DebConf 25 preparations continue. The team is currently finalizing a budget. Stefano helped to review the current budget proposals and suggest approaches for balancing it.

Stefano installed a Zammad instance to organize queries from attendees, for the registration and visa teams.

Santiago continued discussions with possible caterers so we can have options for the different dietary requirements that could also fit into the DebConf budget. Also, in collaboration with Anupa, Santiago pushed the first draft changes to document the venue information on the DebConf 25 website and how to get to Brest.

Time-based test failure in requests, by Colin Watson

Colin fixed a fun bug in the Python requests package. Santiago Vila has been running tests of what happens when Debian packages are built on a system in which time has been artificially set to somewhere around the end of the support period for the next Debian release, in order to make it easier to do things like issuing security updates for the lifetime of that release. In this case, the failure indicated an expired test certificate, and since the repository already helpfully included scripts to regenerate those certificates, it seemed natural to try regenerating them just before running tests. However, this failed for more obscure reasons and Colin spent some time investigating. This turned out to be because the test CA was missing the CA constraint and so recent versions of OpenSSL reject it; Colin sent a pull request to fix this.

Priority list for outdated packages, by Santiago Ruano Rincón

Santiago started a discussion on debian-devel about packages that have a history of security issues and that are outdated regarding new upstream releases. The goal of the mentioned effort is to have a prioritized list of packages needing some work, from a security point of view. Moreover, the aim of publicly sharing the list of packages with the Debian Developers community is to make it easier to look at the packages maintained by teams, or even by other maintainers, where help could be welcome. Santiago is planning to take into account the feedback provided in debian-devel and to propose tooling that could help to regularly bring collective awareness of these packages.

Miscellaneous contributions

  • Carles worked on English to Catalan po-debconf translations: reviewed translations, created merge requests and followed up with developers for more than 30 packages using po-debconf-manager.
  • Carles helped users, fixed bugs and implemented downloading updated templates on po-debconf-manager.
  • Carles packaged a new upstream version of python-pyaarlo.
  • Carles improved reproducibility of qnetload (now reported as reproducible) and simplemonitor (followed up with upstream and pending update of Debian package).
  • Carles collaborated with debian-history package: fixed FTBFS from master branch, enabled salsa-ci and investigated reproducibility.
  • Emilio improved support for automatically marking CVEs as NOT-FOR-US in the security-tracker, closing #1073012.
  • Emilio updated xorg-server and xwayland in unstable, fixing the last round of security vulnerabilities.
  • Stefano prepared a few PyPy and cPython uploads, and started the python3.13-only transition.
  • Helmut Grohne sent patches for 24 cross build failures.
  • Helmut fixed two problems in the Debian /usr-merge analysis tool. In one instance, it would overmatch Debian bugs to issues and in another it would fail to recognize Pre-Depends as a conflict mechanism.
  • Helmut attempted making rebootstrap work for gcc-15 with limited success as very many packages FTBFS with gcc-15 due to using function declarations without arguments.
  • Helmut provided a change to the security-tracker that would pre-compute /data/json during database updates rather than on demand resulting in a reduced response time.
  • Colin uploaded OpenSSH security updates for testing/unstable, bookworm, bullseye, buster, and stretch.
  • Colin fixed upstream monitoring for 26 Python packages, and upgraded 54 packages (mostly Python-related, but also PuTTY) to new upstream versions.
  • Colin updated python-django in bookworm-backports to 4.2.18 (issuing BSA-121), and added new backports of python-django-dynamic-fixture and python-django-pgtrigger, all of which are dependencies of debusine.
  • Thorsten Alteholz finally managed to upload hplip to fix two release-critical and some normal bugs. The next step in March would be to upload the latest version of hplip.
  • Faidon updated crun in unstable & trixie, resolving a long-standing request of enabling criu support and thus enabling podman with checkpoint/restore functionality (With gratitude to Salvatore Bonaccorso and Reinhard Tartler for the cooperation and collaboration).
  • Faidon uploaded a number of packages (librdkafka, libmaxminddb, python-maxminddb, lowdown, tox, tox-uv, pyproject-api, xiccd and gdnsd) bringing them up to date with new upstream releases, resolving various bugs.
  • Lucas Kanashiro uploaded some ruby packages involved in the Rails 7 transition with new upstream releases.
  • Lucas triaged a ruby3.1 bug (#1092595) and prepared a fix for the next stable release update.
  • Lucas set up the needed wiki pages and updated the Debian Project status in the Outreachy portal, in order to send out a call for projects and mentors for the next round of Outreachy.
  • Anupa joined Santiago to prepare a list of companies to contact via LinkedIn for DebConf 25 sponsorship.
  • Anupa printed Debian stickers and sponsorship brochures, flyers for DebConf 25 to be distributed at FOSS ASIA summit 2025.
  • Anupa participated in the Debian publicity team meeting and discussed the upcoming events and tasks.
  • Raphaël packaged zim 0.76.1 and integrated an upstream patch for another regression that he reported.
  • Raphaël worked with the Debian System Administrators for tracker.debian.org to better cope with gmail’s requirement for mails to be authenticated.

11 March, 2025 12:00AM by Anupa Ann Joseph

March 10, 2025

Joachim Breitner

Extrinsic termination proofs for well-founded recursion in Lean

A few months ago I explained that one reason why this blog has become more quiet is that all my work on Lean is covered elsewhere.

This post is an exception, because it is an observation that is (arguably) interesting, but does not lead anywhere, so where else to put it than my own blog…

Want to share your thoughts about this? Please join the discussion on the Lean community zulip!

Background

When defining a function recursively in Lean that has nested recursion, e.g. a recursive call in the argument to a higher-order function like List.map, then extra attention used to be necessary so that Lean can see that xs.map applies its argument only to elements of the list xs. The usual idiom is to write xs.attach.map instead, where List.attach attaches to the list elements a proof that they are in that list. You can read more about this in my Lean blog post on recursive definitions and in our new shiny reference manual; look for Example “Nested Recursion in Higher-order Functions”.
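
As a concrete illustration of the idiom, here is a small nested-recursive function (a sketch; exact elaboration details vary between Lean versions). With a plain cs.map the termination argument is invisible to Lean, while cs.attach.map hands each recursive call a membership proof that the termination checker can use:

structure T where
  cs : List T

-- With t.cs.map (fun c => c.size), Lean cannot see that c comes from t.cs.
def T.size (t : T) : Nat :=
  1 + (t.cs.attach.map (fun ⟨c, _h⟩ => c.size)).sum
  -- here _h : c ∈ t.cs is what justifies sizeOf c < sizeOf t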

To make this step less tedious I taught Lean to automatically rewrite xs.map to xs.attach.map (where suitable) within the construction of well-founded recursion, so that nested recursion just works (issue #5471). We already do such a rewriting to change if c then … else … to the dependent if h : c then … else …, but the attach-introduction is much more ambitious (the rewrites are not definitionally equal, there are higher-order arguments etc.) Rewriting the terms in a way that we can still prove the connection later when creating the equational lemmas is hairy at best. Also, we want the whole machinery to be extensible by the user, setting up their own higher order functions to add more facts to the context of the termination proof.

I implemented it like this (PR #6744) and it ships with 4.18.0, but in the course of this work I thought about a quite different and maybe better™ way to do this, and well-founded recursion in general:

A simpler fix

Recall that to use WellFounded.fix

WellFounded.fix : (hwf : WellFounded r) (F : (x : α) → ((y : α) → r y x → C y) → C x) (x : α) : C x

we have to rewrite the functorial of the recursive function, which naturally has type

F : ((y : α) →  C y) → ((x : α) → C x)

to the one above, where all recursive calls take the termination proof r y x. This is a fairly hairy operation, mangling the type of matcher’s motives and whatnot.

Things are simpler for recursive definitions using the new partial_fixpoint machinery, where we use Lean.Order.fix

Lean.Order.fix : [CCPO α] (F : β → β) (hmono : monotone F) : β

so the functorial’s type is unmodified (here β will be ((x : α) → C x)), and everything else is in the propositional side-condition monotone F. For this predicate we have a syntax-guided compositional tactic, and it’s easily extensible, e.g. by

theorem monotone_mapM (f : γ → α → m β) (xs : List α) (hmono : monotone f) :
    monotone (fun x => xs.mapM (f x)) 

Once given, we don’t care about the content of that proof. In particular proving the unfolding theorem only deals with the unmodified F that closely matches the function definition as written by the user. Much simpler!

Isabelle has it easier

Isabelle also supports well-founded recursion, and has great support for nested recursion. And it’s much simpler!

There, all you have to do to make nested recursion work is to define a congruence lemma of the form, for List.map something like our List.map_congr_left

List.map_congr_left : (h : ∀ a ∈ l, f a = g a) :
    List.map f l = List.map g l

This is because in Isabelle, too, the termination proof is a side-condition that essentially states “the functorial F calls its argument f only on smaller arguments”.

Can we have it easy, too?

I had wished we could do the same in Lean for a while, but that form of congruence lemma just isn’t strong enough for us.

But maybe there is a way to do it, using an existential to give a witness that F can alternatively be implemented using the more restrictive argument. The following callsOn P F predicate can express that F calls its higher-order argument only on arguments that satisfy the predicate P:

section setup

variable {α : Sort u}
variable {β : α → Sort v}
variable {γ : Sort w}

def callsOn (P : α → Prop) (F : (∀ y, β y) → γ) :=
  ∃ (F': (∀ y, P y → β y) → γ), ∀ f, F' (fun y _ => f y) = F f

variable (R : α → α → Prop)
variable (F : (∀ y, β y) → (∀ x, β x))

local infix:50 " ≺ " => R

def recursesVia : Prop := ∀ x, callsOn (· ≺ x) (fun f => F f x)

noncomputable def fix (wf : WellFounded R) (h : recursesVia R F) : (∀ x, β x) :=
  wf.fix (fun x => (h x).choose)

def fix_eq (wf : WellFounded R) h x :
    fix R F wf h x = F (fix R F wf h) x := by
  unfold fix
  rw [wf.fix_eq]
  apply (h x).choose_spec

This allows nice compositional lemmas to discharge callsOn predicates:

theorem callsOn_base (y : α) (hy : P y) :
    callsOn P (fun (f : ∀ x, β x) => f y) := by
  exists fun f => f y hy
  intros; rfl

@[simp]
theorem callsOn_const (x : γ) :
    callsOn P (fun (_ : ∀ x, β x) => x) :=
  ⟨fun _ => x, fun _ => rfl⟩

theorem callsOn_app
    {γ₁ : Sort uu} {γ₂ : Sort ww}
    (F₁ :  (∀ y, β y) → γ₂ → γ₁) -- can this also support dependent types?
    (F₂ :  (∀ y, β y) → γ₂)
    (h₁ : callsOn P F₁)
    (h₂ : callsOn P F₂) :
    callsOn P (fun f => F₁ f (F₂ f)) := by
  obtain ⟨F₁', h₁⟩ := h₁
  obtain ⟨F₂', h₂⟩ := h₂
  exists (fun f => F₁' f (F₂' f))
  intros; simp_all

theorem callsOn_lam
    {γ₁ : Sort uu}
    (F : γ₁ → (∀ y, β y) → γ) -- can this also support dependent types?
    (h : ∀ x, callsOn P (F x)) :
    callsOn P (fun f x => F x f) := by
  exists (fun f x => (h x).choose f)
  intro f
  ext x
  apply (h x).choose_spec

theorem callsOn_app2
    {γ₁ : Sort uu} {γ₂ : Sort ww}
    (g : γ₁ → γ₂ → γ)
    (F₁ :  (∀ y, β y) → γ₁) -- can this also support dependent types?
    (F₂ :  (∀ y, β y) → γ₂)
    (h₁ : callsOn P F₁)
    (h₂ : callsOn P F₂) :
    callsOn P (fun f => g (F₁ f) (F₂ f)) := by
  apply_rules [callsOn_app, callsOn_const]

With this setup, we can have the following, possibly user-defined, lemma expressing that List.map calls its arguments only on elements of the list:

theorem callsOn_map (δ : Type uu) (γ : Type ww)
    (P : α → Prop) (F : (∀ y, β y) → δ → γ) (xs : List δ)
    (h : ∀ x, x ∈ xs → callsOn P (fun f => F f x)) :
    callsOn P (fun f => xs.map (fun x => F f x)) := by
  suffices callsOn P (fun f => xs.attach.map (fun ⟨x, h⟩ => F f x)) by
    simpa
  apply callsOn_app
  · apply callsOn_app
    · apply callsOn_const
    · apply callsOn_lam
      intro ⟨x', hx'⟩
      dsimp
      exact (h x' hx')
  · apply callsOn_const

end setup

So here is the (manual) construction of a nested map for trees:

section examples

structure Tree (α : Type u) where
  val : α
  cs : List (Tree α)

-- essentially
-- def Tree.map (f : α → β) : Tree α → Tree β :=
--   fun t => ⟨f t.val, t.cs.map Tree.map⟩)
noncomputable def Tree.map (f : α → β) : Tree α → Tree β :=
  fix (sizeOf · < sizeOf ·) (fun map t => ⟨f t.val, t.cs.map map⟩)
    (InvImage.wf (sizeOf ·) WellFoundedRelation.wf) <| by
  intro ⟨v, cs⟩
  dsimp only
  apply callsOn_app2
  · apply callsOn_const
  · apply callsOn_map
    intro t' ht'
    apply callsOn_base
    -- ht' : t' ∈ cs -- !
    -- ⊢ sizeOf t' < sizeOf { val := v, cs := cs }
    decreasing_trivial

end examples

This makes me happy!

All details of the construction are now contained in a proof that can proceed by a syntax-driven tactic and that’s easily and (likely robustly) extensible by the user. It also means that we can share a lot of code paths (e.g. everything related to equational theorems) between well-founded recursion and partial_fixpoint.

I wonder if this construction is really as powerful as our current one, or if there are certain (likely dependently typed) functions where this doesn’t fit, but the β above is dependent, so it looks good.

With this construction, functions defined by well-founded recursion will reduce even worse in the kernel, I assume. This may be a good thing.

The cake is a lie

What unfortunately kills this idea, though, is the generation of the functional induction principles, which I believe is not (easily) possible with this construction: The functional induction principle is proved by massaging F to return a proof, but since the extra assumptions (e.g. for ite or List.map) only exist in the termination proof, they are not available in F.

Oh wey, how anticlimactic.

PS: Path dependencies

Curiously, if we didn’t have functional induction at this point yet, then very likely I’d change Lean to use this construction, and then we’d either not get functional induction, or it would be implemented very differently, maybe a more syntactic approach that would re-prove termination. I guess that’s called path dependence.

10 March, 2025 05:47PM by Joachim Breitner (mail@joachim-breitner.de)

Thorsten Alteholz

My Debian Activities in February 2025

Debian LTS

This was my hundred-twenty-eighth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4072-1] xorg-server security update to fix eight CVEs related to possible privilege escalation in X.
  • [DLA 4073-1] ffmpeg security update to fix three CVEs related to out-of-bounds read, assert errors and NULL pointer dereferences. This was the second update that I announced last month.

Last but not least I did some days of FD this month and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the seventy-ninth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1337-1] xorg-server security update to fix eight CVEs in Buster, Stretch and Jessie, related to possible privilege escalation in X.
  • [ELA-882-2] amanda regression update to improve a fix for privilege escalation. This old regression was detected by Beuc during his work as FD and is now finally fixed.

Last but not least I did some days of FD this month and attended the monthly LTS/ELTS meeting.

Debian Printing

This month I uploaded new packages or new upstream or bugfix versions of:

  • hplip to fix some bugs and let hplip migrate to testing again.

This work is generously funded by Freexian!

Debian Matomo

This month I uploaded new packages or new upstream or bugfix versions of:

Finally matomo was uploaded. Thanks a lot to Utkarsh Gupta and William Desportes for doing most of the work to make this happen.

This work is generously funded by Freexian!

Debian Astro

Unfortunately I didn’t find any time to upload packages.

Have you ever heard of poliastro? It was a package to do calculations related to astrodynamics and orbital mechanics. It was archived by upstream at the end of 2023. I am now trying to revive it under the new name boinor and hope to get it back into Debian over the next months.

This is almost the last month that Patrick, our Outreachy intern for the Debian Astro project, is handling his tasks. He is working on automatic updates of the indi 3rd-party driver.

Debian IoT

Unfortunately I didn’t find any time to work on this topic.

Debian Mobcom

This month I uploaded new packages or new upstream or bugfix versions of:

misc

Unfortunately I didn’t find any time to work on this topic.

FTP master

This month I accepted 437 and rejected 64 packages. The overall number of packages that got accepted was 445.

10 March, 2025 03:33PM by alteholz

March 09, 2025

Dirk Eddelbuettel

RcppNLoptExample 0.0.2: Minor Updates

An update to our package RcppNLoptExample arrived on CRAN earlier today marking the first update since the initial release more than four years ago. The nloptr package, created by Jelmer Ypma, has long been providing an excellent R interface to NLopt, a very comprehensive library for nonlinear optimization. In particular, Jelmer carefully exposed the API entry points such that other R packages can rely on NLopt without having to explicitly link to it (as one can rely on R providing sufficient function calling and registration to make this possible by referring back to nloptr which naturally has the linking information and resolution). This package demonstrates this in a simple-to-use Rcpp example package that can serve as a stanza.

More recent NLopt versions appear to have changed behaviour a little so that an example we relied upon in a simple unit test now converges to a marginally different numerical value, so we adjusted a convergence threshold. Other than that we did a number of the usual small updates to package metadata, to the README.md file, and to continuous integration.

The (very short) NEWS entry follows:

Changes in version 0.0.2 (2025-03-09)

  • Updated tolerance in simple test as newer upstream nlopt change behaviour ever so slightly leading to an other spurious failure

  • Numerous small and standard updates to DESCRIPTION, README.md, badges, and continuous integration setup

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

09 March, 2025 08:06PM

Lisandro Damián Nicanor Pérez Meyer

Bahía Blanca floods - Mother nature says: no Nuremberg for you today

Update 20250309 13:20-03:00 - How to help

A friend of mine living in the USA sent me this link to help the flood victims: Support Bahía Blanca (Argentina) Flood Victims

Original blog post

This is not good news. In fact, quite the contrary. Compared to the real issue, the fact that I'm not able to attend Embedded World at Nuremberg is, well, a detail. Or at least that's what I'm forcing myself to believe, as I REALLY wanted to be there. But mother nature said otherwise.

Plaza Dr. Alberto Martinelli - Barrio Parque Las Cañitas

Park "D. Alberto Martinelli", Las Cañitas, Bahía Blanca (Google Maps)

Bahía Blanca, the city where I live, has received a lot of rainfall. Really, a lot. Let me put the number in perspective: the previous highest recorded measurement was 170mm (6.69 inch)... in a month. Yesterday, Friday the 7th, we had more than 400mm (15.75 inch) in 9 hours.

But those are just numbers. Some things are better seen in images.

I'll start with some soft ones.

Street sink 1 Street sink 2

Sink in Fournier street near Cambaceres (Google Maps)

I also happen to do figure skating at the same school as the 4-time world champions (where "world" means the whole world), the Roller Dreams precision skating team - Instagram, from Club El Nacional. Our skating rink got severely damaged by the hail we had some 3 weeks ago (yes, we had hail too!!!). Now skating there is just impossible:

Roller Dreams CEN skating rink

The "real" thing

Let's get to the heavy, heartbreaker part. I did go to downtown Bahía Blanca, but during night, so let me share some links, most of them in Spanish, but images are images:

My alma mater, Universidad Nacional del Sur, lost its main library, a great part of the Physics department and a lot of labs :-(

A nearby town, General Cerri, had even worse luck. Bahía Blanca, a city of 300k+ people, has around 400 evacuated people. General Cerri, a town of perhaps 3,000 people, had at least 800.

Bahía Blanca, devil's land

Every place has its legends. We do too. This land was called "Huecuvú Mapú", something like "Devil's land", by the original inhabitants of the zone, due to its harsh climate: strong winters and hot summers, coupled with fierce wind. But back in 1855 the Cacique (chief) José María Bulnes Yanquetruz had a peace agreement with commander Nicanor Otamendi. But a battle ensued, which Yanquetruz won. At this point history differs depending upon who tells it. Some say Yanquetruz was assigned a military grade as Captain of the indigenous auxiliary forces and provided a military suit, some say he stole it, some say this was a setup by another chief wanting to disrupt peace. But what is known is that Yanquetruz was killed, and his wife, the "machi" (sorceress), issued a curse over the land that would last 1000 years, and the curse was on the climate.

Aftermath

No, we are not there yet. This has just happened, the third violent climate event in 15 months. The city needs to mourn and start healing itself. Time will tell.

09 March, 2025 06:35PM by Lisandro Damián Nicanor Pérez Meyer

Niels Thykier

Improving Debian packaging in Kate

The other day, I noted that the emacs integration with debputy stopped working. After debugging for a while, I realized that emacs no longer sent the didOpen notification that is expected of it, which confused debputy. At this point, I was already several hours into the debugging and I noted there were some discussions on debian-devel about emacs and byte compilation not working. So I figured I would shelve the emacs problem for now.

But I needed an LSP capable editor and with my vi skills leaving much to be desired, I skipped out on vim-youcompleteme. Instead, I pulled out kate, which I had not been using for years. It had LSP support, so it would be fine, right?

Well, no. Turns out that debputy LSP support had some assumptions that worked for emacs but not kate. Plus once you start down the rabbit hole, you stumble on things you missed previously.

Getting started

First order of business was to tell kate about debputy. Conveniently, kate has a configuration tab for adding language servers in a JSON format right next to the tab where you can see its configuration for built-in LSP servers (also in JSON format). So a quick bit of copy-paste magic and that was done.
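
For reference, the entry looks roughly like this (a sketch from memory: the key names follow kate's LSP client documentation and the debputy lsp server invocation comes from debputy's docs, so verify both against the merged configuration):

{
    "servers": {
        "debputy": {
            "command": ["debputy", "lsp", "server"],
            "highlightingModeRegex": "^.*Debian.*$"
        }
    }
}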

Yesterday, I opened an MR against upstream to have the configuration added (https://invent.kde.org/utilities/kate/-/merge_requests/1748) and they already merged it. Today, I then filed a wishlist against kate in Debian to have the Debian maintainers cherry-pick it, so it works out of the box for Trixie (https://bugs.debian.org/1099876).

So far so good.

Inlay hint woes

Since July (2024), debputy has support for Inlay hints. They are basically small bits of text that the LSP server can ask the editor to inject into the text to provide hints to the reader.

Typically, you see them used to provide typing hints, where the editor or the underlying LSP server has figured out the type of a variable or expression that you did not explicitly type. Another common use case is to inject the parameter name for positional arguments when calling a function, so the user does not have to count the position to figure out which value is passed as which parameter.

In debputy, I have been using the Inlay hints to show inherited fields in debian/control. As an example, if you have a definition like:

Source: foo-src
Section: devel
Priority: optional

Package: foo-bin
Architecture: any

Then foo-bin inherits the Section and Priority fields since it does not supply its own. Previously, debputy would do that by injecting the fields themselves and their values just below the Package field as if you had typed them out directly. The editor always renders Inlay hints distinctly from regular text, so there was no risk of confusion and it made the text look like a valid debian/control file end to end. The result looked something like:

Source: foo-src
Section: devel
Priority: optional

Package: foo-bin
Section: devel
Priority: optional
Architecture: any

With the second instances of Section and Priority being rendered differently from their surroundings (usually faded or colorless).

Unfortunately, kate did not like injecting Inlay hints with a newline in them, which was needed for this trick. Reading into the LSP specs, it says nothing about multi-line Inlay hints being a thing and I figured I would see this problem again with other editors if I left it be.

I ended up changing the Inlay hints to be placed at the end of the Package field and then included surrounding () for better visuals. So now, it looks like:

Source: foo-src
Section: devel
Priority: optional

Package: foo-bin  (Section: devel)  (Priority: optional)
Architecture: any

Unfortunately, it is no longer 1:1 with the underlying syntax which I liked about the previous one. But it works in more editors and is still explicit. I also removed the Inlay hint for the Homepage field. It takes too much space and I have yet to meet someone missing it in the binary stanza.

If you have any better ideas for how to render it, feel free to reach out to me.

Spurious completion and hover

As I was debugging the Inlay hints, I wanted to do a quick restart of debputy after each fix. Then I would trigger a small change to the document to ensure kate would request an update from debputy to render the Inlay hints with the new code.

The full outgoing payloads are sent via the logs to the client, so it was really about minimizing which LSP requests are sent to debputy. Notably, two cases would flood the log:

  • Completion requests. These are triggered by typing anything at all, and since I wanted to make a change, I could not avoid this. So here it was about making sure there would be nothing to complete, so the result was as small as possible.
  • Hover doc requests. These are triggered by the mouse hovering over a field, so this was mostly about ensuring my mouse movement did not linger over any field on the way between restarting the LSP server and scrolling the log in kate.

In my infinite wisdom, I chose to make a comment line where I would do the change. I figured it would neuter the completion requests completely and it should not matter if my cursor landed on the comment as there would be no hover docs for comments either.

Unfortunately for me, debputy would ignore the fact that it was on a comment line. Instead, it would find the next field after the comment line and try to complete based on that. Normally you do not see this, because the editor correctly identifies that none of the completion suggestions start with a #, so they are all discarded.

But it was pretty annoying for the debugging, so now debputy has been told to explicitly stop these requests early on comment lines.

Hover docs for packages

I added a feature in debputy where you can hover over package names in your relationship fields (such as Depends) and debputy will render a small snippet about it based on data from your local APT cache.

This doc is then handed to the editor and tagged as markdown provided the editor supports markdown rendering. Both emacs and kate support markdown. However, not all markdown renderings are equal. Notably, emacs's rendering does not reformat the text into paragraphs. In a sense, emacs rendering works a bit like <pre>...</pre> except it does a bit of fancy rendering inside the <pre>...</pre>.

On the other hand, kate seems to convert the markdown to HTML and then throw the result into an HTML render engine. Here it is important to remember that not all newlines are equal in markdown. A Foo<newline>Bar is treated as one "paragraph" (<p>...</p>) and the HTML renderer happily renders this as the single line Foo Bar provided there is sufficient width to do so.

A couple of extra newlines worked wonders for the kate rendering, but I have a feeling this is not going to be the last time the hover docs will need some tweaking for prettification. Feel free to reach out if you spot a weirdly rendered hover doc somewhere.

Making quickfixes available in kate

Quickfixes are treated as generic code actions in the LSP specs. Each code action has a "type" (kind in the LSP lingo), which enables the editor to group the actions accordingly or filter by certain types of code actions.

The design in the specs leads to the following flow:

  1. The LSP server provides the editor with diagnostics (there are multiple ways to trigger this, so we will keep this part simple).
  2. The editor renders them to the user and the user chooses to interact with one of them.
  3. The interaction makes the editor ask the LSP server which code actions are available at that location (optionally with a filter to only see quickfixes).
  4. The LSP server looks at the provided range and is expected to return the relevant quickfixes here.

This flow is really annoying from an LSP server writer's point of view. When you do the diagnostics (in step 1), you tend to already know what the possible quickfixes would be. The LSP spec authors realized this at some point, so there are two features the editor provides to simplify this.

  1. In the editor request for code actions, the editor is expected to provide the diagnostics that they received from the server. Side note: I cannot quite tell if this is optional or required from the spec.
  2. The editor can provide support for remembering a data member in each diagnostic. The server can then store arbitrary information in that member, which they will see again in the code actions request. Again, provided that the editor supports this optional feature.

All the quickfix logic in debputy so far has hinged on both of these two features.

As life would have it, kate provides neither of them.

Which meant I had to teach debputy to keep track of its diagnostics on its own. The plus side is that this makes it easier to support "pull diagnostics" down the line, since it requires a similar feature. Additionally, it also means that quickfixes are now available in more editors. For consistency, debputy logic is now always used rather than relying on the editor support when present.

The downside is that I had to spend hours coming up with and debugging a way to find the diagnostics that overlap with the range provided by the editor. The most difficult part was keeping the logic straight and getting the runes correct for it.
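
The bookkeeping boils down to something like the following (a minimal sketch in Python, not debputy's actual code; positions are (line, character) pairs as in the LSP spec):

# Keep the diagnostics we published, per document URI.
diagnostics_by_uri: dict[str, list[dict]] = {}

def overlaps(a: tuple, b: tuple) -> bool:
    # Ranges are ((start_line, start_char), (end_line, end_char)).
    # Position tuples compare lexicographically, so two half-open
    # ranges intersect iff each one starts before the other one ends.
    (a_start, a_end), (b_start, b_end) = a, b
    return a_start < b_end and b_start < a_end

def quickfix_candidates(uri: str, requested: tuple) -> list[dict]:
    # Return the stored diagnostics whose range overlaps the
    # range of the code action request.
    return [d for d in diagnostics_by_uri.get(uri, [])
            if overlaps(d["range"], requested)]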

Making the quickfixes actually work

With all of that, kate would show the quickfixes for diagnostics from debputy and you could use them too. However, they would always apply twice, with a suboptimal outcome as a result.

The LSP spec has multiple ways of defining what need to be changed in response to activating a code action. In debputy, all edits are currently done via the WorkspaceEdit type. It has two ways of defining the changes. Either via changes or documentChanges with documentChanges being the preferred one if both parties support this.

I originally read that as I was allowed to provide both and the editor would pick the one it preferred. However, after seeing kate blindly use both when they are present, I reviewed the spec and it does say "The edit should either provide changes or documentChanges", so I think that one is on me.

None of the changes in debputy currently require documentChanges, so I went with just using changes for now despite it not being preferred. I cannot figure out the logic of whether an editor supports documentChanges. As I read the notes for this part of the spec, my understanding is that kate does not announce its support for documentChanges but it clearly uses them when present. Therefore, I decided to keep it simple for now until I have time to dig deeper.
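
Spelled out as JSON-ish Python dictionaries, the shape debputy emits looks like this (a sketch of the spec's data shapes; the file name and the edit itself are made up):

# The "changes" variant: a map from document URI to a list of TextEdits.
workspace_edit = {
    "changes": {
        "file:///tmp/pkg/debian/control": [
            {
                "range": {
                    "start": {"line": 5, "character": 0},
                    "end": {"line": 6, "character": 0},
                },
                "newText": "",  # e.g. drop a duplicated field
            },
        ],
    },
}
# The "documentChanges" variant would instead carry versioned entries:
# [{"textDocument": {"uri": ..., "version": ...}, "edits": [...]}]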

Remaining limitations with kate

There is one remaining limitation with kate that I have not yet solved. The kate program uses KSyntaxHighlighting for its language detection, which in turn is the basis for which LSP server is assigned to a given document.

This engine does not seem to support as complex detection logic as I had hoped. Concretely, it either works on matching on an extension / a basename (same field for both cases) or a mime type. This, combined with our habit in Debian of using extension-less files like debian/control vs. debian/tests/control or debian/rules or debian/upstream/metadata, makes things awkward at best.

Concretely, the syntax engine cannot tell debian/control from debian/tests/control as they use the same basename. Fortunately, the syntax is close enough to work for both and debputy is set to use filename based lookups, so this case works well enough.

However, for debian/rules and debian/upstream/metadata, my understanding is that if I assign these in the syntax engine as Debian files, these rules will also trigger for any file named foo.rules or bar.metadata. That seems a bit too broad for me, so I have opted out of that for now. The downside is that these files will not work out of the box with kate for now.

The current LSP configuration in kate does not recognize makefiles or YAML either. Ideally, we would assign custom languages for the affected Debian files, so we do not steal the ID from other language servers. Notably, kate has a built-in language server for YAML and debputy does nothing for a generic YAML document. However, adding YAML as a supported language for debputy would cause conflict and regressions for users that are already happy with their generic YAML language server from kate.

So there is certainly still work to be done. If you are good with KSyntaxHighlighting and know how to solve some of this, I hope you will help me out.

Changes unrelated to kate

While I was working on debputy, I also added some other features that I want to mention.

  1. The debputy lint command will now show related context for a diagnostic in its terminal report when such information is available and is from the same file as the diagnostic itself (cross-file cases are rendered without related information).

    The related information is typically used to highlight a source of a conflict. As an example, if you use the same field twice in a stanza of debian/control, then debputy will add a diagnostic to the second occurrence. The related information for that diagnostic would provide the position of the first occurrence.

    This should make it easier to find the source of the conflict in the cases where debputy provides it. Let me know if you are missing it for certain diagnostics.

  2. The diagnostics analysis of debian/control will now identify and flag simple duplicated relations (complex ones like OR relations are ignored for now). Thanks to Matthias Geiger for suggesting the feature and Otto Kekäläinen for reporting a false positive that is now fixed.

Closing

I am glad I tested with kate to weed out most of these issues in time before the freeze. The Debian freeze will start within a week from now. Since debputy is a part of the toolchain packages it will be frozen from there except for important bug fixes.

09 March, 2025 12:05PM by Niels Thykier

March 08, 2025

Gunnar Wolf

The author has been doctored.

Almost exactly four years after I started with this project, yesterday I presented my PhD defense.

My thesis was what I’ve been presenting advances of all around since ≈2022: «A certificate-poisoning-resistant protocol for the synchronization of Web of Trust networks»

Lots of paperwork is still on the road for me. But at least in the immediate future, I can finally use this keyring my friend Raúl Gómez 3D-printed for me:

08 March, 2025 06:31PM

Debian Brasil

MiniDebConf Belo Horizonte 2024 - a brief report

From April 27th to 30th, 2024, MiniDebConf Belo Horizonte 2024 was held at the Pampulha Campus of UFMG - Federal University of Minas Gerais, in Belo Horizonte city.

MiniDebConf BH 2024 banners This was the fifth time that a MiniDebConf (as an exclusive in-person event about Debian) took place in Brazil. Previous editions were in Curitiba (2016, 2017, and 2018), and in Brasília in 2023. We had other MiniDebConf editions held within Free Software events such as FISL and Latinoware, and other online events. See our event history.

Parallel to MiniDebConf, on the 27th (Saturday), FLISOL - Latin American Free Software Installation Festival took place. It's the largest event in Latin America to promote Free Software, and it has been held since 2005 simultaneously in several cities.

MiniDebConf Belo Horizonte 2024 was a success (as were previous editions) thanks to the participation of everyone, regardless of their level of knowledge about Debian. We value the presence of both beginner users who are familiarizing themselves with the system and the official project developers. The spirit of welcome and collaboration was present throughout the event.

MiniDebConf BH 2024 flisol

2024 edition numbers

During the four days of the event, several activities took place for all levels of users and collaborators of the Debian project. The official schedule was composed of:

  • 06 rooms in parallel on Saturday;
  • 02 auditoriums in parallel on Monday and Tuesday;
  • 30 talks/BoFs of all levels;
  • 05 workshops for hands-on activities;
  • 09 lightning talks on general topics;
  • 01 Live Electronics performance with Free Software;
  • Install fest to install Debian on attendees' laptops;
  • BSP (Bug Squashing Party);
  • Uploads of new or updated packages.

MiniDebConf BH 2024 palestra The final numbers for MiniDebConf Belo Horizonte 2024 show that we had a record number of participants.

  • Total people registered: 399
  • Total attendees in the event: 224

Of the 224 participants, 15 were official Brazilian contributors, 10 being DDs (Debian Developers) and 05 DMs (Debian Maintainers), in addition to several unofficial contributors.

The organization was carried out by 14 people who started working at the end of 2023, including Prof. Loïc Cerf from the Computing Department who made the event possible at UFMG, and 37 volunteers who helped during the event.

As MiniDebConf was held at UFMG facilities, we had the help of more than 10 University employees.

See the list with the names of people who helped in some way in organizing MiniDebConf Belo Horizonte 2024.

The difference between the number of people registered and the number of attendees is probably explained by the fact that there is no registration fee, so if a person decides not to attend, they suffer no financial loss.

The 2024 edition of MiniDebconf Belo Horizonte was truly grand and shows the result of the constant efforts made over the last few years to attract more contributors to the Debian community in Brazil. With each edition the numbers only increase, with more attendees, more activities, more rooms, and more sponsors/supporters.

MiniDebConf BH 2024 grupo

MiniDebConf BH 2024 grupo

Activities

The MiniDebConf schedule was intense and diverse. On the 27th, 29th and 30th (Saturday, Monday and Tuesday) we had talks, discussions, workshops and many practical activities.

MiniDebConf BH 2024 palestra On the 28th (Sunday), the Day Trip took place, a day dedicated to sightseeing around the city. In the morning we left the hotel and went, on a chartered bus, to the Belo Horizonte Central Market. People took the opportunity to buy various things such as cheeses, sweets, cachaças and souvenirs, as well as tasting some local foods.

MiniDebConf BH 2024 mercado After a 2-hour tour of the Market, we got back on the bus and hit the road for lunch at a typical Minas Gerais food restaurant.

MiniDebConf BH 2024 palestra With everyone well fed, we returned to Belo Horizonte to visit the city's main tourist attraction: Lagoa da Pampulha and Capela São Francisco de Assis, better known as Igrejinha da Pampulha. MiniDebConf BH 2024 palestra We went back to the hotel and the day ended in the hacker space that we set up in the events room for people to chat, package, and eat pizza.

MiniDebConf BH 2024 palestra

Crowdfunding

For the third time we ran a crowdfunding campaign and it was incredible how people contributed! The initial goal was to raise the amount equivalent to a gold tier of R$ 3,000.00. When we reached this goal, we defined a new one, equivalent to one gold tier + one silver tier (R$ 5,000.00). And again we achieved this goal. So we proposed as a final goal the value of the gold + silver + bronze tiers, which would be equivalent to R$ 6,000.00. The result was that we raised R$ 7,239.65 (~ USD 1,400) with the help of more than 100 people!

Thank you very much to the people who contributed any amount. As a thank you, we list the names of the people who donated. MiniDebConf BH 2024 doadores

Food, accommodation and/or travel grants for participants

Each edition of MiniDebConf brought some innovation, or some different benefit for the attendees. In this year's edition in Belo Horizonte, as with DebConfs, we offered bursaries for food, accommodation and/or travel to help those people who would like to come to the event but who would need some kind of help.

In the registration form, we included the option for the person to request a food, accommodation and/or travel bursary, but to do so, they would have to identify themselves as a contributor (official or unofficial) to Debian and write a justification for the request.

Number of people benefited:

  • Food: 69
  • Accommodation: 20
  • Travel: 18

The food bursary provided lunch and dinner every day. The lunches included attendees who live in Belo Horizonte and the region, while dinners were paid for attendees who also received accommodation and/or travel grants. The accommodation was at the BH Jaraguá Hotel. The travel grants covered airplane or bus tickets, or fuel (for those who came by car or motorbike).

Much of the money to fund the bursaries came from the Debian Project, mainly for travel. We sent a budget request to the former Debian leader Jonathan Carter, and he promptly approved our request.

In addition to this event budget, the leader also approved individual requests sent by some DDs who preferred to request directly from him.

The experience of offering the bursaries was really good because it allowed several people to come from other cities.

MiniDebConf BH 2024 grupo

Photos and videos

You can watch recordings of the talks at the links below:

Thanks

We would like to thank all the attendees, organizers, volunteers, sponsors and supporters who contributed to the success of MiniDebConf Belo Horizonte 2024.

MiniDebConf BH 2024 grupo

Sponsors

Gold:

Silver:

Bronze:

Organizers

08 March, 2025 05:28PM

Dirk Eddelbuettel

RcppTOML 0.2.3 on CRAN: Compiler Nag, Small Updates

A new (mostly maintenance) release 0.2.3 of RcppTOML is now on CRAN.

TOML is a file format that is most suitable for configurations, as it is meant to be edited by humans but read by computers. It emphasizes strong readability for humans while at the same time supporting strong typing as well as immediate and clear error reports. On small typos you get parse errors, rather than silently corrupted garbage. Much preferable to any and all of XML, JSON or YAML – though sadly these may be too ubiquitous now. TOML is frequently being used with projects such as the Hugo static blog compiler, or the Cargo system of Crates (aka “packages”) for the Rust language.

This release was tickled by another CRAN request: just like yesterday’s release and the RcppDate release two days ago, it responds to the esoteric ‘whitespace in literal operator’ deprecation warning. We alerted upstream too.

The short summary of changes follows.

Changes in version 0.2.3 (2025-03-08)

  • Correct the minimum version of Rcpp to 1.0.8 (Walter Somerville)

  • The package now uses Authors@R as mandated by CRAN

  • Updated 'whitespace in literal' issue upsetting clang++-20

  • Continuous integration updates including simpler r-ci setup

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

08 March, 2025 02:08PM

Vincent Bernat

Auto-expanding aliases in Zsh

To avoid needless typing, the fish shell features command abbreviations to expand some words after pressing space. We can emulate such a feature with Zsh:

# Definition of abbrev-alias for auto-expanding aliases
typeset -ga _vbe_abbrevations
abbrev-alias() {
    alias $1
    _vbe_abbrevations+=(${1%%\=*})
}
_vbe_zle-autoexpand() {
    local -a words; words=(${(z)LBUFFER})
    if (( ${#_vbe_abbrevations[(r)${words[-1]}]} )); then
        zle _expand_alias
    fi
    zle magic-space
}
zle -N _vbe_zle-autoexpand
bindkey -M emacs " " _vbe_zle-autoexpand
bindkey -M isearch " " magic-space

# Correct common typos
(( $+commands[git] )) && abbrev-alias gti=git
(( $+commands[grep] )) && abbrev-alias grpe=grep
(( $+commands[sudo] )) && abbrev-alias suod=sudo
(( $+commands[ssh] )) && abbrev-alias shs=ssh

# Save a few keystrokes
(( $+commands[git] )) && abbrev-alias gls="git ls-files"
(( $+commands[ip] )) && {
  abbrev-alias ip6='ip -6'
  abbrev-alias ipb='ip -brief'
}

# Hard to remember options
(( $+commands[mtr] )) && abbrev-alias mtrr='mtr -wzbe'

Here is a demo where gls is expanded to git ls-files after pressing space:

Auto-expanding gls to git ls-files

I don’t auto-expand all aliases. I keep using regular aliases when slightly modifying the behavior of a command or for well-known abbreviations:

alias df='df -h'
alias du='du -h'
alias rm='rm -i'
alias mv='mv -i'
alias ll='ls -ltrhA'

08 March, 2025 09:58AM by Vincent Bernat

March 07, 2025

Dirk Eddelbuettel

RcppSimdJson 0.1.13 on CRAN: Compiler Nag, New Upstream

A new release 0.1.13 of the RcppSimdJson package is now on CRAN.

RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mindboggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one cpu cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon.

This release was tickled by another CRAN request: just like yesterday’s RcppDate release, it responds to the esoteric ‘whitespace in literal operator’ deprecation warning. Turns out that upstream simdjson had this fixed a few months ago as the node bindings package ran into it. Other changes include a bit of earlier polish by Daniel, another CRAN-mandated update, CI improvements, and a move of two demos to examples/ to avoid having to add half a dozen packages to Suggests: for no real usage gain in the package.

The short NEWS entry for this release follows.

Changes in version 0.1.13 (2025-03-07)

  • A call to std::string::erase is now guarded (Daniel)

  • The package now uses Authors@R as mandated by CRAN (Dirk)

  • simdjson was upgraded to version 3.12.2 (Dirk)

  • Continuous integration updated to more compilers and simpler setup

  • Two demos are now in inst/examples to not inflate Suggests

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

07 March, 2025 07:12PM

Paulo Henrique de Lima Santana

Bits from FOSDEM 2025

This year I was at FOSDEM 2025, and it was the fifth edition in a row that I participated in person (before it was in 2019, 2020, 2023 and 2024). The event took place on February 1st and 2nd, as always at the ULB campus in Brussels.

We arrived on Friday at lunchtime and went straight to the hotel to drop off our bags. This time we stayed at the Ibis in the city center, very close to the hustle and bustle. The price was good and the location made it easy to go out in the city center and come back late at night. We found a Japanese restaurant near the hotel and it was definitely worth having lunch there because of the all-you-can-eat price. After taking a nap, we went out for a walk. Since January 31st is the last day of the winter sales in the city, the streets in the city center were crowded, there were lots of people in the stores, and the prices were discounted. We concluded that if we have the opportunity to go to Brussels again at this time of year, it would be better to wait and buy our cold-weather clothes there.


Fosdem 2025

Unlike in 2023 and 2024, the FOSDEM organization did not approve my request for the Translations DevRoom, so my goal was to participate in the event and collaborate at the Debian booth. As I always do, I also volunteered to operate the broadcast camera in the main auditorium on both days, for two hours each.

The Debian booth:
Fosdem 2025

Me in the auditorium helping with the broadcast:
Fosdem 2025

Two weeks before the event, the organization put out a call for people interested in requesting a room for their community's BoF (Birds of a Feather). I requested a room for Debian and it was approved :-)

It was great to see that people were really interested in participating in the BoF and the room was packed! As the host of the discussions, I tried to leave the space open for anyone who wanted to talk about any subject related to Debian. We started with a talk from the MiniDebConf25 organizers; the event will take place this year in France. Then other topics followed with people talking, asking and answering questions, etc. It was worth organizing this BoF. Who knows, maybe the idea will carry over to 2026.


Fosdem 2025

Carlos (a.k.a Charles), Athos, Maíra and Melissa talked at FOSDEM, and Kanashiro was one of the organizers of the Distributions DevRoom


Fosdem 2025

During the two days of the event, it didn’t rain or get too cold. The days were sunny (and people celebrated the weather in Brussels). But I have to admit that it would have been nice to see snow like I did in 2019. Unlike last year, this time I felt more motivated to stay at the event the whole time.

I would like to give my special thanks to Andreas Tille, the current Debian Leader, who approved my request for flight tickets so that I could join FOSDEM 2025. As always, this help was essential in making my trip to Brussels possible.

And once again Jandira was with me on this adventure. On Monday we went for a walk around Brussels and we also traveled to visit Bruges again. The visit to this city is really worth it because walking through the historic streets is like going back in time. This time we even took a boat trip through the canals, which was really cool.


Fosdem 2025

Fosdem 2025

07 March, 2025 02:00PM

Paul Wise

FLOSS Activities February 2025

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

07 March, 2025 07:26AM

Valhalla's Things

MOAR Slippers

Posted on March 7, 2025
Tags: madeof:atoms, craft:sewing, FreeSoftWear

A pair of espadrille-style slippers in black denim with a shiny black design on the uppers and twine soles.

A couple of years ago, I made myself a pair of slippers in linen with a braided twine sole and then another pair of hiking slippers: I am happy to report that they have been mostly a success.

Now, as I feared, the white linen fabric wasn’t a great choice: not only did it become dirt-grey linen fabric in a very short time, but the area under the ball of the foot was quickly consumed by friction, just as usually happens with bought slippers.

I have no pictures for a number of reasons, but trust me when I say that they look pretty bad.

The same slippers, one of them is turned upside down to show the sole made from a twine braid, sewn in a spiral until it is the shape of a sole.

However, the sole is still going strong, and the general concept has proved valid, so when I needed a second pair of slippers I used the same pattern, with a sole made from the same twine but this time with denim taken from the legs of an old pair of jeans.

To make them a bit nicer, and to test the technique, I also added a design with a stencil and iridescent black acrylic paint (with fabric medium): I like the tone-on-tone effect, as it’s both (relatively) subtle and shiny.

A pair of open-heeled slippers in faded blue jeans.

Then, my partner also needed new slippers, and I wanted to make his too.

His preference, however, is for open heeled slippers, so I adjusted the pattern into a new one, making it from an old pair of blue jeans, rather than black as mine.

A braided twine sole, showing how an heel has been made in the same technique and sewn under the sole with blanket stitches.

He also finds completely flat soles a bit uncomfortable, so I made a heel with the same braided twine technique: this also seems to be working fine, and I’ve added these instructions to the braided soles ones

Both of these have now been worn for a few months: the denim is working much better than the linen (which isn’t a complete surprise) and we’re both finding them comfortable, so if we ever need new slippers I think I’ll keep using this pattern.

Now the plan is to wash the linen slippers, and then look into repairing them, either with just a new fabric inner sole + padding, or if washing isn’t as successful as I’d like by making a new fabric part in a different material and reusing just the twine sole. Either way they are going back into use.

07 March, 2025 12:00AM

March 06, 2025

Antoine Beaupré

Nix Notes

Meta

In case you haven't noticed, I'm trying to post more often, and one of the things that entails is to just dump a bunch of draft notes over the fence. In this specific case, I had a set of rough notes about NixOS and particularly Nix, the package manager.

In this case, you can see the very birth of an article, what it looks like before it becomes the questionable prose it is now, by looking at the Git history of this file, particularly its birth. I have a couple of those left, and it would be pretty easy to publish them as is, but I feel I'd be doing others (and myself! I write for my own documentation too after all) a disservice by not going the extra mile on those.

So here's the long version of my experiment with Nix.

Nix

A couple friends are real fans of Nix. Just like I work with Puppet a lot, they deploy and maintain servers (if not fleets of servers) with NixOS and its declarative package management system. Essentially, they use it as a configuration management system, which is pretty awesome.

That, however, is a bit too high of a bar for me. I rarely try new operating systems these days: I'm a Debian developer and it takes most of my time to keep that functional. I'm not going to go around messing with other systems as I know that would inevitably get me dragged down into contributing into yet another free software project. I'm mature now and know where to draw the line. Right?

So I'm just testing Nix, the package manager, on Debian, because I learned from my friend that nixpkgs is the largest package repository out there, a mind-boggling 100,000 at the time of writing (with 88% of packages up to date), compared to around 40,000 in Debian (or 72,000 if you count binary packages, with 72% up to date). I naively thought Debian was the largest, perhaps competing with Arch, and I was wrong: Arch is larger than Debian too.

What brought me there is I wanted to run Harper, a fast spell-checker written in Rust. The logic behind using Nix instead of just downloading the source and running it myself is that I delegate the work of supply-chain integrity checking to a distributor, a bit like you trust Debian developers like myself to package things in a sane way. I know this widens the attack surface to a third party of course, but the rationale is that I shift cryptographic verification to another stack than just "TLS + GitHub" (although that is somewhat still involved) that's linked with my current chain (Debian packages).

I have since then stopped using Harper for various reasons and also wrapped up my Nix experiment, but felt it worthwhile to jot down some observations on the project.

Hot take

Overall, Nix is hard to get into, with a complicated learning curve. I have found the documentation to be a bit confusing, since there are many ways to do certain things. I particularly tripped on "flakes" and, frankly, incomprehensible error reporting.

It didn't help that I tried to run nixpkgs on Debian, which is technically possible, but you can tell that I'm not supposed to be doing this. My friend who reviewed this article expressed surprise at how easy this was, but then he only saw the finished result, not me tearing my hair out to make this actually work.

Nix on Debian primer

So here's how I got started. First I installed the nix binary package:

apt install nix-bin

Then I had to add myself to the right group and logout/log back in to get the rights to deploy Nix packages:

adduser anarcat nix-users

That wasn't easy to find, but is mentioned in the README.Debian file shipped with the Debian package.

Then, I didn't write this down, but the README.Debian file above mentions it, so I think I added a "channel" like this:

nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
nix-channel --update
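
To double-check that the channel got registered, nix-channel --list should print it back (this is a standard subcommand, though I didn't note its output at the time):

nix-channel --list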

And I likely installed the Harper package with:

nix-env --install harper

At this point, harper was installed in a ... profile? Not sure.

I had to add ~/.nix-profile/bin (a symlink to /nix/store/sympqw0zyybxqzz6fzhv03lyivqqrq92-harper-0.10.0/bin) to my $PATH environment for this to actually work.
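
In shell terms, that's one line in ~/.bashrc or equivalent (a sketch assuming the default single-user profile location):

export PATH="$HOME/.nix-profile/bin:$PATH"  # default per-user Nix profile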

Side notes on documentation

Those last two commands (nix-channel and nix-env) were hard to figure out, which is kind of amazing because you'd think a tutorial on Nix would feature something like this prominently. But three different tutorials failed to bring me up to that basic setup, even the README.Debian didn't spell that out clearly.

The tutorials all show me how to develop packages for Nix, not plainly how to install Nix software. This is presumably because "I'm doing it wrong": you shouldn't just "install a package", you should setup an environment declaratively and tell it what you want to do.

But here's the thing: I didn't want to "do the right thing". I just wanted to install Harper, and documentation failed to bring me to that basic "hello world" stage. Here's what one of the tutorials suggests as a first step, for example:

curl -L https://nixos.org/nix/install | sh
nix-shell --packages cowsay lolcat
nix-collect-garbage

... which, when you follow through, leaves you with almost precisely nothing left installed (apart from Nix itself, set up with a nasty "curl pipe bash"). So while that works for testing Nix, you're not much better off than when you started.
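
(In hindsight, the closest thing to the quick result I wanted was probably a temporary shell in the same style as the snippet above; this is my guess, assuming the nixpkgs attribute is indeed named harper:

nix-shell --packages harper

... which drops you into a shell with harper on the $PATH, without installing anything into a profile.)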

Rolling back everything

Now that I have stopped using Harper, I don't need Nix anymore, which I'm sure my Nix friends will be sad to read about. Don't worry, I have notes now, and can try again!

But still, I wanted to clear things out, so I did this, as root:

deluser anarcat nix-users
apt purge nix-bin
rm -rf /nix ~/.nix*

I think this cleared things out, but I'm not actually sure.

Side note on Nix drama

This blurb wouldn't be complete without a mention that the Nix community has been somewhat tainted by the behavior of its founder. I won't bother you too much with this; LWN covered it well in 2024, and made a followup article about spinoffs and forks that's worth reading as well.

I did want to say that everyone I have been in contact with in the Nix community was absolutely fantastic. So I am really sad that the behavior of a single individual can pollute a community in such a way.

As a leader, if you have but one responsibility, it's to behave properly towards the people around you. It's actually really, really hard to do that, because yes, it means you need to act differently than others and no, you just don't get to be upset at others like you normally would with friends, because you're in a position of authority.

It's a lesson I'm still learning myself, to be fair. But at least I don't work with arms manufacturers; if I did, I would be sure as hell to accept the nick (or nix?) on the chin when people got upset, and try to make amends.

So long live the Nix people! I hope the community recovers from that dark moment, so far it seems like it will.

And thanks for helping me test Harper!

06 March, 2025 08:44PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppDate 0.0.5: Address Minor Compiler Nag

RcppDate wraps the featureful date library written by Howard Hinnant for use with R. This header-only modern C++ library has been in pretty wide-spread use for a while now, and adds to C++11/C++14/C++17 what will be (with minor modifications) the ‘date’ library in C++20. RcppDate adds no extra R or C++ code and can therefore be a zero-cost dependency for any other project; yet a number of other projects decided to re-vendor it, resulting in less-efficient duplication. Oh well. C’est la vie.

This release syncs with the (already mostly included) upstream release 3.0.3, and also addresses a fresh (and mildly esoteric) nag from clang++-20. One upstream PR already addressed this in the files tickled by some CRAN packages; I followed up with another upstream PR addressing this in a few more occurrences.

Changes in version 0.0.5 (2025-03-06)

  • Updated to upstream version 3.0.3

  • Updated 'whitespace in literal' issue upsetting clang++-20; this is also fixed upstream via two PRs

Courtesy of my CRANberries, there is also a diffstat report for the most recent release. More information is available at the repository or the package page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

06 March, 2025 12:50PM

Russell Coker

8k Video Cards

I previously blogged about getting an 8K TV [1]. Now I’m working on getting 8K video out of a computer that talks to it. I borrowed an NVidia RTX A2000 card, which according to its specs can do 8K [2], with a mini-DisplayPort to HDMI cable rated at 8K, but on both Windows and Linux the two highest resolutions on offer are 3840*2160 (regular 4K) and 4096*2160, which is strange and not useful.

The various documents on the A2000 differ on whether it has DisplayPort version 1.4 or 1.4a. According to the DisplayPort Wikipedia page [3] both versions 1.4 and 1.4a have a maximum of HBR3 speed and the difference is what version of DSC (Display Stream Compression [4]) is in use. DSC apparently causes no noticeable loss of quality for movies or games but apparently can be bad for text. According to the DisplayPort Wikipedia page version 1.4 can do 8K uncompressed at 30Hz or 24Hz with high dynamic range. So this should be able to work.
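
As a back-of-the-envelope check of those numbers (my own arithmetic, ignoring blanking overhead): 8K at 30Hz with 8-bit colour needs roughly 7680 × 4320 × 30 × 24 ≈ 23.9 Gbit/s, while HBR3 provides 4 lanes × 8.1 Gbit/s = 32.4 Gbit/s raw, or about 25.92 Gbit/s of payload after 8b/10b encoding. So 8K@30Hz just fits uncompressed, while 8K@60Hz (≈ 47.8 Gbit/s) would need DSC.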

My theories as to why it doesn’t work are:

  • NVidia specs lie
  • My 8K cable isn’t really an 8K cable
  • Something weird happens converting DisplayPort to HDMI
  • The video card can only handle refresh rates for 8K that don’t match supported input for the TV

To get some more input on this issue I posted on Lemmy; here is the Lemmy post [5]. I signed up to lemmy.ml because it was the first instance I found that seemed reasonable and was giving away free accounts. I haven’t tried any others and can’t compare, but it seems to work well enough and it’s free. It’s described as “A community of privacy and FOSS enthusiasts, run by Lemmy’s developers”, which is positive; I recommend that everyone who’s into FOSS create an account there or on some other Lemmy server.

My Lemmy post was about what video cards to buy. I was looking at the Gigabyte RX 6400 Eagle 4G as a cheap card from a local store that does 8K, it also does DisplayPort 1.4 so might have the same issues, also apparently FOSS drivers don’t support 8K on HDMI because the people who manage HDMI specs are jerks. It’s a $200 card at MSY and a bit less on ebay so it’s an amount I can afford to risk on a product that might not do what I want, but it seems to have a high probability of getting the same result. The NVidia cards have the option of proprietary drivers which allow using HDMI and there are cards with DisplayPort 1.4 (which can do 8K@30Hz) and HDMI 2.1 (which can do 8K@50Hz). So HDMI is a better option for some cards just based on card output and has the additional benefit of not needing DisplayPort to HDMI conversion.

The best option apparently is the Intel cards which do DisplayPort internally and convert to HDMI in hardware which avoids the issue of FOSS drivers for HDMI at 8K. The Intel Arc B580 has nice specs [6], HDMI 2.1a and DisplayPort 2.1 output, 12G of RAM, and being faster than the low end cards like the RX 6400. But the local computer store price is $470 and the ebay price is a bit over $400. If it turns out to not do what I need it still will be a long way from the worst way I’ve wasted money on computer gear. But I’m still hesitating about this.

Any suggestions?

06 March, 2025 10:53AM by etbe

March 05, 2025

Reproducible Builds

Reproducible Builds in February 2025

Welcome to the second report in 2025 from the Reproducible Builds project. Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As usual, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.

Table of contents:

  1. Reproducible Builds at FOSDEM 2025
  2. Reproducible Builds at PyCascades 2025
  3. Does Functional Package Management Enable Reproducible Builds at Scale?
  4. reproduce.debian.net updates
  5. Upstream patches
  6. Distribution work
  7. diffoscope & strip-nondeterminism
  8. Website updates
  9. Reproducibility testing framework

Reproducible Builds at FOSDEM 2025

Similar to last year’s event, there was considerable activity regarding Reproducible Builds at FOSDEM 2025, held on 1st and 2nd February this year in Brussels, Belgium. We count at least four talks related to reproducible builds. (You can also read our news report from last year’s event in which Holger Levsen presented in the main track.)


Jelle van der Waa, Holger Levsen and kpcyrd presented in the Distributions track on A Tale of several distros joining forces for a common goal. In this talk, three developers from two different Linux distributions (Arch Linux and Debian) discuss this goal — which is, of course, reproducible builds. The presenters discuss both what is shared and what differs between the two efforts, touching on the history and future challenges alike. The slides of this talk are available to view, as is the full video (30m02s). The talk was also discussed on Hacker News.


Zbigniew Jędrzejewski-Szmek presented in the ever-popular Python track on Rewriting .pyc files for fun and reproducibility, i.e. the bytecode files generated by Python in order to speed up module imports: “It’s been known for a while that those are not reproducible: on different architectures, the bytecode for exactly the same sources ends up slightly different.” The slides of this talk are available, as is the full video (28m32s).


In the Nix and NixOS track, Julien Malka presented on the Saturday asking How reproducible is NixOS: “We know that the NixOS ISO image is very close to be perfectly reproducible thanks to reproducible.nixos.org, but there doesn’t exist any monitoring of Nixpkgs as a whole. In this talk I’ll present the findings of a project that evaluated the reproducibility of Nixpkgs as a whole by mass rebuilding packages from revisions between 2017 and 2023 and comparing the results with the NixOS cache.” Unfortunately, no video of the talk is available, but there is a blog and article on the results.


Lastly, Simon Tournier presented in the Open Research track on the confluence of GNU Guix and Software Heritage: Source Code Archiving to the Rescue of Reproducible Deployment. Simon’s talk “describes design and implementation we came up and reports on the archival coverage for package source code with data collected over five years. It opens to some remaining challenges toward a better open and reproducible research.” The slides for the talk are available, as is the full video (23m17s).


Reproducible Builds at PyCascades 2025

Vagrant Cascadian presented at this year’s PyCascades conference, which was held on February 8th and 9th in Portland, OR, USA. PyCascades is a regional instance of PyCon held in the Pacific Northwest. Vagrant’s talk, entitled Re-Py-Ducible Builds, caught the audience’s attention with the following abstract:

Crank your Python best practices up to 11 with Reproducible Builds! This talk will explore Reproducible Builds by highlighting issues identified in Python projects, from the simple to the seemingly inscrutable. Reproducible Builds is basically the crazy idea that when you build something, and you build it again, you get the exact same thing… or even more important, if someone else builds it, they get the exact same thing too.

More info is available on the talk’s page.


“Does Functional Package Management Enable Reproducible Builds at Scale?”

On our mailing list last month, Julien Malka, Stefano Zacchiroli and Théo Zimmermann of Télécom Paris’ in-house research laboratory, the Information Processing and Communications Laboratory (LTCI) announced that they had published an article asking the question: Does Functional Package Management Enable Reproducible Builds at Scale? (PDF).

This month, however, Ludovic Courtès followed up to the original announcement on our mailing list mentioning, amongst other things, the Guix Data Service and how it shows the reproducibility of GNU Guix over time, as described in a GNU Guix blog post back in March 2024.


reproduce.debian.net updates

The last few months have seen the introduction of reproduce.debian.net. Announced first at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project.

Powering this work is rebuilderd, our server which monitors the official package repositories of Linux distributions and attempts to reproduce the observed results there. This month, Holger Levsen:

  • Split packages that are not specific to any architecture away from the amd64.reproducible.debian.net service into a new all.reproducible.debian.net page.

  • Increased the number of riscv64 nodes to a total of 4, and added a new amd64 node thanks to our (now 10-year) sponsor, IONOS.

  • Discovered an issue in the Debian build service where some new ‘incoming’ build-dependencies do not end up historically archived.

  • Uploaded the devscripts package, incorporating changes from Jochen Sprickerhof to the debrebuild script — specifically to fix the handling of the Rules-Requires-Root header in Debian source packages.

  • Uploaded a number of Rust dependencies of rebuilderd (rust-libbz2-rs-sys, rust-actix-web, rust-actix-server, rust-actix-http, rust-actix-web-codegen and rust-time-tz) after they were prepared by kpcyrd:

Jochen Sprickerhof also updated the sbuild package to:

  • Obey requests from the user/developer for a different temporary directory.
  • Use the root/superuser for some values of Rules-Requires-Root.
  • Don’t pass --root-owner-group to old versions of dpkg.

… and additionally requested that many Debian packages are rebuilt by the build servers in order to work around bugs found on reproduce.debian.net. [][][]


Lastly, kpcyrd has also worked towards getting rebuilderd packaged in NixOS, and Jelle van der Waa picked up the existing pull request for Fedora support within rebuilderd and made it work with the existing Koji rebuilderd script. The server is being packaged for Fedora in an unofficial ‘copr’ repository, and will be in the official repositories after all the dependencies are packaged.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Distribution work

There has been the usual work in various distributions this month, such as:

In Debian, 17 reviews of Debian packages were added, 6 were updated and 8 were removed this month adding to our knowledge about identified issues.


Fedora developers Davide Cavalca and Zbigniew Jędrzejewski-Szmek gave a talk on Reproducible Builds in Fedora (PDF), touching on SRPM-specific issues as well as the current status and future plans.


Thanks to an investment from the Sovereign Tech Agency, the FreeBSD project’s work on unprivileged and reproducible builds continued this month. Notable fixes include:


The Yocto Project has been struggling to upgrade to the latest Go and Rust releases due to reproducibility problems in the newer versions. Hongxu Jia tracked down the issue with Go which meant that the project could upgrade from the 1.22 series to 1.24, with the fix being submitted upstream for review (see above). For Rust, however, the project was significantly behind, but has made recent progress after finally identifying the blocking reproducibility issues. At time of writing, the project is at Rust version 1.82, with patches under review for 1.83 and 1.84 and fixes being discussed with the Rust developers. The project hopes to improve the tests for reproducibility in the Rust project itself in order to try and avoid future regressions.

Yocto continues to maintain its ability to binary reproduce all of the recipes in OpenEmbedded-Core, regardless of the build host distribution or the current build path.


Finally, Douglas DeMaio published an article on the openSUSE blog announcing that the Reproducible-openSUSE (RBOS) Project Hits [Significant] Milestone. In particular:

The Reproducible-openSUSE (RBOS) project, which is a proof-of-concept fork of openSUSE, has reached a significant milestone after demonstrating a usable Linux distribution can be built with 100% bit-identical packages.

This news was also announced on our mailing list by Bernhard M. Wiedemann, who also published another report for openSUSE as well.


diffoscope & strip-nondeterminism

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 288 and 289 to Debian:

  • Add asar to DIFFOSCOPE_FAIL_TESTS_ON_MISSING_TOOLS in order to address Debian bug #1095057. []
  • Catch a CalledProcessError when calling html2text. []
  • Update the minimal Black version. []

Additionally, Vagrant Cascadian updated diffoscope in GNU Guix to version 287 [][] and 288 [][] as well as submitted a patch to update to 289 []. Vagrant also fixed an issue that was breaking reprotest on Guix [][].

strip-nondeterminism is our sister tool to remove specific non-deterministic results from a completed build. This month version 1.14.1-2 was uploaded to Debian unstable by Holger Levsen.


Website updates

There were a large number of improvements made to our website this month, including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen, including:

  • reproduce.debian.net-related:

    • Add a helper script to manually schedule packages. [][][][][]
    • Fix a link in the website footer. []
    • Strip the “💠🍥♻” emojis from package names on the manual rebuilder in order to ease copy-and-paste. []
    • On the various statistics pages, provide the number of affected source packages [][] as well as provide various totals [][].
    • Fix graph labels for the various architectures [][] and make them clickable too [][][].
    • Break the displayed HTML in blocks of 256 packages in order to address rendering issues. [][]
    • Add monitoring jobs for riscv64 architecture nodes and integrate them elsewhere in our infrastructure. [][]
    • Add riscv64 architecture nodes. [][][][][]
    • Update much of the documentation. [][][]
    • Make a number of improvements to the layout and style. [][][][][][][]
    • Remove direct links to JSON and database backups. []
    • Drop a Blues Brothers reference from frontpage. []
  • Debian-related:

    • Deal with /boot/vmlinuz* being called vmlinux* on the riscv64 architecture. []
    • Add a new ionos17 node. [][][][][]
    • Install debian-repro-status on all Debian trixie and unstable jobs. []
  • FreeBSD-related:

    • Switch to running the latest branch of FreeBSD. []
  • Misc:

    • Fix /etc/cron.d and /etc/logrotate.d permissions for Jenkins nodes. []
    • Add support for riscv64 architecture nodes. [][]
    • Grant Jochen Sprickerhof access to the o4 node. []
    • Disable the janitor-setup-worker. [][]

In addition:

  • kpcyrd fixed the /all/api/ API endpoints on reproduce.debian.net by altering the nginx configuration. []

  • James Addison updated reproduce.debian.net to display the so-called ‘bad’ reasons hyperlink inline [] and merged the “Categorized issues” links into the “Reproduced builds” column [].

  • Jochen Sprickerhof also made some reproduce.debian.net-related changes, adding support for detecting a bug in the mmdebstrap package [] as well as updating some documentation [].

  • Roland Clobus continued their work on reproducible ‘live’ images for Debian, making changes related to new clustering of jobs in openQA. []

And finally, both Holger Levsen [][][] and Vagrant Cascadian performed significant node maintenance. [][][][][]


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

05 March, 2025 01:31PM

Dima Kogan

Shop scheduling with PuLP

I recently used the PuLP modeler to solve a work scheduling problem to assign workers to shifts. Here are notes about doing that. This is a common use case, but isn't explicitly covered in the case studies in the PuLP documentation.

Here's the problem:

  • We are trying to put together a schedule for one week
  • Each day has some set of work shifts that need to be staffed
  • Each shift must be staffed with exactly one worker
  • The shift schedule is known beforehand, and the workers each declare their preferences beforehand: they mark each shift in the week as one of:
    • PREFERRED (if they want to be scheduled on that shift)
    • NEUTRAL
    • DISFAVORED (if they don't love that shift)
    • REFUSED (if they absolutely cannot work that shift)

The tool is supposed to allocate workers to the shifts to try to cover all the shifts, give everybody work, and try to match their preferences. I implemented the tool:

#!/usr/bin/python3

import sys
import os
import re

def report_solution_to_console(vars):
    for w in days_of_week:
        annotation = ''
        if human_annotate is not None:
            for s in shifts.keys():
                m = re.match(rf'{w} - ', s)
                if not m: continue
                if vars[human_annotate][s].value():
                    annotation = f" ({human_annotate} SCHEDULED)"
                    break
            if not len(annotation):
                annotation = f" ({human_annotate} OFF)"

        print(f"{w}{annotation}")

        for s in shifts.keys():
            m = re.match(rf'{w} - ', s)
            if not m: continue

            annotation = ''
            if human_annotate is not None:
                annotation = f" ({human_annotate} {shifts[s][human_annotate]})"
            print(f"    ---- {s[m.end():]}{annotation}")

            for h in humans:
                if vars[h][s].value():
                    print(f"         {h} ({shifts[s][h]})")

def report_solution_summary_to_console(vars):
    print("\nSUMMARY")

    for h in humans:
        print(f"-- {h}")
        print(f"   benefit: {benefits[h].value():.3f}")

        counts = dict()
        for a in availabilities:
            counts[a] = 0

        for s in shifts.keys():
            if vars[h][s].value():
                counts[shifts[s][h]] += 1

        for a in availabilities:
            print(f"   {counts[a]} {a}")


human_annotate = None

days_of_week = ('SUNDAY',
                'MONDAY',
                'TUESDAY',
                'WEDNESDAY',
                'THURSDAY',
                'FRIDAY',
                'SATURDAY')

humans = ['ALICE', 'BOB',
          'CAROL', 'DAVID', 'EVE', 'FRANK', 'GRACE', 'HEIDI', 'IVAN', 'JUDY']

shifts = {'SUNDAY - SANDING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'DISFAVORED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'NEUTRAL'},
          'WEDNESDAY - SAWING 7:30 AM - 2:30 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED'},
          'THURSDAY - SANDING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED'},
          'SATURDAY - SAWING 7:30 AM - 2:30 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED',
           'GRACE': 'REFUSED'},
          'SUNDAY - SAWING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED'},
          'MONDAY - SAWING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED'},
          'TUESDAY - SAWING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED'},
          'WEDNESDAY - PAINTING 7:30 AM - 2:30 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'THURSDAY - SAWING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED'},
          'FRIDAY - SAWING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED'},
          'SATURDAY - PAINTING 7:30 AM - 2:30 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED',
           'GRACE': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SUNDAY - PAINTING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'MONDAY - PAINTING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'NEUTRAL',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'TUESDAY - PAINTING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'WEDNESDAY - SANDING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'DAVID': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'NEUTRAL',
           'EVE':   'REFUSED'},
          'THURSDAY - PAINTING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'FRIDAY - PAINTING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SATURDAY - SANDING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'DAVID': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED',
           'GRACE': 'REFUSED'},
          'SUNDAY - PAINTING 11:00 AM - 6:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'PREFERRED',
           'IVAN':  'NEUTRAL',
           'JUDY':  'NEUTRAL',
           'DAVID': 'REFUSED'},
          'MONDAY - PAINTING 12:00 PM - 7:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'PREFERRED',
           'IVAN':  'NEUTRAL',
           'JUDY':  'NEUTRAL',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'TUESDAY - PAINTING 12:00 PM - 7:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'HEIDI': 'REFUSED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'WEDNESDAY - PAINTING 12:00 PM - 7:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'PREFERRED',
           'EVE':   'REFUSED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'THURSDAY - PAINTING 12:00 PM - 7:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'FRIDAY - PAINTING 12:00 PM - 7:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SATURDAY - PAINTING 12:00 PM - 7:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'FRANK': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'HEIDI': 'REFUSED',
           'GRACE': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SUNDAY - SAWING 12:00 PM - 7:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'MONDAY - SAWING 2:00 PM - 9:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'TUESDAY - SAWING 2:00 PM - 9:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'WEDNESDAY - SAWING 2:00 PM - 9:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'THURSDAY - SAWING 2:00 PM - 9:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'FRIDAY - SAWING 2:00 PM - 9:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'SATURDAY - SAWING 2:00 PM - 9:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'HEIDI': 'REFUSED',
           'GRACE': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SUNDAY - PAINTING 12:15 PM - 7:15 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'NEUTRAL',
           'DAVID': 'REFUSED'},
          'MONDAY - PAINTING 2:00 PM - 9:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'DAVID': 'REFUSED'},
          'TUESDAY - PAINTING 2:00 PM - 9:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'WEDNESDAY - PAINTING 2:00 PM - 9:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'DAVID': 'REFUSED'},
          'THURSDAY - PAINTING 2:00 PM - 9:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'DAVID': 'REFUSED'},
          'FRIDAY - PAINTING 2:00 PM - 9:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'SATURDAY - PAINTING 2:00 PM - 9:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'GRACE': 'REFUSED',
           'DAVID': 'REFUSED'}}

availabilities = ['PREFERRED', 'NEUTRAL', 'DISFAVORED']



import pulp
prob = pulp.LpProblem("Scheduling", pulp.LpMaximize)

vars = pulp.LpVariable.dicts("Assignments",
                             (humans, shifts.keys()),
                             None,None, # bounds; unused, since these are binary variables
                             pulp.LpBinary)

# Everyone works at least 2 shifts
Nshifts_min = 2
for h in humans:
    prob += (
        pulp.lpSum([vars[h][s] for s in shifts.keys()]) >= Nshifts_min,
        f"{h} works at least {Nshifts_min} shifts",
    )

# each shift is ~ 8 hours, so I limit everyone to 40/8 = 5 shifts
Nshifts_max = 5
for h in humans:
    prob += (
        pulp.lpSum([vars[h][s] for s in shifts.keys()]) <= Nshifts_max,
        f"{h} works at most {Nshifts_max} shifts",
    )

# all shifts staffed and not double-staffed
for s in shifts.keys():
    prob += (
        pulp.lpSum([vars[h][s] for h in humans]) == 1,
        f"{s} is staffed",
    )

# each human can work at most one shift on any given day
for w in days_of_week:
    for h in humans:
        prob += (
            pulp.lpSum([vars[h][s] for s in shifts.keys() if re.match(rf'{w} ',s)]) <= 1,
            f"{h} cannot be double-booked on {w}"
        )


#### Some explicit constraints; as an example
# DAVID can't work any PAINTING shift and is off on Thu and Sun
h = 'DAVID'
prob += (
    pulp.lpSum([vars[h][s] for s in shifts.keys() if re.search(r'- PAINTING',s)]) == 0,
    f"{h} can't work any PAINTING shift"
)
prob += (
    pulp.lpSum([vars[h][s] for s in shifts.keys() if re.match(r'THURSDAY|SUNDAY',s)]) == 0,
    f"{h} is off on Thursday and Sunday"
)

# Do not assign any "REFUSED" shifts
for s in shifts.keys():
    for h in humans:
        if shifts[s][h] == 'REFUSED':
            prob += (
                vars[h][s] == 0,
                f"{h} is not available for {s}"
            )


# Objective. I try to maximize the "happiness". Each human sees each shift as
# one of:
#
#   PREFERRED
#   NEUTRAL
#   DISFAVORED
#   REFUSED
#
# I set a hard constraint to handle "REFUSED", and arbitrarily, I set these
# benefit values for the others
benefit_availability = dict()
benefit_availability['PREFERRED']  = 3
benefit_availability['NEUTRAL']    = 2
benefit_availability['DISFAVORED'] = 1

# Not used, since this is a hard constraint. But the code needs this to be a
# part of the benefit. I can ignore these in the code, but let's keep this
# simple
benefit_availability['REFUSED' ] = -1000

benefits = dict()
for h in humans:
    benefits[h] = \
        pulp.lpSum([vars[h][s] * benefit_availability[shifts[s][h]] \
                    for s in shifts.keys()])

benefit_total = \
    pulp.lpSum([benefits[h] \
                for h in humans])

prob += (
    benefit_total,
    "happiness",
)

prob.solve()

if pulp.LpStatus[prob.status] == "Optimal":
    report_solution_to_console(vars)
    report_solution_summary_to_console(vars)

The set of workers is in the humans variable, and the shift schedule and the workers' preferences are encoded in the shifts dict. The problem is defined by a vars dict of dicts, each element a boolean variable indicating whether a particular worker is scheduled for a particular shift. We define a set of constraints on these worker allocations to restrict ourselves to valid solutions. And among these valid solutions, we try to find the one that maximizes some benefit function, defined here as:

benefit_availability = dict()
benefit_availability['PREFERRED']  = 3
benefit_availability['NEUTRAL']    = 2
benefit_availability['DISFAVORED'] = 1

benefits = dict()
for h in humans:
    benefits[h] = \
        pulp.lpSum([vars[h][s] * benefit_availability[shifts[s][h]] \
                    for s in shifts.keys()])

benefit_total = \
    pulp.lpSum([benefits[h] \
                for h in humans])

So for instance each shift that was scheduled as somebody's PREFERRED shift gives us 3 benefit points. And if all the shifts ended up being PREFERRED, we'd have a total benefit value of 3*Nshifts. This is impossible, however, because that would violate some constraints in the problem.

The exact trade-off between the different preferences is set in the benefit_availability dict. With the above numbers, it's equally good for somebody to have a NEUTRAL shift and a day off as it is for them to have two DISFAVORED shifts (2 + 0 = 1 + 1). If we really want to encourage the program to work people as much as possible (days off discouraged), we'd want to raise the DISFAVORED value.

I run this program and I get:

....
Result - Optimal solution found

Objective value:                108.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.01
Time (Wallclock seconds):       0.01

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.02   (Wallclock seconds):       0.02

SUNDAY
    ---- SANDING 9:00 AM - 4:00 PM
         EVE (PREFERRED)
    ---- SAWING 9:00 AM - 4:00 PM
         IVAN (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         FRANK (PREFERRED)
    ---- PAINTING 11:00 AM - 6:00 PM
         HEIDI (PREFERRED)
    ---- SAWING 12:00 PM - 7:00 PM
         ALICE (PREFERRED)
    ---- PAINTING 12:15 PM - 7:15 PM
         CAROL (PREFERRED)
MONDAY
    ---- SAWING 9:00 AM - 4:00 PM
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         IVAN (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         GRACE (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM
         ALICE (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         HEIDI (NEUTRAL)
TUESDAY
    ---- SAWING 9:00 AM - 4:00 PM
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         EVE (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         FRANK (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         HEIDI (NEUTRAL)
WEDNESDAY
    ---- SAWING 7:30 AM - 2:30 PM
         DAVID (PREFERRED)
    ---- PAINTING 7:30 AM - 2:30 PM
         IVAN (PREFERRED)
    ---- SANDING 9:45 AM - 4:45 PM
         FRANK (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         JUDY (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         ALICE (NEUTRAL)
THURSDAY
    ---- SANDING 9:00 AM - 4:00 PM
         GRACE (PREFERRED)
    ---- SAWING 9:00 AM - 4:00 PM
         CAROL (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         EVE (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         JUDY (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         ALICE (NEUTRAL)
FRIDAY
    ---- SAWING 9:00 AM - 4:00 PM
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         FRANK (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         GRACE (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         HEIDI (NEUTRAL)
SATURDAY
    ---- SAWING 7:30 AM - 2:30 PM
         CAROL (PREFERRED)
    ---- PAINTING 7:30 AM - 2:30 PM
         IVAN (PREFERRED)
    ---- SANDING 9:45 AM - 4:45 PM
         DAVID (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         FRANK (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM
         ALICE (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         BOB (NEUTRAL)

SUMMARY
-- ALICE
   benefit: 13.000
   3 PREFERRED
   2 NEUTRAL
   0 DISFAVORED
-- BOB
   benefit: 14.000
   4 PREFERRED
   1 NEUTRAL
   0 DISFAVORED
-- CAROL
   benefit: 9.000
   3 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- DAVID
   benefit: 15.000
   5 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- EVE
   benefit: 9.000
   3 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- FRANK
   benefit: 13.000
   3 PREFERRED
   2 NEUTRAL
   0 DISFAVORED
-- GRACE
   benefit: 8.000
   2 PREFERRED
   1 NEUTRAL
   0 DISFAVORED
-- HEIDI
   benefit: 9.000
   1 PREFERRED
   3 NEUTRAL
   0 DISFAVORED
-- IVAN
   benefit: 12.000
   4 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- JUDY
   benefit: 6.000
   2 PREFERRED
   0 NEUTRAL
   0 DISFAVORED

So we have a solution! We have 108 total benefit points. But it looks a bit uneven: Judy only works 2 days, while some people work many more: David works 5 for instance. Why is that? I update the program with human_annotate = 'JUDY', run it again, and it tells me more about Judy's preferences:

Objective value:                108.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.01
Time (Wallclock seconds):       0.01

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.01   (Wallclock seconds):       0.02

SUNDAY (JUDY OFF)
    ---- SANDING 9:00 AM - 4:00 PM (JUDY NEUTRAL)
         EVE (PREFERRED)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         IVAN (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED)
         FRANK (PREFERRED)
    ---- PAINTING 11:00 AM - 6:00 PM (JUDY NEUTRAL)
         HEIDI (PREFERRED)
    ---- SAWING 12:00 PM - 7:00 PM (JUDY PREFERRED)
         ALICE (PREFERRED)
    ---- PAINTING 12:15 PM - 7:15 PM (JUDY NEUTRAL)
         CAROL (PREFERRED)
MONDAY (JUDY OFF)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY NEUTRAL)
         IVAN (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY NEUTRAL)
         GRACE (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         ALICE (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         HEIDI (NEUTRAL)
TUESDAY (JUDY OFF)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED)
         EVE (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY REFUSED)
         FRANK (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY REFUSED)
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY REFUSED)
         HEIDI (NEUTRAL)
WEDNESDAY (JUDY SCHEDULED)
    ---- SAWING 7:30 AM - 2:30 PM (JUDY REFUSED)
         DAVID (PREFERRED)
    ---- PAINTING 7:30 AM - 2:30 PM (JUDY REFUSED)
         IVAN (PREFERRED)
    ---- SANDING 9:45 AM - 4:45 PM (JUDY NEUTRAL)
         FRANK (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY PREFERRED)
         JUDY (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         ALICE (NEUTRAL)
THURSDAY (JUDY SCHEDULED)
    ---- SANDING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         GRACE (PREFERRED)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         CAROL (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED)
         EVE (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY PREFERRED)
         JUDY (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         ALICE (NEUTRAL)
FRIDAY (JUDY OFF)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY DISFAVORED)
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY DISFAVORED)
         FRANK (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY DISFAVORED)
         GRACE (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY REFUSED)
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY REFUSED)
         HEIDI (NEUTRAL)
SATURDAY (JUDY OFF)
    ---- SAWING 7:30 AM - 2:30 PM (JUDY REFUSED)
         CAROL (PREFERRED)
    ---- PAINTING 7:30 AM - 2:30 PM (JUDY REFUSED)
         IVAN (PREFERRED)
    ---- SANDING 9:45 AM - 4:45 PM (JUDY REFUSED)
         DAVID (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY DISFAVORED)
         FRANK (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         ALICE (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         BOB (NEUTRAL)

SUMMARY
-- ALICE
   benefit: 13.000
   3 PREFERRED
   2 NEUTRAL
   0 DISFAVORED
-- BOB
   benefit: 14.000
   4 PREFERRED
   1 NEUTRAL
   0 DISFAVORED
-- CAROL
   benefit: 9.000
   3 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- DAVID
   benefit: 15.000
   5 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- EVE
   benefit: 9.000
   3 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- FRANK
   benefit: 13.000
   3 PREFERRED
   2 NEUTRAL
   0 DISFAVORED
-- GRACE
   benefit: 8.000
   2 PREFERRED
   1 NEUTRAL
   0 DISFAVORED
-- HEIDI
   benefit: 9.000
   1 PREFERRED
   3 NEUTRAL
   0 DISFAVORED
-- IVAN
   benefit: 12.000
   4 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- JUDY
   benefit: 6.000
   2 PREFERRED
   0 NEUTRAL
   0 DISFAVORED

This tells us that on Monday Judy does not work, although she marked the SAWING shift as PREFERRED. Instead David got that shift. What would happen if David gave that shift to Judy? He would lose 3 points, she would gain 3 points, and the total would remain exactly the same at 108.

How would we favor a more even distribution? We need some sort of tie-break. I want to add a nonlinearity to strongly disfavor people getting a low number of shifts. But PuLP is very explicitly a linear programming solver, and cannot solve nonlinear problems. Here we can get around this by enumerating each specific case and assigning it a nonlinear benefit. The most obvious approach is to define another set of boolean variables vars_Nshifts[human][N], and then use them to add extra benefit terms with values nonlinearly related to Nshifts. Something like this:

benefit_boost_Nshifts = \
    {2: -0.8,
     3: -0.5,
     4: -0.3,
     5: -0.2}
for h in humans:
    benefits[h] = \
        ... + \
        pulp.lpSum([vars_Nshifts[h][n] * benefit_boost_Nshifts[n] \
                    for n in benefit_boost_Nshifts.keys()])

So in the previous example we considered giving David's 5th shift to Judy, for her 3rd shift. In that scenario, David's extra benefit would change from -0.2 to -0.3 (a shift of -0.1), while Judy's would change from -0.8 to -0.5 (a shift of +0.3). So balancing out the shifts in this way would work: the solver would favor the solution with the higher benefit function.

Great. In order for this to work, we need the vars_Nshifts[human][N] variables to function as intended: they need to be binary indicators of whether a specific person has that many shifts or not. That would need to be implemented with constraints. Let's plot it like this:

#!/usr/bin/python3
import numpy as np
import gnuplotlib as gp

Nshifts_eq  = 4
Nshifts_max = 10

Nshifts = np.arange(Nshifts_max+1)
i0 = np.nonzero(Nshifts != Nshifts_eq)[0]
i1 = np.nonzero(Nshifts == Nshifts_eq)[0]

gp.plot( # True value: var_Nshifts4==0, Nshifts!=4
         ( np.zeros(i0.shape),
           Nshifts[i0],
           dict(_with     = 'points pt 7 ps 1 lc "red"') ),
         # True value: var_Nshifts4==1, Nshifts==4
         ( np.ones(i1.shape),
           Nshifts[i1],
           dict(_with     = 'points pt 7 ps 1 lc "red"') ),
         # False value: var_Nshifts4==1, Nshifts!=4
         ( np.ones(i0.shape),
           Nshifts[i0],
           dict(_with     = 'points pt 7 ps 1 lc "black"') ),
         # False value: var_Nshifts4==0, Nshifts==4
         ( np.zeros(i1.shape),
           Nshifts[i1],
           dict(_with     = 'points pt 7 ps 1 lc "black"') ),
        unset=('grid'),
        _set = (f'xtics ("(Nshifts=={Nshifts_eq}) == 0" 0, "(Nshifts=={Nshifts_eq}) == 1" 1)'),
        _xrange = (-0.1, 1.1),
        ylabel = "Nshifts",
        title = "Nshifts equality variable: not linearly separable",
        hardcopy = "/tmp/scheduling-Nshifts-eq.svg")

scheduling-Nshifts-eq.svg

So a hypothetical vars_Nshifts[h][4] variable (plotted on the x axis of this plot) would need to be defined by a set of ANDed linear constraints that linearly separate the true (red) values of this variable from the false (black) values. As the plot shows, this isn't possible, so this representation does not work.

How do we fix it? We can use inequality variables instead. I define a different set of variables vars_Nshifts_leq[human][N] that are 1 iff Nshifts <= N. The equality variable from before can be expressed as a difference of these inequality variables: vars_Nshifts[human][N] = vars_Nshifts_leq[human][N]-vars_Nshifts_leq[human][N-1]
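
For concreteness, here is a minimal sketch (not the actual program; the roster, the bounds and the variable names are stand-ins) of how these indicator variables could be declared in PuLP, and how the equality indicator falls out as a difference:

#!/usr/bin/python3
import pulp

humans = ('ALICE', 'JUDY')    # stand-in roster
bounds = (2, 3, 4, 5)         # the N in "Nshifts <= N"

# One binary indicator per person per bound; the constraints shown
# further down are what force it to be 1 iff that person works <= N shifts
vars_Nshifts_leq = \
    { h: { b: pulp.LpVariable(f"{h}_leq_{b}_shifts", cat='Binary')
           for b in bounds }
      for h in humans }

# (Nshifts == 3) for JUDY, as a linear expression of the indicators
var_Nshifts3_judy = \
    vars_Nshifts_leq['JUDY'][3] - vars_Nshifts_leq['JUDY'][2]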

Can these vars_Nshifts_leq variables be defined by a set of ANDed linear constraints? Yes:

#!/usr/bin/python3
import numpy as np
import numpysane as nps
import gnuplotlib as gp

Nshifts_leq = 4
Nshifts_max = 10

Nshifts = np.arange(Nshifts_max+1)
i0 = np.nonzero(Nshifts >  Nshifts_leq)[0]
i1 = np.nonzero(Nshifts <= Nshifts_leq)[0]

def linear_slope_yintercept(xy0,xy1):
    m = (xy1[1] - xy0[1])/(xy1[0] - xy0[0])
    b = xy1[1] - m * xy1[0]
    return np.array(( m, b ))
x01     = np.arange(2)
x01_one = nps.glue( nps.transpose(x01), np.ones((2,1)), axis=-1)
y_lowerbound = nps.inner(x01_one,
                         linear_slope_yintercept( np.array((0, Nshifts_leq+1)),
                                                  np.array((1, 0)) ))
y_upperbound = nps.inner(x01_one,
                         linear_slope_yintercept( np.array((0, Nshifts_max)),
                                                  np.array((1, Nshifts_leq)) ))
y_lowerbound_check = (1-x01) * (Nshifts_leq+1)
y_upperbound_check = Nshifts_max - x01*(Nshifts_max-Nshifts_leq)

gp.plot( # True value: var_Nshifts_leq4==0, Nshifts>4
         ( np.zeros(i0.shape),
           Nshifts[i0],
           dict(_with     = 'points pt 7 ps 1 lc "red"') ),
         # True value: var_Nshifts_leq4==1, Nshifts<=4
         ( np.ones(i1.shape),
           Nshifts[i1],
           dict(_with     = 'points pt 7 ps 1 lc "red"') ),
         # False value: var_Nshifts_leq4==1, Nshifts>4
         ( np.ones(i0.shape),
           Nshifts[i0],
           dict(_with     = 'points pt 7 ps 1 lc "black"') ),
         # False value: var_Nshifts_leq4==0, Nshifts<=4
         ( np.zeros(i1.shape),
           Nshifts[i1],
           dict(_with     = 'points pt 7 ps 1 lc "black"') ),

         ( x01, y_lowerbound, y_upperbound,
           dict( _with     = 'filledcurves lc "green"',
                 tuplesize = 3) ),
         ( x01, nps.cat(y_lowerbound_check, y_upperbound_check),
           dict( _with     = 'lines lc "green" lw 2',
                 tuplesize = 2) ),

        unset=('grid'),
        _set = (f'xtics ("(Nshifts<={Nshifts_leq}) == 0" 0, "(Nshifts<={Nshifts_leq}) == 1" 1)',
                'style fill transparent pattern 1'),
        _xrange = (-0.1, 1.1),
        ylabel = "Nshifts",
        title = "Nshifts inequality variable: linearly separable",
        hardcopy = "/tmp/scheduling-Nshifts-leq.svg")

scheduling-Nshifts-leq.svg

So we can use two linear constraints to make each of these variables work properly. To use these in the benefit function we can use the equality constraint expression from above, or we can use these directly:

# I want to favor people getting more extra shifts at the start to balance
# things out: somebody getting one more shift on their pile shouldn't take
# shifts away from under-utilized people
benefit_boost_leq_bound = \
    {2: .2,
     3: .3,
     4: .4,
     5: .5}

# Constrain vars_Nshifts_leq variables to do the right thing
for h in humans:
    for b in benefit_boost_leq_bound.keys():
        prob += (pulp.lpSum([vars[h][s] for s in shifts.keys()])
                 >= (1 - vars_Nshifts_leq[h][b])*(b+1),
                 f"{h} at most {b} shifts: lower bound")
        prob += (pulp.lpSum([vars[h][s] for s in shifts.keys()])
                 <= Nshifts_max - vars_Nshifts_leq[h][b]*(Nshifts_max-b),
                 f"{h} at most {b} shifts: upper bound")

benefits = dict()
for h in humans:
    benefits[h] = \
        ... + \
        pulp.lpSum([vars_Nshifts_leq[h][b] * benefit_boost_leq_bound[b] \
                    for b in benefit_boost_leq_bound.keys()])

In this scenario, David would get a boost of 0.4 from giving up his 5th shift, while Judy would lose a boost of 0.2 from getting her 3rd, for a net gain of 0.2 benefit points. The exact numbers will need to be adjusted on a case-by-case basis, but this works.

The full program, with this and other extra features is available here.

05 March, 2025 12:02PM by Dima Kogan

hackergotchi for Otto Kekäläinen

Otto Kekäläinen

Will decentralized social media soon go mainstream?


In today’s digital landscape, social media is more than just a communication tool — it is the primary medium for global discourse. Heads of state, corporate leaders and cultural influencers now broadcast their statements directly to the world, shaping public opinion in real time. However, the dominance of a few centralized platforms — X/Twitter, Facebook and YouTube — raises critical concerns about control, censorship and the monopolization of information. Those who control these networks effectively wield significant power over public discourse.

In response, a new wave of distributed social media platforms has emerged, each built on different decentralized protocols designed to provide greater autonomy, censorship resistance and user control. While Wikipedia maintains a comprehensive list of distributed social networking software and protocols, it does not cover recent blockchain-based systems, nor does it highlight which have the most potential for mainstream adoption.

This post explores the leading decentralized social media platforms and the protocols they are based on: Mastodon (ActivityPub), Bluesky (AT Protocol), Warpcast (Farcaster), Hey (Lens) and Primal (Nostr).

Comparison of architecture and mainstream adoption potential

Protocol | Identity System | Example | Storage model | Cost for end users | Potential
---------|-----------------|---------|---------------|--------------------|----------
Mastodon | Tied to server domain | @ottok@mastodon.social | Federated instances | Free (some instances charge) | High
Bluesky | Portable (DID) | ottoke.bsky.social | Federated instances | Free | Moderate
Farcaster | ENS (Ethereum) | @ottok | Blockchain + off-chain | Small gas fees | Moderate
Lens | NFT-based (Polygon) | @ottok | Blockchain + off-chain | Small gas fees | Niche
Nostr | Cryptographic keys | npub16lc6uhqpg6dnqajylkhwuh3j7ynhcnje508tt4v6703w9kjlv9vqzz4z7f | Federated instances | Free (some instances charge) | Niche

1. Mastodon (ActivityPub)

Screenshot of Mastodon

Mastodon was created in 2016 by Eugen Rochko, a German software developer who sought to provide a decentralized and user-controlled alternative to Twitter. It was built on the ActivityPub protocol, now standardized by the W3C Social Web Working Group, to allow users to join independent servers while still communicating across the broader Mastodon network.

Mastodon operates on a federated model, where multiple independently run servers communicate via ActivityPub. Each server sets its own moderation policies, leading to a decentralized but fragmented experience. The servers can alternatively be called instances, relays or nodes, depending on what vocabulary a protocol has standardized on.

  • Identity: User identity is tied to the instance where they registered, represented as @username@instance.tld.
  • Storage: Data is stored on individual instances, which federate messages to other instances based on their configurations.
  • Cost: Free to use, but relies on instance operators willing to run the servers.

The protocol defines multiple activities such as:

  • Creating a post
  • Liking
  • Sharing
  • Following
  • Commenting

Example Message in ActivityPub (JSON-LD Format)

json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Create",
  "actor": "https://mastodon.social/users/ottok",
  "object": {
    "type": "Note",
    "content": "Hello from #Mastodon!",
    "published": "2025-03-03T12:00:00Z",
    "to": ["https://www.w3.org/ns/activitystreams#Public"]
  }
}

Servers communicate by publishing activities to their followers or forwarding activities between servers. Standard HTTPS is used for server-to-server communication, and the messages use JSON-LD for data representation. The WebFinger protocol is used for user discovery. There is, however, no neat way to discover your home server yet. This means that if you are browsing e.g. Fosstodon and press Follow on a user, a dialog will pop up asking you to enter your own home server (e.g. mastodon.social), which then redirects you there so the Follow action can be executed with your account.
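
As an illustration, here is a minimal sketch (mine, not from the Mastodon codebase) of the WebFinger lookup (RFC 7033) that a server performs to discover a user:

#!/usr/bin/python3
# Ask the user's server for the acct: resource and list the profile links
import json
import urllib.parse
import urllib.request

acct = 'acct:ottok@mastodon.social'
url = ('https://mastodon.social/.well-known/webfinger?'
       + urllib.parse.urlencode({'resource': acct}))
with urllib.request.urlopen(url) as response:
    jrd = json.load(response)
for link in jrd['links']:
    print(link.get('rel'), link.get('href'))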

Mastodon is open source under the AGPL at github.com/mastodon/mastodon. Anyone can operate their own instance: it just requires running your own server, some skills to maintain a Ruby on Rails app with a PostgreSQL database backend, and a basic understanding of the protocol to configure federation with other ActivityPub instances.

Popularity: Already established, but will it grow more?

Mastodon has seen steady growth, especially after Twitter’s acquisition in 2022, with some estimates stating it peaked at 10 million users across thousands of instances. However, its fragmented user experience and the complexity of choosing instances have hindered mainstream adoption. Still, it remains the most established decentralized alternative to Twitter.

Note that Donald Trump’s Truth Social is based on the Mastodon software but does not federate with the ActivityPub network.

The ActivityPub protocol is the most widely used of its kind. Another popular ActivityPub service is Lemmy, a link-sharing service similar to Reddit. The larger ActivityPub ecosystem is called the Fediverse, and estimates put its total active user count at around 6 million.

2. Bluesky (AT Protocol)

Screenshot of Bluesky

Interestingly, Bluesky was conceived within Twitter in 2019 by Twitter founder Jack Dorsey. After being incubated as a Twitter-funded project, it spun off as an independent Public Benefit LLC in February 2022 and launched its public beta in February 2023.

Bluesky runs on top of the Authenticated Transfer (AT) Protocol published at https://github.com/bluesky-social/atproto. The protocol enables portable identities and data ownership, meaning users can migrate between platforms while keeping their identity and content intact. In practice, however, there is only one popular server at the moment, which is Bluesky itself.

  • Identity: Usernames are domain-based (e.g., @user.bsky.social).
  • Storage: Content is theoretically federated among various servers.
  • Cost: Free to use, but relies on instance operators willing to run the servers.
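
To illustrate the portable identity, here is a small sketch (assuming the public AppView endpoint at public.api.bsky.app and its com.atproto.identity.resolveHandle XRPC method) that resolves a handle to its stable DID:

#!/usr/bin/python3
# The DID stays the same even if the user later changes their handle
import json
import urllib.request

handle = 'ottoke.bsky.social'
url = ('https://public.api.bsky.app/xrpc/'
       'com.atproto.identity.resolveHandle?handle=' + handle)
with urllib.request.urlopen(url) as response:
    print(json.load(response)['did'])   # e.g. did:plc:...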

Example Message in AT Protocol (JSON Format)

json
{
  "repo": "did:plc:ottoke.bsky.social",
  "collection": "app.bsky.feed.post",
  "record": {
    "$type": "app.bsky.feed.post",
    "text": "Hello from Bluesky!",
    "createdAt": "2025-03-03T12:00:00Z",
    "langs": ["en"]
  }
}

Popularity: Hybrid approach may have business benefits?

Bluesky reported over 3 million users by 2024, probably getting traction due to its Twitter-like interface and Jack Dorsey’s involvement. Its hybrid approach — decentralized identity with centralized components — could make it a strong candidate for mainstream adoption, assuming it can scale effectively.

3. Warpcast (Farcaster Network)

Farcaster was launched in 2021 by Dan Romero and Varun Srinivasan, both former executives of the crypto exchange Coinbase, to create a decentralized but user-friendly social network. Built on the Ethereum blockchain, it could potentially offer a very attack-resistant communication medium.

However, in my own testing, Farcaster does not seem to fully leverage what Ethereum could offer. First of all, there is no diversity in programs implementing the protocol, as at the moment there is only Warpcast. In Warpcast, signup requires an initial 5 USD fee that is not payable in ETH, and users need to create a new wallet address on the Ethereum layer 2 network Base instead of simply reusing their existing Ethereum wallet address or ENS name.

Despite this, I can understand why Farcaster may have decided to start out like this. Having a single client program may be the best strategy initially. Matthew Hodgson, one of the founders of the decentralized chat protocol Matrix, shared in his FOSDEM 2025 talk that he slightly regrets focusing too much on developing the protocol instead of making sure the app using it is attractive to end users. So it may be sensible to ensure Warpcast gets popular first, before attempting to make the Farcaster protocol widely used.

As a protocol, Farcaster's hybrid approach makes it more scalable than fully on-chain networks, giving it a higher chance of mainstream adoption if it integrates seamlessly with broader Web3 ecosystems.

  • Identity: ENS (Ethereum Name Service) domains are used as usernames.
  • Storage: Messages are stored in off-chain hubs, while identity is on-chain.
  • Cost: Users must pay gas fees for some operations but reading and posting messages is mostly free.

Example Message in Farcaster (JSON Format)

json
{
 "fid": 766579,
 "username": "ottok",
 "custodyAddress": "0x127853e48be3870172baa4215d63b6d815d18f21",
 "connectedWallet": "0x3ebe43aa3ae5b891ca1577d9c49563c0cee8da88",
 "text": "Hello from Farcaster!",
 "publishedAt": 1709424000,
 "replyTo": null,
 "embeds": []
}

Popularity: Decentralized social media + decentralized payments a winning combo?

Ethereum founder Vitalik Buterin (warpcast.com/vbuterin) and many core developers are active on the platform. Warpcast, the main client for Farcaster, has seen increasing adoption, especially among Ethereum developers and Web3 enthusiasts. I too have a profile at warpcast.com/ottok. However, the user numbers are still very low, far from the critical mass needed for network effects to really take off.

Blockchain-based social media networks, particularly those built on Ethereum, are compelling because they leverage existing user wallets and persistent identities while enabling native payment functionality. When combined with decentralized content funding through micropayments, these blockchain-backed social networks could offer unique advantages that centralized platforms may find difficult to replicate, being decentralized both as a technical network and in a funding mechanism.

4. Hey.xyz (Lens Network)

The Lens Protocol was developed by the decentralized finance (DeFi) team Aave and launched in May 2022 to provide a user-owned social media network. While initially built on Polygon, it launched its own Layer 2 network, called the Lens Network, in February 2024. Lens is currently the main competitor to Farcaster.

Lens stores profile ownership and references on-chain, while content is stored on IPFS/Arweave, enabling composability with DeFi and NFTs.

  • Identity: Profile ownership is tied to NFTs on the Polygon blockchain.
  • Storage: Content references are on-chain, with the data on IPFS/Arweave (like NFTs).
  • Cost: Users must pay gas fees for some operations but reading and posting messages is mostly free.

Example Message in Lens (JSON Format)

json
{
 "profileId": "@ottok",
 "contentURI": "ar://QmExampleHash",
 "collectModule": "0x23b9467334bEb345aAa6fd1545538F3d54436e96",
 "referenceModule": "0x0000000000000000000000000000000000000000",
 "timestamp": 1709558400
}

Popularity: Probably not as a social media site, but maybe as a protocol?

The social media side of Lens is mainly the Hey.xyz website, which seems to have fewer users than Warpcast and is even further away from reaching critical mass for network effects. The Lens protocol, however, has a lot of advanced features, and it may gain adoption as a building block for many Web3 apps.

5. Primal.net (Nostr Network)

Nostr (Notes and Other Stuff Transmitted by Relays) was conceptualized in 2020 by an anonymous developer known as fiatjaf. One of the primary design tenets was censorship resistance, and the protocol is popular among Bitcoin enthusiasts, with Jack Dorsey being one of its public supporters. Unlike the Farcaster and Lens protocols, Nostr is not blockchain-based, but just a network of relay servers for message distribution. It does, however, use public-key cryptography for identities, similar to how wallets work in crypto.

  • Identity: Public-private key pairs define identity (with prefix npub...).
  • Storage: Content is federated among multiple servers, which in Nostr vocabulary are called relays.
  • Cost: No gas fees, but relies on relay operators willing to run the servers.

Example Message in Nostr (JSON Format)

json
{
 "id": "note1xyz...",
 "pubkey": "npub1...",
 "kind": 1,
 "content": "Hello from Nostr!",
 "created_at": 1709558400,
 "tags": [],
 "sig": "sig1..."
}

Popularity: If Jack Dorsey and Bitcoiners promote it enough?

Primal.net as a web app is pretty solid, but it does not stand out much. While Jack Dorsey has shown support by donating $1.5 million to the protocol development in December 2021, its success likely depends on broader adoption by the Bitcoin community.

Will any of these replace X/Twitter?

As usage patterns vary, the statistics are not fully comparable, but this snapshot of the situation in March 2025 gives a decent overview.

Platform | Total Accounts | Active Users | Growth Trend
---------|----------------|--------------|-------------
Mastodon | ~10 million | ~1 million | Steady
Bluesky | ~33 million | ~1 million | Steady
Nostr | ~41 million | ~20 thousand | Steady
Farcaster | ~850 thousand | ~50 thousand | Flat
Lens | ~140 thousand | ~20 thousand | Flat

Mastodon and Bluesky have already reached millions of users, while Lens and Farcaster are growing within crypto communities. It is however clear that none of these are anywhere close to how popular X/Twitter is. In particular, Mastodon had a huge influx of users in the fall of 2022 when Twitter was acquired, but to challenge the incumbents the growth would need to significantly accelerate. We can all accelerate this development by embracing decentralized social media now alongside existing dominant platforms.

Who knows: given the right circumstances, maybe X.com leadership decides to change the operating model and start federating content to break out of the walled garden model. The likelihood of such a development would increase if decentralized networks get popular and the incumbents feel they need to participate to avoid losing out.

Past and future

The idea of decentralized social media is not new. One early pioneer identi.ca launched in 2008, only two years after Twitter, using the OStatus protocol to promote decentralization. A few years later it evolved into pump.io with the ActivityPump protocol, and also forked into GNU Social that continued with OStatus. I remember when these happened, and that in 2010 also Diaspora launched with fairly large publicity. Surprisingly both of these still operate (I can still post both on identi.ca and diasp.org), but the activity fizzled out years ago. The protocol however survived partially and evolved into ActivityPub, which is now the backbone of the Fediverse.

The evolution of decentralized social media over the next decade will likely parallel developments in democracy, freedom of speech and public discourse. While the early 2010s emphasized maximum independence and freedom, the late 2010s saw growing support for content moderation to combat misinformation. The AI era introduces new challenges, potentially requiring proof-of-humanity verification for content authenticity.

Key factors that will determine success:

  • User experience and ease of onboarding
  • Network effects and critical mass of users
  • Integration with existing web3 infrastructure
  • Balance between decentralization and usability
  • Sustainable economic models for infrastructure

This is clearly an area of development worth monitoring closely, as the next few years may determine which protocol becomes the de facto standard for decentralized social communication.

05 March, 2025 12:00AM

March 04, 2025

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

Reverse Engineering (another) Restaurant Pager system 🍽️

Some of you may remember that I recently felt a bit underwhelmed by the last pager I reverse engineered – the Retekess TD-158, mostly due to how intuitive their design decisions were. It was pretty easy to jump to conclusions because they had made some pretty good decisions on how to do things.

I figured I’d spin the wheel again and try a new pager system – this time I went for a SU-68G-10 pager, since I recognized the form factor as another fairly common unit I’ve seen around town. Off to Amazon I went, bought a set, and got to work trying to track down the FCC filings on this model. I eventually found what seemed to be the right make/model, and it, once again, indicated that this system should be operating in the 433 MHz ISM band likely using OOK modulation. So, figured I’d start with the center of the band (again) at 433.92 MHz, take a capture, test my luck, and was greeted with a now very familiar sight.

Same as the last go-arounds, except the preamble here is a 0 symbol followed by 6-ish symbol durations of no data, followed by 25 bits of a packet. Careful readers will observe 26 symbols above after the preamble – I did too! The last 0 in the screenshot above is not actually a part of the packet – rather, it’s part of the next packet’s preamble. Each packet is packed in pretty tight.

By Hand Demodulation

Going off the same premise as last time, I figured I’d give it a manual demod and see what shakes out (again). This is now the third time I’ve run this play, so check out either of my prior two posts for a better written description of what’s going on here – I’ll skip all the details since I’d just be copy-pasting from those posts into here. Long story short, I demodulated a call for pager 1, a call for pager 10, and a power off command.

What | Bits
-----|-----
Call 1 | 1101111111100100100000000
Call 10 | 1101111111100100010100000
Off | 1101111111100111101101110

A few things jump out at me here – the first 14 bits are fixed (in my case, 11011111111001), which means some mix of preamble, system id, or other system-wide constant. Additionally, the last 9 bits also look like they are our pager – the 1 and 10 pager numbers (LSB bit order) jump right out (100000000 and 010100000, respectively). That just leaves the two remaining bits, which look to be the “action” – 00 for a “Call”, and 11 for a “Power off”. I don’t super love this since the command has two bits rather than one, the base station ID seems really long, and a 9-bit pager ID is just weird. Also, what is up with that power-off pager id? Weird. So, let’s go and see what we can do to narrow down and confirm things by hand.
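
As a quick sanity check on that reading, here's a small sketch (mine, not part of the original demod) that splits the three packets along the guessed field boundaries and decodes the LSB-first pager id:

#!/usr/bin/python3
packets = {
    'Call 1':  '1101111111100100100000000',
    'Call 10': '1101111111100100010100000',
    'Off':     '1101111111100111101101110',
}
for name, bits in packets.items():
    base_id, command, pager = bits[:14], bits[14:16], bits[16:]
    pager_id = int(pager[::-1], 2)   # LSB-first bit order
    print(f'{name}: base={base_id} cmd={command} pager_id={pager_id}')
# pager_id comes out as 1, 10, and 237 (0xED) respectively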

Testing bit flips

Rather than call it a day at that, I figured it was worth a bit of diligence to make sure it’s all correct – so we should try sending packets to my pagers and see how they react to different messages after flipping bits in parts of the packet.

I implemented a simple base station for the pagers using my Ettus B210mini, and threw together a simple OOK modulator and transmitter program which allows me to send specifically crafted test packets on frequency. Implementing the base station is pretty straightforward: because of the modulation of the signal (OOK), it’s mostly a matter of setting a buffer to 1 and 0 for where the carrier signal is on or off, timed to the sample rate, and sending that off to the radio. If you’re interested in a more detailed writeup on the steps involved, there’s a bit more in my christmas tree post.
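
To sketch what that buffer looks like (an illustration only, not the actual transmitter program; the sample rate is an assumption, and the timings are the ones worked out later in this post):

#!/usr/bin/python3
import numpy as np

sample_rate = 2_000_000     # 2 Msps, an assumed rate
symbol_s    = 1300e-6       # symbol duration
bits        = '1101111111100100100000000'   # the "Call 1" packet

spb      = int(symbol_s * sample_rate)       # samples per bit
preamble = np.concatenate([np.ones(int(325e-6 * sample_rate)),
                           np.zeros(int(8800e-6 * sample_rate))])
payload  = np.repeat(np.array([int(b) for b in bits], dtype=float), spb)
iq       = np.concatenate([preamble, payload]).astype(np.complex64)
# "iq" then gets handed to the radio's transmit streamer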

First off, I’d like to check the base id. I want to know if all the bits in what I’m calling the “base id” are truly part of the base station ID, or perhaps they have some other purpose (version, preamble?). I wound up following a three-step process for each bit of the base station id:

  • Starting with an unmodified call packet for the pager under test:
    • Flip the Nth bit, and transmit the call. See if the pager reacts.
    • Hold “SET”, and pair the pager with the new packet.
    • Transmit the call. See if the pager reacts.
    • After re-setting the ID, transmit the call with the physical base station, see if the pager reacts.
  • Starting with an unmodified off packet for the pager system:
    • Flip the Nth bit, transmit the off, see if the pager reacts.

What wound up happening is that changing any bit in the first 14 bits meant that the packet no longer worked with any pager until it was re-paired, at which point it began to work again. This likely means the first 14 bits are part of the base station ID – not static between base stations, nor some constant like a version number. All bits appear to be used.

I repeated the same process with the “command” bits, and found that only 11 and 00 caused the pagers to react for the pager ids I’ve tried.

I repeated this process one last time with the “pager id” bits, and found that the last bit in the packet isn’t part of the pager ID: it can be either a 1 or a 0, and the pager reacts as if it were a 0 either way. This means the last bit is unknown, but it has no impact on either a power off or a call, and all messages sent by my base station always have a 0 set there. It’s not clear if this is used by anything – likely not, since setting that bit doesn’t result in any change of behavior I can see yet.

Final Packet Structure

After playing around with flipping bits and testing, the final structure I was able to come up with based on behavior I was able to observe from transmitting hand-crafted packets and watching pagers buzz:

Field | Width
------|------
base id | 14 bits
command | 2 bits
pager id | 8 bits
??? | 1 bit
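
Reading that structure back, a small sketch (using the base id my pagers happened to be paired with) of how a packet can be assembled:

#!/usr/bin/python3
BASE_ID = '11011111111001'                # 14 bits, pairing-specific

def make_packet(pager_id, command='00'):  # '00' call, '11' off
    pid = format(pager_id, '08b')[::-1]   # 8-bit pager id, LSB first
    return BASE_ID + command + pid + '0'  # plus the unused trailing bit

assert make_packet(1)  == '1101111111100100100000000'
assert make_packet(10) == '1101111111100100010100000'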

Commands

The command section comes in two flavors – either a “call” or an “off” command.

Type | Id (2 bits) | Description
-----|-------------|------------
Call | 00 | Call the pager identified by the id in pager id
Off | 11 | Request pagers power off; pager id is always 10110111

As for the actual RF PHY characteristics, here are my best guesses at what’s going on with them:

What | Description
-----|------------
Center Frequency | 433.92 MHz
Modulation | OOK
Symbol Duration | 1300us
Bits | 25
Preamble | 325us of carrier, followed by 8800us of no carrier

I’m not 100% on the timings, but they appear to be close enough to work reliably. Same with the center frequency: it’s roughly right, but there may be a slight offset I’m missing.

Lingering Questions

This was all generally pretty understandable – another system with some good design decisions that wasn’t too bad to reverse engineer. This one was a bit more fun than the last, since there was a bit more ambiguity that needed some followup to confirm things – but still nothing crazy.

I am left with a few questions, though – which I’m kinda interested in understanding, but I’ll likely need a lot more data and/or original source:

Why is the “command” two bits here? This was a bit tough to understand because of the number of bits they have at their disposal – given the one last bit at the end of the packet that doesn’t seem to do anything, there’s no reason this couldn’t have been a 16 bit base station id, and an 8 bit pager id along with a single bit command (call or off).

When sending an “off” – why is power off that bit pattern? Other pager IDs don’t seem to work with “off”, so it has some meaning, but I’m not sure what that is. You press and hold 9 on the physical base station, but the code winds up coming out to 0xED, 237 or maybe -19 if it’s signed. I can’t quite figure out why it’s this value. Are there other codes?

Finally – what’s up with the last bit? Why is it 25 bits and not 24? It must take more work to process something that isn’t 8 bit aligned – and all for something that’s not being used!

04 March, 2025 03:00PM

March 03, 2025

hackergotchi for Bits from Debian

Bits from Debian

Bits from the DPL

Dear Debian community,

this is bits from DPL for February.

Ftpmaster team is seeking new team members

In December, Scott Kitterman announced his retirement from the project. I personally regret this, as I vividly remember his invaluable support during the Debian Med sprint at the start of the COVID-19 pandemic. He even took time off to ensure new packages cleared the queue in under 24 hours. I want to take this opportunity to personally thank Scott for his contributions during that sprint and for all his work in Debian.

With one fewer FTP assistant, I am concerned about the increased workload on the remaining team. I encourage anyone in the Debian community who is interested to consider reaching out to the FTP masters about joining their team.

If you're wondering about the role of the FTP masters, I'd like to share a fellow developer's perspective:

"My read on the FTP masters is:

  • In truth, they are the heart of the project.
  • They know it.
  • They do a fantastic job."

I fully agree and see it as part of my role as DPL to ensure this remains true for Debian's future.

If you're looking for a way to support Debian in a critical role where many developers will deeply appreciate your work, consider reaching out to the team. It's a great opportunity for any Debian Developer to contribute to a key part of the project.

Project Status: Six Months of Bug of the Day

In my Bits from the DPL talk at DebConf24, I announced the Tiny Tasks effort, which I intended to start with a Bug of the Day project. Another idea was an Autopkgtest of the Day, but this has been postponed due to limited time resources – I cannot run both projects in parallel.

The original goal was to provide small, time-bound examples for newcomers. To put it bluntly: in terms of attracting new contributors, it has been a failure so far. My offer to explain individual bug-fixing commits in detail, if needed, received no response, and despite my efforts to encourage questions, none were asked.

However, the project has several positive aspects: experienced developers actively exchange ideas, collaborate on fixing bugs, assess whether packages are worth fixing or should be removed, and work together to find technical solutions for non-trivial problems.

So far, the project has been engaging and rewarding every day, bringing new discoveries and challenges – not just technical, but also social. Fortunately, in the vast majority of cases, I receive positive responses and appreciation from maintainers. Even in the few instances where help was declined, it was encouraging to see that in two cases, maintainers used the ping as motivation to work on their packages themselves. This reflects the dedication and high standards of maintainers, whose work is essential to the project's success.

I once used the metaphor that this project is like wandering through a dark basement with a lone flashlight – exploring aimlessly and discovering a wide variety of things that have accumulated over the years. Among them are true marvels with popcon >10,000, ingenious tools, and delightful games that I only recently learned about. There are also some packages whose time may have come to an end – but each of them reflects the dedication and effort of those who maintained them, and that deserves the utmost respect.

Leaving aside the challenge of attracting newcomers, what have we achieved since August 1st last year?

  • Fixed more than one package per day, typically addressing multiple bugs.
  • Added and corrected numerous Homepage fields and watch files.
  • The most frequently patched issue was "Fails To Cross-Build From Source" (all including patches).
  • Migrated several packages from cdbs/debhelper to dh.
  • Rewrote many d/copyright files to DEP5 format and thoroughly reviewed them.
  • Integrated all affected packages into Salsa and enabled Salsa CI.
  • Approximately half of the packages were moved to appropriate teams, while the rest are maintained within the Debian or Salvage teams.
  • Regularly performed team uploads, ITS, NMUs, or QA uploads.
  • Filed several RoQA bugs to propose package removals where appropriate.
  • Reported multiple maintainers to the MIA team when necessary.

With some goodwill, you can see a slight impact on the trends.debian.net graphs (thank you Lucas for the graphs), but I would never claim that this project alone is responsible for the progress. What I have also observed is the steady stream of daily uploads to the delayed queue, demonstrating the continuous efforts of many contributors. This ongoing work often remains unseen by most – including myself, if not for my regular check-ins on this list. I would like to extend my sincere thanks to everyone pushing fixes there, contributing to the overall quality and progress of Debian's QA efforts.

If you examine the graphs for "Version Control System" and "VCS Hosting" with the goodwill mentioned above, you might notice a positive trend since mid-last year. The "Package Smells" category has also seen reductions in several areas: "no git", "no DEP5 copyright", "compat <9", and "not salsa". I'd also like to acknowledge the NMUers who have been working hard to address the "format != 3.0" issue. Thanks to all their efforts, this specific issue never surfaced in the Bug of the Day effort, but their contributions deserve recognition here.

The experience I gathered in this project taught me a lot and inspired some followup ideas that we should discuss at a sprint at DebCamp this year.

Finally, if any newcomer finds this information interesting, I'd be happy to slow down and patiently explain individual steps as needed. All it takes is asking questions on the Matrix channel to turn this into a "teaching by example" session.

By the way, for newcomers who are interested, I used quite a few abbreviations – all of which are explained in the Debian Glossary.

Sneak Peek at Upcoming Conferences

I will join two conferences in March – feel free to talk to me if you spot me there.

  1. FOSSASIA Summit 2025 (March 13-15, Bangkok, Thailand) Schedule: https://eventyay.com/e/4c0e0c27/schedule

  2. Chemnitzer Linux-Tage (March 22-23, Chemnitz, Germany) Schedule: https://chemnitzer.linux-tage.de/2025/de/programm/vortraege

Both events will have a Debian booth – come say hi!

Kind regards Andreas.

03 March, 2025 11:00PM by Andreas Tille

hackergotchi for Lisandro Damián Nicanor Pérez Meyer

Lisandro Damián Nicanor Pérez Meyer

Going to Embedded World 2025 in Nuremberg

This year I'll be participating of Embedded World 2025 in Nuremberg, representing the company I work for, ICS. You will be able to find me at the Automotive Grade Linux booth on hall 4, 4-209.

If you are around be sure to come and say hi, and why not, exchange PGP/GPG keys!

03 March, 2025 01:44PM by Lisandro Damián Nicanor Pérez Meyer

March 02, 2025

PGP/GPG transition from 0x6286A7D0 to 0xB48C1072

I am currently transitioning my PGP/GPG key from D/4096 0x12DDFA84AC23B2BBF04B313CAB645F406286A7D0 to D/4096 0xA94C9FBFA49AA7CD4F40BB9F5E9030CCB48C1072.

Let's put this in plain text, signed with both keys:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

- -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

I am currently transitioning my GPG/GPG key from D/4096 0x12DDFA84AC23B2BBF04B313CAB645F406286A7D0 to D/4096 0xA94C9FBFA49AA7CD4F40BB9F5E9030CCB48C1072.

This file is first signed with the new key and then with the old one.

- -----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEqUyfv6Sap81PQLufXpAwzLSMEHIFAmfE6RwACgkQXpAwzLSM
EHJpUBAAwMAbOwGcRiuX/aBjqDMA9HerRgimNWE9xA35Asg3F+A5/AFrBo+BDng3
jviCGxR6YdicSLZptaScLuRnqG1i/OcochGDxvHYVQ9I/G9SuHB7ylqD7zDnO5pw
Lldwx9jovkszgXMC+vs1E9tQ4vpuWNQ1I7q90rdikywhvNdNs8XUSCUNCLol5fzm
u64hcKex3pwt7wYs6TxtgO5DLpp//5Z6NoZ5f/esC0837zqy5Py6+7scN3tgRmXj
SyALlhfOCsy4+v22K5xk0VNelEWUg+VKqgMjPYbEfGQ3e4LXId6gGlKF+OuXCJX5
Eqi2leO/O3c+1MZ8LMh3YQft1/TmYktASMTdwV7Y87qMgVkXsJqIvw8d9VNlZvET
B3MMsuPK9VNKCokbSiHwB2ZQR235Hq6LPrBfMPnoVb5QzUgIk8Kz92wM3NWVAjzE
oj/660SZ7SfbBi6qmQyMjYKSKN+kSZazQfoUZo0fK1Y1mywN/XkeeV+gq/ZiYPhI
QLbjEfoeHEVcufgQCU0PvUuKr/+ud8BAwdH/9YWxYnObAzXFxgOJ9AvDqKxbD+rw
MVXCU4xMtNHHDqgZ+pSdB0br/bYtIqh1YsFfHw16lUgj9lcmfnujhl+h700pob6d
oArO0Bjb0bM9PTRRAn3CMiz2UeerBzY6gvaSnO3oBQc/UAx3RgA=
=r9Sr
- -----END PGP SIGNATURE-----

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEEt36hKwjsrvwSzE8q2RfQGKGp9AFAmfE6U8ACgkQq2RfQGKG
p9DEWA/+N1AtaPwVGRi3OTcC+mzjjVd3oB4H4E80559FCbWQLvbnlazCTgdVHxp5
Pjlm4I/hKYSaWNirUvE7Dq7LNWYYhZRBunXc/VrrX2fkxj99D+F9co5fXYO3fsQn
vlz1UZzq8OrvWJo5Cv65CkblQReB31SNY//gBk5SjaeL4bnH3qOLCn6gGrqIgkyj
qb8vQzk9ssb0b2P2hNJlkYQA20LUshyShyfnaAJuEtmDYp3F3fWfuyTPEznJZ0AJ
efxfkYqQIznY36Om8dW0ec5LI3Xb+Obj4ccfNhWBfVG4RKruKHEhQCDtZbMSGPDn
ns4yOl5cqbN/2Gqa/Ww+LafWPsa73NYQNDOIM2XhVFLf2wikGMnb2bew3iZrEBo5
BORucyd1sBFsdD2tXAZEaXBpuCU+7mI9bJz9Co2+NWf1+IDaKyvJSgl7cQxuUtd4
tp7mDB7Czf4yDK+QHqeWY46DtU0dlDpyOt2IijkJzhH6nL9cfo+W4JUFJrhd42Tr
fRqjt7WeGrauX+d8wfvVV/KFrCkuw51ojLAtztvH7iwDP85wAOu95AlT1kT4ZwlE
uEmdgtYE3GGwQKP2osndJZwic/tZuKrm7p5xFYJr8N95nsRNlk1ia4EkyvQbe49m
2+JHO8Q0EjUGfV2+bSw4Eupi6qEgWp2s4sIGpHEGzWYfNqmozWE=
=A5kI
-----END PGP SIGNATURE-----

The above can be found as a file here.

02 March, 2025 11:16PM by Lisandro Damián Nicanor Pérez Meyer

hackergotchi for Jonathan McDowell

Jonathan McDowell

RIP: Steve Langasek

[I’d like to stop writing posts like this. I’ve been trying to work out what to say now for nearly 2 months (writing the mail to -private to tell the Debian project about his death is one of the hardest things I’ve had to write, and I bottled out and wrote something that was mostly just factual, because it wasn’t the place), and I’ve decided I just have to accept this won’t be the post I want it to be, but posted is better than languishing in drafts.]

Last weekend I was in Portland, for the Celebration of Life of my friend Steve, who sadly passed away at the start of the year. It wasn’t entirely unexpected, but that doesn’t make it any easier.

I’ve struggled to work out what to say about Steve. I’ve seen many touching comments from others in Debian about their work with him, but what that’s mostly brought home to me is that while I met Steve through Debian, he was first and foremost my friend rather than someone I worked with in Debian. And so everything I have to say is more about that friendship (and thus feels a bit self-centred).

My first memory of Steve is getting lost with him in Porto Alegre, Brazil, during DebConf4. We’d decided to walk to a local mall to meet up with some other folk (I can’t recall how they were getting there, but it wasn’t walking), ended up deep in conversation (ISTR it was about shared library transitions), and then it took a bit longer than we expected. I don’t know how that managed to cement a friendship (neither of us saw it as the near death experience others feared we’d had), but it did.

Unlike others I never texted Steve much; we’d occasionally chat on IRC, but nothing major. That didn’t seem to matter when we actually saw each other in person though, we just picked up like we’d seen each other the previous week. DebConf became a recurring theme of when we’d see each other. Even outside DebConf we went places together. The first time I went somewhere in the US that wasn’t the Bay Area, it was to Portland to see Steve. He, and his family, came to visit me in Belfast a couple of times, and I did a road trip from Dublin to Cork with him. He took me to a volcano.

Steve saw injustice in the world and actually tried to do something about it. I still have a copy of the US constitution sitting on my desk that he gave me. He made me want to be a better person.

The world is a worse place without him in it, and while I am better for having known him, I am sadder for the fact he’s gone.

02 March, 2025 04:56PM

hackergotchi for Colin Watson

Colin Watson

Free software activity in February 2025

Most of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

OpenSSH

OpenSSH upstream released 9.9p2 with fixes for CVE-2025-26465 and CVE-2025-26466. I got a heads-up on this in advance from the Debian security team, and prepared updates for all of testing/unstable, bookworm (Debian 12), bullseye (Debian 11), buster (Debian 10, LTS), and stretch (Debian 9, ELTS). jessie (Debian 8) is also still in ELTS for a few more months, but wasn’t affected by either vulnerability.

Although I’m not particularly active in the Perl team, I fixed a libnet-ssleay-perl build failure because it was blocking openssl from migrating to testing, which in turn was blocking the above openssh fixes.

I also sent a minor sshd -T fix upstream, simplified a number of autopkgtests using the newish Restrictions: needs-sudo facility, and prepared for removing the obsolete slogin symlink.

PuTTY

I upgraded to the new upstream version 0.83.

GCC 15 build failures

I fixed build failures with GCC 15 in a few packages:

Python team

A lot of my Python team work is driven by its maintainer dashboard. Now that we’ve finished the transition to Python 3.13 as the default version, and inspired by a recent debian-devel thread started by Santiago, I thought it might be worth spending a bit of time on the “uscan error” section. uscan is typically scraping upstream web sites to figure out whether new versions are available, and so it’s easy for its configuration to become outdated or broken. Most of this work is pretty boring, but it can often reveal situations where we didn’t even realize that a Debian package was out of date. I fixed these packages:

  • cssutils (this in particular was very out of date due to a new and active upstream maintainer since 2021)
  • django-assets
  • django-celery-email
  • django-sass
  • django-yarnpkg
  • json-tricks
  • mercurial-extension-utils
  • pydbus
  • pydispatcher
  • pylint-celery
  • pyspread
  • pytest-pretty
  • python-apptools
  • python-django-libsass (contributed a packaging fix upstream in passing)
  • python-django-postgres-extra
  • python-django-waffle
  • python-ephemeral-port-reserve
  • python-ifaddr
  • python-log-symbols
  • python-msrest
  • python-msrestazure
  • python-netdisco
  • python-pathtools
  • python-user-agents
  • sinntp
  • wchartype

I upgraded these packages to new upstream versions:

  • cssutils (contributed a packaging tweak upstream)
  • django-iconify
  • django-sass
  • domdf-python-tools
  • extra-data (fixing a numpy 2.0 failure)
  • flufl.i18n
  • json-tricks
  • jsonpickle
  • mercurial-extension-utils
  • mod-wsgi
  • nbconvert
  • orderly-set
  • pydispatcher (contributed a Python 3.12 fix upstream)
  • pylint
  • pytest-rerunfailures
  • python-asyncssh
  • python-box (contributed a packaging fix upstream)
  • python-charset-normalizer
  • python-django-constance
  • python-django-guid
  • python-django-pgtrigger
  • python-django-waffle
  • python-djangorestframework-simplejwt
  • python-formencode
  • python-holidays (contributed a test fix upstream)
  • python-legacy-cgi
  • python-marshmallow-polyfield (fixing a test failure)
  • python-model-bakery
  • python-mrcz (fixing a numpy 2.0 failure)
  • python-netdisco
  • python-npe2
  • python-persistent
  • python-pkginfo (fixing a test failure)
  • python-proto-plus
  • python-requests-ntlm
  • python-roman
  • python-semantic-release
  • python-setproctitle
  • python-stdlib-list
  • python-trustme
  • python-typeguard (fixing a test failure)
  • python-tzlocal
  • pyzmq
  • setuptools-scm
  • sqlfluff
  • stravalib
  • tomopy
  • trove-classifiers
  • xhtml2pdf (fixing CVE-2024-25885)
  • xonsh
  • zodbpickle
  • zope.deprecation
  • zope.testrunner

In bookworm-backports, I updated python-django to 3:4.2.18-1 (issuing BSA-121) and added new backports of python-django-dynamic-fixture and python-django-pgtrigger, all of which are dependencies of debusine.

I went through all the build failures related to python-click 8.2.0 (which was confusingly tagged but not fully released upstream) and posted an analysis.

I fixed or helped to fix various other build/test failures:

I dropped support for the old setup.py ftest command from zope.testrunner upstream.

I fixed various odds and ends of bugs:

Installer team

Following up on last month, I merged and uploaded Helmut’s /usr-move fix.

02 March, 2025 01:49PM by Colin Watson

March 01, 2025

hackergotchi for Junichi Uekawa

Junichi Uekawa

Network is unreliable.

Network is unreliable. Seems like my router is trying to reconnect every 20 seconds after something triggers it.

01 March, 2025 10:01PM by Junichi Uekawa

hackergotchi for Debian Brasil

Debian Brasil

MiniDebConf Belo Horizonte 2024 - a brief report


title: MiniDebConf Belo Horizonte 2024 - a brief report description: by Paulo Henrique de Lima Santana (phls) published: true date: 2025-03-01T17:40:50.904Z tags: blog, english editor: markdown

dateCreated: 2024-06-06T09:00:00.000Z

From April 27th to 30th, 2024, MiniDebConf Belo Horizonte 2024 was held at the Pampulha Campus of UFMG - Federal University of Minas Gerais, in Belo Horizonte city.

MiniDebConf BH 2024 banners

This was the fifth time that a MiniDebConf (as an exclusive in-person event about Debian) took place in Brazil. Previous editions were in Curitiba (2016, 2017, and 2018) and in Brasília in 2023. We had other MiniDebConf editions held within Free Software events such as FISL and Latinoware, and other online events. See our event history.

Parallel to MiniDebConf, on the 27th (Saturday), FLISOL - the Latin American Free Software Installation Festival - took place. It is the largest event in Latin America promoting Free Software, and it has been held since 2005 simultaneously in several cities.

MiniDebConf Belo Horizonte 2024 was a success (as were previous editions) thanks to the participation of everyone, regardless of their level of knowledge about Debian. We value the presence of both beginner users who are familiarizing themselves with the system and the official project developers. The spirit of welcome and collaboration was present throughout the event.

MiniDebConf BH 2024 flisol

2024 edition numbers

During the four days of the event, several activities took place for all levels of users and collaborators of the Debian project. The official schedule was composed of:

  • 06 rooms in parallel on Saturday;
  • 02 auditoriums in parallel on Monday and Tuesday;
  • 30 talks/BoFs of all levels;
  • 05 workshops for hands-on activities;
  • 09 lightning talks on general topics;
  • 01 Live Electronics performance with Free Software;
  • Install fest to install Debian on attendees' laptops;
  • BSP (Bug Squashing Party);
  • Uploads of new or updated packages.

MiniDebConf BH 2024 palestra

The final numbers for MiniDebConf Belo Horizonte 2024 show that we had a record number of participants.

  • Total people registered: 399
  • Total attendees in the event: 224

Of the 224 participants, 15 were official Brazilian contributors: 10 DDs (Debian Developers) and 05 DMs (Debian Maintainers), in addition to several unofficial contributors.

The organization was carried out by 14 people who started working at the end of 2023, including Prof. Loïc Cerf from the Computing Department who made the event possible at UFMG, and 37 volunteers who helped during the event.

As MiniDebConf was held at UFMG facilities, we had the help of more than 10 University employees.

See the list with the names of people who helped in some way in organizing MiniDebConf Belo Horizonte 2024.

The difference between the number of people registered and the number of attendees is probably explained by the fact that there is no registration fee, so people who decide not to attend suffer no financial loss.

The 2024 edition of MiniDebconf Belo Horizonte was truly grand and shows the result of the constant efforts made over the last few years to attract more contributors to the Debian community in Brazil. With each edition the numbers only increase, with more attendees, more activities, more rooms, and more sponsors/supporters.

MiniDebConf BH 2024 grupo

MiniDebConf BH 2024 grupo

Activities

The MiniDebConf schedule was intense and diverse. On the 27th, 29th and 30th (Saturday, Monday and Tuesday) we had talks, discussions, workshops and many practical activities.

MiniDebConf BH 2024 palestra

On the 28th (Sunday), the Day Trip took place, a day dedicated to sightseeing around the city. In the morning we left the hotel and went, on a chartered bus, to the Belo Horizonte Central Market. People took the opportunity to buy various things such as cheeses, sweets, cachaças and souvenirs, as well as tasting some local foods.

MiniDebConf BH 2024 mercado

After a 2-hour tour of the Market, we got back on the bus and hit the road for lunch at a typical Minas Gerais food restaurant.

MiniDebConf BH 2024 palestra

With everyone well fed, we returned to Belo Horizonte to visit the city's main tourist attraction: Lagoa da Pampulha and Capela São Francisco de Assis, better known as Igrejinha da Pampulha.

MiniDebConf BH 2024 palestra

We went back to the hotel, and the day ended in the hacker space that we set up in the events room for people to chat, package, and eat pizzas.

MiniDebConf BH 2024 palestra

Crowdfunding

For the third time we ran a crowdfunding campaign and it was incredible how people contributed! The initial goal was to raise the amount equivalent to a gold tier of R$ 3,000.00. When we reached this goal, we defined a new one, equivalent to one gold tier + one silver tier (R$ 5,000.00). And again we achieved this goal. So we proposed as a final goal the value of a gold + silver + bronze tiers, which would be equivalent to R$ 6,000.00. The result was that we raised R$7,239.65 (~ USD 1,400) with the help of more than 100 people!

Thank you very much to the people who contributed any amount. As a thank you, we list the names of the people who donated.

MiniDebConf BH 2024 doadores

Food, accommodation and/or travel grants for participants

Each edition of MiniDebConf brought some innovation, or some different benefit for the attendees. In this year's edition in Belo Horizonte, as with DebConfs, we offered bursaries for food, accommodation and/or travel to help those people who would like to come to the event but who would need some kind of help.

In the registration form, we included the option for the person to request a food, accommodation and/or travel bursary, but to do so, they would have to identify themselves as a contributor (official or unofficial) to Debian and write a justification for the request.

Number of people benefited:

  • Food: 69
  • Accommodation: 20
  • Travel: 18

The food bursary provided lunch and dinner every day. Lunches included attendees who live in Belo Horizonte and the region; dinners were covered for attendees who also received accommodation and/or travel bursaries. The accommodation was at the BH Jaraguá Hotel, and the travel bursaries covered airplane or bus tickets, or fuel (for those who came by car or motorbike).

Much of the money to fund the bursaries came from the Debian Project, mainly for travel. We sent a budget request to the former Debian leader Jonathan Carter, and he promptly approved our request.

In addition to this event budget, the leader also approved individual requests sent by some DDs who preferred to request directly from him.

The experience of offering the bursaries was really good because it allowed several people to come from other cities.

MiniDebConf BH 2024 grupo

Photos and videos

You can watch recordings of the talks at the links below:

Thanks

We would like to thank all the attendees, organizers, volunteers, sponsors and supporters who contributed to the success of MiniDebConf Belo Horizonte 2024.

MiniDebConf BH 2024 grupo

Sponsors

Gold:

Silver:

Bronze:

Organizers

01 March, 2025 05:40PM

Debian Day 2024 in Santa Maria - Brazil

by Andrew Gonçalves

Debian Day in Santa Maria - RS 2024 was held after a 5-year hiatus since the previous edition of the event. It took place on the morning of August 16, in the Blue Hall of the Franciscan University (UFN), with support from the Debian community and the Computing Practices Laboratory of UFN.

The event was attended by students from all semesters of the Computer Science, Digital Games and Information Systems courses, and we had the opportunity to talk to the participants.

Around 60 students attended a lecture introducing them to Free and Open Source Software and Linux, and were introduced to the Debian project: the philosophy of the project, how it works in practice, and the opportunities that being part of Debian has opened up for participants.

After the talk, a packaging demonstration was given by local DD Francisco Vilmar, who demonstrated in practice how software packaging works in Debian.

I would like to thank all the people who helped us:

  • Debian Project
  • Professor Ana Paula Canal (UFN)
  • Professor Sylvio André Garcia (UFN)
  • Laboratory of Computing Practices
  • Francisco Vilmar (local DD)

And thanks to all the participants who attended this event asking intriguing questions and taking an interest in the world of Free Software.

Photos:

DD em Santa Maria 1 DD em Santa Maria 2 DD em Santa Maria 3 DD em Santa Maria 4

01 March, 2025 05:39PM

Debian Day 2024 in Pouso Alegre - Brazil

by Thiago Pezzo and Giovani Ferreira

Local celebrations of Debian Day 2024 also happened in [Pouso Alegre, MG, Brazil](https://www.openstreetmap.org/relation/315431). This year we managed to organize two days of lectures!

On the 14th of August 2024, Wednesday morning, we were at the [Federal Institute of Education, Science and Technology of the South of Minas Gerais](https://portal.ifsuldeminas.edu.br/index.php) (IFSULDEMINAS), Pouso Alegre campus. We gave an introductory presentation on the Debian Project, the operating system and the community, for all three years of the Technical Course in Informatics (professional high school). The event was restricted to IFSULDEMINAS students and we talked to around 60 people.

On August 17th, 2024, a Saturday morning, we held an event open to the community at the University of the Sapucaí Valley (Univás), with institutional support from the Information Systems Course. We spoke about the Debian Project with Giovani Ferreira (Debian Developer); about the Debian pt_BR translation team with Thiago Pezzo; about everyday experiences using free software with Virginia Cardoso; and about how to set up a development environment ready for production using Debian and Docker with Marcos António dos Santos. After the lectures, snacks, coffee and cake were served, while the participants talked, asked questions and shared experiences.

We would like to thank all the people who have helped us:

  • Michelle Nery (IFSULDEMINAS) and André Martins (UNIVÁS) for the help with the local organization
  • Paulo Santana (Debian Brazil) for the general organization
  • Virginia Cardoso, Giovani Ferreira, Marco António and Thiago Pezzo for the lectures
  • And a special thanks to all of you who participated in our celebration

Some pictures from Pouso Alegre:

Presentation at IFSULDEMINAS Pouso Alegre campus 1 Presentation at IFSULDEMINAS Pouso Alegre campus 2 Presentation at UNIVÁS Fátima campus 1 Presentation at UNIVÁS Fátima campus 2 Presentation at UNIVÁS Fátima campus 3 Presentation at UNIVÁS Fátima campus 4

01 March, 2025 05:39PM

Debian Day 30 years in São Carlos - Brazil

This year's Debian Day was a pretty special one: we are celebrating 30 years! Given the importance of this event, the Brazilian community planned a very special week. Instead of only local gatherings, we had a week of online talks streamed via Debian Brazil's YouTube channel (soon the recordings will be uploaded to Debian's PeerTube instance). Nonetheless, local celebrations happened around the country, and I organized one in São Carlos with the help of GELOS, the FLOSS group at the University of São Paulo. The event happened on August 19th and went on the whole afternoon. We had some talks about Debian and free software (see the table below), a coffee break where we had the chance to talk, and finished with a group photo (check this one and many others below). Actually, it wasn't the end: we carried on the conversation about Debian and free software in a local bar :-)

We had around 30 people in the event and reached a greater audience via the announcements across the university's press releases and emails sent to our Brazilian mailing lists. You can check some of them below.

Time  | Author                          | Title
------|---------------------------------|------------------------------------------------------
14:10 | GELOS                           | Intro to GELOS
14:30 | Carlos Melara (Charles)         | A ~~not so~~ Brief Introduction to the Debian Project
15:15 | Guilherme Paixão                | Debian and the Free Culture
15:45 | zé                              | Free Software: the paths to a free life
16:15 | --                              | Coffee Break
17:15 | Prof. Dr. Francisco José Monaco | The FOSS Ecosystem and You

Here are some photos taken during the event!

Preparation for Debian Day Getting things ready for the event.

Intro to GELOS Intro to GELOS talk.

Debian intro A ~~not so~~ Brief Introduction to the Debian Project talk.

Everyone knows Debian Everyone already knew Debian!

Free software Debian and the Free Culture talk

Auditorium People in the auditorium space.

Free software Free Software: the paths to a free life talk

Coffee Break Coffee Break.

FOSS Ecosystem The FOSS Ecosystem and You talk.

Group photo Group photo.

Celebration afterwards Celebration goes on in the bar.

01 March, 2025 05:39PM

hackergotchi for Guido Günther

Guido Günther

Free Software Activities February 2025

Another short status update of what happened on my side last month. One larger block was the Phosh 0.45 release; reviews also took a considerable amount of time. On the fun side, debugging bananui and coming up with a fix in phoc, as well as setting up a small GSM network using osmocom to test more Cell Broadcast thingies, were likely the most fun parts.

phosh

  • Release 0.45~beta1, 0.45~rc1, 0.45.0
  • Don't hide player when track is stopped (MR) - helps with e.g. Shortwave
  • Fetch cover art via http (MR)
  • Update CI images (MR)
  • Robustify symbol file generation (MR)
  • Handle cutouts in the indicators area (MR)
  • Reduce flicker when opening overview (MR)
  • Select less noisy default background (MR)

phoc

  • Release 0.45~beta1, 0.45~rc1, 0.45.0
  • Add support for ext-foreign-toplevel-v1 (MR)
  • Keep wlroots-0.19.x in shape and add support for ext-image-copy-capture-v1 (MR)
  • Fix geometry with scale when rendering to a buffer (MR)
  • Allow to tweak log domains at runtime (MR)
  • Print more useful information on startup (MR)
  • Provide PID of toplevel for phosh (MR)
  • Improve detection for hardware keyboards (MR) (mostly to help bananui)
  • Make tests a bit more flexible (MR)
  • Use wlr_damage_ring_rotate_buffer (MR). Another prep for 0.19.x.
  • Support wp-alpha-modifier-v1 protocol (MR)

phosh-osk-stub

phosh-tour

phosh-mobile-settings

pfs

  • Add common checks and check meson files (MR)

libphosh-rs

meta-phosh

  • Add common dot files and job to check meson formatting (MR)
  • Add l10n modules to string freeze announcement (based on suggestion by Alexandre Franke) (MR)
  • Bring over mk-gitlab-rel and improve it for alpha, beta, RCs (MR)

libcmatrix

Debian

  • Upload phoc 0.45~beta1, 0.45~rc1, 0.45.0
  • Upload phosh 0.45~beta1, 0.45~rc1, 0.45.0
  • Upload feedbackd 0.7.0
  • Upload xdg-desktop-portal-phosh 0.45.0
  • Upload phosh-tour 0.45~rc1, 0.45.0
  • Upload phosh-osk-stub 0.45~rc1, 0.45.0
  • Upload phosh-mobile-settings 0.45~rc1, 0.45.0
  • phosh: Fix dependencies of library dev package (MR) (and add a test)
  • Update libphosh-rs to 0.0.6 (MR)
  • Update iio-sensor-proxy to 3.6 (MR)
  • Backport qbootctl RDONLY patch (MR) to make generating the boot image more robust
  • libssc: Update to 0.2.1 (MR)
  • dom-tools: Write errors to stderr (MR)
  • dom-tools: Use underscored version to drop the branch ~ (MR)
  • libmbim: Upload 1.31.6 to experimental (MR)
  • ModemManager: Upload 1.23.12 to experimental (MR)

gmobile

  • data: Add display-panel for Furilabs FLX1 (MR)

feedbackd

grim

  • Allow to force screen capture protocol (MR)

Wayland protocols

  • Address multiple rounds of review comments in the xdg-occlusion (now xdg-cutouts) protocol (MR)

g4music

  • Set prefs parent (MR)

wlroots

  • Backport touch up fix to 0.18 (MR)

qbootctl

  • Don't recreate all partitions on read operations (MR)

bananui-shell

  • Check for keyboard caps before using them (Patch, issue)

libssc

  • Allow for python3 as interpreter as well (MR)
  • Don't leak unprefixed symbols into ABI (MR)
  • Improve info on test failures (MR)
  • Support multiarch when loading libqrtr (MR)

ModemManager

  • Cell Broadcast: Allow to set channel list via API (MR)

Waycheck

  • Add Phosh's protocols (MR)

Bug reports

  • Support haptic feedback on Linux in Firefox (Bug)
  • Get pmOS to boot again on Nokia 2780 (Bug)

Reviews

This is not code by me but reviews of other people's code. The list is slightly incomplete. Thanks for the contributions!

  • Debian: qcom-phone-utils rework (MR)
  • Simplify ui files (MR) - partially merged
  • calls: Implement ussd interface for ofono (MR)
  • chatty: Build docs using gi-docgen (MR)
  • chatty: Search related improvements (MR)
  • chatty: Fix crash on stuck SMS removal (MR)
  • feedbackd: stop flash when "prefer flash" is disabled (MR) - merged
  • gmobile: Support for nothingphone notch (MR)
  • iio-sensor-proxy: polkit for compass (MR) - merged
  • libcmatrix: Improved error code (MR) - merged
  • libcmatrix: Load room members is current (MR) - merged
  • libcmatrix: Start 0.0.4 cycle (MR) - merged
  • libphosh-rs: Update to 0.45~rc1 (MR) - merged
  • libphosh-rs: Update to reduced API surface (MR) - merged
  • phoc: Use color-rect for shields: (MR) - merged
  • phoc: unresponsive toplevel state (MR)
  • phoc: view: Don't multiply by scale in get_geometry_default (MR)
  • phoc: render: Fix subsurface scaling when rendering to buffer (MR)
  • phoc: render: Avoid rendering textures with alpha set to zero (MR)
  • phoc: Render a spinner on output shield (MR)
  • phosh: Manage libpohsh API version separately (MR) - merged
  • phosh: Prepare container APIs for GTK4 (MR)
  • phosh: Reduce API surface further (MR) - merged
  • phosh: Simplify UI files for GTK4 migration (MR) - merged
  • phosh: Simplify gvc-channel bar (MR) - merged
  • phosh: Simplify parent lookup (MR) - merged
  • phosh: Split out private header for LF (MR) - merged
  • phosh: Use symbols file for libphosh (MR) - merged
  • phosh: stylesheet: Improve legibility of app grid and top bar (MR)
  • several mobile-broadband-provider-info updates under (MR) - mostly merged

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 March, 2025 01:38PM

Michael Ablassmeier

pbsav - scan backups on proxmox backup server via clamav

Little side project this weekend:

pbsav

Small utility to scan virtual machine backups on PBS via clamav.

01 March, 2025 12:00AM

February 28, 2025

Petter Reinholdtsen

Brushing up on old packages in Xiph and Debian

Since my motivation boost in the beginning of the month caused me to wrap up a new release of liboggz, I have used the same boost to wrap up new editions of libfishsound, liboggplay and libkate too. These have been tagged in upstream git, but not yet published on the Xiph download location. I am waiting for someone with access to find time to move the tarballs there; I hope it will happen in a few days. The same is the case for a minor update of liboggz too.

As I was looking at Xiph packages lacking updates, it occurred to me that there are packages in Debian that have not received a new upload in a long time. Looking for a way to identify them, I came across the ltnu script from the devscripts package. It can sort the packages maintained by a single user or group by last upload date, and is useful to figure out which packages a given maintainer should have a look at. But I wanted an archive-wide summary. Based on the UDD SQL query used by ltnu, I ended up with the following command:

#!/bin/sh
# Query the public UDD mirror for the 50 source packages in unstable
# with the oldest uploads to the archive.
env PGPASSWORD=udd-mirror psql --host=udd-mirror.debian.net --user=udd-mirror udd --command="
select source,
       max(version) as ver,
       max(date) as uploaded
from upload_history
where distribution='unstable' and
      source in (select source
                 from sources
                 where release='sid')
group by source
order by max(date) asc
limit 50;"

This will sort all source packages in Debian by upload date, and list the 50 oldest ones. The end result is a list of packages I suspect could use some attention:

           source            |           ver           |        uploaded        
-----------------------------+-------------------------+------------------------
 xserver-xorg-video-ivtvdev  | 1.1.2-1                 | 2011-02-09 22:26:27+00
 dynamite                    | 0.1.1-2                 | 2011-04-30 16:47:20+00
 xkbind                      | 2010.05.20-1            | 2011-05-02 22:48:05+00
 libspctag                   | 0.2-1                   | 2011-09-22 18:47:07+00
 gromit                      | 20041213-9              | 2011-11-13 21:02:56+00
 s3switch                    | 0.1-1                   | 2011-11-22 15:47:40+00
 cd5                         | 0.1-3                   | 2011-12-07 21:19:05+00
 xserver-xorg-video-glide    | 1.2.0-1                 | 2011-12-30 16:50:48+00
 blahtexml                   | 0.9-1.1                 | 2012-04-25 11:32:11+00
 aggregate                   | 1.6-7                   | 2012-05-01 00:47:11+00
 rtfilter                    | 1.1-4                   | 2012-05-11 12:50:00+00
 sic                         | 1.1-5                   | 2012-05-11 19:10:31+00
 kbdd                        | 0.6-4                   | 2012-05-12 07:33:32+00
 logtop                      | 0.4.3-1                 | 2012-06-05 23:04:20+00
 gbemol                      | 0.3.2-2                 | 2012-06-26 17:03:11+00
 pidgin-mra                  | 20100304-1              | 2012-06-29 23:07:41+00
 mumudvb                     | 1.7.1-1                 | 2012-06-30 09:12:14+00
 libdr-sundown-perl          | 0.02-1                  | 2012-08-18 10:00:07+00
 ztex-bmp                    | 20120314-2              | 2012-08-18 19:47:55+00
 display-dhammapada          | 1.0-0.1                 | 2012-12-19 12:02:32+00
 eot-utils                   | 1.1-1                   | 2013-02-19 17:02:28+00
 multiwatch                  | 1.0.0-rc1+really1.0.0-1 | 2013-02-19 17:02:35+00
 pidgin-latex                | 1.5.0-1                 | 2013-04-04 15:03:43+00
 libkeepalive                | 0.2-1                   | 2013-04-08 22:00:07+00
 dfu-programmer              | 0.6.1-1                 | 2013-04-23 13:32:32+00
 libb64                      | 1.2-3                   | 2013-05-05 21:04:51+00
 i810switch                  | 0.6.5-7.1               | 2013-05-10 13:03:18+00
 premake4                    | 4.3+repack1-2           | 2013-05-31 12:48:51+00
 unagi                       | 0.3.4-1                 | 2013-06-05 11:19:32+00
 mod-vhost-ldap              | 2.4.0-1                 | 2013-07-12 07:19:00+00
 libapache2-mod-ldap-userdir | 1.1.19-2.1              | 2013-07-12 21:22:48+00
 w9wm                        | 0.4.2-8                 | 2013-07-18 11:49:10+00
 vish                        | 0.0.20130812-1          | 2013-08-12 21:10:37+00
 xfishtank                   | 2.5-1                   | 2013-08-20 17:34:06+00
 wap-wml-tools               | 0.0.4-7                 | 2013-08-21 16:19:10+00
 ttysnoop                    | 0.12d-6                 | 2013-08-24 17:33:09+00
 libkaz                      | 1.21-2                  | 2013-09-02 16:00:10+00
 rarpd                       | 0.981107-9              | 2013-09-02 19:48:24+00
 libimager-qrcode-perl       | 0.033-1.2               | 2013-09-04 21:06:31+00
 dov4l                       | 0.9+repack-1            | 2013-09-22 19:33:25+00
 textdraw                    | 0.2+ds-0+nmu1           | 2013-10-07 21:25:03+00
 gzrt                        | 0.8-1                   | 2013-10-08 06:33:13+00
 away                        | 0.9.5+ds-0+nmu2         | 2013-10-25 01:18:18+00
 jshon                       | 20131010-1              | 2013-11-30 00:00:11+00
 libstar-parser-perl         | 0.59-4                  | 2013-12-23 21:50:43+00
 gcal                        | 3.6.3-3                 | 2013-12-29 18:33:29+00
 fonts-larabie               | 1:20011216-5            | 2014-01-02 21:20:49+00
 ccd2iso                     | 0.3-4                   | 2014-01-28 06:33:35+00
 kerneltop                   | 0.91-1                  | 2014-02-04 12:03:30+00
 vera++                      | 1.2.1-2                 | 2014-02-04 21:21:37+00
(50 rows)

So there are 8 packages last uploaded to unstable in 2011, 12 packages in 2012 and 26 packages in 2013. I suspect their maintainers need help and we should all offer our assistance. I already contacted two of them and hope the rest of the Debian community will chip in to help too. We should ensure any Debian specific patches are passed upstream if they still exist, that the package is brought up to speed with the latest Debian policy, as well as ensure the source can be built with the current compiler set in Debian.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

28 February, 2025 03:45PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

printables.com feed

I wanted to follow new content posted to Printables.com with a feed reader, but Printables.com doesn't provide one. Neither do the other obvious 3d model catalogues. So, I started building one.

I have something that spits out an Atom feed and a couple of beta testers gave me some valuable feedback. I had planned to make it public, with the ultimate goal being to convince Printables.com to implement feeds themselves.

Meanwhile, I stumbled across someone else who has done basically the same thing. Here are 3rd party feeds for

The format of their feeds is JSON-Feed, which is new to me. FreshRSS and NetNewsWire seem happy with it. (I went with Atom.) I may still release my take, if I find time to make one improvement that my beta-testers suggested.

28 February, 2025 12:26PM

hackergotchi for Joey Hess

Joey Hess

WASM Wayland Web (WWW)

So there are only 2 web browser engines, and it seems likely there will soon only be 1, and making a whole new web browser from the ground up is effectively impossible because the browser vendors have weaponized web standards complexity against any newcomers. Maybe eventually someone will succeed and there will be 2 again. Best case. What a situation.

So throw out all the web standards. Make a browser that just runs WASM blobs, and gives them a surface to use, sorta like Wayland does. It has tabs, and a throbber, and urls, but no HTML, no javascript, no CSS. Just HTTP of WASM blobs.

This is where the web browser is going eventually anyway, except in the current line of evolution it will be WASM with all the web standards complexity baked in and reinforcing the current situation.

Would this be a mass of proprietary software? Have you looked at any corporate website's "source" lately? But what's important is that this would make it easy enough to build new browsers that they would stop being a point of control.

Want a browser that natively supports RSS? Poll the feeds, make a UI, download the WASM enclosures to view the posts. Want a browser that supports IPFS or gopher? Fork any browser and add it, the maintenance load will be minimal. Want to provide access to GPIO pins or something? Add an extension that can be accessed via the WASI component model. This would allow for so many things like that which won't and can't happen with the current market duopoly browser situation.

And as for your WASM web pages, well you can still use HTML if you like. Use the WASI component model to pull in an HTML engine. It doesn't need to support everything, just the parts of web standards that you want to use. Or you can do something entirely different in your WASM that is not HTML based at all but a better paradigm (oh hi Spritely or display postscript or gemini capsules or whatever).

Dual innovation sources or duopoly? I know which I'd prefer. This is not my project to build though.

28 February, 2025 06:37AM

Antoine Beaupré

testing the fish shell

I have been testing fish for a couple months now (this file started on 2025-01-03T23:52:15-0500 according to stat(1)), and those are my notes. I suspect people will have Opinions about my comments here. Do not comment unless you have some Constructive feedback to provide: I don't want to know if you think I am holding it Wrong. Consider that I might have used UNIX shells for longer than you have lived.

I'm not sure I'll keep using fish, but so far it's the first shell that survived heavy use outside of zsh(1) (unless you count tcsh(1), but that was in another millennium).

My normal shell is bash(1), and it's still the shell I use everywhere other than my laptop, as I haven't switched on all the servers I manage, although it has been available on torproject.org servers since August 2022. I first got interested in fish because of its port to Rust, making it one of the rare shells out there written in a "safe" and modern programming language, released after an impressive ~2 years of work with Fish 4.0.

Cool things

Current directory gets shortened, ~/wikis/anarc.at/software/desktop/wayland shows up as ~/w/a/s/d/wayland

Autocompletion rocks.

Default prompt rocks. Doesn't seem vulnerable to command injection assaults, at least it doesn't trip on the git-landmine.

It even includes pipe status output, which was a huge pain to implement in bash. It made me realize that if the last command succeeds, we don't see other failures, which is the case with my current prompt anyway! Signal reporting is better than my bash implementation too.

So far the only modification I have made to the prompt is to add a printf '\a' to output a bell.

By default, fish keeps a directory history (but separate from the pushd stack), that can be navigated with cdh, prevd, and nextd, dirh shows the history.

Less cool

I feel there's visible latency in the prompt creation.

POSIX-style functions (foo() { true }) are unsupported. Instead, fish uses whitespace-sensitive definitions like this:

function foo
    true
end

This means my (modest) collection of POSIX functions need to be ported to fish. Workaround: simple functions can be turned into aliases, which fish supports (but implements using functions).
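For example, a one-liner like ll() { ls -lh "$@"; } can be ported either way; the two forms below are roughly equivalent, since fish's alias just generates a function (ll is a made-up example here, and --wraps only affects completions):

alias ll 'ls -lh'

function ll --wraps 'ls -lh'
    ls -lh $argv
end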

EOF heredocs are considered to be "minor syntactic sugar". I find them frigging useful.

Process substitution is split on newlines, not whitespace. You need to pipe through string split -n " " to get the equivalent.
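A quick sketch of the difference, with the bash behaviour in a comment for comparison:

# bash: $(echo "a b c") would expand to three words
# fish: (echo "a b c") yields a single item, so split it explicitly:
for w in (echo "a b c" | string split -n " ")
    echo $w
end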

<(cmd) doesn't exist: they claim you can use cmd | foo - as a replacement, but that's not correct: I used <(cmd) mostly where foo does not support - as a magic character to say 'read from stdin'.

Documentation is... limited. It seems mostly geared towards the web docs, which are... okay (but I couldn't find out about ~/.config/fish/conf.d there!), but this is really inconvenient when you're trying to browse the manual pages. For example, fish thinks there's a fish_prompt manual page, according to its own completion mechanism, but man(1) cannot find that manual page. I can't find the manual for the time command (which is actually a keyword!)

Fish renders multi-line commands with newlines. So if your terminal looks like this, say:

anarcat@angela:~> sq keyring merge torproject-keyring/lavamind-
95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyrin
g/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy

... but it's actually one line; when you copy-paste the above in foot(1), it will show up exactly like this, newlines and all:

sq keyring merge torproject-keyring/lavamind-
95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyrin
g/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy

Whereas it should show up like this:

sq keyring merge torproject-keyring/lavamind-95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyring/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy

Note that this is an issue specific to foot(1); alacritty(1) and gnome-terminal(1) don't suffer from it. I have already filed it upstream in foot and it is apparently fixed already.

Globbing is driving me nuts. You can't pass a * to a command unless fish agrees it's going to match something. You need to escape it if it doesn't immediately match, and then you need the called command to actually support globbing. 202[345] doesn't match folders named 2023, 2024, 2025; it will send the string 202[345] to the command.
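One workaround is brace expansion, which fish does support; note that this sketch assumes all three directories exist, since braces expand unconditionally:

ls 202{3,4,5}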

Blockers

() is like $(): it's process substitution, and not a subshell. This is really impractical: I use ( cd foo ; do_something) all the time to avoid losing the current directory... I guess I'm supposed to use pushd for this, but ouch. This wouldn't be so bad if it was just for cd though. Clean constructs like this:

( git grep -l '^#!/.*bin/python' ; fdfind .py ) | sort -u

Turn into what i find rather horrible:

begin; git grep -l '^#!/.*bin/python' ; fdfind .py ; end | sort -u

It... works, but it goes back to "oh dear, now there's a new language again". I only found out about this construct while trying:

{ git grep -l '^#!/.*bin/python' ; fdfind .py } | sort -u 

... which fails and suggests using begin/end, at which point: why not just support the curly braces?

FOO=bar is not allowed. It's actually recognized syntax, but creates a warning. We're supposed to use set foo bar instead. This really feels like a needless divergence from the standard.

Aliases are... peculiar. Typical constructs like alias mv="\mv -i" don't work because fish treats aliases as a function definition, and \ is not magical there. This can be worked around by specifying the full path to the command, with e.g. alias mv="/bin/mv -i". Another problem is trying to override a built-in, which seems completely impossible. In my case, I like the time(1) command the way it is, thank you very much, and fish provides no way to bypass that builtin. It is possible to call time(1) with command time, but it's not possible to replace the command keyword, so that means a lot of typing.

Again: you can't use \ to bypass aliases. This is a huge annoyance for me. I would need to learn to type command in long form, and I use that stuff pretty regularly. I guess I could alias command to c or something, but this is one of those huge muscle memory challenges.
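For the record, that c idea would be a two-line function (just a sketch of my own suggestion above):

function c --wraps command
    # forward everything to `command`, bypassing aliases and builtins
    command $argv
end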

alt . doesn't always work the way I expect.

28 February, 2025 05:31AM

Michael Ablassmeier

proxmox backup nbdkit plugin round 2

I re-implemented the proxmox backup nbdkit plugin in C.

It seems golang shared libraries don’t play well with programs that fork().

As a result, the plugin was only usable if nbdkit was run in foreground mode (-f), making it impossible to use nbdkit's captive modes, which are quite useful. Lessons learned.
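For context, captive mode spawns nbdkit, runs a client command against a temporary socket, and cleans up afterwards. A sketch of such an invocation; only the nbdkit options are real, the plugin parameters are made up:

nbdkit -U - ./pbsnbd.so repo=root@pam@pbs.example.com:store1 vm=100 \
    --run 'qemu-img info "$uri"'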

Here is the C version

28 February, 2025 12:00AM

February 25, 2025

Divine Attah-Ohiemi

Re-styling Debian's Download Page

Main points from this blog post:

I am tasked with contributing to the debianhugo project, whose aim is to re-design the old Debian pages and make the content more accessible. We've since reached a significant milestone and migrated multiple pages, including the start, intro, news and now the download page.

creating "simple" and "advanced" download pages

At first we made the "simple" download page:

  • target audience: less experienced users
  • only the more common download architectures/options, i.e. amd64 (64-bit PC) and arm64
  • descriptive content for an easy user experience: listing positives and negatives of each option, adding download sizes
  • interactive download cards with ISO, torrent and debian-cd mirror selections.

"advanced" download page:

  • more download architectures and options including testing release streams
  • straight-to-the-point content

challenges/issues while developing

The mirror selection option, while it might help with faster downloads depending on the region, is still a somewhat manual process on the user's end and could come with various complications, like the chosen debian-cd mirror not having the latest version of Debian.

We're looking into testing whether the delivery of the images/ISOs can also be done through the Fastly CDN, which would save us from providing manual mirror selection.

25 February, 2025 09:54PM by Divine Attah-Ohiemi

Michael Ablassmeier

proxmox backup nbdkit plugin

nbdkit is a really powerful NBD toolkit.

Lately, I wanted to access VM backups from a Proxmox Backup Server via the network (not by using the proxmox-backup-client map function).

For example, to test-boot a virtual machine snapshot directly from a backup. NBD suits that use case quite well, so I quickly put an nbdkit plugin together that can be used for this.

The available golang bindings for the Proxmox Backup client API made that quite easy.

As nbdkit already comes with a neat COW plugin, it only took a few lines of Go code, resulting in: pbsnbd

25 February, 2025 12:00AM

February 24, 2025

Russ Allbery

Review: A Little Vice

Review: A Little Vice, by Erin E. Elkin

Publisher: Erin Elkin
Copyright: June 2024
ASIN: B0CTHRK61X
Format: Kindle
Pages: 398

A Little Vice is a stand-alone self-published magical girl novel. It is the author's first novel.

C is a high school student and frequent near-victim of monster attacks. Due to the nefarious work of Avaritia Wolf and her allies, his high school is constantly attacked by Beasts, who are magical corruptions of some internal desire taken to absurd extremes. Standing in their way are the Angelic Saints: magical girls who transform into Saint Castitas, Saint Diligentia, and Saint Temperantia and fight the monsters. The monsters for some reason seem disposed to pick C as their victim for hostage-taking, mind control, use as a human shield, and other rather traumatic activities. He's always rescued by the Saints before any great harm is done, but in some ways this makes the situation worse.

It is obvious to C that the Saints are his three friends Inessa, Ida, and Temperance, even though no one else seems able to figure this out despite the blatant clues. Inessa has been his best friend since childhood when she was awkward and needed his support. Now, she and his other friends have become literal heroes, beautiful and powerful and capable, constantly protecting the school and innocent people, and C is little more than a helpless burden to be rescued. More than anything else, he wishes he could be an Angelic Saint like them, but of course the whole idea is impossible. Boys don't get to be magical girls.

(I'm using he/him pronouns for C in this review because C uses them for himself for most of the book.)

This is a difficult book to review because it is deeply focused on portraying a specific internal emotional battle in all of its sometimes-ugly complexity, and to some extent it prioritizes that portrayal over conventional story-telling. You have probably already guessed that this is a transgender coming-out story — Elkin's choice of the magical girl genre was done with deep understanding of its role in transgender narratives — but more than that, it is a transgender coming-out story of a very specific and closely-observed type. C knows who he wishes he was, but he is certain that this transformation is absolutely impossible. He is very deep in a cycle of self-loathing for wanting something so manifestly absurd and insulting to people who have the virtues that C does not.

A Little Vice is told in the first person from C's perspective, and most of this book is a relentless observation of C's anxiety and shame spiral and reflexive deflection of any possibility of a way out. This is very well-written: Elkin knows the reader is going to disagree with C's internalized disgust and hopelessness, knows the reader desperately wants C to break out of that mindset, and clearly signals in a myriad of adroit ways that Elkin is on the reader's side and does not agree with C's analysis. C's friends are sympathetic, good-hearted people, and while sometimes oblivious, it is obvious to the reader that they're also on the reader's side and would help C in a heartbeat if they saw an opening. But much of the point of the book is that it's not that easy, that breaking out of the internal anxiety spiral is nearly impossible, and that C is very good at rejecting help, both because he cannot imagine what form it could take but also because he is certain that he does not deserve it.

In other words, much of the reading experience of this book involves watching C torture and insult himself. It's all the more effective because it isn't gratuitous. C's internal monologue sounds exactly like how an anxiety spiral feels, complete with the sort of half-effective coping mechanisms, deflections, and emotional suppression one develops to blunt that type of emotional turmoil.

I normally hate this kind of book. I am a happy ending and competence porn reader by default. The world is full of enough pain that I don't turn to fiction to read about more pain. It says a lot about how well-constructed this book is that I stuck with it. Elkin is going somewhere with the story, C gets moments of joy and delight along the way to keep the reader from bogging down completely, and the best parts of the book feel like a prolonged musical crescendo with suspended chords. There is a climax coming, but Elkin is going to make you wait for it for far longer than you want to.

The main element that protects A Little Vice from being too grim is that it is a genre novel that is very playful about both magical girls and superhero tropes in general. I've already alluded to one of those elements: Elkin plays with the Mask Principle (the inability of people to see through entirely obvious secret identities) in knowing and entertaining ways. But there are also villains, and that leads me to the absolutely delightful Avaritia Wolf, who for me was the best character in this book.

The Angelic Saints are not the only possible approach to magical girl powers in this universe. There are villains who can perform a similar transformation, except they embrace a vice rather than a virtue. Avaritia Wolf embraces the vice of greed. They (Avaritia's pronouns change over the course of the book) also have a secret identity, which I suspect will be blindingly obvious to most readers but which I'll avoid mentioning since it's still arguably a spoiler.

The primary plot arc of this book is an attempt to recruit C to the side of the villains. The Beasts are drawn to him because he has magical potential, and the villains are less picky about gender. This initially involves some creepy and disturbing mind control, but it also brings C into contact with Avaritia and Avaritia's very specific understanding of greed. As far as Avaritia is concerned, greed means wanting whatever they want, for whatever reason they feel like wanting it, and there is absolutely no reason why that shouldn't include being greedy for their friends to be happy. Or doing whatever they can to make their friends happy, whether or not that looks like villainy.

Elkin does two things with this plot that I thought were remarkably skillful. The first is that she directly examines and then undermines the "easy" transgender magical girl ending. In a world of transformation magic, someone who wants to be a girl could simply turn into a girl and thus apparently resolve the conflict in a way that makes everyone happy. I think there is an important place for that story (I am a vigorous defender of escapist fantasy and happy endings), but that is not the story that Elkin is telling. I won't go into the details of why and how the story complicates and undermines this easy ending, but it's a lot of why this book feels both painful and honest to a specific, and very not easy, transgender experience, even though it takes place in an utterly unrealistic world.

But the second, which is more happy and joyful, is that Avaritia gleefully uses a wholehearted embrace of every implication of the vice of greed to bulldoze the binary morality of the story and question the classification of human emotions into virtues and vices. They are not a hero, or even all that good; they have some serious flaws and a very anarchic attitude towards society. But Avaritia provides the compelling, infectious thrill of the character who looks at the social construction of morality that is constraining the story and decides that it's all bullshit and refuses to comply. This is almost the exact opposite of C's default emotional position at the start of the book, and watching the two characters play off of each other in a complex friendship is an absolute delight.

The ending of this book is complicated, messy, and incomplete. It is the sort of ending that I think could be incredibly powerful if it hits precisely the right chords with the reader, but if you're not that reader, it can also be a little heartbreaking because Elkin refuses to provide an easy resolution. The ending also drops some threads that I wish Elkin hadn't dropped; there are some characters who I thought deserved a resolution that they don't get. But this is one of those books where the author knows exactly what story they're trying to tell and tells it whether or not that fits what the reader wants. Those books are often not easy reading, but I think there's something special about them.

This is not the novel for people who want detailed world-building that puts a solid explanation under events. I thought Elkin did a great job playing with the conventions of an episodic anime, including starting the book on Episode 12 to imply C's backstory with monster attacks and hinting at a parallel light anime story by providing TV-trailer-style plot summaries and teasers at the start and end of each chapter. There is a fascinating interplay between the story in which the Angelic Saints are the protagonists, which the reader can partly extrapolate, and the novel about C that one is actually reading. But the details of the world-building are kept at the anime plot level: There's an arch-villain, a World Tree, and a bit of backstory, but none of it makes that much sense or turns into a coherent set of rules. This is a psychological novel; the background and rules exist to support C's story.

If you do want that psychological novel... well, I'm not sure whether to recommend this book or not. I admire the construction of this book a great deal, but I don't think appealing to the broadest possible audience was the goal. C's anxiety spiral is very repetitive, because anxiety spirals are very repetitive, and you have to be willing to read for the grace notes on the doom loop if you're going to enjoy this book. The sentence-by-sentence writing quality is fine but nothing remarkable, and is a bit shy of the average traditionally-published novel. The main appeal of A Little Vice is in the deep and unflinching portrayal of a specific emotional journey. I think this book is going to work if you're sufficiently invested in that journey that you are willing to read the brutal and repetitive parts. If you're not, there's a chance you will bounce off this hard.

I was invested, and I'm glad I read this, but caveat emptor. You may want to try a sample first.

One final note: If you're deep in the book world, you may wonder, like I did, if the title is a reference to Hanya Yanagihara's (in)famous A Little Life. I do not know for certain — I have not read that book because I am not interested in being emotionally brutalized — but if it is, I don't think there is much similarity. Both books are to some extent about four friends, but I couldn't find any other obvious connections from some Wikipedia reading, and A Little Vice, despite C's emotional turmoil, seems to be considerably more upbeat.

Content notes: Emotionally abusive parent, some thoughts of self-harm, mind control, body dysmorphia, and a lot (a lot) of shame and self-loathing.

Rating: 7 out of 10

24 February, 2025 05:04AM

Valhalla's Things

Hexagonal Pattern Weights

Posted on February 24, 2025
Tags: madeof:atoms, craft:3dprint, craft:sewing

Eight hexagonal pieces with free software / culture related graphics on top.

For quite a few years, I’ve been using pattern weights instead of pins when cutting fabric, starting with random objects and then mostly using some big washers from the local hardware store.

However, at about 22 g per washer, I needed quite a few of them, and dealing with them tended to get unwieldy; I don’t remember how it happened, but one day I decided to make myself some bigger weights with a few washers each.

I suspect I had seen somebody online with some nice hexagonal pattern weights, and hexagonal of course reminded me of the Stickers Standard, so of course I settled on a hexagon 5 cm tall and decided I could 3D-print it in a way that could be filled with washers for weight.

Rather than bothering with adding a lid (and fitting it), I decided to close the bottom by gluing a piece of felt, with the added advantage that it would protect whatever the weight was being used on. And of course the top could be decorated with a nerdish sticker, because, well, I am a nerd.

I made a few of these pattern weights, used them for a while, was happy with them, and then a few days ago I received some new hexagonal stickers I had had printed, and realized that while I had taken a picture with all of the steps in assembling them, I had never published any kind of instructions on how to make them — and I had not even pushed the source file to the craft tools git repository.

And yesterday I fixed that: the instructions are now on my craft pattern website, with generated STL files, the git repository has been updated with the current sources, and now I’ve even written this blog post :)

24 February, 2025 12:00AM

February 23, 2025

hackergotchi for Colin Watson

Colin Watson

Qalculate time hacks

Anarcat recently wrote about Qalculate, and I think I’m a convert, even though I’ve only barely scratched the surface.

The thing I almost immediately started using it for is time calculations. When I started tracking my time, I quickly found that Timewarrior was good at keeping all the data I needed, but I often found myself extracting bits of it and reprocessing it in variously clumsy ways. For example, I often don’t finish a task in one sitting; maybe I take breaks, or I switch back and forth between a couple of different tasks. The raw output of timew summary is a bit clumsy for this, as it shows each chunk of time spent as a separate row:

$ timew summary 2025-02-18 Debian

Wk Date       Day Tags                            Start      End    Time   Total
W8 2025-02-18 Tue CVE-2025-26465, Debian,       9:41:44 10:24:17 0:42:33
                  next, openssh
                  Debian, FTBFS with GCC-15,   10:24:17 10:27:12 0:02:55
                  icoutils
                  Debian, FTBFS with GCC-15,   11:50:05 11:57:25 0:07:20
                  kali
                  Debian, Upgrade to 0.67,     11:58:21 12:12:41 0:14:20
                  python_holidays
                  Debian, FTBFS with GCC-15,   12:14:15 12:33:19 0:19:04
                  vigor
                  Debian, FTBFS with GCC-15,   12:39:02 12:39:38 0:00:36
                  python_setproctitle
                  Debian, Upgrade to 1.3.4,    12:39:39 12:46:05 0:06:26
                  python_setproctitle
                  Debian, FTBFS with GCC-15,   12:48:28 12:49:42 0:01:14
                  python_setproctitle
                  Debian, Upgrade to 3.4.1,    12:52:07 13:02:27 0:10:20 1:44:48
                  python_charset_normalizer

                                                                         1:44:48

So I wrote this Python program to help me:

#! /usr/bin/python3

"""
Summarize timewarrior data, grouped and sorted by time spent.
"""

import json
import subprocess
from argparse import ArgumentParser, RawDescriptionHelpFormatter
from collections import defaultdict
from datetime import datetime, timedelta, timezone
from operator import itemgetter

from rich import box, print
from rich.table import Table


parser = ArgumentParser(
    description=__doc__, formatter_class=RawDescriptionHelpFormatter
)
parser.add_argument("-t", "--only-total", default=False, action="store_true")
parser.add_argument(
    "range",
    nargs="?",
    default=":today",
    help="Time range (usually a hint, e.g. :lastweek)",
)
parser.add_argument("tag", nargs="*", help="Tags to filter by")
args = parser.parse_args()

entries: defaultdict[str, timedelta] = defaultdict(timedelta)
now = datetime.now(timezone.utc)
for entry in json.loads(
    subprocess.run(
        ["timew", "export", args.range, *args.tag],
        check=True,
        capture_output=True,
        text=True,
    ).stdout
):
    start = datetime.fromisoformat(entry["start"])
    if "end" in entry:
        end = datetime.fromisoformat(entry["end"])
    else:
        end = now
    entries[", ".join(entry["tags"])] += end - start

if not args.only_total:
    table = Table(box=box.SIMPLE, highlight=True)
    table.add_column("Tags")
    table.add_column("Time", justify="right")
    for tags, time in sorted(entries.items(), key=itemgetter(1), reverse=True):
        table.add_row(tags, str(time))
    print(table)

total = sum(entries.values(), start=timedelta())
hours, rest = divmod(total, timedelta(hours=1))
minutes, rest = divmod(rest, timedelta(minutes=1))
seconds = rest.seconds
print(f"Total time: {hours:02}:{minutes:02}:{seconds:02}")

$ summarize-time 2025-02-18 Debian

  Tags                                                     Time
 ───────────────────────────────────────────────────────────────
  CVE-2025-26465, Debian, next, openssh                 0:42:33
  Debian, FTBFS with GCC-15, vigor                      0:19:04
  Debian, Upgrade to 0.67, python_holidays              0:14:20
  Debian, Upgrade to 3.4.1, python_charset_normalizer   0:10:20
  Debian, FTBFS with GCC-15, kali                       0:07:20
  Debian, Upgrade to 1.3.4, python_setproctitle         0:06:26
  Debian, FTBFS with GCC-15, icoutils                   0:02:55
  Debian, FTBFS with GCC-15, python_setproctitle        0:01:50

Total time: 01:44:48

Much nicer. But that only helps with some of my reporting. At the end of a month, I have to work out how much time to bill Freexian for and fill out a timesheet, and for various reasons those queries don’t correspond to single timew tags: they sometimes correspond to the sum of all time spent on multiple tags, or to the time spent on one tag minus the time spent on another tag, or similar. As a result I quite often have to do basic arithmetic on time intervals; but that’s surprisingly annoying! I didn’t previously have good tools for that, and was reduced to doing things like str(timedelta(hours=..., minutes=..., seconds=...) + ...) in Python, which gets old fast.

Instead:

$ qalc '62:46:30 - 51:02:42 to time'
(225990 / 3600) − (183762 / 3600) = 11:43:48

I also often want to work out how much of my time I’ve spent on Debian work this month so far, since Freexian pays me for up to 20% of my work time on Debian; if I’m under that then I might want to prioritize more Debian projects, and if I’m over then I should be prioritizing more Freexian projects as otherwise I’m not going to get paid for that time.

$ summarize-time -t :month Freexian
Total time: 69:19:42
$ summarize-time -t :month Debian
Total time: 24:05:30
$ qalc '24:05:30 / (24:05:30 + 69:19:42) to %'
(86730 / 3600) / ((86730 / 3600) + (249582 / 3600)) ≈ 25.78855349%
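The inverse check is just as easy: 20% of the total comes to about 18:41:02, less than the 24:05:30 I've already spent (written with 0.2 rather than qalc's percent syntax to keep it simple):

$ qalc '0.2 * (24:05:30 + 69:19:42) to time'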

I love it.

23 February, 2025 08:00PM by Colin Watson

Iustin Pop

Still alive, but this blog not really

Sigh, sometimes I really don’t understand time. And I don’t mean in the physics sense.

It’s just, the days have way fewer hours than 10 years ago, or there’s way more stuff to do. Probably the latter 😅

No time for real open-source work, but I managed to do some minor coding, released a couple of minor versions (as upstream), and packaged some refreshes in Debian. The latter only because I got involved, against better judgement, in some too heated discussions, but they ended well, somehow. But the whole episode motivated me to actually do some work, even if minor, rather than just rant on mailing lists 🙊.

My sports life is still pretty erratic, but despite some repeated sickness (my fault, for not sleeping well enough) and tendon issues, there are months in which I can put down 100km. And the skiing season was really awesome.

So life goes on, but I definitely am not keeping up with entropy, even in simple things such as my inbox. One day I’ll write a real blog post, not just an update, but in the meantime, it is what it is.

And yes, running 10km while still sick just because you’re bored is not the best idea. According to a friend, of course, not to my Strava account.

23 February, 2025 03:20PM

hackergotchi for Kentaro Hayashi

Kentaro Hayashi

Short journey to Mozc 2.29.5160.102+dfsg-1

Introduction

This is just a note about how I upgraded the Mozc package last year to get it ready (with many restrictions) for the upcoming trixie release.

Maybe Mozc 2.29.5160.102+dfsg-1.3 will be shipped for Debian 13 (trixie).

FTBFS with Mozc 2.28.4715.102+dfsg-2.2

In May 2024, I found that Mozc had been removed from testing, and it was still FTBFS.

#1068186 - mozc: FTBFS with abseil 20230802: ../../base/init_mozc.cc:90:29: error: ‘absl::debian5::flags_internal::ArgvListAction’ has not been declared - Debian Bug report logs

That FTBFS was fixed upstream, but the fix was not applied in Debian for a while. Not only the upstream patch but also an additional linkage patch was required to fix it.

Mozc is the de facto standard input method editor for Japanese. Most Japanese users use it by default on the Linux desktop.

(Even though the frontend input method framework differs, the backend engine is Mozc in most cases: uim-mozc for task-japanese-desktop, ibus-mozc for task-japanese-gnome-desktop in Debian.)

Some users rebuild Mozc locally with an integrated external dictionary to enlarge its vocabulary. An ongoing FTBFS blocks such usage. So I sent patches to fix it and they were merged.

Motivation to update Mozc

While fixing #1068186, I also found that the Mozc version had not been synced with upstream for a long time.

At that time, Mozc in unstable was version 2.28.4715.102+dfsg, but upstream had already released 2.30.5544.102. It seems that Mozc's maintainer was too busy and couldn't afford to update it, so I tried to do it.

The blockers for updating Mozc

But it was not such an easy task. There were many blockers to packaging the latest Mozc.

  • Newer Mozc requires Bazel to build, but there is no Bazel package that fits (there is bazel-bootstrap 4.x, but it's old; v6.x or newer is required)
  • Newer abseil and protobuf were required
  • The renderer was changed to Qt; the GTK renderer was removed
  • Revise existing patchsets (e.g. for UIM, for Fcitx)

And that was not all.

Road to latest Mozc

First, I knew about the existence of debian-bazel, so I posted a question about Bazel packaging progress.

Any updates about bazel packaging effort?

Sadly there was no response to it. Thus, it was not realistic to adopt Bazel as the build toolchain. In other words, we needed to keep the GYP patch and maintain it.

And as another topic, upstream changed the renderer from GTK+ to Qt.

Here are the major topics about each release of Mozc.

  • 2.30.5544.102 Require abseil 20240116.1 or later
  • 2.29.5544.102 GYP was deprecated
  • 2.29.5374.102
  • 2.29.5268.102 No gtk renderer anymore, need Qt.
  • 2.29.5160.102
    • The last version where the GTK renderer is available.
    • --use_gyp_for_ibus_build option was removed.
  • 2.28.5029.102
  • 2.28.4880.102
  • 2.28.4715.102+dfsg Debian sid

The internal renderer change was too big, and even before the GYP deprecation in 2.29.5544.102, GYP support had already been removed gradually.

As a result, targeting 2.29.5160.102 was the practical approach to move forward.

Revisit existing patchsets for 2.28.4715.102+dfsg

Second, I needed to revisit the existing patchsets to triage them.

  • 0001-Update-uim-mozc-to-c979f127acaeb7b35d3344e8b1e40848e.patch
    • Required
  • 0002-Support-fcitx.patch
    • Required
  • 0003-Change-compiler-from-clang-to-gcc.patch
  • 0004-Add-usage_dict.txt.patch
    • Required. (maybe)
  • 0005-Enable-verbose-build.patch
    • Required.
  • 0006-Update-gyp-using-absl.patch
    • Required and need massive refactoring.
  • 0007-common.gypi-Use-command-v-instead-of-which.patch
    • (maybe) Not needed anymore
  • 0009-protobuf.gyp-Add-latomic-to-link_settings.patch
    • Required.
  • 0010-Fix-the-compile-error-of-ParseCommandLineFlags-with.patch
    • Required. Should be merged into 0006 patch.
  • 0011-Fix-missing-abseil-gyp-link-settings.patch
    • Required. Should be merged into 0006 patch.

The UIM patch was maintained in a third-party repository, and its directory structure was quite different from Mozc's. It seems that maintenance activity was too low, so picking changes from macuim was not enough; additional fixes for the FTBFS were required.

The Fcitx patch was also maintained in fcitx/mozc. But it tracks only the master branch, so it was hard to pick a patchset for a specific version of Mozc.

Finally, I managed to refresh the patchset for 2.29.5160.102.

  • support-uim.patch
  • support-fcitx.patch
  • change-compiler-from-clang-to-gcc.patch
  • add-japanese-usage-dictionary.patch
  • enable-verbose-build.patch
  • update-gyp-using-system-abseil.patch
  • gyp-using-command-instead-of-which.patch
  • gyp-protobuf-link-with-atomic.patch
  • enable-deprecated-gtk-renderer.patch
  • fix-compile-error-of-ParseCommandLineFlags.patch
  • enable-use_gyp_for_ibus_build-again.patch
  • ibus-drop-needless-client_mock.patch
  • protobuf-revert-internal-cleanup.patch
  • uim-mozc-fix-ftbfs.patch

Improve packaging task

Mozc needs to be repacked, but it didn't use Files-Excluded yet. So I introduced d/watch to repack the upstream source.

This makes the source package more reproducible.
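For reference, the mechanism pairs a Files-Excluded field in debian/copyright with repack options in d/watch. A minimal, illustrative watch file, which may differ from the real one for Mozc, looks like this:

version=4
opts="repacksuffix=+dfsg,dversionmangle=auto" \
  https://github.com/google/mozc/tags .*/archive/refs/tags/v?@ANY_VERSION@@ARCHIVE_EXT@

With this in place, uscan downloads the upstream tarball, strips the files listed in Files-Excluded, and appends +dfsg to the version.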

OT: Hardware breakage

There was another blocker for this task. I hit a situation where g++ randomly caused a SEGV while building Mozc. At first I wondered why it failed, but digging further, I finally found that a memory module was corrupted. Thus I lost 32 GB of memory modules. :-<

Unexpected behaviour in uim-mozc

When I uploaded Mozc 2.29.5160.102+dfsg-1 to experimental, I found a case where uim-mozc behaves oddly: the candidate words were shown with flickering.

But it was not a regression in this upload.

uim-mozc with Wayland causes that problem.

Thus GNOME and derivatives might not be affected because ibus-mozc will be used.

Mozc 2.29.5160.102+dfsg-1

As the patchset had matured, I then uploaded 2.29.5160.102+dfsg-1 with the --delayed 15 option.

$ dput --delayed 15 mozc_2.29.5160.102+dfsg-1_source.changes
Uploading mozc using ftp to ftp-master (host: ftp.upload.debian.org; directory: /pub/UploadQueue/DELAYED/15-day)
running allowed-distribution: check whether a local profile permits uploads to the target distribution
running protected-distribution: warn before uploading to distributions where a special policy applies
running checksum: verify checksums before uploading
running suite-mismatch: check the target distribution for common errors
running gpg: check GnuPG signatures before the upload
 signfile dsc mozc_2.29.5160.102+dfsg-1.dsc 719EB2D93DBE9C4D21FBA064F7FB75C566ED20E3

 fixup_buildinfo mozc_2.29.5160.102+dfsg-1.dsc mozc_2.29.5160.102+dfsg-1_amd64.buildinfo
 signfile buildinfo mozc_2.29.5160.102+dfsg-1_amd64.buildinfo 719EB2D93DBE9C4D21FBA064F7FB75C566ED20E3

 fixup_changes dsc mozc_2.29.5160.102+dfsg-1.dsc mozc_2.29.5160.102+dfsg-1_source.changes
 fixup_changes buildinfo mozc_2.29.5160.102+dfsg-1_amd64.buildinfo mozc_2.29.5160.102+dfsg-1_source.changes
 signfile changes mozc_2.29.5160.102+dfsg-1_source.changes 719EB2D93DBE9C4D21FBA064F7FB75C566ED20E3

Successfully signed dsc, buildinfo, changes files
Uploading mozc_2.29.5160.102+dfsg-1.dsc
Uploading mozc_2.29.5160.102+dfsg-1.debian.tar.xz
Uploading mozc_2.29.5160.102+dfsg-1_amd64.buildinfo
Uploading mozc_2.29.5160.102+dfsg-1_source.changes

Mozc 2.29.5160.102+dfsg-1 landed in unstable on 2024-12-20.

Additional bug fixes

Additionally, the following bugs were also fixed.

These bugs were fixed in 2.29.5160.102+dfsg-1.1

Furthermore, I found that salsa CI succeeds even when the pristine-tar branch commit is missing. I sent an MR for this issue and it has already been merged.

Mozc and future in Debian

In this short journey, I gave up on updating to an even newer Mozc because the versions of the dependency libraries had not been updated.

Note that protobuf 3.25.4 in experimental depends on the older absl 20230802, so it must be rebuilt against absl 20240722.0.

Moreover, we need to consider how to migrate from the GTK renderer to the Qt renderer in the future.

23 February, 2025 01:14PM

Valhalla's Things

Water Resistant Hood

Posted on February 23, 2025
Tags: madeof:atoms, craft:sewing, FreeSoftWear

a person wearing a relatively boxy water resistant jacket with pockets and a zipper, and a detached hood with a big square cowl that reaches mid-torso.

Many years ago I made myself a vest with lots of pockets 1 in a few layers of cheap cotton, and wore the hell out of it, for the added warmth, but most importantly for the convenience provided by the pockets.

the same person showing just the vest, with two applied pockets on the bust, closed with buttons, and two big flaps covering two welted pockets at waist level, plus a strip of fabric with loops where things may be attached.

Then a few years ago the cheap cotton had started to get worn, and I decided I needed to replace it. I found a second choice (and thus cheaper :) ) version of a water-repellent cotton and made another vest, lined with regular cotton, for a total of just two layers.

the same person, this time there are also two sleeves, attached to the vest with big snaps, the outline of which can be seen on the vest. they are significantly less faded than the vest.

This time I skipped a few pockets that I had found I didn’t use that much, and I didn’t add a hood, which didn’t play that well when worn over a hoodie, but I added some detached sleeves, for additional wind protection.

This left about 60 cm and some odd pieces of leftover fabric in my stash, for which I had no plan.

the hood pulled down on the back, showing the big square cowl.

And then February2 came, and I needed a quick, simple, mindless handsewing project for the first weekend; I saw the vest (which I'm wearing as much as the old one) and the sleeves (which have been used much less, but I'd like to change this), and thought about making a matching hood for it, using my square hood pattern.

Since the etaproof is a bit stiff and not that nice to the touch, I decided to line3 it with the same cotton as the vest and sleeves, and in the style of the pattern I did so by finishing each panel with its own lining (with regular cotton thread) and then whipstitching the panels together with the corespun cotton/poly thread recommended by the seller of the fabric. I'm not sure this is the best way to construct something that is supposed to resist the rain, but if I notice issues I can always add some sealing tape afterwards.

I do have a waterproof cape to wear in case of real rain, so this is only supposed to work for light rain anyway, and it may prove not to be an issue.

As something designed to be worn in light rain, this is also likely to be worn in low light conditions, where 100% black may not be the wisest look. On the vest I had added reflective piping to the armscyes, but I had run out of that piping.

from the front; a flash was used to take the picture, making the border of the cowl very visible.

I did however have a spool of reflector thread made of glass fibre by Rico Design, which I think was originally sold to be worked into knitting or crochet projects (it is now discontinued) and which I had never used.

I decided to try and sew a decorative blanket-stitch border, a decision I had reason to regret, since the thread broke and tangled like crazy; but in the end it was done, I like how it looks, and it seems pretty functional. I hope it won't break with time and use, and if it does I'll either fix it or try to redo it with something else.

Of course, the day I finished sewing the reflective border it stopped raining, so I haven't worn it yet, but I hope I'll be able to soon, and if it is a horrible failure I'll make sure to update this post.


  1. and I’ve just realized that I haven’t migrated that pattern to my pattern website, and I should do that. just don’t hold your breath for it to happen O:-). And for the time being it will not have step-by-step pictures, as I currently don’t need another vest.↩︎

  2. and February of course means a weekend in front of a screen that is showing a live-streamed conference.↩︎

  3. and of course I updated the pattern with instructions on how to add a lining.↩︎

23 February, 2025 12:00AM

February 21, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

haskell streaming libraries

For my PhD, my colleagues/collaborators and I built a distributed stream-processing system using Haskell. There are several other Haskell stream-processing systems. How do they compare?

First, let's briefly discuss and define streaming in this context.

Structure and Interpretation of Computer Programs introduces streams as an analogue of lists, to support delayed evaluation. In brief, the inductive list type (a list is either an empty list or a head element prepended to another list) is replaced with a structure holding a head element and a promise which, when evaluated, will generate the tail (which in turn may have a head element and a promise to generate another tail, culminating in the equivalent of an empty list). Later on, SICP also covers lazy evaluation.
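
As a minimal sketch of that idea in Haskell (my sketch, not SICP's; the "promise" comes for free here, since Haskell evaluates lazily by default):

data Stream a = Cons a (Stream a)

-- An infinite stream of naturals; only the prefix we demand is evaluated.
nats :: Stream Integer
nats = go 0
  where go n = Cons n (go (n + 1))

-- Force the first n elements into an ordinary list.
takeS :: Int -> Stream a -> [a]
takeS 0 _          = []
takeS n (Cons x s) = x : takeS (n - 1) s

-- takeS 5 nats == [0,1,2,3,4]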

However, the streaming we're talking about originates in the relational community, rather than the functional one, and is subtly different. It's about building a pipeline of processing that receives and emits data but doesn't need to (indeed, cannot) reference the whole stream (which may be infinite) at once.

Haskell streaming systems

Now let's go over some Haskell streaming systems.

conduit (2011-)

Conduit is the oldest of the ones I am reviewing here, but I doubt it's the first in the Haskell ecosystem. If I've made any obvious omissions, please let me know!

Conduit provides a new set of types to model streaming data, and a completely new set of functions which are analogues of standard Prelude functions, e.g. sumC in place of sum. It provides its own combinator(s) such as .| (aka fuse), which is like composition but reads left-to-right.
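
To give a flavour, here's a sketch (untested; it assumes the Conduit module from a recent conduit release, which re-exports yieldMany, mapC and sumC):

import Conduit

-- Sum the squares of 1..100: a source, a transformer and a sink,
-- fused with .| and read left-to-right.
main :: IO ()
main = do
  total <- runConduit $ yieldMany [1 .. 100 :: Int]
                     .| mapC (^ 2)
                     .| sumC
  print total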

The motivation for this is to enable (near?) constant memory usage when processing large streams of data -- presumably versus a list-based approach -- and to provide some determinism: the README gives the example of "promptly closing file handles". I think this is another way of saying that it uses strict evaluation, or at least avoids lazy evaluation for some things.

Conduit offers interleaved effects, which is to say that IO can be performed mid-stream.

Conduit supports distributed operation via Data.Conduit.Network in the conduit-extra package. Michael Snoyman, principal Conduit author, wrote up how to use it here: https://www.yesodweb.com/blog/2014/03/network-conduit-async To write a distributed Conduit application, the application programmer must manually determine the boundaries between the clients/servers and write specific code to connect them.
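
Snoyman's post shows an echo server in this style; roughly (a sketch, assuming conduit-extra's Data.Conduit.Network):

{-# LANGUAGE OverloadedStrings #-}
import Conduit
import Data.Conduit.Network

-- A toy echo server: bytes read from the socket are streamed straight back.
main :: IO ()
main = runTCPServer (serverSettings 4000 "*") $ \app ->
  runConduit $ appSource app .| appSink app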

pipes (2012-)

The Pipes Tutorial contrasts itself with "Conventional Haskell stream programming": whether that means Conduit or something else, I don't know.

Paraphrasing their pitch: effects, streaming, composability: pick two. That's the situation they describe for stream programming prior to Pipes. They argue Pipes offers all three.

Pipes offers its own combinators (which read left-to-right) and offers interleaved effects.
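
The tutorial's classic echo example gives the flavour (a sketch, using Pipes.Prelude; note >-> composing left-to-right, with IO interleaved mid-stream):

import Pipes
import qualified Pipes.Prelude as P

-- Copy stdin to stdout until the user types "quit".
main :: IO ()
main = runEffect $ P.stdinLn >-> P.takeWhile (/= "quit") >-> P.stdoutLn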

At this point I can't really see what fundamentally distinguishes Pipes from Conduit.

Pipes has some support for distributed operation via the sister library pipes-network. It looks like you must send and receive ByteStrings, which means rolling your own serialisation for other types. As with Conduit, to send or receive over a network, the application programmer must divide their program up into the sub-programs for each node, and add the necessary ingress/egress code.

io-streams (2013-)

io-streams emphasises simple primitives. Reading and writing is done under the IO monad, thus in an effectful (but non-pure) context. The presence or absence of further stream data is signalled by using the Maybe type (Just more data, or Nothing: the producer has finished).

It provides a library of functions that shadow the standard Prelude, such as S.fromList, S.mapM, etc.
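
A sketch of the style (assuming the System.IO.Streams API, imported qualified; note that reading consumes the stream):

import qualified System.IO.Streams as Streams

main :: IO ()
main = do
  input  <- Streams.fromList [1, 2, 3 :: Int]
  scaled <- Streams.map (* 10) input
  Streams.read scaled >>= print    -- Just 10
  Streams.toList scaled >>= print  -- [20,30]: whatever remained
  -- a further read would yield Nothing: the producer has finished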

It's not clear to me what the motivation for io-streams is, beyond providing a simple interface. There's no declaration of intent that I can find about (e.g.) constant-memory operation.

There's no mention of or support (that I can find) for distributed operation.

streaming (2015-)

Similar to io-streams, Streaming emphasises providing a simple interface that gels well with traditional Haskell methods. Streaming provides effectful streams (via a Monad -- any Monad?) and a collection of functions for manipulating streams which are designed to closely mimic standard Prelude (and Data.List) functions.

Streaming doesn't push its own combinators: the examples provided use $ and read right-to-left.
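
For instance (a sketch, assuming Streaming.Prelude imported qualified as S):

import qualified Streaming.Prelude as S

-- Print the first five even naturals; plain $ throughout,
-- reading right-to-left like ordinary Data.List code.
main :: IO ()
main = S.print $ S.take 5 $ S.filter even $ S.each [0 :: Int ..]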

The motivation for Streaming seems to be to avoid memory leaks caused by extracting pure lists from IO with traditional functions like mapM, which require all the list constructors to be evaluated, the list to be completely deconstructed, and then a new list constructed.

Like io-streams, the focus of the library is providing a low-level streaming abstraction, and there is no support for distributed operation.

streamly (2017-)

Streamly appears to have the grand goal of providing a unified programming tool, equally suited to quick-and-dirty programming tasks (normally the domain of scripting languages) and to high-performance work (the domain of C, Java, Rust, etc.). Their intended audience appears to be everyone, or at least not just existing Haskell programmers. See their rationale.

Streamly offers an interface to permit composing concurrent (note: not distributed) programs via combinators. It relies upon fusing a streaming pipeline to remove intermediate list structure allocations and de-allocations (i.e. deforestation, similar to GHC rewrite rules).

The examples I've seen use standard combinators (e.g. Data.Function.&, which reads left-to-right, and Applicative).
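
For instance (a sketch against the newer streamly-core modules Streamly.Data.Stream and Streamly.Data.Fold; older releases used Streamly.Prelude instead):

import Data.Function ((&))
import qualified Streamly.Data.Fold as Fold
import qualified Streamly.Data.Stream as Stream

-- Sum of squares over a fused pipeline; & reads left-to-right.
main :: IO ()
main = do
  total <- Stream.fromList [1 .. 100 :: Int]
             & fmap (^ 2)
             & Stream.fold Fold.sum
  print total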

Streamly provides benchmarks versus Haskell pure lists, Streaming, Pipes and Conduit: these generally show Streamly several orders of magnitude faster.

I'm finding it hard to evaluate Streamly. It's big, and its focus is wide. It provides shadows of Prelude functions, as many of these libraries do.

wrap-up

It seems almost a rite of passage to write a streaming system in Haskell. Stones and glass houses; I'm guilty of that too.

The focus of the surveyed libraries is mostly on providing a streaming abstraction, normally with an analogous interface to standard Haskell lists. They differ on various philosophical points (whether to abstract away the mechanics behind type synonyms, how much to leverage existing Haskell idioms, etc). A few of the libraries have some rudimentary support for distributed operation, but this is limited to connecting separate nodes together: in some cases serialising data remains the application programmer's job, and in all cases the application programmer must manually carve up their processing according to a fixed idea of what nodes they are deploying to. They all define a fixed-function pipeline.

21 February, 2025 11:52AM

hackergotchi for Luke Faraone

Luke Faraone

I'm running for the OSI board... maybe

The Open Source Initiative has two classes of board seats: Affiliate seats, and Individual Member seats. 

In the upcoming election, each affiliate can nominate a candidate, and each affiliate can cast a vote for the Affiliate candidates, but there's only 1 Affiliate seat available. I initially expressed interest in being nominated as an Affiliate candidate via Debian. But since Bradley Kuhn is also running for an Affiliate seat with a similar platform to me, especially with regards to the OSAID, I decided to run as part of an aligned "ticket" as an Individual Member to avoid contention for the 1 Affiliate seat.

Bradley and I discussed running on a similar ticket around 8/9pm Pacific, and I submitted my candidacy around 9pm PT on 17 February. 

I was dismayed when I received the following mail from Nick Vidal:

Dear Luke,

Thank you for your interest in the OSI Board of Directors election. Unfortunately, we are unable to accept your application as it was submitted after the official deadline of Monday Feb 17 at 11:59 pm UTC. To ensure a fair process, we must adhere to the deadline for all candidates.

We appreciate your enthusiasm and encourage you to stay engaged with OSI’s mission. We hope you’ll consider applying in the future or contributing in other meaningful ways.

Best regards,
OSI Election Teams

Nowhere on the "OSI’s board of directors in 2025: details about the elections" page do they list a timezone for closure of nominations; they simply list Monday 17 February. 

The OSI's contact address is in California, so it seems arbitrary and capricious to retroactively define all of these processes as being governed by UTC.

Accordingly, I was not able to participate in the "potential board director" info sessions, but people who attended report that the importance of accommodating differing timezones was discussed during the info session, and that OSI representatives mentioned they try to accommodate everyone's timezones. This seems in sharp contrast with the above policy.

I urge the OSI to reconsider this policy and allow me to stand for an Individual seat in the current cycle. 

21 February, 2025 10:35AM by Luke Faraone (noreply@blogger.com)