October 15, 2021

Sven Hoexter

ThinkPad P15v Gen1, Xorg and a Samsung QHD Display

I wasted quite a few hours until I found a working Modeline in this Stack Exchange post that makes the ThinkPad work with an HDMI-attached Samsung QHD display.

The internal display of the ThinkPad is an FHD panel detected as eDP-1; the external one is DP-3 and, according to the packaging, known by Samsung as S24A600NWU. The auto-detected EDID modes for QHD - 2560x1440 - did not work at all: the display simply stays dark. After a lot of back and forth with the i915 driver vs. nouveau vs. nvidia/nvidia-drm, with and without modesetting, the following Modeline did the trick:

xrandr --newmode 2560x1440_54.97  221.00  2560 2608 2640 2720  1440 1443 1447 1478  +HSync -VSync
xrandr --addmode DP-3 2560x1440_54.97
xrandr --output DP-3 --mode 2560x1440_54.97 --right-of eDP-1 --primary

Modelines for 50Hz and 60Hz generated with cvt 2560 1440 60 did not work, neither did the one extracted with edid-decode -X from the hex blob found in .local/share/xorg/Xorg.0.log.
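
If you want to inspect the EDID yourself, an easier route than hand-extracting the hex blob from the log is to read it straight from sysfs. A minimal sketch - the card/connector path is an assumption and will differ per system:

# read and decode the raw EDID of the external display (path may vary)
edid-decode /sys/class/drm/card0-DP-3/edid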

Of the auto-detected Modelines, FHD - 1920x1080 - did work. In case someone struggles with a similar setup, that might be a starting point. The fun part: if I attach my several-years-old Dell E7470, everything is just fine out of the box. But that one has only an Intel GPU, not the unholy combination I have here:

$ lspci|grep -E "VGA|3D"
00:02.0 VGA compatible controller: Intel Corporation CometLake-H GT2 [UHD Graphics] (rev 05)
01:00.0 3D controller: NVIDIA Corporation GP107GLM [Quadro P620] (rev ff)

15 October, 2021 02:12PM

Adnan Hodzic

Hello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

15 October, 2021 01:20PM by admin

October 13, 2021

Dirk Eddelbuettel

GitHub Streak: Round Eight

Seven years ago I referenced the Seinfeld Streak used in an earlier post of regular updates to the Rcpp Gallery:

This is sometimes called Jerry Seinfeld’s secret to productivity: Just keep at it. Don’t break the streak.

and then showed the first chart of GitHub streaking 366 days:

github activity october 2013 to october 2014

And six years ago a first follow-up appeared in this post about 731 days:

github activity october 2014 to october 2015

And five years ago we had a follow-up at 1096 days:

github activity october 2015 to october 2016

And four years ago we had another one marking 1461 days:

github activity october 2016 to october 2017

And three years ago another one for 1826 days:

github activity october 2017 to october 2018

And two years ago another one brought it to 2191 days:

github activity october 2018 to october 2019

And last year another one brought it to 2557 days:

github activity october 2019 to october 2020

And as today is October 12, here is the newest one from 2020 to 2021 with a new total of 2922 days:

github activity october 2020 to october 2021

Again, special thanks go to Alessandro Pezzè for the Chrome add-on GithubOriginalStreak.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 October, 2021 02:44AM

RcppQuantuccia 0.0.4 on CRAN: Updated Calendar

A new release of RcppQuantuccia arrived on CRAN earlier today. RcppQuantuccia brings the Quantuccia header-only subset / variant of QuantLib to R. At the current stage, it mostly offers date and calendaring functions.

This release is the first in two years and brings a few internal updates (such as a switch of continuous integration to the trusted r-ci setup) along with a first update of the United States calendar which, just like RQuantLib's, now knows about two new calendars, LiborUpdate and FederalReserve. So now we can, for example, look for holidays during June of next year under the ‘Federal Reserve’ calendar and see

> library(RcppQuantuccia)
> setCalendar("UnitedStates/FederalReserve")
> getHolidays(as.Date("2022-06-01"), as.Date("2022-06-30"))
[1] "2022-06-20"
> 

that Juneteenth 2022 will be observed on (Monday) June 20th.

We should note that Quantuccia itself was a bit of a trial balloon and is not actively maintained so we may concentrate on these calendaring functions to keep them in sync with QuantLib. Being a header-only subset is good, and the removal of the (very !!) “expensive” (in terms of compiled library size) Sobol sequence-based RNG in release 0.0.3 was the right call. So time permitting, a leaner, meaner RcppQuantuccia with a calendaring focus may emerge.

The complete list of changes follows.

Changes in version 0.0.4 (2021-10-12)

  • Allow for 'Null' calendar without weekends or holidays

  • Switch CI use to r-ci

  • Updated UnitedStates calendar to current QuantLib calendar

  • Small updates to DESCRIPTION and README.md

Courtesy of CRANberries, there is also a diffstat report relative to the previous release. More information is on the RcppQuantuccia page. Issues and bug reports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 October, 2021 12:38AM

October 12, 2021

Steinar H. Gunderson

Apache bug with mpm-itk

It seems there's a bug in Apache 2.4.49 (or newer) and mpm-itk; any forked child will segfault instead of exiting cleanly. This is, well, aesthetically not nice, and also causes problems with exit hooks for certain modules not being run.

It seems Apache upstream is on the case; from my limited understanding of the changes, there's not a lot mpm-itk as an Apache module can do here, so we'll just have to wait for upstream to deal with it. I hope we can get whatever fix in as a regression update to bullseye-security, though :-)

12 October, 2021 10:23PM

Antonio Terceiro

Triaging Debian build failure logs with collab-qa-tools

The Ruby team is now working on transitioning to ruby 3.0. Even though most packages will work just fine, a substantial number of packages require some work to adapt. We have been doing test rebuilds during transitions for a while, but usually triaged the problems manually.

This time I decided to try collab-qa-tools, a set of scripts Lucas Nussbaum uses when he does archive-wide rebuilds. I'm really glad that I did, because those tools save a lot of time when processing a large number of build failures. In this post, I will go through how to triage a set of build logs using collab-qa-tools.

I have made some improvements to the code. Given that my last merge request is very new and has not been merged yet, a few of the things I mention here may apply only to my own ruby3.0 branch.

collab-qa-tools also contains a few tools to perform the builds in the cloud, but since we already had the builds done, I will not be mentioning that part and will write exclusively about the triaging tools.

Installing collab-qa-tools

The first step is to clone the git repository. Make sure you have the dependencies from debian/control installed (a few Ruby libraries).

One of the patches I sent, and was already accepted, is the ability to run it without the need to install:

source /path/to/collab-qa-tools/activate.sh

This will add the tools to your $PATH.

Preparation

The first thing you need to do is get all your build logs into a directory. The tools assume the .log file extension, and the logs can be named ${PACKAGE}_*.log or just ${PACKAGE}.log.

Creating a TODO file

cqa-scanlogs | grep -v OK > todo

todo will contain one line for each log with a summary of the failure, if it's able to find one. collab-qa-tools has a large set of regular expressions for finding errors in build logs.

It's a good idea to split the TODO file into multiple ones. This can easily be done with split(1), and can be used to delimit triaging sessions and/or to split the triaging between multiple people. For example, this will split todo into todo00, todo01, ..., each containing 30 lines:

split --lines=30 --numeric-suffixes todo todo

Triaging

You can now do the triaging. Let's say we split the TODO files, and will start with todo01.

The first step is calling cqa-fetchbugs (it does what it says on the tin):

cqa-fetchbugs --TODO=todo01

Then, cqa-annotate will guide you through the logs and allow you to report bugs:

cqa-annotate --TODO=todo01

I wrote myself a process.sh wrapper script for cqa-fetchbugs and cqa-annotate that looks like this:

#!/bin/sh

set -eu

for todo in $@; do
  # force downloading bugs
  awk '{print(".bugs." $1)}' "${todo}" | xargs rm -f
  cqa-fetchbugs --TODO="${todo}"

  cqa-annotate \
    --template=template.txt.jinja2 \
    --TODO="${todo}"
done

The --template option is a recent contribution of mine. This is a template for the bug reports you will be sending. It uses Liquid templates, which is very similar to Jinja2 for Python. You will notice that I am even pretending it is Jinja2 to trick vim into doing syntax highlighting for me. The template I'm using looks like this:

From: {{ fullname }} <{{ email }}>
To: submit@bugs.debian.org
Subject: {{ package }}: FTBFS with ruby3.0: {{ summary }}

Source: {{ package }}
Version: {{ version | split:'+rebuild' | first }}
Severity: serious
Justification: FTBFS
Tags: bookworm sid ftbfs
User: debian-ruby@lists.debian.org
Usertags: ruby3.0

Hi,

We are about to enable building against ruby3.0 on unstable. During a test
rebuild, {{ package }} was found to fail to build in that situation.

To reproduce this locally, you need to install ruby-all-dev from experimental
on an unstable system or build chroot.

Relevant part (hopefully):
{% for line in extract %}> {{ line }}
{% endfor %}

The full build log is available at
https://people.debian.org/~kanashiro/ruby3.0/round2/builds/3/{{ package }}/{{ filename | replace:".log",".build.txt" }}

The cqa-annotate loop

cqa-annotate will parse each log file, display an extract of what it found as possibly being the relevant part, and wait for your input:

######## ruby-cocaine_0.5.8-1.1+rebuild1633376733_amd64.log ########
--------- Error:
     Failure/Error: undef_method :exitstatus

     FrozenError:
       can't modify frozen object: pid 2351759 exit 0
     # ./spec/support/unsetting_exitstatus.rb:4:in `undef_method'
     # ./spec/support/unsetting_exitstatus.rb:4:in `singleton class'
     # ./spec/support/unsetting_exitstatus.rb:3:in `assuming_no_processes_have_been_run'
     # ./spec/cocaine/errors_spec.rb:55:in `block (2 levels) in <top (required)>'

Deprecation Warnings:

Using `should` from rspec-expectations' old `:should` syntax without explicitly enabling the syntax is deprecated. Use the new `:expect` syntax or explicitly enable `:should` with `config.expect_with(:rspec) { |c| c.syntax = :should }` instead. Called from /<<PKGBUILDDIR>>/spec/cocaine/command_line/runners/backticks_runner_spec.rb:19:in `block (2 levels) in <top (required)>'.


If you need more of the backtrace for any of these deprecations to
identify where to make the necessary changes, you can configure
`config.raise_errors_for_deprecations!`, and it will turn the
deprecation warnings into errors, giving you the full backtrace.

1 deprecation warning total

Finished in 6.87 seconds (files took 2.68 seconds to load)
67 examples, 1 failure

Failed examples:

rspec ./spec/cocaine/errors_spec.rb:54 # When an error happens does not blow up if running the command errored before execution

/usr/bin/ruby3.0 -I/usr/share/rubygems-integration/all/gems/rspec-support-3.9.3/lib:/usr/share/rubygems-integration/all/gems/rspec-core-3.9.2/lib /usr/share/rubygems-integration/all/gems/rspec-core-3.9.2/exe/rspec --pattern ./spec/\*\*/\*_spec.rb --format documentation failed
ERROR: Test "ruby3.0" failed:
----------------
ERROR: Test "ruby3.0" failed:      Failure/Error: undef_method :exitstatus
----------------
package: ruby-cocaine
lines: 30
------------------------------------------------------------------------
s: skip
i: ignore this package permanently
r: report new bug
f: view full log
------------------------------------------------------------------------
Action [s|i|r|f]:

You can then choose one of the options:

  • s - skip this package and do nothing. You can run cqa-annotate again later and come back to it.
  • i - ignore this package completely. New runs of cqa-annotate won't ask about it again.

    This is useful if the package only fails in your rebuilds due to another package, and would just work when that other package gets fixed. In the Ruby transition this happens when A depends on B, while B builds a C extension and failed to build against the new Ruby. So once B is fixed, A should just work (in principle). But even if A had problems of its own, we can't really know until B is fixed and we can retry A.

  • r - report a bug. cqa-annotate will expand the template with the data from the current log, and feed it to mutt. This is currently a limitation: you have to use mutt to report bugs.

    After you report the bug, cqa-annotate will ask if it should edit the TODO file. In my opinion it's best to not do this, and annotate the package with a bug number when you have one (see below).

  • f - view the full log. This is useful when the extract displayed doesn't have enough info, or you want to inspect something that happened earlier (or later) during the build.

When there are existing bugs in the package, cqa-annotate will list them among the options. If you choose a bug number, the TODO file will be annotated with that bug number and new runs of cqa-annotate will not ask about that package anymore. For example, after I reported a bug for ruby-cocaine for the issue listed above, I aborted with a ctrl-c, and when I ran my process.sh script again I got this prompt:

----------------
ERROR: Test "ruby3.0" failed:      Failure/Error: undef_method :exitstatus
----------------
package: ruby-cocaine
lines: 30
------------------------------------------------------------------------
s: skip
i: ignore this package permanently
1: 996206 serious ruby-cocaine: FTBFS with ruby3.0: ERROR: Test "ruby3.0" failed:      Failure/Error: undef_method :exitstatus ||
r: report new bug
f: view full log
------------------------------------------------------------------------
Action [s|i|1|r|f]:

Choosing 1 will annotate the TODO file with the bug number, and I'm done with this package. Only a few hundred more to go.

12 October, 2021 08:30AM

October 10, 2021

Ben Hutchings

Debian LTS work, September 2021

In September I was assigned 12.75 hours of work by Freexian's Debian LTS initiative and carried over 18 hours from earlier months. I worked 2 hours and will carry over the remainder.

I started work on an update to the linux package, but did not make an upload yet.

10 October, 2021 12:03PM

Norbert Preining

TeX Live contrib archive available via CTAN mirrors

The TeX Live contrib repository has for many years been a valuable source of packages that cannot enter TeX Live proper due to license restrictions etc. I took over its maintenance from Taco in 2017, and since then the repository has been available via my server. For a few weeks now, tlcontrib has also been available via the CTAN mirror network, the Comprehensive TeX Archive Network.

Thanks to the CTAN team, who offered to mirror tlcontrib, users can get much faster (and more reliable) access via the mirrors by adding tlcontrib as an additional repository source for tlmgr, either permanently via:

tlmgr repository add https://mirrors.ctan.org/systems/texlive/tlcontrib tlcontrib

or via a one-shot

tlmgr --repository https://mirrors.ctan.org/systems/texlive/tlcontrib install PACKAGE

The list of packages can be seen here, and includes besides others:

  • support for commercial fonts (lucida, garamond, …)
  • Noto condensed
  • various sets of programs around acrotex

(and much more!).

You can install all packages from the repository by installing the new collection-contrib.
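
Assuming tlcontrib has been added as a repository as shown above, that is a single command (a minimal sketch):

tlmgr install collection-contrib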

Thanks to the whole CTAN team, and please switch your repositories to the CTAN mirror to take load off my server. Thanks a lot!

Enjoy.

10 October, 2021 03:39AM by Norbert Preining

October 09, 2021

Thorsten Alteholz

My Debian Activities in September 2021

FTP master

This month I accepted 224 and rejected 47 packages. This is almost thrice the rejects of last month. Please, be more careful and check your package twice before uploading. The overall number of packages that got accepted was 233.

Debian LTS

This was my eighty-seventh month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 24.75h. During that time I did LTS and normal security uploads of:

  • [DLA 2755-1] btrbk security update for one CVE
  • [DLA 2762-1] grilo security update for one CVE
  • [DLA 2766-1] openssl security update for one CVE
  • [DLA 2774-1] openssl1.0 security update for one CVE
  • [DLA 2773-1] curl security update for two CVEs

I also started to work on exiv2 and faad2.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the thirty-ninth ELTS month.

Unfortunately during my allocated time I could not process any upload. I worked on openssl, curl and squashfs-tools but for one reason or another the prepared packages didn’t pass all tests. In order to avoid regressions, I postponed the uploads (meanwhile an ELA for curl was published …).

Last but not least I did some days of frontdesk duties.

Other stuff

On my never-ending golang challenge I again uploaded some packages, either to NEW or as source uploads.

As Odyx took a break from all Debian activities, I volunteered to take care of the printing packages. Please be merciful when something breaks after one of my uploads. My first printing upload was hplip.

09 October, 2021 07:42PM by alteholz

Ritesh Raj Sarraf

Lotus to Lily

The Lotus story so far

My very first experience with water flowering plants was pretty good. I learnt a good deal of things: setting up the pond, germinating the lotus seeds, choosing the right soil, witnessing the growth of the lotus plant, and building the fish ecosystem to take care of the pond. Overall, a lot of things learnt.

But I couldn’t succeed in getting the Lotus to flower, for a number of reasons. The granite container developed some leakage, which I had to fix by emptying it, and that might have caused some shock to the lotus. But more than that, in my understanding, the reason the lotus did not flower was the amount of sunlight. From what I have learned, these plants need a minimum of 6-8 hrs of sunlight to really reward you with flowers, whereas my pond was set up on the ground with hardly 3-4 hrs of sun. And that too, with all the plants growing around it, was mostly indirect sunlight.

Lotus to Lily

For my new setup, I chose a large oval container. And this one, I placed on my terrace, carefully choosing a spot where it’d get 6-8 hrs of very bright sun on usual days. Other than that, the rest of the setup is pretty similar to my previous setup in the garden. Guppies, Solar Water Fountain etc.

Initial lily pond setup

The good thing about the terrace is that the setup gets an ample amount of sun. You can see that in the picture above, with the amount of algae that has formed, something that is vital for the plant’s ecosystem.

I must thank my wonderful neighbor, who kindly shared a sapling from their lily plant. They had already had success with flowering the lily, so I had high hopes to see the day come when I’d be happy to write down my experience in this blog post. A lot of patience was needed, though: I got the lily some time in January this year, and it blossomed now, in October.

So, here’s me sharing my happiness, in the order in which I documented the process.

Monday morning greeted with a blossomed lily

Lily Blossom Closeup

Beautiful water reflection

Dawn to Dusk

The other thing that I learned in this whole lily episode is that the flower goes back to sleep at dusk, and back to flowering again at dawn. There’s so much to learn in our surroundings, if only we spare some time for the little things in mother nature.

Lily status at dusk

Lily the next day

Not sure how long this phenomenon is to last, but overall witnessing this whole process has been mesmerizing.

This past week has been great. 🙏🏼

09 October, 2021 04:22PM by Ritesh Raj Sarraf (rrs@researchut.com)

Dirk Eddelbuettel

corels 0.0.3 on CRAN: Update

An updated version of the corels package is now on CRAN!

The change is chiefly an updated configure.ac (just like RcppGSL yesterday, RQuantLib two days ago, and littler three days ago).

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

09 October, 2021 02:45AM

October 08, 2021

Neil Williams

Using Salsa with contrib and non-free

OK, I know contrib and non-free aren't popular topics for many, but I've had to sort out some simple CI for such contributions and I thought it best to document how to get it working. You will need access to the GitLab settings for the project in Salsa - or ask someone to add some CI/CD variables on your behalf. (If CI isn't running at all, the settings will need to be modified to enable debian/salsa-ci.yml first, in the same way as for packages in main.)

The default Salsa config (debian/salsa-ci.yml) won't get a passing build for packages in contrib or non-free:

# For more information on what jobs are run see:
# https://salsa.debian.org/salsa-ci-team/pipeline
#
---
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/salsa-ci.yml
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/pipeline-jobs.yml

Variables need to be added. piuparts can use the extra contrib and non-free components directly from these variables.

variables:
   RELEASE: 'unstable'
   SALSA_CI_COMPONENTS: 'main contrib non-free'

Many packages in contrib and non-free only support amd64 - so the i386 build job needs to be removed from the pipeline by extending the variables dictionary:

variables:
   RELEASE: 'unstable'
   SALSA_CI_COMPONENTS: 'main contrib non-free'
   SALSA_CI_DISABLE_BUILD_PACKAGE_I386: 1

The extra step is to add the apt source file variable to the CI/CD settings for the project.

The CI/CD settings are at a URL like:

https://salsa.debian.org/<team>/<project>/-/settings/ci_cd

Expand the section on Variables and add a variable of type "File":

Key: SALSA_CI_EXTRA_REPOSITORY

Value: deb https://deb.debian.org/debian/ sid contrib non-free

The pipeline will run at the next push - alternatively, the CI/CD pipelines page has a "Run Pipeline" button. The settings added to the main CI/CD settings will be applied, so there is no need to add a variable at this stage. (This can be used to test the variables themselves but only with manually triggered CI pipelines.)

For more information and additional settings (for example disabling or allowing certain jobs to fail), check https://salsa.debian.org/salsa-ci-team/pipeline

08 October, 2021 03:45PM by Neil Williams

Chris Lamb

Reproducible Builds: Increasing the Integrity of Software Supply Chains (2021)

I didn't blog about it at the time, but a paper I co-authored with Stefano Zacchiroli was accepted by IEEE Software in April of this year. Titled Reproducible Builds: Increasing the Integrity of Software Supply Chains, the abstract of the paper is as follows:

Although it is possible to increase confidence in Free and Open Source Software (FOSS) by reviewing its source code, trusting code is not the same as trusting its executable counterparts. These are typically built and distributed by third-party vendors with severe security consequences if their supply chains are compromised.

In this paper, we present reproducible builds, an approach that can determine whether generated binaries correspond with their original source code. We first define the problem and then provide insight into the challenges of making real-world software build in a "reproducible" manner — that is, when every build generates bit-for-bit identical results. Through the experience of the Reproducible Builds project making the Debian Linux distribution reproducible, we also describe the affinity between reproducibility and quality assurance (QA).

The full text of the paper can be found in PDF format and should appear, with an alternative layout, within a forthcoming issue of the physical IEEE Software magazine.

08 October, 2021 02:22PM

Dirk Eddelbuettel

RcppGSL 0.3.10: Small Update

A new release 0.3.10 of RcppGSL is now on CRAN. The RcppGSL package provides an interface from R to the GNU GSL by relying on the Rcpp package.

This release brings a requested configure.ac update (just like RQuantLib yesterday and littler two days ago, along with the at-work tiledb update today). It also adds a small testing improvement. No user-visible changes, no new features. Details follow from the NEWS file.

Changes in version 0.3.10 (2021-10-07)

  • Tests of the client package now skip if no LIB_GSL is set

  • The configure files were updated to the standard of version 2.69 following a CRAN request

Courtesy of CRANberries, a summary of changes in the most recent release is also available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

08 October, 2021 02:10AM

Reproducible Builds (diffoscope)

diffoscope 187 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 187. This version includes the following changes:

* Add support for comparing .pyc files. Thanks to Sergei Trofimovich.
  (Closes: reproducible-builds/diffoscope#278)

You can find out more by visiting the project homepage.

08 October, 2021 12:00AM

October 07, 2021

Kentaro Hayashi

Sharing mentoring a new Debian contributor experience, lots of fun

I recently mentored a new Debian contributor. This was carried out within the OSS Gate on-boarding framework.

oss-gate.github.io

In "OSS Gate on-boarding", recruit a new contributor who want to work on continuously. Then, corporation sponsor its employee as a mentor. Thus, employees can do it as a one of their job.

During the Aug - Oct period, I worked with a new Debian contributor for 2h each week. This experience was lots of fun, and I learned new things myself.

The most important point is that the new Debian contributor aims to continue their work even though the mentoring period has finished.

So, some of the work has been finished, but not all of it. I tried to transfer the knowledge needed to carry on.

I'm looking forward to him moving things forward with the help of others.

Here is the report about my activity as a mentor.

First OSS Gate onboarding (the article is written in Japanese)

The original blog entry is written in Japanese; I can't afford to translate it, so I'll just paste a link to Google Translate for your hints.

I hope someone can make a similar attempt too!

For the record, I worked with a new Debian contributor about:

07 October, 2021 08:19AM

Dirk Eddelbuettel

RQuantLib 0.4.14: More Calendars plus Update

A new release 0.4.14 of RQuantLib was uploaded to CRAN earlier today, and has by now been uploaded to Debian as well.

QuantLib is a very comprehensice free/open-source library for quantitative finance; RQuantLib connects it to the R environment and language.

The release of RQuantLib comes just one month after the previous release, and brings three changes. First, we added two more US-based calendars (including ‘FederalReserve’) along with a bunch of not-yet-included other calendars, which should complete the coverage in the R package relative to the upstream library. Should we have forgotten any, feel free to open an issue. Second, CRAN currently aims to have older autoconf conventions updated and notified maintainers of affected packages. I received a handful of these and, just like with yesterday’s update to littler, refreshed this here. Third, we set up automated container builds on GitHub. No other changes were made; details follow.
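
As a quick sketch of what the new calendar enables, mirroring the RcppQuantuccia example elsewhere on this page - getHolidayList() is RQuantLib's existing calendar helper, so treat the exact call and output as an assumption:

library(RQuantLib)
## list holidays observed by the Federal Reserve in June 2022
getHolidayList("UnitedStates/FederalReserve",
               from = as.Date("2022-06-01"), to = as.Date("2022-06-30"))
## expected: "2022-06-20" (Juneteenth 2022, observed on the Monday)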

Changes in RQuantLib version 0.4.14 (2021-10-06)

  • Changes in RQuantLib code:

    • Several new calendars were added (Dirk in #159 closing #155)
  • Changes in RQuantLib package and setup:

    • Docker containers are now updated on a monthly schedule via GitHub Actions

    • The configure files were updated to the standard of version 2.69 following a CRAN request

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

07 October, 2021 12:18AM

October 06, 2021

Matthew Palmer

Discovering AWS IAM accounts

Let’s say you’re someone who happens to discover an AWS account number, and would like to take a stab at guessing what IAM users might be valid in that account. Tricky problem, right? Not with this One Weird Trick!

In your own AWS account, create a KMS key and try to reference an ARN representing an IAM user in the other account as the principal. If the policy is accepted by PutKeyPolicy, then that IAM user exists, and if the error says “Policy contains a statement with one or more invalid principals” then the user doesn’t exist.

As an example, say you want to guess at IAM users in AWS account 111111111111. Then make sure this statement is in your key policy:

{
  "Sid": "Test existence of user",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111111111111:user/bob"
  },
  "Action": "kms:DescribeKey",
  "Resource": "*"
}

If that policy is accepted, then the account has an IAM user named bob. Otherwise, the user doesn’t exist. Scripting this is left as an exercise for the reader.
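
If you'd like a starting point for that exercise, here is a rough, untested sketch using the AWS CLI. The key ID, account numbers and names.txt wordlist are all placeholders; the extra root-account statement is there because KMS normally refuses policies that would lock you out of your own key:

#!/bin/sh
# Sketch only: probe candidate IAM user names in account 111111111111.
# KEY_ID is a KMS key in *your* account; MY_ACCOUNT is your own account
# number; names.txt is a wordlist -- all of these are placeholders.
KEY_ID="00000000-0000-0000-0000-000000000000"
MY_ACCOUNT="222222222222"
while read -r name; do
  cat > /tmp/probe-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Keep control of our own key",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::${MY_ACCOUNT}:root"},
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Sid": "Test existence of user",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111111111111:user/${name}"},
      "Action": "kms:DescribeKey",
      "Resource": "*"
    }
  ]
}
EOF
  # PutKeyPolicy succeeds only if every principal resolves
  if aws kms put-key-policy --key-id "$KEY_ID" --policy-name default \
      --policy file:///tmp/probe-policy.json >/dev/null 2>&1; then
    echo "exists: ${name}"
  fi
done < names.txt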

Sadly, wildcards aren’t accepted in the username portion of the ARN, otherwise you could do some funky searching with ...:user/a*, ...:user/b*, etc. You can’t have everything; where would you put it all?

I did mention this to AWS as an account enumeration risk. They’re of the opinion that it’s a good thing you can know what users exist in random other AWS accounts. I guess that means this is a technique you can put in your toolbox safe in the knowledge it’ll work forever.

Given this is intended behaviour, I assume you don’t need to use a key policy for this, but that’s where I stumbled over it. Also, you can probably use it to enumerate roles and anything else that can be a principal, but since I don’t see as much use for that, I didn’t bother exploring it.

There you are, then. If you ever need to guess at IAM users in another AWS account, now you can!

06 October, 2021 11:30PM by Matt Palmer (mpalmer@hezmatt.org)

Thomas Goirand

OpenStack Xena, the 24th OpenStack release, is out

It was out at 3pm, and I managed to finish uploading the last bits to Unstable at 9pm… Of course, that’s because all of the packaging and testing work was done before the release date. All of it is, as usual, also available through a Bullseye non-official backports repository that can be added using extrepo (ie: “extrepo enable openstack_xena”).

06 October, 2021 09:26PM by Goirand Thomas

Infomaniak launches its public IaaS cloud with ground breaking prices

My employer, the biggest Swiss server hosting company, Infomaniak, has just opened registration for its new IaaS (Infrastructure as a Service) OpenStack-based public cloud. Well, in fact, it has been open for a week or so. Previously, it was only in beta (during that beta period, we hosted, for free, the whole Debconf 21 infrastructure). Nothing really new in the market, except that it is by far cheaper than most (if not all) of its (OpenStack-based or not) competitors, including AWS, GCE or Azure.

Also, everything is hosted in Switzerland, in our own data centers, where data protection is written in the law (and Infomaniak often advertises about data privacy: this is real here…).

Not only is Infomaniak (by far…) the cheapest offer on the market (including a 300 CHF free tier: enough for our smallest VM for a full year), but we also have very good technical support, and the hardware we use is top notch:

  • 6th Gen NVMe (read intensive) Ceph-based block devices
  • AMD Epyc CPU (128 threads per server)
  • 2x 25Gbits/s (using BGP-to-the-host networking)

Some of our customers didn’t even believe how we could do such pricing. Well, the reason is simple: most of our competitors are simply really overpriced and are making too much money. Since we’re late to the market, and newer hardware (with many cores on a single server) makes it possible to increase density without too much over-commit, my bosses decided that since we could, we would be the cheapest! Hopefully, this will work as a good business strategy.

All of that public cloud infrastructure has been set up with OpenStack Cluster Installer, for which I’m the main author, and which is fully in Debian. All of this is running on a plain, unmodified Debian Bullseye (well, with a few OpenStack packages a little bit more up-to-date, but really not much, and all of that is publicly available…).

Last, choosing the cheapest and best offer is also a good action: it promotes OpenStack and cloud computing in Debian, which I believe is the least vendor locked-in IaaS solution.

06 October, 2021 09:23PM by Goirand Thomas

Reproducible Builds

Reproducible Builds in September 2021

The goal behind “reproducible builds” is to ensure that no deliberate flaws have been introduced during compilation processes, via promising or mandating that identical results are always generated from a given source. This allows multiple third parties to come to an agreement on whether a build was compromised or not via a system of distributed consensus.

In these reports we outline the most important things that have been happening in the world of reproducible builds in the past month:


First mentioned in our March 2021 report, Martin Heinz published two blog posts on sigstore, a project that endeavours to offer software signing as a “public good, [the] software-signing equivalent to Let’s Encrypt”. The first post, entitled Sigstore: A Solution to Software Supply Chain Security, outlines more about the project and justifies its existence:

Software signing is not a new problem, so there must be some solution already, right? Yes, but signing software and maintaining keys is very difficult especially for non-security folks and UX of existing tools such as PGP leave much to be desired. That’s why we need something like sigstore - an easy to use software/toolset for signing software artifacts.

The second post (titled Signing Software The Easy Way with Sigstore and Cosign) goes into some technical details of getting started.


There was an interesting thread in the /r/Signal subreddit that started from the observation that Signal’s apk doesn’t match with the source code:

Some time ago I checked Signal’s reproducibility and it failed. I asked others to test in case I did something wrong, but nobody made any reports. Since then I tried to test the Google Play Store version of the apk against one I compiled myself, and that doesn’t match either.


BitcoinBinary.org was announced this month, which aims to be a “repository of Reproducible Build Proofs for Bitcoin Projects”:

Most users are not capable of building from source code themselves, but we can at least get them able enough to check signatures and shasums. When reputable people who can tell everyone they were able to reproduce the project’s build, others at least have a secondary source of validation.


Distribution work

Frédéric Pierret announced a new testing service at beta.tests.reproducible-builds.org, showing actual rebuilds of binaries distributed by both the Debian and Qubes distributions.

In Debian specifically, however, 51 reviews of Debian packages were added, 31 were updated and 31 were removed this month to our database of classified issues. As part of this, Chris Lamb refreshed a number of notes, including the build_path_in_record_file_generated_by_pybuild_flit_plugin issue.

Elsewhere in Debian, Roland Clobus posted his Fourth status update about reproducible live-build ISO images in Jenkins to our mailing list, which mentions (amongst other things) that:

  • All major configurations are still built regularly using live-build and bullseye.
  • All major configurations are reproducible now; Jenkins is green.
    • I’ve worked around the issue for the Cinnamon image.
    • The patch was accepted and released within a few hours.
  • My main focus for the last month was on the live-build tool itself.

Related to this, there was continuing discussion on how to embed/encode the build metadata for the Debian “live” images which were being worked on by Roland Clobus.


Ariadne Conill published another detailed blog post related to various security initiatives within the Alpine Linux distribution. After summarising some conventional security work being done (eg. with sudo and the release of OpenSSL version 3.0), Ariadne included another section on reproducible builds: “The main blocker [was] determining what to do about storing the build metadata so that a build environment can be recreated precisely”.

Finally, Bernhard M. Wiedemann posted his monthly reproducible builds status report.


Community news

On our website this month, Bernhard M. Wiedemann fixed some broken links [] and Holger Levsen made a number of changes to the Who is Involved? page [][][]. On our mailing list, Magnus Ihse Bursie started a thread with the subject Reproducible builds on Java, which begins as follows:

I’m working for Oracle in the Build Group for OpenJDK which is primary responsible for creating a built artifact of the OpenJDK source code. […] For the last few years, we have worked on a low-effort, background-style project to make the build of OpenJDK itself building reproducible. We’ve come far, but there are still issues I’d like to address. []


diffoscope

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 183, 184 and 185 as well as performed significant triaging of merge requests and other issues in addition to making the following changes:

  • New features:

    • Support a newer format version of the R language’s .rds files. []
    • Update tests for OCaml 4.12. []
    • Add a missing format_class import. []
  • Bug fixes:

    • Don’t call close_archive when garbage collecting Archive instances, unless open_archive definitely returned successfully. This prevents, for example, an AttributeError where PGPContainer’s cleanup routines were rightfully assuming that its temporary directory had actually been created. []
    • Fix (and test) the comparison of R language’s .rdb files after refactoring temporary directory handling. []
    • Ensure that “RPM archives” exists in the Debian package description, regardless of whether python3-rpm is installed or not at build time. []
  • Codebase improvements:

    • Use our assert_diff routine in tests/comparators/test_rdata.py. []
    • Move diffoscope.versions to diffoscope.tests.utils.versions. []
    • Reformat a number of modules with Black. [][]

However, the following changes were also made:

  • Mattia Rizzolo:

    • Fix an autopkgtest caused by the androguard module not being in the (expected) python3-androguard Debian package. []
    • Appease a shellcheck warning in debian/tests/control.sh. []
    • Ignore a warning from h5py in our tests that doesn’t concern us. []
    • Drop a trailing .1 from the Standards-Version field as it’s not required. []
  • Zbigniew Jędrzejewski-Szmek:

    • Stop using the deprecated distutils.spawn.find_executable utility. [][][][][]
    • Adjust an LLVM-related test for LLVM version 13. []
    • Update invocations of llvm-objdump. []
    • Adjust a test with a one-byte text file for file version 5.40. []

And, finally, Benjamin Peterson added a --diff-context option to control unified diff context size [] and Jean-Romain Garnier fixed the Macho comparator for architectures other than x86-64 [].


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Testing framework

The Reproducible Builds project runs a testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:

  • Holger Levsen:

    • Drop my package rebuilder prototype as it’s not useful anymore. []
    • Schedule old packages in Debian bookworm. []
    • Stop scheduling packages for Debian buster. [][]
    • Don’t include PostgreSQL debug output in package lists. []
    • Detect Python library mismatches during build in the node health check. []
    • Update a note on updating the FreeBSD system. []
  • Mattia Rizzolo:

    • Silence a warning from Git. []
    • Update a setting to reflect that Debian bookworm is the new testing. []
    • Upgrade the PostgreSQL database to version 13. []
  • Roland Clobus (Debian “live” image generation):

    • Workaround non-reproducible config files in the libxml-sax-perl package. []
    • Use the new DNS for the ‘snapshot’ service. []
  • Vagrant Cascadian:

    • Note that the armhf architecture also systematically varies by the kernel. []


Contributing

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

06 October, 2021 04:56PM

October 05, 2021

Steinar H. Gunderson

plocate 1.1.12 released

plocate 1.1.12 has been released, with some minor bugfixes and a minor new option.

More interesting is that plocate is now one year old! plocate 1.0.0 was released October 11th, 2020, so I'm maybe getting a bit ahead of myself, but it feels like a good milestone. I haven't really achieved my goal of being in the default Debian install, simply because there is too much resistance to having a default locate at all, but it's now hit most major distributions (thanks to a host of packagers) and has largely supplanted mlocate in general.

plocate still feels to me like the obvious way of doing a locate; like, “why didn't anyone just do this earlier”. But io_uring really couldn't have been done just a few years ago, and it added a few very interesting touches (both as a programmer, and for users). In general, it feels like plocate is “done”; it's doing one thing, doing it well, and there's nothing obvious I'm missing. (I keep getting bug reports, but they're getting increasingly obscure, and it's more like a trickle than a flood.) But I'd still love for basic UNIX tools to care more about performance—our data sets are bigger than ever, yet we wrote these tools for a time when our systems had just a few thousand files. The old UNIX brute-force mantra just isn't good enough in the 2020s. And we don't have the manpower (in terms of developer interest) to fix it.

05 October, 2021 10:09PM

October 04, 2021

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, August 2021

A Debian LTS logo

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian project funding

In August, we put aside 2460 EUR to fund Debian projects. We received a new project proposal that got approved and there’s an associated bid request if you feel like proposing yourself to implement this project.

We’re looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article.

Debian LTS contributors

In August, 14 contributors were paid to work on Debian LTS. Their reports are available:

  • Abhijith PA did 4.0h (out of 14h assigned and 5h from July), thus carrying over 15h to September.
  • Adrian Bunk did 11h (out of 23.75h assigned), thus carrying over 12.75h to September.
  • Anton Gladky did 12h (out of 12h assigned).
  • Ben Hutchings did 1.25h (out of 13.25h assigned and 6h from July), thus carrying over 18h to September.
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did not report back about their work, so we assume they did nothing (out of 23.75h assigned plus 50.75h from July), thus is carrying over 74.5h for September.
  • Holger Levsen did 3h (out of 12h assigned) to help coordinate the team, and gave back the remaining hours.
  • Lee Garrett did nothing (out of 23.75h assigned), thus is carrying over 23.75h for September.
  • Markus Koschany did 35h (out of 23.75h assigned and 30h from July), thus carrying over 18.75h to September.
  • Neil Williams did 24h (out of 23.75h assigned), thus anticipating 0.25h of October.
  • Roberto C. Sánchez did 22.25h (out of 23.75h assigned), thus carrying over 1.5h to September.
  • Sylvain Beucler did 21.5h (out of 23.75h assigned), thus carrying over 2.25h to September.
  • Thorsten Alteholz did 23.75h (out of 23.75h assigned).
  • Utkarsh Gupta did 23.75h (out of 23.75h assigned).

Evolution of the situation

In August we released 30 DLAs.

This is the first month of Jeremiah coordinating LTS contributors. We would like to thank Holger Levsen for his work on this role up to now.

Also, we would like to remark once again that we are constantly looking for new contributors. Please contact Jeremiah if you are interested!

The security tracker currently lists 73 packages with a known CVE and the dla-needed.txt file has 29 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.

04 October, 2021 05:04PM by Raphaël Hertzog

Jonathan Carter

Free Software Activities for 2021-09

Here’s a bunch of uploads for September. Mostly catching up with a few things after the Bullseye release.

2021-09-01: Upload package bundlewrap (4.11.2-1) to Debian unstable.

2021-09-01: Upload package calamares (3.2.41.1-1) to Debian unstable.

2021-09-01: Upload package gdisk (1.0.8-2) to Debian unstable (Closes: #993109).

2021-09-01: Upload package bcachefs-tools (0.1+git20201025.742dbbdb-1) to Debian unstable (Closes: #976474).

2021-09-02: Upload package fabulous (0.4.0+dfsg1-1) to Debian unstable (Closes: #983247).

2021-09-02: Upload package feed2toot (0.17-1) to Debian unstable.

2021-09-02: Merge MR!1 for fracplanet.

2021-09-02: Upload package fracplanet (0.5.1-6) to Debian unstable (Closes: #980808).

2021-09-02: Upload package toot (0.28.0-1) to Debian unstable.

2021-09-02: Upload package toot (0.28.0-2) to Debian unstable.

2021-09-02: Merge MR!1 for gnome-shell-extension-gamemode.

2021-09-02: Merge MR!1 for gnome-shell-extension-no-annoyance.

2021-09-02: Upload package gnome-shell-extension-no-annoyance (0+20210717-12dc667) to Debian unstable (Closes: #993193).

2021-09-02: Upload package gnome-shell-extension-gamemode (5-2) to Debian unstable.

2021-09-02: Merge MR!2 for gnome-shell-extension-harddisk-led.

2021-09-02: Upload package gnome-shell-extension-pixelsaver (1.24-2) to Debian unstable (Closes: #993195).

2021-09-02: Upload package gnome-shell-extension-dash-to-panel (43-1) to Debian unstable (Closes: #993058, #989546).

2021-09-02: Upload package gnome-shell-extension-harddisk-led (25-1) to Debian unstable (Closes: #993181).

2021-09-02: Upload package gnome-shell-extension-impatience (0.4.5+git20210412-e8e132f-1) to Debian unstable (Closes: #993190).

2021-09-02: Upload package s-tui (1.1.3-1) to Debian unstable.

2021-09-02: Upload package flask-restful (0.3.9-2) to Debian unstable.

2021-09-02: Upload package python-aniso8601 (9.0.1-2) to Debian unstable.

2021-09-03: Sponsor package fonts-jetbrains-mono (2.242+ds-1) for Debian unstable (Debian Mentors request).

2021-09-03: Sponsor package python-toml (0.10.2-1) for Debian unstable (Python team request).

2021-09-03: Sponsor package buildbot (3.3.0-1) for Debian unstable (Python team request).

2021-09-03: Sponsor package python-strictyaml (1.4.4-1) for Debian unstable (Python team request).

2021-09-03: Sponsor package python-absl (0.13.0-1) for Debian unstable (Python team request).

2021-09-03: Merge MR!1 for xabacus.

2021-09-03: Upload package aalib (1.4p5-49) to Debian unstable (Closes: #981503).

2021-09-03: File ROM for gnome-shell-extension-remove-dropdown-arrows (#993577, closing: #993196).

2021-09-03: Upload package bcachefs-tools (0.1+git20210805.6c42566-2) to Debian unstable.

2021-09-05: Upload package tuxpaint (0.9.26-1~exp1) to Debian experimental.

2021-09-05: Upload package tuxpaint-config (0.17rc1-1~exp1) to Debian experimental.

2021-09-05: Upload package tuxpaint-stamps (2021.06.28-1~exp1) to Debian experimental (Closes: #988347).

2021-09-05: Upload package tuxpaint-stamps (2021.06.28-1) to Debian experimental.

2021-09-05: Upload package tuxpaint (0.9.26-1) to Debian unstable (Closes: #942889).

2021-09-06: Merge MR!2 for connectagram.

2021-09-06: Upload package connectagram (1.2.11-2) to Debian unstable.

2021-09-06: Upload package aalib (1.4p5-50) to Debian unstable (Closes: #993729).

2021-09-06: Upload package gdisk (1.0.8-3) to Debian unstable (Closes: #993732).

2021-09-06: Upload package tuxpaint-config (0.17rc1-1) to Debian unstable.

2021-09-06: Upload package grapefruit (0.1_a3+dfsg-10) to Debian unstable.

2021-09-07: File ROM for gnome-shell-extension-hide-activities ().

2021-09-09: Upload package calamares (3.2.42-1) to Debian unstable.

2021-09-09: Upgraded peertube.debian.social to PeerTube 3.4.0.

2021-09-17: Upload calamares (3.2.43-1) to Debian unstable.

2021-09-28: Upload calamares (3.2.44.2-1) to Debian unstable.

04 October, 2021 12:50PM by jonathan

Paul Wise

FLOSS Activities September 2021

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian BTS: reopened bugs closed by a spammer
  • Debian wiki: unblock IP addresses, approve accounts

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The purple-discord/harmony/pyemd/librecaptcha/esprima-python work was sponsored by my employer. All other work was done on a volunteer basis.

04 October, 2021 04:15AM

October 03, 2021

Ritesh Raj Sarraf

Human Society

In my past, I’ve had experiences that have had me thinking. My experiences have mostly been in the South Asian Indian subcontinent, so it may not be fair to generalize them.

  • Help with finding a job: I’ve learnt many times that when people reach out asking for help, say, with finding a job, it isn’t about you making a recommendation/referral for them. It instead implies that you are indirectly being asked to find and arrange a job for them.

  • Gifts for people: My impression of offering a gift to someone is usually presenting them with something I’ve found useful and dear to me. This is irrespective of whether the gift is a brand new unpacked item or a used (immaculate) one. On the contrary, many people define a gift as an item which is unpacked and one that comes with its sealed original packaging.

03 October, 2021 10:54AM by Ritesh Raj Sarraf (rrs@researchut.com)

Junichi Uekawa

Using podman for most of my local development environment.

Using podman for most of my local development environment. For my personal/upstream development I started using podman instead of lxc and pbuilder and other toolings. Most projects provide reasonable docker images (such as rust) and I am happier keeping my environment as a whole stable while I can iterate. I have a Dockerfile for the development environment like this:
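
A minimal sketch of such a development-environment Dockerfile - the rust base image is mentioned in the post, while the packages and layout below are only my assumptions:

# Illustrative sketch only -- base image from the post, the rest assumed.
FROM docker.io/library/rust:latest

# a few everyday development tools on top of the project-provided image
RUN apt-get update && \
    apt-get install -y --no-install-recommends git less vim && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /src
CMD ["/bin/bash"]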

03 October, 2021 08:02AM by Junichi Uekawa

Louis-Philippe Véronneau

ANC is not for me

Active noise cancellation (ANC) has been all the rage lately in the headphones and in-ear monitors market. It seems after Apple got heavily praised for their AirPods Pro, every somewhat serious electronics manufacturer released their own design incorporating this technology.

The first headphones with ANC I remember trying on (in the early 2010s) were the Bose QuietComfort 15. Although the concept did work (they indeed cancelled some sounds), they weren't amazing and did a great job of convincing me ANC was some weird fad for people who flew often.

The Sony WH-1000X M3 folded in their case

As the years passed, chip size decreased, battery capacity improved and machine learning blossomed — truly a perfect storm for the wireless ANC headphones market. I had mostly stayed a sceptic of this tech until recently a kind friend offered to let me try a pair of Sony WH-1000X M3.

Having tested them thoroughly, I have to say I'm really tempted to buy them from him, as they truly are fantastic headphones1. They are very light, comfortable, work without a proprietary app and sound very good with the ANC on2 — if a little bass-heavy for my taste3.

The ANC itself is truly astounding and is leaps and bounds beyond what was available five years ago. It still isn't perfect and doesn't cancel ALL sounds, but transforms the low hum of the subway I find myself sitting in too often these days into a light *swoosh*. When you turn the ANC on, HVAC simply disappears. Most impressive to me is the way they completely cancel the dreaded sound of your footsteps resonating in your headphones when you walk with them.

My old pair of Sennheiser HD 280 Pro, with aftermarket sheepskin earpads

I won't be keeping them though.

Whilst I really like what Sony has achieved here, I've grown to understand ANC simply isn't for me. Some of the drawbacks of ANC bother me: the ear pressure it creates is tolerable, but it is an additional energy drain over long periods of time and eventually gives me headaches. I've also found ANC accentuates the motion sickness I suffer from, probably because it messes with some part of the inner-ear balance system.

Most of all, I found that it didn't provide noticeable improvements over good passive noise cancellation solutions, at least in terms of how high I have to turn the volume up to hear music or podcasts clearly. The human brain works in mysterious ways and it seems ANC cancelling a class of noises (low hums, constant noises, etc.) makes other noises so much more noticeable. People talking or bursty high pitched noises bothered me much more with ANC on than without.

So for now, I'll keep using my trusty Sennheiser HD 280 Pro4 at work and good in-ear monitors with Comply foam tips on the go.


  1. This blog post certainly doesn't aim to be a comprehensive review of these headphones. See Zeos' review if you want something more in-depth. 

  2. As with most ANC headphones, they don't sound as good when used passively through the 3.5mm port, but that's just a testament to how good a job Sony did tuning the DSP. 

  3. Easily fixed using an EQ. 

  4. Retrofitted with aftermarket sheepskin earpads, they provide more than 32 dB of passive noise reduction. 

03 October, 2021 04:00AM by Louis-Philippe Véronneau

October 02, 2021

Anuradha Weeraman

On blood-lines, forks and survivors

The lineage of a classic operating system

02 October, 2021 04:30PM by Anuradha Weeraman

Jacob Adams

SSH Port Forwarding and the Command Cargo Cult

Someone is Wrong on the Internet

If you look up how to only forward ports with ssh, you may come across solutions like this:

ssh -nNT -L 8000:example.com:80 user@bastion.example.com

Or perhaps this, if you also wanted to send ssh to the background:

ssh -NT -L 3306:db.example.com:3306 example.com &

Both of these use at least one option that is entirely redundant, and the second can cause ssh to fail to connect if you happen to be using password authentication. However, they still persist in various articles about ssh port forwarding. I myself was using the first variation until just recently, and I figured I would write this up to inform others who might still be using these solutions.

The correct option for this situation is not -nNT but simply -N, as in:

ssh -N -L 8000:example.com:80 user@bastion.example.com

If you want to also send ssh to the background, then you’ll want to add -f instead of using your shell’s built-in & feature, because you can then input passwords into ssh if necessary.1
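And if you want it backgrounded from the start, the two options combine (same example hosts as above):

ssh -f -N -L 8000:example.com:80 user@bastion.example.com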

Honestly, that’s the point of this article, so you can stop here if you want. If you’re looking for a detailed explanation of what each of these options actually does, or if you have no idea what I’m talking about, read on!

What is SSH Port Forwarding?

ssh is a powerful tool for remote access to servers, allowing you to execute commands on a remote machine. It can also forward ports through a secure tunnel with the -L and -R options. Basically, you can forward a connection to a local port to a remote server like so:

ssh -L 8080:other.example.com:80 ssh.example.com

In this example, you connect to ssh.example.com and then ssh forwards any traffic on your local machine port 80802 to other.example.com port 80 via ssh.example.com. This is a really powerful feature, allowing you to jump3 inside your firewall with just an ssh server exposed to the world.
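So with the command above, a request to the local end of the tunnel comes out at other.example.com port 80 (a quick check; curl and the URL are just an illustration):

curl http://localhost:8080/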

It can work in reverse as well with the -R option, allowing connections on a remote host in to a server running on your local machine. For example, say you were running a website on your local machine on port 8080 but wanted it accessible on example.com port 804. You could use something like:

ssh -R 80:localhost:8080 example.com

The trouble with ssh port forwarding is that, absent any additional options, you also open a shell on the remote machine. If you’re planning to both work on a remote machine and use it to forward some connection, this is fine, but if you just need to forward a port quickly and don’t care about a shell at that moment, it can be annoying, especially since, if the shell closes, ssh will close the forwarded port as well.

This is where the -N option comes in.

SSH just forwarding ports

In the ssh manual page5, -N is explained like so:

Do not execute a remote command. This is useful for just forwarding ports.

This is all we need. It instructs ssh to run no commands on the remote server, just forward the ports specified in the -L or -R options. But people seem to think that there are a bunch of other necessary options, so what do those do?

SSH and stdin

-n controls how ssh interacts with standard input, specifically telling it not to:

Redirects stdin from /dev/null (actually, prevents reading from stdin). This must be used when ssh is run in the background. A common trick is to use this to run X11 programs on a remote machine. For example, ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi, and the X11 connection will be automatically forwarded over an encrypted channel. The ssh program will be put in the background. (This does not work if ssh needs to ask for a password or passphrase; see also the -f option.)

SSH passwords and backgrounding

-f sends ssh to background, freeing up the terminal in which you ran ssh to do other things.

Requests ssh to go to background just before command execution. This is useful if ssh is going to ask for passwords or passphrases, but the user wants it in the background. This implies -n. The recommended way to start X11 programs at a remote site is with something like ssh -f host xterm.

As indicated in the description of -n, this does the same thing as using the shell’s & feature with -n, but allows you to put in any necessary passwords first.

SSH and pseudo-terminals

-T is a little more complicated than the others and has a very short explanation:

Disable pseudo-terminal allocation.

It has a counterpart in -t, which is explained a little better:

Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.

As the description of -t indicates, ssh is allocating a pseudo-terminal on the remote machine, not the local one. However, I have confirmed6 that -N doesn’t allocate a pseudo-terminal either, since it doesn’t run any commands. Thus this option is entirely unnecessary.

What’s a pseudo-terminal?

This is a bit complicated, but basically it’s an interface used in UNIX-like systems, like Linux or BSD, that pretends to be a terminal (thus pseudo-terminal). Programs like your shell, or any text-based menu system made in libraries like ncurses, expect to be connected to one (when used interactively at least). Basically it fakes as if the input it is given (over the network, in the case of ssh) was typed on a physical terminal device, and does things like raise an interrupt (SIGINT) if Ctrl+C is pressed.

Why?

I don’t know why these incorrect uses of ssh got passed around as correct, but I suspect it’s a form of cargo cult, where we use example commands others provide and don’t question what they do. One Stack Overflow answer I read that provided these options seemed to think -T was disabling the local pseudo-terminal, which might go some way towards explaining why they thought it was necessary.

I guess the moral of this story is to question everything and actually read the manual, instead of just googling it.

  1. Not that you SHOULD be using ssh with password authentication anyway, but people do. 

  2. Only on your loopback address by default, so that you’re not allowing random people on your network to use your tunnel. 

  3. In fact, ssh even supports Jump Hosts, allowing you to automatically forward an ssh connection through another machine. 

  4. I can’t say I recommend a setup like this for anything serious, as you’d need to ssh as root to forward ports less than 1024. SSH forwarding is not for permanent solutions, just short-lived connections to machines that would be otherwise inaccessible. 

  5. Specifically, my source is the ssh(1) manual page in OpenSSH 8.4, shipped as 1:8.4p1-5 in Debian bullseye. 

  6. I just forwarded ports with -N and then logged in to that same machine and looked at pseudo-terminal allocations via ps ux. No terminal is associated with ssh connections using just the -N option. 

02 October, 2021 12:00AM

October 01, 2021

Russell Coker

Getting Started With Kali

Kali is a Debian based distribution aimed at penetration testing. I haven’t felt a need to use it in the past because Debian has packages for all the scanning tools I regularly use, and all the rest are free software that can be obtained separately. But I recently decided to try it.

Here’s the URL to get Kali [1]. For a VM you can get VMWare or VirtualBox images, I chose VMWare as it’s the most popular image format and also a much smaller download (2.7G vs 4G). For unknown reasons the torrent for it didn’t work (might be a problem with my torrent client). The download link for it was extremely slow in Australia, so I downloaded it to a system in Germany and then copied it from there.

I don’t want to use either VMWare or VirtualBox because I find KVM/Qemu sufficient to do everything I want, and they are in the Main section of Debian, so I needed to convert the image files. Some of the documentation on converting image formats to use with QEMU/KVM says to use a program called “kvm-img”, which doesn’t seem to exist; I used “qemu-img” from the qemu-utils package in Debian/Bullseye. The man page qemu-img(1) doesn’t list the types of output format supported by the “-O” option, and the examples returned by a web search show using “-O qcow2“. It turns out that the following command will convert the image to “raw” format, which is the format I prefer. I use BTRFS for storing all my VM images and that does all the copy-on-write I need.

qemu-img convert Kali-Linux-2021.3-vmware-amd64.vmdk ../kali

After converting it the file was 500M smaller than the VMWare files (10.2 vs 10.7G). Probably the Kali distribution file could be reduced in size by converting it to raw and then back to VMWare format. The Kali VMWare image is compressed with 7zip, which has a good compression ratio; I waited almost 90 minutes for zstd to compress it with -19 and the result was 12% larger than the 7zip file.
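A sketch of that raw round trip (output filenames are just examples):

qemu-img convert -O raw Kali-Linux-2021.3-vmware-amd64.vmdk kali.raw
qemu-img convert -O vmdk kali.raw kali-repacked.vmdk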

VMWare apparently likes to use an emulated SCSI controller; I spent some time trying to get that going in KVM. Apparently recent versions of QEMU changed the way this works and therefore older web pages aren’t helpful. Also allegedly the SCSI emulation is buggy and unreliable (but I didn’t manage to get it going so can’t be sure). It turns out that the VM is configured to work with the virtio interface; the initramfs.conf has the configuration option “MODULES=most” which makes it boot on all common configurations (good work by the initramfs-tools maintainers). The image works well with the Spice display interface: the window for the VM works the same way as other windows on my desktop and doesn’t capture the mouse cursor. I don’t know if this level of Spice integration is in Debian now, last time I tested it didn’t work that way.

I also downloaded Metasploitable [2] which is a VM image designed to be full of security flaws for testing the tools that are in Kali. Again it worked nicely after converting from VMWare to raw format. One thing to note about Metasploitable is that you must not make it available on the public Internet. My home network has NAT for IPv4 but all systems get public IPv6 addresses. It’s usually nice that those things just work on VMs but not for this. So I added an iptables command to block IPv6 to /etc/rc.local.
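A sketch of such a rule (using ip6tables, since plain iptables only handles IPv4; 2001:db8::42 stands in for the VM's real address):

ip6tables -A FORWARD -s 2001:db8::42 -j DROP
ip6tables -A FORWARD -d 2001:db8::42 -j DROP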

Conclusion

Installing VMs for both these distributions was quite easy. Most of my time was spent downloading from a slow server, trying to get SCSI emulation working, working out how to convert image files, and testing different compression options. The time spent doing stuff once I knew what to do was very small.

Kali has zsh as the default shell, it’s quite nice. I’ve been happy with bash for decades, but I might end up trying zsh out on other machines.

01 October, 2021 04:56AM by etbe

hackergotchi for Junichi Uekawa

Junichi Uekawa

Garbage collecting with podman system prune.

Garbage collecting with podman system prune. Tells me it freed 20GB when it seems to have freed 4GB. Wondering where that discrepancy comes from.

01 October, 2021 02:37AM by Junichi Uekawa

Reproducible Builds (diffoscope)

diffoscope 186 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 186. This version includes the following changes:

[ Chris Lamb ]
* Don't call close_archive when garbage-collecting Archive instances unless
  open_archive returned successfully. This prevents, amongst others, an
  AttributeError traceback due to PGPContainer's cleanup routines assuming
  that its temporary directory had been created.
  (Closes: reproducible-builds/diffoscope#276)
* Ensure that the string "RPM archives" exists in the package description,
  regardless of whether python3-rpm is installed or not at build time.

[ Jean-Romain Garnier ]
* Fix the LVM Macho comparator for non-x86-64 architectures.

You can find out more by visiting the project homepage.

01 October, 2021 12:00AM

September 30, 2021

hackergotchi for Holger Levsen

Holger Levsen

20210930-Debian-Reunion-Hamburg-2021

Debian Reunion Hamburg 2021 is almost over...

The Debian Reunion Hamburg 2021 is almost over now; half the attendees have already left for Regensburg, while five remaining people are still busy here, though tonight there will be two concerts at the venue, plus some lovely food and more. Together with the day trip tomorrow (involving lots of water, but hopefully not from above...) I don't expect much more work to be done, so I feel comfortable publishing the following statistics now, even though I expect some more work will be done while travelling back or due to renewed energy from the event! So I might update these numbers later :-)

Together we did:

  • 27 uploads plus 117 uploads from Gregor from the Perl team
  • 6 RC bugs closed
  • 2 RC bugs opened
  • 1 presentation given
  • 2 DM upload permissions were given
  • 1 DNS entry was set up for beta.tests.reproducible-builds.org, showing preliminary real-world data for Debian and Qubes OS, thanks to Qubes OS developer Frédéric Pierret
  • 1 dinner cooked
  • 5 people didn't show up, only 2 notified us
  • 2 people showed up without registration
  • had pretty good times and other quality stuff which is harder to quantify

I think that's pretty awesome and am very happy we did this event!

Debian Reunion / MiniDebConf Hamburg 2022 - save the date, almost!

Thus I think we should have another Debian event at Fux in 2022, and after checking suitable free dates with the venue I think what could work out is an event from Monday May 23rd until Sunday May 29th 2022. What do you think?

For now these dates are preliminary. If you know any reasons why these dates could be less than optimal for such an event, please let me know. Assuming there's no feedback indicating this is a bad idea, the dates shall be finalized by November 1st 2021. Obviously assuming having physical events is still and again a thing! ;-)

30 September, 2021 06:04PM

September 29, 2021

Ingo Juergensmann

LetsEncrypt CA Chain Issues with Ejabberd

UPDATE:
It’s not as simple as described below, I’m afraid… It appears that it’s not that easy to obtain new/correct certs from LetsEncrypt that are not cross-signed by the DST Root X3 CA. Additionally, older OpenSSL versions (1.0.x) seem to have problems. So even when you think that your system is now ok, the remote server might refuse to accept your SSL cert. The same is valid for the SSL check on xmpp.net, which seems to be very outdated and beyond repair.

Honestly, I think the solution needs to be provided by LetsEncrypt…


I was having some strange issues on my ejabberd XMPP server the other day: some users complained that they couldn’t connect anymore to the MUC rooms on my server, and in the logfiles I discovered some weird warnings about LetsEncrypt certificates being expired – although they were brand new and valid until the end of December.

It looks like this:

[warning] <0.368.0>@ejabberd_pkix:log_warnings/1:393 Invalid certificate in /etc/letsencrypt.sh/certs/buildd.net/fullchain.pem: at line 37: certificate is no longer valid as its expiration date has passed

and…

[warning] <0.18328.2>@ejabberd_s2s_out:process_closed/2:157 Failed to establish outbound s2s connection nerdica.net -> forum.friendi.ca: Stream closed by peer: Your server's certificate is invalid, expired, or not trusted by forum.friendi.ca (not-authorized); bouncing for 237 seconds

When checking with some online tools like SSLlabs or XMPP.net, the result was strange: SSLlabs reported everything was ok, while XMPP.net showed the chain with the X3 and R3 certs as having a short-term validity of a few days:

After some days of fiddling around with the issue, trying to find a solution, it appears that there is a problem in Ejabberd when some old SSL certificates using the old CA chain are found by Ejabberd. Ejabberd has a really nice feature where you can just configure an SSL cert directory (or a path containing wildcards). Ejabberd then reads all of the SSL certs and compares them to the list of configured domains to see which it will need and which not.
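In ejabberd.yml that looks something like this (the path here is an example matching the logs above):

certfiles:
  - "/etc/letsencrypt.sh/certs/*/fullchain.pem"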

What helped (for me at least) was to delete all expired SSL certs from my directory, download the current CA pem files from LetsEncrypt (see their blog post from September 2020), run update-ca-certificates and do an ejabberdctl restart (instead of just ejabberdctl reload-config). UPDATE: be sure to use dpkg-reconfigure ca-certificates to uncheck the DST Root X3 cert (and others if necessary) before renewing the certs or running update-ca-certificates. Otherwise the update will bring in the expired cert again.
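As a rough sketch, the sequence described above boils down to:

dpkg-reconfigure ca-certificates   # uncheck the expired DST Root CA X3
update-ca-certificates
ejabberdctl restart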

Currently I see at least two other XMPP domains in my server logs having certificate issues, and in some MUCs there are reports of other domains as well.

Disclaimer: Again: this helped me in my case. I don’t know if this is a bug in Ejabberd or if this procedure will help you in your case nor if this is the proper solution. But maybe my story will help you solving your issue if you experience SSL certs issues in the last few days, especially now that the R3 cert has already expired and the X3 cert following in a few hours.

29 September, 2021 09:49PM by ij

Ian Jackson

Rust for the Polyglot Programmer

Rust is definitely in the news. I'm definitely on the bandwagon. (To me it feels like I've been wanting something like Rust for many years.) There are a huge number of intro tutorials, and of course there's the Rust Book.

A friend observed to me, though, that while there's a lot of "write your first simple Rust program" there's a dearth of material aimed at the programmer who already knows a dozen diverse languages, and is familiar with computer architecture, basic type theory, and so on. Or indeed, for the impatient and confident reader more generally. I thought I would have a go.

Rust for the Polyglot Programmer is the result.

Compared to much other information about Rust, Rust for the Polyglot Programmer is:

  • Dense: I assume a lot of starting knowledge. Or to look at it another way: I expect my reader to be able to look up and digest non-Rust-specific words or concepts.

  • Broad: I cover not just the language and tools, but also the library ecosystem, development approach, community ideology, and so on.

  • Frank: much material about Rust has a tendency to gloss over or minimise the bad parts. I don't do that. That also frees me to talk about strategies for dealing with the bad parts.

  • Non-neutral: I'm not afraid to recommend particular libraries, for example. I'm not afraid to extol Rust's virtues in the areas where it does well.

  • Terse, and sometimes shallow: I often gloss over what I see as unimportant or fiddly details; instead I provide links to appropriate reference materials.

After reading Rust for the Polyglot Programmer, you won't know everything you need to know to use Rust for any project, but should know where to find it.

Thanks are due to Simon Tatham, Mark Wooding, Daniel Silverstone, and others, for encouragement, and helpful reviews including important corrections. Particular thanks to Mark Wooding for wrestling pandoc and LaTeX into producing a pretty good-looking PDF. Remaining errors are, of course, mine.

Comments are welcome of course, via the Dreamwidth comments or Salsa issue or MR. (If you're making a contribution, please indicate your agreement with the Developer Certificate of Origin.)

edited 2021-09-29 16:58 UTC to fix Salsa link target, and 17:01 and 17:21 for minor grammar fixes




29 September, 2021 05:21PM

September 28, 2021

hackergotchi for Jonathan McDowell

Jonathan McDowell

Adding Zigbee to my home automation

SonOff Zigbee Door Sensor

My home automation setup has been fairly static recently; it does what we need and generally works fine. One area I think could be better is controlling it; we have access to Home Assistant on our phones, and the Alexa downstairs can control things, but there are no smart assistants upstairs and sometimes it would be nice to just push a button to turn on the light rather than having to get my phone out. Because UK wall switches generally don’t have a neutral wire, that means looking at something battery powered, which in turn means wifi based devices are a poor choice, and it’s necessary to look at something lower power like Zigbee or Z-Wave.

Zigbee seems like the better choice; it’s a more open standard and there are generally more devices easily available from what I’ve seen (e.g. Philips Hue and IKEA TRÅDFRI). So I bought a couple of Xiaomi Mi Smart Home Wireless Switches, and a CC2530 module and then ignored it for the best part of a year. Finally I got around to flashing the Z-Stack firmware that Koen Kanters kindly provides. (Insert rant about hardware manufacturers that require pay-for tool chains. The CC2530 is even worse because it’s 8051 based, so SDCC should be able to compile for it, but the TI Zigbee libraries are only available in a format suitable for IAR’s embedded workbench.)

Flashing the CC2530 is a bit of faff. I ended up using the CCLib fork by Stephan Hadinger which supports the ESP8266. The nice thing about the CC2530 module is it has 2.54mm pitch pins so nice and easy to jumper up. It then needs a USB/serial dongle to connect it up to a suitable machine, where I ran Zigbee2MQTT. This scares me a bit, because it’s a bunch of node.js pulling in a chunk of stuff off npm. On the flip side, it Just Works and I was able to pair the Xiaomi button with the device and see MQTT messages that I could then use with Home Assistant. So of course I tore down that setup and went and ordered a CC2531 (the variant with USB as part of the chip). The idea here was my test setup was upstairs with my laptop, and I wanted something hooked up in a more permanent fashion.

Once the CC2531 arrived I got distracted writing support for the Desk Viking to support CCLib (and modified it a bit for Python3 and some speed ups). I flashed the dongle up with the Z-Stack Home 1.2 (default) firmware, and plugged it into the house server. At this point I more closely investigated what Home Assistant had to offer in terms of Zigbee integration. It turns out the ZHA integration has support for the ZNP protocol that the TI devices speak (I’m reasonably sure it didn’t when I first looked some time ago), so that seemed like a better option than adding the MQTT layer in the middle.

I hit some complexity passing the dongle (which turns up as /dev/ttyACM0) through to the Home Assistant container. First I needed an override file in /etc/systemd/nspawn/hass.nspawn:

[Files]
Bind=/dev/ttyACM0:/dev/zigbee

[Network]
VirtualEthernet=true

(I’m not clear why the VirtualEthernet needed to exist; without it networking broke entirely but I couldn’t see why it worked with no override file.)

A udev rule on the host to change the ownership of the device file so the root user and dialout group in the container could see it was also necessary, so into /etc/udev/rules.d/70-persistent-serial.rules went:

# Zigbee for HASS
SUBSYSTEM=="tty", ATTRS{idVendor}=="0451", ATTRS{idProduct}=="16a8", SYMLINK+="zigbee", \
	MODE="660", OWNER="1321926676", GROUP="1321926676"

In the container itself I had to switch PrivateDevices=true to PrivateDevices=false in the home-assistant.service file (which took me a while to figure out; yay for locking things down and then needing to use those locked down things).
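Done as a drop-in rather than editing the unit directly, that change would look something like this (a sketch; the exact path depends on how the service is installed):

# /etc/systemd/system/home-assistant.service.d/override.conf
[Service]
PrivateDevices=false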

Finally I added the hass user to the dialout group. At that point I was able to go and add the integration with Home Assistant, and add the button as a new device. Excellent. I did find I needed a newer version of Home Assistant to get support for the button, however. I was still on 2021.1.5 due to upstream dropping support for Python 3.7 and not being prepared to upgrade to Debian 11 until it was actually released, so the version of zha-quirks didn’t have the correct info. Upgrading to Home Assistant 2021.8.7 sorted that out.

There was another slight problem. Range. Really I want to use the button upstairs. The server is downstairs, and most of my internal walls are brick. The solution turned out to be a TRÅDFRI socket, which replaced the existing ESP8266 wifi socket controlling the stair lights. That was close enough to the server to have a decent signal, and it acts as a Zigbee router so provides a strong enough signal for devices upstairs. The normal approach seems to be to have a lot of Zigbee light bulbs, but I have mostly kept overhead lights as uncontrolled - we don’t use them day to day and it provides a nice fallback if the home automation has issues.

Of course installing Zigbee for a single button would seem to be a bit pointless. So I ordered up a Sonoff door sensor to put on the front door (much smaller than expected - those white boxes on the door are it in the picture above). And I have a 4 gang wireless switch ordered to go on the landing wall upstairs.

Now I’ve got a Zigbee setup there are a few more things I’m thinking of adding, where wifi isn’t an option due to the need for battery operation (monitoring the external gas meter springs to mind). The CC2530 probably isn’t suitable for my needs, as I’ll need to write some custom code to handle the bits I want, but there do seem to be some ARM based devices which might well prove suitable…

28 September, 2021 02:42PM

hackergotchi for Holger Levsen

Holger Levsen

20210928-Debian-Reunion-Hamburg-2021

Debian Reunion Hamburg 2021, klein aber fein / small but beautiful

So the Debian Reunion Hamburg 2021 has been going on for not yet 48h now and it appears people are having fun, enjoying discussions between fellow Debian people and getting some stuff done as well. I guess I'll write some more about it once the event is over...

Sharing android screens...

For now I just want to share one little gem I learned about yesterday on the hallway track:

$ sudo apt install scrcpy
$ scrcpy

And voila, once again I can type on my phone with a proper keyboard and copy and paste URLs between the two devices. One can even watch videos on the big screen with it :)

(This requires ADB debugging enabled on the phone, but doesn't require root access.)
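For the ADB side, a quick sanity check that the phone is visible (assuming the adb package is installed):

$ adb devices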

28 September, 2021 11:16AM

September 27, 2021

hackergotchi for Wouter Verhelst

Wouter Verhelst

SReview::Video is now Media::Convert

SReview, the video review and transcode tool that I originally wrote for FOSDEM 2017 but which has since been used for debconfs and minidebconfs as well, has long had a sizeable component for inspecting media files with ffprobe, and generating ffmpeg command lines to convert media files from one format to another.

This component, SReview::Video (plus a number of supporting modules), is really not tied very much to the SReview webinterface or the transcoding backend. That is, the webinterface and the transcoding backend obviously use the ffmpeg handling library, but they don't provide any services that SReview::Video could not live without. It did use the configuration API that I wrote for SReview, but disentangling that turned out to be very easy.

As I think SReview::Video is actually an easy to use, flexible API, I decided to refactor it into Media::Convert, and have just uploaded the latter to CPAN itself.

The intent is to refactor the SReview webinterface and transcoding backend so that they will also use Media::Convert instead of SReview::Video in the near future -- otherwise I would end up maintaining everything twice, and then what's the point. This hasn't happened yet, but it will soon (this shouldn't be too difficult after all).

Unfortunately Media::Convert doesn't currently install cleanly from CPAN, since I made it depend on Alien::ffmpeg which currently doesn't work (I'm in communication with the Alien::ffmpeg maintainer in order to get that resolved), so if you want to try it out you'll have to do a few steps manually.

I'll upload it to Debian soon, too.

27 September, 2021 12:31PM

Russ Allbery

Review: The Problem with Work

Review: The Problem with Work, by Kathi Weeks

Publisher: Duke University Press
Copyright: 2011
ISBN: 0-8223-5112-9
Format: Kindle
Pages: 304

One of the assumptions baked deeply into US society (and many others) is that people are largely defined by the work they do, and that work is the primary focus of life. Even in Marxist analysis, which is otherwise critical of how work is economically organized, work itself reigns supreme. This has been part of the feminist critique of both capitalism and Marxism, namely that both devalue domestic labor that has traditionally been unpaid, but even that criticism is normally framed as expanding the definition of work to include more of human activity. A few exceptions aside, we shy away from fundamentally rethinking the centrality of work to human experience.

The Problem with Work begins as a critical analysis of that centrality of work and a history of some less-well-known movements against it. But, more valuably for me, it becomes a discussion of the types and merits of utopian thinking, including why convincing other people is not the only purpose for making a political demand.

The largest problem with this book will be obvious early on: the writing style ranges from unnecessarily complex to nearly unreadable. Here's an excerpt from the first chapter:

The lack of interest in representing the daily grind of work routines in various forms of popular culture is perhaps understandable, as is the tendency among cultural critics to focus on the animation and meaningfulness of commodities rather than the eclipse of laboring activity that Marx identifies as the source of their fetishization (Marx 1976, 164-65). The preference for a level of abstraction that tends not to register either the qualitative dimensions or the hierarchical relations of work can also account for its relative neglect in the field of mainstream economics. But the lack of attention to the lived experiences and political textures of work within political theory would seem to be another matter. Indeed, political theorists tend to be more interested in our lives as citizens and noncitizens, legal subjects and bearers of rights, consumers and spectators, religious devotees and family members, than in our daily lives as workers.

This is only a quarter of a paragraph, and the entire book is written like this.

I don't mind the occasional use of longer words for their precise meanings ("qualitative," "hierarchical") and can tolerate the academic habit of inserting mostly unnecessary citations. I have less patience with the meandering and complex sentences, excessive hedge words ("perhaps," "seem to be," "tend to be"), unnecessarily indirect phrasing ("can also account for" instead of "explains"), or obscure terms that are unnecessary to the sentence (what is "animation of commodities"?). And please have mercy and throw a reader some paragraph breaks.

The writing style means substantial unnecessary effort for the reader, which is why it took me six months to read this book. It stalled all of my non-work non-fiction reading and I'm not sure it was worth the effort. That's unfortunate, because there were several important ideas in here that were new to me.

The first was the overview of the "wages for housework" movement, which I had not previously heard of. It started from the common feminist position that traditional "women's work" is undervalued and advocated taking the next logical step of giving it equality with paid work by making it paid work. This was not successful, obviously, although the increasing prevalence of day care and cleaning services has made it partly true within certain economic classes in an odd and more capitalist way. While I, like Weeks, am dubious this was the right remedy, the observation that household work is essential to support capitalist activity but is unmeasured by GDP and often uncompensated both economically and socially has only become more accurate since the 1970s.

Weeks argues that the usefulness of this movement should not be judged by its lack of success in achieving its demands, which leads to the second interesting point: the role of utopian demands in reframing and expanding a discussion. I normally judge a political demand on its effectiveness at convincing others to grant that demand, by which standard many activist campaigns (such as wages for housework) are unsuccessful. Weeks points out that making a utopian demand changes the way the person making the demand perceives the world, and this can have value even if the demand will never be granted. For example, to demand wages for housework requires rethinking how work is defined, what activities are compensated by the economic system, how such wages would be paid, and the implications for domestic social structures, among other things. That, in turn, helps in questioning assumptions and understanding more about how existing society sustains itself.

Similarly, even if a utopian demand is never granted by society at large, forcing it to be rebutted can produce the same movement in thinking in others. In order to rebut a demand, one has to take it seriously and mount a defense of the premises that would allow one to rebut it. That can open a path to discussing and questioning those premises, which can have long-term persuasive power apart from the specific utopian demand. It's a similar concept as the Overton Window, but with more nuance: the idea isn't solely to move the perceived range of accepted discussion, but to force society to examine its assumptions and premises well enough to defend them, or possibly discover they're harder to defend than one might have thought.

Weeks applies this principle to universal basic income, as a utopian demand that questions the premise that work should be central to personal identity. I kept thinking of the Black Lives Matter movement and the demand to abolish the police, which (at least in popular discussion) is a more recent example than this book but follows many of the same principles. The demand itself is unlikely to be met, but to rebut it requires defending the existence and nature of the police. That in turn leads to questions about the effectiveness of policing, such as clearance rates (which are far lower than one might have assumed). Many more examples came to mind. I've had that experience of discovering problems with my assumptions I'd never considered when debating others, but had not previously linked it with the merits of making demands that may be politically infeasible.

The book closes with an interesting discussion of the types of utopias, starting from the closed utopia in the style of Thomas More in which the author sets up an ideal society. Weeks points out that this sort of utopia tends to collapse with the first impossibility or inconsistency the reader notices. The next step is utopias that acknowledge their own limitations and problems, which are more engaging (she cites Le Guin's The Dispossessed). More conditional than that is the utopian manifesto, which only addresses part of society. The least comprehensive and the most open is the utopian demand, such as wages for housework or universal basic income, which asks for a specific piece of utopia while intentionally leaving unspecified the rest of the society that could achieve it. The demand leaves room to maneuver; one can discuss possible improvements to society that would approach that utopian goal without committing to a single approach.

I wish this book were better-written and easier to read, since as it stands I can't recommend it. There were large sections that I read but didn't have the mental energy to fully decipher or retain, such as the extended discussion of Ernst Bloch and Friedrich Nietzsche in the context of utopias. But that way of thinking about utopian demands and their merits for both the people making them and for those rebutting them, even if they're not politically feasible, will stick with me.

Rating: 5 out of 10

27 September, 2021 04:41AM

September 22, 2021

hackergotchi for Gunnar Wolf

Gunnar Wolf

New book out! «Mecanismos de privacidad y anonimato en redes, una visión transdisciplinaria»

Three years ago, I organized a fun and most interesting colloquium at Facultad de Ingeniería, UNAM about privacy and anonymity online.

I would have loved to share this earlier with the world, but… The university’s processes are quite slow (and, to be fair, I also took quite a bit of time to push things through). But today, I’m finally happy to share the result of that work with all of you. We managed to get 11 of the talks in the colloquium as articles. The back-cover text reads (in Spanish):

We live in an era where human to human interactions are more and more often mediated by technology. This, of course, means everything leaves a digital trail, a trail that can follow us relentlessly. Privacy is recognized, however, as a human right — although one that is under growing threats. Anonymity is the best tool to secure it. Throughout history, clear steps have been taken –legally, technically and technologically– to defend it. Various studies point out this is not only a known issue for the network's users, but that a large majority has searched for alternatives to protect their communications' privacy. This book stems from a colloquium held by *Laboratorio de Investigación y Desarrollo de Software Libre* (LIDSOL) of Facultad de Ingeniería, UNAM, towards the end of 2018, where we invited experts from disciplines as far apart as law and systems development, psychology and economics, to contribute with their experiences to a transdisciplinary vision.

If this interests you, you can get the book at our institutional repository.

Oh, and… What about the birds?

In Spanish (Mexican only?), we have a saying, «hay pájaros en el alambre», meaning watch your words, as uninvited people might be listening, like birds resting on the wires over which phone calls used to be made (back in the day when wiretapping was that easy). I found the design proposed by our editor ingenious and very fitting for our topic!

22 September, 2021 06:26PM

Ian Jackson

Tricky compatibility issue - Rust's io::ErrorKind

This post is about some changes recently made to Rust's ErrorKind, which aims to categorise OS errors in a portable way.

Audiences for this post

  • The educated general reader interested in a case study involving error handling, stability, API design, and/or Rust.
  • Rust users who have tripped over these changes. If this is you, you can cut to the chase and skip to How to fix.

Background and context

Error handling principles

Handling different errors differently is often important (although, sadly, often neglected). For example, if a program tries to read its default configuration file, and gets a "file not found" error, it can proceed with its default configuration, knowing that the user hasn't provided a specific config.

If it gets some other error, it should probably complain and quit, printing the message from the error (and the filename). Otherwise, if the network fileserver is down (say), the program might erroneously run with the default configuration and do something entirely wrong.

Rust's portability aims

The Rust programming language tries to make it straightforward to write portable code. Portable error handling is always a bit tricky. One of Rust's facilities in this area is std::io::ErrorKind which is an enum which tries to categorise (and, sometimes, enumerate) OS errors. The idea is that a program can check the error kind, and handle the error accordingly.

That these ErrorKinds are part of the Rust standard library means that to get this right, you don't need to delve down and get the actual underlying operating system error number, and write separate code for each platform you want to support. You can check whether the error is ErrorKind::NotFound (or whatever).

Because ErrorKind is so important in many Rust APIs, some code which isn't really doing an OS call can still have to provide an ErrorKind. For this purpose, Rust provides a special category ErrorKind::Other, which doesn't correspond to any particular OS error.

Rust's stability aims and approach

Another thing Rust tries to do is keep existing code working. More specifically, Rust tries to:

  1. Avoid making changes which would contradict the previously-published documentation of Rust's language and features.
  2. Tell you if you accidentally rely on properties which are not part of the published documentation.

By and large, this has been very successful. It means that if you write code now, and it compiles and runs cleanly, it is quite likely that it will continue work properly in the future, even as the language and ecosystem evolves.

This blog post is about a case where Rust failed to do (2), above, and, sadly, it turned out that several people had accidentally relied on something the Rust project definitely intended to change. Furthermore, it was something which needed to change. And the new (corrected) way of using the API is not so obvious.

Rust enums, as relevant to io::ErrorKind

(Very briefly:)

When you have a value which is an io::ErrorKind, you can compare it with specific values:

    if error.kind() == ErrorKind::NotFound { ...
  
But in Rust it's more usual to write something like this (which you can read like a switch statement):
    match error.kind() {
      ErrorKind::NotFound => use_default_configuration(),
      _ => panic!("could not read config file {}: {}", &file, &error),
    }
  

Here _ means "anything else". Rust insists that match statements are exhaustive, meaning that each one covers all the possibilities. So if you left out the line with the _, it wouldn't compile.

Rust enums can also be marked non_exhaustive, which is a declaration by the API designer that they plan to add more kinds. This has been done for ErrorKind, so the _ is mandatory, even if you write out all the possibilities that exist right now: this ensures that if new ErrorKinds appear, they won't stop your code compiling.

Improving the error categorisation

The set of error categories stabilised in Rust 1.0 was too small. It missed many important kinds of error. This makes writing error-handling code awkward. In any case, we expect to add new error categories occasionally. I set about trying to improve this by proposing new ErrorKinds. This obviously needed considerable community review, which is why it took about 9 months.

The trouble with Other and tests

Rust has to assign an ErrorKind to every OS error, even ones it doesn't really know about. Until recently, it mapped all errors it didn't understand to ErrorKind::Other - reusing the category for "not an OS error at all".

Serious people who write serious code like to have serious tests. In particular, testing error conditions is really important. For example, you might want to test your program's handling of disk full, to make sure it didn't crash, or corrupt files. You would set up some contraption that would simulate a full disk. And then, in your tests, you might check that the error was correct.

But until very recently (still now, in Stable Rust), there was no ErrorKind::StorageFull. You would get ErrorKind::Other. If you were diligent you would dig out the OS error code (and check for ENOSPC on Unix, corresponding Windows errors, etc.). But that's tiresome. The more obvious thing to do is to check that the kind is Other.

Obvious but wrong. ErrorKind is non_exhaustive, implying that more error kinds will appear, and, naturally, these would more finely categorise previously-Other OS errors.

Unfortunately, the documentation note

Errors that are Other now may move to a different or a new ErrorKind variant in the future.
was only added in May 2020. So the wrongness of the "obvious" approach was, itself, not very obvious. And even with that docs note, there was no compiler warning or anything.

The unfortunate result is that there is a body of code out there in the world which might break any time an error that was previously Other becomes properly categorised. Furthermore, there was nothing stopping new people writing new obvious-but-wrong code.

Chosen solution: Uncategorized

The Rust developers wanted an engineered safeguard against the bug of assuming that a particular error shows up as Other. They chose the following solution:

There is now a new ErrorKind::Uncategorized, which is used for all OS errors for which there isn't a more specific categorisation. The fallback translation of unknown errors was changed from Other to Uncategorized.

This is de jure justified by the fact that this enum has always been marked non_exhaustive. But in practice because this bug wasn't previously detected, there is such code in the wild. That code now breaks (usually, in the form of failing test cases). Usually when Rust starts to detect a particular programming error, it is reported as a new warning, which doesn't break anything. But that's not possible here, because this is a behavioural change.

The new ErrorKind::Uncategorized is marked unstable. This makes it impossible to write code on Stable Rust which insists that an error comes out as Uncategorized. So, one cannot now write code that will break when new ErrorKinds are added. That's the intended effect.

The downside is that this does break old code, and, worse, it is not as clear as it should be what the fixed code looks like.

Alternatives considered and rejected by the Rust developers

Not adding more ErrorKinds

This was not tenable. The existing set is already too small, and error categorisation is in any case expected to improve over time.

Just adding ErrorKinds as had been done before

This would mean occasionally breaking test cases (or, possibly, production code) when an error that was previously Other becomes categorised. The broken code would have been "obvious", but de jure wrong, just as it is now, so this option amounts to expecting this broken code to continue to be written and continuing to break it occasionally.

Somehow using Rust's Edition system

The Rust language has a system to allow language evolution, where code declares its Edition (2015, 2018, 2021). Code from multiple editions can be combined, so that the ecosystem can upgrade gradually.

It's not clear how this could be used for ErrorKind, though. Errors have to be passed between code with different editions. If those different editions had different categorisations, the resulting programs would have incoherent and broken error handling.

Also some of the schemes for making this change would mean that new ErrorKinds could only be stabilised about once every 3 years, which is far too slow.

How to fix code broken by this change

Most main-line error handling code already has a fallback case for unknown errors. Simply replacing any occurrence of Other with _ is right.

How to fix thorough tests

The tricky problem is tests. Typically, a thorough test case wants to check that the error is "precisely as expected" (as far as the test can tell). Now that unknown errors come out as an unstable Uncategorized variant, that's not so easy. If the test is expecting an error that is currently not categorised, you want to write code that says "if the error is any of the recognised kinds, call it a test failure".

What does "any of the recognised kinds" mean here ? It doesn't meany any of the kinds recognised by the version of the Rust stdlib that is actually in use. That set might get bigger. When the test is compiled and run later, perhaps years later, the error in this test case might indeed be categorised. What you actually mean is "the error must not be any of the kinds which existed when the test was written".

IMO therefore the right solution for such a test case is to cut and paste the current list of stable ErrorKinds into your code. This will seem wrong at first glance, because the list in your code and in Rust can get out of step. But when they do get out of step you want your version, not the stdlib's. So freezing the list at a point in time is precisely right.

You probably only want to maintain one copy of this list, so put it somewhere central in your codebase's test support machinery. Periodically, you can update the list deliberately - and fix any resulting test failures.
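A sketch of what that could look like as test support code, frozen at the Rust 1.0-era variant list (which snapshot you freeze is up to you):

use std::io::ErrorKind;

/// True if `kind` is one of the ErrorKinds that were stable when this
/// list was frozen. Deliberately not kept in sync with the stdlib:
/// variants added later must not change the test's result.
fn is_recognised(kind: ErrorKind) -> bool {
    use ErrorKind::*;
    matches!(
        kind,
        NotFound | PermissionDenied | ConnectionRefused | ConnectionReset
            | ConnectionAborted | NotConnected | AddrInUse | AddrNotAvailable
            | BrokenPipe | AlreadyExists | WouldBlock | InvalidInput
            | InvalidData | TimedOut | WriteZero | Interrupted | UnexpectedEof
    )
}

A test expecting a currently-uncategorised OS error can then assert !is_recognised(err.kind()).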

Unfortunately this approach is not suggested by the documentation. In theory you could work all this out yourself from first principles, given even the situation prior to May 2020, but it seems unlikely that many people have done so. In particular, cutting and pasting the list of recognised errors would seem very unnatural.

Conclusions

This was not an easy problem to solve well. I think Rust has done a plausible job given the various constraints, and the result is technically good.

It is a shame that this change to make the error handling stability more correct caused the most trouble for the most careful people who write the most thorough tests. I also think the docs could be improved.

edited shortly after posting, and again 2021-09-22 16:11 UTC, to fix HTML slips




22 September, 2021 04:10PM

hackergotchi for Norbert Preining

Norbert Preining

TeX Live 2021 for Debian

The release of TeX Live 2021 is already half a year behind us, but due to waiting for the Debian/Bullseye release, we haven’t updated TeX Live in Debian for quite some time. But the waiting is over: today I uploaded the first packages of TeX Live 2021 to unstable.

All the changes listed in the upstream release blog apply also to the Debian packages.

I expect a few hiccups, but it is good to see it out of the door finally.

Enjoy.

22 September, 2021 05:42AM by Norbert Preining

September 21, 2021

hackergotchi for Clint Adams

Clint Adams

Outrage culture killed my dog

Why can't Debian have embarrassing flamewars like this thread?

Posted on 2021-09-21
Tags: barks

21 September, 2021 03:36PM

hackergotchi for Norbert Preining

Norbert Preining

Plasma 5.23 Anniversary Edition Beta for Debian available for testing

Last week saw the release of the first beta of Plasma 5.23 Anniversary Edition. Work behind the scenes to get this release into Debian as quickly as possible has progressed well.

Starting today, we provide binaries of Plasma 5.23 for Debian stable (bullseye), testing, and unstable, in each case for three architectures: amd64, i386, and aarch64.

To test the current beta, please add

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma523/DISTRIBUTION/ ./

to your apt sources (replacing DISTRIBUTION with one of Debian_11 (for Bullseye), Debian_Testing, or Debian_Unstable). For further details see this blog.
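For example, for Debian stable (Bullseye) the line becomes:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma523/Debian_11/ ./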

Warning

This is a beta release, and let me recall the warning from the upstream release announcement:

DISCLAIMER: This is beta software and is released for testing purposes. You are advised to NOT use Plasma 25th Anniversary Edition Beta in a production environment or as your daily desktop. If you do install Plasma 25th Anniversary Edition Beta, you must be prepared to encounter (and report to the creators) bugs that may interfere with your day-to-day use of your computer.

— https://kde.org/announcements/plasma/5/5.22.90/

Enjoy, and please report bugs!

21 September, 2021 05:53AM by Norbert Preining

Reproducible Builds (diffoscope)

diffoscope 185 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 185. This version includes the following changes:

[ Mattia Rizzolo ]
* Fix the autopkgtest in order to fix testing migration: the androguard
  Python module is not in the python3-androguard Debian package
* Ignore a warning in the tests from the h5py package that doesn't concern
  diffoscope.

[ Chris Lamb ]
* Bump Standards-Version to 4.6.0.

You can find out more by visiting the project homepage.

21 September, 2021 12:00AM

September 20, 2021

Jamie McClelland

Putty Problems

I upgraded my first servers from buster to bullseye over the weekend and it went very smoothly, so a big thank you to all the Debian developers who contributed your labor to the bullseye release!

This morning, however, I hit a snag when the first Windows users tried to log in. It seems like a putty bug (see update below).

First, the user received an error related to algorithm selection. I didn’t record the exact error and simply suggested that the user upgrade.

Once the user was running the latest version of putty (0.76), they received a new error:

Server refused public-key signature despite accepting key!

I turned up debugging on the server and recorded:

Sep 20 13:10:32 container001 sshd[1647842]: Accepted key RSA SHA256:t3DVS5wZmO7DVwqFc41AvwgS5gx1jDWnR89apGmFpf4 found at /home/XXXXXXXXX/.ssh/authorized_keys:6
Sep 20 13:10:32 container001 sshd[1647842]: debug1: restore_uid: 0/0
Sep 20 13:10:32 container001 sshd[1647842]: Postponed publickey for XXXXXXXXX from xxx.xxx.xxx.xxx port 63579 ssh2 [preauth]
Sep 20 13:10:33 container001 sshd[1647842]: debug1: userauth-request for user XXXXXXXXX service ssh-connection method publickey [preauth]
Sep 20 13:10:33 container001 sshd[1647842]: debug1: attempt 2 failures 0 [preauth]
Sep 20 13:10:33 container001 sshd[1647842]: debug1: temporarily_use_uid: 1000/1000 (e=0/0)
Sep 20 13:10:33 container001 sshd[1647842]: debug1: trying public key file /home/XXXXXXXXX/.ssh/authorized_keys
Sep 20 13:10:33 container001 sshd[1647842]: debug1: fd 5 clearing O_NONBLOCK
Sep 20 13:10:33 container001 sshd[1647842]: debug1: /home/XXXXXXXXX/.ssh/authorized_keys:6: matching key found: RSA SHA256:t3DVS5wZmO7DVwqFc41AvwgS5gx1jDWnR89apGmFpf4
Sep 20 13:10:33 container001 sshd[1647842]: debug1: /home/XXXXXXXXX/.ssh/authorized_keys:6: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding
Sep 20 13:10:33 container001 sshd[1647842]: Accepted key RSA SHA256:t3DVS5wZmO7DVwqFc41AvwgS5gx1jDWnR89apGmFpf4 found at /home/XXXXXXXXX/.ssh/authorized_keys:6
Sep 20 13:10:33 container001 sshd[1647842]: debug1: restore_uid: 0/0
Sep 20 13:10:33 container001 sshd[1647842]: debug1: auth_activate_options: setting new authentication options
Sep 20 13:10:33 container001 sshd[1647842]: Failed publickey for XXXXXXXXX from xxx.xxx.xxx.xxx port 63579 ssh2: RSA SHA256:t3DVS5wZmO7DVwqFc41AvwgS5gx1jDWnR89apGmFpf4
Sep 20 13:10:39 container001 sshd[1647514]: debug1: Forked child 1648153.
Sep 20 13:10:39 container001 sshd[1648153]: debug1: Set /proc/self/oom_score_adj to 0
Sep 20 13:10:39 container001 sshd[1648153]: debug1: rexec start in 5 out 5 newsock 5 pipe 8 sock 9
Sep 20 13:10:39 container001 sshd[1648153]: debug1: inetd sockets after dupping: 4, 4

The server log seems to agree with the message the client received: first the key was accepted, then it was refused.

We re-generated a new key. We turned off the windows firewall. We deleted all the putty settings via the windows registry and re-set them from scratch.

Nothing seemed to work. Then another Windows user reported no problem (and that user was running putty version 0.74). So the first user downgraded to 0.74 and everything worked fine.

Update

Wow, very impressed with the responsiveness of putty devs!

And, who knew that putty is available in debian??

Long story short: putty version 0.76 works on Linux and, from what I can tell, works for everyone except my one user. Maybe it’s their provider doing some filtering? Maybe a nuance of their version of Windows?

20 September, 2021 12:27PM

hackergotchi for Andy Simpkins

Andy Simpkins

COVID-19

Nearly 4 weeks after contracting COVID-19 I am finally able to return to work…

Yes I have had both Jabs (my 2nd dose was back in June), and this knocked me for six. I spent most of the time in bed, and only started to get up and about 10 days ago.

I passed this on to both my wife and daughter (my wife has also been double jabbed); fortunately they didn’t get it as bad as me and have been back at work / school for the last week. I also passed it on to a friend at the UK Debian BBQ, hosted once again by Sledge and Randombird, before I started showing symptoms. Fortunately (after a lot of PCR tests for attendees) it doesn’t look like I passed it to anyone else.

I wouldn’t wish this on anyone.

I went on holiday back in August (still in England) thinking that, having had both jabs, we would be fine. We stayed in self-catering accommodation and spent our time outside; we visited open-air museums, walked around gardens, etc. However, we did eat out in relatively empty pubs and restaurants.

And yes we did all have face masks on when we went indoors (although obviously we removed them whilst eating).

I guess that is when I caught this, but have no idea exactly when or where.

Even after vaccination, it is still possible to both catch and spread this virus. Fortunately, having been vaccinated, my resulting illness was (statistically) less bad than it would otherwise have been.

I dread to think how bad I would have been if I had not already been vaccinated; I suspect that I would have ended up in ICU. I am still very tired, and have been told it may take many more weeks to get back to my former self. Other than being overweight, prior to this I had been in good health.

If you are reading this and have NOT yet had a vaccine and one is available for you, please, please get it done.

20 September, 2021 11:07AM by andy

hackergotchi for Holger Levsen

Holger Levsen

20210920-Debian-Reunion-Hamburg-2021

Debian Reunion Hamburg 2021, we still have free beds

We still have some free slots and beds available for the "Debian Reunion Hamburg 2021" taking place in Hamburg at the venue of the 2018 & 2019 MiniDebConfs from Monday, Sep 27 2021 until Friday Oct 1 2021, with Sunday, Sep 26 2021 as arrival day.

So again, Debian people will meet in Hamburg. The exact format is less defined and structured than in previous years; probably we will just be hacking from Monday to Wednesday, have talks on Thursday, and take a nice day trip on Friday.

Please read https://wiki.debian.org/DebianEvents/de/2021/DebianReunionHamburg and, if you intend to attend, please register there. If you would additionally like to stay on site (in a single room or shared with one other person), please mail me.

I'm looking forward to this event, even though (or maybe also because) it will be much smaller than in previous years. I suppose this will lead to more personal interactions and more intense hacking, though of course it remains to be seen how this works out exactly!

20 September, 2021 07:20AM

September 19, 2021

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, August 2021

In August I was assigned 13.25 hours of work by Freexian's Debian LTS initiative and carried over 6 hours from earlier months. I worked 1.25 hours and will carry over the remainder.

I attended an LTS team meeting, and wrote my report for July 2021, but did not work on any updates.

19 September, 2021 09:28PM

September 18, 2021

hackergotchi for Mike Gabriel

Mike Gabriel

X2Go, Remmina and X2GoKdrive

In this blog post, I will cover a few related but also different topics around X2Go - the GNU/Linux based remote computing framework.

Introduction and Catch Up

For those who haven't come across X2Go so far... With X2Go [0] you can log into remote GNU/Linux machines graphically and launch headless desktop environments, seamless/published applications, or access an already running desktop session (on a local Xserver or running as a headless X2Go desktop session) via X2Go's session shadowing / mirroring feature.

Graphical backend: NXv3

For several years, there was only one graphical backend available in X2Go, the NXv3 software. In NXv3, you have a headless or nested (it can do both) Xserver that has some remote magic built-in and is able to transfer the Xserver's graphical data to a remote client (NX proxy). Over the wire, the NX protocol allows for data compression (JPEG, PNG, etc.) and combines it with bitmap caching, so that the overall result is a fast and responsive desktop experience even on high-latency and low-bandwidth connections. This especially applies to X desktop environments that use many native X protocol operations for drawing windows and widgets onto the screen. The more bitmaps involved (e.g. in applications with client-side rendering of window controls and such), the worse the session experience.

The current main maintainer of NXv3 (aka nx-libs [1]) is Ulrich Sibiller. Uli has my and the X2Go community's full appreciation, admiration and gratitude for all the work he does on nx-libs, constantly improving NXv3 without breaking compatibility with legacy use cases (yes, FreeNX is still alive, by the way).

NEW: Alternative Graphical Backend: X2Go Kdrive

Over the past 1.5 years, Oleksandr Shneyder (Alex), co-founder of X2Go, has been working on a re-implementation of an alternative, less X11-dependent graphical backend. The underlying Xserver technology is the kdrive part of the X.org server project. People on GNU/Linux might have used kdrive technology already: The Xephyr nested Xserver uses the kdrive implementation.

The idea of the X2Go Kdrive [2] implementation in X2Go is to provide a headless Xserver on the X2Go Server side for running X11-based desktop sessions inside, while using an X11-agnostic data protocol for sending the graphical desktop data to the client side for rendering. Whereas with NXv3 technology you need a local Xserver on the client side, with X2Go Kdrive you only need a client application that can draw bitmaps into some sort of framebuffer, such as a client-side X11 Xserver, a client-side Wayland compositor or (hold your breath) an HTMLv5 canvas in a web browser.

X2Go Kdrive Client Implementations

During the first half of this year, I tested and DEB-packaged Alex's X2Go HTMLv5 client code [3], and it has been available for testing in the X2Go nightly builds archive for a while now.

Of course, the native X2Go Client application has had X2Go Kdrive support for a while, too, but it requires a Qt5 application in the background, the x2gokdriveclient (which is still only available in X2Go nightly builds or from X2Go Git [4]).

X2Go and Remmina

As recently posted by the Remmina community [5], one of my employees has been working on finalizing an already existing draft of mine for the last couple of months: Remmina Plugin X2Go. This project was contracted by BAUR-ITCS UG (haftungsbeschränkt) a while back and has been financed via X2Go funding from one of their customers. Unfortunately, I never really got around to finalizing the project. Apologies for this.

Daniel Teichmann, who has been with the company for a while now but just recently switched to an employment model with considerably more work hours per week, picked up this project two months ago and has achieved awesome things along the way.

Daniel Teichmann and Antenore Gatta (Remmina core developer, aka tmow) have recently been cooperating intensely on this, with the objective of getting the X2Go plugin code merged into Remmina as soon as possible. We are pretty close to the first touchdown (i.e. code merge) of this endeavour.

Thanks to Antenore for his support on this. This is much appreciated.

Remmina Plugin X2Go - Current Challenges

The X2Go Plugin for Remmina implementation uses Python X2Go (PyHoca-CLI) under the bonnet and basically shells out to pyhoca-cli according to the session settings configured in the Remmina session profile UI. When using NXv3-based sessions, the session window appears on the client-side Xserver and immediately gets caught by Remmina and embedded into the Remmina frame (via the Xembed protocol) where its remote sessions are supposed to appear. (Thankfully, GtkSocket is still around in GTK-3.) The GTK-3 experts among you may have noticed: GtkSocket is obsolete and has been removed from GTK-4. Also, GtkSocket support is only available in GTK-3 when using its X11 rendering backend.

For the X2Go Kdrive implementation, we tested a similar approach (embedding the x2gokdriveclient Qt5 window via Xembed/GtkSocket), but it seems that GtkSocket and Qt5 applications don't work well together, and we have not succeeded in embedding the Qt5 window of the x2gokdriveclient application into Remmina so far. Also, this would only be a work-around for the bigger problem: long-term, we want to provide X2Go Kdrive support in Remmina not only for Remmina running with GTK-3/X11, but also when Remmina is used natively on top of Wayland.

So the more sustainable approach for showing an X2Go Kdrive based X2Go session in Remmina would be a GTK-3/4 or a Glib-2.0 + Cairo based rendering client provided as a shared library. This could then be used by Remmina to draw the session bitmaps into the Remmina session frame.

This would require a port of the x2gokdriveclient Qt code into a non-Qt implementation. However, we are running out of funding to make this happen at the moment.

More Funding Needed for this Journey

As you might guess, a project like the one proposed is something that some people do in their spare time, while others do it for a living.

I'd love to continue this project and have Daniel Teichmann continue his work on this, so that Remmina might soon be able to provide native X2Go Kdrive Client support.

If people read this and are interested in supporting such a project, please get in touch [6]. Thanks so much!

light+love
Mike (aka sunweaver)

[0] https://wiki.x2go.org/
[1] https://github.com/ArcticaProject/nx-libs
[2] https://code.x2go.org/gitweb?p=x2gokdrive.git;a=tree
[3] https://code.x2go.org/gitweb?p=x2gohtmlclient.git;a=tree
[4] https://code.x2go.org/gitweb?p=x2gokdriveclient.git;a=tree
[5] https://remmina.org/x2go/
[6] https://das-netzwerkteam.de/

18 September, 2021 05:04PM by sunweaver

September 17, 2021

Reproducible Builds (diffoscope)

diffoscope 184 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 184. This version includes the following changes:

[ Chris Lamb ]
* Fix the semantic comparison of R's .rdb files after a refactoring of
  temporary directory handling in a previous version.
* Support a newer format version of R's .rds files.
* Update tests for OCaml 4.12. (Closes: reproducible-builds/diffoscope#274)
* Move diffoscope.versions to diffoscope.tests.utils.versions.
* Use assert_diff in tests/comparators/test_rdata.py.
* Reformat various modules with Black.

[ Zbigniew Jędrzejewski-Szmek ]
* Stop using the deprecated distutils module by adding a version
  comparison class based on the RPM version rules.
* Update invocations of llvm-objdump for the latest version of LLVM.
* Adjust a test with one-byte text file for file(1) version 5.40.
* Improve the parsing of the version of OpenSSH.

[ Benjamin Peterson ]
* Add a --diff-context option to control the unified diff context size.
  (reproducible-builds/diffoscope!88)

You can find out more by visiting the project homepage.

17 September, 2021 12:00AM

September 16, 2021

hackergotchi for Chris Lamb

Chris Lamb

On Colson Whitehead's Harlem Shuffle

Colson Whitehead's latest novel, Harlem Shuffle, was always going to be widely reviewed, if only because his last two books won Pulitzer prizes. Still, after enjoying both The Underground Railroad and The Nickel Boys, I was certainly going to read his next book, regardless of what the critics were saying — indeed, it was actually quite agreeable to float above the manufactured energy of the book's launch.

Saying that, I was encouraged to listen to an interview with the author by Ezra Klein. Now I had heard Whitehead speak once before when he accepted the Orwell Prize in 2020, and once again he came across as a pretty down-to-earth guy. Or if I were to emulate the detached and cynical tone Whitehead embodied in The Nickel Boys, after winning so many literary prizes in the past few years, he has clearly rehearsed how to respond to the cliched questions authors must be asked in every interview. With the obligatory throat-clearing of 'so, how did you get into writing?', for instance, Whitehead replies with his part of the catechism that 'It seemed like being a writer could be a cool job. You could work from home and not talk to people.' The response is the right combination of cute and self-effacing... and with its slight tone-deafness towards enforced isolation, it was no doubt honed before Covid-19.

§

Harlem Shuffle tells three separate stories about Ray Carney, a furniture salesman and 'fence' for stolen goods in New York in the 1960s. Carney doesn't consider himself a genuine criminal though, and there's a certain logic to his relativistic morality. After all, everyone in New York City is on the take in some way, and if some 'lightly used items' in Carney's shop happened to have had 'previous owners', well, that's not quite his problem. 'Nothing solid in the city but the bedrock,' as one character dryly observes. Yet as Ezra pounces on in his NYT interview mentioned above, the focus on the Harlem underworld means there are very few women in the book, and Whitehead's circular response — ah well, it's a book about the criminals at that time! — was a little unsatisfying. Not only did it feel uncharacteristically slippery of someone justly lauded for his unflinching power of observation (after all, it was the author who decided what to write about in the first place), it foreclosed on the opportunity to delve into why the heist and caper genres (from The Killing, The Feather Thief, Ocean's 11, etc.) have historically been a 'male' mode of storytelling.

Perhaps knowing this to be the case, the conversation quickly steered towards Ray Carney's wife, Elizabeth, the only woman in the book who could be said to possess some plausible interiority. The following off-hand remark from Whitehead caught my attention:

My wife is convinced that [Elizabeth] knows everything about Carney's criminal life, and is sort of giving him a pass. And I'm not sure if that's true. I have to have to figure out exactly what she knows and when she knows it and how she feels about it.

I was quite taken by this, although not simply due to its effect on the story itself. As in, it immediately conjured up a charming picture of Whitehead's domestic arrangements: not only does Whitehead's wife feel free to disagree with what one of Whitehead's 'own' characters knows or believes, but Colson has no problem whatsoever sharing that disagreement with the public at large. (It feels somehow natural that Whitehead's wife believes her counterpart knows more than she lets on, whilst Whitehead himself imbues the protagonist's wife with a kind of neo-Victorian innocence.) I'm minded to agree with Whitehead's partner myself, if only due to the passages where Elizabeth is studiously ignoring Carney's otherwise unexplained freak-outs.

But all of these meta-thoughts simply underline just how emancipatory the Death of the Author can be. This product of academic literary criticism (the term was coined by Roland Barthes' 1967 essay of the same name) holds that the original author's intentions, ideas or biographical background carry no especial weight in determining how others should interpret their work. It is usually understood as meaning that a writer's own views are no more valid or 'correct' than the views held by someone else. (As an aside, I've found that most readers who encounter this concept for the first time have been reading books in this way since they were young. But the opposite is invariably true with cinephiles, who often have a bizarre obsession with researching or deciphering the 'true' interpretation of a film.) And with all that in mind, can you think of a more wry example of the freeing (and fun) nature of the Death of the Author than an author's own partner dissenting from their (Pulitzer Prize-winning) husband on the position of a lynchpin character?

§

The 1964 Harlem riot began after James Powell, a 15-year-old African American, was shot and killed by Thomas Gilligan, an NYPD police officer, in front of tens of witnesses. Gilligan was subsequently cleared by a grand jury.

As it turns out, the reviews for Harlem Shuffle have been almost universally positive, and after reading it in the two days after its release, I would certainly agree it is an above-average book. But it didn't quite take hold of me in the way that The Underground Railroad or The Nickel Boys did, especially the later chapters of The Nickel Boys that were set in contemporary New York and could thus make some (admittedly fairly explicit) connections from the 1960s to the present day — that kind of connection is not there in Harlem Shuffle, or at least I did not pick up on it during my reading.

I can see why one might take exception to that, though. For instance, it is certainly true that the week-long Harlem Riot forms a significant part of the plot, and some events in particular are entirely contingent on the ramifications of this momentous event. But it's difficult to argue the riot's impact is truly integral to the story, so not only is this uprising against police brutality almost regarded as a background event, but any contemporary allusion to the murder of George Floyd is subsequently watered down. It's nowhere near the historical rubbernecking of Forrest Gump (1994), of course, but that's not a battle you should ever be fighting.

Indeed, whilst a certain smoothness of affect is to be priced into the Whitehead reading experience, my initial overall reaction to Harlem Shuffle was fairly flat, despite all the action and intrigue on the page. The book perhaps betrays its origins as a work conceived during quarantine — after all, the book is essentially composed of three loosely connected novellas, almost as if the unreality and mental turbulence of lockdown prevented the author from performing the psychological 'deep work' of producing a novel-length text with his usual depth of craft. A few other elements chimed with this being a 'lockdown novel' as well, particularly the book's preoccupation with the sheer physicality of the city compared to the usual complex interplay between its architecture and its inhabitants. This felt like it had been directly absorbed into the book from the author walking around his deserted city, and thus being able to take in details for the first time:

The doorways were entrances into different cities—no, different entrances into one vast, secret city. Ever close, adjacent to all you know, just underneath. If you know where to look.

And I can't fail to mention that you can almost touch Whitehead's sublimated hunger to eat out again as well:

Stickups were chops—they cook fast and hot, you’re in and out. A stakeout was ribs—fire down low, slow, taking your time.

[…]

Sometimes when Carney jumped into the Hudson when he was a kid, some of that stuff got into his mouth. The Big Apple Diner served it up and called it coffee.

More seriously, however, the relatively thin personalities of minor characters then reminded me of the simulacrum of Zoom-based relationships, and the essentially unsatisfactory endings to the novellas felt reminiscent of lockdown pseudo-events that simply fizzle out without a bang. One of the stories ties up loose ends with: 'These things were usually enough to terminate a mob war, and they appeared to end the hostilities in this case as well.' They did? Well, okay, I guess.

§

The corner of 125th Street and Morningside Avenue in 2019, the purported location of Carney's fictional furniture store. Signage plays a prominent role in Harlem Shuffle, possibly due to the author's quarantine walks.

Still, it would be unfair to characterise myself as 'disappointed' with the novel, and none of this piece should be taken as really deep criticism. The book certainly was entertaining enough, and pretty funny in places as well:

Carney didn’t have an etiquette book in front of him, but he was sure it was bad manners to sit on a man’s safe.

[…]

The manager of the laundromat was a scrawny man in a saggy undershirt painted with sweat stains. Launderer, heal thyself.

Yet I can't shake the feeling that every book you write is a book that you don't, and so we might need to hold out a little longer for Whitehead's 'George Floyd novel'. (Although it is for others to say how much of this sentiment is the expectations of a White Reader for The Black Author to ventriloquise the pain of 'their' community.)

Some room for personal critique is surely permitted. I dearly missed the junk food energy of the dry and acerbic observations that run through Whitehead's previous work. At one point he had a good line on the model tokenisation that lurks behind 'The First Negro to...' labels, but the callbacks to this idea ceased without any payoff. Similar things happened with the not-so-subtle critiques of the American Dream:

“Entrepreneur?” Pepper said the last part like manure. “That’s just a hustler who pays taxes.”

[…]

One thing I’ve learned in my job is that life is cheap, and when things start getting expensive, it gets cheaper still.

Ultimately, though, I think I just wanted more. I wanted a deeper exploration of how the real power in New York is not wielded by individual street hoodlums or even the cops but by real estate, essentially serving as a synecdoche for Capital as a whole. (A recent take on this can be felt in Jed Rothstein's 2021 documentary, WeWork: Or the Making and Breaking of a $47 Billion Unicorn… and it is perhaps pertinent to remember that the US President at the time this novel was written was affecting to be a real estate tycoon.) Indeed, just like the concluding scenes of J. J. Connolly's Layer Cake, although you can certainly pull off a cool heist against the Man, power ultimately resides in those who control the means of production... and a homespun furniture salesman on the corner of 125th & Morningside just ain't that. There are some nods to this kind of analysis in the conclusion of the final story ('Their heist unwound as if it had never happened, and Van Wyck kept throwing up buildings.'), but, again, I would have simply liked more.

And when I then attempted to file this book away into the broader media landscape, given the current cultural visibility of 1960s pop culture (e.g. One Night in Miami (2020), Judas and the Black Messiah (2021), Summer of Soul (2021), etc.), Harlem Shuffle also seemed like a missed opportunity to critically analyse our (highly qualified) longing for the civil rights era. I can certainly understand why we might look fondly on the cultural products from a period when politics was less alienated, when society was less atomised, and when it was still possible to imagine meaningful change, but in this dimension at least, Harlem Shuffle seems to merely contribute to this nostalgic escapism.

16 September, 2021 04:10PM

September 15, 2021

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

plocate in Fedora

It seems that due to the work of Zbigniew Jędrzejewski-Szmek, plocate is now in Fedora Rawhide. This carries a special significance; not only is Fedora an important distribution, but it is also the upstream of mlocate. Thus, an expressed desire to replace mlocate with plocate over the next few Fedora releases feels like it carries a certain amount of support on the road towards world domination. :-)

I'd love to see someone make a version of GNU tar that uses io_uring; it's really slow for many small files on rotating media. Also, well, a faster dpkg. :-)

15 September, 2021 09:37PM

Ian Jackson

Get source to Debian packages only via dgit; "official" git links are beartraps

tl;dr

dgit clone sourcepackage gets you the source code, as a git tree, in ./sourcepackage. cd into it and dpkg-buildpackage -uc -b.

Do not use: "VCS" links on official Debian web pages like tracker.debian.org; "debcheckout"; searching Debian's gitlab (salsa.debian.org). These are good for Debian experts only.

If you use Debian's "official" source git repo links you can easily build a package without Debian's patches applied.[1] This can even mean missing security patches. Or maybe it can't even be built in a normal way (or at all).

OMG WTF BBQ, why?

It's complicated. There is History.

Debian's "most-official" centralised source repository is still the Debian Archive, which is a system based on tarballs and patches. I invented the Debian source package format in 1992/3 and it has been souped up since, but it's still tarballs and patches. This system is, of course, obsolete, now that we have modern version control systems, especially git.

Maintainers of Debian packages have invented ways of using git anyway, of course. But this is not standardised. There is a bewildering array of approaches.

The most common approach is to maintain a git tree containing a pile of *.patch files, which are then often maintained using quilt. Yes, really, many Debian people are still using quilt, despite having git! There is machinery for converting this git tree containing a series of patches, to an "official" source package. If you don't use that machinery, and just build from git, nothing applies the patches.

[1] This post was prompted by a conversation with a friend who had wanted to build a Debian package, and didn't know to use dgit. They had got the source from salsa via a link on tracker.d.o, and built .debs without Debian's patches. This is not a theoretical unsoundness, but a very real practical risk.

Future is not very bright

In 2013 at the Debconf in Vaumarcus, Joey Hess, myself, and others came up with a plan to try to improve this, which we thought would be deployable. (Previous attempts had failed.) Crucially, this transition plan does not force change onto any of Debian's many packaging teams, nor onto people doing cross-package maintenance work. I worked on this for quite a while, and at a technical level it is a resounding success.

Unfortunately there is a big limitation. At the current stage of the transition, to work at its best, this replacement scheme hopes that maintainers who update a package will use a new upload tool. The new tool fits into their existing Debian git packaging workflow and has some benefits, but it does make things more complicated rather than less (like any transition plan must, during the transitional phase). When maintainers don't use this new tool, the standardised git branch seen by users is a compatibility stub generated from the tarballs-and-patches. So it has the right contents, but useless history.

The next step is to allow a maintainer to update a package without dealing with tarballs-and-patches at all. This would be massively more convenient for the maintainer, so an easy sell. And of course it links the tarballs-and-patches to the git history in a proper machine-readable way.

We held a "git packaging requirements-gathering session" at the Curitiba Debconf in 2019. I think the DPL's intent was to try to get input into the git workflow design problem. The session was a great success: my existing design was able to meet nearly everyone's needs and wants. The room was obviously keen to see progress. The next stage was to deploy tag2upload. I spoke to various key people at the Debconf and afterwards in 2019 and the code has been basically ready since then.

Unfortunately, deployment of tag2upload is mired in politics. It was blocked by a key team because of unfounded security concerns; positive opinions from independent security experts within Debian were disregarded. Of course it is always hard to get a team to agree to something when it's part of a transition plan which treats their systems as an obsolete setup retained for compatibility.

Current status

If you don't know about Debian's git packaging practices (eg, you have no idea what "patches-unapplied packaging branch without .pc directory" means), and don't want to learn about them, you must use dgit to obtain the source of Debian packages. There is a lot more information and detailed instructions in dgit-user(7).

Hopefully either the maintainer did the best thing, or, if they didn't, you won't need to inspect the history. If you are a Debian maintainer, you should use dgit push-source to do your uploads. This will make sure that users of dgit will see a reasonable git history.

edited 2021-09-15 14:48 Z to fix a typo




15 September, 2021 02:57PM

September 14, 2021

Sven Hoexter

PV - Monitoring Envertech Microinverter via envertecportal.com

Some time ago I looked briefly at an Envertech data logger for small scale photovoltaic setups. Turned out that PV inverters are kinda unreliable, and you really have to monitor them to notice downtimes and defects. Since my pal shot for a quick win, I've cobbled together another Python script to query the portal at www.envertecportal.com and report back if the generated power is down to 0. The script is currently run on a vserver via cron and reports back via the system MTA. So yeah, you need to have something like that already at hand.

Script and Configuration

You have to provide your PV system's location with latitude and longitude so the script can calculate (via python3-suntime) the sunrise and sunset times. At the location we deal with, we expect to generate some power at least from sunrise + 1h to sunset - 1h. That is tunable via the configuration option toleranceSeconds.
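
For illustration, the window check could look roughly like this (a minimal sketch, not the actual script; the coordinates and tolerance are made-up placeholder values, and the real script reads them from its configuration):

from datetime import datetime, timedelta, timezone
from suntime import Sun

latitude, longitude = 53.55, 9.99   # placeholder location
toleranceSeconds = 3600             # placeholder tolerance (1h)

sun = Sun(latitude, longitude)
# suntime returns timezone-aware datetimes in UTC
window_start = sun.get_sunrise_time() + timedelta(seconds=toleranceSeconds)
window_end = sun.get_sunset_time() - timedelta(seconds=toleranceSeconds)

# only complain about 0 W while we are inside the expected window
now = datetime.now(timezone.utc)
expect_power = window_start <= now <= window_end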

Retrieving the stationId is a bit ugly because it's not provided via any API; instead it's rendered server-side into the website. So I just logged in on the portal and picked it up by looking at the page source.

www.envertecportal.com API

I guess this is a classic in IoT land, but neither the documentation provided on the portal frontpage as docx, nor the API docs at port 8090, are complete and correct. The few bits I gathered via the Firefox Web Developer Tools are (a rough Python sketch follows the list):

  1. Login https://www.envertecportal.com/apiaccount/login - POST, send userName and pwd containing your login name and password. The response JSON is very explicit if your login was not successful and why.
  2. Store the session cookie called ASP.NET_SessionId for use on all subsequent requests.
  3. Retrieve station info https://www.envertecportal.com/ApiStations/getStationInfo - POST, send ASP.NET_SessionId and stationId with the ID of the station. Returns a JSON with an object named Data. The field Power contains the currently generated power as a float with two digits (e.g. 0.01).
  4. Logout https://www.envertecportal.com/apiAccount/Logout - POST, send ASP.NET_SessionId.
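
Stitched together, the flow might look like this in Python (a rough sketch based only on the endpoints listed above, not the actual script; requests keeps the ASP.NET_SessionId cookie in its session jar, and error handling is omitted):

import requests

BASE = "https://www.envertecportal.com"

def query_current_power(user, password, station_id):
    s = requests.Session()
    # login; the ASP.NET_SessionId cookie ends up in the session jar
    # and is sent along automatically on all subsequent requests
    s.post(BASE + "/apiaccount/login",
           data={"userName": user, "pwd": password}).raise_for_status()

    # "Power" is the currently generated power as a float
    # with two digits (e.g. 0.01)
    r = s.post(BASE + "/ApiStations/getStationInfo",
               data={"stationId": station_id})
    r.raise_for_status()
    power = float(r.json()["Data"]["Power"])

    s.post(BASE + "/apiAccount/Logout")
    return power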

Some Surprises

There were a few surprises; maybe they help others dealing with an Envertech setup.

  1. The portal truncates passwords at 16 chars.
  2. The "Forget Password?" function mails you back the password in plain text (that's how I learned about 1.).
  3. The login API endpoint reporting the exact reason why the login failed is somewhat out of fashion. Though this one is probably not a credential-stuffing target because there is no money to be made, so I don't care.
  4. The data logger reports the data to www.envertecportal.com at port 10013.
  5. There is some checksumming done on the reported data, but the system is not replay safe. So you can send it any valid data string at a later time and get wrong data recorded.
  6. People at forum.fhem.de decoded some values but could not figure out the checksumming so far.

14 September, 2021 08:11PM

hackergotchi for Joachim Breitner

Joachim Breitner

A Candid explainer: Quirks

This is the fifth and final post in a series about the interface description language Candid.

If you made it this far, you now have a good understanding of what Candid is, what it is for and how it is used. For this final post, I’ll put the spotlight on specific aspects of Candid that are maybe surprising, or odd, or quirky. This section will be quite opinionated, and could maybe be called “what I’d do differently if I’d re-do the whole thing”.

Note that these quirks are not serious problems, and they don’t invalidate the overall design. I am writing this up not to discourage the use of Candid, but merely help interested parties to understand it better.

References in the wire format

When the work on Candid began at DFINITY, the Internet Computer was still far away from being a thing, and many fundamental aspects about it were still in the air. In particular, there was still talk about embracing capabilities as a core feature of the application model, which would be implemented as opaque references on the system level, likely building on WebAssembly’s host reference type proposal (which only landed recently), and could be used to model access permissions, custom tokens and many other things.

So Candid is designed with that in mind, and you’ll find that its wire format is not just a type table and a value table, but actually

a triple (T, M, R), where T (“type”) and M (“memory”) are sequences of bytes and R (“references”) is a sequence of references.

Also, the wire format for values of function and service types has an extra byte to distinguish between “public references” (represented by a principal and possibly a method name in the data part) and these opaque references.

Alas, references never made it into the Internet Computer, so all Candid implementations simply ignore that part of the specification. But it’s still in the spec, and if it confused you before, now you know why.

Hashed field names

Candid record and variant types look like they have textual field names:

type T = record { syndactyle : nat; trustbuster: bool }

But that is actually only true superficially. The wire format for Candid only stores hashes of field names. So the above is actually equivalent to

type T = record { 4260381820 : nat; 3504418361 : bool }

or, for that matter, to

type T = record { destroys : bool; rectum : nat }

(Yes, I used an English word list to find these hash collisions. There aren’t that many, actually.)
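
The hash itself is small enough to sketch in a few lines of Python (my reading of the spec, for illustration only: a base-223 polynomial over the UTF-8 bytes of the name, truncated to 32 bits):

def candid_field_hash(name):
    # hash(id) = ( sum of utf8(id)[i] * 223^(k-i) ) mod 2^32,
    # computed here in the usual Horner fashion
    h = 0
    for byte in name.encode("utf-8"):
        h = (h * 223 + byte) % 2**32
    return h

# if I read the spec correctly, this reproduces the collisions above:
#   candid_field_hash("syndactyle") == candid_field_hash("rectum")
#   candid_field_hash("trustbuster") == candid_field_hash("destroys")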

The benefit of such hashing is that the messages are a bit smaller in most (not all) cases, but it is a big annoyance for dynamic uses of Candid. It’s the reason why tools like dfx, if they don’t know the Candid interface of a service, will print the result with just the numerical hash, letting you guess which field is which.

It also complicates host languages that derive Candid types from the host language, like Motoko, as some records (e.g. record { trustbuster: bool; destroys : int }) with field name hash collisions cannot be represented in Candid, and either the host language’s type system needs to be Candid-aware now (as is the case of Motoko), or serialization/deserialization will fail at runtime, or odd bugs can happen.

(More discussion of this issue).

Tuples

Many languages have a built-in notion of a tuple type (e.g. (Int, Bool)), but Candid does not have such a type. The only first class product type is records.

This means that tuples have to be encoded as records somehow. Conveniently(?) record fields are just numbers after all, so the type (Int, Bool) would be mapped to the type

record { 0 : int; 1 : bool }

So tuples can be expressed. But from my experience implementing the type mappings for Motoko and Haskell, this causes headaches. To get a good experience when importing from Candid, the tools have to try to reconstruct which records may have been tuples originally, and turn them into tuples.

The main argument for the status quo is that Candid types should be canonical, and there should not be more than one product type, and records are fine, and there needs to be no anonymous product type. But that has never quite resonated with me, given the practical reality of tuple types in many host languages.

Argument sequences

Did I say that Candid does not have tuple types? Turns out it does, sort of. There is no first class anonymous product, but since functions take sequences of arguments and results, there is a tuple type right there:

func foo : (bool, int) -> (int, bool)

Again, I found that ergonomic interaction with host languages becomes relatively unwieldy when functions are required to take and return sequences of values. This is especially true for languages where functions take one argument value or return one result type (the latter being very common). Here, sequences of length one are turned into that type directly, longer sequences turn into the host language’s tuple type, and nullary sequences turn into the idiomatic unit type. But this means that the types (int, bool) -> () and (record { 0: int, 1: bool}) -> () may be mapped to the same host language type, which causes problems when you hope to encode all necessary Candid type information in the host language.

Another oddity with argument and result sequences is that you can give names to the entries, e.g. write

func hello : (last_name : text; first_name : text) -> ()

but these names are completely ignored! So while this looks like you can, for example, add new optional arguments in the middle, such as

func hello : (last_name : text; middle_name: opt text, first_name : text) -> ()

without breaking clients, this does not have the effect you think it has and will likely break.

My suggestion is to never put names on function arguments and result values in Candid interfaces, and for anything that might be extended with new fields or where you want to name the arguments, use a single record type as the only argument:

func hello : (record { last_name : text; first_name : text}) -> ()

This allows you to add and remove arguments more easily and reliably.

Type “shorthands”

The Candid specification defines a system of types, and then adds a number of “syntactic short-hands”. For example, if you write blob in a Candid type description, it ought to mean the same as vec nat8.

My qualm with that is that it doesn’t always mean the same. A Candid type description is interpreted by a number of, say, “consumers”. Two such consumers are part of the Candid specification:

  • The specification that defines the wire format for that type
  • The upgrading (subtyping) rules

But there are more! For every host language, there is some mapping from Candid types to host language types, and also generic tools like Candid UI are consumers of the type algebra. If these were to take the Candid specification as gospel, they would be forced to treat blob and vec nat8 the same, but that would be quite unergonomic and might cause performance regressions (most languages try to map blob to some compact binary data type, while vec t tends to turn into some form of array structure).

So they need to be pragmatic and treat blob and vec nat8 differently. But then, for all practical purposes, blob is not just a short-hand of vec nat8. They are different types that just happen to have the same wire representations and subtyping relations.

This affects not just blob, but also “tuples” (record { blob; int; bool }) and field “names”, as discussed above.

The value text format

For a while after defining Candid, the only implementation was in Motoko, and all the plumbing was automatic, so there was never a need for users to explicitly handle Candid values, as all values were Motoko values. Still, for debugging and testing and such things, we eventually needed a way to print out Candid values, so the text format was defined (“To enable convenient debugging, the following grammar specifies a text format for values…”).

But soon the dfx tool learned to talk to canisters, and now users needed to enter Candid values on the command line, possibly even when talking to canisters for which the interface was not known to dfx. And, sure enough, the textual interface defined in the Candid spec was used.

Unfortunately, since it was not designed for that use case, it is rather unwieldy:

  • It is quite verbose. You have to write record { … }, not just { … }. Vectors are written vec { …; …} instead of some conventional syntax like […, …]. Variants are written as variant { error = "…"} with braces that don’t add any value here, and something like #error "…" might have worked as well.

    With a bit more care, a more concise and ergonomic syntax might have been possible.

  • It wasn’t designed to be sufficient to create a Candid value from it. If you write 5 it’s unclear whether that’s a nat or an int16 or what (and all of these have different wire representations). Type annotations were later added, but are relatively unwieldy, and don’t cover all cases (e.g. a service reference with a recursive type cannot be represented in the textual format at the moment).

  • Not really the fault of the textual format, but some useful information about the types is not reflected in the type description that’s part of the wire format. In particular not the field names, and whether a value was intended to be binary data (blob) or a list of small numbers (vec nat8), so pretty-printing such values requires guesswork. The Haskell library even tries to brute-force the hash to guess the field name, if it is short or in an English word list!

In hindsight I think it was too optimistic to assume that correct static type information is always available, and instead of actively trying to discourage dynamic use, Candid might be better if we had taken these (unavoidable?) use cases into account.

Custom wire format

At the beginning of this post, I have a “Candid is …” list. The list is relatively long, and the actual wire format is just one bullet point. Yes, defining a wire format that works isn’t rocket science, and it was easiest to just make one up. But since most of the interesting meat of Candid is in other aspects (subtyping rules, host language integration), I wonder if it would have been better to use an existing, generic wire format, such as CBOR, and build Candid as a layer on top.

This would give us plenty of tools and libraries to begin with. And maybe it would have reduced the barrier of entry for developers; instead we now have the very strange situation that DFINITY advocates for the universal use of Candid on the Internet Computer, so that all services can smoothly interact, yet two of the most important services on the Internet Computer (the registry and the ledger) use Protobuf as their primary interface format, with Candid interfaces missing or an afterthought.

Sideways Interface Evolution

This is not a quirk of Candid itself, but rather an idiom of how you can use Candid that emerged from our solution for record extensions and upgrades.

Consider our example from before, a service with interface

service { add_user : (record { login : text; name : text }) -> () }

where you want to add an age field, which should be a number.

The “official” way of doing that is to add that field with an optional type:

service { add_user : (record { login : text; name : text; age : opt nat }) -> () }

As explained above, this will not break old clients, as the decoder will treat the missing record field as null. So far so good.

But often when adding such a field you don’t want to bother new clients with the fact that this age was, at some point in the past, not there yet. And you can do that! The trick is to distinguish between the interface you publish and the interface you implement. You can (for example in your documentation) state that the interface is

service { add_user : (record { login : text; name : text; age : nat }) -> () }

which is not a subtype of the old type, but it is the interface you want new clients to work with. And then your implementation uses the type with opt nat. Calls from old clients will come through as null, and calls from new clients will come through as opt 42.

We can see this idiom used in the Management Canister of the Internet Computer. The current documented interface only mentions a controllers : vec principal field in the settings, but the implementation can still handle both the old controller : principal and the new controllers field.

It’s probably advisable to let your CI system check that new versions of your service continue to implement all published interfaces, including past interfaces. But as long as the actual implementation’s interface is a subtype of all interfaces ever published, this works fine.

This pattern is related to when your service implements, say, http_request (so its implemented interface is a subtype of that common interface), but does not include that method in the published documentation (because clients of your service don’t need to call it).

Self-describing Services

As you have noticed, Candid was very much designed assuming that all parties always have the service type of services they want to interact with. But the Candid specification does not define how one can obtain the interface of a given service, and there isn’t really an official way to do that on the Internet Computer.

That is unfortunate, because many interesting features depend on that: Such as writing import C "ic:7e6iv-biaaa-aaaaf-aaada-cai" in your Motoko program, and having its type right there. Or tools like ic.rocks, that allow you to interact with any canister right there.

One reason why we don’t really have that feature yet is because of disagreements about how dynamic that feature should be. Should you be able to just ask the canister for its interface (and allow the canister to vary the response, for example if it can change its functionality over time, even without changing the actual wasm code)? Or is the interface a static property of the code, and one should be able to query the system for that data, without the canister’s active involvement. Or, completely different, should interfaces be distributed out of band, maybe together with API documentation, or in some canister registry somewhere else?

I always leaned towards the first of these options, but not convincingly enough. The second option requires system assistance, so more components to change, and more teams to be involved that maybe intrinsically don’t care a lot about this feature. And the third might emerge as the community matures and builds such infrastructure, but that has not happened yet.

In the end I sneaked in an implementation of the first into Motoko, arguing that even if we don’t know yet how this feature will be implemented eventually, we all want the feature to exist somehow, and we really really want to unblock all the interesting applications it enables (e.g. Candid UI). That’s why every Motoko canister, and some Rust canisters too, implements a method

__get_candid_interface_tmp_hack : () -> (text)

that one can use to get the Candid interface file.

The name was chosen to signal that this may not be the final interface, but like all good provisional solutions, it may last longer than intended. If that’s the case, I’m not sorry.

This concludes my blog post series about Candid, for now. If you want to know more, feel free to post your question on the DFINITY developer forum, and I’ll probably answer.

14 September, 2021 06:13PM by Joachim Breitner (mail@joachim-breitner.de)