July 09, 2020

Enrico Zini

Mime type associations

The last step of my laptop migration was to fix mime type associations, which seem to choose the application for opening a file based on whatever application was installed last, the phases of the moon, and whichever option is the most annoying.

The state of my system after a fresh install is that, for application/pdf, xdg-open (used for example by pcmanfm) runs inkscape, and run-mailcap (used for example by neomutt) runs the calibre ebook viewer.

It looks like there are at least two systems to understand, debug and fix, instead of one.

xdg-open

This comes from the package xdg-utils, and works using .desktop files:

# This runs inkscape
$ xdg-open file.pdf

There is a tool called xdg-mime that queries what .desktop file is associated with a given mime type:

$ xdg-mime query default application/pdf
inkscape.desktop

You can use xdg-mime default to change an association, and it works nicely:

$ xdg-mime default org.gnome.Evince.desktop application/pdf
$ xdg-mime query default application/pdf
org.gnome.Evince.desktop

However, if you accidentally mistype the name of the .desktop file, it won't complain and it will silently reset the association to the arbitrary default:

$ xdg-mime default org.gnome.Evince.desktop application/pdf
$ xdg-mime query default application/pdf
org.gnome.Evince.desktop
$ xdg-mime default evince.desktop application/pdf
$ echo $?
0
$ xdg-mime query default application/pdf
inkscape.desktop

You can use a GUI like xfce4-mime-settings from the xfce4-settings package to perform the same kind of changes while avoiding typing mistakes.

The associations seem to be saved in ~/.config/mimeapps.list.
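
After the xdg-mime default command above, that file contains something like the following (a sketch; a real file may have more sections and entries):

[Default Applications]
application/pdf=org.gnome.Evince.desktop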

run-mailcap

This comes from the package mime-support.

You can test things by running it using --norun:

$ run-mailcap --norun file.pdf
ebook-viewer file.pdf

run-mailcap uses ~/.mailcap and /etc/mailcap to map mime types to commands. This is what's in the system default:

$ grep application/pdf /etc/mailcap
application/pdf; ebook-viewer %s; test=test -n "$DISPLAY"
application/pdf; calibre %s; test=test -n "$DISPLAY"
application/pdf; gimp-2.10 %s; test=test -n "$DISPLAY"
application/pdf; evince %s; test=test -n "$DISPLAY"

To fix this, I copy-pasted the evince line into ~/.mailcap, and indeed it gets used:

$ run-mailcap --norun file.pdf
evince file.pdf
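
At this point my ~/.mailcap contained just the copied line:

$ cat ~/.mailcap
application/pdf; evince %s; test=test -n "$DISPLAY"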

There is a /etc/mailcap.order file providing a limited way to order entries in /etc/mailcap, but it can only be manipulated system-wide, and cannot be used for user preferences.

Sadly, this means that if a package changes its mailcap invocation because of, say, a security issue in the previous one, the local override will never pick up the fix.

I am really not comfortable with that. As a workaround, I put this in my ~/.mailcap:

application/pdf; xdg-open %s && sleep 0.3s; test=test -n "$DISPLAY"

The sleep 0.3s is needed because xdg-open exits right after starting the program; when invoked by mutt, this means that mutt could delete the attachment before evince has a chance to open it. I had to use the same workaround for sensible-browser, since the same thing happens when a browser opens a document in an existing tab.
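
For example, the equivalent entry for HTML attachments would look something like this (an untested sketch along the same lines):

text/html; sensible-browser %s && sleep 0.3s; test=test -n "$DISPLAY"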

I wonder if there is any reason run-mailcap could not be implemented as a wrapper to xdg-open.

I reported #964723 elaborating on these thoughts.

09 July, 2020 08:20AM

Laptop migration

This laptop used to be extra-flat

My laptop battery started to explode in slow motion. HP requires 10 business days to repair my laptop under warranty, and I cannot afford that length of downtime.

Alternatively, HP quoted me 375€ + VAT for on-site repairs, which I thought was very funny.

For 376.55€ + VAT, which is pretty much exactly the same amount, I bought instead a refurbished ThinkPad X240 with a dual-core i5, 8GB of RAM, a 250GB SSD, and a 1920x1080 IPS display, to use as a spare while my laptop is being repaired. I'd like to thank HP for giving me the opportunity to own a ThinkPad.

Since I'm migrating my whole system to the spare and then (hopefully) back, I'm documenting what I need to be fully productive on new hardware.

Install Debian

A basic Debian netinst with no tasks selected is good enough to get going.

Note that even if wifi worked in the Debian Installer, that doesn't mean it will work in the minimal system it installed. See here for instructions on quickly bringing up wifi on a newly installed minimal system.

Copy /home

A simple tar of /home is all I needed to copy my data over.

A neat way to do it was connecting the two laptops with an ethernet cable, and using netcat:

# On the source
tar -C / -zcf - home | nc -l -p 12345 -N
# On the target
nc 10.0.0.1 12345 | tar -C / -zxf -

Since the data travels unencrypted this way, don't do it over wifi.
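
If you do have to go over an untrusted network, the same transfer can be wrapped in ssh at some cost in speed; a sketch, assuming root ssh access to the source machine at 10.0.0.1:

# On the target
ssh root@10.0.0.1 'tar -C / -zcf - home' | tar -C / -zxf -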

Install packages

I maintain a few simple local metapackages that depend on the packages I usually use.

I could just install those and let apt bring in their dependencies.

For the build dependencies of the programs I develop, I use mk-build-deps from the devscripts package to create metapackages that make sure they are installed.
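
For example, run from inside a source tree, something like this creates and installs a build-dependency metapackage (options per mk-build-deps(1)):

$ mk-build-deps --install --root-cmd sudo debian/control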

Here's an extract from debian/control of the metapackage:

Source: enrico
Section: admin
Priority: optional
Maintainer: Enrico Zini <enrico@debian.org>
Build-Depends: debhelper (>= 11)
Standards-Version: 3.7.2.1

Package: enrico
Section: admin
Architecture: all
Depends:
  mc, mmv, moreutils, powertop, syncmaildir, notmuch,
  ncdu, vcsh, ddate, jq, git-annex, eatmydata,
  vdirsyncer, khal, etckeeper, moc, pwgen
Description: Enrico's working environment

Package: enrico-devel
Section: devel
Architecture: all
Depends:
  git, python3-git, git-svn, gitk, ansible, fabric,
  valgrind, kcachegrind, zeal, meld, d-feet, flake8, mypy, ipython3,
  strace, ltrace
Description: Enrico's development environment

Package: enrico-gui
Section: x11
Architecture: all
Depends:
  xclip, gnome-terminal, qalculate-gtk, liferea, gajim,
  mumble, sm, syncthing, virt-manager
Recommends: k3b
Description: Enrico's GUI environment

Package: enrico-sanity
Section: admin
Architecture: all
Conflicts: libapache2-mod-php, libapache2-mod-php5, php5, php5-cgi, php5-fpm, libapache2-mod-php7.0, php7.0, libphp7.0-embed, libphp-embed, libphp5-embed
Description: Enrico's sanity
 Metapackage with a list of packages that I do not want anywhere near my
 system.

System-wide customizations

I tend to avoid changing system-wide configuration as much as possible, so copying over /home and installing packages takes care of 99% of my needs.

There are a few system-wide tweaks I cannot do without:

  • setup postfix to send mail using my mail server
  • copy Network Manager system connections files in /etc/NetworkManager/system-connections/
  • update-alternatives --config editor

For postfix, I have a little ansible playbook that takes care of it.

Network Manager system connections need to be copied manually: a plain copy and a systemctl restart network-manager are enough. Note that Network Manager will ignore the files unless their owner and permissions are what it expects.
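
Concretely, something along these lines (the source path is a placeholder; the ownership and permissions are what Network Manager expects):

cp /backup/etc/NetworkManager/system-connections/* /etc/NetworkManager/system-connections/
chown root:root /etc/NetworkManager/system-connections/*
chmod 600 /etc/NetworkManager/system-connections/*
systemctl restart network-manager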

Fine tuning

Comparing the output of dpkg --get-selections between the old and the new system might highlight packages manually installed in a hurry and not added to the metapackages.
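
For example (file names are arbitrary):

# On the old system
dpkg --get-selections > selections-old
# On the new system
dpkg --get-selections > selections-new
diff selections-old selections-new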

Finally, what remains is fixing the sad state of mimetype associations, which seem to choose the application for opening a file based on whatever application was installed last, the phases of the moon, and whichever option is the most annoying.

Currently on my system, PDFs are opened in inkscape by xdg-open and in calibre by run-mailcap. Let's see how long it takes to figure this one out.

09 July, 2020 07:34AM

July 07, 2020

Noah Meyerhans

Setting environment variables for gnome-session

Am I missing something obvious? When did this get so hard?

In the old days, you configured your desktop session on a Linux system by editing the .xsession file in your home directory. The display manager (login screen) would invoke the system-wide xsession script, which would either defer to your personal .xsession script or set up a standard desktop environment. You could put whatever you want in the .xsession script, and it would be executed. If you wanted a specific window manager, you’d run it from .xsession. Start emacs or a browser or an xterm or two? .xsession. It was pretty easy, and super flexible.

For the past 25 years or so, I’ve used X with an environment started via .xsession. Early on it was fvwm with some programs, then I replaced fvwm with Window Maker (before that was even its name!), then switched to KDE. More recently (OK, like 10 years ago) I gradually replaced KDE with awesome and various custom widgets. Pretty much everything was based on a .xsession script, and that was fine. One particularly nice thing about it was that I could keep .xsession and any related helper programs in a git repository and manage changes over time.

More recently I decided to give Wayland and GNOME an honest look. This has mostly been fine, but everything I’ve been doing in .xsession is suddenly useless. OK, fine, progress is good. I’ll just use whatever new mechanisms exist. How hard can it be?

OK, so here we go. I am running GNOME. This isn’t so bad. Alt+F2 brings up the “Run Command” dialog. It’s a different keystroke than what I’m used to, but I can adapt. (Obviously I can reconfigure the key binding, and maybe someday I will, but that’s not the point here.) I have some executables in ~/bin. Oops, the run command dialog can’t find them. No problem, I just need to update the PATH variable that it sees. Hmmm… So how does one do that, anyway? GNOME has a help system, but searching that doesn’t reveal anything. But that’s fine, maybe it’s inherited from the parent process. But there’s no xsession script equivalent, since this isn’t X anymore at all. The familiar stuff in /etc/X11/Xsession is no longer used. What’s the equivalent in Wayland? Turns out, there isn’t a shell script at all anymore, at least not in how Wayland and GNOME interact in Debian’s configuration, which seems fairly similar to how anybody else would set this up. The GNOME session runs from a systemd-managed user session.

Digging into some web search results suggests that systemd provides a mechanism for setting some environment variables for services started by the user instance of systemd. OK, so let’s create some files in ~/.config/environment.d and we should be good. Except no, this isn’t working. I can set some variables, but something is overriding PATH. I can create this file:

$ cat ~/.config/environment.d/01_path.conf
USER_INITIAL_PATH=${PATH}
PATH=${HOME}/bin:${HOME}/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
USER_CUSTOM_PATH=${PATH}

After logging in, the “Run a command” dialog still doesn’t see my PATH. So I use Alt+F2 and sh -c "env > /tmp/env" to capture the environment, and this is what I see:

USER_INITIAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PATH=/usr/local/bin:/usr/bin:/bin:/usr/games
USER_CUSTOM_PATH=/home/noahm/bin:/home/noahm/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

So, my environment.d file is there, and it’s getting looked at, but something else is clobbering my PATH later in the startup process. But what? Where? Why? The systemd docs don’t indicate that there’s anything special about PATH, and nothing in /lib/systemd/user-environment-generators/ seems to treat it specially. The string “PATH” doesn’t appear in /lib/systemd/user/ either. Looking for the specific value that’s getting assigned to PATH in /etc shows the only occurrence of it being in /etc/zsh/zshenv, so maybe that’s where it’s coming from? But it should only get set there if it’s otherwise unset or only very minimally set. So I still have no idea where it’s coming from.
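
These are, roughly, the searches described above, in case you want to repeat the hunt (paths may differ on your system):

$ grep -r PATH /lib/systemd/user-environment-generators/ /lib/systemd/user/
$ grep -r '/usr/local/bin:/usr/bin:/bin:/usr/games' /etc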

OK, so ignoring where my custom value is getting overridden, maybe what’s configured in /lib/systemd/user will point me in the right direction. systemd --user status suggests that the interesting part of my session is coming from gnome-shell-wayland.service. Can we use a standard systemd drop-in as documented in systemd.unit(5)? It turns out that we can. This file sets things up the way I want:

$ cat .config/systemd/user/gnome-shell-wayland.service.d/path.conf
[Service]
Environment=PATH=%h/bin:%h/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
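
One practical note (standard systemd behaviour, nothing GNOME-specific): after creating or editing the drop-in, the user manager needs to re-read its configuration, and the change only takes effect at the next login:

$ systemctl --user daemon-reload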

Is that right? It really doesn’t feel ideal to me. Systemd’s Environment directive can’t reference existing environment variables, and I can’t use conditionals to do things like add a directory to the PATH only if it exists, so it’s still a functional regression from what we had before. But at least it’s a text file, edited by hand, trackable in git, so that’s not too bad.

There are some people out there who hate systemd, and will cite this as an illustration of why. However, I’m not one of those people, and I very much like systemd as an init system. I’d be happy to throw away sysvinit scripts forever, but I’m not quite so happy with the state of .xsession’s replacements. Despite the similarities, I don’t think .xsession is entirely the same as SysV-style init scripts. The services running on a system are vastly more important than my personal .xsession, and systemd is far better at managing them than the pile of shell scripts used to set things up under sysvinit. Further, systemd the init system maintains compatibility with init scripts, so if you really want to keep using them, you can. As far as I can tell, though, systemd the user session manager does not seem to maintain compatibility with .xsession scripts, and that’s unfortunate.

I still haven’t figured out what was overriding the ~/.config/environment.d/ setting. Any ideas?

07 July, 2020 11:51PM by Noah Meyerhans (frodo+blog@morgul.net)

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSimdJson 0.1.0: Now on Windows, With Parsers and Faster Still!

A smashing new RcppSimdJson release 0.1.0 contains several small updates to upstream simdjson (now at 0.4.6), in part triggered by very exciting work by Brendan, who added actual parsers from file and string—and, together with Daniel upstream, worked really hard to make Windows builds, as well as complete upstream tests on our beloved (ahem) MinGW platform, possible. So this version will, once the builders have caught up, give everybody on Windows a binary—with a JSON parser running circles around the (arguably more feature-rich and possibly easier-to-use) alternatives. Dave just tweeted a benchmark snippet by Brendan; the full set is at the bottom of our issue ticket for this release.

RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators, which in its upstream release 0.4.0 improved once more (also see the following point releases). Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mind-boggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one cpu cycle used per byte parsed; see the video of the recent talk by Daniel Lemire at QCon (which was also voted best talk).

As mentioned, this release expands the reach of the package to Windows, and adds new user-facing functions. A big thanks for most of this is owed to Brendan, so buy him a drink if you run across him. The full NEWS entry follows.

Changes in version 0.1.0 (2020-07-07)

  • Upgraded to simdjson 0.4.1 which adds upstream Windows support (Dirk in #27 closing #26 and #14, plus extensive work by Brendan helping upstream with mingw tests).

  • Upgraded to simdjson 0.4.6 with further upstream improvements (Dirk in #30).

  • Change Travis CI to build matrix over g++ 7, 8, 9, and 10 (Dirk in #31; and also Brendan in #32).

  • New JSON functions fparse and fload (Brendan in #32, closing #18 and #10).
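
As a quick illustration of the new string parser mentioned in the NEWS entry above (my sketch; see the package documentation for the actual interface and options):

$ Rscript -e 'RcppSimdJson::fparse("[[1,2,3],[4,5,6]]")'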

Courtesy of CRANberries, there is also a diffstat report for this release.

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

07 July, 2020 10:45PM

AsioHeaders 1.16.1-1 on CRAN

An updated version of the AsioHeaders package arrived on CRAN today (after a few days of “rest” in the incoming directory of CRAN). Asio provides a cross-platform C++ library for network and low-level I/O programming. It is also included in Boost – but requires linking when used as part of Boost. This standalone version of Asio is a header-only C++ library which can be used without linking (just like our BH package with parts of Boost).

This release brings a new upstream version. Its changes required a corresponding change in one of (only) three reverse depends, which delayed the CRAN admission by a few days.

Changes in version 1.16.1-1 (2020-06-28)

  • Upgraded to Asio 1.16.1 (Dirk in #5).

  • Updated README.md with standard set of badges

Via CRANberries, there is a diffstat report relative to the previous release.

Comments and suggestions about AsioHeaders are welcome via the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

07 July, 2020 10:28PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

Raspberry Pi 4, now running your favorite distribution!

Great news, great news! New images available! Grab them while they are hot!

With lots of help (say, all of the heavy lifting) from the Debian Raspberry Pi Maintainer Team, we have finally managed to provide support for auto-building and serving bootable minimal Debian images for the Raspberry Pi 4 family of single-board, cheap, small, hacker-friendly computers!

The Raspberry Pi 4 was released close to a year ago, and is a very major bump in the Raspberry lineup; it took us this long because we needed to wait until all of the relevant bits entered Debian (mostly the kernel bits). The images are shipping a kernel from our Unstable branch (currently, 5.7.0-2), and are less tested and more likely to break than our regular, clean-Stable images. Nevertheless, we do expect them to be useful for many hackers –and even end-users– throughout the world.

The images we are generating are very minimal: they carry basically a minimal Debian install. Once downloaded, of course, you can install whatever your heart desires (because… Face it, if your heart desires it, it must be free and of high quality. It must already be in Debian!)
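
To get from a downloaded image to a booting Pi, the usual dance applies; something along these lines (the image filename and output device are placeholders — double-check the device before running dd!):

# Decompress and write the image to an SD card (names and devices are assumptions)
xzcat raspi_4.img.xz | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync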

Oh — and very important: if you get the 8GB model (currently the top-of-the-line RPi4), it will still not have USB support, due to a change in its memory layout (that means no local keyboard/mouse ☹). We are working on getting it ironed out!

07 July, 2020 07:00AM

July 06, 2020

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp 1.0.5: Several Updates

rcpp logo

Right on the heels of the news of 2000 CRAN packages using Rcpp (and also hitting 12.5% of CRAN packages, or one in eight), we are happy to announce release 1.0.5 of Rcpp. Since the ten-year anniversary and the 1.0.0 release in November 2018, we have been sticking to a four-month release cycle. The last release has, however, left us with a particularly bad taste due to some rather peculiar interactions with a very small (but ever so vocal) portion of the user base. So going forward, we will change two things. First off, we reiterate that we already make rolling releases. Each minor snapshot of the main git branch gets a point release. Between release 1.0.4 and this 1.0.5 release, there were in fact twelve of those. Each and every one of these was made available via the drat repo, and we will continue to do so going forward. Releases to CRAN, however, are real work. If they then end up with as much nonsense as the last release 1.0.4, we think it is appropriate to slow things down some more, so we intend to now switch to a six-month cycle. As mentioned, interim releases are always just one install.packages() call with a properly set repos argument away.
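
For example, something like this pulls an interim snapshot from the drat repo (a sketch; the repo URL is my assumption, check the drat documentation for the canonical incantation):

$ Rscript -e 'install.packages("Rcpp", repos = c("https://eddelbuettel.github.io/drat", getOption("repos")))'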

Rcpp has become the most popular way of enhancing R with C or C++ code. As of today, 2002 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 203 in BioConductor. And per the (partial) logs of CRAN downloads, we are running steady at around one million downloads per month.

This release again features a number of different pull requests by different contributors covering the full range of API improvements, attributes enhancements, changes to Sugar and helper functions, extended documentation as well as continuous integration deployment. See the list below for details.

Changes in Rcpp patch release version 1.0.5 (2020-07-01)

  • Changes in Rcpp API:

    • The exception handler code in #1043 was updated to ensure proper include behavior (Kevin in #1047 fixing #1046).

    • A missing Rcpp_list6 definition was added to support R 3.3.* builds (Davis Vaughan in #1049 fixing #1048).

    • Missing Rcpp_list{2,3,4,5} definitions were added to the Rcpp namespace (Dirk in #1054 fixing #1053).

    • A further update corrected the header include and provided a missing else branch (Mattias Ellert in #1055).

    • Two more assignments are protected with Rcpp::Shield (Dirk in #1059).

    • One call to abs is now properly namespaced with std:: (Uwe Korn in #1069).

    • String object memory preservation was corrected/simplified (Kevin in #1082).

  • Changes in Rcpp Attributes:

    • Empty strings are not passed to R CMD SHLIB which was seen with R 4.0.0 on Windows (Kevin in #1062 fixing #1061).

    • The short_file_name() helper function is safer with respect to temporaries (Kevin in #1067 fixing #1066, and #1071 fixing #1070).

  • Changes in Rcpp Sugar:

    • Two sample() objects are now standard vectors and not R_alloc created (Dirk in #1075 fixing #1074).
  • Changes in Rcpp support functions:

    • Rcpp.package.skeleton() adjusts for a (documented) change in R 4.0.0 (Dirk in #1088 fixing #1087).
  • Changes in Rcpp Documentation:

    • The pdf file of the earlier introduction is again typeset with bibliographic information (Dirk).

    • A new vignette describing how to package C++ libraries has been added (Dirk in #1078 fixing #1077).

  • Changes in Rcpp Deployment:

    • Travis CI unit tests now run a matrix over the versions of R also tested at CRAN (rel/dev/oldrel/oldoldrel), and coverage runs in parallel for a net speed-up (Dirk in #1056 and #1057).

    • The exceptions test is now partially skipped on Solaris as it already is on Windows (Dirk in #1065).

    • The default CI runner was upgraded to R 4.0.0 (Dirk).

    • The CI matrix spans R 3.5, 3.6, r-release and r-devel (Dirk).

Thanks to CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow which also allows searching among the (currently) 2455 previous questions.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

06 July, 2020 03:43PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Review: Roku Express

I don't generally write consumer reviews, here or elsewhere; but I have been so impressed by this one I wanted to mention it.

For Holly's birthday this year, taking place under Lockdown, we decided to buy a year's subscription to "Disney+". Our current TV receiver (A Humax Freesat box) doesn't support it so I needed to find some other way to get it onto the TV.

After a short bit of research, I bought the "Roku Express" streaming media player. This is the most basic streamer that Roku make, bottom of their range. For a little bit more money you can get a model which supports 4K (although my TV obviously doesn't: it, and the basic Roku, top out at 1080p) and a bit more gets you a "stick" form-factor and a Bluetooth remote (rather than line-of-sight IR).

I paid £20 for the most basic model and it Just Works. The receiver is very small but sits comfortably next to my satellite receiver-box. I don't have any issues with line-of-sight for the IR remote (and I rely on a regular IR remote for the TV itself of course). It supports Disney+, but also all the other big-name services, some of which we already use (Netflix, YouTube, BBC iPlayer) and some of which we didn't, since it was too awkward to access them (Google Play, Amazon Prime Video). It has now largely displaced the FreeSat box for accessing streaming content because it works so well and everything is in one place.

There's a phone App that remote-controls the box and works even better than the physical remote: it can offer a full phone-keyboard at times when you need to input text, and can mute the TV audio and put it out through headphones attached to the phone if you want.

My aging Plasma TV suffers from burn-in from static pictures. If left paused for a while, the Roku goes to a screensaver that keeps the whole frame moving. The FreeSat box doesn't do this. My Blu-ray player does, but (I think) it retains some static elements.

06 July, 2020 09:55AM

Reproducible Builds

Reproducible Builds in June 2020

Welcome to the June 2020 report from the Reproducible Builds project. In these reports we outline the most important things that we and the rest of the community have been up to over the past month.

What are reproducible builds?

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security.

But whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into seemingly secure software during the various compilation and distribution processes.

News

The GitHub Security Lab published a long article on the discovery of a piece of malware designed to backdoor open source projects that used the build process and its resulting artifacts to spread itself. In the course of their analysis and investigation, the GitHub team uncovered 26 open source projects that were backdoored by this malware and were actively serving malicious code. (Full article)

Carl Dong from Chaincode Labs uploaded a presentation on Bitcoin Build System Security and reproducible builds to YouTube:

The app intended to trace infection chains of Covid-19 in Switzerland published information on how to perform a reproducible build.

The Reproducible Builds project has received funding in the past from the Open Technology Fund (OTF) to reach specific technical goals, as well as to enable the project to meet in-person at our summits. The OTF has actually also assisted countless other organisations that promote transparent, civil society as well as those that provide tools to circumvent censorship and repressive surveillance. However, the OTF has now been threatened with closure. (More info)

It was noticed that Reproducible Builds was mentioned in the book End-user Computer Security by Mark Fernandes (published by WikiBooks) in the section titled Detection of malware in software.

Lastly, reproducible builds and other ideas around software supply chain were mentioned in a recent episode of the Ubuntu Podcast in a wider discussion about the Snap and application stores (at approx 16:00).


Distribution work

In the ArchLinux distribution, a goal to remove .doctrees from installed files was created via Arch’s ‘TODO list’ mechanism. These .doctree files are caches generated by the Sphinx documentation generator when developing documentation so that Sphinx does not have to reparse all input files across runs. They should not be packaged, especially as they lead to the package being unreproducible as their pickled format contains unreproducible data. Jelle van der Waa and Eli Schwartz submitted various upstream patches to fix projects that install these by default.

Dimitry Andric was able to determine why the reproducibility status of FreeBSD’s base.txz depended on the number of CPU cores, attributing it to an optimisation made to the Clang C compiler []. After further detailed discussion on the FreeBSD bug it was possible to get the binaries reproducible again [].

For the GNU Guix operating system, Vagrant Cascadian started a thread about collecting reproducibility metrics and Jan “janneke” Nieuwenhuizen posted that they had further reduced their “bootstrap seed” to 25% which is intended to reduce the amount of code to be audited to avoid potential compiler backdoors.

In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update as well as made the following changes within the distribution itself:

Debian

Holger Levsen filed three bugs (#961857, #961858 & #961859) against the reproducible-check tool that reports on the reproducible status of installed packages on a running Debian system. They were subsequently all fixed by Chris Lamb [][][].

Timo Röhling filed a wishlist bug against the debhelper build tool impacting the reproducibility status of 100s of packages that use the CMake build system which led to a number of tests and next steps. []

Chris Lamb contributed to a conversation regarding the nondeterministic execution order of Debian maintainer scripts that results in the arbitrary allocation of UNIX group IDs, referencing the Tails operating system’s approach to this []. Vagrant Cascadian also added to a discussion regarding verification formats for reproducible builds.

47 reviews of Debian packages were added, 37 were updated and 69 were removed this month, adding to our knowledge about identified issues. Chris Lamb identified and classified a new uids_gids_in_tarballs_generated_by_cmake_kde_package_app_templates issue [] and reclassified the paths_vary_due_to_usrmerge issue as deterministic, and Vagrant Cascadian updated the cmake_rpath_contains_build_path and gcc_captures_build_path issues. [][][].

Lastly, Debian Developer Bill Allombert started a mailing list thread regarding setting the -fdebug-prefix-map command-line argument via an environment variable and Holger Levsen also filed three bugs against the debrebuild Debian package rebuilder tool (#961861, #961862 & #961864).

Development

On our website this month, Arnout Engelen added a link to our Mastodon account [] and moved the SOURCE_DATE_EPOCH git log example to another section []. Chris Lamb also limited the number of news posts to avoid showing items from (for example) 2017 [].

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. It is used automatically in most Debian package builds. This month, Mattia Rizzolo bumped the debhelper compatibility level to 13 [] and adjusted a related dependency to avoid potential circular dependency [].

Upstream work

The Reproducible Builds project attempts to fix unreproducible packages and we try to send all of our patches upstream. This month, we wrote a large number of such patches, including:

Bernhard M. Wiedemann also filed reports for frr (build fails on single-processor machines), ghc-yesod-static/git-annex (a filesystem ordering issue) and ooRexx (ASLR-related issue).

diffoscope

diffoscope is our in-depth ‘diff-on-steroids’ utility which helps us diagnose reproducibility issues in packages. It does not define reproducibility, but rather provides helpful and human-readable guidance for packages that are not reproducible, rather than relying on essentially-useless binary diffs.

This month, Chris Lamb uploaded versions 147, 148 and 149 to Debian and made the following changes:

  • New features:

    • Add output from strings(1) to ELF binaries. (#148)
    • Dump PE32+ executables (such as EFI applications) using objdump(1). (#181)
    • Add support for Zsh shell completion. (#158)
  • Bug fixes:

    • Prevent a traceback when comparing PDF documents that did not contain metadata (ie. a PDF /Info stanza). (#150)
    • Fix compatibility with jsondiff version 1.2.0. (#159)
    • Fix an issue in GnuPG keybox file handling that left filenames in the diff. []
    • Correct detection of JSON files due to missing call to File.recognizes that checks candidates against file(1). []
  • Output improvements:

    • Use the CSS word-break property over manually adding U+200B zero-width spaces as these were making copy-pasting cumbersome. (!53)
    • Downgrade the tlsh warning message to an ‘info’ level warning. (#29)
  • Logging improvements:

  • Testsuite improvements:

    • Update tests for file(1) version 5.39. (#179)
    • Drop accidentally-duplicated copy of the --diff-mask tests. []
    • Don’t mask an existing test. []
  • Codebase improvements:

    • Replace obscure references to WF with “Wagner-Fischer” for clarity. []
    • Use a semantic AbstractMissingType type instead of remembering to check for both types of ‘missing’ files. []
    • Add a comment regarding potential security issue in the .changes, .dsc and .buildinfo comparators. []
    • Drop a large number of unused imports. [][][][][]
    • Make many code sections more Pythonic. [][][][]
    • Prevent some variable aliasing issues. [][][]
    • Use some tactical f-strings to tidy up code [][] and remove explicit u"unicode" strings [].
    • Refactor a large number of routines for clarity. [][][][]

trydiffoscope is the web-based version of diffoscope. This month, Chris Lamb also corrected the location for the celerybeat scheduler to ensure that the clean/tidy tasks are actually called, the lack of which had caused an accidental resource exhaustion. (#12)

In addition Jean-Romain Garnier made the following changes:

  • Fix the --new-file option when comparing directories by merging DirectoryContainer.compare and Container.compare. (#180)
  • Allow user to mask/filter diff output via --diff-mask=REGEX. (!51)
  • Make child pages open in new window in the --html-dir presenter format. []
  • Improve the diffs in the --html-dir format. [][]

Lastly, Daniel Fullmer fixed the Coreboot filesystem comparator [] and Mattia Rizzolo prevented warnings from the tlsh fuzzy-matching library during tests [] and tweaked the build system to remove an unwanted .build directory []. For the GNU Guix distribution Vagrant Cascadian updated the version of diffoscope to version 147 [] and later 148 [].

Testing framework

We operate a large and many-featured Jenkins-based testing framework that powers tests.reproducible-builds.org. Amongst many other tasks, this tracks the status of our reproducibility efforts across many distributions as well as identifies any regressions that have been introduced. This month, Holger Levsen made the following changes:

  • Debian-related changes:

    • Prevent bogus failure emails from rsync2buildinfos.debian.net every night. []
    • Merge a fix from David Bremner’s database of .buildinfo files to include a fix regarding comparing source vs. binary package versions. []
    • Only run the Debian package rebuilder job twice per day. []
    • Increase bullseye scheduling. []
  • System health status page:

    • Add a note displaying whether a node needs to be rebooted for a kernel upgrade. []
    • Fix sorting order of failed jobs. []
    • Expand footer to link to the related Jenkins job. []
    • Add archlinux_html_pages, openwrt_rebuilder_today and openwrt_rebuilder_future to ‘known broken’ jobs. []
    • Add HTML <meta> header to refresh the page every 5 minutes. []
    • Count the number of ignored jobs [], ignore permanently ‘known broken’ jobs [] and jobs on ‘known offline’ nodes [].
    • Only consider the ‘known offline’ status from Git. []
    • Various output improvements. [][]
  • Tools:

    • Switch URLs for the Grml Live Linux and PureOS package sets. [][]
    • Don’t try to build a disorderfs Debian source package. [][][]
    • Stop building diffoscope as we are moving this to Salsa. [][]
    • Merge several “is diffoscope up-to-date on every platform?” test jobs into one [] and fail less noisily if the version in Debian cannot be determined [].

In addition: Marcus Hoffmann was added as a maintainer of the F-Droid reproducible checking components [], Jelle van der Waa updated the “is diffoscope up-to-date in every platform” check for Arch Linux and diffoscope [], Mattia Rizzolo backed up a copy of a “remove script” run on the Codethink-hosted ‘jump server‘ [] and Vagrant Cascadian temporarily disabled the fixfilepath on bullseye, to get better data about the ftbfs_due_to_f-file-prefix-map categorised issue.

Lastly, the usual build node maintenance was performed by Holger Levsen [][], Mattia Rizzolo [] and Vagrant Cascadian [][][][][].



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:


This month’s report was written by Bernhard M. Wiedemann, Chris Lamb, Eli Schwartz, Holger Levsen, Jelle van der Waa and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

06 July, 2020 08:11AM

July 05, 2020

Enrico Zini

COVID-19 and Capitalism

If the Reopen America protests seem a little off to you, that's because they are. In this video we're going to talk about astroturfing and how insidious it i...
Techdirt has just written about the extraordinary legal action taken against a company producing Covid-19 tests. Sadly, it's not the only example of some individuals putting profits before people. Here's a story from Italy, which is...
Berlin is trying to stop Washington from persuading a German company seeking a coronavirus vaccine to move its research to the United States.
Amazon cracked down on coronavirus price gouging. Now, while the rest of the world searches, some sellers are holding stockpiles of sanitizer and masks.
And 3D-printed valve for breathing machine sparks legal threat
Ischgl, an Austrian ski resort, has achieved tragic international fame: hundreds of tourists are believed to have contracted the coronavirus there and taken it home with them. The Tyrolean state government is now facing serious criticism. EURACTIV Germany reports.
We are seeing how the monopolistic repair and lobbying practices of medical device companies are making our response to the coronavirus pandemic harder.
Las Vegas, Nevada has come under criticism after reportedly setting up a temporary homeless shelter in a parking lot complete with social distancing barriers.

05 July, 2020 10:00PM

Thorsten Alteholz

My Debian Activities in June 2020

FTP master

This month I accepted 377 packages and rejected 30. The overall number of packages that got accepted was 411.

Debian LTS

This was my seventy-second month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 30h. During that time I did LTS uploads of:

  • [DLA 2255-1] libtasn1-6 security update for one CVE
  • [DLA 2256-1] libtirpc security update for one CVE
  • [DLA 2257-1] pngquant security update for one CVE
  • [DLA 2258-1] zziplib security update for eight CVEs
  • [DLA 2259-1] picocom security update for one CVE
  • [DLA 2260-1] mcabber security update for one CVE
  • [DLA 2261-1] php5 security update for one CVE

I started to work on curl as well but did not upload a fixed version, so this has to go to ELTS now.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the twenty-fourth ELTS month.

Unfortunately, in the last month of Wheezy ELTS even I did not find any package where a CVE needed fixing, so during my small allocated time I didn’t upload anything.

But at least I did some days of frontdesk duties and updated my working environment for the new ELTS Jessie.

Other stuff

I uploaded a new upstream version of …

05 July, 2020 02:13PM by alteholz

Russell Coker

Debian S390X Emulation

I decided to set up some virtual machines for different architectures. One that I decided to try was S390X – the latest 64bit version of the IBM mainframe. Here’s how to do it. I tested on a host running Debian/Unstable, but Buster should work in the same way.

First you need to create a filesystem in an image file with commands like the following:

truncate -s 4g /vmstore/s390x
mkfs.ext4 /vmstore/s390x
mount -o loop /vmstore/s390x /mnt/tmp

Next, visit the Debian Netinst page [1] to download the S390X net install ISO. Then loopback-mount it somewhere convenient, like /mnt/tmp2.
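
For example (the ISO filename is a placeholder for whatever you downloaded):

mkdir -p /mnt/tmp2
mount -o loop debian-VERSION-s390x-netinst.iso /mnt/tmp2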

The package qemu-system-misc has the program for emulating a S390X system (among many others); the qemu-user-static package has the program for emulating S390X for a single program (i.e. a statically linked program or a chroot environment), and you need the latter to run debootstrap. The following commands should be most of what you need.

# Install the basic packages you need
apt install qemu-system-misc qemu-user-static debootstrap

# List the support for different binary formats
update-binfmts --display

# qemu s390x needs exec stack to solve "Could not allocate dynamic translator buffer"
# so you probably need this on SE Linux systems
setsebool allow_execstack 1

# commands to do the main install
debootstrap --foreign --arch=s390x --no-check-gpg buster /mnt/tmp file:///mnt/tmp2
chroot /mnt/tmp /debootstrap/debootstrap --second-stage

# set the apt sources
cat << END > /mnt/tmp/etc/apt/sources.list
deb http://YOURLOCALMIRROR/pub/debian/ buster main
deb http://security.debian.org/ buster/updates main
END
# for minimal install do not want recommended packages
echo "APT::Install-Recommends False;" > /mnt/tmp/etc/apt/apt.conf

# update to latest packages
chroot /mnt/tmp apt update
chroot /mnt/tmp apt dist-upgrade

# install kernel, ssh, and build-essential
chroot /mnt/tmp apt install bash-completion locales linux-image-s390x man-db openssh-server build-essential
chroot /mnt/tmp dpkg-reconfigure locales
echo s390x > /mnt/tmp/etc/hostname
chroot /mnt/tmp passwd

# copy kernel and initrd
mkdir -p /boot/s390x
cp /mnt/tmp/boot/vmlinuz* /mnt/tmp/boot/initrd* /boot/s390x

# setup /etc/fstab
cat << END > /mnt/tmp/etc/fstab
/dev/vda / ext4 noatime 0 0
#/dev/vdb none swap defaults 0 0
END

# clean up
umount /mnt/tmp
umount /mnt/tmp2

# setcap binary for starting bridged networking
setcap cap_net_admin+ep /usr/lib/qemu/qemu-bridge-helper

# afterwards set the access on /etc/qemu/bridge.conf so it can only
# be read by the user/group permitted to start qemu/kvm
echo "allow all" > /etc/qemu/bridge.conf

Some of the above can be considered more as pseudo-code in shell script rather than an exact way of doing things. While you can copy and paste all the above into a command line and have a reasonable chance of having it work, I think it would be better to look at each command and decide whether it’s right for you and whether you need to alter it slightly for your system.

To run qemu as non-root you need to have a helper program with extra capabilities to setup bridged networking. I’ve included that in the explanation because I think it’s important to have all security options enabled.

The “-object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-ccw,rng=rng0” part is to give entropy to the VM from the host; otherwise it will take ages to start sshd. Note that this is slightly but significantly different from the command used for other architectures (the “ccw” is the difference).

I’m not sure if “noresume” on the kernel command line is required, but it doesn’t do any harm. The “net.ifnames=0” stops systemd from renaming Ethernet devices. For the virtual networking the “ccw” again is a difference from other architectures.

Here is a basic command to run a QEMU virtual S390X system. If all goes well it should give you a login: prompt on a curses based text display, you can then login as root and should be able to run “dhclient eth0” and other similar commands to setup networking and allow ssh logins.

qemu-system-s390x -drive format=raw,file=/vmstore/s390x,if=virtio -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-ccw,rng=rng0 -nographic -m 1500 -smp 2 -kernel /boot/s390x/vmlinuz-4.19.0-9-s390x -initrd /boot/s390x/initrd.img-4.19.0-9-s390x -curses -append "net.ifnames=0 noresume root=/dev/vda ro" -device virtio-net-ccw,netdev=net0,mac=02:02:00:00:01:02 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper

Here is a slightly more complete QEMU command. It has 2 block devices, for root and swap. It has SE Linux enabled for the VM (SE Linux works nicely on S390X). I added the “lockdown=confidentiality” kernel security option even though it’s not supported in 4.19 kernels; it doesn’t do any harm, and when I upgrade systems to newer kernels I won’t have to remember to add it.

qemu-system-s390x -drive format=raw,file=/vmstore/s390x,if=virtio -drive format=raw,file=/vmswap/s390x,if=virtio -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-ccw,rng=rng0 -nographic -m 1500 -smp 2 -kernel /boot/s390x/vmlinuz-4.19.0-9-s390x -initrd /boot/s390x/initrd.img-4.19.0-9-s390x -curses -append "net.ifnames=0 noresume security=selinux root=/dev/vda ro lockdown=confidentiality" -device virtio-net-ccw,netdev=net0,mac=02:02:00:00:01:02 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper

Try It Out

I’ve got an S390X system online for a while; “ssh root@s390x.coker.com.au” with password “SELINUX” to try it out.

PPC64

I’ve tried running a PPC64 virtual machine, I did the same things to set it up and then tried launching it with the following result:

qemu-system-ppc64 -drive format=raw,file=/vmstore/ppc64,if=virtio -nographic -m 1024 -kernel /boot/ppc64/vmlinux-4.19.0-9-powerpc64le -initrd /boot/ppc64/initrd.img-4.19.0-9-powerpc64le -curses -append "root=/dev/vda ro"

Above is the minimal qemu command that I’m using. Below is the result, it stops after the “4.” from “4.19.0-9”. Note that I had originally tried with a more complete and usable set of options, but I trimmed it to the minimal needed to demonstrate the problem.

  Copyright (c) 2004, 2017 IBM Corporation All rights reserved.
  This program and the accompanying materials are made available
  under the terms of the BSD License available at
  http://www.opensource.org/licenses/bsd-license.php

Booting from memory...
Linux ppc64le
#1 SMP Debian 4.

The kernel is from the package linux-image-4.19.0-9-powerpc64le which is a dependency of the package linux-image-ppc64el in Debian/Buster. The program qemu-system-ppc64 is from version 5.0-5 of the qemu-system-ppc package.

Any suggestions on what I should try next would be appreciated.

05 July, 2020 02:58AM by etbe

July 04, 2020

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp now used by 2000 CRAN packages–and one in eight!

2000 Rcpp packages

As of yesterday, Rcpp stands at exactly 2000 reverse-dependencies on CRAN. The graph on the left depicts the growth of Rcpp usage (as measured by Depends, Imports and LinkingTo, but excluding Suggests) over time.

Rcpp was first released in November 2008. It probably cleared 50 packages around three years later in December 2011, 100 packages in January 2013, 200 packages in April 2014, and 300 packages in November 2014. It passed 400 packages in June 2015 (when I tweeted about it), 500 packages in late October 2015, 600 packages in March 2016, 700 packages last July 2016, 800 packages last October 2016, 900 packages early January 2017, 1000 packages in April 2017, 1250 packages in November 2017, 1500 packages in November 2018 and then 1750 packages last August. The chart extends to the very beginning via manually compiled data from CRANberries and checked with crandb. The next part uses manually saved entries. The core (and by far largest) part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of packages using Rcpp is available too.

Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four per-cent hurdle was cleared just before useR! 2014 where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December of 2014, six percent in July of 2015, seven percent just before Christmas 2015, eight percent in the summer of 2016, nine percent mid-December 2016, cracked ten percent in the summer of 2017 and eleven percent in 2018. We have now passed 12.5 percent—so one in every eight CRAN packages depends on Rcpp. Stunning. There is more detail in the chart: how CRAN seems to be pushing back more and removing more aggressively (which my CRANberries tracks, but not in as much detail as it could), and how the growth of Rcpp seems to be slowing somewhat, outright and even more so as a proportion of CRAN – as one would expect a growth curve to.

To mark the occasion, I sent out two tweets yesterday: first a shorter one with “just the numbers”, followed by a second one also containing the few calculation steps. The screenshot from the second one is below.

2000 Rcpp packages

2000 user packages is pretty mind-boggling. We can use the progression of CRAN itself compiled by Henrik in a series of posts and emails to the main development mailing list. Not that long ago CRAN itself did have only 1000 packages, then 5000, 10000, and here we are at just over 16000 with Rcpp at 12.5% and still growing (though maybe more slowly). Amazeballs.

The Rcpp team continues to aim for keeping Rcpp as performant and reliable as it has been. A really big shoutout and Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

04 July, 2020 10:20PM

Petter Reinholdtsen

Working on updated Norwegian Bokmål edition of Debian Administrator's Handbook

Three years ago, the first Norwegian Bokmål edition of "The Debian Administrator's Handbook" was published. This was based on Debian Jessie. Now a new and updated version based on Buster is getting ready. Work on the updated Norwegian Bokmål edition has been going on for a few months now, and yesterday we reached the first milestone, with 100% of the texts translated. A lot of proof reading remains, of course, but a major step towards a new edition has been taken.

The book is translated by volunteers, and we would love to get some help with the proof reading. The translation uses the hosted Weblate service, and we welcome everyone to have a look and submit improvements and suggestions. There is also a proof readers PDF available on request, get in touch if you want to help out that way.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

04 July, 2020 09:55PM

Russ Allbery

Review: The Light Brigade

Review: The Light Brigade, by Kameron Hurley

Publisher: Saga
Copyright: 2019
ISBN: 1-4814-4798-X
Format: Kindle
Pages: 355

In the wake of the Blink, which left a giant crater where São Paulo was, Dietz signed up for the military. To be a hero. To satisfy an oath of vengeance. To kill aliens.

Corporations have consumed the governments that used to run Earth and have divided the world between them. Dietz's family, before the Blink, were ghouls in Tene-Silva territory, non-citizens who scavenged a precarious life on the margins. Citizenship is a reward for loyalty and a mechanism of control. The only people who don't fit into the corporate framework are the Martians, former colonists who went dark for ten years and re-emerged as a splinter group offering to use their superior technology to repair environmental damage to the northern hemisphere caused by corporate wars. When the Blink happens, apparently done with technology far beyond what the corporations have, corporate war with the Martians is the unsurprising result.

Long-time SF readers will immediately recognize The Light Brigade as a response to Starship Troopers with far more cynical world-building. For the first few chapters, the parallelism is very strong, down to the destruction of a large South American city (São Paulo instead of Buenos Aires), a naive military volunteer, and horrific basic training. But, rather than dropships, the soldiers in Dietz's world are sent into battle via, essentially, Star Trek transporters. These still very experimental transporters send Dietz to a different mission than the one in the briefing.

Advance warning that I'm going to talk about what's happening with Dietz's drops below. It's a spoiler, but you would find out not far into the book and I don't think it ruins anything important. (On the contrary, it may give you an incentive to stick through the slow and unappealing first few chapters.)

I had so many suspension of disbelief problems with this book. So many.

This starts with the technology. The core piece of world-building is Star Trek transporters, so fine, we're not talking about hard physics. Every SF story gets one or two free bits of impossible technology, and Hurley does a good job showing the transporters through a jaundiced military eye. But, late in the book, this technology devolves into one of my least-favorite bits of SF hand-waving that, for me, destroyed that gritty edge.

Technology problems go beyond the transporters. One of the bits of horror in basic training is, essentially, torture simulators, whose goal is apparently to teach soldiers to dissociate (not that the book calls it that). One problem is that I never understood why a military would want to teach dissociation to so many people, but a deeper problem is that the mechanics of this simulation made no sense. Dietz's training in this simulator is a significant ongoing plot point, and it kept feeling like it was cribbed from The Matrix rather than something translatable into how computers work.

Technology was the more minor suspension of disbelief problem, though. The larger problem was the political and social world-building.

Hurley constructs a grim, totalitarian future, which is a fine world-building choice although I think it robs some nuance from the story she is telling about how militaries lie to soldiers. But the totalitarian model she uses is one of near-total information control. People believe what the corporations tell them to believe, or at least are indifferent to it. Huge world events (with major plot significance) are distorted or outright lies, and those lies are apparently believed by everyone. The skepticism that exists is limited to grumbling about leadership competence and cynicism about motives, not disagreement with the provided history. This is critical to the story; it's a driver behind Dietz's character growth and is required to set up the story's conclusion.

This is a model of totalitarianism that's familiar from Orwell's Nineteen Eighty-Four. The problem: The Internet broke this model. You now need North Korean levels of isolation to pull off total message control, which is incompatible with the social structure or technology level that Hurley shows.

You may be objecting that the modern world is full of people who believe outrageous propaganda against all evidence. But the world-building problem is not that some people believe the corporate propaganda. It's that everyone does. Modern totalitarians have stopped trying to achieve uniformity (because it stopped working) and instead make the disagreement part of the appeal. You no longer get half a country to believe a lie by ensuring they never hear the truth. Instead, you equate belief in the lie with loyalty to a social or political group, and belief in the truth with affiliation with some enemy. This goes hand in hand with "flooding the zone" with disinformation and fakes and wild stories until people's belief in the accessibility of objective truth is worn down and all facts become ideological statements. This does work, all too well, but it relies on more information, not less. (See Zeynep Tufekci's excellent Twitter and Tear Gas if you're unfamiliar with this analysis.) In that world, Dietz would have heard the official history, the true history, and all sorts of wild alternative histories, making correct belief a matter of political loyalty. There is no sign of that.

Hurley does gesture towards some technology to try to explain this surprising corporate effectiveness. All the soldiers have implants, and military censors can supposedly listen in at any time. But, in the story, this censorship is primarily aimed at grumbling and local disloyalty. There is no sign that it's being used to keep knowledge of significant facts from spreading, nor is there any sign of the same control among the general population. It's stated in the story that the censors can't even keep up with soldiers; one would have to get unlucky to be caught. And yet the corporation maintains preternatural information control.

The place this bugged me the most is around knowledge of the current date. For reasons that will be obvious in a moment, Dietz badly wants to know what month and year it is and is unable to find this information anywhere. This appears to be intentional; Tene-Silva has a good (albeit not that urgent) reason to keep soldiers from knowing the date. But I don't think Hurley realizes just how hard that is.

Take a look around the computer you're using to read this and think about how many places the date shows up. Apart from the ubiquitous clock and calendar app, there are dates on every file, dates on every news story, dates on search results, dates in instant messages, dates on email messages and voice mail... they're everywhere. And it's not just the computer. The soldiers can easily smuggle prohibited outside goods into the base; knowledge of the date would be much easier. And even if Dietz doesn't want to ask anyone, there are opportunities to go off base during missions. Somehow every newspaper and every news bulletin has its dates suppressed? It's not credible, and it threw me straight out of the story.

These world-building problems are unfortunate, since at the heart of The Light Brigade is a (spoiler alert) well-constructed time travel story that I would have otherwise enjoyed. Dietz is being tossed around in time with each jump. And, unlike some of these stories, Hurley does not take the escape hatch of alternate worlds or possible futures. There is a single coherent timeline that Dietz and the reader experience in one order and the rest of the world experiences in a different order.

The construction of this timeline is incredibly well-done. Time can only disconnect at jump and return points, and Hurley maintains tight control over the number of unresolved connections. At every point in the story, I could list all of the unresolved discontinuities and enjoy their complexity and implications without feeling overwhelmed by them. Dietz gains some foreknowledge, but in a way that's wildly erratic and hard to piece together fast enough for a single soldier to do anything about the plot. The world spins out of control with foreshadowing of grimmer and grimmer events, and then Hurley pulls it back together in a thoroughly satisfying interweaving of long-anticipated scenes and major surprises.

I'm not usually a fan of time travel stories, but this is one of the best I've read. It also has a satisfying emotional conclusion (albeit marred for me by some unbelievable mystical technobabble), which is impressive given how awful and nasty Hurley makes this world. Dietz is a great first-person narrator, believably naive and cynical by turns, and piecing together the story structure alongside the protagonist built my emotional attachment to Dietz's character arc. Hurley writes the emotional dynamics of soldiers thoughtfully and well: shit-talking, fights, sudden moments of connection, shared cynicism over degenerating conditions, and the underlying growth of squad loyalty that takes over other motivations and becomes the reason to keep on fighting.

Hurley also pulled off a neat homage to (and improvement on) Starship Troopers that caught me entirely by surprise and that I've hopefully not spoiled.

This is a solid science fiction novel if you can handle the world-building. I couldn't, but I understand why it was nominated for the Hugo and Clarke awards. Recommended if you're less picky about technological and social believability than I am, although content warning for a lot of bloody violence and death (including against children) and a horrifically depressing world.

Rating: 6 out of 10

04 July, 2020 03:49AM

July 03, 2020

Michael Prokop

Grml 2020.06 – Codename Ausgehfuahangl

We did it again™: at the end of June we released Grml 2020.06, codename Ausgehfuahangl. This Grml release (a Linux live system for system administrators) is based on Debian/testing (AKA bullseye) and provides current software packages as of June, incorporates up-to-date hardware support and fixes known issues from previous Grml releases.

I am especially fond of our cloud-init and qemu-guest-agent integration, which makes usage and automation in virtual environments like Proxmox VE much more comfortable.

Once the Qemu Guest Agent setting is enabled in the VM options (also see the Proxmox wiki), you’ll see IP address information in the VM summary:

Screenshot of qemu guest agent integration
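On the Proxmox VE side, the same option can be flipped from the command line, roughly like this (a sketch; 100 is a placeholder VM ID, and the VM needs a restart afterwards for the virtio device to appear):

qm set 100 --agent enabled=1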

Using a cloud-init drive allows using an SSH key for login as user "grml", and you can control network settings as well:

Screenshot of cloud-init integration

It was fun to focus and work on this new Grml release together with Darsha, and we hope you enjoy it as much as we do!

03 July, 2020 02:32PM by mika

hackergotchi for Norbert Preining

Norbert Preining

KDE/Plasma Status Update 2020-07-04

Great timing for the 4th of July, here is another status update of KDE/Plasma for Debian. Short summary: everything is now available for Debian sid and testing, for both i386 and amd64 architectures!

(Update 2020-07-07: Plasma 5.19.3 is included!)

With Qt 5.14 arriving in Debian/testing, and some tweaks here and there, we finally have all the packages (2 additional deps, 82 frameworks, 47 Plasma, 216 Apps, 3 other apps) built on both Debian unstable and Debian testing, for both amd64 and i386 architectures. Again, big thanks to OBS!

Repositories:
For Unstable:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma519/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/Debian_Unstable/ ./

For Testing:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma519/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/Debian_Testing/ ./

As usual, don’t forget that you need to import my OBS gpg key: obs-npreining.asc; best to download it and put the file into /etc/apt/trusted.gpg.d/obs-npreining.asc.

Enjoy.

03 July, 2020 02:06PM by Norbert Preining

Dirk Eddelbuettel

#28: Welcome RSPM and test-drive with Bionic and Focal

Welcome to the 28th post in the relatively random R recommendations series, or R4 for short. Our last post was a “double entry” in this R4 series and the newer T4 video series, and covered a topic touched upon here multiple times: easy binary install, especially on Ubuntu.

That post already previewed the newest kid on the block: RStudio’s RSPM, now formally announced. In that post we were only able to show Ubuntu 18.04 aka bionic. With the formal release of RSPM, support has been added for Ubuntu 20.04 aka focal—and we are happy to announce that of course we added a corresponding Rocker r-rspm container. So you can now take full advantage of RSPM either via docker pull rocker/r-rspm:18.04 or via docker pull rocker/r-rspm:20.04, covering the two most recent LTS releases.

RSPM is a nice accomplishment. Covering multiple Linux distributions is an excellent achievement. Allowing users to reason in terms of the CRAN packages (i.e. installing xml2, not r-cran-xml2) eases use. Doing it via the standard R command install.packages() (or a wrapper around it like our install.r from the littler package) is very good too, and an excellent technical achievement.

There is, as best as I can tell, only one shortcoming, along with one small bit of false advertising. The shortcoming is technical. By bringing the package installation into the user application domain, it is separated from the system and lacks integration with system libraries. What do I mean here? If you were to add R to a plain Ubuntu container, say 18.04 or 20.04, then added the few lines to support RSPM and install xml2 it would install. And fail. Why? Because the system library libxml2 does not get installed with the RSPM package—whereas the .deb from the distribution or PPAs does. So to help with some popular packages I added libxml2, libunits and a few more for geospatial work to the rocker/r-rspm containers. Being already present ensures packages xml2 and units can run immediately. Please file issue tickets at the Rocker repo if you come across other missing libraries we could preload. (A related minor nag is incomplete coverage. At least one of my CRAN packages does not (yet?) come as a RSPM binary. Then again, CRAN has 16k packages, and the RSPM coverage is much wider than the PPA one. But completeness would be neat. The final nag is lack of Debian support which seems, well, odd.)

So what about the small bit of false advertising? Well it is claimed that RSPM makes installation “so much faster on Linux”. True, faster than the slowest possible installation from source. Also easier. But we had numerous posts on this blog showing other speed gains: Using ccache. And, of course, using binaries. And as the initial video mentioned above showed, installing from the PPAs is also faster than via RSPM. That is easy to replicate. Just set up the rocker/r-ubuntu:20.04 (or 18.04) container alongside the rocker/r-rspm:20.04 (or also 18.04) container. And then time install.r rstan (or install.r tinyverse) in the RSPM one against apt -y update; apt install -y r-cran-rstan (or ... r-cran-tinyverse). In every case I tried, the installation using binaries from the PPA was still faster by a few seconds. Not that it matters greatly: both are very, very quick compared to source installation (as e.g. shown here in 2017 (!!)) but the standard Ubuntu .deb installation is simply faster than using RSPM. (Likely due to better CDN usage so this may change over time. Neither method appears to do downloads in parallel so there is scope for both for doing better.)
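For reference, replicating that comparison boils down to something like the following two runs (a sketch using the container tags mentioned above; rstan is just an example package):

$ docker run --rm rocker/r-rspm:20.04 sh -c 'time install.r rstan'
$ docker run --rm rocker/r-ubuntu:20.04 sh -c 'apt update -qq; time apt install -y r-cran-rstan'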

So in sum: Welcome to RSPM, a nice new tool—and feel free to “drive” it using rocker/r-rspm:18.04 or rocker/r-rspm:20.04.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

03 July, 2020 01:05PM

Reproducible Builds (diffoscope)

diffoscope 150 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 150. This version includes the following changes:

[ Chris Lamb ]
* Don't crash when listing entries in archives if they don't have a listed
  size (such as hardlinks in .ISO files).
  (Closes: reproducible-builds/diffoscope#188)
* Dump PE32+ executables (including EFI applications) using objdump.
  (Closes: reproducible-builds/diffoscope#181)
* Tidy detection of JSON files due to missing call to File.recognizes that
  checks against the output of file(1) which was also causing us to attempt
  to parse almost every file using json.loads. (Whoops.)
* Drop accidentally-duplicated copy of the new --diff-mask tests.
* Logging improvements:
  - Split out formatting of class names into a common method.
  - Clarify that we are generating presenter formats in the opening logs.

[ Jean-Romain Garnier ]
* Remove objdump(1) offsets before instructions to reduce diff noise.
  (Closes: reproducible-builds/diffoscope!57)

You can find out more by visiting the project homepage.

03 July, 2020 12:00AM

July 02, 2020

Ben Hutchings

Debian LTS work, June 2020

I was assigned 20 hours of work by Freexian's Debian LTS initiative, and worked all 20 hours this month.

I sent a final request for testing for the next update to Linux 3.16 in jessie. I also prepared an update to Linux 4.9, included in both jessie and stretch. I completed backporting of kernel changes related to CVE-2020-0543, which was still under embargo, to Linux 3.16.

Finally I uploaded the updates for Linux 3.16 and 4.9, and issued DLA-2241 and DLA-2242.

The end of June marked the end of long-term support for Debian 8 "jessie" and for Linux 3.16. I am no longer maintaining any stable kernel branches, but will continue contributing to them as part of my work on Debian 9 "stretch" LTS and other Debian releases.

02 July, 2020 07:25PM

Mike Gabriel

My Work on Debian LTS (June 2020)

In June 2020, I have worked on the Debian LTS project for 8 hours (of 8 hours planned).

LTS Work

  • frontdesk: CVE bug triaging for Debian jessie LTS: mailman, alpine, python3.4, redis, pound, pcre3, ngircd, mutt, lynis, libvncserver, cinder, bison, batik.
  • upload to jessie-security: libvncserver (DLA-2264-1 [1], 9 CVEs)
  • upload to jessie-security: mailman (DLA-2265-1 [2], 1 CVE)
  • upload to jessie-security: mutt (DLA-2268-1 [3] and DLA-2268-2 [4], 2 CVEs)

Other security related work for Debian

  • make sure all security fixes for php-horde-* are also in Debian unstable
  • upload freerdp2 2.1.2+dfsg-1 to unstable (9 CVEs)

References

02 July, 2020 02:20PM by sunweaver

Russell Coker

Desklab Portable USB-C Monitor

I just got a 15.6″ 4K resolution Desklab portable touchscreen monitor [1]. It takes power via USB-C and video input via USB-C or mini HDMI, has touch screen input, and has speakers built in for USB or HDMI sound.

PC Use

I bought a mini-DisplayPort to HDMI adapter and for my first test ran it from my laptop, where it was seen as a 1920*1080 DisplayPort monitor. The adapter is specified as supporting 4K, so I don’t know why I didn’t get 4K to work; my laptop has done 4K with other monitors.

The next thing I plan to get is a VGA to HDMI converter so I can use this on servers. It can be a real pain getting a monitor and power cable to a rack-mounted server, and this portable monitor can be powered by one of the USB ports in the server. A quick search indicates that such devices start at about $12US.

The Desklab monitor has no markings to indicate what resolution it supports, no part number, and no serial number. The only documentation I could find about how to recognise the difference between the FullHD and 4K versions is that the FullHD version supposedly draws 2A and the 4K version draws 4A. I connected my USB Ammeter and it reported that between 0.6 and 1.0A were drawn. If they meant to say 2W and 4W instead of 2A and 4A (I’ve seen worse errors in manuals) then the current drawn would indicate the 4K version. Otherwise the stated current requirements don’t come close to matching what I’ve measured.

Power

The promise of USB-C was power from anywhere to anywhere. I think that such power can theoretically be done with USB 3 and maybe USB 2, but asymmetric cables make it more challenging.

I can power my Desklab monitor from a USB battery, from my Thinkpad’s USB port (even when the Thinkpad isn’t on mains power), and from my phone (although the phone battery runs down fast as expected). When I have a mains powered USB charger (for a laptop and rated at 60W) connected to one USB-C port and my phone on the other, the phone can be charged while giving a video signal to the display. This is how it’s supposed to work, but in my experience it’s rare to have new technology live up to its potential at the start!

One thing to note is that it doesn’t have a battery. I had imagined that it would have a battery (in spite of there being nothing on their web site to imply this) because I just couldn’t think of a touch screen device not having a battery. It would be nice if there was a version of this device with a big battery built in that could avoid needing separate cables for power and signal.

Phone Use

The first thing to note is that the Desklab monitor won’t work with all phones; whether a phone will take the option of an external display depends on its configuration, and some phones may support an external display but not the touchscreen. The Huawei Mate devices are specifically listed in the printed documentation as being supported for touchscreen as well as display. Surprisingly the Desklab web site has no mention of this unless you download the PDF of the manual; they really should have a list of confirmed supported devices and a forum for users to report on how it works.

My phone is a Huawei Mate 10 Pro so I guess I got lucky here. My phone has a “desktop mode” that can be enabled when I connect it to a USB-C device (not sure what criteria it uses to determine if the device is suitable). The desktop mode has something like a regular desktop layout and you can move windows around etc. There is also the option of having a copy of the phone’s screen, but it displays the image of the phone screen vertically in the middle of the landscape layout monitor, which is ridiculous.

When desktop mode is enabled it’s independent of the phone interface, so I had to find the icons for the programs I wanted to run in an unsorted list with no usable search (the search interface of the app list brings up the keyboard, which obscures the list of matching apps). The keyboard takes up more than half the screen and there doesn’t seem to be a way to make it smaller. I’d like to try a portrait layout which would make the keyboard take something like 25% of the screen, but that’s not supported.

It’s quite easy to type on a keyboard that’s slightly larger than a regular PC keyboard (a 15″ display with no numeric keypad or cursor control keys). The Hacker’s Keyboard app might work well with this as it has cursor control keys. The GUI has an option for full screen mode for an app which is really annoying to get out of (you have to use a drop down from the top of the screen); full screen doesn’t make sense for a display this large. Overall the GUI is a bit clunky; imagine Windows 3.1 with a start button and task bar. One interesting thing to note is that the desktop and phone GUIs can be run separately, so you can type on the Desklab (or any similar device) and look things up on the phone. Multiple monitors never really interested me for desktop PCs because switching between windows is fast and easy and it’s easy to resize windows to fit several on the desktop. Resizing windows on the Huawei GUI doesn’t seem easy (although I might be missing some things) and the keyboard takes up enough of the screen that having multiple windows open while typing isn’t viable.

I wrote the first draft of this post on my phone using the Desklab display. It’s not nearly as easy as writing on a laptop but much easier than writing on the phone screen.

Currently Desklab is offering 2 models for sale, 4K resolution for $399US and FullHD for $299US. I got the 4K version which is very expensive at the moment when converted to Australian dollars. There are significantly cheaper USB-C monitors available (such as this ASUS one from Kogan for $369AU), but I don’t think they have touch screens and therefore can’t be used with a phone unless you enable the phone screen as touch pad mode and have a mouse cursor on screen. I don’t know if all Android devices support that, it could be that a large part of the desktop experience I get is specific to Huawei devices.

One annoying feature is that if I use the phone power button to turn the screen off, it shuts down the connection to the Desklab display, but the phone screen will turn off if I leave it alone for the screen timeout (which I have set to 10 minutes).

Caveats

When I ordered this I wanted the biggest screen possible. But now that I have it, the fact that it doesn’t fit in the pocket of my Scott e Vest jacket [2] will limit what I can do with it. Maybe I’ll be buying a 13″ monitor in the near future; I expect that Desklab will do well and start selling them in a wide range of sizes. A 15.6″ portable device is inconvenient even if it is in the laptop format; a thin portable screen is inconvenient in many ways.

Netflix doesn’t display video on the Desklab screen; I suspect that Netflix is doing this deliberately as some misguided attempt at stopping piracy. The monitor is really good for watching video as it has the speakers in good locations for stereo sound; it’s a pity that Netflix is difficult.

The functionality on phones from companies other than Huawei is unknown. It is likely to work on most Android phones, but if a particular phone is important to you then you want to Google for how it worked for others.

02 July, 2020 10:42AM by etbe

Emmanuel Kasper

Test a webcam from the command line on Linux with VLC

Since this info was too well hidden on the internet, here is the information:
cvlc v4l2:///dev/video0
and there you go.
If you have multiple cameras connected, you can try /dev/video0 up to /dev/video5
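Rather than guessing, v4l2-ctl from the v4l-utils package can list which device belongs to which camera (assuming it is installed):
v4l2-ctl --list-devices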

02 July, 2020 07:30AM by Emmanuel Kasper (noreply@blogger.com)

Evgeni Golov

Automatically renaming the default git branch to "devel"

It seems GitHub is planning to rename the default branch for newly created repositories from "master" to "main". It's incredible how much positive PR you can get with a one-line configuration change, while still working together with the ICE.

However, this post is not about bashing GitHub.

Changing the default branch for newly created repositories is good. And you also should do that for the ones you create with git init locally. But what about all the repositories out there? GitHub surely won't force-rename those branches, but we can!

Ian will do this as he touches the individual repositories, but I tend to forget things unless I do them immediately…

Oh, so this is another "automate everything with an API" post? Yes, yes it is!

And yes, I am going to use GitHub here, but something similar should be implementable on any git hosting platform that has an API.

Of course, if you have SSH access to the repositories, you can also just edit HEAD in a for loop in bash, but that would be boring ;-)
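For the record, that boring loop could look something like this (a minimal sketch, assuming bare repositories under /srv/git):

for repo in /srv/git/*.git; do
    git -C "$repo" branch devel master 2>/dev/null    # create devel at master, if missing
    git -C "$repo" symbolic-ref HEAD refs/heads/devel # advertise devel as the default branch
done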

I'm going with devel btw, as I'm already used to develop in the Foreman project and devel in Ansible.

acquire credentials

My GitHub account is 2FA enabled, so I can't just use my username and password in a basic HTTP API client. So the first step is to acquire a personal access token that can be used instead. Of course I could also have implemented OAuth2 in my lousy script, but ain't nobody have time for that.

The token will require the "repo" permission to be able to change repositories.

And we'll need some boilerplate code (I'm using Python3 and requests, but anything else will work too):

#!/usr/bin/env python3

import requests

BASE='https://api.github.com'
USER='evgeni'
TOKEN='abcdef'

headers = {'User-Agent': '@{}'.format(USER)}
auth = (USER, TOKEN)

session = requests.Session()
session.auth = auth
session.headers.update(headers)
session.verify = True

This will store our username, token, and create a requests.Session so that we don't have to pass the same data all the time.

get a list of repositories to change

I want to change all my own repos that are not archived, not forks, and actually have the default branch set to master, YMMV.

As we're authenticated, we can just list the repositories of the currently authenticated user, and limit them to "owner" only.

GitHub uses pagination for their API, so we'll have to loop until we get to the end of the repository list.

repos_to_change = []

url = '{}/user/repos?type=owner'.format(BASE)
while url:
    r = session.get(url)
    if r.ok:
        repos = r.json()
        for repo in repos:
            if not repo['archived'] and not repo['fork'] and repo['default_branch'] == 'master':
                repos_to_change.append(repo['name'])
        if 'next' in r.links:
            url = r.links['next']['url']
        else:
            url = None
    else:
        url = None

create a new devel branch and mark it as default

Now that we know which repos to change, we need to fetch the SHA of the current master, create a new devel branch pointing at the same commit and then set that new branch as the default branch.

for repo in repos_to_change:
    master_data = session.get('{}/repos/{}/{}/git/ref/heads/master'.format(BASE, USER, repo)).json()
    data = {'ref': 'refs/heads/devel', 'sha': master_data['object']['sha']}
    session.post('{}/repos/{}/{}/git/refs'.format(BASE, USER, repo), json=data)
    default_branch_data = {'default_branch': 'devel'}
    session.patch('{}/repos/{}/{}'.format(BASE, USER, repo), json=default_branch_data)
    session.delete('{}/repos/{}/{}/git/refs/heads/{}'.format(BASE, USER, repo, 'master'))

I've also opted to actually delete the old master, as I think that's the safest way to let the users know that it's gone. Letting it rot in the repository would mean people could still pull and wouldn't notice that there are no changes anymore, as the default branch moved to devel.

So…

announcement

I've updated all my (those in the evgeni namespace) non-archived repositories to have devel instead of master as the default branch.

Have fun updating!

code

#!/usr/bin/env python3

import requests

BASE='https://api.github.com'
USER='evgeni'
TOKEN='abcd'

headers = {'User-Agent': '@{}'.format(USER)}
auth = (USER, TOKEN)

session = requests.Session()
session.auth = auth
session.headers.update(headers)
session.verify = True

repos_to_change = []

url = '{}/user/repos?type=owner'.format(BASE)
while url:
    r = session.get(url)
    if r.ok:
        repos = r.json()
        for repo in repos:
            if not repo['archived'] and not repo['fork'] and repo['default_branch'] == 'master':
                repos_to_change.append(repo['name'])
        if 'next' in r.links:
            url = r.links['next']['url']
        else:
            url = None
    else:
        url = None

for repo in repos_to_change:
    master_data = session.get('{}/repos/{}/{}/git/ref/heads/master'.format(BASE, USER, repo)).json()
    data = {'ref': 'refs/heads/devel', 'sha': master_data['object']['sha']}
    session.post('{}/repos/{}/{}/git/refs'.format(BASE, USER, repo), json=data)
    default_branch_data = {'default_branch': 'devel'}
    session.patch('{}/repos/{}/{}'.format(BASE, USER, repo), json=default_branch_data)
    session.delete('{}/repos/{}/{}/git/refs/heads/{}'.format(BASE, USER, repo, 'master'))

02 July, 2020 07:12AM by evgeni

Russell Coker

Isolating PHP Web Sites

If you have multiple PHP web sites on a server in a default configuration, they will all be able to read each other’s files. If those sites have stored data or passwords for databases in configuration files, then there are significant problems if they aren’t all trusted. Even if the sites are all trusted (i.e. the same person configures them all), if there is a security problem in one site it’s ideal to prevent that being used to immediately attack all sites.

mpm_itk

The first thing I tried was mpm_itk [1]. This is a version of the traditional “prefork” module for Apache that has one process for each HTTP connection. When it’s installed, you just put the directive “AssignUserID USER GROUP” in your VirtualHost section and that virtual host runs as the user:group in question. It will work with any Apache module that works with mpm_prefork. In my experiment with mpm_itk I first tried running with a different UID for each site, but that conflicted with the pagespeed module [2]. The pagespeed module optimises HTML and CSS files to improve performance and it has a directory tree where it stores cached versions of some of the files. It doesn’t like working with copies of itself under different UIDs writing to that tree. This isn’t a real problem; setting up the different PHP files with database passwords to be read by the desired group is easy enough. So I just ran each site with a different GID but used the same UID for all of them.

The first problem with mpm_itk is that the mpm_prefork code it’s based on is the slowest MPM available, and is also incompatible with HTTP/2. A minor issue of mpm_itk is that it makes Apache take ages to stop or restart; I don’t know why, and can’t be certain it’s not a configuration error on my part. As an aside, here is a site for testing your server’s support for HTTP/2 [3]. To enable HTTP/2 you have to be running mpm_event and enable the “http2” module. Then for every virtual host that is to support it (generally all https virtual hosts) put the line “Protocols h2 h2c http/1.1” in the virtual host configuration.
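Putting that together, the relevant configuration could look something like this sketch (module names as packaged in Debian; TLS directives omitted):

# a2dismod mpm_prefork ; a2enmod mpm_event http2
<VirtualHost *:443>
    ServerName example.com
    Protocols h2 h2c http/1.1
    # SSLEngine and certificate directives as usual
</VirtualHost>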

A good feature of mpm_itk is that it has everything for the site running under the same UID, all Apache modules and Apache itself. So there’s no issue of one thing getting access to a file and another not getting access.

After a trial I decided not to keep using mpm_itk because I want HTTP/2 support.

php-fpm Pools

The Apache PHP module depends on mpm_prefork so it also has the issues of not working with HTTP/2 and of causing the web server to be slow. The solution is php-fpm, a separate server for running PHP code that uses the fastcgi protocol to talk to Apache. Here’s a link to the upstream documentation for php-fpm [4]. In Debian this is in the php7.3-fpm package.

In Debian the directory /etc/php/7.3/fpm/pool.d has the configuration for “pools”. Below is an example of a configuration file for a pool:

# cat /etc/php/7.3/fpm/pool.d/example.com.conf
[example.com]
user = example.com
group = example.com
listen = /run/php/php7.3-example.com.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

Here is the upstream documentation for fpm configuration [5].

Then for the Apache configuration for the site in question you could have something like the following:

ProxyPassMatch "^/(.*\.php(/.*)?)$" "unix:/run/php/php7.3-example.com.sock|fcgi://localhost/usr/share/wordpress/"

The “|fcgi://localhost” part is just part of the syntax for specifying a Unix domain socket. Judging from the Apache Wiki, the method for configuring TCP connections is more obvious [6]. I chose Unix domain sockets because they allow putting the domain name in the socket address. Matching domains for the web server to port numbers is likely to be error prone, while matching based on domain names is easier to check and also easier to put in Apache configuration macros.
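As an illustration, with mod_macro (a2enmod macro) the per-site boilerplate can be reduced to a macro taking the domain as a parameter. This is a sketch where the socket naming follows the pool example above and the /var/www/$domain document root is an assumption:

<Macro PhpSite $domain>
    ProxyPassMatch "^/(.*\.php(/.*)?)$" "unix:/run/php/php7.3-$domain.sock|fcgi://localhost/var/www/$domain/"
</Macro>
Use PhpSite example.com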

There was some additional hassle with getting Apache to read the files created by PHP processes (the options include running PHP scripts with the www-data group, having SETGID directories for storing files, and having world-readable files). But this got things basically working.

Nginx

My Google searches for running multiple PHP sites under different UIDs didn’t turn up any good hits. It was only after I found the DigitalOcean page on doing this with Nginx [7] that I knew what to search for to find the way of doing it in Apache.

02 July, 2020 02:12AM by etbe

July 01, 2020

Joachim Breitner

Template Haskell recompilation

I was wondering: What happens if I have a Haskell module with Template Haskell that embeds some information from the environment (time, environment variables)? Will such a module be reliably recompiled? And what if it gets recompiled, but the source code produced by Template Haskell is actually unchanged (e.g., because the environment variable has not changed), will all depending modules be recompiled (which would be bad)?

Here is a quick experiment, using GHC-8.8:

/tmp/th-recom-test $ cat Foo.hs
{-# LANGUAGE TemplateHaskell #-}
{-# OPTIONS_GHC -fforce-recomp #-}
module Foo where

import Language.Haskell.TH
import Language.Haskell.TH.Syntax
import System.Process

theMinute :: String
theMinute = $(runIO (readProcess "date" ["+%M"] "") >>= stringE)
[jojo@kirk:2] Mi, der 01.07.2020 um 17:18 Uhr ☺
/tmp/th-recom-test $ cat Main.hs
import Foo
main = putStrLn theMinute

Note that I had to set {-# OPTIONS_GHC -fforce-recomp #-} – by default, GHC will not recompile a module, even if it uses Template Haskell and runIO. If you are reading from a file you can use addDependentFile to tell the compiler about that dependency, but that does not help with reading from the environment.
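For the file case, the pattern would look roughly like this (a sketch with the same imports as Foo.hs above; config.txt is a made-up file name):

-- embed a file and register it as a dependency,
-- so the module is recompiled whenever the file changes
theConfig :: String
theConfig = $(do
  addDependentFile "config.txt"
  runIO (readFile "config.txt") >>= stringE)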

So here is the test, and we get the desired behaviour: The Foo module is recompiled every time, but unless the minute has changed (see my prompt), Main is not recompiled:

/tmp/th-recom-test $ ghc --make -O2 Main.hs -o test
[1 of 2] Compiling Foo              ( Foo.hs, Foo.o )
[2 of 2] Compiling Main             ( Main.hs, Main.o )
Linking test ...
[jojo@kirk:2] Mi, der 01.07.2020 um 17:20 Uhr ☺
/tmp/th-recom-test $ ghc --make -O2 Main.hs -o test
[1 of 2] Compiling Foo              ( Foo.hs, Foo.o )
Linking test ...
[jojo@kirk:2] Mi, der 01.07.2020 um 17:20 Uhr ☺
/tmp/th-recom-test $ ghc --make -O2 Main.hs -o test
[1 of 2] Compiling Foo              ( Foo.hs, Foo.o )
[2 of 2] Compiling Main             ( Main.hs, Main.o ) [Foo changed]
Linking test ...

So all is well!

Update: It seems that while this works with ghc --make, the -fforce-recomp does not cause cabal build to rebuild the module. That’s unfortunate.

01 July, 2020 03:16PM by Joachim Breitner (mail@joachim-breitner.de)

Sylvain Beucler

Debian LTS and ELTS - June 2020

Debian LTS Logo

Here is my transparent report for my work on the Debian Long Term Support (LTS) and Debian Extended Long Term Support (ELTS), which extend the security support for past Debian releases, as a paid contributor.

In June, the monthly sponsored hours were split evenly among contributors depending on their max availability - I was assigned 30h for LTS (out of 30 max; all done) and 5.25h for ELTS (out of 20 max; all done).

While LTS is part of the Debian project, fellow contributors sometimes surprise me: a suggestion to vote for sponsor-funded projects with Condorcet was only met with overhead concerns, and there were requests for executive / business owner decisions (we're currently heading towards a consultative vote); I heard concerns about discussing non-technical issues publicly (IRC team meetings are public though); the private mail infrastructure was moved from self-hosting straight to Google; when some got an issue with Debian Social for our first video conference, there were immediate suggestions to move to Zoom...
Well, we do need some people to make those LTS firmware updates in non-free :)

Also this was the last month before shifting suites: goodbye to Jessie LTS and Wheezy ELTS, welcome Stretch LTS and Jessie ELTS.

ELTS - Wheezy

  • mysql-connector-java: improve testsuite setup; prepare wheezy/jessie/stretch triple builds; coordinate versioning scheme with security-team; security upload ELA 234-1
  • ntp: wheezy+jessie triage: 1 ignored (too intrusive to backport); 1 postponed (hard to exploit, no patch)
  • Clean-up (ditch) wheezy VMs :)

LTS - Jessie

  • mysql-connector-java: see common work in ELTS
  • mysql-connector-java: security uploads DLA 2245-1 (LTS) and DSA 4703 (oldstable)
  • ntp: wheezy+jessie triage (see ELTS)
  • rails: global triage, backport 2 patches, security upload DLA 2251-1
  • rails: global security: prepare stretch/oldstable update
  • rails: new important CVE on unmaintained 4.x, fixes introduce several regressions, propose new fix to upstream, update stretch proposed update [and jessie, but rails will turn out unsupported in ELTS]
  • python3.4: prepare update to fix all pending non-critical issues, 5/6 ready
  • private video^W^Wpublic IRC team meeting

Documentation/Scripts

01 July, 2020 02:19PM

Junichi Uekawa

Already July and still stuck at home.

Already July and still stuck at home.

01 July, 2020 08:58AM by Junichi Uekawa

Utkarsh Gupta

FOSS Activites in June 2020

Here’s my (ninth) monthly update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 16th month of contributing to Debian. I became a DM in late March last year and a DD last Christmas! \o/

This month was a little intense. I did a lot of different kinds of things in Debian. Whilst most of my time went on doing security stuff, I also sponsored a bunch of packages.

Here are the things I did this month:

Uploads and bug fixes:

Other $things:

  • Hosted Ruby team meeting. Logs here.
  • Mentoring for newcomers.
  • FTP Trainee reviewing.
  • Moderation of -project mailing list.
  • Sponsored ruby-ast for Abraham, libexif for Hugh, djangorestframework-gis and karlseguin-ccache for Nilesh, and twig-extensions, twig-i18n-extension, and mariadb-mysql-kbs for William.

GSoC Phase 1, Part 2!

Last month, I got selected as a Google Summer of Code student for Debian again! \o/
I am working on the Upstream-Downstream Cooperation in Ruby project.

The first half of the first month is blogged here, titled, GSoC Phase 1.
Also, I log daily updates at gsocwithutkarsh2102.tk.

Whilst the daily updates are available at the above site^, I’ll break down the important parts of the later half of the first month here:

  • Documented the first cop, GemspecGit via PR #2.
  • Made an initial release, v0.1.0! 💖
  • Spread the word about this tool/library by adding it to the official RuboCop docs.
  • We had our third weekly meeting where we discussed the next steps and the things that are supposed to be done for the next set of cops.
  • Wrote more tests so as to cover different aspects of the GemspecGit cop.
  • Opened PR #4 for the next Cop, RequireRelativeToLib.
  • Introduced rubocop-packaging to the outer world and requested other upstream projects to use it! It is being used by 6 other projects already 😭💖
  • Had our fourth weekly meeting where we pair-programmed (and I sucked :P) and figured out a way to make the second cop work.
  • Found a bug, reported at issue #5 and raised PR #6 to fix it.
  • And finally, people loved the library/tool (and its outcome):


    (for those who don’t know, @bbatsov is the author of RuboCop, @lienvdsteen is an amazing fullstack engineer at GitLab, and @pboling is the author of some awesome Ruby tools and libraries!)

I have already mentioned it multiple times, but it’s still not enough to stress how amazing Antonio Terceiro and David Rodríguez are! 💖
They’re more than just mentors to me!

Well, only they know how much I trouble them with different things, which are not only related to my GSoC project but also extends to the projects they maintain! :P
David maintains rubygems and bundler and Antonio maintains debci.

So on days when I decide to hack on rubygems or debci, only I know how kind and nice David and Antonio are to me!
They very patiently walk me through with whatever I am stuck on, no matter what and no matter when.

Thus, with them around, I contributed to these two projects and more, with regards to working on rubocop-packaging.
Following are a few things that I raised:


Debian LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

This was my ninth month as a Debian LTS paid contributor. I was assigned 30.00 hours and worked on the following things:

CVE Fixes and Announcements:

Other LTS Work:

  • Triaged sympa, apache2, qemu, and coturn.
  • Add fix for CVE-2020-0198/libexif.
  • Requested CVE for bug#60251 against apache2 and prodded further.
  • Raised issue #947 against sympa reporting an incomplete patch for CVE-2020-10936. More discussions internally.
  • Created the LTS Survey on the self-hosted LimeSurvey instance.
  • Attended the third LTS meeting. Logs here.
  • General discussion on LTS private and public mailing list.

Other(s)

Sometimes it gets hard to categorize work/things into a particular category.
That’s why I am writing all of those things inside this category.
This includes two sub-categories and they are as follows.

Personal:

This month I did the following things:

  • Wrote and published v0.1.0 of rubocop-packaging on RubyGems! 💯
    It’s open-sourced and the repository is here.
    Bug reports and pull requests are welcomed! 😉
  • Integrated a tiny (yet a powerful) hack to align images in markdown for my blog.
    Commit here. 🚀
  • Released v0.4.0 of batalert on RubyGems! 🤗

Open Source:

Again, this contains all the things that I couldn’t categorize earlier.
Opened several issues and PRs:


Thank you for sticking along for so long :)

Until next time.
:wq for today.

01 July, 2020 04:00AM

Paul Wise

FLOSS Activities June 2020

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian BTS: usertags QA
  • Debian IRC channels: fixed a channel mode lock
  • Debian wiki: unblock IP addresses, approve accounts, ping folks with bouncing email

Communication

  • Respond to queries from Debian users and developers on the mailing lists and IRC

Sponsors

The ifenslave and apt-listchanges work was sponsored by my employer. All other work was done on a volunteer basis.

01 July, 2020 12:32AM

June 30, 2020

Chris Lamb

Free software activities in June 2020

Here is my monthly update covering what I have been doing in the free software world during June 2020 (previous month):

  • Opened two pull requests against the Ghostwriter distraction-free Markdown editor to:

    • Persist whether "focus mode" is enabled between sessions. (#522)
    • Correct the ordering of the MarkdownAST::toString() debugging output. (#520)
  • Will McGugan's "Rich" is a Python library to output formatted text, tables, syntax etc. to the terminal. I filed a pull request in order to allow for easy enabling and disabling of displaying the file path in Rich's logging handler. (#115)

  • As part of my duties of being on the board of directors of the Open Source Initiative and Software in the Public Interest I attended their respective monthly meetings and participated in various licensing and other discussions occurring on the internet, as well as the usual internal discussions regarding logistics and policy etc.

  • Filed a pull request against the PyQtGraph Scientific Graphics and graphical user interface library to make the documentation build reproducibly. (#1265)

  • Reviewed and merged a large number of changes by Pavel Dolecek to my Strava Enhancement Suite, a Chrome extension to improve the user experience on the Strava athletic tracker.

For Lintian, the static analysis tool for Debian packages:

  • Don't emit breakout-link for architecture-independent .jar files under /usr/lib. (#963939)
  • Correct a reference to override_dh_ in the long description of the excessive-debhelper-overrides tag. [...]
  • Update data/fields/perl-provides for Perl 5.030003. [...]
  • Check for execute_after and execute_before spelling mistakes just like override_*. [...]

§


Reproducible Builds

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The project is proud to be a member project of the Software Freedom Conservancy. Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:


§


Elsewhere in our tooling, I made the following changes to diffoscope including preparing and uploading versions 147, 148 and 149 to Debian:

  • New features:

    • Add output from strings(1) to ELF binaries. (#148)
    • Allow user to mask/filter diff output via --diff-mask=REGEX. (!51)
    • Dump PE32+ executables (such as EFI applications) using objdump(1). (#181)
    • Add support for Zsh shell completion. (#158)
  • Bug fixes:

    • Prevent a traceback when comparing PDF documents that did not contain metadata (i.e. a PDF /Info stanza). (#150)
    • Fix compatibility with jsondiff version 1.2.0. (#159)
    • Fix an issue in GnuPG keybox file handling that left filenames in the diff. [...]
    • Correct detection of JSON files due to missing call to File.recognizes that checks candidates against file(1). [...]
  • Output improvements:

    • Use the CSS word-break property over manually adding U+200B zero-width spaces as these were making copy-pasting cumbersome. (!53)
    • Downgrade the tlsh warning message to an "info" level warning. (#29)
  • Logging improvements:

  • Testsuite improvements:

    • Update tests for file(1) version 5.39. (#179)
    • Drop accidentally-duplicated copy of the --diff-mask tests. [...]
    • Don't mask an existing test. [...]
  • Codebase improvements:

    • Replace obscure references to "WF" with "Wagner-Fischer" for clarity. [...]
    • Use a semantic AbstractMissingType type instead of remembering to check for both types of "missing" files. [...]
    • Add a comment regarding potential security issue in the .changes, .dsc and .buildinfo comparators. [...]
    • Drop a large number of unused imports. [...][...][...][...][...]
    • Make many code sections more Pythonic. [...][...][...][...]
    • Prevent some variable aliasing issues. [...][...][...]
    • Use some tactical f-strings to tidy up code [...][...] and remove explicit u"unicode" strings [...].
    • Refactor a large number of routines for clarity. [...][...][...][...]

trydiffoscope is the web-based version of diffoscope. This month, I specified a location for the celerybeat scheduler to ensure that the clean/tidy tasks are actually called; their previous absence had caused an accidental resource exhaustion. (#12)


§


Debian

I filed three bugs against:

  • cmark: Please update the homepage URI. (#962576)
  • petitboot: Please update Vcs-Git urls. (#963123)
  • python-pauvre: FTBFS if the DISPLAY environment variable is exported. (#962698)

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 5¼ hours on its sister Extended LTS project.

  • Investigated and triaged angular.js [...], icinga2 [...], intel-microcode [...], jquery [...], pdns-recursor [...], unbound [...] & wordpress [...].

  • Frontdesk duties, including responding to user/developer questions, reviewing others' packages, participating in mailing list discussions as well as attending our contributor meeting.

  • Issued DLA 2233-1 to fix two issues in the Django web development framework in order to fix a potential data leakage via malformed memcached keys (CVE-2020-13254) and to prevent a cross-site scripting attack in the Django administration system (CVE-2020-13596). This was followed by DLA 2233-2 to address a regression as well as uploads to Debian stretch (1.10.7-2+deb9u9) and buster (1.11.29-1~deb10u1). (More info)

  • Issued DLA 2235-1 to prevent a file descriptor leak in the D-Bus message bus (CVE-2020-12049).

  • Issued DLA 2239-1 for a security module for using the TACACS+ authentication service to prevent an issue where shared secrets such as private server keys were being added in plaintext to various logs.

  • Issued DLA 2244-1 to address an escaping issue in PHPMailer, an email generation utility class for the PHP programming language.

  • Issued DLA 2252-1 for the ngircd IRC server as it was discovered that there was an out-of-bounds access vulnerability in the server-to-server protocol.

  • Issued DLA 2253-1 to resolve a vulnerability in Lynis, a security auditing tool, because a shared secret could be obtained by simple observation of the process list when a data upload is being performed.

You can find out more about the project via the following video:

Uploads

30 June, 2020 10:18PM

Emmanuel Kasper

Learning openshift: a good moment to revisit awk too

I can’t believe I spent all these years using only grep.

Most of us know how to use awk to print the nth column of a file:

$ awk '{print $1}' /etc/hosts

will print all IP addresses from /etc/hosts

But you can also do filtering before printing the chosen column:

$ awk '$5 >= 2 {print $2}' /path/to/file

will print the second column of every line where the 5th column is greater than or equal to 2. That would have been hard with grep.

Now I can use that to find out all deployments on my openshift cluster where the number of available replicas is at least 2.

$ oc get deployments --all-namespaces | awk '$5 >= 2 {print $2}'
NAME
oauth-openshift
console
downloads
router-default
etcd-quorum-guard
prometheus-adapter
thanos-querier
packageserver
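One caveat, which the NAME entry above gives away: the header line also passes the filter, because awk falls back to string comparison when the field is not numeric ("AVAILABLE" sorts after "2"). Skipping the first line with NR fixes that:

$ oc get deployments --all-namespaces | awk 'NR > 1 && $5 >= 2 {print $2}'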

I know that openshift/kubernetes both have a powerful query selector syntax, but for the moment awk will do.

30 June, 2020 11:29AM by Emmanuel Kasper (noreply@blogger.com)

Russ Allbery

Review: The Fifth Risk

Review: The Fifth Risk, by Michael Lewis

Publisher: W.W. Norton
Copyright: 2018
Printing: 2019
ISBN: 0-393-35745-7
Format: Kindle
Pages: 254

The Fifth Risk starts with the presidential transition. Max Stier, the first person profiled by Lewis in this book, is the founder of the Partnership for Public Service. That foundation helped push through laws to provide more resources and structure for the transition of the United States executive branch from one president to the next. The goal was to fight wasted effort, unnecessary churn, and pointless disruption in the face of each administration's skepticism about everyone who worked for the previous administration.

"It's Groundhog Day," said Max. "The new people come in and think that the previous administration and the civil service are lazy or stupid. Then they actually get to know the place they are managing. And when they leave, they say, 'This was a really hard job, and those are the best people I've ever worked with.' This happens over and over and over."

By 2016, Stier saw vast improvements, despite his frustration with other actions of the Obama administration. He believed their transition briefings were one of the best courses ever produced on how the federal government works. Then that transition process ran into Donald Trump.

Or, to be more accurate, that transition did not run into Donald Trump, because neither he nor anyone who worked for him were there. We'll never know how good the transition information was because no one ever listened to or read it. Meetings were never scheduled. No one showed up.

This book is not truly about the presidential transition, though, despite its presence as a continuing theme. The Fifth Risk is, at its heart, an examination of government work, the people who do it, why it matters, and why you should care about it. It's a study of the surprising and misunderstood responsibilities of the departments of the United States federal government. And it's a series of profiles of the people who choose this work as a career, not in the upper offices of political appointees, but deep in the civil service, attempting to keep that system running.

I will warn now that I am far too happy that this book exists to be entirely objective about it. The United States desperately needs basic education about the government at all levels, but particularly the federal civil service. The public impression of government employees is skewed heavily towards the small number of public-facing positions and towards paperwork frustrations, over which the agency usually has no control because they have been sabotaged by Congress (mostly by Republicans, although the Democrats get involved occasionally). Mental images of who works for the government are weirdly selective. The Coast Guard could say "I'm from the government and I'm here to help" every day, to the immense gratitude of the people they rescue, but Reagan was still able to use that as a cheap applause line in his attack on government programs.

Other countries have more functional and realistic social attitudes towards their government workers. The United States is trapped in a politically-fueled cycle of contempt and ignorance. It has to stop. And one way to help stop it is someone with Michael Lewis's story-telling skills writing a different narrative.

The Fifth Risk is divided into a prologue about presidential transitions, three main parts, and an afterword (added in current editions) about a remarkable government worker whom you likely otherwise would never hear about. Each of the main parts talks about a different federal department: the Department of Energy, the Department of Agriculture, and the Department of Commerce. In keeping with the theme of the book, the people Lewis profiles do not do what you might expect from the names of those departments.

Lewis's title comes from his discussion with John MacWilliams, a former Goldman Sachs banker who quit the industry in search of more personally meaningful work and became the chief risk officer for the Department of Energy. Lewis asks him for the top five risks he sees, and if you know that the DOE is responsible for safeguarding nuclear weapons, you will be able to guess several of them: nuclear weapons accidents, North Korea, and Iran. If you work in computer security, you may share his worry about the safety of the electrical grid. But his fifth risk was project management. Can the government follow through on long-term hazardous waste safety and cleanup projects, despite constant political turnover? Can it attract new scientists to the work of nuclear non-proliferation before everyone with the needed skills retires? Can it continue to lay the groundwork with basic science for innovation that we'll need in twenty or fifty years? This is what the Department of Energy is trying to do.

Lewis's profiles of other departments are similarly illuminating. The Department of Agriculture is responsible for food stamps, the most effective anti-poverty program in the United States with the possible exception of Social Security. The section on the Department of Commerce is about weather forecasting, specifically about NOAA (the National Oceanic and Atmospheric Administration). If you didn't know that all of the raw data and many of the forecasts you get from weather apps and web sites are the work of government employees, and that AccuWeather has lobbied Congress persistently for years to prohibit the NOAA from making their weather forecasts public so that AccuWeather can charge you more for data your taxes already paid for, you should read this book. The story of American contempt for government work is partly about ignorance, but it's also partly about corporations who claim all of the credit while selling taxpayer-funded resources back to you at absurd markups.

The afterword I'll leave for you to read for yourself, but it's the story of Art Allen, a government employee you likely have never heard of but whose work for the Coast Guard has saved more lives than we are able to measure. I found it deeply moving.

If you, like I, are a regular reader of long-form journalism and watch for new Michael Lewis essays in particular, you've probably already read long sections of this book. By the time I sat down with it, I think I'd read about a third in other forms on-line. But the profiles that I had already read were so good that I was happy to read them again, and the additional stories and elaboration around previously published material was more than worth the cost and time investment in the full book.

Lewis writes:

It was never obvious to me that anyone would want to read what had interested me about the United States government. Doug Stumpf, my magazine editor for the past decade, persuaded me that, at this strange moment in American history, others might share my enthusiasm.

I'll join Michael Lewis in thanking Doug Stumpf.

The Fifth Risk is not a proposal for how to fix government, or politics, or polarization. It's not even truly a book about the Trump presidency or about the transition. Lewis's goal is more basic: The United States government is full of hard-working people who are doing good and important work. They have effectively no public relations department. Achievements that would result in internal and external press releases in corporations, not to mention bonuses and promotions, go unnoticed and uncelebrated. If you are a United States citizen, this is your government and it does important work that you should care about. It deserves the respect of understanding and thoughtful engagement, both from the citizenry and from the politicians we elect.

Rating: 10 out of 10

30 June, 2020 06:02AM

Craig Sanders

Fuck Grey Text

fuck grey text on white backgrounds
fuck grey text on black backgrounds
fuck thin, spindly fonts
fuck 10px text
fuck any size of anything in px
fuck font-weight 300
fuck unreadable web pages
fuck themes that implement this unreadable idiocy
fuck sites that don’t work without javascript
fuck reactjs and everything like it

thank fuck for Stylus. and uBlock Origin. and uMatrix.

30 June, 2020 02:33AM by cas

Norbert Preining

TeX Live Debian update 20200629

More than a month has passed since the last update of TeX Live packages in Debian, so here is a new checkout!

All arch:all packages have been updated to the tlnet state as of 2020-06-29; see the detailed update list below.

Enjoy.

New packages

akshar, beamertheme-pure-minimalistic, biblatex-unified, biblatex-vancouver, bookshelf, commutative-diagrams, conditext, courierten, ektype-tanka, hvarabic, kpfonts-otf, marathi, menucard, namedef, pgf-pie, pwebmac, qrbill, semantex, shtthesis, tikz-lake-fig, tile-graphic, utf8add.

Updated packages

abnt, achemso, algolrevived, amiri, amscls, animate, antanilipsum, apa7, babel, bangtex, baskervillef, beamerappendixnote, beamerswitch, beamertheme-focus, bengali, bib2gls, biblatex-apa, biblatex-philosophy, biblatex-phys, biblatex-software, biblatex-swiss-legal, bibleref, bookshelf, bxjscls, caption, ccool, cellprops, changes, chemfig, circuitikz, cloze, cnltx, cochineal, commutative-diagrams, comprehensive, context, context-vim, cquthesis, crop, crossword, ctex, cweb, denisbdoc, dijkstra, doclicense, domitian, dps, draftwatermark, dvipdfmx, ebong, ellipsis, emoji, endofproofwd, eqexam, erewhon, erewhon-math, erw-l3, etbb, euflag, examplep, fancyvrb, fbb, fbox, fei, fira, fontools, fontsetup, fontsize, forest-quickstart, gbt7714, genealogytree, haranoaji, haranoaji-extra, hitszthesis, hvarabic, hyperxmp, icon-appr, kpfonts, kpfonts-otf, l3backend, l3build, l3experimental, l3kernel, latex-amsmath-dev, latexbangla, latex-base-dev, latexdemo, latexdiff, latex-graphics-dev, latexindent, latex-make, latexmp, latex-mr, latex-tools-dev, libertinus-fonts, libertinust1math, lion-msc, listings, logix, lshort-czech, lshort-german, lshort-polish, lshort-portuguese, lshort-russian, lshort-slovenian, lshort-thai, lshort-ukr, lshort-vietnamese, luamesh, lua-uca, luavlna, lwarp, marathi, memoir, mnras, moderntimeline, na-position, newcomputermodern, newpx, nicematrix, nodetree, ocgx2, oldstandard, optex, parskip, pdfcrop, pdfpc, pdftexcmds, pdfxup, pgf, pgfornament, pgf-pie, pgf-umlcd, pgf-umlsd, pict2e, plautopatch, poemscol, pst-circ, pst-eucl, pst-func, pstricks, pwebmac, pxjahyper, quran, rec-thy, reledmac, rest-api, sanskrit, sanskrit-t1, scholax, semantex, showexpl, shtthesis, suftesi, svg, tcolorbox, tex4ht, texinfo, thesis-ekf, thuthesis, tkz-doc, tlshell, toptesi, tuda-ci, tudscr, twemoji-colr, univie-ling, updmap-map, vancouver, velthuis, witharrows, wtref, xecjk, xepersian-hm, xetex-itrans, xfakebold, xindex, xindy, xltabular, yathesis, ydoc, yquant, zref.

30 June, 2020 01:24AM by Norbert Preining

June 29, 2020

Cinnamon 4.6 for Debian

After a few rounds of testing in experimental, I have uploaded Cinnamon 4.6 packages to Debian/unstable. Nothing spectacularly new besides the usual stream of fixes. Enjoy the new Cinnamon!

29 June, 2020 01:37AM by Norbert Preining

Holger Levsen

20200629-developers-reference-challenge

developer-reference challenge: get the bug count down to 0

As of now, the BTS counts 52 bugs for src:developers-reference and I'd like to get this down to zero, because src:developers-reference is just documenting best practices and this should be easy and done well.

So, I've been trying to keep the habit of fixing at least one bug per month, and I also try to do one upload per month. And I very much welcome your patches, commits, MRs and bug reports, because otherwise it will take more than 5 years. Though I'm hopeful: src:developers-reference is maintained by the debian group on salsa, which is like a thousand people.

Please help and join the fun and responsibility. Even if only once every three months (or less) ;-) And thank you, it's awesome to be in this together.

29 June, 2020 01:10AM

June 28, 2020

Russ Allbery

PGP::Sign 1.00

This is the first new release of PGP::Sign in 13 years, so it's long-overdue. I have finally updated it in preparation for creating a new, more modern signing key for the Big Eight Usenet hierarchies and issuing control messages with both the old and new keys, using GnuPG v2 for the new key.

The biggest change in this release is that it drops support for all OpenPGP implementations other than GnuPG, and adds support for GnuPG v2. I think some of the other PGP implementations are still around, but I haven't seen them in years and have no way to test against them, so it didn't seem worthwhile to continue to support them. GnuPG v2 support is obviously long-overdue, given that we're getting close to the point where GnuPG v1 will start disappearing from distributions. The default backend is now GnuPG v2, although the module can be configured to use GnuPG v1 instead.

This release also adds a new object-oriented API. When I first wrote this module, it was common in the Perl community to have functional APIs configured with global variables. Subsequently we've learned this is a bad idea for a host of reasons, and I finally got around to redoing the API. It's still not perfect (in particular, the return value of the verify method is still a little silly), but it's much nicer. The old API is still supported, implemented as a shim in front of the new API.

A few other, more minor changes in this release:

  • Status output from GnuPG is now kept separate from human-readable log and error output for more reliable parsing.

  • Pass --allow-weak-digest-algos to GnuPG so it can use old keys and verify signatures from old keys, such as those created with PGP 2.6.2.

  • pgp_sign, when called in array context, now always returns "GnuPG" as the version string, and the version passed into pgp_verify is always ignored. Including the OpenPGP implementation version information in signatures is obsolete; GnuPG no longer does it by default and it serves no useful purpose.

  • When calling pgp_sign multiple times in the same process with whitespace munging enabled, trailing whitespace without a newline could have leaked into the next invocation of pgp_sign, resulting in an invalid signature. Clear any remembered whitespace between pgp_sign invocations.

It also now uses IPC::Run and File::Temp instead of hand-rolling equivalent functionality, and the module build system is now Module::Build.

You can get the latest version from CPAN or from the PGP::Sign distribution page.

28 June, 2020 02:00AM

June 27, 2020

Steinar H. Gunderson

Two Chinese video encoders

ENC1 and UHE265-1-Mini

I started a project recently (now on hold) that would involve streaming from mobile cameras over 4G. Since I wanted something with better picture quality than mobile phones on gimbals, the obvious question became: How can you take in HDMI (or SDI) and stream it out over SRT?

We looked at various SBCs coupled with video capture cards, but it didn't really work out; H.264 software encoding of 720p60/1080p60 (or even 1080i60 or 1080p30) is a tad too intensive for even the fastest ARM SoCs currently, USB3 is a pain on many of them, and the x86 SoCs are pretty expensive. Many of them have hardware encoding, but it's amazingly crappy quality-wise; borderline unusable even at 5 Mbit/sec. (Intel's Quick Sync is heaven compared to what the Raspberry Pi 4 has!)
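
For context, here is a minimal sketch of the software-encoding pipeline we were evaluating (assuming a UVC capture card on /dev/video0, an ffmpeg built with libsrt, and a hypothetical receiver address); this is exactly the workload that proved too heavy for the ARM SoCs:

# Grab 1080p60 from a USB capture card, software-encode with x264,
# and push the result out as an SRT caller
ffmpeg -f v4l2 -framerate 60 -video_size 1920x1080 -i /dev/video0 \
       -c:v libx264 -preset veryfast -b:v 5M \
       -f mpegts 'srt://receiver.example.com:9000?mode=caller'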

So I started looking for all-in-one cheap encoder boxes; ideally with 4G or 802.11 out, but an Ethernet jack is OK to couple with a phone and a USB OTG adapter. I found three manufacturers that all make cheap gear, namely LinkPi, URayTech and Unisheen. Long story short, I ended up ordering one from each of the first two; namely their base HEVC models, the LinkPi ENC1 and URayTech's UHE265-1-Mini. (I wanted HEVC to offset some of the quality loss of going hardware encoding; my general sentiment is that the advantages of HEVC are somewhat more pronounced in cheap hardware than when realtime-encoding in software.)

The ENC1 is ostensibly 599 yuan (~$85, ~€75), but good luck actually getting it for that price. I spent an hour or so fiddling with their official store at Taobao (sort of like Aliexpress for Chinese domestic users), only to figure out in the very last step that they didn't ship from there outside of China. They have an international reseller, but they charge $130 plus shipping, on very shaky grounds (“we need to employ more people late at night to be able to respond to international support inquiries fast enough”). It's market segmentation, of course. There are supposedly companies that will help you buy from Taobao and bounce the goods on to you, but I didn't try them; I eventually went to eBay and got one for €120 (+VAT) with free (very slow!) shipping.

UHE265-1-Mini was pricier but easier; I bought one directly off of URayTech's Aliexpress store for $184.80, and it was shipped by FedEx in about five days.

Externally, they look fairly similar; the ENC1 (on top) is a bit thicker but generally smaller. The ENC1 also has a handy little OLED display that shows IP address, video input and a few other things (it's a few degrees uneven, though; true Chinese quality!). There are also two buttons, but they are fairly useless (you can't map them to anything except start/stop streaming and recording).

Both boxes can do up to 1080p60 input, but somehow, the ENC1 can only do 1080p30 or 720p60 encoding; it needs to either scale or decimate. (If you ask it to do 1080p60, it will make a valiant attempt even though it throws up a warning box, but it ends up lagging behind by a lot, so it seems to be useless.) The URayTech box, however, happily trots along with no issue. This is pretty bizarre when you consider that both are based on the same HiSilicon SoC (the HI3520DV400). On pure visual inspection on very limited test data, the HEVC output from the URayTech looks a bit better than the ENC1's, too, but I might just be fooling myself here.

Both have web interfaces; LinkPi's looks a bit more polished, while URayTech's has a tiny bit more low-level control, such as the ability to upload an EDID. (In particular, after a bug fix, I can set SRT stream IDs on the URayTech, which is fairly important to me.) They let you change codecs, stream endpoints, protocols, bit rates, set up multiple independent encodes etc. easily enough.

Both of them also have Telnet interfaces (and the UHE265-1-Mini has SSH, too); neither included any information on what the username and password was, though. I asked URayTech's support what the password was, and their answer was that they didn't know (!) but I could try “unisheen” (!!). And lo and behold, “unisheen” didn't work, but “neworangetech”, Unisheen's default password, worked and gave me a HiLinux root shell with kernel 3.8.20. No matter the reason, I suppose this means I can assume the Unisheen devices are fairly similar…

LinkPi's support was less helpful; they refused to give me the password and suggested I talk to the seller. However, one of their update packages (which requires registration on Gitee, a Chinese GitHub clone—but at least in the open, unlike URayTech's, which you can only get from their support) contained an /etc/passwd with some DES password hashes. Three of them were simply the empty password, while the root password fell to some Hashcat brute-forcing after a few hours. It's “linkpi.c” (DES passwords are truncated after 8 characters, so I suppose they intended to have “linkpi.cn”.) It also presents a HiLinux root shell with kernel 3.8.20, but rather different userspace and 256 MB flash instead of 32 MB or so.
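
For the curious, this kind of brute-forcing is a one-liner; a sketch, assuming the hashes from /etc/passwd were saved into hashes.txt (mode 1500 is Hashcat's traditional DES-crypt mode):

# Mask attack over all printable ASCII, incrementing the candidate
# length up to descrypt's 8-character limit
hashcat -m 1500 -a 3 --increment hashes.txt '?a?a?a?a?a?a?a?a'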

GPL violations are rampant, of course. Both ship Linux with no license text (even though UHE265-1-Mini comes with a paper manual) and no source in sight. Both ship FFmpeg, although URayTech has configured it in LGPL mode, unlike LinkPi, which has a full-on GPL build and links to it from various proprietary libraries. (URayTech ships busybox; I don't think LinkPi does, but I could be mistaken.) LinkPi's FFmpeg build also includes FDK-AAC, which is GPL-incompatible, so the entire shebang is non-redistributable. So both are bad, but the ENC1 is definitely worse. I've sent a request to support for source code, but I have zero belief that I'll receive any.

Both manufacturers have fancier, more expensive boxes, but they seem to diverge a bit. LinkPi is more oriented towards rack gear and also development boards (their main business seems to be selling chips to camera manufacturers; the ENC1 and similar looks most of all like a demo). URayTech, on the other hand, sells a bewildering amount of upgrades; they have variants that are battery-powered (using standard Sony camera battery form factors) and coldshoe mount, variants with Wi-Fi, variants with LTE, dual-battery options… (They also have some rack options.) I believe Unisheen also goes in this direction. OTOH, the ENC1 has a USB port, which can seemingly be used fairly flexibly; you can attach a hard drive for recording (or video playback), a specific Wi-Fi dongle and even the ubiquitous Huawei E3276 LTE stick for 4G connectivity. I haven't tried any of this, but it's certainly a nice touch. It also has some IP input support, for transcoding, and HDMI out. I don't think there's much IPv6 support on either, sadly, although the kernel does support it on both, and perhaps some push support leaks in through FFmpeg (I haven't tested much).

Finally, both support various overlays, mixing of multiple sources and such. I haven't looked a lot at this.

So, which one is better? I guess that on paper, the ENC1 is smaller, cheaper (especially if you can get it at the Chinese price!), has the nice OLED display and more features. But if the price doesn't matter as much, the UHE265-1-Mini wins with the 1080p60 support, and I really ought to look more closely at what's going on with that quality difference (i.e., whether it exists at all). It somehow just gives me better vibes, and I'm not really sure why; perhaps it's just because I received it so much earlier. Both are very interesting devices, and I'd love to be able to actually take them into the field some day.

27 June, 2020 11:02PM

June 26, 2020

Chris Lamb

On the pleasure of hating

People love to tell you that they "don't watch sports" but the story of Lance Armstrong provides a fascinating lens through which to observe our culture at large.

For example, even granting all that he did and all the context in which he did it, why do sports cheats act like a lightning rod for such an instinctive hatred? After all, the sheer level of distaste directed at people such as Lance eludes countless other criminals in our society, many of whom have taken a lot more with far fewer scruples. The question is not one of logic or rationality, but of proportionality.

In some ways it should be unsurprising. In all areas of life, we instinctively prefer binary judgements to moral ambiguities and the sports cheat is a cliché of moral bankruptcy — cheating at something so seemingly trivial as a sport actually makes it more, not less, offensive to us. But we then find ourselves strangely enthralled by them, drawn together in admiration of their outlaw-like tenacity, placing them strangely close to criminal folk heroes. Clearly, sport is not as unimportant as we like to claim it is. In Lance's case in particular though, there is undeniably a Shakespearean quality to the story and we are forced to let go of our strict ideas of right and wrong and appreciate all the nuance.


§


There is a lot of this nuance in Marina Zenovich's new documentary. In fact, there's a lot of everything. At just under four hours, ESPN's Lance combines the duration of a Tour de France stage with the depth of the peloton — an endurance event compared to the bite-sized hagiography of Michael Jordan's The Last Dance.

Even for those who follow Armstrong's story like a mini-sport in itself, Lance reveals new sides to this man for all seasons. For me, not only was this captured in his clumsy approximations at being a father figure but also in him being asked something I had not read in countless tell-all books: did his earlier experiments in drug-taking contribute to his cancer?

But even in 2020 there are questions that remain unanswered. By needlessly returning to the sport in 2009, did Lance subconsciously want to get caught? Why does he not admit he confessed to Betsy Andreu back in 1999 but will happily apologise to her today for slurring her publicly on this very point? And why does he remain so vindictive towards former teammate Floyd Landis? In all of Armstrong's evasions and masterful control of the narrative, there is the gnawing feeling that we don't even know what questions we should be asking. As ever, the questions are more interesting than the answers.


§


Lance also reminded me of professional cycling's obsession with national identity. Although I was intuitively aware of it to some degree, I had not fully grasped how much this kind of stereotyping runs through the veins of the sport itself, just like the drugs themselves. Journalist Daniel Friebe first offers us the portrait of:

Spaniards tend to be modest, very humble. Very unpretentious. And the Italians are loud, vain and outrageous showmen.

Former directeur sportif Johan Bruyneel then asserts that "Belgians are hard workers... they are ambitious to a certain point, but not overly ambitious", and cyclist Jörg Jaksche concludes with:

The Germans are very organised and very structured. And then the French, now I have to be very careful because I am German, but the French are slightly superior.

This kind of lazy caricature is nothing new, especially for those brought up on a solid diet of Tintin and Asterix, but although all these examples are seemingly harmless, why does the underlying idea of ascribing moral, social or political significance to genetic lineage remain so durable in today's age of anti-racism? To be sure, culture is not quite the same thing as race, but being judged by the character of one's ancestors rather than the actions of an individual is, at its core, one of the many conflations at the heart of racism. There is certainly a large amount of cognitive dissonance at work, especially when Friebe elaborates:

East German athletes were like incredible robotic figures, fallen off a production line somewhere behind the Iron Curtain...

... but then Übermensch Jan Ullrich is immediately described as "emotional" and as having "struggled to live the life of a professional cyclist 365 days a year". We see the habit to stereotype is so ingrained that even in the face of this obvious contradiction, Friebe unironically excuses Ullrich's failure to live up to his German roots due to him actually being "Mediterranean".


§


I mention all this as I am known within my circles for remarking on these national characters, even collecting stereotypical examples of Italians 'being Italian' and the French 'being French' at times. Contrary to evidence, I don't believe in this kind of innate quality but what I do suspect is that people generally behave how they think they ought to behave, perhaps out of sheer imitation or the simple pleasure of conformity. As the novelist Will Self put it:

It's quite a complicated collective imposture, people pretending to be British and people pretending to be French, and then they get really angry with each other over what they're pretending to be.

The really remarkable thing about this tendency is that even if we consciously notice it there is seemingly no escape — even I could not help but smirk when I considered that a brash Texan winning the Tour de France actually combines two of America's cherished obsessions: winning... and annoying the French.

26 June, 2020 08:58PM

Norbert Preining

KDE/Plasma 5.19.2 for Debian

After the long wait (or should I say The Long Dark) finally Qt 5.14 has arrived in Debian/unstable, and thus the doors for KDE/Plasma 5.19 are finally open.

I have been preparing this release for quite some time, but with unstable still at Qt 5.12 I could only test it in a virtual machine using Debian/experimental. But now, finally, a full upgrade to Plasma 5.19(.2) has arrived.

Unfortunately, it turned out that the OBS build servers are either overloaded, incapable, or broken: they do not properly build the necessary packages. Thus, I make the Plasma 5.19.2 (and framework) packages available via my server, for amd64. Please use the following apt line:

deb https://www.preining.info/debian/ unstable kde

(The source packages are also there, just in case you want to rebuild them for i386).

Note that this requires my GPG key to be imported; best is to put it into /etc/apt/trusted.gpg.d/preining-norbert.asc.
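
A sketch of that import (the key URL here is an assumption; fetch the key from wherever it is actually published):

# download the ASCII-armored key and drop it where apt will trust it
curl -fsSL https://www.preining.info/rsa.asc \
  | sudo tee /etc/apt/trusted.gpg.d/preining-norbert.asc >/dev/null
sudo apt update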

For KDE Applications please still use

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps/Debian_Unstable/ ./

but I might sooner or later move that to my server, too. The OBS servers seem to be too broken with respect to considerably big rebuild groups.

So, enjoy your shiny new KDE/Plasma desktop on Debian, too!

26 June, 2020 01:04PM by Norbert Preining

Jonathan Dowland

Our Study

This is a fairly self-indulgent post, sorry!

Encouraged by Evgeni, Michael and others, given I'm spending a lot more time at my desk in my home office, here's a picture of it:

Fisheye shot of my home office

Near enough everything in the study is a work in progress.

The KALLAX behind my desk is a recent addition (under duress) because we have nowhere else to put it. Sadly I can't see it going away any time soon. On the up-side, since Christmas I've had my record player and collection in and on top of it, so I can listen to records whilst working.

The arm chair is a recent move from another room. It's a nice place to take some work calls, serving as a change of scene, and I hope to add a reading light sometime. The desk chair is some Ikea model I can't recall, which is sufficient, and just fits into the desk cavity. (I'm fairly sure my Aeron, inaccessible elsewhere, would not fit.)

I've had this old mahogany, leather-topped desk since I was a teenager and it's a blessing and a curse. Mostly a blessing: It's a lovely big desk. The main drawback is it's very much not height adjustable. At the back is a custom made, full-width monitor stand/shelf, a recent gift produced to specification by my Dad, a retired carpenter.

On the top: my work Thinkpad T470s laptop, geared more towards portable than powerful (my normal preference), although for the foreseeable future it's going to remain in the exact same place; an Ikea desk lamp (I can't recall the model); a 27" 4K LG monitor, the cheapest such I could find when I bought it; an old analog LCD TV, fantastic for vintage consoles and the like.

Underneath: An Alesis Micron 2½ octave analog-modelling synthesizer; various hubs and similar things; My Amiga 500.

Like Evgeni, my normal keyboard is a ThinkPad Compact USB Keyboard with TrackPoint. I've been using different generations of these styles of keyboards for a long time now, initially because I loved the trackpoint pointer. I'm very curious about trying out mechanical keyboards, as I have very fond memories of my first IBM Model M buckled-spring keyboard, but I haven't dipped my toes into that money-trap just yet. The Thinkpad keyboard is rubber-dome, but it's a good one.

Wedged between the right-hand bookcases are a stack of IT-related things: new printer; deprecated printer; old/spare/play laptops, docks and chargers; managed network switch; NAS.

26 June, 2020 09:32AM

Martin Michlmayr

ledger2beancount 2.3 released

I released version 2.3 of ledger2beancount, a ledger to beancount converter.

There are three notable changes with this release:

  1. Performance has significantly improved. One large, real-world test case has gone from around 160 seconds to 33 seconds. A smaller test case has gone from 11 seconds to ~3.5 seconds.
  2. The documentation is available online now (via Read the Docs).
  3. The repository has moved to the beancount GitHub organization.

Here are the changes in 2.3:

  • Improve speed of ledger2beancount significantly
  • Improve parsing of postings for accuracy and speed
  • Improve support for inline math
  • Handle lots without cost
  • Fix parsing of lot notes followed by a virtual price
  • Add support for lot value expressions
  • Make parsing of numbers more strict
  • Fix behaviour of dates without year
  • Accept default ledger date formats without configuration
  • Fix implicit conversions with negative prices
  • Convert implicit conversions in a more idiomatic way
  • Avoid introducing trailing whitespace with hledger input
  • Fix loading of config file
  • Skip ledger directive import
  • Convert documentation to mkdocs

Thanks to Colin Dean for some feedback. Thanks to Stefano Zacchiroli for prompting me into investigating performance issues (and thanks to the developers of the Devel::NYTProf profiler).

You can get ledger2beancount from GitHub.
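
If you want to give it a try, the conversion itself is a single command; a minimal sketch, assuming your journal lives in main.ledger:

# convert a ledger journal to beancount syntax on stdout
ledger2beancount main.ledger > main.beancount
# then let beancount validate the result
bean-check main.beancount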

26 June, 2020 03:45AM by Martin Michlmayr

Reproducible Builds (diffoscope)

diffoscope 149 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 149. This version includes the following changes:

[ Chris Lamb ]
* Update tests for file 5.39. (Closes: reproducible-builds/diffoscope#179)
* Downgrade the tlsh warning message to an "info" level warning.
  (Closes: #888237, reproducible-builds/diffoscope#29)
* Use the CSS "word-break" property over manually adding U+200B zero-width
  spaces that make copy-pasting cumbersome.
  (Closes: reproducible-builds/diffoscope!53)

* Codebase improvements:
  - Drop some unused imports from the previous commit.
  - Prevent an unnecessary .format() when rendering difference comments.
  - Use a semantic "AbstractMissingType" type instead of remembering to check
    for both "missing" files and missing containers.

[ Jean-Romain Garnier ]
* Allow user to mask/filter reader output via --diff-mask=REGEX.
  (MR: reproducible-builds/diffoscope!51)
* Make --html-dir child pages open in new window to accommodate new web
  browser content security policies.
* Fix the --new-file option when comparing directories by merging
  DirectoryContainer.compare and Container.compare.
  (Closes: reproducible-builds/diffoscope#180)
* Fix zsh completion for --max-page-diff-block-lines.

[ Mattia Rizzolo ]
* Do not warn about missing tlsh during tests.

You can find out more by visiting the project homepage.
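
As an illustration of the new --diff-mask option listed above, here is a hypothetical invocation (the regex and file names are assumptions) that masks embedded dates out of the comparison:

# suppress any line matching an ISO date before diffing the two packages
diffoscope --diff-mask='[0-9]{4}-[0-9]{2}-[0-9]{2}' build1.deb build2.deb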

26 June, 2020 12:00AM

June 25, 2020

Russell Coker

How Will the Pandemic Change Things?

The Bulwark has an interesting article on why they can’t “Reopen America” [1]. I wonder how many changes will be long term. According to the Wikipedia List of Epidemics [2], Covid-19 so far hasn’t had a high death toll when compared to other pandemics of the last 100 years. People’s reactions to this vary from doing nothing to significant isolation; the question is which changes in attitudes will be significant enough to change society.

Transport

One thing that has been happening recently is a transition in transport. It’s obvious that we need to reduce CO2, and while electric cars will address the transport part of the problem in the long term, changing to electric public transport is the cheaper and faster way to do it in the short term. Before Covid-19 the peak hour public transport in my city was ridiculously overcrowded; having people unable to board trams due to overcrowding was really common. If the economy returns to its previous state then I predict fewer people on public transport, more traffic jams, and many more cars idling and polluting the atmosphere.

Can we have mass public transport that doesn’t give a significant disease risk? Maybe if we had significantly more trains and trams and better ventilation with more airflow designed to suck contaminated air out. But that would require significant engineering work to design new trams, trains, and buses as well as expense in refitting or replacing old ones.

Uber and similar companies have been taking over from taxi companies, one major feature of those companies is that the vehicles are not dedicated as taxis. Dedicated taxis could easily be designed to reduce the spread of disease, the famed Black Cab AKA Hackney Carriage [3] design in the UK has a separate compartment for passengers with little air flow to/from the driver compartment. It would be easy to design such taxis to have entirely separate airflow and if setup to only take EFTPOS and credit card payment could avoid all contact between the driver and passengers. I would prefer to have a Hackney Carriage design of vehicle instead of a regular taxi or Uber.

Autonomous cars have been shown to basically work. There are some concerns about safety issues as there are currently corner cases that car computers don’t handle as well as people, but of course there are also things computers do better than people. Having an autonomous taxi would be a benefit for anyone who wants to avoid other people. Maybe approval could be rushed through for autonomous cars that are limited to 40Km/h (the maximum collision speed at which a pedestrian is unlikely to die), in central city areas and inner suburbs you aren’t likely to drive much faster than that anyway.

Car share services have been becoming popular, for many people they are significantly cheaper than owning a car due to the costs of regular maintenance, insurance, and depreciation. As the full costs of car ownership aren’t obvious people may focus on the disease risk and keep buying cars.

Passenger jet travel is ridiculously cheap. But this relies on the airline companies being able to consistently fill the planes. If they were to add measures to reduce cross contamination between passengers which slightly reduce the capacity of planes, then they would need to increase ticket prices accordingly, which then reduces demand. If passengers are just scared of flying in close proximity and they can’t fill planes then they will have to increase prices, which again reduces demand and could lead to a death spiral. If in the long term there aren’t enough passengers to sustain the current number of planes in service then airline companies will have significant financial problems; planes are expensive assets that are expected to last for a long time, and if they can’t use them all and can’t sell them then airline companies will go bankrupt.

It’s not reasonable to expect that the same number of people will be travelling internationally for years (if ever). Due to relying on economies of scale to provide low prices I don’t think it’s possible to keep prices the same no matter what they do. A new economic balance of flights costing 2-3 times more than we are used to while having significantly less passengers seems likely. Governments need to spend significant amounts of money to improve trains to take over from flights that are cancelled or too expensive.

Entertainment

The article on The Bulwark mentions Las Vegas as a city that will be hurt a lot by reductions in travel and crowds, the same thing will happen to tourist regions all around the world. Australia has a significant tourist industry that will be hurt a lot. But the mention of Las Vegas makes me wonder what will happen to the gambling in general. Will people avoid casinos and play poker with friends and relatives at home? It seems that small stakes poker games among friends will be much less socially damaging than casinos, will this be good for society?

The article also mentions cinemas which have been on the way out since the video rental stores all closed down. There’s lots of prime real estate used for cinemas and little potential for them to make enough money to cover the rent. Should we just assume that most uses of cinemas will be replaced by Netflix and other streaming services? What about teenage dates, will kissing in the back rows of cinemas be replaced by “Netflix and chill”? What will happen to all the prime real estate used by cinemas?

Professional sporting matches have been played for a TV-only audience during the pandemic. There’s no reason that they couldn’t make a return to live stadium audiences when there is a vaccine for the disease or the disease has been extinguished by social distancing. But I wonder if some fans will start to appreciate the merits of small groups watching large TVs and not want to go back to stadiums, can this change the typical behaviour of groups?

Restaurants and cafes are going to do really badly. I previously wrote about my experience running an Internet Cafe and why reopening businesses soon is a bad idea [4]. The question is how long this will go for and whether social norms about personal space will change things. If in the long term people expect 25% more space in a cafe or restaurant that’s enough to make a significant impact on profitability for many small businesses.

When I was young the standard thing was for people to have dinner at friends homes. Meeting friends for dinner at a restaurant was uncommon. Recently it seemed to be the most common practice for people to meet friends at a restaurant. There are real benefits to meeting at a restaurant in terms of effort and location. Maybe meeting friends at their home for a delivered dinner will become a common compromise, avoiding the effort of cooking while avoiding the extra expense and disease risk of eating out. Food delivery services will do well in the long term, it’s one of the few industry segments which might do better after the pandemic than before.

Work

Many companies are discovering the benefits of teleworking, getting it going effectively has required investing in faster Internet connections and hardware for employees. When we have a vaccine the equipment needed for teleworking will still be there and we will have a discussion about whether it should be used on a more routine basis. When employees spend more than 2 hours per day travelling to and from work (which is very common for people who work in major cities) that will obviously limit the amount of time per day that they can spend working. For the more enthusiastic permanent employees there seems to be a benefit to the employer to allow working from home. It’s obvious that some portion of the companies that were forced to try teleworking will find it effective enough to continue in some degree.

One company that I work for has quit their coworking space in part because they were concerned that the coworking company might go bankrupt due to the pandemic. They seem to have become a 100% work from home company for the office part of the work (only on site installation and stock management is done at corporate locations). Companies running coworking spaces and other shared offices will suffer first as their clients have short term leases. But all companies renting out office space in major cities will suffer due to teleworking. I wonder how this will affect the companies providing services to the office workers, the cafes and restaurants etc. Will there end up being so much unused space in central city areas that it’s not worth converting the city cinemas into useful space?

There’s been a lot of news about Zoom and similar technologies. Lots of other companies are trying to get into that business. One thing that isn’t getting much notice is remote access technologies for desktop support. If the IT people can’t visit your desk because you are working from home then they need to be able to remotely access it to fix things. When people make working from home a large part of their work time the issue of who owns peripherals and how they are tracked will get interesting. In a previous blog post I suggested that keyboards and mice not be treated as assets [5]. But what about monitors, 4G/Wifi access points, etc?

Some people have suggested that there will be business sectors benefiting from the pandemic, such as telecoms and e-commerce. If you have a bunch of people forced to stay home who aren’t broke (i.e. a large portion of the middle class in Australia) they will probably order delivery of stuff for entertainment. But in the long term e-commerce seems unlikely to change much: people will spend less due to economic uncertainty, so while they may shift some purchasing to e-commerce, apart from home delivery of groceries e-commerce probably won’t go up overall. Generally telecoms won’t gain anything from teleworking; the Internet access you need for good Netflix viewing is generally greater than that needed for good video-conferencing.

Money

I previously wrote about a Basic Income for Australia [6]. One of the most cited reasons for a Basic Income is to deal with robots replacing people. Now we are at the start of what could be a long term economic contraction caused by the pandemic which could reduce the scale of the economy by a similar degree while also improving the economic case for a robotic workforce. We should implement a Universal Basic Income now.

I previously wrote about the make-work jobs and how we could optimise society to achieve the worthwhile things with less work [7]. My ideas about optimising public transport and using more car share services may not work so well after the pandemic, but the rest should work well.

Business

There are a number of big companies that are not aiming for profitability in the short term. WeWork and Uber are well documented examples. Some of those companies will hopefully go bankrupt and make room for more responsible companies.

The co-working thing was always a precarious business. The companies renting out office space usually did so on a monthly basis as flexibility was one of their selling points, but they presumably rented buildings on an annual basis. As the profit margins weren’t particularly high having to pay rent on mostly empty buildings for a few months will hurt them badly. The long term trend in co-working spaces might be some sort of collaborative arrangement between the people who run them and the landlords similar to the way some of the hotel chains have profit sharing agreements with land owners to avoid both the capital outlay for buying land and the risk involved in renting. Also city hotels are very well equipped to run office space, they have the staff and the procedures for running such a business, most hotels also make significant profits from conventions and conferences.

The way the economy has been working in first world countries has been about being as competitive as possible. Just in time delivery to avoid using storage space and machines to package things in exactly the way that customers need and no more machines than needed for regular capacity. This means that there’s no spare capacity when things go wrong. A few years ago a company making bolts for the car industry went bankrupt because the car companies forced the prices down, then car manufacture stopped due to lack of bolts – this could have been a wake up call but was ignored. Now we have had problems with toilet paper shortages due to it being packaged in wholesale quantities for offices and schools not retail quantities for home use. Food was destroyed because it was created for restaurant packaging and couldn’t be packaged for home use in a reasonable amount of time.

Farmer’s markets alleviate some of the problems with packaging food etc. But they aren’t a good option when there’s a pandemic as disease risk makes them less appealing to customers and therefore less profitable for vendors.

Religion

Many religious groups have supported social distancing. Could this be the start of more decentralised religion? Maybe have people read the holy book of their religion and pray at home instead of being programmed at church? We can always hope.

25 June, 2020 03:27AM by etbe

June 24, 2020

Ian Jackson

Renaming the primary git branch to "trunk"

I have been convinced by the arguments that it's not nice to keep using the word master for the default git branch. Regardless of the etymology (which is unclear), some people say they have negative associations for this word. Changing this upstream in git is complicated on a technical level and, sadly, contested.

But git is flexible enough that I can make this change in my own repositories. Doing so is not even so difficult.

So:

Announcement

I intend to rename master to trunk in all repositories owned by my personal hat. To avoid making things very complicated for myself I will just delete refs/heads/master when I make this change. So there may be a little disruption to downstreams.

I intend to make this change everywhere eventually. But rather than front-loading the effort, I'm going to do this to repositories as I come across them anyway. That will allow me to update all the docs references, any automation, etc., at a point when I have those things in mind anyway. Also, doing it this way will allow me to focus my effort on the most active projects, and avoids me committing to a sudden large pile of fiddly clerical work. But: if you have an interest in any repository in particular that you want updated, please let me know so I can prioritise it.
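
For a single repository the mechanics are simple; a sketch of the rename, assuming a conventional origin remote:

# rename the branch locally, keeping its history
git branch -m master trunk
# publish the new name, then delete the old ref as described above
git push origin trunk
git push origin --delete master
# update the local record of the remote's default branch
git remote set-head origin trunk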

Bikeshed

Why "trunk"? "Main" has been suggested elswewhere, and it is often a good replacement for "master" (for example, we can talk very sensibly about a disk's Main Boot Record, MBR). But "main" isn't quite right for the VCS case; for example a "main" branch ought to have better quality than is typical for the primary development branch.

Conversely, there is much precedent for "trunk". "Trunk" was used to refer to this concept by at least SVN, CVS, RCS and CSSC (and therefore probably SCCS) - at least in the documentation, although in some of these cases the command line API didn't have a name for it.

So "trunk" it is.

Aside: two other words - passlist, blocklist

People are (finally!) starting to replace "blacklist" and "whitelist". Seriously, why has it taken everyone this long?

I have been using "blocklist" and "passlist" for these concepts for some time. They are drop-in replacements. I have also heard "allowlist" and "denylist" suggested, but they are cumbersome and cacophonous.

Also "allow" and "deny" seem to more strongly imply an access control function than merely "pass" and "block", and the usefulness of passlists and blocklists extends well beyond access control: protocol compatibility and ABI filtering are a couple of other use cases.




24 June, 2020 05:38PM

François Marier

Automated MythTV-related maintenance tasks

Here is the daily/weekly cronjob I put together over the years to perform MythTV-related maintenance tasks on my backend server.

The first part performs a database backup:

5 1 * * *  mythtv  /usr/share/mythtv/mythconverg_backup.pl

which I previously configured by putting the following in /home/mythtv/.mythtv/backuprc:

DBBackupDirectory=/var/backups/mythtv

and creating a new directory for it:

mkdir /var/backups/mythtv
chown mythtv:mythtv /var/backups/mythtv

The second part of /etc/cron.d/mythtv-maintenance runs a contrib script to optimize the database tables:

10 1 * * *  mythtv  /usr/bin/chronic /usr/share/doc/mythtv-backend/contrib/maintenance/optimize_mythdb.pl

once a day. It requires the libmythtv-perl and libxml-simple-perl packages to be installed on Debian-based systems.

It is quickly followed by a check of the recordings and automatic repair of the seektable (when possible):

20 1 * * *  mythtv  /usr/bin/chronic /usr/bin/mythutil --checkrecordings --fixseektable

Next, I force a scan of the music and video databases to pick up anything new that may have been added externally via NFS mounts:

30 1 * * *  mythtv  /usr/bin/mythutil --quiet --scanvideos
31 1 * * *  mythtv  /usr/bin/mythutil --quiet --scanmusic

Finally, I defragment the XFS partition for two hours every day except Friday:

45 1 * * 1-4,6-7  root  /usr/sbin/xfs_fsr

and resync the RAID-1 arrays once a week to ensure that they stay consistent and error-free:

15 3 * * 2  root  /usr/local/sbin/raid_parity_check md0
15 3 * * 4  root  /usr/local/sbin/raid_parity_check md2

using a trivial script.
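
The script itself isn't shown here, but a minimal version might look like this (a sketch under the assumption that it simply drives the kernel's md sync_action interface, not necessarily the actual script):

#!/bin/sh
# Usage: raid_parity_check md0
# Kick off a background consistency check of the given md array;
# progress can be watched in /proc/mdstat.
echo check > "/sys/block/$1/md/sync_action"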

In addition to that cronjob, I also have smartmontools run daily short and weekly long SMART tests via this blurb in /etc/smartd.conf:

/dev/sda -a -d ata -o on -S on -s (S/../.././04|L/../../6/05)
/dev/sdb -a -d ata -o on -S on -s (S/../.././04|L/../../6/05)

If there are any other automated maintenance tasks you do on your MythTV server, please leave a comment!

24 June, 2020 04:45PM

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, May 2020

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, 198 work hours have been dispatched among 14 paid contributors. Their reports are available:
  • Abhijith PA did 18.0h (out of 14h assigned and 4h from April).
  • Anton Gladky gave back the assigned 10h and declared himself inactive.
  • Ben Hutchings did 19.75h (out of 17.25h assigned and 2.5h from April).
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 17.25h (out of 17.25h assigned).
  • Dylan Aïssi gave back the assigned 6h and declared himself inactive.
  • Emilio Pozuelo Monfort did not manage to work on LTS in May, but reported 5h of work done in April (out of 17.25h assigned plus 46h from April), thus carrying over 58.25h to June.
  • Markus Koschany did 25.0h (out of 17.25h assigned and 56h from April), thus carrying over 48.25h to June.
  • Mike Gabriel did 14.50h (out of 8h assigned and 6.5h from April).
  • Ola Lundqvist did 11.5h (out of 12h assigned and 7h from April), thus carrying over 7.5h to June.
  • Roberto C. Sánchez did 17.25h (out of 17.25h assigned).
  • Sylvain Beucler did 17.25h (out of 17.25h assigned).
  • Thorsten Alteholz did 17.25h (out of 17.25h assigned).
  • Utkarsh Gupta did 17.25h (out of 17.25h assigned).

Evolution of the situation

In May 2020 we had our second (virtual) contributors meeting on IRC; logs and minutes are available online. We also moved our ToDo from the Debian wiki to the issue tracker on salsa.debian.org.
Sadly three contributors went inactive in May: Adrian Bunk, Anton Gladky and Dylan Aïssi. And while there are currently still enough active contributors to shoulder the existing work, we'd like to use this opportunity to point out that we are always looking for new contributors. Please mail Holger if you are interested.
Finally, we'd like to remind you one last time that the end of Jessie LTS is coming in less than a month!
In case you missed it (or missed acting on it), please read this post about keeping Debian 8 Jessie alive for longer than 5 years. If you expect to have Debian 8 servers/devices running after June 30th 2020, and would like to have security updates for them, please get in touch with Freexian.

The security tracker currently lists 6 packages with a known CVE and the dla-needed.txt file has 30 packages needing an update.

Thanks to our sponsors

New sponsors are in bold. With the upcoming start of Jessie ELTS, we are welcoming a few new sponsors and others should join soon.


24 June, 2020 01:03PM by Raphaël Hertzog

Matthew Garrett

Making my doorbell work

I recently moved house, and the new building has a Doorbird to act as a doorbell and open the entrance gate for people. There's a documented local control API (no cloud dependency!) and a Home Assistant integration, so this seemed pretty straightforward.

Unfortunately not. The Doorbird is on a separate network that's shared across the building, provided by Monkeybrains. We're also a Monkeybrains customer, so our network connection is plugged into the same router and antenna as the Doorbird one. And, as is common, there's port isolation between the networks in order to avoid leakage of information between customers. Rather perversely, we are the only people with an internet connection who are unable to ping my doorbell.

I spent most of the past few weeks digging myself out from under a pile of boxes, but we'd finally reached the point where spending some time figuring out a solution to this seemed reasonable. I spent a while playing with port forwarding, but that wasn't ideal - the only server I run is in the UK, and having packets round trip almost 11,000 miles so I could speak to something a few metres away seemed like a bad plan. Then I tried tethering an old Android device with a data-only SIM, which worked fine but only in one direction (I could see what the doorbell could see, but I couldn't get notifications that someone had pushed a button, which was kind of the point here).

So I went with the obvious solution - I added a wifi access point to the doorbell network, and my home automation machine now exists on two networks simultaneously (nmcli device modify wlan0 ipv4.never-default true is the magic for "ignore the gateway that the DHCP server gives you" if you want to avoid this), and I could now do link local service discovery to find the doorbell if it changed addresses after a power cut or anything. And then, like magic, everything worked - I got notifications from the doorbell when someone hit our button.
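
Spelled out as a sketch (the connection name doorbell-net and the password are placeholders standing in for the real credentials):

# join the doorbell network, then tell NetworkManager never to use it
# as the default route; we only want link-local access to the Doorbird
nmcli device wifi connect doorbell-net password s3cret
nmcli connection modify doorbell-net ipv4.never-default yes
nmcli connection up doorbell-net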

But knowing that an event occurred without actually doing something in response seems fairly unhelpful. I have a bunch of Chromecast targets around the house (a mixture of Google Home devices and Chromecast Audios), so just pushing a message to them seemed like the easiest approach. Home Assistant has a text to speech integration that can call out to various services to turn some text into a sample, and then push that to a media player on the local network. You can group multiple Chromecast audio sinks into a group that then presents as a separate device on the network, so I could then write an automation to push audio to the speaker group in response to the button being pressed.

That's nice, but it'd also be nice to do something in response. The Doorbird exposes API control of the gate latch, and Home Assistant exposes that as a switch. I'm using Home Assistant's Google Assistant integration to expose devices Home Assistant knows about to voice control. Which means when I get a house-wide notification that someone's at the door I can just ask Google to open the door for them.

So. Someone pushes the doorbell. That sends a signal to a machine that's bridged onto that network via an access point. That machine then sends a protobuf command to speakers on a separate network, asking them to stream a sample it's providing. Those speakers call back to that machine, grab the sample and play it. At this point, multiple speakers in the house say "Someone is at the door". I then say "Hey Google, activate the front gate" - the device I'm closest to picks this up and sends it to Google, where something turns my speech back into text. It then looks at my home structure data and realises that the "Front Gate" device is associated with my Home Assistant integration. It then calls out to the home automation machine that received the notification in the first place, asking it to trigger the front gate relay. That device calls out to the Doorbird and asks it to open the gate. And now I have functionality equivalent to a doorbell that completes a circuit and rings a bell inside my home, and a button inside my home that completes a circuit and opens the gate, except it involves two networks inside my building, callouts to the cloud, at least 7 devices inside my home that are running Linux and I really don't want to know how many computational cycles.

The future is wonderful.

(I work for Google. I do not work on any of the products described in this post. Please god do not ask me how to integrate your IoT into any of this)


24 June, 2020 05:09AM

June 22, 2020

Andrew Cater

And a nice quick plug - Apple WWDC keynote

Demonstrating virtualisation on the new Apple to be powered by Apple silicon and ARM. Parallels virtualisation: Linux gets mentioned - two clear shots, one of Debian 9 and one of Debian 10. That's good publicity, any which way you cut it, with a worldwide tech audience hanging on every word from the presenters.

Now - developer hardware is due out this week - Apple A12 chip inside Mac mini format. Can we run Debian natively there?


22 June, 2020 07:40PM by Andrew Cater

Evgeni Golov

mass-migrating modules inside an Ansible Collection

In the Foreman project, we've been maintaining a collection of Ansible modules to manage Foreman installations since 2017. That is, 2 years before Ansible had the concept of collections at all.

For that you had to set library (and later module_utils and doc_fragment_plugins) in ansible.cfg and effectively inject our modules, their helpers and documentation fragments into the main Ansible namespace. Not the cleanest solution, but it worked quite well for us.

When Ansible started introducing Collections, we quickly joined, as the idea of namespaced, easily distributable and usable content units was great and exactly matched what we had in mind.

However, collections are only usable in Ansible 2.8, or actually 2.9 as 2.8 can consume them, but tooling around building and installing them is lacking. Because of that we've been keeping our modules usable outside of a collection.

Until recently, when we decided it's time to move on, drop that compatibility (which cost us a few headaches over the time) and release a shiny 1.0.0.

One of the changes we wanted for 1.0.0 is renaming a few modules. Historically we had the module names prefixed with foreman_ and katello_, depending on whether they were designed to work with Foreman (and plugins) or Katello (which is technically a Foreman plugin, but has a way more complicated deployment and currently can't be easily added to an existing Foreman setup). This made sense as long as we were injecting into the main Ansible namespace, but with collections the names became theforeman.foreman.foreman_<something>, and while we all love Foreman, that was a bit too much. So we wanted to drop that prefix. And while at it, also change some other names (like ptable, which became partition_table) to be more readable.

But how? There is no tooling that would rename all files accordingly, adjust examples and tests. Well, bash to the rescue! I'm usually not a big fan of bash scripts, but renaming files, searching and replacing strings? That perfectly fits!

First of all we need a way to map the old name to the new name. In most cases it's just "drop the prefix"; for the others you can have some if/elif/fi:

prefixless_name=$(echo ${old_name}| sed -E 's/^(foreman|katello)_//')
if [[ ${old_name} == 'foreman_environment' ]]; then
  new_name='puppet_environment'
elif [[ ${old_name} == 'katello_sync' ]]; then
  new_name='repository_sync'
elif [[ ${old_name} == 'katello_upload' ]]; then
  new_name='content_upload'
elif [[ ${old_name} == 'foreman_ptable' ]]; then
  new_name='partition_table'
elif [[ ${old_name} == 'foreman_search_facts' ]]; then
  new_name='resource_info'
elif [[ ${old_name} == 'katello_manifest' ]]; then
  new_name='subscription_manifest'
elif [[ ${old_name} == 'foreman_model' ]]; then
  new_name='hardware_model'
else
  new_name=${prefixless_name}
fi

That defined, we need to actually have a ${old_name}. Well, that's a for loop over the modules, right?

for module in ${BASE}/foreman_*py ${BASE}/katello_*py; do
  old_name=$(basename ${module} .py)
done

While we're looping over files, let's rename them and all the files that are associated with the module:

# rename the module
git mv ${BASE}/${old_name}.py ${BASE}/${new_name}.py

# rename the tests and test fixtures
git mv ${TESTS}/${old_name}.yml ${TESTS}/${new_name}.yml
git mv tests/fixtures/apidoc/${old_name}.json tests/fixtures/apidoc/${new_name}.json
for testfile in ${TESTS}/fixtures/${old_name}-*.yml; do
  git mv ${testfile} $(echo ${testfile}| sed "s/${old_name}/${new_name}/")
done

Now comes the really tricky part: search and replace. Let's see where we need to replace first:

  1. in the module file
    1. module key of the DOCUMENTATION stanza (e.g. module: foreman_example)
    2. all examples (e.g. foreman_example: …)
  2. in all test playbooks (e.g. foreman_example: …)
  3. in pytest's conftest.py and other files related to test execution
  4. in documentation
sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" ${BASE}/*.py

sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" tests/test_playbooks/tasks/*.yml tests/test_playbooks/*.yml

sed -E -i "/'${old_name}'/ s/${old_name}/${new_name}/" tests/conftest.py tests/test_crud.py

sed -E -i "/\`${old_name}\`/ s/${old_name}/${new_name}/g" README.md docs/*.md

You've probably noticed I used ${BASE} and ${TESTS} and never defined them… Lazy me.

But here is the full script, defining the variables and looping over all the modules.

#!/bin/bash

BASE=plugins/modules
TESTS=tests/test_playbooks
RUNTIME=meta/runtime.yml

echo "plugin_routing:" > ${RUNTIME}
echo "  modules:" >> ${RUNTIME}

for module in ${BASE}/foreman_*py ${BASE}/katello_*py; do
  old_name=$(basename ${module} .py)
  prefixless_name=$(echo ${old_name}| sed -E 's/^(foreman|katello)_//')
  if [[ ${old_name} == 'foreman_environment' ]]; then
    new_name='puppet_environment'
  elif [[ ${old_name} == 'katello_sync' ]]; then
    new_name='repository_sync'
  elif [[ ${old_name} == 'katello_upload' ]]; then
    new_name='content_upload'
  elif [[ ${old_name} == 'foreman_ptable' ]]; then
    new_name='partition_table'
  elif [[ ${old_name} == 'foreman_search_facts' ]]; then
    new_name='resource_info'
  elif [[ ${old_name} == 'katello_manifest' ]]; then
    new_name='subscription_manifest'
  elif [[ ${old_name} == 'foreman_model' ]]; then
    new_name='hardware_model'
  else
    new_name=${prefixless_name}
  fi

  echo "renaming ${old_name} to ${new_name}"

  git mv ${BASE}/${old_name}.py ${BASE}/${new_name}.py

  git mv ${TESTS}/${old_name}.yml ${TESTS}/${new_name}.yml
  git mv tests/fixtures/apidoc/${old_name}.json tests/fixtures/apidoc/${new_name}.json
  for testfile in ${TESTS}/fixtures/${old_name}-*.yml; do
    git mv ${testfile} $(echo ${testfile}| sed "s/${old_name}/${new_name}/")
  done

  sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" ${BASE}/*.py

  sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" tests/test_playbooks/tasks/*.yml tests/test_playbooks/*.yml

  sed -E -i "/'${old_name}'/ s/${old_name}/${new_name}/" tests/conftest.py tests/test_crud.py

  sed -E -i "/\`${old_name}\`/ s/${old_name}/${new_name}/g" README.md docs/*.md

  echo "    ${old_name}:" >> ${RUNTIME}
  echo "      redirect: ${new_name}" >> ${RUNTIME}

  git commit -m "rename ${old_name} to ${new_name}" ${BASE} tests/ README.md docs/ ${RUNTIME}
done

As a bonus, the script will also generate a meta/runtime.yml which can be used by Ansible 2.10+ to automatically use the new module names if the playbook contains the old ones.
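To illustrate, the generated file starts out like this (only two of the redirects shown here; the real file gets one entry per renamed module):

plugin_routing:
  modules:
    foreman_ptable:
      redirect: partition_table
    katello_sync:
      redirect: repository_sync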

Oh, and yes, this is probably not the nicest script you'll read this year. Maybe not even today. But it got the job done nicely and I don't intend to need it again anyway.

22 June, 2020 07:31PM by evgeni

June 21, 2020

Enrico Zini

Forced italianisation of Südtirol

Italianization (Italian: Italianizzazione; Croatian: talijanizacija; Slovene: poitaljančevanje; German: Italianisierung; Greek: Ιταλοποίηση) is the spread of Italian culture and language, either by integration or assimilation.[1][2]
In 1919, at the time of its annexation, the middle part of the County of Tyrol which is today called South Tyrol (in Italian Alto Adige) was inhabited by almost 90% German speakers.[1] Under the 1939 South Tyrol Option Agreement, Adolf Hitler and Benito Mussolini determined the status of the German and Ladin (Rhaeto-Romanic) ethnic groups living in the region. They could emigrate to Germany, or stay in Italy and accept their complete Italianization. As a consequence of this, the society of South Tyrol was deeply riven. Those who wanted to stay, the so-called Dableiber, were condemned as traitors while those who left (Optanten) were defamed as Nazis. Because of the outbreak of World War II, this agreement was never fully implemented. Illegal Katakombenschulen ("Catacomb schools") were set up to teach children the German language.
The Prontuario dei nomi locali dell'Alto Adige (Italian for Reference Work of Place Names of Alto Adige) is a list of Italianized toponyms for mostly German place names in South Tyrol (Alto Adige in Italian) which was published in 1916 by the Royal Italian Geographic Society (Reale Società Geografica Italiana). The list was called the Prontuario in short and later formed an important part of the Italianization campaign initiated by the fascist regime, as it became the basis for the official place and district names in the Italian-annexed southern part of the County of Tyrol.
Ettore Tolomei (16 August 1865, in Rovereto – 25 May 1952, in Rome) was an Italian nationalist and fascist. He was designated a Member of the Italian Senate in 1923, and ennobled as Conte della Vetta in 1937.
The South Tyrol Option Agreement (German: Option in Südtirol; Italian: Opzioni in Alto Adige) was an agreement in effect between 1939 and 1943, when the native German speaking people in South Tyrol and three communes in the province of Belluno were given the option of either emigrating to neighboring Nazi Germany (of which Austria was a part after the 1938 Anschluss) or remaining in Fascist Italy and being forcibly integrated into the mainstream Italian culture, losing their language and cultural heritage. Over 80% opted to move to Germany.

21 June, 2020 10:00PM

Daniel Lange

Upgrading Limesurvey with (near) zero downtime

Limesurvey is an online survey tool. It is very powerful and commonly used in academic environments because it is Free Software (GPLv2+), allows for local installations protecting the data of participants, and makes it possible to comply with data protection regulations. This also means there are typically no load-balanced multi-server scenarios with HA databases, but simple VMs where Limesurvey runs and needs upgrading in place.

There's an LTS branch (currently 3.x) and a stable branch (currently 4.x). There's also a 2.06 LTS branch that is restricted to paying customers. The main developers behind Limesurvey offer many services from template design to custom development to support to hosting ("Cloud", "Limesurvey Pro"). Unfortunately they also charge for easy updates called "ComfortUpdate" (currently 39€ for three months) and the manual process is made a bit cumbersome to make the "ComfortUpdate" offer more attractive.

Due to Limesurvey being an old code base and UI elements not being clearly separated, most serious use cases will end up patching files and symlinking logos around template directories. That conflicts a bit with the opaque "ComfortUpdate" process where you push a button and then magic happens. Or you have downtime and a recovery case while surveys are running.

If you do not intend to use the "ComfortUpdate" offering, you can prevent Limesurvey from connecting to http://comfortupdate.limesurvey.org daily by adding the updatable stanza as in line 14 to limesurvey/application/config/config.php:

  1. return array(
  2.  [...]
  3.          // Use the following config variable to set modified optional settings copied from config-defaults.php
  4.         'config'=>array(
  5.         // debug: Set this to 1 if you are looking for errors. If you still get no errors after enabling this
  6.         // then please check your error-logs - either in your hosting provider admin panel or in some /logs directory
  7.         // on your webspace.
  8.         // LimeSurvey developers: Set this to 2 to additionally display STRICT PHP error messages and get full access to standard templates
  9.                 'debug'=>0,
  10.                 'debugsql'=>0, // Set this to 1 to enable sql logging, only active when debug = 2
  11.                 // Mysql database engine (INNODB|MYISAM):
  12.                  'mysqlEngine' => 'MYISAM'
  13. ,               // Update default LimeSurvey config here
  14.                 'updatable' => false,
  15.         )
  16. );

The comma on line 13 is placed like that in the current default limesurvey config.php; don't let yourself get confused. Every item in a PHP array must be followed by a comma (only after the last item it is optional), and that comma may just as well sit at the start of the next line.

The basic principle of low risk, near-zero downtime, in-place upgrades is:

  1. Create a diff between the current release and the target release
  2. Inspect the diff
  3. Make backups of the application webroot
  4. Patch a copy of the application in-place
  5. (optional) stop the web server
  6. Make a backup of the production database
  7. Move the patched application to the production webroot
  8. (if 5) Start the webserver
  9. Upgrade the database (if needed)
  10. Check the application
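In shell terms, that might look something like the following (a sketch only of steps 1-4, 6 and 7; release versions, paths and the database name are made up, so adapt them to your setup):

# 1. create and 2. inspect the diff between the current and the target release
diff -urN limesurvey-4.2.0/ limesurvey-4.3.0/ > /tmp/upgrade.patch
less /tmp/upgrade.patch
# 3. back up the application webroot
tar czf /root/limesurvey-webroot-backup.tar.gz /var/www/limesurvey
# 4. patch a copy of the application in-place
cp -a /var/www/limesurvey /var/www/limesurvey.new
(cd /var/www/limesurvey.new && patch -p1 < /tmp/upgrade.patch)
# 6. back up the production database
mysqldump limesurvey > /root/limesurvey-db-backup.sql
# 7. move the patched application to the production webroot
mv /var/www/limesurvey /var/www/limesurvey.old
mv /var/www/limesurvey.new /var/www/limesurvey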

So, in detail:

Continue reading "Upgrading Limesurvey with (near) zero downtime"

21 June, 2020 07:38PM by Daniel Lange

Molly de Blanc

Fire

The world is on fire.

I know many of you are either my parents' friends or here for the free software thoughts, but rather than listen to me, I want you to listen to Black voices in these fields.

If you’re not Black, it’s your job to educate yourself on issues that affect Black people all over the world and the way systems are designed to benefit White Supremacy. It is our job to acknowledge that racism is a problem — whether it appears as White Supremacy, Colonialism, or something as seemingly banal as pay gaps.

We must make space for Black voices. We must make space for Black Women. We must make space for Black trans lives. We must do this in technology. We must build equity. We must listen.

I know I have a platform. It’s one I value highly because I’ve worked so hard for the past twelve years to build it against sexist attitudes in my field (and the world). However, it is time for me to use that platform for the voices of others.

Please pay attention to Black voices in tech and FOSS. Do not just expect them to explain diversity and inclusion, but go to them for their expertise. Pay them respectful salaries. Mentor Black youth and Black people who want to be involved. Volunteer or donate to groups like Black Girls Code, The Last Mile, and Resilient Coders.

If you’re looking for mentorship, especially around things like writing, speaking, community organizing, or getting your career going in open source, don’t hesitate to reach out to me. Mentorship can be a lasting relationship, or a brief exchange around a specific issue or event. If I can’t help you, I’ll try to connect you to someone who can.

We cannot build the techno-utopia unless everyone is involved.

21 June, 2020 04:32PM by mollydb

June 20, 2020

Dima Kogan

OpenCV C API transition. A rant.

I just went through a debugging exercise that was so ridiculous, I just had to write it up. Some of this probably should go into a bug report instead of a rant, but I'm tired. And clearly I don't care anymore.

OK, so I'm doing computer vision work. OpenCV has been providing basic functions in this area, so I have been using them for a while. Just for really, really basic stuff, like projection. The C API was kinda weird, and their error handling is a bit ridiculous (if you give it arguments it doesn't like, it asserts!), but it has been working fine for a while.

At some point (around OpenCV 3.0) somebody over there decided that they didn't like their C API, and that this was now a C++ library. Except the docs still documented the C API, and the website said it supported C, and the code wasn't actually removed. They just kinda stopped testing it and thinking about it. So it would mostly continue to work, except some poor saps would see weird failures, like this and this, for instance. OpenCV 3.2 was the last version where it was mostly possible to keep using the old C code, even when compiling without optimizations. So I was doing that for years.

So now, in 2020, Debian is finally shipping a version of OpenCV that definitively does not work with the old code, so I had to do something. Over time I stopped using everything about OpenCV, except a few cvProjectPoints2() calls. So I decided to just write a small C++ shim to call the new version of that function, expose that with extern "C" to the rest of my world, and I'd be done. And normally I would be, but this is OpenCV we're talking about. I wrote the shim, and it didn't work. The code built and ran, but the results were wrong. After some pointless debugging, I boiled the problem down to this test program:

#include <opencv2/calib3d.hpp>
#include <stdio.h>

int main(void)
{
    double fx = 1000.0;
    double fy = 1000.0;
    double cx = 1000.0;
    double cy = 1000.0;
    double _camera_matrix[] =
        { fx,  0, cx,
          0,  fy, cy,
          0,   0,  1 };
    cv::Mat camera_matrix(3,3, CV_64FC1, _camera_matrix);

    double pp[3] = {1., 2., 10.};
    double qq[2] = {444, 555};

    int N=1;
    cv::Mat object_points(N,3, CV_64FC1, pp);
    cv::Mat image_points (N,2, CV_64FC1, qq);

    // rvec,tvec
    double _zero3[3] = {};
    cv::Mat zero3(1,3,CV_64FC1, _zero3);

    cv::projectPoints( object_points,
                       zero3,zero3,
                       camera_matrix,
                       cv::noArray(),
                       image_points,
                       cv::noArray(), 0.0);

    fprintf(stderr, "manually-projected no-distortion: %f %f\n",
            pp[0]/pp[2] * fx + cx,
            pp[1]/pp[2] * fy + cy);
    fprintf(stderr, "opencv says: %f %f\n", qq[0], qq[1]);

    return 0;
}

This is as trivial as it gets. I project one point through a pinhole camera, and print out the right answer (that I can easily compute, since this is trivial), and what OpenCV reports:

$ g++ -I/usr/include/opencv4 -o tst tst.cc -lopencv_calib3d -lopencv_core && ./tst

manually-projected no-distortion: 1100.000000 1200.000000
opencv says: 444.000000 555.000000

Well that's no good. The answer is wrong, but it looks like it didn't even write anything into the output array. Since this is supposed to be a thin shim to C code, I want this thing to be filling in C arrays, which is what I'm doing here:

double qq[2] = {444, 555};
int N=1;
cv::Mat image_points (N,2, CV_64FC1, qq);

This is how the C API has worked forever, and their C++ API works the same way, I thought. Nothing barfed, not at build time, or run time. Fine. So I went to figure this out. In the true spirit of C++, the new API is inscrutable. I'm passing in cv::Mat, but the API wants cv::InputArray for some arguments and cv::OutputArray for others. Clearly cv::Mat can be coerced into either of those types (and that's what you're supposed to do), but the details are not meant to be understood. You can read the snazzy C++-style documentation. Clicking on "OutputArray" in the doxygen gets you here. Then I guess you're supposed to click on "_OutputArray", and you get here. Understand what's going on now? Me neither.

Stepping through the code revealed the problem. cv::projectPoints() looks like this:

void cv::projectPoints( InputArray _opoints,
                        InputArray _rvec,
                        InputArray _tvec,
                        InputArray _cameraMatrix,
                        InputArray _distCoeffs,
                        OutputArray _ipoints,
                        OutputArray _jacobian,
                        double aspectRatio )
{
    ....
    _ipoints.create(npoints, 1, CV_MAKETYPE(depth, 2), -1, true);
    ....

I.e. they're allocating a new data buffer for the output, and giving it back to me via the OutputArray object. This object already had a buffer, and that's where I was expecting the output to go. Instead it went to the brand-new buffer I didn't want. Issues:

  • The OutputArray object knows it already has a buffer, and they could just use it instead of allocating a new one
  • If for some reason my buffer smells bad, they could complain to tell me they're ignoring it to save me the trouble of debugging, and then bitching about it on the internet
  • I think dynamic memory allocation smells bad
  • Doing it this way means the new function isn't a drop-in replacement for the old function

Well that's just super. I can call the C++ function, copy the data into the place it's supposed to go to, and then deallocate the extra buffer. Or I can pull out the meat of the function I want into my project, and then I can drop the OpenCV dependency entirely. Clearly that's the way to go.
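For reference, the copy-back workaround would have looked roughly like this (a sketch reusing the variables from the test program above; the extra buffer gets freed when the temporary cv::Mat goes out of scope):

// Let cv::projectPoints() allocate its own output, then copy the result
// back into the C array we actually wanted filled (needs <cstring> for memcpy).
cv::Mat image_points_out;
cv::projectPoints( object_points,
                   zero3, zero3,
                   camera_matrix,
                   cv::noArray(),
                   image_points_out,
                   cv::noArray(), 0.0 );
memcpy( qq, image_points_out.ptr<double>(0), N*2*sizeof(double) );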

So I go poking back into their code to grab what I need, and here's what I see:

static void cvProjectPoints2Internal( const CvMat* objectPoints,
                  const CvMat* r_vec,
                  const CvMat* t_vec,
                  const CvMat* A,
                  const CvMat* distCoeffs,
                  CvMat* imagePoints, CvMat* dpdr CV_DEFAULT(NULL),
                  CvMat* dpdt CV_DEFAULT(NULL), CvMat* dpdf CV_DEFAULT(NULL),
                  CvMat* dpdc CV_DEFAULT(NULL), CvMat* dpdk CV_DEFAULT(NULL),
                  CvMat* dpdo CV_DEFAULT(NULL),
                  double aspectRatio CV_DEFAULT(0) )
{
...
}

Look familiar? It should. Because this is the original C-API function they replaced. So in their quest to move to C++, they left the original code intact, C API and everything, un-exposed it so you couldn't call it anymore, and made a new, shitty C++ wrapper for people to call instead. CvMat is still there. I have no words.

Yes, this is a massive library, and maybe other parts of it indeed did make some sort of non-token transition, but this thing is ridiculous. In the end, here's the function I ended up with (licensed as OpenCV; see the comment)

// The implementation of project_opencv is based on opencv. The sources have
// been heavily modified, but the opencv logic remains. This function is a
// cut-down cvProjectPoints2Internal() to keep only the functionality I want and
// to use my interfaces. Putting this here allows me to drop the C dependency on
// opencv. Which is a good thing, since opencv dropped their C API
//
// from opencv-4.2.0+dfsg/modules/calib3d/src/calibration.cpp
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
//   * Redistribution's of source code must retain the above copyright notice,
//     this list of conditions and the following disclaimer.
//
//   * Redistribution's in binary form must reproduce the above copyright notice,
//     this list of conditions and the following disclaimer in the documentation
//     and/or other materials provided with the distribution.
//
//   * The name of the copyright holders may not be used to endorse or promote products
//     derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
typedef union
{
    struct
    {
        double x,y;
    };
    double xy[2];
} point2_t;
typedef union
{
    struct
    {
        double x,y,z;
    };
    double xyz[3];
} point3_t;
void project_opencv( // outputs
                     point2_t* q,
                     point3_t* dq_dp,               // may be NULL
                     double* dq_dintrinsics_nocore, // may be NULL

                     // inputs
                     const point3_t* p,
                     int N,
                     const double* intrinsics,
                     int Nintrinsics)
{
    const double fx = intrinsics[0];
    const double fy = intrinsics[1];
    const double cx = intrinsics[2];
    const double cy = intrinsics[3];

    double k[12] = {};
    for(int i=0; i<Nintrinsics-4; i++)
        k[i] = intrinsics[i+4];

    for( int i = 0; i < N; i++ )
    {
        double z_recip = 1./p[i].z;
        double x = p[i].x * z_recip;
        double y = p[i].y * z_recip;

        double r2      = x*x + y*y;
        double r4      = r2*r2;
        double r6      = r4*r2;
        double a1      = 2*x*y;
        double a2      = r2 + 2*x*x;
        double a3      = r2 + 2*y*y;
        double cdist   = 1 + k[0]*r2 + k[1]*r4 + k[4]*r6;
        double icdist2 = 1./(1 + k[5]*r2 + k[6]*r4 + k[7]*r6);
        double xd      = x*cdist*icdist2 + k[2]*a1 + k[3]*a2 + k[8]*r2+k[9]*r4;
        double yd      = y*cdist*icdist2 + k[2]*a3 + k[3]*a1 + k[10]*r2+k[11]*r4;

        q[i].x = xd*fx + cx;
        q[i].y = yd*fy + cy;


        if( dq_dp )
        {
            double dx_dp[] = { z_recip, 0,       -x*z_recip };
            double dy_dp[] = { 0,       z_recip, -y*z_recip };
            for( int j = 0; j < 3; j++ )
            {
                double dr2_dp = 2*x*dx_dp[j] + 2*y*dy_dp[j];
                double dcdist_dp = k[0]*dr2_dp + 2*k[1]*r2*dr2_dp + 3*k[4]*r4*dr2_dp;
                double dicdist2_dp = -icdist2*icdist2*(k[5]*dr2_dp + 2*k[6]*r2*dr2_dp + 3*k[7]*r4*dr2_dp);
                double da1_dp = 2*(x*dy_dp[j] + y*dx_dp[j]);
                double dmx_dp = (dx_dp[j]*cdist*icdist2 + x*dcdist_dp*icdist2 + x*cdist*dicdist2_dp +
                                k[2]*da1_dp + k[3]*(dr2_dp + 4*x*dx_dp[j]) + k[8]*dr2_dp + 2*r2*k[9]*dr2_dp);
                double dmy_dp = (dy_dp[j]*cdist*icdist2 + y*dcdist_dp*icdist2 + y*cdist*dicdist2_dp +
                                k[2]*(dr2_dp + 4*y*dy_dp[j]) + k[3]*da1_dp + k[10]*dr2_dp + 2*r2*k[11]*dr2_dp);
                dq_dp[i*2 + 0].xyz[j] = fx*dmx_dp;
                dq_dp[i*2 + 1].xyz[j] = fy*dmy_dp;
            }
        }
        if( dq_dintrinsics_nocore )
        {
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 0] = fx*x*icdist2*r2;
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 0] = fy*(y*icdist2*r2);

            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 1] = fx*x*icdist2*r4;
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 1] = fy*y*icdist2*r4;

            if( Nintrinsics-4 > 2 )
            {
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 2] = fx*a1;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 2] = fy*a3;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 3] = fx*a2;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 3] = fy*a1;
                if( Nintrinsics-4 > 4 )
                {
                    dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 4] = fx*x*icdist2*r6;
                    dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 4] = fy*y*icdist2*r6;

                    if( Nintrinsics-4 > 5 )
                    {
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 5] = fx*x*cdist*(-icdist2)*icdist2*r2;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 5] = fy*y*cdist*(-icdist2)*icdist2*r2;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 6] = fx*x*cdist*(-icdist2)*icdist2*r4;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 6] = fy*y*cdist*(-icdist2)*icdist2*r4;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 7] = fx*x*cdist*(-icdist2)*icdist2*r6;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 7] = fy*y*cdist*(-icdist2)*icdist2*r6;
                        if( Nintrinsics-4 > 8 )
                        {
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 8] = fx*r2; //s1
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 8] = fy*0; //s1
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 9] = fx*r4; //s2
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 9] = fy*0; //s2
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 10] = fx*0;//s3
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 10] = fy*r2; //s3
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 11] = fx*0;//s4
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 11] = fy*r4; //s4
                        }
                    }
                }
            }
        }
    }
}

This does only the stuff I need: projection only (no geometric transformation), and gradients with respect to the point coordinates and distortions only. Gradients with respect to fxy and cxy are trivial, and I don't bother reporting them.

So now I don't compile or link against OpenCV; my code builds and runs on Debian/sid and (surprisingly) it runs much faster than before. Apparently there was a lot of pointless overhead happening.

Alright. Rant over.

20 June, 2020 07:42AM by Dima Kogan

June 19, 2020

Ingo Juergensmann

Jitsi Meet and ejabberd

Since the more or less global lockdown caused by Covid-19 there has been a lot of talk about video conferencing solutions, for example for people trying to coordinate with coworkers from their home office. One of these solutions is Jitsi Meet, which is NOT packaged in Debian. But there are Debian packages provided by Jitsi itself.

Jitsi relies on an XMPP server. You can see the network overview in the docs. Jitsi itself uses Prosody as its XMPP server, and the official docs only cover that one. But it's basically irrelevant which XMPP server you use; the only catch is that you can't follow the official Jitsi documentation when you are not using Prosody but, for example, ejabberd instead. As always, it's sometimes difficult to find the correct/best non-official documentation or how-to, so I'll try to describe what helped me in configuring Jitsi Meet with ejabberd as the XMPP server and my own coturn STUN/TURN server...

This is not a step-by-step description, but anyway... here we go with some links:

One of the first issues I stumbled across was that my Java was too old, but this can be quickly solved with update-alternatives:

There are 3 choices for the alternative java (providing /usr/bin/java).

  Selection    Path                                             Priority   Status
------------------------------------------------------------
* 0            /usr/lib/jvm/java-11-openjdk-amd64/bin/java      1111       auto mode
  1            /usr/lib/jvm/java-11-openjdk-amd64/bin/java      1111       manual mode
  2            /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java   1081       manual mode
  3            /usr/lib/jvm/jre-7-oracle-x64/bin/java           316        manual mode

It was set to jre-7, but I guess this was from years ago when I ran OpenFire as XMPP server.
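The switch itself is a single interactive command (run as root and pick the Java 11 entry):

update-alternatives --config java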

If something is not working with Jitsi Meet, it helps to not only watch the log files, but also to open the debug console in your web browser. That way I caught some XMPP failures and saw that it tried to connect to some components for which the DNS records were missing:

meet IN A yourIP
chat.meet IN A yourIP
focus.meet IN A yourIP
pubsub.meet IN A yourIP

Of course you also need to add some config to your ejabberd.yml:

host_config:
  domain.net:
    auth_password_format: scram
  meet.domain.net:
    ## Disable s2s to prevent spam
    s2s_access: none
    auth_method: anonymous
    allow_multiple_connections: true
    anonymous_protocol: both
    modules:
      mod_bosh: {}
      mod_caps: {}
      mod_carboncopy: {}
      #mod_disco: {}
      mod_stun_disco:
        secret: "YOURSECRETTURNCREDENTIALS"
        services:
          -
            host: yourIP # Your coturn's public address.
            type: stun
            transport: udp
          -
            host: yourIP # Your coturn's public address.
            type: stun
            transport: tcp
          -
            host: yourIP # Your coturn's public address.
            type: turn
            transport: udp
      mod_muc:
        access: all
        access_create: local
        access_persistent: local
        access_admin: admin
        host: "chat.@HOST@"
      mod_muc_admin: {}
      mod_ping: {}
      mod_pubsub:
        access_createnode: local
        db_type: sql
        host: "pubsub.@HOST@"
        ignore_pep_from_offline: false
        last_item_cache: true
        max_items_node: 5000 # For Jappix this must be set to 1000000
        plugins:
          - "flat"
          - "pep" # requires mod_caps
        force_node_config:
          "eu.siacs.conversations.axolotl.*":
            access_model: open
          ## Avoid buggy clients to make their bookmarks public
          "storage:bookmarks":
            access_model: whitelist

There is more config that needs to be done, but one of the XMPP failures I spotted in the debug console in Firefox was that it tried to connect to conference.domain.net, while I prefer to use chat.domain.net for its brevity. If you prefer conference instead of chat, then you shouldn't be affected by this. Just make sure that your config is consistent with what Jitsi Meet wants to connect to. Maybe not all of the above lines are necessary, but this works for me.

Jitsi config is like this for me with short domains (/etc/jitsi/meet/meet.domain.net-config.js):

var config = {

    hosts: {
        domain: 'domain.net',
        anonymousdomain: 'meet.domain.net',
        authdomain: 'meet.domain.net',
        focus: 'focus.meet.domain.net',
        muc: 'chat.meet.domain.net'
    },

    bosh: '//meet.domain.net:5280/http-bind',
    //websocket: 'wss://meet.domain.net/ws',
    clientNode: 'http://jitsi.org/jitsimeet',
    focusUserJid: 'focus@domain.net',

    useStunTurn: true,

    p2p: {
        // Enables peer to peer mode. When enabled the system will try to
        // establish a direct connection when there are exactly 2 participants
        // in the room. If that succeeds the conference will stop sending data
        // through the JVB and use the peer to peer connection instead. When a
        // 3rd participant joins the conference will be moved back to the JVB
        // connection.
        enabled: true,

        // Use XEP-0215 to fetch STUN and TURN servers.
        useStunTurn: true,

        // The STUN servers that will be used in the peer to peer connections
        stunServers: [
            //{ urls: 'stun:meet-jit-si-turnrelay.jitsi.net:443' },
            //{ urls: 'stun:stun.l.google.com:19302' },
            //{ urls: 'stun:stun1.l.google.com:19302' },
            //{ urls: 'stun:stun2.l.google.com:19302' },
            { urls: 'stun:turn.domain.net:5349' },
            { urls: 'stun:turn.domain.net:3478' }
        ],

....

In the above config snippet, one of the issues solved was adding the port to the bosh setting. Of course you can instead make sure that your bosh requests get forwarded by your webserver acting as a reverse proxy. Using the port in the config might be a quick way to test whether or not your config is working: it's easier to solve one issue after the other and to make one config change at a time than to make changes in several places at once.
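For the reverse-proxy variant, something like the following nginx snippet could take the place of the explicit port (a sketch, assuming nginx serves meet.domain.net and ejabberd's mod_bosh listens on port 5280 as in the bosh URL above):

location /http-bind {
    proxy_pass       http://127.0.0.1:5280/http-bind;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

With that in place, the bosh setting can drop the port and simply read '//meet.domain.net/http-bind'.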

/etc/jitsi/jicofo/sip-communicator.properties:

org.jitsi.jicofo.auth.URL=XMPP:meet.domain.net
org.jitsi.jicofo.BRIDGE_MUC=jvbbrewery@chat.meet.domain.net

/etc/jitsi/videobridge/sip-communicator.properties:

org.jitsi.videobridge.ENABLE_STATISTICS=true
org.jitsi.videobridge.STATISTICS_TRANSPORT=muc
org.jitsi.videobridge.STATISTICS_INTERVAL=5000

org.jitsi.videobridge.xmpp.user.shard.HOSTNAME=localhost
org.jitsi.videobridge.xmpp.user.shard.DOMAIN=domain.net
org.jitsi.videobridge.xmpp.user.shard.USERNAME=jvb
org.jitsi.videobridge.xmpp.user.shard.PASSWORD=SECRET
org.jitsi.videobridge.xmpp.user.shard.MUC_JIDS=JvbBrewery@chat.meet.domain.net
org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME=videobridge1

org.jitsi.videobridge.DISABLE_TCP_HARVESTER=true
org.jitsi.videobridge.TCP_HARVESTER_PORT=4443
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=yourIP
org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=yourIP
org.ice4j.ice.harvest.DISABLE_AWS_HARVESTER=true
org.ice4j.ice.harvest.STUN_MAPPING_HARVESTER_ADDRESSES=turn.domain.net:3478
org.ice4j.ice.harvest.ALLOWED_INTERFACES=eth0

Sometimes there might be stupid errors like (in my case) wrong hostnames like "chat.meet..domain.net" (a double dot in the domain), but you can spot those easily in the debug console of your browser.

There are tons of config options where you can easily make mistakes, but watching your logs and your debug console should really help you sort out these kinds of errors. The other URLs above are helpful as well and more elaborate than my few lines here. Especially Mike Kuketz has some advanced configuration tips, like disabling third-party requests with "disableThirdPartyRequests: true" or limiting the number of video streams and such.

Hope this helps...

19 June, 2020 05:26PM by ij

Reproducible Builds (diffoscope)

diffoscope 148 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 148. This version includes the following changes:

[ Daniel Fullmer ]
* Fix a regression in the CBFS comparator due to changes in our_check_output.

[ Chris Lamb ]
* Add a remark in the deb822 handling re. potential security issue in the
  .changes, .dsc, .buildinfo comparator.

You can find out more by visiting the project homepage.

19 June, 2020 12:00AM

June 18, 2020

Enrico Zini

Missing Qt5 designer library in cross-build development

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

The problem

While testing the cross-compiler, we noticed that the designer library was not being built.

The designer library is needed to build designer plugins, which allow loading, dynamically at runtime, .ui interface files that use custom widgets.

The error the customer got at runtime is: QFormBuilder was unable to create a custom widget of the class '…'; defaulting to base class 'QWidget'.

The library with the custom widget implementation was correctly linked, and indeed the same custom widget was used by the application in other parts of its interface not loaded via .ui files.

It turns out that this is not sufficient: to load custom widgets automatically, QUiLoader wants to read their metadata from plugin libraries containing objects that implement the QDesignerCustomWidgetInterface interface.

Sadly, building such a library requires using QT += designer, and the designer library was not being built by Qt5's build system. This looks very much like a Qt5 bug.

A workaround would be to subclass QUiLoader, extending createWidget to teach it how to create the custom widgets we need. Unfortunately, the customer has many custom widgets.
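For a single widget, such a subclass would look something like this (a sketch; MyWidget stands in for one of the customer's real widget classes), and it needs one more branch for every further widget:

#include <QtUiTools/QUiLoader>

// Sketch: teach QUiLoader to construct our custom widgets by class name.
class CustomWidgetLoader : public QUiLoader
{
public:
    using QUiLoader::QUiLoader;

    QWidget *createWidget(const QString &className, QWidget *parent = nullptr,
                          const QString &name = QString()) override
    {
        if (className == QLatin1String("MyWidget")) {  // hypothetical custom widget
            QWidget *widget = new MyWidget(parent);
            widget->setObjectName(name);
            return widget;
        }
        // Fall back to the stock behaviour for all standard widgets.
        return QUiLoader::createWidget(className, parent, name);
    }
};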

The investigation

To find out why designer was not being built, I added -d to the qmake invocation at the end of qtbase/configure, and trawled through the 3.1G build output.

The needle in the haystack seems to be here:

DEBUG 1: /home/build/armhf/qt-everywhere-src-5.15.0/qttools/src/designer/src/src.pro:18: SUBDIRS := uiplugin uitools lib components designer plugins
DEBUG 1: /home/build/armhf/qt-everywhere-src-5.15.0/qttools/src/designer/src/src.pro:23: calling qtNomakeTools(lib components designer plugins)

As far as I can understand, qtNomakeTools seems to be intended to disable building those components if QT_BUILD_PARTS doesn't contain tools. For cross-building, QT_BUILD_PARTS is libs examples, so designer does not get built.

However, designer contains the library part needed for QDesignerCustomWidgetInterface and that really needs to be built. I assume that part should really be built as part of libs, not tools.

The fixes/workarounds

I tried removing designer from the qtNomakeTools invocation at qttools/src/designer/src/src.pro:23, to see if qttools/src/designer/src/designer/ would get built.

It did get built, but then the build failed with designer/src/designer and designer/src/uitools both claiming the designer plugin.

I tried editing qttools/src/designer/src/uitools/uitools.pro not to claim the designer plugin when tools is not a build part.

I added the tweaks to the Qt5 build system as debian/patches.

2 hours of build time later...

make check is broken:

make[6]: Leaving directory '/home/build/armhf/qt-everywhere-src-5.15.0/qttools/src/designer/src/uitools'
make[5]: *** No rule to make target 'sub-components-check', needed by 'sub-designer-check'.  Stop.

But since make check doesn't do anything in this build, we can simply override dh_auto_test to skip that step.

Finally, this patch builds a new executable, of an architecture that makes dh_shlibdeps struggle:

dpkg-shlibdeps: error: cannot find library libQt5DesignerComponentssystem.so.5 needed by debian/qtbase5system-armhf-dev/opt/qt5system-armhf/bin/designer (ELF format: 'elf32-little' abi: '0101002800000000'; RPATH: '')
dpkg-shlibdeps: error: cannot find library libQt5Designersystem.so.5 needed by debian/qtbase5system-armhf-dev/opt/qt5system-armhf/bin/designer (ELF format: 'elf32-little' abi: '0101002800000000'; RPATH: '')

And we can just skip running dh_shlibdeps on the designer executable.
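In debian/rules both tweaks amount to something like this (a sketch, assuming a dh-style rules file; the -X exclude pattern is an assumption about where the executable ends up):

override_dh_auto_test:
	# make check is broken for this patched build and does nothing useful; skip it.

override_dh_shlibdeps:
	# skip dependency computation for the designer executable only
	dh_shlibdeps -X/bin/designer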

The result is in the qt5custom git repository.

18 June, 2020 01:30PM

Ian Jackson

BountySource have turned evil - alternatives ?

I need an alternative to BountySource, who have done an evil thing. Please post recommendations in the comments.

From: Ian Jackson <*****>
To: support@bountysource.com
Subject: Re: Update to our Terms of Service
Date: Wed, 17 Jun 2020 16:26:46 +0100

Bountysource writes ("Update to our Terms of Service"):
> You are receiving this email because we are updating the Bountysource Terms of
> Service, effective 1st July 2020.
>
> What's changing?
> We have added a Time-Out clause to the Bounties section of the agreement:
>
> 2.13 Bounty Time-Out.
> If no Solution is accepted within two years after a Bounty is posted, then the
> Bounty will be withdrawn and the amount posted for the Bounty will be retained
> by Bountysource. For Bounties posted before June 30, 2018, the Backer may
> redeploy their Bounty to a new Issue by contacting support@bountysource.com
> before July 1, 2020. If the Backer does not redeploy their Bounty by the
> deadline, the Bounty will be withdrawn and the amount posted for the Bounty
> will be retained by Bountysource.
>
> You can read the full Terms of Service here
>
> What do I need to do?
> If you agree to the new terms, you don't have to do anything.
>
> If you have a bounty posted prior to June 30, 2018 that is not currently being
> solved, email us at support@bountysource.com to redeploy your bounty.  Or, if
> you do not agree with the new terms, please discontinue using Bountysource.

I do not agree to this change to the Terms and Conditions.
Accordingly, I will not post any more bounties on BountySource.

I currently have one outstanding bounty of $200 on
   https://www.bountysource.com/issues/86138921-rfe-add-a-frontend-for-the-rust-programming-language

That was posted in December 2019.  It is not clear now whether that
bounty will be claimed within your 2-year timeout period.

Since I have not accepted the T&C change, please can you confirm that

(i) My bounty will not be retained by BountySource even if no solution
    is accepted by December 2021.

(ii) As a backer, you will permit me to vote on acceptance of that
    bounty should a solution be proposed before then.

I suspect that you intend to rely on the term in the previous T&C
giving you unlimited ability to modify the terms and conditions.  Of
course such a term is an unfair contract term, because if it were
effective it would give you the power to do whatever you like.  So it
is not binding on me.

I look forward to hearing from you by the 8th of July.  If I do not
hear from you I will take the matter up with my credit card company.

Thank you for your attention.

Ian.

They will try to say "oh it's all governed by US law" but of course section 75 of the Consumer Credit Act makes the card company jointly liable for Bountysource's breach of contract and a UK court will apply UK consumer protection law even to a contract which says it is to be governed by US law - because you can't contract out of consumer protection. So the card company are on the hook and I can use them as a lever.

Update - BountySource have changed their mind

From: Bountysource <support@bountysource.com>
To: *****
Subject: Re: Update to our Terms of Service
Date: Wed, 17 Jun 2020 18:51:11 -0700

Hi Ian

The new terms of service has with withdrawn.

This is not the end of the matter, I'm sure. They will want to get long-unclaimed bounties off their books (and having the cash sit forever at BountySource is not ideal for backers either). Hopefully they can engage in a dialogue and find a way that is fair, and that doesn't incentivise BountySource to sabotage bounty claims(!) I think that means that whatever it is, BountySource mustn't keep the money. There are established ways of dealing with similar problems (eg ancient charitable trusts; unclaimed bank accounts).

I remain wary. That BountySource is now owned by a cryptocurrency company is not encouraging. That they would even try what they just did is a really bad sign.

Edited 2020-06-17 16:28 for a typo in Bountysource's email address
Update added 2020-06-18 11:40 for BountySource's change of mind.



18 June, 2020 10:44AM