# November 24, 2017

## Dirk Eddelbuettel

### Rcpp 0.12.14: Some deprecation and minor updates

The fourteenth release in the 0.12.* series of Rcpp landed on CRAN yesterday after a somewhat longer-than-usual gestation period (and word is it may have been due to some unrelated disturbances from lots of changes within the main r-devel build).

This release follows the 0.12.0 release from July 2015, the 0.12.1 release in September 2015, the 0.12.2 release in November 2015, the 0.12.3 release in January 2016, the 0.12.4 release in March 2016, the 0.12.5 release in May 2016, the 0.12.6 release in July 2016, the 0.12.7 release in September 2016, the 0.12.8 release in November 2016, the 0.12.9 release in January 2017, the 0.12.10 release in March 2017, the 0.12.11 release in May 2017, the 0.12.12 release in July 2017 and the 0.12.13 release in late September 2017, making it the fifteenth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1246 packages (and hence 77 more since the last release) on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

This release is relatively minor compared to other releases, but follows through on the deprecation of the old vectors for Date and Datetime (which were terrible: I was influenced by the vector design in QuantLib at the time and didn't really understand yet how a SEXP vector should work) which we announced with Rcpp 0.12.8 a year ago. So now the new vectors are the default, but you can flip back if you need to via a #define.

Otherwise, Dan rounded a corner with the improved iterators he contributed, and Kirill improved the output stream implementation, suppressing a warning with newer compilers.

### Changes in Rcpp version 0.12.14 (2017-11-17)

• Changes in Rcpp API:

  • New const iterator functions cbegin() and cend() have also been added to MatrixRow (Dan Dillon in #750).

  • The Rostream object now contains a Buffer rather than allocating one (Kirill Müller in #763).

  • The new DateVector and DatetimeVector classes are now the default, fully deprecating the old classes as announced one year ago.

• Changes in Rcpp Package:

  • The DESCRIPTION file now lists DOI information per CRAN suggestion.

• Changes in Rcpp Documentation:

  • Updated the CITATION file with DOI information and the PeerJ preprint.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

## Daniel Pocock

### Free software in the snow

There are an increasing number of events for free software enthusiasts to meet in an alpine environment for hacking and fun.

In Switzerland, Swiss Linux is organizing the fourth edition of the Rencontres Hivernales du Libre in the mountain resort of Saint-Cergue, a short train ride from Geneva and Lausanne, 12-14 January 2018. The call for presentations is still open.

In northern Italy, not far from Milan (Malpensa) airport, Debian is organizing a Debian Snow Camp, a winter getaway for developers and enthusiasts in a mountain environment where the scenery is as diverse as the Italian culinary options. It is hoped the event will take place 22-25 February 2018.

24 November, 2017 08:31AM by Daniel.Pocock

## Norbert Preining

### TeX Live Cockpit

I have been working quite some time on a new front end for the TeX Live Manager tlmgr. Early versions have leaked into TeX Live, but the last month or two has seen many changes in tlmgr itself, in particular support for JSON output. These changes were mostly driven by the need (or ease) of the new frontend: TLCockpit.

With the update of the main TeX Live infrastructure packages that we made available the other day, and the following update of TLCockpit in TeX Live, it is now time to announce TLCockpit a bit more formally. So here it is. We took lots of inspiration from the excellent TeX Live Utility, which has been serving Mac users for a long time. TLCockpit tries to provide similar functionality and ease of use for users of all architectures, not only the Mac.

Cross-platform development is a pain, and the current TeX Live Manager contains a GUI written in Perl/Tk. While that GUI is functional, in fact nearly feature-complete with respect to the command-line version, its appearance is less of a pleasure. Complaints that it is not styled according to the underlying OS, that the UX is not optimal, that it is feature-overloaded, etc., are common. And I agree: back when I wrote that GUI it was less for actual consumption than to at least have some GUI for those poor Windows users.

Fast-forward about 10 years, and the implementation of the tlmgr shell (something I haven’t reported on in detail until now!) opened up new routes for GUI development. And since in my current job Scala is very common, I decided to implement a frontend in Scala, first based on the Swing toolkit, but I soon switched over to ScalaFX. That is also the reason there have been so many posts on ScalaFX on my blog recently.

### Features

Let us go through the features of TLCockpit at the current time. Starting up the program one sees the loading screen on the right. The indicator in the menu bar shows that the underlying tlmgr is busy loading the local and remote databases. When this is done, the list of packages appears. In the list, the binary sub-packages are ordered below the main packages and can be expanded. For each package the package name, the short description, and the installation status are shown:

Below the list of packages there is also a search entry field: Entering a text here and pressing Go (or Enter) will restrict the display of packages to those where the package name or short description contains the search term.
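The search described above is essentially a case-insensitive substring match against the package name and short description. A hypothetical sketch of that predicate (Pkg and matches are illustrative names, not the actual TLCockpit code):

```scala
// Hypothetical model of a package list entry
case class Pkg(name: String, shortDesc: String)

// A package matches when the search term occurs in its name or
// its short description, ignoring case
def matches(p: Pkg, term: String): Boolean = {
  val t = term.toLowerCase
  p.name.toLowerCase.contains(t) || p.shortDesc.toLowerCase.contains(t)
}
```

Pressing Go (or Enter) would then simply restrict the displayed list to the packages for which this predicate holds.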

Using the Packages menu entry one can change the list view to sort by collections, as shown on the left.

On switching to the Updates tab one is presented (after a hopefully short loading time) with the list of updates available. For each update the package name, the status (update, new on server, removed on server, etc.), the short description, the local and remote revisions (with TeX Catalogue versions given if available), and the download size are shown:

While visiting the Updates tab, the menu entry Updates allows updating either all packages or, if there are infrastructure updates available, only the TeX Live manager.

The last of the three tabs is Backup; it lists the currently available backups and allows restoring packages. For each package the revision and date of creation are shown.

Clicking on a package anywhere in the three tabs with the right mouse button gives a context menu with options to install, remove, update, and show detailed information on the package. Selecting the Info entry gives a details window, in which all kinds of information are listed. For documentation files that are installed, clicking on the file name will open the file with the default application for the file type:

Configuration of the TeX Live installation is possible via the menu entry Options, where General options allows customizing the TeX Live installation and Paper allows customizing the default paper size for the supported programs:

This completes the list of current functionality.

Well, more or less. The attentive reader might have spotted two additional features at the lower part of the application window: the Experts only and Debug panes. The former gives access to a text field for entering arbitrary tlmgr shell commands. But be careful what you enter. The latter gives access to three tabs containing the output, debug, and error output of tlmgr as well as of the application itself. It will open automatically in case something unforeseen happens.

### Requirements

Although TLCockpit is written in Scala, all necessary Scala-related libraries are packed into the distributed jar file, which means only a recent Java installation that includes JavaFX is necessary. In case you see errors like java.lang.NoClassDefFoundError: javafx/event/EventTarget, your Java is too old.

I have tested the release jar files on Linux (Debian) with Java 8u152 from the OpenJDK project and on Windows (10) with 8u154 from Sun. Both worked without any problems. Mac OS is untested due to lack of devices – but most users there use TLU anyway.

### Development

Development is done in the GitHub tlcockpit project; please use the issues functionality there to report bugs, request features, and suggest improvements. As mentioned above, the program is written in Scala with ScalaFX as the toolkit, so if you are an expert in either of those, please join in and help!

I myself use IntelliJ for development, the checkout can be opened with IntelliJ without any problems. Deployment is done via sbt assembly which packs all the necessary libraries into one jar file.
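For readers curious what that deployment step involves, here is a minimal sketch of the sbt-assembly wiring (the plugin version and jar name are illustrative, not taken from the actual tlcockpit build):

```scala
// project/plugins.sbt — enable the sbt-assembly plugin
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.5")

// build.sbt — name the fat jar that `sbt assembly` produces
assemblyJarName in assembly := "tlcockpit.jar"
```

Running `sbt assembly` then packs the Scala runtime and all library dependencies into that single jar, which is what makes the distributed file run on any machine with a suitable Java.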

### Closing

While far from perfect, I believe it is an improvement over the tlmgr gui, and I hope that some users of TeX Live Manager might find TLCockpit useful.

Enjoy.

24 November, 2017 07:10AM by Norbert Preining

## Russ Allbery

### Review: Range of Ghosts

Review: Range of Ghosts, by Elizabeth Bear

Series: Eternal Sky #1
Publisher: Tor
Copyright: March 2012
ISBN: 0-7653-2754-6
Format: Hardcover
Pages: 334

Temur is one of many grandsons of the Great Khagan. We meet him on a bloody battlefield, in the aftermath of a fight between his cousin and his brother over the succession. His brother lost and Temur was left for dead with a vicious cut down the side of his neck, improbably surviving among the corpses and even more improbably finding a surviving horse (or being found by one). But a brief reprieve and a potential new family connection are cut brutally short when they are attacked by ghosts and his lover is pulled away into the sky.

Once-Princess Samarkar was heir and then a political wife of a prince, but her marriage was cut short in bloody insult when her husband refused to consummate the marriage. Now, she's chosen a far different path: be neutered in order to become a wizard. If she survives the surgery, she gains independence from her brother, freedom from politics, and possibly (if she's lucky) magical power.

Range of Ghosts is the first book of a fantasy trilogy set in what appears to be an analogue of what's now far northwest China and the central Asian steppes (although the geography doesn't exactly follow ours). There are mountainous strongholds, a large city-based civilization in the east, a civilization with onion domes and a different god in the west, and nomadic horse clans in the north. That said, there's also, as one discovers a bit later in the book, a race of huge bipedal cat people, just to be sure you don't think this is too much like our world.

I had a hard time with the start of this book due to the brutality. Just in the first 70 pages or so, we get a near-fatal wound that a character has to hold closed with his hand (for pages), human sacrifice, essentially medieval invasive surgery, a graphic description of someone losing an eye, and then (I think a little later in the book) serious illness with high fever. And this is Elizabeth Bear, which means the descriptions are very well-written and vivid, which was... not what I wanted. Thankfully, the horror show does slow down by the middle of the book.

The opening also didn't quite connect with me. There's a lot about war, the aftermath of war, and the death of Temur's family, but I found Temur mostly boring. The reader enters the story at the aftermath, so none of the death and battle really touched me. Temur's steppe mythology is vaguely interesting, but only vaguely.

Samarkar saved this book for me. She's pragmatic, introspective, daring, but also risk-conscious. She pays attention and learns and studies, and she takes the opportunity to learn from everyone she can. The magical system she's learning also has some nicely-described effects without being too mechanical, and I liked the interweaving of magic with science. As she started taking charge of the plot, I thought this book got better and better. Also, full points for the supposedly pampered concubine (one of Samarkar's relatives) turning out to have iron determination and considerable ability to put up with hardship when it was required. That was refreshing.

More positive points to this book for allowing both men and women to choose to become neutered and become wizards. Same principle, so why not the same effect? One of the things I like about fantasy is the opportunity to explore human society with little tweaks and differences, and the ones that poke at gender roles and ask why we gender something that clearly could be gender-neutral make me happy.

I wasn't as fond of the hissable villain. I think I'm getting increasingly grumbly about books in which the villain is so obviously evil as to be basically demonic. Maybe Bear will twist this around in later books, but this one opens with human sacrifice, and the villain doesn't get any more appealing as we go along. I think I wasn't in the mood to read about someone plotting horrible dark things, keeping hostages, and practicing black magic, particularly since Bear's vivid descriptions make it a bit hard to tune the horrors out.

Thankfully, there isn't that much of the villain, and there's rather a lot of Samarkar, which left me generally happy with the book and wanting more. However, be warned that this is in absolutely no way a standalone book. Essentially nothing is resolved in this volume; you will need the sequel (if not the third book as well) for any sense of completed story whatsoever.

Followed by The Shattered Pillars.

Rating: 7 out of 10

# November 23, 2017

## Sean Whitton

### Using Propellor to provision your Debian development laptop

sbuild is a tool used by those maintaining packages in Debian, and derived distributions such as Ubuntu. When used correctly, it can catch a lot of categories of bugs before packages are uploaded. It does this by building the package in a clean environment, and then running the package through the Lintian, piuparts, adequate and autopkgtest tools. However, configuring sbuild so that it makes use of all of these tools is cumbersome.

In response to this complexity, I wrote a module for the Propellor configuration management system to prepare a system such that a user can just go ahead and run the sbuild(1) command.

This module is useful on one’s development laptop – if you need to reinstall your OS, you don’t have to look up the instructions for setting up sbuild again. But it’s also useful on throwaway build boxes. I can instruct propellor to provision a new virtual machine to build packages with sbuild, and all the different tools mentioned above will be connected together for me.

I just uploaded Propellor version 5.1.0 to Debian unstable. The version overhauls the API and internals of the Sbuild module to take better advantage of Propellor’s design. I won’t get into those details in this post. What I’d like to do is demonstrate how you can set up sbuild on your own machines, using Propellor.

### Getting started with Propellor

apt-get install propellor, and then propellor --init.

As mentioned, at the time of writing you’ll need to install from Debian unstable. For this tutorial you need version 5.1.0 or greater.

You’ll be offered two setups, options A and B. I suggest starting with option B.

If you never use Propellor for anything other than provisioning sbuild, you can stick with option B. If this tutorial makes you want to check out more features of Propellor, you might consider switching to option A and importing your old configuration.

Open ~/.propellor/config.hs. You will see something like this:

-- The hosts propellor knows about.
hosts :: [Host]
hosts =
	[ mybox
	]

-- An example host.
mybox :: Host
mybox = host "mybox.example.com" $ props
	& osDebian Unstable X86_64
	& Apt.stdSourcesList
	& Apt.unattendedUpgrades
	& Apt.installed ["etckeeper"]
	& Apt.installed ["ssh"]
	& User.hasSomePassword (User "root")
	& File.dirExists "/var/www"
	& Cron.runPropellor (Cron.Times "30 * * * *")

You’ll want to customise this so that it reflects your computer. My laptop is called iris, so I might replace the above with this:

-- The hosts propellor knows about.
hosts :: [Host]
hosts =
	[ iris
	]

-- My laptop.
iris :: Host
iris = host "iris.silentflame.com" $ props
	& osDebian Testing X86_64


The list of lines beginning with & are the properties of the host iris. Here, I’ve removed all properties except the osDebian property, which informs propellor that iris runs Debian testing and has the amd64 architecture.

The effect of this is that Propellor will not try to change anything about iris. In this tutorial, we are not going to let Propellor configure anything about iris other than setting up sbuild.

(The osDebian property is a pure info property, which means that it tells Propellor information about the host to which other properties might refer, but it doesn’t itself change anything about iris.)

### Telling Propellor to configure sbuild

First, add to the import lines at the top of config.hs the lines:

import qualified Propellor.Property.Sbuild as Sbuild
import qualified Propellor.Property.Schroot as Schroot


to enable use of the Sbuild module. Here is the full config for iris, which I’ll go through line-by-line:

-- The hosts propellor knows about.
hosts :: [Host]
hosts =
[ iris
]

-- My laptop.
iris :: Host
iris = host "iris.silentflame.com" $ props
	& osDebian Testing X86_64
	& Apt.useLocalCacher
	& sidSchrootBuilt
	& Sbuild.usableBy (User "spwhitton")
	& Schroot.overlaysInTmpfs
	& Cron.runPropellor (Cron.Times "30 * * * *")
  where
	sidSchrootBuilt = Sbuild.built Sbuild.UseCcache $ props
		& osDebian Unstable X86_64
		& Sbuild.update `period` Daily
		& Sbuild.useHostProxy iris

• Apt.useLocalCacher sets up apt-cacher-ng and points apt on iris at the cacher. This is the most efficient way to share a cache of packages between apt on iris, and apt within the sbuild chroot.
• sidSchrootBuilt builds the sbuild schroot. Unfortunately, we have to use a where clause because of Haskell’s rules about operator precedence. But it’s just a simple substitution: imagine that & Sbuild.built ... replaces & sidSchrootBuilt.
• osDebian specifies that this is a Debian unstable chroot. You can easily change this, or add another chroot, for building stable backports, etc.
• Sbuild.UseCcache enables ccache for builds in this chroot. You can replace this with Sbuild.NoCcache when building a package which is broken by ccache, which happens from time to time.
• Sbuild.update updates the chroot once per day.
• Sbuild.useHostProxy iris causes Propellor to propagate iris’s apt proxy into the chroot, so that apt in the chroot will also use iris’s apt cacher.
• Sbuild.usableBy adds spwhitton to the right group, so that he is allowed to invoke sbuild(1).
• Schroot.overlaysInTmpfs configures sbuild to install build dependencies and build packages in tmpfs. You can omit this property on machines with low amounts of memory.
• Cron.runPropellor sets up a cron job to re-run propellor once per hour. This is needed to ensure that things like Sbuild.update actually happen. It will also normalise sbuild configuration files, replace chroots that you accidentally deleted, etc.

### Running Propellor to configure your laptop

$ propellor iris.silentflame.com

In this configuration, you don’t need to worry about whether the hostname iris.silentflame.com actually resolves to your laptop. However, it must be possible to ssh root@localhost. This should be enough that spwhitton can:

$ sbuild -A --run-lintian --run-autopkgtest --run-piuparts foo.dsc


### Further configuration

It is easy to add new schroots; for example, for building backports:

	...
	& stretchSchrootBuilt
	...
  where
	...
	stretchSchrootBuilt = Sbuild.built Sbuild.UseCcache $ props
		& osDebian (Stable "stretch") X86_64
		& Sbuild.update `period` Daily
		& Sbuild.useHostProxy iris

You can even use architectures other than X86_64. Propellor knows how to invoke qemu when it needs to do this to build the chroot, though sbuild does not know how to actually use chroots built in this way.

You can also add additional properties to configure your chroot. Perhaps on your LAN you need sbuild to install packages via https, and you already have an apt cacher available. You can replace the apt-cacher-ng configuration like this:

  where
	sidSchrootBuilt = Sbuild.built Sbuild.UseCcache $ props
		& osDebian Unstable X86_64
		& Sbuild.update `period` Daily
		& Apt.mirror "https://foo.mirror/debian/"
		& Apt.installed ["apt-transport-https"]


### Thanks

Thanks to Propellor’s author, Joey Hess, for help navigating Propellor’s type system while performing the overhaul included in version 5.1.0. Also for a conversation at DebConf17 which enabled this work by clearing some misconceptions of mine.

## Russ Allbery

### Holiday haul

Catching up on accumulated book purchases. I'm going to get another burst of reading time over the holidays (and am really looking forward to it).

Alfred Bester — The Stars My Destination (sff)
James Blish — A Case of Conscience (sff)
Leigh Brackett — The Long Tomorrow (sff)
Algis Budrys — Who? (sff)
Frances Hardinge — Fly By Night (sff)
Robert A. Heinlein — Double Star (sff)
N.K. Jemisin — The Obelisk Gate (sff)
N.K. Jemisin — The Stone Sky (sff)
T. Kingfisher — Clockwork Boys (sff)
Ursula K. Le Guin — City of Illusions (sff)
Ursula K. Le Guin — The Complete Orsinia (historical)
Ursula K. Le Guin — The Dispossessed (sff)
Ursula K. Le Guin — Five Ways to Forgiveness (sff)
Ursula K. Le Guin — The Left Hand of Darkness (sff)
Ursula K. Le Guin — Planet of Exile (sff)
Ursula K. Le Guin — Rocannon's World (sff)
Ursula K. Le Guin — The Telling (sff)
Ursula K. Le Guin — The Word for World Is Forest (sff)
Fritz Leiber — The Big Time (sff)
Melina Marchetta — Saving Francesca (mainstream)
Richard Matheson — The Shrinking Man (sff)
Foz Meadows — An Accident of Stars (sff)
Dexter Palmer — Version Control (sff)
Frederik Pohl & C.M. Kornbluth — The Space Merchants (sff)
Adam Rex — The True Meaning of Smekday (sff)
John Scalzi — The Dispatcher (sff)
Julia Spencer-Fleming — In the Bleak Midwinter (mystery)
R.E. Stearns — Barbary Station (sff)
Theodore Sturgeon — More Than Human (sff)

I'm listing the individual components except for the Orsinia collection, but the Le Guin are from the Library of America Hainish Novels & Stories two-volume set. I had several of these already, but I have a hard time resisting a high-quality Library of America collection for an author I really like. Now I can donate a bunch of old paperbacks.

Similarly, a whole bunch of the older SF novels are from the Library of America American Science Fiction two-volume set, which I finally bought since I was ordering Library of America sets anyway.

The rest is a pretty random collection of stuff, although several of them are recommendations from Light. I was reading through her old reviews and getting inspired to read (and review) more.

## Jonathan Dowland

### Concreate and Red Hat JBoss OpenShift image sources

Last year I wrote about some tools for working with Docker images. Since then, we've deprecated the dogen tool for our own images and have built a successor called Concreate.

Concreate takes a container image definition described in a YAML document and generates an output image. To do so, it generates an intermediate Dockerfile, along with the scripts and artefacts you reference in the YAML file, and by default invokes docker build on the result. However, it can use other builders, such as the OpenShift Build Service, which is what we use for our production images.

Concreate can also manage running tests against the image. As with the Container Testing Framework that I mentioned last time, these tests are defined using the Behave system.

In related news is that we have published the sources for all of our images. You can now go and read the image.yaml file for EAP 7 on OpenShift to give you an example of what a real image using Concreate looks like.

## Louis-Philippe Véronneau

### DebConf Videoteam sprint report - day 4

Day 4 of the videoteam sprint!

### Pictures of Our Lives

"Countless" people wrote to me asking for more pictures of our marvelous sprint, especially of our feline friend. How could I resist?

Now that we've got this covered, here's what we did today.

### Report

### tumbleweed

Stefano had to do some office work today and had very little time to hack on the videoteam stuff.

He did get access to the pentabarf XML of DebConf 7 to 13. This should help a great deal in generating the video metadata for our archive.

He also played around with YouTube a little. It seems we already got our first copyright strike! Gotta love remixes of The Lion King hidden in DebConf videos.

### ivodd

Ivo left us to work with the Debian Release Team today. Sad!

### RattusRattus

Andy and Kyle got together today and worked on making a list of the audio hardware we should buy to replace our old kit.

Andy also finished the video loop we are going to use at the mini-conf.

Kyle had some time today and gave Andy a call to work on our audio gear wishlist. He also worked on designing a flight case for it.

We will eventually submit a budget request to buy said kit.

### olasd

Nicolas mainly worked on refactoring the ansible module that generates the TLS certificates for our streaming network.

### pollo

Our ansible roles are now all documented! I'm happy this is all done. Next, I'll try to remove some of the ugly hacks in our ansible repository.

We finished the day by going to the Polish Club. Here's a picture of the team!

23 November, 2017 05:00AM by Louis-Philippe Véronneau

# November 22, 2017

## Daniel Pocock

### VR Hackathon at FIXME, Lausanne (1-3 December 2017)

Competitors and visitors are welcome; please register here.

Some of the free software technologies in use include Blender and Mozilla VR.

22 November, 2017 07:25PM by Daniel.Pocock

## Louis-Philippe Véronneau <!-- document.write( "<a href=\"#\" id=\"https://veronneau.org/debconf-videoteam-sprint-report-day-3.html_hide\" onClick=\"exclude( 'https://veronneau.org/debconf-videoteam-sprint-report-day-3.html' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://veronneau.org/debconf-videoteam-sprint-report-day-3.html_show\" style=\"display:none;\" onClick=\"show( 'https://veronneau.org/debconf-videoteam-sprint-report-day-3.html' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### DebConf Videoteam sprint report - day 3

Erf, I'm tired and it is late so this report will be short and won't include dank memes or funny cat pictures. Come back tomorrow for that.

### tumbleweed

Stefano worked all day long on the metadata project and on YouTube uploads. I think the DebConf7 videos have just finished being uploaded, check them out!

### RattusRattus

Apart from the wonderful lasagna he baked for us, Andy continued working on the scraping scheme, helping tumbleweed.

### nattie

Nattie has been with us for a few days now, but today she did some great QA work on our metadata scraping of the video archive.

### ivodd

More tests, more bugs! Ivo worked quite a bit on the Opsis board today and it seems everything is ready for the mini-conf. \o/

### olasd

Nicolas built the streaming network today and wrote some Ansible roles to manage TLS cert creation through Let's Encrypt. He also talked with DSA some more about our long term requirements.

### wouter

I forgot to mention it yesterday because he could not come to Cambridge, but Wouter has been sprinting remotely, working on the reviewing system. Everything related to reviewing should be in place for the mini-conf.

He also generated the intro and outro slides for the videos for us.

### KiBi and Julien

KiBi and Julien arrived late in the evening, but were nonetheless of great assistance.

Neither are technically part of the videoteam, but their respective experience with Debian-Installer and general DSA systems helped us a great deal.

### pollo

I'm about 3/4 done documenting our ansible roles. Once I'm done, I'll try to polish some obvious hacks I've seen while documenting.

22 November, 2017 05:00AM by Louis-Philippe Véronneau

## Norbert Preining <!-- document.write( "<a href=\"#\" id=\"https://www.preining.info/blog/2017/11/kobo-firmware-4-6-10075-mega-update-ksm-nickel-patch-ssh-fonts/_hide\" onClick=\"exclude( 'https://www.preining.info/blog/2017/11/kobo-firmware-4-6-10075-mega-update-ksm-nickel-patch-ssh-fonts/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://www.preining.info/blog/2017/11/kobo-firmware-4-6-10075-mega-update-ksm-nickel-patch-ssh-fonts/_show\" style=\"display:none;\" onClick=\"show( 'https://www.preining.info/blog/2017/11/kobo-firmware-4-6-10075-mega-update-ksm-nickel-patch-ssh-fonts/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Kobo firmware 4.6.10075 mega update (KSM, nickel patch, ssh, fonts)

A new firmware for the Kobo ebook reader came out and I adjusted the mega update pack to use it. According to the comments in the firmware thread it is working faster than previous releases. The most incredible change though is the update from wpa_supplicant 0.7.1 (around 2010) to 2.7-devel (current). Wow.

For details and warnings, please consult the previous post.

### Mark6 – Kobo GloHD

firmware: Kobo 4.6.9995 for GloHD

Mega update: Kobo-4.6.10075-combined/Mark6/KoboRoot.tgz

### Mark5 – Aura

firmware: Kobo 4.6.9995 for Aura

Mega update: Kobo-4.6.10075-combined/Mark5/KoboRoot.tgz

### Mark4 – Kobo Glo, Aura HD

firmware: Kobo 4.6.9995 for Glo and AuraHD

Mega update: Kobo-4.6.10075-combined/Mark4/KoboRoot.tgz

Enjoy.

22 November, 2017 01:18AM by Norbert Preining

# November 21, 2017

## Louis-Philippe Véronneau <!-- document.write( "<a href=\"#\" id=\"https://veronneau.org/debconf-videoteam-sprint-report-day-2.html_hide\" onClick=\"exclude( 'https://veronneau.org/debconf-videoteam-sprint-report-day-2.html' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://veronneau.org/debconf-videoteam-sprint-report-day-2.html_show\" style=\"display:none;\" onClick=\"show( 'https://veronneau.org/debconf-videoteam-sprint-report-day-2.html' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### DebConf Videoteam sprint report - day 2

Another day, another videoteam report! It feels like we did a lot of work today, so let's jump right in:

### tumbleweed

Stefano worked most of the day on the DebConf video archive metadata project. A bunch of videos already have been uploaded to YouTube.

Here's some gold you might want to watch.

By the end of our sprint, we should have generated metadata for most of our archive and uploaded a bunch of videos to YouTube. Don't worry though, YouTube is only a mirror and we'll keep our current archive as a video master.

### RattusRattus

Andy joined us today! He hacked away with Stefano for most of the day, working on the metadata format for our videos and making schemes for our scraping tools.

### ivodd

Ivo built and tested a good part of our video setup today, fixing bugs left and right in Ansible. We are prepared for the Cambridge Mini-DebConf!

### olasd

Nicolas finished his scripts to automatically spool up and down our streaming mirrors via the DigitalOcean API today and ran our Ansible config against those machines to test our setup.
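A minimal sketch of what such spool-up/spool-down calls can look like with curl against DigitalOcean's public v2 API (the token, droplet name, region/size/image slugs, and function names below are invented placeholders for illustration — the team's real scripts are not shown here; the commands are echoed rather than sent):

```shell
# Hypothetical spool-up/spool-down helpers for streaming mirrors.
# Endpoints are from DigitalOcean's public v2 API; everything else
# (token, name, slugs) is a made-up placeholder.
API="https://api.digitalocean.com/v2"
TOKEN="example-token"

spool_up() {
    # Create one streaming mirror droplet (command echoed for illustration).
    echo curl -X POST "$API/droplets" \
        -H "Authorization: Bearer $TOKEN" \
        -H "Content-Type: application/json" \
        -d "{\"name\": \"$1\", \"region\": \"lon1\", \"size\": \"1gb\", \"image\": \"debian-9-x64\"}"
}

spool_down() {
    # Destroy a droplet by numeric id (command echoed for illustration).
    echo curl -X DELETE "$API/droplets/$1" \
        -H "Authorization: Bearer $TOKEN"
}

spool_up stream-mirror-1
spool_down 12345678
```

Running the printed commands for real would only additionally require a valid API token and removing the `echo`.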

### pollo

For my part, I completed a huge chunk of my sprint goals: we now have a website documenting our setup! It is currently hosted on Alioth pages, but olasd plans to make a request to DSA to have it hosted on the static.debian.org machine. The final URL will most likely be something like: https://video.debconf.org

The documentation is still missing the streaming section (our streaming setup is not final yet, so there is no point in documenting it) and a section with guides for the volunteers. With some luck I might write those later this week.

I've now moved on to documenting our various Ansible roles.

Oh, and we also ate some cheese fondue:

21 November, 2017 05:00AM by Louis-Philippe Véronneau

# November 20, 2017

## Jonathan Carter <!-- document.write( "<a href=\"#\" id=\"https://jonathancarter.org/2017/11/20/new-powerline-goodies-in-debian/_hide\" onClick=\"exclude( 'https://jonathancarter.org/2017/11/20/new-powerline-goodies-in-debian/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://jonathancarter.org/2017/11/20/new-powerline-goodies-in-debian/_show\" style=\"display:none;\" onClick=\"show( 'https://jonathancarter.org/2017/11/20/new-powerline-goodies-in-debian/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### New powerline goodies in Debian

Powerline does some font substitutions that allow additional theming for terminal applications such as tmux, vim, zsh, bash and more. The powerline font has been packaged in Debian for a while now, and I’ve packaged two powerline themes for vim and zsh. They’re currently only in testing, but once my current todo list of packages looks better, I’ll upload them to stretch-backports.

#### For vim, vim-airline

vim-airline is different from previous vim powerline plugins in that it doesn’t depend on perl or python; it’s implemented purely in vim config files.

Demo

Here’s a gif from the upstream site. They also demo various themes there, which you can get in Debian by installing the vim-airline-themes package.

How to enable

Install the vim-airline package, and add the following to your .vimrc file:

" Vim Airline theme
let g:airline_theme='powerlineish'
let g:airline_powerline_fonts = 1
set laststatus=2

The vim-airline-themes package contains additional themes that can be defined in the snippet above.

#### For zsh, powerlevel9k

Demo

Here’s a gif from upstream that walks through some of its features. You can configure it to display all kinds of system metrics and also information about VCS status in your current directory.

Powerlevel9k has lots of options and features. If you’re interested in it, you should probably take a look at their readme file on GitHub for all the details.

How to enable

source /usr/share/powerlevel9k/powerlevel9k.zsh-theme
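The theme is configured before it is sourced. Here is a hypothetical ~/.zshrc fragment: the POWERLEVEL9K_* array names and segment names (context, dir, vcs, status, time) come from the theme's README, while the particular segment choices are just an example; the source line is guarded so the fragment is harmless if the package isn't installed.

```shell
# Choose which segments appear on each side of the prompt.
# These arrays must be set before sourcing the theme.
POWERLEVEL9K_LEFT_PROMPT_ELEMENTS=(context dir vcs)
POWERLEVEL9K_RIGHT_PROMPT_ELEMENTS=(status time)

# Source the theme if it is installed (path from the Debian package).
if [ -r /usr/share/powerlevel9k/powerlevel9k.zsh-theme ]; then
    source /usr/share/powerlevel9k/powerlevel9k.zsh-theme
fi
```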

20 November, 2017 07:22PM by jonathan

## Reproducible builds folks <!-- document.write( "<a href=\"#\" id=\"https://reproducible.alioth.debian.org/blog/posts/133/_hide\" onClick=\"exclude( 'https://reproducible.alioth.debian.org/blog/posts/133/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://reproducible.alioth.debian.org/blog/posts/133/_show\" style=\"display:none;\" onClick=\"show( 'https://reproducible.alioth.debian.org/blog/posts/133/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Reproducible Builds: Weekly report #133

Here's what happened in the Reproducible Builds effort between Sunday November 5 and Saturday November 11 2017:

## Upcoming events

On November 17th Chris Lamb will present at the Open Compliance Summit, Yokohama, Japan, on how reproducible builds ensure the long-term sustainability of technology infrastructure.

We plan to hold an assembly at 34C3 - hope to see you there!

## LEDE CI tests

Thanks to the work of lynxis, Mattia and h01ger, we're now testing all LEDE packages in our setup. This is our first result for the ar71xx target: "502 (100.0%) out of 502 built images and 4932 (94.8%) out of 5200 built packages were reproducible in our test setup." - see below for details on how this was achieved.

## Bootstrapping and Diverse Double Compilation

As a follow-up to a discussion on bootstrapping compilers we had at the Berlin summit, Bernhard and Ximin worked on a proof of concept for Diverse Double Compilation of tinycc (aka tcc).

Ximin Luo did a successful diverse-double compilation of tinycc git HEAD using gcc-7.2.0, clang-4.0.1, icc-18.0.0 and pgcc-17.10-0 (pgcc needs to triple-compile it). More variations are planned for the future, with the eventual aim to reproduce the same binaries cross-distro, and extend it to test GCC itself.

## Packages reviewed and fixed, and bugs filed

Patches filed upstream:

• Bernhard M. Wiedemann:
• clang - ASLR affects objective-C binaries.
• Chris Lamb:
• nbsphinx (merged) - Random UUIDs used as element selectors.
• stardicter (merged) - SOURCE_DATE_EPOCH support.
• stetl - Build path in documentation.

Patches filed in Debian:

Patches filed in OpenSUSE:

• Bernhard M. Wiedemann:
• i4l-base (merged) - Uninitialized memory written to output.

## Reviews of unreproducible packages

73 package reviews have been added, 88 have been updated and 40 have been removed this week, adding to our knowledge about identified issues.

4 issue types have been updated:

## Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

• Andreas Beckmann (3)
• Dmitry Shachnev (1)
• Graham Inggs (1)

## diffoscope development

Mattia Rizzolo uploaded version 88~bpo9+1 to stretch-backports.

## reprotest development

• Ximin Luo:
• build: add comment that util-linux confirmed bug in nsenter, awaiting fix.
• Make --print-sudoers work for --env-build as well.

## reproducible-website development

• Holger Levsen:
• Chris Lamb:
• Move the "contribute" page from the Debian wiki to /contribute/ on our main website.
• Fix typo in FreeBSD mailing list.

## theunreproduciblepackage development

• Bernhard M. Wiedemann:

## tests.reproducible-builds.org in detail

• Mattia Rizzolo:

• reproducible archlinux: enable debugging mode
• reproducible archlinux: don't use hidden files for the package lists
• reproducible fedora: don't use hidden files for the package lists
• udd-query: move from public-udd-mirror.xvm.mit.edu to udd-mirror.debian.net
• udd-query: remove the temporary file with a trap; in case this script is called with the wrong argument, or in case of failures, the temporary file would otherwise be left around
• reproducible debian: schroot-create: drop the reproducible gpg keyring into /etc/apt/trusted.gpg.d/ instead of using apt-key add
• reproducible debian: setup_pbuilder: drop the reproducible gpg keyring into /etc/apt/trusted.gpg.d/ instead of using apt-key add
• reproducible debian: setup_pbuilder: stop installing gnupg2 in our chroot, not needed anymore now
• Mattia also merged and deployed some commits from others this week.
• Alexander 'lynxis' Couzens

• reproducible_lede: use correct place/variable to save results: results on remote nodes are expected to be under $TMPDIR, which is defined by openwrt_build; RESULTSDIR is undefined on the remote node
• reproducible_lede: enable building all packages again, after it was disabled to improve the debug speed.
• reproducible_lede: correct given path for node_cleanup_tmpdirs & node_save_logs
• reproducible_lede: enable CONFIG_BUILDBOT to reduce inodes while building.

• kpcyrd:

• reproducible-archlinux: try porting abs to asp
• reproducible-archlinux: explicitly sync packages
• reproducible-archlinux: use sudo for pacman

• Hans-Christoph Steiner:

• reproducible fdroid: point jenkins to canonical URL
• reproducible_fdroid: separate testsuite into its own job
• reproducible fdroid: sync upstream script names with jenkins.debian.net, make things self-documenting by reusing the same names everywhere.
• reproducible_fdroid_test: make script executable

• Chris Lamb:

• Move some IRC announcements to #debian-reproducible-changes.

• Holger Levsen:

• reproducible LEDE: try to deal gracefully with problems and report
• As usual, Holger merged many of the above commits and deployed them.

## Misc.

This week's edition was written by Ximin Luo, Bernhard M. Wiedemann, Chris Lamb and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.
## NOKUBI Takatsugu <!-- document.write( "<a href=\"#\" id=\"http://blog.daionet.gr.jp/knok-e/2017/11/20/debian-seminar-in-yokohama-20171118/_hide\" onClick=\"exclude( 'http://blog.daionet.gr.jp/knok-e/2017/11/20/debian-seminar-in-yokohama-20171118/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://blog.daionet.gr.jp/knok-e/2017/11/20/debian-seminar-in-yokohama-20171118/_show\" style=\"display:none;\" onClick=\"show( 'http://blog.daionet.gr.jp/knok-e/2017/11/20/debian-seminar-in-yokohama-20171118/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Debian seminar in Yokohama, 2017/11/18

I attended the Tokyo area Debian seminar #157. The day’s special guest was Chris Lamb, the Debian Project Leader in 2017. He was attending the Open Compliance Summit, so we invited him as our guest. The following pdf file is the day’s presentation:

And Hideki Yamane (henrich) talked about a new idea for a Debian distribution, ‘fresh’, a pull-based rolling release. The details will be published by him in a few days.

There was some discussion, and we need to introduce more information about the Japanese Debian/FLOSS scene, so now I am writing this article.

In any case, I had a good time with Debian developers and the community. Our community, especially in Japan, needs more newcomers, young people.
20 November, 2017 09:01AM by knok

## Louis-Philippe Véronneau <!-- document.write( "<a href=\"#\" id=\"https://veronneau.org/debconf-videoteam-sprint-report-day-1.html_hide\" onClick=\"exclude( 'https://veronneau.org/debconf-videoteam-sprint-report-day-1.html' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://veronneau.org/debconf-videoteam-sprint-report-day-1.html_show\" style=\"display:none;\" onClick=\"show( 'https://veronneau.org/debconf-videoteam-sprint-report-day-1.html' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### DebConf Videoteam sprint report - day 1

Another videoteam report! We've now been hacking for a full day and we are slowly starting to be productive. It's always hard to get back into a project when you haven't touched it in a while...

Anyway, let's start this report with an important announcement: we finally have been able to snap a good picture of the airbnb's cat! No more nagging me about the placeholder image from Wikipedia I used in yesterday's report!

## Set up

Here's what the team did today:

### tumbleweed

Stefano started the day by hacking away on our video archive. We eventually want to upload all our videos to YouTube to give them exposure, but sadly our archive metadata is in pretty poor shape. With the script tumbleweed wrote, we can scrape the archive for matches against the old DebConfs' pentabarf XML we have.

tumbleweed also helped Ivo with the ansible PXE setup he's working on. Some recent contributions from a collaborator implemented new features (like a nice menu to choose from) but also came with a few annoying bugs.

### ivodd

Ivo continued working on the PXE setup today.
He also tried to break our ansible setup by doing fresh installs with different use cases (locales, interfaces, etc.), with some success.

The reason he and Stefano are working so hard on PXE boot is that we had a discussion about the future of our USB install method. The general consensus was that although we would not remove it, we would not actively maintain it anymore. PXE is less trouble for multiple machines. For single machines, or if you don't control the DHCP server, running ansible manually on a fresh Debian install will be the recommended way.

### olasd

After a very long drive, olasd arrived late in the evening with all our gear. Hurray! We were thus able to set up some test boxes and start wiring the airbnb properly. Tomorrow will certainly be more productive with all this stuff at our disposal.

### pollo

Today I mainly worked on setting up our documentation website. After some debate, we decided that sphinx was the right tool for the job. I am a few pages in, and if I work well I think we'll have something to show at the end of the sprint!

I was also thrown back into ansible after witnessing a bug in the locale management. I'm still rusty, but it's slowly coming back to me.

Let's end this blog post with a picture of the neon pineapple that sits on the wall of the solarium.
20 November, 2017 05:00AM by Louis-Philippe Véronneau

## Dirk Eddelbuettel <!-- document.write( "<a href=\"#\" id=\"http://dirk.eddelbuettel.com/blog/2017/11/19#rcppeigen_0.3.3.3.1_hide\" onClick=\"exclude( 'http://dirk.eddelbuettel.com/blog/2017/11/19#rcppeigen_0.3.3.3.1' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://dirk.eddelbuettel.com/blog/2017/11/19#rcppeigen_0.3.3.3.1_show\" style=\"display:none;\" onClick=\"show( 'http://dirk.eddelbuettel.com/blog/2017/11/19#rcppeigen_0.3.3.3.1' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### RcppEigen 0.3.3.3.1

A maintenance release 0.3.3.3.1 of RcppEigen is now on CRAN (and will get to Debian soon). It brings Eigen 3.3.* to R.

The impetus was a request from CRAN to change the call to Rcpp::Rcpp.plugin.maker() to only use :: as the function has in fact been exported and accessible for a pretty long time. So now the usage pattern catches up. Otherwise, Haiku-OS is now supported and a minor Travis tweak was made. The complete NEWS file entry follows.

#### Changes in RcppEigen version 0.3.3.3.1 (2017-11-19)

• Compilation under Haiku-OS is now supported (Yu Gong in #45).
• The Rcpp.plugin.maker helper function is called via :: as it is in fact exported (yet we had old code using :::).
• A spurious argument was removed from an example call.
• Travis CI now uses https to fetch the test runner script.

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

### RcppClassic 0.9.9

A maintenance release RcppClassic 0.9.9 is now at CRAN.
This package provides a maintained version of the otherwise deprecated first Rcpp API; no new projects should use it.

Per a request from CRAN, we changed the call to Rcpp::Rcpp.plugin.maker() to only use :: as the function has in fact been exported and accessible for a pretty long time. So now the usage pattern catches up.

Courtesy of CRANberries, there are changes relative to the previous release.

Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

## Colin Watson <!-- document.write( "<a href=\"#\" id=\"https://www.chiark.greenend.org.uk/~cjwatson/blog/kitten-block-equivalent-for-firefox-57.html_hide\" onClick=\"exclude( 'https://www.chiark.greenend.org.uk/~cjwatson/blog/kitten-block-equivalent-for-firefox-57.html' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://www.chiark.greenend.org.uk/~cjwatson/blog/kitten-block-equivalent-for-firefox-57.html_show\" style=\"display:none;\" onClick=\"show( 'https://www.chiark.greenend.org.uk/~cjwatson/blog/kitten-block-equivalent-for-firefox-57.html' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Kitten Block equivalent for Firefox 57

I’ve been using Kitten Block for years, since I don’t really need the blood pressure spike caused by accidentally following links to certain UK newspapers. Unfortunately it hasn’t been ported to Firefox 57. I tried emailing the author a couple of months ago, but my email bounced.
However, if your primary goal is just to block the websites in question rather than seeing kitten pictures as such (let’s face it, the internet is not short of alternative sources of kitten pictures), then it’s easy to do with uBlock Origin. After installing the extension if necessary, go to Tools → Add-ons → Extensions → uBlock Origin → Preferences → My filters, and add www.dailymail.co.uk and www.express.co.uk, each on its own line. (Of course you can easily add more if you like.) Voilà: instant tranquility.

Incidentally, this also works fine on Android. The fact that it was easy to install a good ad blocker without having to mess about with a rooted device or strange proxy settings was the main reason I switched to Firefox on my phone.

20 November, 2017 12:00AM by Colin Watson

# November 19, 2017

## Joey Hess <!-- document.write( "<a href=\"#\" id=\"http://joeyh.name/blog/entry/custom_ARM_disk_image_generation_with_propellor/_hide\" onClick=\"exclude( 'http://joeyh.name/blog/entry/custom_ARM_disk_image_generation_with_propellor/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://joeyh.name/blog/entry/custom_ARM_disk_image_generation_with_propellor/_show\" style=\"display:none;\" onClick=\"show( 'http://joeyh.name/blog/entry/custom_ARM_disk_image_generation_with_propellor/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### custom ARM disk image generation with propellor

Following up on propelling disk images, Propellor can now build custom ARM disk images for a variety of different ARM boards. The disk image build can run on a powerful laptop or server, so it's super fast and easy compared with manually installing Debian on an ARM board.
Here's a simple propellor config for an Olimex LIME board, with ssh access and a root password:

lime :: Host
lime = host "lime.example.com" $ props
& osDebian Unstable ARMHF
& Machine.olimex_A10_OLinuXino_LIME
& hasPartition (partition EXT4 mountedAt "/" setSize MegaBytes 8192)
& Ssh.installed


To make a disk image for that board, I only have to add this property to my laptop:

& imageBuiltFor lime
(RawDiskImage "/srv/lime.img")
(Debootstrapped mempty)


Propellor knows what kernel to install and how to make the image bootable for a bunch of ARM boards, including the Olimex LIME, the SheevaPlug, Banana Pi, and CubieTruck.

To build the disk image targeting ARM, propellor uses qemu; since emulated builds are slow, it helps that, after the first build, propellor updates disk images incrementally, quite quickly and efficiently.

Once the board has the image installed, you can run propellor on it to further maintain it, and if there's a hardware problem, you can quickly replace it with an updated image.

It's fairly simple to teach propellor about other ARM boards, so it should be quite easy to keep propellor knowing about all ARM boards supported by Debian (and other distros). Here's how I taught it about the Olimex LIME:

olimex_A10_OLinuXino_LIME :: Property (HasInfo + DebianLike)
olimex_A10_OLinuXino_LIME = FlashKernel.installed "Olimex A10-OLinuXino-LIME"
requires sunixi "A10-OLinuXino-Lime"
requires armmp


My home server is a CubieTruck which serves as a wireless access point, solar panel data collector, and git-annex autobuilder. It's deployed from a disk image built by propellor, using this config. I've been involved with building disk images for ARM boards for a long time -- it was part of my job for five years -- and this is the first time I've been entirely happy with the process.

## Louis-Philippe Véronneau <!-- document.write( "<a href=\"#\" id=\"https://veronneau.org/debconf-videoteam-sprint-report-day-0.html_hide\" onClick=\"exclude( 'https://veronneau.org/debconf-videoteam-sprint-report-day-0.html' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://veronneau.org/debconf-videoteam-sprint-report-day-0.html_show\" style=\"display:none;\" onClick=\"show( 'https://veronneau.org/debconf-videoteam-sprint-report-day-0.html' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### DebConf Videoteam sprint report - day 0

First day of the videoteam autumn sprint! Well, I say first day, but in reality it's more day 0. Even though most of us have arrived in Cambridge already, we are still missing a few people.

Last year we decided to sprint in Paris because most of our video gear is stocked there. This year, we instead chose to sprint a few days before the Cambridge Mini-Debconf to help record the conference afterwards.

Since some of us arrived very late and the ones who did arrive early are still mostly jet lagged (that includes me), I'll use this post to introduce the space we'll be working from this week and our general plan for the sprint.

## House Party

After some deliberations, we decided to rent a house for a week in Cambridge: finding a work space to accommodate us and all our gear proved difficult and we decided mixing accommodation and work would be a good idea.

I've only been here for a few hours, but I have to say I'm pretty impressed by the airbnb we got. Last time I checked (it seems every time I do, some new room magically appears), I counted 5 bedrooms, 6 beds, 5 toilets and 3 shower rooms. Heck, there's even a solarium and a training room with weights and a punching bag on the first floor.

Having a whole house to ourselves also means we have access to a functional kitchen. I'd really like to cook at least a few meals during the week.

There's also a cat!

It's not the house's cat per se, but it's been hanging out around the house for most of the day and makes cute faces trying to convince us to let it come inside. Nice try, cat. Nice try.

Here are some glamour professional photos of what the place looks like on a perfect summer day, just for the kick of it:

Of course, reality has trouble matching all the post-processing filters.

## Plan for the week

Now on a more serious note; apart from enjoying the beautiful city of Cambridge, here's what the team plans to do this week:

### tumbleweed

Stefano wants to continue refactoring our ansible setup. A lot of things have been added in the last year, but some of them are hacks we should remove and implement correctly.

### highvoltage

Jonathan won't be able to come to Cambridge, but plans to work remotely, mainly on our desktop/xfce session implementation. Another pile of hacks waiting to be cleaned!

### ivodd

Ivo has been working a lot on the pre-ansible part of our installation and plans to continue working on that. At the moment, creating an installation USB key is pretty complicated and he wants to make that simpler.

### olasd

Nicolas completely reimplemented our streaming setup for DC17 and wants to continue working on that.

More specifically, he wants to write scripts to automatically set up and tear down - via API calls - the distributed streaming network we now use.

Finding a way to push TLS certificates to those mirrors, adding a live stream viewer on video.debconf.org and adding a viewer to our archive are also things he wants to look at.

### pollo

For my part, I plan to catch up with all the commits in our ansible repository I missed since last year's sprint and work on documentation.

It would be very nice if we could have a static website describing our work so that others (at mini-debconfs, for example) could replicate it easily.

If I have time, I'll also try to document all the ansible roles we have written.

Stay tuned for more daily reports!

19 November, 2017 05:00AM by Louis-Philippe Véronneau

# November 18, 2017

## Matthieu Caneill <!-- document.write( "<a href=\"#\" id=\"http://matthieu.io/blog/2017/11/19/minidebconf-toulouse/_hide\" onClick=\"exclude( 'http://matthieu.io/blog/2017/11/19/minidebconf-toulouse/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://matthieu.io/blog/2017/11/19/minidebconf-toulouse/_show\" style=\"display:none;\" onClick=\"show( 'http://matthieu.io/blog/2017/11/19/minidebconf-toulouse/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### MiniDebconf in Toulouse

I attended the MiniDebconf in Toulouse, which was hosted in the larger Capitole du Libre, a free software event with talks, presentations of associations, and a keysigning party. I didn't expect the event to be that big, and I was very impressed by its organization. Cheers to all the volunteers, it has been an amazing weekend!

Here's a summary of the talks I attended.

## Du logiciel libre à la monnaie libre

Speaker: Éloïs

The first talk I attended was, translated to English, "from free software to free money".

Éloïs compared the 4 freedoms of free software with money, and what properties money needs to exhibit in order to be considered free. He then introduced Ğ1, a project of free (as in free speech!) money, started in the region around Toulouse. Contrary to some distributed ledgers such as Bitcoin, Ğ1 isn't based on a hash-based proof-of-work, but rather on a web of trust of people certifying each other, hence limiting the energy consumption required for the network to function.
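The certification idea can be sketched in a few lines. This is an illustrative sketch only, not Ğ1's actual rules: the member names and the certification threshold below are invented for the example.

```python
# Hypothetical web-of-trust membership check: a person counts as a member
# once enough existing members have certified them (no proof-of-work).
THRESHOLD = 3  # invented threshold, not Ğ1's real parameter

members = {"alice", "bob", "carol"}
certifications = {
    "dave": {"alice", "bob", "carol"},  # three member certifications
    "erin": {"alice", "bob"},           # only two
}

def is_member(person):
    """A person is a member if already in the set, or if enough
    existing members have certified them."""
    certs = certifications.get(person, set())
    return person in members or len(certs & members) >= THRESHOLD

print(is_member("dave"))  # True: enough certifications
print(is_member("erin"))  # False: below the threshold
```

The point of the contrast with Bitcoin is that validating membership here is a cheap set intersection, not a hash search.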

## YunoHost

Speaker: Jimmy Monin

I then attended a presentation of YunoHost. Being a happy user myself, it was very nice to discover the future expected features, and also to meet two of the developers. YunoHost is a Debian-based project, aimed at providing all the tools necessary to self-host applications, including email, website, calendar, development tools, and dozens of other packages.

## Premiers pas dans l'univers de Debian

Speaker: Nicolas Dandrimont

For the first talk of the MiniDebConf, Nicolas Dandrimont introduced Debian, its philosophy, and how it works with regards to upstreams and downstreams. He gave many details on the teams, the infrastructure, and the internals of Debian.

## Trusting your computer and system

Speaker: Jonas Smedegaard

Jonas introduced some security concepts, and how they are abused and often meaningless (to quote his own words, "secure is bullshit"). He described a few projects which lean towards a more secure and open hardware, for both phones and laptops.

## Automatiser la gestion de configuration de Debian avec Ansible

Speaker: Jérémy Lecour

Jérémy, from Evolix, introduced Ansible and how they use it to manage hundreds of Debian servers. Ansible is a very powerful tool with a huge ecosystem, in many ways similar to Puppet or Chef, except that it is agent-less, using only ssh connections to communicate with remote machines. It was very nice to compare their use of Ansible with mine, since that's the software I use at work for deploying experiments.

## Making Debian for everybody

Speaker: Samuel Thibault

Samuel gave a talk about accessibility, and the general availability of the tools in today's operating systems, including Debian. The lesson to take home is that we often don't do enough in this domain, particularly when considering some issues people might have that we don't always think about. Accessibility on computers (and elsewhere) should be the default, and never require complex setups.

## Retour d'expérience : mise à jour de milliers de terminaux Debian

Speaker: Cyril Brulebois

Cyril described a problem he was hired for: an update of thousands of Debian servers from wheezy to jessie, which he discovered afterwards was worse than initially thought, since the machines were actually running the out-of-date squeeze. Since they had not always been administered with the best sysadmin practices, they all exhibited different configurations and different package lists, which raised many issues and gave him interesting challenges. They were solved using Ansible, which also had the effect of standardizing their system administration practices.

## Retour d'expérience : utilisation de Debian chez Evolix

Speaker: Grégory Colpart

Grégory described Evolix, a company which manages servers for their clients, and how they were inspired by Debian, for both their internal tools and their practices. It is very interesting to see that some of the Debian values can be easily exported for a more open and collaborative business.

## Lightning talks

To close the conference, two lightning talks were presented, describing the switch from Windows XP to Debian in an ecologic association near Toulouse; and how snapshot.debian.org can be used with bisections to find the source of some regressions.
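The bisection approach mentioned in the second lightning talk can be sketched as a binary search over an ordered list of snapshot timestamps. The timestamps and the is_bad() predicate below are invented for the example; in practice the predicate would install the package from the corresponding snapshot.debian.org archive and run a test.

```python
def first_bad(snapshots, is_bad):
    """Return the earliest snapshot for which is_bad() is True.

    Assumes snapshots are in chronological order and that once a
    regression appears it stays present in all later snapshots.
    """
    lo, hi = 0, len(snapshots) - 1
    if not is_bad(snapshots[hi]):
        return None  # regression not present even in the newest snapshot
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(snapshots[mid]):
            hi = mid      # regression already present: look earlier
        else:
            lo = mid + 1  # still good here: look later
    return snapshots[lo]

# Hypothetical example: regression introduced at 20170601T000000Z
snaps = ["20170101T000000Z", "20170301T000000Z", "20170601T000000Z",
         "20170901T000000Z", "20171101T000000Z"]
print(first_bad(snaps, lambda s: s >= "20170601T000000Z"))
```

With N snapshots this needs only about log2(N) package installations instead of N.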

## Conclusion

A big thank you to all the organizers and the associations who contributed to make this event a success. Cheers!

## Russ Allbery

### Free software log (October 2017)

I've not been taking the time to write these promptly, in part because I've not found the time to do much free software work. Thankfully, November will at least contain some work-sponsored work (on a package that isn't widely used yet, but maybe we can make it appealing enough).

Anyway, that's for next month. For October, the only thing I have to report is refreshing the signing key for my personal Debian repository (generating a new key for the new release) and finally updating the distributions to move stretch to stable, jessie to oldstable, and create the new testing distribution (buster). If for some strange reason you're using my personal repositories (there probably isn't much reason just at the moment), be sure to upgrade eyrie-keyring, since I'm going to switch signing over to the new key shortly.

## Petter Reinholdtsen

### Legal to share more than 3000 movies listed on IMDB?

A month ago, I blogged about my work to automatically check the copyright status of IMDB entries, and to try to count the number of movies listed in IMDB that are legal to distribute on the Internet. I have continued to look for good data sources, and identified a few more. The code used to extract information from various data sources is available in a git repository, currently hosted on GitHub.

So far I have identified 3186 unique IMDB title IDs. To gain better understanding of the structure of the data set, I created a histogram of the year associated with each movie (typically release year). It is interesting to notice where the peaks and dips in the graph are located. I wonder why they are placed there. I suspect World War II caused the dip around 1940, but what caused the peak around 2010?

I've so far identified ten sources for IMDB title IDs for movies in the public domain or with a free license. These are the statistics reported when running 'make stats' in the git repository:

  249 entries (    6 unique) with and   288 without IMDB title ID in free-movies-archive-org-butter.json
 2301 entries (  540 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
  830 entries (   29 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
 2109 entries (  377 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
  291 entries (  122 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
  144 entries (  135 unique) with and     0 without IMDB title ID in free-movies-manual.json
  350 entries (    1 unique) with and   801 without IMDB title ID in free-movies-publicdomainmovies.json
    4 entries (    0 unique) with and   124 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries (  119 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json
    8 entries (    8 unique) with and   196 without IMDB title ID in free-movies-vodo.json
 3186 unique IMDB title IDs in total


The entries without IMDB title ID are candidates to increase the data set, but might equally well be duplicates of entries already listed with an IMDB title ID in one of the other sources, or represent movies that lack an IMDB title ID. I've seen examples of all these situations when peeking at the entries without IMDB title ID. Based on these data sources, the lower bound for movies listed in IMDB that are legal to distribute on the Internet is between 3186 and 4713.
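For concreteness, that range follows directly from the 'make stats' output above: the 3186 unique IMDB title IDs are a firm lower bound, and if every entry currently lacking an IMDB title ID turned out to be a distinct movie with one, the count would grow by the per-source "without" numbers.

```python
# Recomputing the range from the 'make stats' output above.
unique_ids = 3186  # unique IMDB title IDs across all ten sources

# the 'without IMDB title ID' column, one number per source
without_ids = [288, 0, 0, 0, 0, 0, 801, 124, 118, 196]

lower = unique_ids                       # every ID-less entry is a duplicate
upper = unique_ids + sum(without_ids)    # every ID-less entry is a new movie
print(lower, upper)  # 3186 4713
```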

It would greatly improve the accuracy of this measurement if the various sources added IMDB title IDs to their metadata. I have tried to reach the people behind the various sources to ask if they are interested in doing this, without any replies so far. Perhaps you can help me get in touch with the people behind VODO, Public Domain Torrents, Public Domain Movies and Public Domain Review to try to convince them to add more metadata to their movie entries?

Another way you could help is by adding pages to Wikipedia about movies that are legal to distribute on the Internet. If such a page exists and includes a link to both IMDB and The Internet Archive, the script used to generate free-movies-archive-org-wikidata.json should pick up the mapping as soon as Wikidata is updated.
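As a sketch of the kind of lookup such a mapping script can rely on, here is a minimal Wikidata SPARQL query builder. It assumes the standard Wikidata property IDs P345 (IMDb ID) and P724 (Internet Archive ID); actually executing the query would require network access to query.wikidata.org, which this sketch deliberately does not do.

```python
# Build (but do not execute) a SPARQL query for Wikidata items that
# carry both an IMDb ID (P345) and an Internet Archive ID (P724).
def build_query():
    return (
        "SELECT ?item ?imdb ?archive WHERE {\n"
        "  ?item wdt:P345 ?imdb .\n"
        "  ?item wdt:P724 ?archive .\n"
        "}"
    )

print(build_query())
```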

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

## Joey Hess

### stupid long route

There's an old net story from the 80s, which I can't find right now, about two computers, 10 feet apart, with a ridiculously long network route between them: packets traveling into other states or countries and back, when they could have flowed over a short cable.

Ever since I read that, I've been collecting my own ridiculously long routes. ssh bouncing from country to country, making letters I type travel all the way around the world until they echo back on my screen. Tasting the latency that's one of the only ways we can viscerally understand just how big a tangle of wires humanity has built.

Yesterday, I surpassed all that, and I did it in a way that hearkens right back to the original story. I had two computers, 20 feet apart; I wanted one to talk to the other, and the route between the two ended up traveling not around the Earth, but almost the distance to the Moon.

I was rebuilding my home's access point, and ran into an annoying bug that prevented it from listening on wifi. I knew it was still connected over ethernet to the satellite receiver.

I connected my laptop to the satellite receiver over wifi. But, I didn't know the IP address to reach the access point. Then I remembered I had set it up so incoming ssh to the satellite receiver was directed to the access point.

So, I sshed to a computer in New Jersey. And from there I sshed to my access point. And the latency was amazing. Because, every time I pressed a key:

• It was sent to a satellite in geosynchronous orbit, 22250 miles high.
• Which beamed it back to a ground station in Texas, another 22250 miles.
• Which routed it over cable to New Jersey to my server there.
• Which bounced it back to a Texas-size dish, which zapped it back to orbit, another 22250 miles.
• And the satellite transmitted it back in the general direction of my house, another 22250 miles.
• So my keystroke finally reached the access point. But then it had to show me it had received it. So that whole process happened again in reverse, adding another 89000 miles travel total.
• And finally, after 178000 and change miles of data transfer, the letter I'd typed a full second ago appeared on my screen.
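The arithmetic in that list checks out. A quick sketch (the 22250-mile figure is the post's own; the speed of light in miles per second is the only added constant):

```python
LEG_MILES = 22250         # ground <-> geosynchronous orbit, per the post
SATELLITE_LEGS = 4        # up, down, up, down for one keystroke
C_MILES_PER_SEC = 186282  # speed of light, approximately

one_way = LEG_MILES * SATELLITE_LEGS  # 89000 miles to the access point
round_trip = one_way * 2              # 178000 miles for the echo
print(round_trip)
print(round_trip / C_MILES_PER_SEC)   # ~0.96 s: the light-speed floor alone
```

So the observed one-second latency is almost entirely the physics of the route, before any routing or processing delay.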

Not bad for a lazy solution to a problem that could have been solved by walking across the room, eh?

# November 17, 2017

## Jonathan Carter

### It finally happened

On the 6th of April 2017, I finally took the plunge and applied for Debian Developer status. On 1 August, during DebConf in Montréal, my application was approved. If you’re paying attention to the dates you might notice that that was nearly 4 months ago already. I was trying to write a story about how it came to be, but it ended up long. Really long (current draft is around 20 times longer than this entire post). So I decided I’d rather do a proper bio page one day and just do a super short version for now so that someone might end up actually reading it.

### How it started

In 1999… no wait, I can’t start there, as much as I want to, this is a short post, so… In 2003, I started doing some contract work for the Shuttleworth Foundation. I was interested in collaborating with them on tuXlabs, a project to get Linux computers into schools. For the few months before that, I was mostly using SuSE Linux. The open source team at the Shuttleworth Foundation all used Debian though, which seemed like a bizarre choice to me since everything in Debian was really old and its “boot-floppies” installer program kept crashing on my very vanilla computers.

SLUG (Schools Linux Users Group) group photo. SLUG was founded to support the tuXlab schools that ran Linux.

My contract work then later turned into a full-time job there. This was a big deal for me, because I didn’t want to support Windows ever again, and I didn’t ever think that it would even be possible for me to get a job where I could work on free software full time. Since everyone in my team used Debian, I thought that I should probably give it another try. I did, and I hated it. One morning I went to talk to my manager, Thomas Black, and told him that I just don’t get it and I need some help. Thomas was a big mentor to me during this phase. He told me that I should try upgrading to testing, which I did, and somehow I ended up on unstable, and I loved it. Before that I used to subscribe to a website called “freshmeat” that listed new releases of upstream software and then, I would download and compile it myself so that I always had the newest versions of everything. Debian unstable made that whole process obsolete, and I became a huge fan of it. Early on I also hit a problem where two packages tried to install the same file, and I was delighted to find how easily I could find package state and maintainer scripts and fix them to get my system going again.

Thomas told me that anyone could become a Debian Developer and maintain packages in Debian and that I should check it out and joked that maybe I could eventually snap up “highvoltage@debian.org”. I just laughed because back then you might as well have told me that I could run for president of the United States, it really felt like something rather far-fetched and unobtainable at that point, but the seed was planted :)

### Ubuntu and beyond

Ubuntu 4.10 default desktop – Image from distrowatch

One day, Thomas told me that Mark was planning to provide official support for Debian unstable. The details were sparse, but this was still exciting news. A few months later Thomas gave me a CD with just “warty” written on it and said that I should install it on a server so that we could try it out. It was great: it used the new debian-installer, installed fine everywhere I tried it, and the software was nice and fresh. Later Thomas told me that this system was going to be called “Ubuntu” and that the desktop edition had naked people on it. I wasn’t sure what he meant and was kind of dumbfounded, so I just laughed and said something like “Uh, ok”. It at least made a lot more sense when I finally saw the desktop pre-release version and when it got the byline “Linux for Human Beings”. Fun fact: one of my first jobs at the foundation was to register the ubuntu.com domain name. Unfortunately I found it was already owned by a domain squatter, and it was eventually handled by legal.

Closer to Ubuntu’s first release, Mark brought a whole bunch of Debian developers who were working on Ubuntu over to the foundation, and they were around for a few days getting some sun. Thomas kept saying “Go talk to them! Go talk to them!”, but I felt so intimidated that I couldn’t even bring myself to walk up and say hello.

In the interest of keeping this short, I’m leaving out a lot of history but later on, I read through the Debian packaging policy and really started getting into packaging and also discovered Daniel Holbach’s packaging tutorials on YouTube. These helped me tremendously. Some day (hopefully soon), I’d like to do a similar video series that might help a new generation of packagers.

I’ve also been following DebConf online since DebConf 7, which was incredibly educational for me. Little did I know that just 5 years later I would even attend one, and another 5 years after that I’d end up being on the DebConf Committee and have also already been on a local team for one.

DebConf16 Organisers, Photo by Jurie Senekal.

It’s been a long journey for me and I would like to help anyone who is also interested in becoming a Debian maintainer or developer. If you ever need help with your package, upload it to https://mentors.debian.net and if I have some spare time I’ll certainly help you out and sponsor an upload. Thanks to everyone who has helped me along the way, I really appreciate it!

17 November, 2017 05:48PM by jonathan

## Raphaël Hertzog

### Freexian’s report about Debian Long Term Support, October 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

### Individual reports

In October, about 197 work hours have been dispatched among 13 paid contributors. Their reports are available:

• Antoine Beaupré did 21h (out of 16h allocated + 8.75h remaining, thus keeping 3.75h for November).
• Ben Hutchings did 20 hours (out of 15h allocated + 9 extra hours, thus keeping 4 extra hours for November).
• Brian May did 10 hours.
• Chris Lamb did 18 hours.
• Emilio Pozuelo Monfort did 7 hours (out of 20.75 hours allocated + 1.5 hours remaining, thus keeping 15.25 hours for November).
• Guido Günther did 6.5 hours (out of 11h allocated + 1 extra hour, thus keeping 5.5h for November).
• Hugo Lefeuvre did 20h.
• Lucas Kanashiro did 2 hours (out of 5h allocated, thus keeping 3 hours for November).
• Markus Koschany did 19 hours (out of 20.75h allocated, thus keeping 1.75 extra hours for November).
• Ola Lundqvist did 7.5h (out of 7h allocated + 0.5 extra hours).
• Raphaël Hertzog did 13.5 hours (out of 12h allocated + 1.5 extra hours).
• Roberto C. Sanchez did 11 hours (out of 20.75 hours allocated + 14.75 hours remaining, thus keeping 24.50 extra hours for November; he will give back the remaining hours at the end of the month).
• Thorsten Alteholz did 20.75 hours.
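The bookkeeping behind each line above is the same: hours kept for next month are the allocation (base plus carried-over extra) minus the hours actually worked. A quick sketch using two of the entries:

```python
def hours_kept(allocated, extra, worked):
    # hours carried forward to the next month
    return allocated + extra - worked

print(hours_kept(16, 8.75, 21))   # Antoine Beaupré: 3.75
print(hours_kept(20.75, 1.5, 7))  # Emilio Pozuelo Monfort: 15.25
```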

### Evolution of the situation

The number of sponsored hours increased slightly to 183 hours per month. With the increasing number of security issues to deal with, and with the number of open issues not really going down, I decided to bump the funding target to what amounts to 1.5 full-time position.

The security tracker currently lists 50 packages with a known CVE and the dla-needed.txt file 36 (we’re a bit behind in CVE triaging apparently).

17 November, 2017 02:31PM by Raphaël Hertzog

## Craig Small

### Short Delay with WordPress 4.9

You may have heard WordPress 4.9 is out. While it seems a good improvement over 4.8, it has a new editor that uses codemirror. So what’s the problem? Well, inside codemirror is jshint, and this has that idiotic “no evil” license. I think this was added by WordPress, not codemirror itself.

So basically WordPress 4.9 has a file, or actually a tiny part of a file that is non-free.  I’ll now have to delay the update of WordPress to hack that piece out, which probably means removing the javascript linter. Not ideal but that’s the way things go.

17 November, 2017 11:03AM by Craig

## Michal Čihař

### Running Bitcoin node and ElectrumX server

I've been tempted to run my own ElectrumX server for quite some time. The first attempt was to run it on a Turris Omnia router, however that turned out to be impossible due to the memory requirements of both bitcoind and ElectrumX.

This time I've dedicated a host to this, and it runs fine.

The server runs Debian sid (it would probably be doable on stretch as well, but I didn't try) and the setup was pretty simple.

First we need to install some things: the Bitcoin daemon and the ElectrumX dependencies:

# Bitcoin daemon, not available in stretch
apt install bitcoind

# We will checkout ElectrumX from git
apt install git

# ElectrumX deps
apt install python3-aiohttp

# Build environment for ElectrumX deps
apt install build-essential python3-pip libleveldb-dev

# ElectrumX deps not packaged in Debian
pip3 install plyvel pylru

su - electrumx -c 'git clone https://github.com/kyuupichan/electrumx.git'


Create users which will run the services:

adduser bitcoind
adduser electrumx


Now it's time to prepare the configuration for the services. For Bitcoin it's quite simple: we need to configure the RPC interface and enable the transaction index in /home/bitcoind/.bitcoin/bitcoin.conf:

txindex=1
listen=1
rpcuser=bitcoin


The ElectrumX configuration is quite simple as well and it's pretty well documented. I've decided to place it in /etc/electrumx.conf:

COIN=BitcoinSegwit
DB_DIRECTORY=/home/electrumx/.electrumx
TCP_PORT=50001
SSL_PORT=50002
HOST=::

SSL_CERTFILE=/etc/letsencrypt/live/btc.cihar.com/fullchain.pem
SSL_KEYFILE=/etc/letsencrypt/live/btc.cihar.com/privkey.pem

REPORT_HOST=btc.cihar.com
BANNER_FILE=banner


I've decided to control both services using systemd, so it's a matter of creating pretty simple units for that. The Bitcoin one closely matches the one I've used on the Turris Omnia, and the ElectrumX one is based on the unit they ship, with some minor changes.

Systemd unit for ElectrumX in /etc/systemd/system/electrumx.service:

[Unit]
Description=Electrumx
After=bitcoind.service

[Service]
EnvironmentFile=/etc/electrumx.conf
ExecStart=/home/electrumx/electrumx/electrumx_server.py
User=electrumx
LimitNOFILE=8192
TimeoutStopSec=30min

[Install]
WantedBy=multi-user.target


And finally systemd unit for Bitcoin daemon in /etc/systemd/system/bitcoind.service:

[Unit]
Description=Bitcoind
After=network.target

[Service]
ExecStart=/usr/bin/bitcoind
User=bitcoind
TimeoutStopSec=30min
Restart=on-failure
RestartSec=30

[Install]
WantedBy=multi-user.target


Now everything should be configured and it's time to start up the services:

# Enable services so that they start on boot
systemctl enable electrumx.service bitcoind.service

# Start services
systemctl start electrumx.service bitcoind.service


Now you have a few days until bitcoind fetches the whole blockchain and ElectrumX indexes it. If you happen to have another Bitcoin node running (or had one running in the past), you can speed up the process by copying blocks from that system (located in ~/.bitcoin/blocks/). Only get blocks from sources you trust absolutely, as bad data might change your view of history; see the Bitcoin wiki for more information on the topic. There is also a magnet link in the ElectrumX docs to download the ElectrumX database to speed up this process. This should be safe to download even from an untrusted source.

The last thing I'd like to mention is resource usage. You should have at least 4 GB of memory to run this; 8 GB is really preferred (both services consume around 4 GB). On disk space, Bitcoin currently consumes 170 GB and ElectrumX 25 GB. Ideally all this should be running on an SSD.

You can, however, offload some of the files to slower storage, as old blocks are rarely accessed, and this can save some space on your faster storage. The following script will move around 50 GB of blockchain data to /mnt/btc/blocks (use it only when the Bitcoin daemon is not running):

#!/bin/sh
set -e

DEST=/mnt/btc/blocks

cd ~/.bitcoin/blocks/

find . -type f \( -name 'blk00[0123]*.dat' -o -name 'rev00[0123]*dat' \) | sed 's@^\./@@' | while read name ; do
    mv $name $DEST/$name
    ln -s $DEST/$name $name
done

Anyway if you would like to use this server, configure btc.cihar.com in your Electrum client.

If you find this howto useful, you can send some Satoshis to 3KPccmPtejpMczeog7dcFdqX4oTebYZ3tF.

Filed under: Crypto Debian English

## Norbert Preining

### ScalaFX: Problems with Tables abound

Doing a lot with all kinds of tables in ScalaFX, I stumbled upon a bug in ScalaFX that, with the help of the bug report, I was able to circumvent. It is a subtle bug where types are mixed between scalafx.SOMETHING and the corresponding javafx.SOMETHING.

In one of the answers it is stated that:

The issue is with implicit conversion from TableColumn not being located by Scala. I am not clear why this is happening (maybe a Scala bug).

But the provided work-around at least made it work, until today, when I stumbled onto (probably) just another instance of this bug where the same work-around does not help. I am using TreeTableViews and try to replace the children of the root by filtering out one element. The code I use is of course very different, but here is a reduced and fully self-contained example, based on the original bug report and adapted to use a TreeTableView:

import scalafx.Includes._
import scalafx.scene.control.TreeTableColumn._
import scalafx.scene.control.TreeItem._
import scalafx.application.JFXApp.PrimaryStage
import scalafx.application.JFXApp
import scalafx.scene.Scene
import scalafx.scene.layout._
import scalafx.scene.control._
import scalafx.scene.control.TreeTableView
import scalafx.scene.control.Button
import scalafx.scene.paint.Color
import scalafx.beans.property.{ObjectProperty, StringProperty}
import scalafx.collections.ObservableBuffer

// TableTester.scala
object TableTester extends JFXApp {

  val characters = ObservableBuffer[Person](
    new Person("Peggy", "Sue", "123", Color.Violet),
    new Person("Rocky", "Raccoon", "456", Color.GreenYellow),
    new Person("Bungalow ", "Bill", "789", Color.DarkSalmon)
  )

  val table = new TreeTableView[Person](
    new TreeItem[Person](new Person("","","",Color.Red)) {
      expanded = true
      children = characters.map(new TreeItem[Person](_))
    }) {
    columns ++= List(
      new TreeTableColumn[Person, String] {
        text = "First Name"
        cellValueFactory = {
          _.value.value.value.firstName
        }
        prefWidth = 180
      },
      new TreeTableColumn[Person, String]() {
        text = "Last Name"
        cellValueFactory = {
          _.value.value.value.lastName
        }
        prefWidth = 180
      }
    )
  }

  stage = new PrimaryStage {
    title = "Simple Table View"
    scene = new Scene {
      content = new VBox() {
        children = List(
          new Button("Test it") {
            onAction = p => {
              val foo: ObservableBuffer[TreeItem[Person]] = table.root.value.children.map(p => {
                val bar: TreeItem[Person] = p
                p
              })
              table.root.value.children = foo
            }
          },
          table)
      }
    }
  }
}

// Person.scala
class Person(firstName_ : String, lastName_ : String, phone_ : String, favoriteColor_ : Color = Color.Blue) {

  val firstName = new StringProperty(this, "firstName", firstName_)
  val lastName = new StringProperty(this, "lastName", lastName_)
  val phone = new StringProperty(this, "phone", phone_)
  val favoriteColor = new ObjectProperty(this, "favoriteColor", favoriteColor_)

  firstName.onChange((x, _, _) => System.out.println(x.value))
}


Compiling this code with the latest Scala and ScalaFX yields:

[error]  found   : scalafx.collections.ObservableBuffer[javafx.scene.control.TreeItem[Person]]
[error]  required: scalafx.collections.ObservableBuffer[scalafx.scene.control.TreeItem[Person]]
[error]               val foo: ObservableBuffer[TreeItem[Person]] = table.root.value.children.map(p => {
[error]                                                                                          ^
[error] one error found


And in this case, adding import statements didn’t help, what a pity. Unfortunately, this bug has been open since 2014 with a "help wanted" tag and nothing is happening. I guess I will have to dive into the source code of ScalaFX.

17 November, 2017 06:27AM by Norbert Preining

## Renata Scheibler <!-- document.write( "<a href=\"#\" id=\"https://rsip22.github.io/blog/hello-world.html_hide\" onClick=\"exclude( 'https://rsip22.github.io/blog/hello-world.html' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://rsip22.github.io/blog/hello-world.html_show\" style=\"display:none;\" onClick=\"show( 'https://rsip22.github.io/blog/hello-world.html' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Hello, world!

Hello, world! For those who are meeting me for the first time, I am a 31-year-old History teacher from Porto Alegre, Brazil.

Some people might know me from the Python community, because I have been leading PyLadies Porto Alegre and helping organize Django Girls workshops in my state since 2016. If you don't, that's okay. Either way, it's nice to have you here.

Ever since I learned about Rails Girls Summer of Code, during the International Free Software Forum - FISL 16, I have been wanting to get into a tech internship program. Google Summer of Code made it onto my radar as well, but I didn't really feel like I knew enough to try and get into those programs... until I found Outreachy. From their site:

Outreachy is an organization that provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.

There were many good projects to choose from in this round, a lot of them with few requirements - and most with requirements that I believed I could fulfill, such as some knowledge of HTML, CSS, Python or Django.

I ought to say that I am not an expert in any of those. And, since you're reading this, I'm going to be completely honest. Coding is hard. Coding is hard to learn; it takes a lot of studying and a lot of practice. Even though I have been messing around with computers pretty much since I was a kid, because I was a girl lucky enough to have a father who owned a computer store, I hadn't begun learning how to program until mid-2015 - and I am still learning.

I think I became such an autodidact because I had to (and, of course, because I was given the conditions to be, such as having spare time to study when I wasn't at school). I had to get any and all information from my surroundings and turn it into knowledge that I could use to achieve my goals. In a time when I could only get new computer games through a CD-ROM and the computer I was allowed to use didn't have a CD-ROM drive, I had to try and learn how to open a computer cabinet and connect/disconnect hardware properly, so I could use my brother's CD-ROM drive on the computer I was allowed to use and install the games without anyone noticing. When, back in 1998, I couldn't connect to the internet because the computer I was allowed to use didn't have a modem, I had to learn about networks to figure out how to do it from my brother's computer on the LAN (local network).

I would go to the community public library and read books and any tech magazines I could get my hands on (libraries didn't usually have computers for public use back then). It was around 2002 when I learned how to create HTML sites by studying the source code of pages I had saved to read offline during one of the very, very few times I was allowed to access the internet and browse the web. Of course, the site I created back then never saw the light of day, because I didn't really have internet access at home.

So, how come it is that only now, 14 years later, I am trying to get into tech?

Because when I finished high school in 2003, I was still a minor and my family didn't allow me to go to Vocational School and take an IT course. (Never mind that my own oldest brother had graduated in IT and had been working in the field for almost a decade.)

I ended up going to study... teacher training in History as an undergrad course.

A lot has happened since then. I took the exam to become a public school teacher, and more than two years passed without my being called to work. I spent three years in odd jobs that paid barely enough to cover rent (and, sometimes, not even that).

Since IT is the new thing and all the jobs are in IT, it finally, finally seemed okay for me to take that Vocational School training at a public school - and so I did.

I gotta say, I thought that while I studied, I would be able to get some sort of job or internship to help with my learning. After all, I had seen it easily happening with people I met before getting into the course. And by "people", of course, I mean white men. For me, it took a whole year of searching, trying and interviewing for me to get an internship related to the field - tech support in a school computer lab, running GNU/Linux. And, in that very same week, I was hired as a public school teacher.

There is a lot more... actually, there is so much more to this story, but I think I have told enough for now. Enough to know where I came from and who I am, as of now. I hope you stick around. I am bound to write here every two weeks, so I guess I will see you then! o/

17 November, 2017 02:49AM by Renata

# November 16, 2017

## Michal Čihař <!-- document.write( "<a href=\"#\" id=\"https://blog.cihar.com/archives/2017/11/16/new-projects-hosted-weblate/?utm_source=rss2_hide\" onClick=\"exclude( 'https://blog.cihar.com/archives/2017/11/16/new-projects-hosted-weblate/?utm_source=rss2' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://blog.cihar.com/archives/2017/11/16/new-projects-hosted-weblate/?utm_source=rss2_show\" style=\"display:none;\" onClick=\"show( 'https://blog.cihar.com/archives/2017/11/16/new-projects-hosted-weblate/?utm_source=rss2' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### New projects on Hosted Weblate

Hosted Weblate also provides free hosting for free software projects. The hosting request queue had grown too long, so it was time to process it and include new projects.

This time, the newly hosted projects include:

If you want to support this effort, please donate to Weblate; recurring donations are especially welcome to keep this service alive. You can do that easily on Liberapay or Bountysource.

Filed under: Debian English SUSE Weblate

## Colin Watson <!-- document.write( "<a href=\"#\" id=\"https://www.chiark.greenend.org.uk/~cjwatson/blog/kitten-block-equivalent-for-firefox-57.html_hide\" onClick=\"exclude( 'https://www.chiark.greenend.org.uk/~cjwatson/blog/kitten-block-equivalent-for-firefox-57.html' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://www.chiark.greenend.org.uk/~cjwatson/blog/kitten-block-equivalent-for-firefox-57.html_show\" style=\"display:none;\" onClick=\"show( 'https://www.chiark.greenend.org.uk/~cjwatson/blog/kitten-block-equivalent-for-firefox-57.html' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Kitten Block equivalent for Firefox 57

I’ve been using Kitten Block for years, since I don’t really need the blood pressure spike caused by accidentally following links to certain UK newspapers. Unfortunately it hasn’t been ported to Firefox 57. I tried emailing the author a couple of months ago, but my email bounced.

However, if your primary goal is just to block the websites in question rather than seeing kitten pictures as such (let’s face it, the internet is not short of alternative sources of kitten pictures), then it’s easy to do with uBlock Origin. After installing the extension if necessary, go to Tools → Add-ons → Extensions → uBlock Origin → Preferences → My filters, and add www.dailymail.co.uk and www.express.co.uk, each on its own line. (Of course you can easily add more if you like.) Voilà: instant tranquility.

Incidentally, this also works fine on Android. The fact that it was easy to install a good ad blocker without having to mess about with a rooted device or strange proxy settings was the main reason I switched to Firefox on my phone.

16 November, 2017 12:00AM by Colin Watson

# November 15, 2017

## Steinar H. Gunderson <!-- document.write( "<a href=\"#\" id=\"http://blog.sesse.net/blog/tech/2017-11-15-23-43_introducing_narabu_part_6_performance.html_hide\" onClick=\"exclude( 'http://blog.sesse.net/blog/tech/2017-11-15-23-43_introducing_narabu_part_6_performance.html' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://blog.sesse.net/blog/tech/2017-11-15-23-43_introducing_narabu_part_6_performance.html_show\" style=\"display:none;\" onClick=\"show( 'http://blog.sesse.net/blog/tech/2017-11-15-23-43_introducing_narabu_part_6_performance.html' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Introducing Narabu, part 6: Performance

Narabu is a new intraframe video codec. You probably want to read part 1, part 2, part 3, part 4 and part 5 first.

As I wrote in part 5, there isn't a big splashy ending here where everything is resolved; you're basically getting some graphs with some open questions and some interesting observations.

First of all, though, I'll need to make a correction: In the last part, I wrote that encoding takes 1.2 ms for 720p luma-only on my GTX 950, which isn't correct—I remembered the wrong number. The right number is 2.3 ms, which I guess explains even more why I don't think it's acceptable at the current stage. (I'm also pretty sure it's possible to rearchitect the encoder so that it's much better, but I am moving on to other video-related things for the time being.)

I encoded a picture straight off my DSLR (luma-only) at various resolutions, keeping the aspect. Then I decoded it a bunch of times on my GTX 950 (low-end last-generation NVIDIA) and on my HD 4400 (ultraportable Haswell laptop) and measured the times. They're normalized for megapixels per second decoded; remember that doubling width (x axis) means quadruple the pixels. Here it is:

I'm not going to comment much beyond two observations:

• Caches matter, even on GPU. This is the same data over and over again (so small images get an unrealistic boost), so up to a certain point, it's basically all in L1.
• The GTX 950 doesn't really run away from the Intel card before it's getting enough data to chew on. Bigger GPUs don't have faster cores—they're just more parallel.

Encoding only contains the GTX 950 because I didn't finish the work to get that single int64 divide off:

This is… interesting. I have few explanations. Probably more benchmarking and profiling would be needed to make sense of any of it. In fact, it's so strange that I would suspect a bug, but it does indeed seem to create a valid bitstream that is decoded by the decoder.

Do note, however, that seemingly even on the smallest resolutions, there's a 1.7 ms base cost (you can't see it on the picture, but you'd see it in an unnormalized graph). I don't have a very good explanation for this either (even though there are some costs that are dependent on the alphabet size instead of the number of pixels), but figuring it out would probably be a great start for getting the performance up.

So that concludes the series, on a cliffhanger. :-) Even though it's not in a situation where you can just take it and put it into something useful, I hope it was an interesting introduction to the GPU! And in the meantime, I've released version 1.6.3 of Nageru, my live video mixer (also heavily GPU-based) with various small adjustments and bug fixes found before and during Trøndisk. And Movit is getting compute shaders for that extra speed boost, although parts of it are bending my head. Exciting times in GPU land :-)

## Daniel Pocock <!-- document.write( "<a href=\"#\" id=\"https://danielpocock.com/linking-hackerspaces-with-opendht-and-ring_hide\" onClick=\"exclude( 'https://danielpocock.com/linking-hackerspaces-with-opendht-and-ring' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://danielpocock.com/linking-hackerspaces-with-opendht-and-ring_show\" style=\"display:none;\" onClick=\"show( 'https://danielpocock.com/linking-hackerspaces-with-opendht-and-ring' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Linking hackerspaces with OpenDHT and Ring

Francois and Nemen at the FIXME hackerspace (Lausanne) weekly meeting are experimenting with the Ring peer-to-peer softphone:

Francois is using a Raspberry Pi and a PiCam to develop a telepresence network for hackerspaces (the big screens in the middle of the photo).

The original version of the telepresence solution is using WebRTC. Ring's OpenDHT potentially offers more privacy and resilience.

15 November, 2017 07:57PM by Daniel Pocock

## Kees Cook <!-- document.write( "<a href=\"#\" id=\"https://outflux.net/blog/archives/2017/11/14/security-things-in-linux-v4-14/_hide\" onClick=\"exclude( 'https://outflux.net/blog/archives/2017/11/14/security-things-in-linux-v4-14/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://outflux.net/blog/archives/2017/11/14/security-things-in-linux-v4-14/_show\" style=\"display:none;\" onClick=\"show( 'https://outflux.net/blog/archives/2017/11/14/security-things-in-linux-v4-14/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### security things in Linux v4.14

Previously: v4.13.

Linux kernel v4.14 was released this last Sunday, and there’s a bunch of security things I think are interesting:

vmapped kernel stack on arm64
Similar to the same feature on x86, Mark Rutland and Ard Biesheuvel implemented CONFIG_VMAP_STACK for arm64, which moves the kernel stack to an isolated and guard-paged vmap area. With traditional stacks, there were two major risks when exhausting the stack: overwriting the thread_info structure (which contained the addr_limit field which is checked during copy_to/from_user()), and overwriting neighboring stacks (or other things allocated next to the stack). While arm64 previously moved its thread_info off the stack to deal with the former issue, this vmap change adds the last bit of protection by nature of the vmap guard pages. If the kernel tries to write past the end of the stack, it will hit the guard page and fault. (Testing for this is now possible via LKDTM’s STACK_GUARD_PAGE_LEADING/TRAILING tests.)

One aspect of the guard page protection that will need further attention (on all architectures) is that if the stack grew because of a giant Variable Length Array on the stack (effectively an implicit alloca() call), it might be possible to jump over the guard page entirely (as seen in the userspace Stack Clash attacks). Thankfully the use of VLAs is rare in the kernel. In the future, hopefully we’ll see the addition of PaX/grsecurity’s STACKLEAK plugin which, in addition to its primary purpose of clearing the kernel stack on return to userspace, makes sure stack expansion cannot skip over guard pages. This “stack probing” ability will likely also become directly available from the compiler as well.

set_fs() balance checking
Related to the addr_limit field mentioned above, another class of bug is finding a way to force the kernel into accidentally leaving addr_limit open to kernel memory through an unbalanced call to set_fs(). In some areas of the kernel, in order to reuse userspace routines (usually VFS or compat related), code will do something like: set_fs(KERNEL_DS); ...some code here...; set_fs(USER_DS);. When the USER_DS call goes missing (usually due to a buggy error path or exception), subsequent system calls can suddenly start writing into kernel memory via copy_to_user (where the “to user” really means “within the addr_limit range”).

Thomas Garnier implemented USER_DS checking at syscall exit time for x86, arm, and arm64. This means that a broken set_fs() setting will not extend beyond the buggy syscall that fails to set it back to USER_DS. Additionally, as part of the discussion on the best way to deal with this feature, Christoph Hellwig and Al Viro (and others) have been making extensive changes to avoid the need for set_fs() being used at all, which should greatly reduce the number of places where it might be possible to introduce such a bug in the future.

SLUB freelist hardening
A common class of heap attacks is overwriting the freelist pointers stored inline in the unallocated SLUB cache objects. PaX/grsecurity developed an inexpensive defense that XORs the freelist pointer with a global random value (and the storage address). Daniel Micay improved on this by using a per-cache random value, and I refactored the code a bit more. The resulting feature, enabled with CONFIG_SLAB_FREELIST_HARDENED, makes freelist pointer overwrites very hard to exploit unless an attacker has found a way to expose both the random value and the pointer location. This should render blind heap overflow bugs much more difficult to exploit.

Additionally, Alexander Popov implemented a simple double-free defense, similar to the “fasttop” check in the GNU C library, which will catch sequential free()s of the same pointer. (And has already uncovered a bug.)

Future work would be to provide similar metadata protections to the SLAB allocator (though SLAB doesn’t store its freelist within the individual unused objects, so it has a different set of exposures compared to SLUB).

setuid-exec stack limitation
Continuing the various additional defenses to protect against future problems related to userspace memory layout manipulation (as shown most recently in the Stack Clash attacks), I implemented an 8MiB stack limit for privileged (i.e. setuid) execs, inspired by a similar protection in grsecurity, after reworking the secureexec handling by LSMs. This complements the unconditional limit to the size of exec arguments that landed in v4.13.

randstruct automatic struct selection
While the bulk of the port of the randstruct gcc plugin from grsecurity landed in v4.13, the last of the work needed to enable automatic struct selection landed in v4.14. This means that the coverage of randomized structures, via CONFIG_GCC_PLUGIN_RANDSTRUCT, now includes one of the major targets of exploits: function pointer structures. Without knowing the build-randomized location of a callback pointer an attacker needs to overwrite in a structure, exploits become much less reliable.

structleak passed-by-reference variable initialization
Ard Biesheuvel enhanced the structleak gcc plugin to initialize all variables on the stack that are passed by reference when built with CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL. Normally the compiler will yell if a variable is used before being initialized, but it silences this warning if the variable’s address is passed into a function call first, as it has no way to tell if the function did actually initialize the contents. So the plugin now zero-initializes such variables (if they hadn’t already been initialized) before the function call that takes their address. Enabling this feature has a small performance impact, but solves many stack content exposure flaws. (In fact at least one such flaw reported during the v4.15 development cycle was mitigated by this plugin.)

improved boot entropy
Laura Abbott and Daniel Micay improved early boot entropy available to the stack protector by both moving the stack protector setup later in the boot, and including the kernel command line in boot entropy collection (since with some devices it changes on each boot).

eBPF JIT for 32-bit ARM
The ARM BPF JIT had been around a while, but it didn’t support eBPF (and, as a result, did not provide constant value blinding, which meant it was exposed to being used by an attacker to build arbitrary machine code with BPF constant values). Shubham Bansal spent a bunch of time building a full eBPF JIT for 32-bit ARM which both speeds up eBPF and brings it up to date on JIT exploit defenses in the kernel.

seccomp improvements
Tyler Hicks addressed a long-standing deficiency in how seccomp could log action results. In addition to creating a way to mark a specific seccomp filter as needing to be logged with SECCOMP_FILTER_FLAG_LOG, he added a new action result, SECCOMP_RET_LOG. With these changes in place, it should be much easier for developers to inspect the results of seccomp filters, and for process launchers to generate logs for their child processes operating under a seccomp filter.

Additionally, I finally found a way to implement an often-requested feature for seccomp, which was to kill an entire process instead of just the offending thread. This was done by creating the SECCOMP_RET_ACTION_FULL mask (née SECCOMP_RET_ACTION) and implementing SECCOMP_RET_KILL_PROCESS.

That’s it for now; please let me know if I missed anything. The v4.15 merge window is now open!

15 November, 2017 05:23AM by kees

## Russ Allbery <!-- document.write( "<a href=\"#\" id=\"https://www.eyrie.org/~eagle/reviews/books/0-7636-5458-2.html_hide\" onClick=\"exclude( 'https://www.eyrie.org/~eagle/reviews/books/0-7636-5458-2.html' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://www.eyrie.org/~eagle/reviews/books/0-7636-5458-2.html_show\" style=\"display:none;\" onClick=\"show( 'https://www.eyrie.org/~eagle/reviews/books/0-7636-5458-2.html' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Review: The Piper's Son

Review: The Piper's Son, by Melina Marchetta

Series: Francesca #2
Publisher: Candlewick Press
Copyright: 2010
Printing: 2011
ISBN: 0-7636-5458-2
Format: Kindle
Pages: 330

Tom Mackee's family has fallen apart. The impetus was the death of his uncle Joe in the London tube terrorist bombings, but that was only the start. He destroyed his chances with the only woman he really loved. His father's drinking got out of control, his mother left with his younger sister to live in a different city, and he refused to go with them and abandon his father. But then, six months later, his father abandoned him anyway. As this novel opens, Tom collapses while performing a music set, high on drugs and no sleep, and wakes up to discover his roommates have been fired from their jobs for stealing, and in turn have thrown him out of their apartment. He's at rock bottom.

The one place he can turn for a place to stay is his aunt Georgie, the second (although less frequent) viewpoint character of this book. She was the one who took the trip to the UK to try to find out what happened and retrieve her brother's body, and the one who had to return to Australia with nothing. Her life isn't in much better shape than Tom's. She's kept her job, but she's pregnant by her ex-boyfriend and is barely talking to him, since he now has a son by another woman he met during their separation. And she's not even remotely over her grief.

The whole Finch/Mackee family is, in short, a disaster. But they have a few family relationships left that haven't broken, some underlying basic decency, and some patient and determined friends.

I should warn up-front, despite having read this book without knowing this, that this is a sequel to Saving Francesca, set five years later and focusing on secondary characters from the original novel. I've subsequently read that book as well, though, and I don't think reading it first is necessary. This is one of the rare books where being a sequel made it a better stand-alone novel. I never felt a gap of missing story, just a rich and deep background of friendships and previous relationships that felt realistic. People are embedded in networks of relationships even when they feel the most alone, and I really enjoyed seeing that surface in this book. All those patterns from Tom's past didn't feel like information I was missing. They felt like glimpses of what you'd see if you looked into any other person's life.

The plot summary above might make The Piper's Son sound like a depressing drama fest, but Marchetta made an excellent writing decision: the worst of this has already happened before the start of the book, and the rest is in the first chapter. This is not at all a book about horrible things happening to people. It's a book about healing. An authentic, prickly, angry healing that doesn't forget and doesn't turn into simple happily-ever-after stories, but does involve a lot of recognition that one has been an ass, and that it's possible to be less of an ass in the future, and maybe some things can be fixed.

A plot summary might fool you into thinking that this is a book about a boy and his father, or about dealing with a drunk you still love. It's not. The bright current under this whole story is not father-son bonding. It's female friendships. Marchetta pulls off a beautiful double-story, writing a book that's about Tom, and Georgie, and the layered guilt and tragedy of the Finch/Mackee family, but whose emotional heart is their friends. Francesca, Justine, absent Siobhan. Georgie's friend Lucia. Ned, the cook, and his interactions with Tom's friends. And Tara Finke, also mostly absent, but perfectly written into the story in letters and phone calls.

Marchetta never calls unnecessary attention to this, keeping the camera on Tom and Georgie, but the process of reading this book is a dawning realization of just how much work friendship is doing under the surface, how much back-channel conversation is happening off the page, and how much careful and thoughtful and determined work went into providing Tom a floor, a place to get his feet under him, and enough of a shove for him to pull himself together. Pulling that off requires a deft and subtle authorial touch, and I'm in awe at how well it worked.

This is a beautifully written novel. Marchetta never belabors an emotional point, sticking with a clear and spare description of actions and thoughts, with just the right sentences scattered here and there to expose the character's emotions. Tom's family is awful at communication, which is much of the reason why they start the book in the situation they're in, but Marchetta somehow manages to write that in a way that didn't just frustrate me or make me want to start banging their heads together. She somehow conveys the extent to which they're trying, even when they're failing, and adds just the right descriptions so that the reader can follow the wordless messages they send each other even when they can't manage to talk directly. I usually find it very hard to connect with people who can only communicate by doing things rather than saying them. It's a high compliment to the author that I felt I understood Tom and his family as well as I did.

One bit of warning: while this is not a story of a grand reunion with an alcoholic father where all is forgiven because family, thank heavens, there is an occasional wiggle in that direction. There is also a steady background assumption that one should always try to repair family relationships, and a few tangential notes about the Finches and Mackees that made me think there was a bit more abuse here than anyone involved wants to admit. I don't think the book is trying to make apologies for this, and instead is trying to walk the fine line of talking about realistically messed up families, but I also don't have a strong personal reaction to that type of story. If you have an aversion to "we should all get along because faaaaamily" stories, you may want to skip this book, or at least go in pre-warned.

That aside, the biggest challenge I had in reading this book was not breaking into tears. The emotional arc is just about perfect. Tom and Georgie never stay stuck in the same emotional cycle for too long, Marchetta does a wonderful job showing irritating characters from a slightly different angle and having them become much less irritating, and the interactions between Tom, Tara, and Francesca are just perfect. I don't remember reading another book that so beautifully captures that sensation of knowing that you've been a total ass, knowing that you need to stop, but realizing just how much work you're going to have to do, and how hard that work will be, once you own up to how much you fucked up. That point where you keep being an ass for a few moments longer, because stopping is going to hurt so much, but end up stopping anyway because you can't stand yourself any more. And stopping and making amends is hard and hurts badly, and yet somehow isn't quite as bad as you thought it was going to be.

This is really great stuff.

One final complaint, though: what is it with mainstream fiction and the total lack of denouement? I don't read very much mainstream fiction, but this is the second really good mainstream book I've read (after The Death of Bees) that hits its climax and then unceremoniously dumps the reader on the ground and disappears. Come back here! I wasn't done with these people! I don't need a long happily-ever-after story, but give me at least a handful of pages to be happy with the characters after crying with them for hours! ARGH.

But, that aside, the reader does get that climax, and it's note-perfect to the rest of the book. Everyone is still themselves, no one gets suddenly transformed, and yet everything is... better. It's the kind of book you can trust.

Highly, highly recommended.

Rating: 9 out of 10

# November 14, 2017

## Jonathan Dowland <!-- document.write( "<a href=\"#\" id=\"http://jmtd.net/log/wadc_2.2/_hide\" onClick=\"exclude( 'http://jmtd.net/log/wadc_2.2/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://jmtd.net/log/wadc_2.2/_show\" style=\"display:none;\" onClick=\"show( 'http://jmtd.net/log/wadc_2.2/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

I have recently released version 2.2 of Wad Compiler, a lazy functional programming language and IDE for the construction of Doom maps.

The biggest change in this version is a reworking of the preferences system (to use the Java Preferences API), the wadcli command-line interface respecting preferences and a new preferences UI dialog (adapted from Quake Injector).

There are two new example maps: A Labyrinth demonstration contributed by "Yoruk", and a Heretic map Bird Cage by yours truly. These are both now amongst the largest examples in the collection, although laby.wl was generated by a higher-level program.

For more information see the release notes and the reference, or check out the new gallery of examples or skip straight to downloads.

I have no plans to work on WadC further (but never say never, I suppose.)

# November 13, 2017

## Steve Kemp <!-- document.write( "<a href=\"#\" id=\"https://blog.steve.fi/paternity_leave_is_half_over.html_hide\" onClick=\"exclude( 'https://blog.steve.fi/paternity_leave_is_half_over.html' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://blog.steve.fi/paternity_leave_is_half_over.html_show\" style=\"display:none;\" onClick=\"show( 'https://blog.steve.fi/paternity_leave_is_half_over.html' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Paternity-leave is half-over

I'm taking the month of November off work, so that I can exclusively take care of our child. Despite it being a difficult time, with him teething, it has been a great half-month so far.

During the course of the month I've found my interest in a lot of technological things waning, so I've killed my account(s) on a few platforms, and scaled back others - if I could exclusively do child-care for the next 20 years I'd be very happy, but sadly I don't think that is terribly realistic.

My interest in things hasn't entirely vanished though, to the extent that I found the time to replace my use of etcd with consul yesterday, and I'm trying to work out how to simplify my hosting setup. Right now I have a bunch of servers doing two kinds of web-hosting:

Hosting static sites is trivial, whether with a virtual machine, via Amazon's S3 service, or some other static host such as netlify.

Hosting for "dynamic stuff" is harder. These days a trend for "serverless" deployments allows you to react to events and be dynamic, but not everything can be a short-lived piece of ruby/javascript/lambda. It feels like I could set up a generic platform for launching containers, or otherwise modernising FastCGI, etc, but I'm not sure what the point would be. (I'd still be the person maintaining it, and it'd still be a hassle. I've zero interest in selling things to people, as that only means more support.)

In short I have a bunch of servers, they mostly tick over unattended, but I'm not really sure I want to keep them running for the next 10+ years. Over time our child will deserve, demand, and require more attention which means time for personal stuff is only going to diminish.

Simplifying things now, before it is too late, wouldn't be a bad thing to do.

## Markus Koschany <!-- document.write( "<a href=\"#\" id=\"https://gambaru.de/blog/2017/11/13/my-free-software-activities-in-october-2017/_hide\" onClick=\"exclude( 'https://gambaru.de/blog/2017/11/13/my-free-software-activities-in-october-2017/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://gambaru.de/blog/2017/11/13/my-free-software-activities-in-october-2017/_show\" style=\"display:none;\" onClick=\"show( 'https://gambaru.de/blog/2017/11/13/my-free-software-activities-in-october-2017/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### My Free Software Activities in October 2017

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

#### Debian Games

• I packaged a new upstream version of springlobby. There is even a more recent one now but I discovered that it would fail to build from source. I reported the issue and now I am waiting for another release.
• These packages were also updated: bullet, tuxfootball (#876481), berusky (#877979), spring, hitori and trackballs.
• I released a new version of cube2-data, a DFSG-free version of the Sauerbraten game. This release was largely made possible thanks to the work of Nyav.
• I prepared two stable point releases of berusky and simutrans to fix #877979 and #869029 for users of Debian’s stable distributions too. The bug in Berusky is already resolved but I’m still waiting for the confirmation to upload simutrans (#878668).
• I updated wing and biniax2. Here I discovered that biniax2 would segfault immediately at startup after recompilation. I tracked down the issue to some C code that caused undefined behavior, prepared a patch and released a fixed revision.
• I sponsored a new upstream version of mupen64plus-qt.

#### Debian Java

• This month I started to work on fixing Java 9 bugs, since Java 9 shall become the new default JDK/JRE for Buster. The bug reports were filed by Chris West who did the important work of identifying build failures and broken packages. I started with some low-hanging fruit first and the following packages are now Java 9 ready: libgetopt-java, libjide-oss-java, activemq-protobuf, antelope, yecht, slashtime, colorpicker, f2j, libreadline-java, libjaxp1.3-java, jlapack, isorelax, libisrt-java, rxtx, uima-addons.
• New upstream releases this month: apktool, jboss-xnio, okio, pdfsam, libsejda-java, bcel, autocomplete, mediathekview, sweethome3d.
• MediathekView introduced yet another build-dependency. Let’s welcome libokhttp-java in Debian.
• I upgraded jackson-databind to fix CVE-2017-7525. While I was at it, I continued this work with jackson-core, jackson-annotations, jackson-dataformat-xml, jackson-jr, jackson-datatype-joda, jackson-module-jaxb-annotations, jackson-dataformat-cbor, jackson-dataformat-smile, jackson-dataformat-yaml and jackson-jaxrs-providers. I also requested the removal of jackson-datatype-guava.
• More resolved RC issues: commons-io (#873118), tycho (#879250)
• Package updates: mockobjects (converted from CDBS to DH) and jblas (RC #877225, #873212, #698176)
• The Maven 2 to Maven 3 transition caused (and still causes) a lot of fallout: I investigated the following packages with RC bugs. In most cases the issue was in another package, so the bugs could be closed, but there were also packages like conversant-disruptor (#869002) which caused build failures unrelated to the transition. In total 15 packages were triaged or fixed: jasypt (#871195), mustache-java (#869009), libslf4j-java, apache-log4j2, conversant-disruptor, powermock (#869017), jetty9 (#869021), maven-site-plugin (#869001), javamail (#871102), assertj-core (#871131), java-allocation-instrumenter (#869251), json-smart (#868603), sisu-guice (#868611), maven-archiver (#871069), doxia-sitetools (#875948)
• I have started to work on a new upstream version of triplea, multiple strategy games written in Java. The update would fix a couple of bugs and make the package ready for Java 9.
• It was also requested to upgrade Gradle to version 3.4.1 at least. I have made good progress but there is more work to do.

#### Debian LTS

This was my twentieth month as a paid contributor and I have been paid to work 19 hours on Debian LTS, a project started by Raphaël Hertzog. I will catch up with the remaining 1.75 hours in November. In that time I did the following:

• From 30 October to 5 November I was in charge of our LTS frontdesk. I triaged bugs in jasperreports, jbossas4, libstruts1.2-java, httpcomponents-client, vim, emacs23, trafficserver, async-http-client, liblouis, wordpress, apr, apr-utils, redis, nautilus, libpam4j and spip.
• I decided to mark jbossas4 as end-of-life because the Java application server was never fully packaged and the version in Wheezy is already nine years old. I investigated the open security issues in jasperreports and contacted upstream but they have not published any details yet.
• I pinged bug #878088. The reportbug maintainer still has to respond to the idea of informing the security teams when users report bugs in security uploads. I will discuss with the rest of the team whether it would be helpful to patch reportbug in Wheezy/Jessie/Stretch now.
• DLA-1151-1 and DLA-1160-1. Issued two security updates for WordPress addressing 10 CVE. It was later discovered that the patch for CVE-2017-14990 was incomplete and caused a regression when using WordPress’ multi-site feature. Single-site installations were not affected. The complete fix would either include a database upgrade or a different approach without using the new database field “signup_id”. I reverted the patch for now and issued a regression update in DLA-1151-2.
• DLA-1158-1. Issued a security update for bchunk fixing 3 CVE.
• DLA-1159-1. Issued a security update for graphicsmagick fixing 2 CVE.
• DLA-1164-1. Issued a security update for mupdf fixing 2 CVE.
• DLA-1165-1. Issued a security update for libpam4j fixing 1 CVE.
• DLA-1167-1. Issued a security update for ruby-yajl fixing 1 CVE.
• DLA-1157-1. I uploaded a security update for openssl. The update was prepared by Kurt Roeckx, the maintainer of openssl.

#### Misc

• I prepared the security updates for libpam4j (DSA-4025-1) and bchunk (DSA-4026-1) and fixed the same issues in Sid and Buster.

Thanks for reading and see you next time.

## Ben Hutchings <!-- document.write( "<a href=\"#\" id=\"https://www.decadent.org.uk/ben/blog/debian-lts-work-october-2017.html_hide\" onClick=\"exclude( 'https://www.decadent.org.uk/ben/blog/debian-lts-work-october-2017.html' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://www.decadent.org.uk/ben/blog/debian-lts-work-october-2017.html_show\" style=\"display:none;\" onClick=\"show( 'https://www.decadent.org.uk/ben/blog/debian-lts-work-october-2017.html' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Debian LTS work, October 2017

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 9 hours from September. I worked 20 hours and will carry over 4 hours to the next month.

I prepared and uploaded an update to dnsmasq to fix some urgent security issues. I issued DLA-1124-1 for this update.

I prepared and released another update on the Linux 3.2 longterm stable branch (3.2.94) and I began preparing the next update, but I didn't upload an update to Debian.

## François Marier <!-- document.write( "<a href=\"#\" id=\"http://feeding.cloud.geek.nz/posts/test-mail-server-ubuntu-debian/_hide\" onClick=\"exclude( 'http://feeding.cloud.geek.nz/posts/test-mail-server-ubuntu-debian/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://feeding.cloud.geek.nz/posts/test-mail-server-ubuntu-debian/_show\" style=\"display:none;\" onClick=\"show( 'http://feeding.cloud.geek.nz/posts/test-mail-server-ubuntu-debian/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Test mail server on Ubuntu and Debian

I wanted to set up a mail service on a staging server that would send all outgoing emails to a local mailbox. This avoids sending emails out to real users when running the staging server with production data.

First, install the postfix mail server:

apt install postfix


and choose the "Local only" mail server configuration type.

Then change the following in /etc/postfix/main.cf:

default_transport = error


to:

default_transport = local:root


and restart postfix:

systemctl restart postfix.service


Once that's done, you can find all of the emails in /var/mail/root.
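If you'd rather inspect that mailbox from a script than with mutt, Python's standard mailbox module reads the mbox format used by /var/mail/root. A minimal sketch (it builds a throwaway sample mbox file, since reading the real /var/mail/root normally requires root privileges):

```python
import mailbox
import os
import tempfile

# Build a tiny sample mbox file; in practice you would point
# mailbox.mbox() at /var/mail/root instead.
sample = (
    "From staging@localhost Fri Nov 10 12:00:00 2017\n"
    "From: staging@localhost\n"
    "Subject: Staging test\n"
    "\n"
    "This mail was caught by the local-only postfix.\n"
    "\n"
)
path = os.path.join(tempfile.mkdtemp(), "root")
with open(path, "w") as f:
    f.write(sample)

# Iterate over the messages and print their subjects.
for message in mailbox.mbox(path):
    print(message["Subject"])
```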

To read them, you can install mutt:

apt install mutt


and then view the mailbox like this:

mutt -f /var/mail/root


# November 12, 2017

## Lars Wirzenius <!-- document.write( "<a href=\"#\" id=\"http://blog.liw.fi/posts/2017/11/13/unit_and_integration_testing_an_analogy_with_cars/_hide\" onClick=\"exclude( 'http://blog.liw.fi/posts/2017/11/13/unit_and_integration_testing_an_analogy_with_cars/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://blog.liw.fi/posts/2017/11/13/unit_and_integration_testing_an_analogy_with_cars/_show\" style=\"display:none;\" onClick=\"show( 'http://blog.liw.fi/posts/2017/11/13/unit_and_integration_testing_an_analogy_with_cars/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Unit and integration testing: an analogy with cars

A unit is a part of your program you can test in isolation. You write unit tests to test all aspects of it that you care about. If all your unit tests pass, you should know that your unit works well.

Integration tests are for testing that when your various well-tested, high quality units are combined, integrated, they work together. Integration tests test the integration, not the individual units.

You could think of building a car. Your units are the ball bearings, axles, wheels, brakes, etc. Your unit tests for the ball bearings might test, for example, that they can handle a billion rotations, at various temperatures, etc. Your integration test would assume the ball bearings work, and should instead test that the ball bearings are installed in the right way so that the car, as a whole, can run for many kilometers, accelerating and braking every kilometer, uses only so much fuel, produces only so much pollution, and doesn't kill passengers in case of a crash.
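To make the analogy concrete, here is a minimal sketch in Python (the Wheel, Brake, and Car classes are invented for illustration): the unit test exercises one part in isolation, while the integration test assumes the parts work and checks only that they are wired together correctly.

```python
import unittest

class Wheel:
    """A unit: testable entirely on its own."""
    def __init__(self):
        self.rpm = 0

    def spin(self, rpm):
        self.rpm = rpm

class Brake:
    """Another unit."""
    def apply_to(self, wheel):
        wheel.spin(0)

class Car:
    """The integrated whole: wires the units together."""
    def __init__(self):
        self.wheel = Wheel()
        self.brake = Brake()

    def drive(self):
        self.wheel.spin(100)

    def stop(self):
        self.brake.apply_to(self.wheel)

class WheelUnitTest(unittest.TestCase):
    # Unit test: the wheel in isolation, no car required.
    def test_spin(self):
        wheel = Wheel()
        wheel.spin(100)
        self.assertEqual(wheel.rpm, 100)

class CarIntegrationTest(unittest.TestCase):
    # Integration test: assume Wheel and Brake work;
    # test only that the car connects them correctly.
    def test_drive_then_stop(self):
        car = Car()
        car.drive()
        car.stop()
        self.assertEqual(car.wheel.rpm, 0)

# Run both suites without exiting the interpreter.
unittest.main(argv=["example"], exit=False)
```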

## Sven Hoexter <!-- document.write( "<a href=\"#\" id=\"http://sven.stormbind.net/blog/posts/misc_simtec_entropy_key/_hide\" onClick=\"exclude( 'http://sven.stormbind.net/blog/posts/misc_simtec_entropy_key/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://sven.stormbind.net/blog/posts/misc_simtec_entropy_key/_show\" style=\"display:none;\" onClick=\"show( 'http://sven.stormbind.net/blog/posts/misc_simtec_entropy_key/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Offering a Simtec Entropy Key

Since I started to lean a bit towards the concept of minimalism I've got rid of stuff, including all stationary computers. So for now I'm left with just my laptop, and that's something where I do not want to permanently attach a USB entropy key. That's why I have a spare Simtec Entropy Key I no longer use, and am willing to sell.

In case someone is interested, I'm willing to give it away for 20 EUR + shipping. If you can convince me it'll be of use for the Debian project (end up on a DSA managed machine for example) I'm willing to give it away for less. If you're located in Cologne, Copenhagen or Barcelona we might be able, depending on the timing, to do a personal handover (with or without keysigning). Otherwise I guess shipping is mainly interesting for someone also located in Europe.

You can use sven at stormbind dot net or hoexter at debian dot org to contact me and use GPG key 0xA6DC24D9DA2493D1.

## Ben Armstrong <!-- document.write( "<a href=\"#\" id=\"http://syn.theti.ca/2017/11/12/the-joy-of-cat-intelligence/_hide\" onClick=\"exclude( 'http://syn.theti.ca/2017/11/12/the-joy-of-cat-intelligence/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://syn.theti.ca/2017/11/12/the-joy-of-cat-intelligence/_show\" style=\"display:none;\" onClick=\"show( 'http://syn.theti.ca/2017/11/12/the-joy-of-cat-intelligence/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### The Joy of Cat Intelligence

As a cat owner, being surprised by cat intelligence delights me. They’re not exactly smart like a human, but they are smart in cattish ways. The more I watch them and try to sort out what they’re thinking, the more it pleases me to discover they can solve problems and adapt in recognizably intelligent ways, sometimes unique to each individual cat. Each time that happens, it evokes in me affectionate wonder.

Today, I had one of those joyful moments.

First, you need to understand that some months ago, I thought I had my male cat all figured out with respect to mealtimes. I had been cleaning up after my oafish boy who made a watery mess on the floor from his mother’s bowl each morning. I was slightly annoyed, but was mostly curious, and had a hunch. A quick search of the web confirmed it: my cat was left-handed. Not only that, but I learned this is typical for males, whereas females tend to be right-handed. Right away, I knew what I had to do: I adjusted the position of their water bowls relative to their food, swapping them from right to left; the messy morning feedings ceased. I congratulated myself for my cleverness.

You see, after the swap, as he hooked the kibbles with his left paw out of the right-hand bowl, they would land immediately on the floor where he could give them chase. The swap caused the messes to cease because before, his left-handed scoops would land the kibbles in the water to the right; he would then have to scoop the kibble out onto the floor, sprinkling water everywhere! Furthermore, the sodden kibble tended to not skitter so much, decreasing his fun. Or so I thought. Clearly, I reasoned, having sated himself on the entire contents of his own bowl, he turned to pilfering his mother’s leftovers for some exciting kittenish play. I had evidence to back it up, too: he and his mother both seem to enjoy this game, a regular fixture of their mealtime routines. She, too, is adept at hooking out the kibbles, though mysteriously, without making a mess in her water, whichever way the bowls are oriented. I chalked this up to his general clumsiness of movement vs. her daintiness and precision, something I had observed many times before.

Come to think of it, lately, I’ve been seeing more mess around his mother’s bowl again. Hmm. I don’t know why I didn’t stop to consider why …

And then my cat surprised me again.

This morning, with Shadow behind my back as I sat at my computer, finishing up his morning meal at his mother’s bowl, I thought I heard something odd. Or rather, I didn’t hear something. The familiar skitter-skitter sound of kibbles evading capture was missing. So I turned and looked. My dear, devious boy had squished his overgrown body behind his mother’s bowls, nudging them ever so slightly askew to fit the small space. Now the bowl orientation was swapped back again. Stunned, I watched him carefully flip out a kibble with his left paw. Plop! Into the water on the right. Concentrating, he fished for it. A miss! He casually licked the water from his paw. Another try. Swoop! Plop, onto the floor. No chase now, just satisfied munching of his somewhat mushy kibble. And then it dawned on me that I had got it somewhat wrong. Yes, he enjoyed Chase the Kibble, like his mom, but I never recognized he had been indulging in a favourite pastime, peculiarly his own …

I had judged his mealtime messes as accidents, a very human way of thinking about my problem. Little did I know, it was deliberate! His private game was Bobbing for Kibbles. I don’t know if it’s the altered texture, or dabbling in the bowl, but whatever the reason, due to my meddling, he had been deprived of this pleasure. No worries, a thwarted cat will find a way. And that is the joy of cat intelligence.

12 November, 2017 02:46PM by Ben Armstrong

## Russ Allbery <!-- document.write( "<a href=\"#\" id=\"https://www.eyrie.org/~eagle/reviews/books/0-9909741-1-1.html_hide\" onClick=\"exclude( 'https://www.eyrie.org/~eagle/reviews/books/0-9909741-1-1.html' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://www.eyrie.org/~eagle/reviews/books/0-9909741-1-1.html_show\" style=\"display:none;\" onClick=\"show( 'https://www.eyrie.org/~eagle/reviews/books/0-9909741-1-1.html' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Review: Night Moves

Review: Night Moves, by Pat Green

Publisher: Aquarius
Copyright: 2014
ISBN: 0-9909741-1-1
Format: Kindle
Pages: 159

In the fall of 2012, Pat Green was a preacher of a failing church, out of a job, divorced for six months, and feeling like a failure at every part of his life. He was living in a relative's house and desperately needed work, and his father had been a taxi driver. So he got a job as a 6pm-to-6am taxi driver in his home town of Joliet, Illinois. That job fundamentally changed his understanding of the people who live in the night, how their lives work, and what it means to try to help them.

This is nonfiction: a collection of short anecdotes about life as a cab driver and the people who have gotten a ride in Green's cab. They're mostly five or six pages long, just a short story or window into someone's life. I ran across Pat Green's writing by following a sidebar link from a post on Patheos (probably from Love, Joy, Feminism, although I no longer remember). Green has an ongoing blog on Patheos about raising his transgender son (who appears in this collection as a lesbian daughter; he wasn't out yet as transgender when this was published), which is both a good sample of his writing and occasionally has excerpts from this book.

Green's previous writing experience, as mentioned at several points in this collection, was newspaper columns in the local paper. It shows: these essays have the succinct, focused, and bite-sized property of a good newspaper article (or blog post). The writing is a little rough, particularly the remembered dialogue that occasionally falls into the awkward valley between dramatic, constructed fictional dialogue and realistic, in-the-moment speech. But the stories are honest and heartfelt and have the self-reflective genuineness of good preaching paired with a solid sense of narrative. Green tries to observe and report first, both the other person and his own reactions, and only then to draw more general conclusions.

This book is also very hard to read. It's not a sugar-coated view of people who live in the night of a city, nor is it constructed to produce happy endings. The people who Green primarily writes about are poor, or alone, or struggling. The story that got me to buy this book, about taking a teenage girl to a secret liaison that turned out to be secret because her liaison was another girl, is heartwarming but also one of the most optimistic stories here. A lot of people die or just disappear after being regular riders for some time. A lot of people are desperate and don't have any realistic way out. Some people, quite memorably, think they have a way out, and that way out closes on them.

The subtitle of this book is "An Ex-Preacher's Journey to Hell in a Taxi" and (if you followed the link above) you'll see that Green is writing in the Patheos nonreligious section. The other theme of this collection is the church and its effect on the lives of people who are trying to make a life on the outskirts of society. That effect is either complete obliviousness or an active attempt to make their lives even worse. Green lays out the optimism that he felt early in the job, the hope that he could help someone the way a pastor would, guide her to resources, and how it went horribly wrong when those resources turned out to not be interested in helping her at all. And those stories repeat, and repeat.

It's a book that makes it very clear that the actual practice of Christianity in the United States is not about helping poor or marginalized people, but there are certainly plenty of Christian resources for judging, hurting people, closing doors, and forcing abused people back into abusive situations, all in the name of God. I do hope some Christians read this and wince very hard. (And lest the progressive Christians get too smug, one of the stories says almost as brutal things about liberal ministries as the stories of conservative ones.)

I came away feeling even more convinced by the merits of charities that just give money directly to poor people. No paternalism, no assuming that rich people know what they need, no well-meaning intermediary organizations with endless rules, just resources delivered directly to the people who most need resources. Ideally done by the government and called universal basic income. Short of constructing a functional government that builds working public infrastructure, and as a supplement even if one has such a government (since infrastructure can't provide everything), it feels like the most moral choice. Individual people may still stay mired in awful situations, but at least that isn't compounded by other people taking their autonomy away and dictating life to them in complete ignorance.

This is a fairly short and inexpensive book. I found it very much worth reading, and may end up following Green's blog as well. There are moments of joy and moments of human connection, and the details of the day-to-day worries and work style of a taxi driver (in this case, one who drives a company car) are pretty interesting. (Green does skip over some parts for various reasons, such as a lot of the routine fares and most of the stories of violence, but does mention what he's skipping over.) But it's also a brutal book, because so many people are hurting and there isn't much Green can do about it except bear witness and respect them as people in a way that religion doesn't.

Recommended, but brace yourself.

Rating: 8 out of 10

# November 10, 2017

## Paulo Santana <!-- document.write( "<a href=\"#\" id=\"http://phls.com.br/blog/hello-world_hide\" onClick=\"exclude( 'http://phls.com.br/blog/hello-world' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://phls.com.br/blog/hello-world_show\" style=\"display:none;\" onClick=\"show( 'http://phls.com.br/blog/hello-world' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Hello world

I have been a Debian Maintainer since January 2017.

## Wouter Verhelst <!-- document.write( "<a href=\"#\" id=\"https://grep.be/blog//en/computer/SReview/SReview_0.1/_hide\" onClick=\"exclude( 'https://grep.be/blog//en/computer/SReview/SReview_0.1/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://grep.be/blog//en/computer/SReview/SReview_0.1/_show\" style=\"display:none;\" onClick=\"show( 'https://grep.be/blog//en/computer/SReview/SReview_0.1/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### SReview 0.1

This morning I uploaded version 0.1 of SReview, my video review and transcoding system, to Debian experimental. There's still some work to be done before it'll be perfectly easy to use by anyone, but I do think I've reached the point by now where it should have basic usability.

Quick HOWTO for how to use it:

• Enable Debian experimental
• Install the packages sreview-master, sreview-encoder, sreview-detect, and sreview-web. It's possible to install the four packages on different machines, but let's not go into too much detail there, yet.
• The installation will create an sreview user and database, and will start the sreview-web service on port 8080, listening only to localhost. The sreview-web package also ships with an apache configuration snippet that shows how to proxy it from the interwebs if you want to.
• Run sreview-config --action=dump. This will show you the current configuration of sreview. If you want to change something, either change it in /etc/sreview/config.pm, or just run sreview-config --set=variable=value --action=update.
• Run sreview-user -d --action=create -u <your email>. This will create an administrator user in the sreview database.
• Open a webbrowser, browse to http://localhost:8080/, and test whether you can log on.
• Write a script to insert the schedule of your event into the SReview database. Look at the debconf and fosdem scripts for inspiration if you need it. Yeah, that's something I still need to genericize, but I'm not quite sure yet how to do that.
• Either configure gridengine so that it will have the required queues and resources for SReview, or disable the qsub commands in the SReview state_actions configuration parameter (e.g., by way of sreview-config --action=update --set=state_actions=... or by editing /etc/sreview/config.pm).
• If you need notification, modify the state_actions entry for notification so that it sends out a notification (e.g., through an IRC bot or an email address, or something along those lines). Alternatively, enable the "anonreviews" option, so that the overview page has links to your talk.
• Review the inputglob and parse_re configuration parameters of SReview. The first should contain a filesystem glob that will find your raw assets; the second should parse the filename into room, year, month, day, hour, minute, and second components. Look at the defaults of those options for examples (or just use those, and store your files as /srv/sreview/incoming/<room>/<year>-<month>-<day>/<hour>:<minute>:<second>.*).
• Provide an SVG file for opening credits, and point to it from the preroll_template configuration option.
• Provide an SVG or PNG file for closing credits, and point to it from the postroll_template or postroll configuration option, respectively.
• Start recording, and watch SReview do its magic.
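For illustration, the filename parsing described in the inputglob/parse_re step can be sketched in Python. The pattern below is invented to match the suggested /srv/sreview/incoming/<room>/<year>-<month>-<day>/<hour>:<minute>:<second>.* layout; check sreview-config --action=dump for SReview's actual default:

```python
import re

# Hypothetical pattern for the suggested incoming-file layout:
#   /srv/sreview/incoming/<room>/<year>-<month>-<day>/<hour>:<minute>:<second>.*
parse_re = re.compile(
    r".*/(?P<room>[^/]+)/"
    r"(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})/"
    r"(?P<hour>\d{2}):(?P<minute>\d{2}):(?P<second>\d{2})\."
)

# Each named group becomes one component of the recording's schedule slot.
match = parse_re.match("/srv/sreview/incoming/janson/2017-11-10/09:30:00.ts")
print(match.groupdict())
```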

There's still some bits of the above list that I want to make easier to do, and there's still some things that shouldn't be strictly necessary, but all in all, I think SReview has now reached a certain level of maturity that means I felt confident doing its first upload to Debian.

Did you try it out? Let me know what you think!

## Guido Günther <!-- document.write( "<a href=\"#\" id=\"http://honk.sigxcpu.org/con/git_buildpackage_0_9_2.html_hide\" onClick=\"exclude( 'http://honk.sigxcpu.org/con/git_buildpackage_0_9_2.html' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://honk.sigxcpu.org/con/git_buildpackage_0_9_2.html_show\" style=\"display:none;\" onClick=\"show( 'http://honk.sigxcpu.org/con/git_buildpackage_0_9_2.html' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### git-buildpackage 0.9.2

After some time in the experimental distribution I uploaded git-buildpackage 0.9.0 to sid a couple of weeks ago, and we're now at 0.9.2 as of today. This brought in two new commands:

• gbp export-orig to regenerate tarballs based on the current version in debian/changelog. This was always possible by using gbp buildpackage and ignoring the build result, e.g. gbp buildpackage --git-builder=/bin/true …, but having a separate command is much more straightforward.

• gbp push to push everything related to the current version in debian/changelog: debian-tag, debian-branch, upstream-branch, upstream-tag, and the pristine-tar branch. This could already be achieved by a posttag hook, but having it separate is again more straightforward and reduces the number of knobs one has to tweak.
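
For illustration, here is a rough, runnable sketch of the pushes gbp push performs, demonstrated against a throwaway local remote; the branch and tag names are assumptions in the spirit of common gbp defaults, not taken from the release notes:

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"       # stand-in for the real remote
git init -q "$tmp/pkg"
cd "$tmp/pkg"
git checkout -q -b debian/master           # debian-branch (assumed name)
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m 'Import upstream 1.0'
git tag upstream/1.0                       # upstream-tag
git branch upstream                        # upstream-branch
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m 'Release 1.0-1'
git tag debian/1.0-1                       # debian-tag
git remote add origin "$tmp/remote.git"
# gbp push would now push all of these (plus pristine-tar, omitted here):
git push -q origin debian/master upstream debian/1.0-1 upstream/1.0
git ls-remote origin
```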

We moved to better supported tools:

• Switch to Python3 from Python2
• Switch from epydoc to pydoctor
• Finally switch from Docbook SGML to Docbook XML (we ultimately want to switch to Sphinx at one point but this will be much simpler now).

A gbp unpack hook for pk4 is available as well; enable it with

 mkdir -p ~/.config/pk4/hooks-enabled/unpack/
 ln -s /usr/share/pk4/hooks-available/unpack/gbp ~/.config/pk4/hooks-enabled/unpack/

so pk4 invokes gbp import-dsc on package import.

There were lots of improvements all over the place, like gbp pq now importing the patch queue on switch (if it's not already there), and gbp import-dsc and import-orig not creating pointless master branches if debian-branch != 'master'. And after being broken early in the 0.9.x cycle, gbp buildpackage --git-overlay ... should be much better supported now that we have proper tests.

All in all 26 bugs fixed. Thanks to everybody who contributed bug reports and fixes.

## Norbert Preining <!-- document.write( "<a href=\"#\" id=\"https://www.preining.info/blog/2017/11/scalafx-dynamic-update-of-context-menu-of-table-rows/_hide\" onClick=\"exclude( 'https://www.preining.info/blog/2017/11/scalafx-dynamic-update-of-context-menu-of-table-rows/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://www.preining.info/blog/2017/11/scalafx-dynamic-update-of-context-menu-of-table-rows/_show\" style=\"display:none;\" onClick=\"show( 'https://www.preining.info/blog/2017/11/scalafx-dynamic-update-of-context-menu-of-table-rows/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### ScalaFX: dynamic update of context menu of table rows

Context menus are useful for exposing additional functionality. For my TLCockpit program I am listing the packages, updates, and available backups in a TreeTableView. The context menu for each row should be different depending on the status of the content displayed.

My first try, taken from searches on the web, was to add the context menu via the rowFactory of the TreeTableView:

table.rowFactory = { p =>
  val row = new TreeTableRow[SomeObject] {}
  val infoMI = new MenuItem("Info") { onAction = /* use row.item.value */ }
  val installMI = new MenuItem("Install") { onAction = /* use row.item.value */ }
  val removeMI = new MenuItem("Remove") { onAction = /* use row.item.value */ }
  val ctm = new ContextMenu(infoMI, installMI, removeMI)
  row.contextMenu = ctm
  row
}


This worked nicely until I tried to disable/enable some items based on the status of the displayed package:

  ...
  val pkg: SomeObject = row.item.value
  val isInstalled: Boolean = /* determine installation status of pkg */
  val installMI = new MenuItem("Install") {
    disable = isInstalled
    onAction = /* use row.item.value */
  }


What I did here is just pull the shown package, get its installation status, and disable the Install context menu entry if it is already installed.

All good and fine, I thought, but reality was different. First there were NullPointerExceptions (a rare occurrence in Scala for me), and then it somehow didn’t work out at all.

The explanation is simple to find by printing something in the rowFactory function: only as many rows are created as fit into the current screen size (plus a bit), and their content is dynamically updated when one scrolls. But the enable/disable status of the context menu entries was not updated along with the content.

To fix this, one needs to add a callback on the displayed item, which is exposed as row.item. So the correct code is (assuming that a SomeObject has a BooleanProperty installed):

table.rowFactory = { p =>
  val row = new TreeTableRow[SomeObject] {}
  val infoMI = new MenuItem("Info") { onAction = /* use row.item.value */ }
  val installMI = new MenuItem("Install") { onAction = /* use row.item.value */ }
  val removeMI = new MenuItem("Remove") { onAction = /* use row.item.value */ }
  val ctm = new ContextMenu(infoMI, installMI, removeMI)
  row.item.onChange { (_, _, newContent) =>
    if (newContent != null) {
      val isInstalled: Boolean = /* determine installation status from newContent */
      installMI.disable = isInstalled
      removeMI.disable = !isInstalled
    }
  }
  row.contextMenu = ctm
  row
}


The final result: the context menus are now correctly adapted to the displayed content. If there is a simpler way, please let me know.

10 November, 2017 05:16AM by Norbert Preining

## Thadeu Lima de Souza Cascardo <!-- document.write( "<a href=\"#\" id=\"https://cascardo.eti.br/blog/Software_Freedom_Strategy_with_Community_Projects/_hide\" onClick=\"exclude( 'https://cascardo.eti.br/blog/Software_Freedom_Strategy_with_Community_Projects/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://cascardo.eti.br/blog/Software_Freedom_Strategy_with_Community_Projects/_show\" style=\"display:none;\" onClick=\"show( 'https://cascardo.eti.br/blog/Software_Freedom_Strategy_with_Community_Projects/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Software Freedom Strategy with Community Projects

It's been some time since I last wrote; life and work have been busy. At the same time, the world has been busy, and while I would love to write a longer post, I will try to be short here. I would love to touch on the Librem 5 and postmarketOS. In fact, I already did, in a podcast in Portuguese, Papo Livre. Maybe I'll touch a little on the latter here.

Some of the inspiration for this post include:

All of those led me to understand how software freedom is under attack, and in particular how copyleft is under attack. And, as I said during FISL, though many might say that "Open Source has won", end users' software freedom has not. Lots of companies have co-opted "free software" but give no software freedom to their users. They seem friendly to free software, and they are, because they want software to be free. But freedom should not be a value for software itself; it needs to be a value for people: not only companies or people who are labeled software developers, but all people.

That's why I want to stop talking about free software and talk more about software freedom, because I believe the latter is clearer about what we are talking about. I don't mind that we use whatever label, as long as we establish its meaning during conversations and set the tone to distinguish them. The thing is: free software does not software freedom make. Not by itself. As Bradley Kuhn puts it: it's not magic pixie dust.

Those who have known me for years might remember me as a person who studied free software licenses and how I valued copyleft, the GPL specifically, and how I concerned myself with topics like license compatibility and other licensing matters.

Others might remember me as a person who valued upstreaming code a lot: not carrying changes to openly developed software that you had not made an effort to put upstream.

I can't say I was wrong on either account. I still believe in those things. I still believe in the importance of copyleft and the GPL. I still value sharing your code in the commons by going upstream. But I was certainly wrong in valuing them too much, and in not giving as much or even more value to the distribution efforts that get software freedom to the users.

And it took me a while to see how many people also saw the GPL as a tool to get code upstream. You see that a lot in Linus' discourse about the GPL, and it is on the minds of a lot of people, who I have seen argue that copyleft is not necessary for companies to contribute code back. But that's the problem: the point is not about getting code upstream, but about assuring people have the freedom to run a modified version of the software they received on their computers. It turns out that many companies who have contributed code upstream have not delivered that freedom to their end users, who received a modified version of that same software, which is not free.

Bradley Kuhn also alerts us that many companies have been replacing copyleft software with non-copyleft software. And I completely agree with him that we should be writing more copyleft software that we hold copyright for, so we can enforce it. But looking at what has been happening recently in the Linux community around enforcement, even though I still believe in enforcement as a strategy, I think we need much more than that.

And one of those strategies is delivering more free software that users may be able to install on their own computers. It's building those replacements for software that people have been using for any reason. Be it the OS they get when they buy a device, or the application they use for communication. It's not like the community is not doing it, it's just that we need to acknowledge that this is a necessary strategy to guarantee software freedom. That distribution of software that users may easily install on their computers is as much or even more valuable than developing software closer to the hacker/developer community. That doing downstream changes to free software in the effort of getting them to users is worth it. That maintaining that software stable and secure for users is a very important task.

I may be biased when talking about this, as I have been shifting between upstream and downstream work in recent years. But maybe that is what I needed to realize that upstreaming does not necessarily guarantee that users will get software freedom.

I believe we need to talk more about that. I have seen many people dear to me disregard the difference between the freedom of the user and the freedom of software. There is much more to say about this, going into detail on some of those points, and I think we need to debate more. I am subscribed to the libreplanet-discuss mailing list. Come join us in discussing software freedom there, if you want to comment on anything I brought up here.

As I promised I would, I would like to mention postmarketOS, which is an option users now have to get some software freedom on some mobile devices. It's an effort I wanted to build myself, and I applaud the community that has developed around it and has been moving forward so quickly. And it's a good example of a balance between upstream and downstream code that delivers a better level of software freedom to users than the vendor ever would.

I wanted to write about many of the topics I brought up today, but postponed that for some time. I was motivated by recent events in the community, and I am really disappointed at some of the free software players and some of the events that happened in the last few years. That got me thinking about how we need to speak up about those issues, so people know how we feel. So here it is: I am disappointed at how the Linux Foundation handled the situation about Software Freedom Conservancy taking a case against VMware; I am disappointed about how the Software Freedom Law Center handled a trademark issue against the Software Freedom Conservancy; and I really appreciate all the work the Software Freedom Conservancy has been doing. I have supported them for the last two years, and I urge you to become a supporter too.

# November 08, 2017

### Software Freedom Law Center and Conservancy

Before I start, I would like to make it clear that the below is entirely my personal view, and not necessarily that of the GNOME Foundation, the Debian Project, or anyone else.

There’s been quite a bit of interest recently about the petition by Software Freedom Law Center to cancel the Software Freedom Conservancy’s trademark. A number of people have asked my views on it, so I thought I’d write up a quick blog on my experience with SFLC and Conservancy both during my time as Debian Project Leader, and since.

It’s clear to me that for some time, there’s been quite a bit of animosity between SFLC and Conservancy, which for me started to become apparent around the time of the large debate over ZFS on Linux. I talked about this in my DebConf 16 talk, which fortunately was recorded (ZFS bit from 8:05 to 17:30).

This culminated in SFLC publishing a statement, and Conservancy also publishing their statement, backed up by the FSF. These obviously came to different conclusions, and it seems bizarre to me that SFLC who were acting as Debian’s legal counsel published a position that was contrary to the position taken by Debian. Additionally, Conservancy and FSF who were not acting as counsel mirrored the position of the project.

Then, I hear of an even more confusing move – that SFLC has filed legal action against Conservancy, despite Conservancy being the organisation that SFLC helped set up. This happened on the 22nd of September, the day after SFLC announced corporate and support services for Free Software projects.

SFLC has also published a follow-up, in which they say that the act “is not an attack, let alone a ‘bizarre’ attack”, and that the response from Conservancy, who view it as such, “was like reading a declaration of war issued in response to a parking ticket”. Then, as SFLC somehow find the threat of your trademark being taken away to be something other than an attack, they also state: “Any project working with the Conservancy that feels in any way at risk should contact us. We will immediately work with them to put in place measures fully ensuring that they face no costs and no risks in this situation.”, which I read as a direct pitch to try and pull projects away from Conservancy and over to SFLC.

Now, even if there is a valid claim here, despite the objections that were filed by a trademark lawyer who I have a great deal of respect for (disclosure: Pam also provides pro-bono trademark advice to my employer, the GNOME Foundation), the optics are pretty terrible. We have a case of one FOSS organisation taking another one to court, after many years of them being aware of the issue, and when wishing to promote a competing service. At best, this is a distraction from the supposed goals of Free Software organisations, and at worst is a direct attempt to interrupt the workings of an established and successful umbrella organisation which lots of projects rely on.

I truly hope that this case is simply dropped, and if I was advising SFLC, that’s exactly what I would suggest, along with an apology for the distress. Put it this way – if SFLC win, then they’re simply displaying what would be viewed as an aggressive move to hold the term “software freedom” exclusively to themselves. If they lose, then it shows that they’re willing to do so to another 501(c)3 without actually having a case.

Before I took on the DPL role, I was under the naive impression that although there were differences in approach, at least we were coming to try and work together to promote software freedoms for the end user. Unfortunately, since then, I’ve now become a lot more jaded about exactly who, and which organisations hold our best interests at heart.

(Featured image by  Nick Youngson – CC-BY-SA-3.0 – http://nyphotographic.com/)

08 November, 2017 04:55PM by Neil McGovern

## Dirk Eddelbuettel <!-- document.write( "<a href=\"#\" id=\"http://dirk.eddelbuettel.com/blog/2017/11/08#r_finance_2018_cfp_hide\" onClick=\"exclude( 'http://dirk.eddelbuettel.com/blog/2017/11/08#r_finance_2018_cfp' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://dirk.eddelbuettel.com/blog/2017/11/08#r_finance_2018_cfp_show\" style=\"display:none;\" onClick=\"show( 'http://dirk.eddelbuettel.com/blog/2017/11/08#r_finance_2018_cfp' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### R / Finance 2018 Call for Papers

The tenth (!!) annual R/Finance conference will take place in Chicago on the UIC campus on June 1 and 2, 2018. Please see the call for papers below (or at the website) and consider submitting a paper.

We are once again very excited about our conference, thrilled about who we hope may agree to be our anniversary keynotes, and hope that many R/Finance users will not only join us in Chicago in June but also submit an exciting proposal.

So read on below, and see you in Chicago in June!

### Call for Papers

R/Finance 2018: Applied Finance with R
June 1 and 2, 2018
University of Illinois at Chicago, IL, USA

The tenth annual R/Finance conference for applied finance using R will be held June 1 and 2, 2018 in Chicago, IL, USA at the University of Illinois at Chicago. The conference will cover topics including portfolio management, time series analysis, advanced risk tools, high-performance computing, market microstructure, and econometrics. All will be discussed within the context of using R as a primary tool for financial risk management, portfolio construction, and trading.

Over the past nine years, R/Finance has included attendees from around the world. It has featured presentations from prominent academics and practitioners, and we anticipate another exciting line-up for 2018.

We invite you to submit complete papers in pdf format for consideration. We will also consider one-page abstracts (in txt or pdf format) although more complete papers are preferred. We welcome submissions for both full talks and abbreviated "lightning talks." Both academic and practitioner proposals related to R are encouraged.

All slides will be made publicly available at conference time. Presenters are strongly encouraged to provide working R code to accompany the slides. Data sets should also be made public for the purposes of reproducibility (though we realize this may be limited due to contracts with data vendors). Preference may be given to presenters who have released R packages.

Please submit proposals online at http://go.uic.edu/rfinsubmit. Submissions will be reviewed and accepted on a rolling basis with a final submission deadline of February 2, 2018. Submitters will be notified via email by March 2, 2018 of acceptance, presentation length, and financial assistance (if requested).

Financial assistance for travel and accommodation may be available to presenters. Requests for financial assistance do not affect acceptance decisions. Requests should be made at the time of submission. Requests made after submission are much less likely to be fulfilled. Assistance will be granted at the discretion of the conference committee.

Additional details will be announced via the conference website at http://www.RinFinance.com/ as they become available. Information on previous years' presenters and their presentations is also at the conference website. We will make a separate announcement when registration opens.

For the program committee:

Gib Bassett, Peter Carl, Dirk Eddelbuettel, Brian Peterson,
Dale Rosenthal, Jeffrey Ryan, Joshua Ulrich

### RQuantLib 0.4.4: Several smaller updates

A shiny new (mostly-but-not-completely maintenance) release of RQuantLib, now at version 0.4.4, arrived on CRAN overnight, and will get to Debian shortly. This is the first release in over a year, and it contains (mostly) a small number of fixes throughout. It also includes the update to the new DateVector and DatetimeVector classes which become the default with the upcoming Rcpp 0.12.14 release (just like this week's RcppQuantuccia release). One piece of new code is due to François Cocquemas, who added support for discrete dividends to both European and American options. See below for the complete set of changes reported in the NEWS file.

As with release 0.4.3 a little over a year ago, we will not have new Windows binaries from CRAN as I apparently have insufficient powers of persuasion to get CRAN to update their QuantLib libraries. So we need a volunteer. If someone could please build a binary package for Windows from the 0.4.4 sources, I would be happy to once again host it on the GHRR drat repo. Please contact me directly if you can help.

Changes are listed below:

#### Changes in RQuantLib version 0.4.4 (2017-11-07)

• Changes in RQuantLib code:

• Equity options can now be analyzed via discrete dividends through two vectors of dividend dates and values (Francois Cocquemas in #73 fixing #72)

• Some package and dependency information was updated in files DESCRIPTION and NAMESPACE.

• The new Date(time)Vector classes introduced with Rcpp 0.12.8 are now used when available.

• Minor corrections were applied to BKTree, to vanilla options for the case of intraday time stamps, to the SabrSwaption documentation, and to bond utilities for the most recent QuantLib release.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc. should go to the rquantlib-devel mailing list off the R-Forge page. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

## Jonathan Dowland <!-- document.write( "<a href=\"#\" id=\"http://jmtd.net/log/christmas/_hide\" onClick=\"exclude( 'http://jmtd.net/log/christmas/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://jmtd.net/log/christmas/_show\" style=\"display:none;\" onClick=\"show( 'http://jmtd.net/log/christmas/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Christmas

Every year, family members ask me to produce a list of gift suggestions for them to buy for me for Christmas. An enviable position for many, I'm sure, but combined with trying to come up with gift ideas for them, this can sometimes be a stressful situation, with a risk of either giving or receiving gifts that are really nothing more than tat, fluff or kipple. I've started to feel that this is detracting from the spirit of the season.

I also don't really want much "stuff". When I am interested in something, it's not something that is convenient for others to buy, either because it's hard to describe, or has limited availability, or is only available at particular times of the year, etc. I'd rather focus on spending time with friends and family.

Starting this year, I'm asking that people who wish to do so donate to a charity on my behalf instead. The charity I have chosen for this year is St. Oswald's Hospice.

# November 07, 2017

## Reproducible builds folks <!-- document.write( "<a href=\"#\" id=\"https://reproducible.alioth.debian.org/blog/posts/132/_hide\" onClick=\"exclude( 'https://reproducible.alioth.debian.org/blog/posts/132/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://reproducible.alioth.debian.org/blog/posts/132/_show\" style=\"display:none;\" onClick=\"show( 'https://reproducible.alioth.debian.org/blog/posts/132/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Reproducible Builds: Weekly report #132

Here's what happened in the Reproducible Builds effort between Sunday October 29 and Saturday November 4 2017:

## Past events

• From October 31st — November 2nd we held the 3rd Reproducible Builds summit in Berlin, Germany. A full, in-depth report will be posted in the next week or so.

## Upcoming events

• On November 8th Jonathan Bustillos Osornio (jathan) will present at CubaConf Havana.

• On November 17th Chris Lamb will present at Open Compliance Summit, Yokohama, Japan on how reproducible builds ensures the long-term sustainability of technology infrastructure.

## Reviews of unreproducible packages

7 package reviews have been added, 43 have been updated and 47 have been removed this week, adding to our knowledge about identified issues.

## Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

• Andreas Moog (1)
• Lucas Nussbaum (7)
• Steve Langasek (1)

• Ulrike Uhlig:
• Holger Levsen:
• Chris Lamb:
• Bernhard M. Wiedemann:

## diffoscope development

Version 88 was uploaded to unstable by Mattia Rizzolo. It included contributions (already covered by posts of the previous weeks) from:

• Mattia Rizzolo
• tests/comparators/dtb: compatibility with version 1.4.5. (Closes: #880279)
• Chris Lamb
• comparators:
• binwalk: improve names in output of "internal" members. #877525
• Omit misleading "any of" prefix when only complaining about one module in ImportError messages.
• Don't crash on malformed "md5sums" files. (Closes: #877473)
• tests/comparators:
• ps: ps2ascii > 9.21 now varies on timezone, so skip this test for now.
• dtb: only parse the version number, not any "-dirty" suffix.
• debian/watch: Use HTTPS URI.
• Ximin Luo
• comparators:
• utils/file: Diff container metadata centrally. This fixes a last remaining bug in fuzzy-matching across containers. (Closes: #797759)
• Fix all the affected comparators after the above change.
• Holger Levsen
• Bump Standards-Version to 4.1.1, no changes needed.

## strip-nondeterminism development

Version 0.040-1 was uploaded to unstable by Mattia Rizzolo. It included contributions already covered by posts of the previous weeks, as well as new ones from:

• Mattia Rizzolo:

Version 0.5.2-2 was uploaded to unstable by Holger Levsen.

It included contributions already covered by posts of the previous weeks, as well as new ones from:

• Chris Lamb:

## tests.reproducible-builds.org

• Mattia Rizzolo:
• archlinux: enable schroot building on pb4 as well
• archlinux: don't install the deprecated abs tool
• archlinux: try to re-enable one schroot creation job
• lynxis
• lede: replace TMPDIR -> RESULTSDIR
• lede: openwrt_get_banner(): use locals instead of globals
• lede: add newline to $CONFIG
• lede: show git log -1 in jenkins log
• Holger Levsen:
• lede: add very simple landing page
• Juliana Oliveira Rodrigues
• archlinux: adds pacman-git dependencies
• kpcyrd
• archlinux: disable signature verification when running in the future
• archlinux: use pacman-git until the next release
• archlinux: make pacman fail less early
• archlinux: use sudo to prepare chroot
• archlinux: remove -rf for regular file
• archlinux: avoid possible TOCTOU issue
• archlinux: Try to fix tar extraction
• archlinux: fix sha1sums parsing

## Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

## Lucas Kanashiro <!-- document.write( "<a href=\"#\" id=\"http://blog.kanashiro.xyz/debian/lts/contributions/2017/11/07/lts-work-on-october.html_hide\" onClick=\"exclude( 'http://blog.kanashiro.xyz/debian/lts/contributions/2017/11/07/lts-work-on-october.html' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://blog.kanashiro.xyz/debian/lts/contributions/2017/11/07/lts-work-on-october.html_show\" style=\"display:none;\" onClick=\"show( 'http://blog.kanashiro.xyz/debian/lts/contributions/2017/11/07/lts-work-on-october.html' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### My Debian LTS work on October

In this post I describe the work that I’ve done until the end of October in the context of the Debian LTS team. This month I was allocated 5h and spent just 2h of them, because I have been writing my master’s qualification text (I am almost at my deadline to finish it). During November I intend to finish the 3h pending, so I did not request more hours.
I basically worked on CVE-2017-0903, an issue related to YAML deserialization of gem specifications that could allow one to execute remote code. Two packages in wheezy could be affected by this security vulnerability: rubygems and ruby1.9.1. The issue affects just the RubyGems source code, but before Ruby version 1.9.1 it was maintained in a separate package; after that it was incorporated into the ruby interpreter source package. After carefully reading the upstream blog post and reviewing the commit that introduced this vulnerability, I was able to figure out whether the mentioned packages were affected or not. The modification was not present in either of them, and after some tests I confirmed that those versions of rubygems were not affected. The two packages were marked as not affected by CVE-2017-0903 in wheezy.

Well, this was the summary of my activities in the Debian LTS team in October. See you next month :)

## Don Armstrong <!-- document.write( "<a href=\"#\" id=\"http://www.donarmstrong.com/posts/autorandr/_hide\" onClick=\"exclude( 'http://www.donarmstrong.com/posts/autorandr/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://www.donarmstrong.com/posts/autorandr/_show\" style=\"display:none;\" onClick=\"show( 'http://www.donarmstrong.com/posts/autorandr/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Autorandr: automatically adjust screen layout

Like many laptop users, I often plug my laptop into different monitor setups (multiple monitors at my desk, projector when presenting, etc.). Running xrandr commands or clicking through interfaces gets tedious, and writing scripts isn't much better.
Recently, I ran across autorandr, which detects attached monitors using EDID (and other settings), saves xrandr configurations, and restores them. It can also run arbitrary scripts when a particular configuration is loaded. I've packaged it, and it is currently waiting in NEW. If you can't wait, the deb is here and the git repo is here.

To use it, simply install the package, and create your initial configuration (in my case, undocked):

 autorandr --save undocked

then, dock your laptop (or plug in your external monitor(s)), change the configuration using xrandr (or whatever you use), and save your new configuration (in my case, workstation):

 autorandr --save workstation

repeat for any additional configurations you have (or as you find new configurations).

Autorandr has udev, systemd, and pm-utils hooks, and autorandr --change should be run any time new displays appear. You can also run autorandr --change or autorandr --load workstation manually if you need to. You can also add your own ~/.config/autorandr/$PROFILE/postswitch script to run after a configuration is loaded. Since I run i3, my workstation configuration looks like this:

    #!/bin/bash

    xrandr --dpi 92
    xrandr --output DP2-2 --primary
    i3-msg '[workspace="^(1|4|6)"] move workspace to output DP2-2;'
    i3-msg '[workspace="^(2|5|9)"] move workspace to output DP2-3;'
    i3-msg '[workspace="^(3|8)"] move workspace to output DP2-1;'


which fixes the dpi appropriately, sets the primary screen (possibly not needed?), and moves the i3 workspaces about. You can also arrange for configurations to never be run by adding a block hook in the profile directory.
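The block hook is just an executable script named block in a profile's directory: if it exits with status 0, autorandr skips that profile. A minimal sketch (the "projector" profile name here is hypothetical):

```shell
# Hypothetical example: keep autorandr from ever auto-loading a
# profile named "projector" by adding an executable "block" hook.
mkdir -p ~/.config/autorandr/projector
cat > ~/.config/autorandr/projector/block <<'EOF'
#!/bin/sh
exit 0  # exit status 0 means "block this profile"
EOF
chmod +x ~/.config/autorandr/projector/block
```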

Check it out if you change your monitor configuration regularly!

## Rogério Brito <!-- document.write( "<a href=\"#\" id=\"http://cynic.cc/blog//posts/2017-11-06_random_notes/_hide\" onClick=\"exclude( 'http://cynic.cc/blog//posts/2017-11-06_random_notes/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"http://cynic.cc/blog//posts/2017-11-06_random_notes/_show\" style=\"display:none;\" onClick=\"show( 'http://cynic.cc/blog//posts/2017-11-06_random_notes/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Some activities of the day

Yesterday, I printed the first draft of the first chapter while my little boy was here, and he was impressed with this strange object called a "printer". Before I printed what I needed, I fired up LibreOffice, chose the biggest font size available, and let him type his first name by himself. He was quicker with a keyboard than I expected. After seeing me print his first name, he was jumping up and down with joy at having created something, and even showed grandma and grandpa what he had done.

He then wanted more, so I taught him how to use the backspace key and what it meant, and he wanted to type his full name. I let him, and taught him that there is a key called space that he should press every time he wants to start a new word; in the end, he typed his first two names. To my surprise, he memorized the printer icon (which, I must say, I have to hunt for every time, since it looks so similar to the adjacent ones!) and pressed this new key called "Enter". When he pressed it, he wasn't expecting the printer on his right to start making noises and printing his name.

He was so excited, and it was so nice to see his joyful reaction at getting a job done!

I am thinking of getting a spare computer, building it with him and for him, so that he can call it his own every time he comes to see daddy. Serendipitously, Packt Publishing yesterday offered their title "Python Projects for Kids". Unfortunately, he does not yet know how to read, but I guess the right age is coming soon, which is a good opportunity to educate him "the right way" (that is, with the best support, teaching, and patience that I can give him).

Anyway, I printed the first draft of the first chapter and today I have to turn it in.

As I write this, I am downloading a virtual machine from Microsoft to try to install Java on it. Let me see if it works. I have used none of the virtualization options it is offered for, though the closest seems to be VirtualBox.

Let me cross my fingers.

In other news, I updated some of the tags on very old posts of this blog, and I am seriously thinking about switching from [ikiwiki][0] to another blog platform. It is slow, very slow, on my system with the repositories that I have, especially on my armel system. Some non-interpreted system would be best, but I don't know if such a thing even exists. The killer problem, though, is that it doesn't easily support typing mathematics (even though a third-party plugin for MathJax exists).

On the other hand, I just received an answer on twitter from @telegram and it was nice:

> Hello, Telegram supports bold and italic. You can type **bold** and __italic__. On mobile, you can also highlight text for this as well.

It is nice that this works with telegram-desktop too.

Besides that, I filed some bugs on Debian's BTS, responded to some issues on my projects on GitHub (I'm slowly getting back to maintaining things), and filed wishlist bugs on some other projects.

Oh, and I grabbed a copy of "Wonder Woman" ("Mulher Maravilha") and "Despicable Me 3" ("Meu Malvado Favorito 3") dubbed in Brazilian Portuguese for my son. I have to convert the audio from 6-channel AAC-LC to AC3 or to stereo; otherwise, my TVs have problems with the videos (one refuses to play the entire file and another plays the audio with hiccup-like sounds).
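The post doesn't say which tool is used for the conversion, but as a sketch it can be done with ffmpeg, copying the video stream and re-encoding only the audio (the filenames are placeholders):

```shell
# Keep the video stream as-is; transcode only the audio track.
# Option 1: 6-channel AAC-LC -> AC3
ffmpeg -i movie.mkv -c:v copy -c:a ac3 movie-ac3.mkv

# Option 2: downmix to stereo AAC instead
ffmpeg -i movie.mkv -c:v copy -c:a aac -ac 2 movie-stereo.mkv
```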

Edit: After converting the VirtualBox image taken from Microsoft, I could easily use qemu/kvm to create screenshots of the installation of Java. The command that I used (for future reference) is:

    qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 \
        -net nic,model=e1000 -net user -soundhw ac97 \
        -drive index=0,media=disk,cache=unsafe,file=win7.qcow2

Edit: Fixed some typos.

# November 06, 2017

## James Bromberger <!-- document.write( "<a href=\"#\" id=\"https://blog.james.rcpt.to/2017/11/06/web-security-2017/_hide\" onClick=\"exclude( 'https://blog.james.rcpt.to/2017/11/06/web-security-2017/' ); hideHosts(); return false;\"><img src=\"common/minus-8.png\" style=\"border: none;\" title=\"Hide Author\" alt=\"Hide Author\" height=\"8\" width=\"8\"><\/a> <a href=\"#\" id=\"https://blog.james.rcpt.to/2017/11/06/web-security-2017/_show\" style=\"display:none;\" onClick=\"show( 'https://blog.james.rcpt.to/2017/11/06/web-security-2017/' ); return false;\"><img src=\"common/plus-8.png\" style=\"border: none;\" title=\"Show Author\" alt=\"Show Author\" height=\"8\" width=\"8\"><\/a>" ); -->

### Web Security 2017

I started web development around late 1994. Some of my earliest paid web work is still online (dated June 1995). Clearly, that was a simpler time for content! I went on to be ‘Webmaster’ (yes, for those joining us in the last decade, that was a job title once) for UWA, and then for Hartley Poynton/JDV.com at a time when security became important as commerce boomed online.

At the dawn of the web era, the consideration of backwards compatibility with older web clients (browsers) was deemed to be important; content had to degrade nicely, even without any CSS being applied. As the years stretched out, the legacy became longer and longer. Until now.

In mid-2018, the Payment Card Industry (PCI) Data Security Standard (DSS) 3.2 comes into effect, requiring cardholder environments to use (at minimum) TLS 1.2 for the encrypted transfer of data. Of course, that’s also the maximum version typically available today (TLS 1.3 is in draft 21 at the time of writing). This effort by the PCI is forcing people to adopt new browsers that can do the TLS 1.2 protocol (and the encryption ciphers it permits), typically by running modern/recent Chrome, Firefox, Safari or Edge browsers. And for the majority of people, Chrome is their choice, and the majority of those auto-update on every release.

Many are pushing to be compliant with the 2018 PCI DSS 3.2 as early as possible; your logging of negotiated protocols and ciphers will show if your client base is ready as well. I’ve already worked with one government agency to demonstrate they were ready, and have already helped disable TLS 1.0 and 1.1 on their public facing web sites (and previously SSL v3). We’ve removed RC4 ciphers, 3DES ciphers, and enabled ephemeral key ciphers to provide forward secrecy.
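As a sketch of what that hardening can look like on the server side, here is a hypothetical Apache mod_ssl fragment (the cipher list is illustrative only, not the configuration used for that agency):

```apache
# Allow only TLS 1.2; SSLv3 and TLS 1.0/1.1 stay disabled
SSLProtocol -all +TLSv1.2
# Ephemeral-key (ECDHE) AEAD ciphers only: forward secrecy, no RC4 or 3DES
SSLCipherSuite ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384
SSLHonorCipherOrder on
```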

Web developers (writing Javascript and using various frameworks) can rejoice — the age of having to support legacy MS IE 6/7/8/9/10 is pretty much over. None of those browsers support TLS 1.2 out of the box (IE 10 can turn this on, but for some reason, it is off by default). This makes Javascript code smaller as it doesn’t have to have conditional code to work with the quirks of those older clients.

But as we find ourselves with modern clients, we can now ask those clients to be complicit in our attempts to secure the content we serve. They understand modern security constructs such as Content Security Policies and other HTTP security-related headers.
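For illustration, a few of those headers as a hypothetical Apache mod_headers fragment (the CSP policy shown is a minimal placeholder, not a recommendation for any real site):

```apache
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
Header always set Content-Security-Policy "default-src 'self'"
Header always set X-Content-Type-Options "nosniff"
Header always set X-Frame-Options "SAMEORIGIN"
```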

There’s two tools I am currently using to help in this battle to improve web security. One is SSLLabs.com, the work of Ivan Ristić (and now owned/sponsored by Qualys). This tool gives a good view of the encryption in flight (protocols, ciphers), chain of trust (certificate), and a new addition of checking DNS records for CAA records (which I and others piled on a feature request for AWS Route53 to support). The second tool is Scott Helm’s SecurityHeaders.io, which looks at the HTTP headers that web content uses to ask browsers to enforce security on the client side.

There’s a really important reason why these tools are good; they are maintained. As new recommendations on ciphers, protocols, signature algorithms or other actions become recommended, they’re updated on these tools. And these tools are produced by very small, but agile teams — like one person teams, without the bureaucracy (and lag) associated with large enterprise tools. But these shouldn’t be used blindly. These services make suggestions, and you should research them yourselves. For some, not all the recommendations may meet your personal risk profile. Personally, I’m uncomfortable with Public-Key-Pins, so that can wait for a while — indeed, Chrome has now signalled they will drop this.

So while PCI is hitting merchants with their DSS-compliance stick (and making it plainly obvious what they have to do), we’re getting a useful side-effect: a concrete reason for drawing a line under how far back our backwards compatibility must stretch, and the ability to have the web client assist in ensuring the security of content.

06 November, 2017 03:51PM by james