October 23, 2019

Bits from Debian

The Debian Project stands with the GNOME Foundation in defense against patent trolls

In 2012, the Debian Project published our Position on Software Patents, stating the threat that patents pose to Free Software.

The GNOME Foundation recently announced that it is fighting a lawsuit alleging that Shotwell, a free and open source personal photo manager, infringes a patent.

The Debian Project firmly stands with the GNOME Foundation in their efforts to show the world that we in the Free Software communities will vigorously defend ourselves against any abuses of the patent system.

Please read this blog post about GNOME's defense against this patent troll and consider making a donation to the GNOME Patent Troll Defense Fund.

23 October, 2019 08:00AM by Ana Guerrero López

Steve Kemp

/usr/bin/timedatectl

Today I was looking over a system to see what it was doing, checking all the running processes, etc, and I spotted that it was running openntpd.

This post is a reminder to myself that systemd now contains an NTP client, and I should go round and purge the ntpd/openntpd packages from my systems.

You can check on the date/time via:

$ timedatectl 
                      Local time: Wed 2019-10-23 09:17:08 EEST
                  Universal time: Wed 2019-10-23 06:17:08 UTC
                        RTC time: Wed 2019-10-23 06:17:08
                       Time zone: Europe/Helsinki (EEST, +0300)
       System clock synchronized: yes
systemd-timesyncd.service active: yes
                 RTC in local TZ: no

If the system is not set up to sync it can be enabled via:

$ sudo timedatectl set-ntp true

Finally, logs can be checked as you would expect:

$ journalctl -u systemd-timesyncd.service
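
And since the whole point is to retire the standalone daemons, the cleanup itself is short. A minimal sketch, assuming openntpd/ntp are the packages actually installed:

$ sudo apt purge openntpd ntp                             # remove whichever NTP daemons are present
$ sudo systemctl enable --now systemd-timesyncd.service   # ensure the built-in client runs at boot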

23 October, 2019 06:30AM

October 22, 2019

Shirish Agarwal

Lives around a banking crisis

First of all I will share a caricature and a joke, as the rest of the blog post will be serious: it involves what is happening in India right now and the lack of sensitivity to it.

Caricature of me by @caricaturewale on twitter.

The above is a caricature of me done by @caricaturewale. If you can’t laugh at yourself, you can’t laugh at the world, can you? I did share my thankfulness with the gentleman who created it. It surely must have taken quite some time to first produce a likeness and then fiddle with the features so it produces a somewhat laughable picture. So all the kudos to him for making this gem 🙂

PMC Bank

The other joke of the day, no, the week, or perhaps the month, was a conversation between a potential bank deposit holder and a banker, or somebody calling on behalf of a bank.

I got a call from bank.
They said: “U pay us ₹ 6000 every month. U will get ₹ 1 crore when U retire”.
I replied: “U reverse the plan, U give me 1 crore now. And I will pay U ₹ 6000 every month till I die.”
The banker disconnected the call. Did I say anything wrong???

Poonam Datta, Maritime and Logistics, IMD, Switzerland on twitter.

Now where is this coming from? I am sure some of you may have seen my last blog post, which shared the travails of PMC Bank last week. There have been a lot of developments since then. What I had forgotten to share that day is that the crisis at PMC Bank has been such that lives are being lost. From last week till date, 5 lives have been lost due to the PMC Bank debacle. The latest was Bharati Sadarangani, a 73-year-old who suffered a cardiac arrest, as she had her own life savings and her daughter and son-in-law’s life savings in that bank account. Now the son-in-law blames himself and his wife, as the wife used to share her uncertainties with her mom. These people and all the others need urgent psychological help and counselling. In fact, while I don’t know them, I had suggested to some people to see if they can reach out to these people, and maybe a team of psychologists can help them. While the Moneylife Foundation is helping the distressed write writ petitions and providing legal help, there is definitely a need for medical counselling so that no more suicides, panic attacks or cardiac arrests happen. But this isn’t the end of the story; it is the beginning of one.

Deposit Insurance and Credit Guarantee Corporation

The above is a stamp by DICGC, which stands for Deposit Insurance and Credit Guarantee Corporation, shared by a gentleman on twitter. The stamp claims, and I quote, that DICGC is liable for only INR 1 lakh. Now many people were surprised: where did the INR 1 lakh figure come from? While it is aptly documented in many places, I am going to use CNBC News 18’s coverage to share the history of the liability of a bank, which dates back to 1993, when it was last changed to INR 1 lakh. While I did my own calculations using the simple formula of currency exchange, I would submit to Mr. Raghav Bahl’s slightly superior method, where he has also taken Indian GDP growth into account, which comes to INR 1,00,000.

The method I used is below –

The exchange rate of the dollar in 1993 was INR 28.13 for 1 USD. An amount of INR 1 lakh therefore came to USD 3,555 at that time. Using just the simplest inflation tool, that comes to USD 20,380 today, which is INR 14,43,566.35. This comes with the base assumption that the tools and calculations I used were and are in line with what the GOI gives. One point to note, though, is that what the GOI declares as inflation is usually at least a percent or two lower than the real inflation, at the very least. At least, that’s what I have observed for the last twenty-odd years or so. On top of that, I didn’t even use the GDP multiple which Mr. Bahl did.

As for the argument some people have or may have, that it’s unlikely many people would have had that much money at the time: that amount was arrived at by the GOI using a variety of factors, including, one would suppose, the average balance of a middle-class salaried person. If one were to take gold or some other indicator, I’m sure the figure would probably be a lot higher. This is the most conservative estimate I made. To get a higher insurance limit on deposits, it would need to be a people’s movement; otherwise it could be INR 1 lakh for all eternity, or INR 3 lakhs, IF the government feels it’s ok with that 😦

Why bother with such calculations ?

One could ask: why bother with such calculations? The reason is very simple. While the issue arose at a co-operative bank, the insurance limit is not restricted to co-operative banks but applies to all banks, whether co-operative, national or even private. While I live in Pune, and have always made some very basic lifestyle choices, in Mumbai the same lifestyle choices will set anyone back at least INR 30-40k. Somebody shared that their 1 BHK apartment’s electricity costs are INR 15-16k/- monthly, while it fluctuates between INR 900-1k/- for the same in Pune. So in that context, a budget of INR 30-40k/- is probably conservative for a single person; for a family of 2 or 4, which is common, it would probably be much more. So in any such scenario, it affects all of us. And this is not taking into account any medical emergencies or credit needs for some unplanned event.

The way out

What is the way out? While those who are stuck in PMC Bank do not have a way out unless RBI takes some firm action and helps the ones in distress, those who are not stuck are equally in distress because they do not know what to do or where to go. While I’m no investment adviser, just don’t put all your eggs in one basket. Put some in different post office schemes, some in PPF, some in gold bars or coins certified by some nationalized bank or something. And most importantly of all, be prepared for a loss no matter what you choose. In a recessionary economy, while there may be some winners, there would be a lot more losers. And while people will attempt to seduce you with some cheap valuations, this is probably not the best time to indulge in fantasies. You would be better off spending time with friends and loved ones, or with books.

Books

If you want to indulge in fantasy, go buy a book at Booksbyweight or Booksbykilo rather than indulging in needless speculation. At the very least you would have an asset or commodity which will give you countless hours of pleasure, and one which is tradable with friends and enemies over a lifetime. And if you are a book lover, reader, writer or romantic like me, they can be the best of friends as well. Unlike friends or loved ones who would demand time, they are there for you whenever you are ready for them, rather than vice-versa.

22 October, 2019 08:50PM by shirishag75

Dirk Eddelbuettel

pkgKitten 0.1.5: Creating R Packages that purr

Another minor release 0.1.5 of pkgKitten just hit CRAN today, after a break of almost three years.

This release provides a few small changes. The default per-package manual page now benefits from a second refinement (building on what was introduced in the 0.1.4 release) in using the Rd macros referring to the DESCRIPTION file rather than duplicating information. Several pull requests fixed sloppy typing in the README.md, NEWS.Rd or manual page—thanks to all contributors for fixing these. Details below.

Changes in version 0.1.5 (2019-10-22)

  • More extensive use of newer R macros in package-default manual page.

  • Install .Rbuildignore and .gitignore files.

  • Use the updated Travis run script.

  • Use more Rd macros in default 'stub' manual page (#8).

  • Several typos were fixed in README.md, NEWS.Rd and the manual page (#9, #10)
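
If you have not used the package before, the entry point is the kitten() helper; a minimal sketch of mine, with a made-up package name:

$ Rscript -e 'pkgKitten::kitten("myPackage")'   # create a skeleton package in ./myPackage
$ R CMD check myPackage                         # the generated skeleton is meant to check cleanly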

More details about the package are at the pkgKitten webpage and the pkgKitten GitHub repo.

Courtesy of CRANberries, there is also a diffstat report for this release

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 October, 2019 12:52PM

Keith Packard

Picolibc Updates (October 2019)

Picolibc is in pretty good shape, but I've been working on a few updates which I thought I'd share this evening.

Dummy stdio thunk

Tiny stdio in picolibc uses a global variable, __iob, to hold pointers to FILE structs for stdin, stdout, and stderr. For this to point at actual usable functions, applications normally need to create and initialize this themselves.

If all you want to do is make sure the tool chain can compile and link a simple program (as is often required for build configuration tools like autotools), then having a simple 'hello world' program actually build successfully can be really useful.

I added the 'dummyiob.c' module to picolibc which has an iob variable initialized with suitable functions. If your application doesn't define its own iob, you'll get this one instead.

$ cat hello.c
#include <stdio.h>

int main(void)
{
    printf("hello, world\n");
}
$ riscv64-unknown-elf-gcc -specs=picolibc.specs hello.c
$ riscv64-unknown-elf-size a.out
   text    data     bss     dec     hex filename
    496      32       0     528     210 a.out

POSIX thunks

When building picolibc on Linux for testing, it's useful to be able to use glibc syscalls for input and output. If you configure picolibc with -Dposix-io=true, then tinystdio will use POSIX functions for reading and writing, and also offer fopen and fdopen functions as well.
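
For reference, enabling that looks roughly like this (a sketch; picolibc builds with meson, and any cross-file arguments are omitted here):

$ meson setup build -Dposix-io=true
$ ninja -C build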

To make calling glibc syscall APIs work, I had to kludge the stat structure and fcntl bits. I'm not really happy about this, but it's really only for testing picolibc on a Linux host, so I'm not going to worry too much about it.

Remove 'mathfp' code

The newlib configuration docs aren't exactly clear about what the newlib/libm/mathfp directory contains, but if you look at newlib faq entry 10 it turns out this code was essentially a failed experiment in doing a 'more efficient' math library.

I think it's better to leave 'mathfp' in git history and not have it confusing us in the source repository, so I've removed it along with the -Dhw-fp option.

Other contributions

I've gotten quite a few patches from other people now, which is probably the most satisfying feedback of all.

  • powerpc build patches
  • stdio fixes
  • cleanup licensing, removing stale autotools bits
  • header file cleanups from newlib which got missed

Semihosting support

RISC-V and ARM both define a 'semihosting' API, which provides APIs to access the host system from within an embedded application. This is useful in a number of environments:

  • GDB through OpenOCD and JTAG to an embedded device
  • Qemu running bare-metal applications
  • Virtual machines running anything up to and including Linux

I really want to do continuous integration testing for picolibc on as many target architectures as possible, but it's impractical to try and run that on actual embedded hardware. Qemu seems like the right plan, but I need a simple mechanism to get error messages and exit status values out from the application.

Semihosting offers all of the necessary functionality to run tests without requiring an emulated serial port in Qemu and a serial port driver in the application.

For now, that's all the functionality I've added; console I/O (via a definition of _iob) and exit(2). If there's interest in adding more semihosting API calls, including file I/O, let me know.

I wanted to make semihosting optional, so that applications wouldn't get surprising results when linking with picolibc. This meant placing the code in a separate library, libsemihost. To get this linked correctly, I had to do a bit of magic in the picolibc.specs file. This means that selecting semihost mode is now done with a gcc option, -semihost, instead of just adding -lsemihost to the linker line.

Semihosting support for RISC-V is already upstream in OpenOCD. I spent a couple of hours last night adapting the ARM semihosting support in Qemu for RISC-V and have pushed that to my riscv-semihost branch in my qemu project on github.

A real semi-hosted 'hello world'

I've been trying to make using picolibc as easy as possible. Learning how to build embedded applications is hard, and reducing some of the weird tool chain fussing might make it easier. These pieces work together to simplify things:

  • Built-in crt0.o
  • picolibc.specs
  • picolibc.ld
  • semihost mode

Here's a sample hello-world.c:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    printf("hello, world\n");
    exit(0);
}

On Linux, compiling is easy:

$ cc hello-world.c 
$ ./a.out 
hello, world
$

Here's how close we are to that with picolibc:

$ riscv64-unknown-elf-gcc -march=rv32imac -mabi=ilp32 --specs=picolibc.specs -semihost -Wl,-Tqemu-riscv.ld hello-world.c
$ qemu-system-riscv32 -semihosting -machine spike -cpu rv32imacu-nommu -kernel a.out -nographic
hello, world
$

This requires a pile of options to specify the machine that qemu emulates, both when compiling the program and again when running it. It also requires one extra file to define the memory layout of the target processor, 'qemu-riscv.ld':

__flash = 0x80000000;
__flash_size = 0x00080000;
__ram = 0x80080000;
__ram_size = 0x40000;
__stack_size = 1k;

These are all magic numbers that come from the definition of the 'spike' machine in qemu, which defines 16MB of RAM starting at 0x80000000 that I split into a chunk for read-only data and another chunk for read-write data. I found that definition by looking in the source; presumably there are easier ways?

Larger Examples

I've also got snek running on qemu for both arm and riscv processors; that exercises a lot more of the library. Beyond this, I'm working on freedom-metal and freedom-e-sdk support for picolibc and hope to improve the experience of building embedded RISC-V applications.

Future Plans

I want to get qemu-based testing working on both RISC-V and ARM targets. Once that's running, I want to see the number of test failures reduced to a more reasonable level and then I can feel comfortable releasing version 1.1. Help on these tasks would be greatly appreciated.

22 October, 2019 05:34AM

Dirk Eddelbuettel

digest 0.6.22: More goodies!

A new version of digest arrived at CRAN earlier today, and I just sent an updated package to Debian too.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, and spookyhash algorithms) permitting easy comparison of R language objects. It is a fairly widely-used package (currently listed at 868k monthly downloads) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation.
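
As a quick illustration of mine (not from the release notes; the algorithm choice is arbitrary):

$ Rscript -e 'cat(digest::digest(mtcars, algo = "sha256"), "\n")'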

This release comes pretty much exactly one month after the very nice 0.6.21 release but contains five new pull requests. Matthew de Queljoe did a little bit of refactoring of the vectorised digest function he added in 0.6.21. Ion Suruceanu added a CFB cipher for AES. Bill Denney both corrected and extended sha1. And Jim Hester made the windows-side treatment of filenames UTF-8 compliant.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 October, 2019 01:31AM

October 21, 2019

Chris Lamb

Tour d'Orwell: Rue du Pot-de-Fer

In yet another George Orwell-themed jaunt, just off the «Place de la Contrescarpe» in the Vᵉ arrondissement (where the Benalla affair took place) is the narrow «Rue du Pot-de-Fer» where Orwell lived for 18 months at number six.

He would not have written a glowing TripAdvisor review of the hotel he stayed at: "the walls were as thin as matchwood and to hide the cracks they had been covered with layer after layer of pink paper [and] long lines of bugs marched all day long like columns of soldiers and at night came down ravenously hungry so that one had to get up every few hours and kill them."

Skipping the shisha bar that exists on le premier étage, whilst I was not "ravenously" hungry I couldn't resist the escargot in the nearby square whilst serenaded by an accordion player clearly afflicted with ennui or some other stereotypically-Parisien malady…

21 October, 2019 03:38PM

Norbert Preining

Pleasures of Tibetan input and typesetting with TeX

Many years ago I decided to learn Tibetan (and the necessary Sanskrit), and enrolled in the university studies of Tibetology in Vienna. Since then I have mostly forgotten Tibetan due to the absolute absence of any practice, save for regular consultations from a friend on how to typeset Tibetan. In the future I will write a lengthy article for TUGboat on this topic, but here I want to concentrate only on a single case, the character ཨྰོཾ. Since you might have a hard time getting it rendered correctly in your browser, here is an image of it.

In former times we used ctib to typeset Tibetan, but the only font that was usable with it is a bit clumsy, and there are several much better fonts now available. Furthermore, always using transliterated input instead of the original Tibetan text might be a problem for people outside academics of Tibetan. If we want to use one of these (obviously) ttf/otf fonts, a switch to either XeTeX or LuaTeX is required.

It turned out that the font my friend currently uses, Jomolhari, although beautiful, is practically unusable. My guess is that the necessary tables in the truetype font are not correctly initialized, but this is detective work for later. I tried several other fonts, in particular DDC Uchen, Noto Serif Tibetan, Yagpo Uni, and Qomolangma, and obtained much better results. Complicated compounds like རྣྲི་ rendered properly in both LuaTeX (in the standard version as well as the HarfBuzz-based version LuaHBTeX) and XeTeX. Others, like ཨོྂ་, rendered properly only with Noto Serif Tibetan.

But there was one combination that escaped correct typesetting: ཨྰོཾ་, which is the famous OM with a subjoined a-chung. It turned out that with LuaLaTeX it worked, but with LuaHBLaTeX (with HarfBuzz rendering actually enabled) and XeTeX the output was a separate OM followed by a dotted circle with the subjoined a-chung.

The puzzling thing about it was that it was rendered correctly in my terminal, and even in my browser – the reason being that both use the font Yagpo Uni. Unfortunately other combinations rendered badly with Yagpo Uni, which is surprising because Yagpo Uni seems to be one of the most complete fonts, with practically all stacks (not only standard ones) precomposed as glyphs in the font.

With the help of the luaotfload package maintainers we could dissect the problem:

  • The unicode sequence was U+0F00 U+0FB0, which is the pre-composed OM with the subjoined a-chung
  • This sequence was input via EWTS (M17N) by entering oM+', which somehow feels natural to use
  • HarfBuzz decomposes U+0F00 into U+0F68 U+0F7C U+0F7E according to the ccmp table of the font
  • HarfBuzz either does not recognize U+0F00 as a glyph that can get further consonants subjoined, or the decomposed sequence followed by the subjoined a-chung U+0F68 U+0F7C U+0F7E U+0FB0 cannot be rendered correctly, thus HarfBuzz renders the subjoined a-chung separately under the famous dotted circle

Having this in mind, it was easy to come up with a correct input sequence to achieve the correct output: a+'oM, which generates the unicode sequence

  U+0F68 TIBETAN LETTER A
  U+0F7C TIBETAN VOWEL SIGN O
  U+0F7E TIBETAN SIGN RJES SU NGA RO
  U+0FB0 TIBETAN SUBJOINED LETTER -A

which in turn is correctly rendered by HarfBuzz, and thus by both XeTeX and LuaHBTeX with HarfBuzz rendering enabled.

Here some further comments concerning HarfBuzz based rendering (luahbtex-dev, xetex, libreoffice) versus luatex:

  • Yagpo Uni has lots of pre-composed compounds, but is rather picky about how they are input. The above letter renders correctly when input as oM+' (that is U+0F00 U+0FB0), but incorrectly when input as a+'oM.
  • Noto Serif Tibetan accepts most input from EWTS correctly, but oM+' does not work, while a+'oM does (opposite of Yagpo Uni!)
  • Concerning ཨོྂ་ (input o~M`), only Noto Serif Tibetan and Qomolangma work; unfortunately Yagpo is broken here. I guess a different input format is necessary for this.
  • Qomolangma does not use the correct size of a-chung in ligatures that are more complex than OM
  • Jomolhari seems to be completely broken with respect to normal input

All this tells me that there is really a huge difference in what fonts expect, how to input, and how to render, and with endangered minority languages like Tibetan the situation seems to be much worse.

Concerning the rendering of ཨྰོཾ་ I am still not sure where to send a bug report: One could easily catch this order of characters in the EWTS M17N input definition, but that would fix it only for EWTS and other areas would be left out, in particular LibreOffice rendering as well as any other HarfBuzz based application. So I would rather see it fixed in HarfBuzz (and I am currently testing code). But all the other problems that appear are still unsolved.

21 October, 2019 06:23AM by Norbert Preining

October 20, 2019

Jonathan McDowell

Debian Buster / OpenWRT 18.06.4 upgrade notes

Yesterday was mostly a day of doing upgrades (as well as the usual Saturday things like a parkrun). First on the list was finally upgrading my last major machine from Stretch to Buster. I’d done everything else and wasn’t expecting any major issues, but it runs more things so there’s generally more opportunity for problems. Some notes I took along the way:

  • apt upgrade updated collectd but due to the loose dependencies (the collectd package has a lot of plugins and chooses not to depend on anything other than what the core needs) libsensors5 was not pulled in so the daemon restart failed. This made apt/dpkg unhappy until I manually pulled in libsensors5 (replacing libsensors4).
  • My custom fail2ban rule to drop spammers trying to register for wiki accounts too frequently needed the addition of a datepattern entry to work properly.
  • The new version of python-flaskext.wtf caused deprecation warnings from a custom site I run, fixed by moving from Form to FlaskForm. I still have a problem with a TypeError: __init__() takes at most 2 arguments (3 given) error to track down.
  • ejabberd is still a pain. This time the change of the erlang node name caused issues with it starting up. There was a note in NEWS.Debian but the failure still wasn’t obvious at first.
  • For most of the config files that didn’t update automatically I just did a manual vimdiff and pulled in the updated comments; the changes were things I wanted to keep like custom SSL certificate configuration or similar.
  • PostgreSQL wasn’t as smooth as last time. A pg_upgradecluster 9.6 main mostly did the right thing (other than taking ages to migrate the SpamAssassin Bayes database), but left 9.6 still running rather than 11; see the cleanup sketch after this list.
  • I’m hitting #924178 in my duplicity backups - they’re still working ok, but it might be time to finally try restic
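
On the PostgreSQL point, finishing the job should just be a matter of dropping the old cluster once satisfied; a sketch using the standard postgresql-common tools:

$ pg_lsclusters                         # confirm 11/main is serving and 9.6/main is the leftover
$ sudo pg_dropcluster 9.6 main --stop   # stop and remove the old cluster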

All in all it went smoothly enough; the combination of a lot of packages and the PostgreSQL migration accounted for most of the time. Perhaps it’s time to look at something with SSDs rather than spinning rust (definitely something to check out when I’m looking for alternative hosts).

The other major-ish upgrade was taking my house router (a repurposed BT HomeHub 5A) from OpenWRT 18.06.2 to 18.06.4. Not as big as the Debian upgrade, but with the potential to leave me with non-working broadband if it went wrong[1]. The CLI sysupgrade approach worked fine (as it has in the past), helped by the fact I’ve added my MQTT and SSL configuration files to /etc/sysupgrade.conf so they get backed up and restored. OpenWRT does a full reflash for an upgrade given the tight flash constraints, so other than the config files that get backed up you need to restore anything extra. This includes non-default packages that were installed, so I end up with something like

opkg update
opkg install mosquitto-ssl 6in4 ip-tiny picocom kmod-usb-serial-pl2303

And then I have a custom compile of the collectd-mod-network package to enable encryption, and my mqtt-arp tool to watch for house occupants:

opkg install collectd collectd-mod-cpu collectd-mod-interface \
     collectd-mod-load collectd-mod-memory collectd-mod-sensors \
     /tmp/collectd-mod-network_5.8.1-1_mips_24kc.ipk
opkg install /tmp/mqtt-arp_1.0-1_mips_24kc.ipk

One thing that got me was the fact that installing the 6in4 package didn’t give me working v6; I had to restart the router for it to even configure up its v6 interfaces. Not a problem, I just didn’t notice for a few hours[2].

While I was on a roll I upgraded the kernel on my house server to the latest stable release, and Home Assistant to 0.100.2. As expected neither had any hiccups.

[1] Of course I have a spare VDSL modem/router that I can switch in, but it’s faff I prefer to avoid. 

[2] And one I shouldn’t hit in the future, as I’m moving to an alternative provider with native IPv6 this week. 

20 October, 2019 05:30PM

Dirk Eddelbuettel

RcppGSL 0.3.7: Fixes and updates

A new release 0.3.7 of RcppGSL is now on CRAN. The RcppGSL package provides an interface from R to the GNU GSL using the Rcpp package.

Stephen Wade noticed that we were not actually freeing memory from the GSL vectors and matrices as we set out to do. And he is quite right: a dormant bug, present since the 0.3.0 release, has now been squashed. I had one boolean wrong, and this has now been corrected. I also took the opportunity to switch the vignette to prebuilt mode: Now a pre-made pdf is just included in a Sweave document, which makes the build more robust to tooling changes around the vignette processing. Lastly, the package was converted to the excellent tinytest unit test framework. Detailed changes below.

Changes in version 0.3.7 (2019-10-20)

  • A logic error was corrected in the wrapper class, vector and matrix memory is now properly free()'ed (Dirk in #22 fixing #20).

  • The introductory vignette is now premade (Dirk in #23), and was updated lightly in its bibliography handling.

  • The unit tests are now run by tinytest (Dirk in #24).

Courtesy of CRANberries, a summary of changes to the most recent release is also available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

20 October, 2019 03:01PM

Daniel Silverstone

A quarter in review - Nearly there, 2020 in sight

The 2019 plan - Third-quarter review

At the start of the year I blogged about my plans for 2019. For those who don't want to go back to read that post, in summary they are:

  1. Continue to lose weight and get fit. I'd like to reach 80kg during the year if I can
  2. Begin a couch to 5k and give it my very best
  3. Focus my software work on finishing projects I have already started
  4. Where I join in other projects be a net benefit
  5. Give back to the @rustlang community because I've gained so much from them already
  6. Be better at tidying up
  7. Save up lots of money for renovations
  8. Go on a proper holiday

At the point that I posted that, I promised myself to do quarterly reviews, and so here is the third of those. The first can be found here, and the second here.

1. Weight loss

So when I wrote in July, I was around 83kg. I am very sad to report that I did not manage to drop another 5kg, I'm usually around 81.5kg at the moment, though I do peak up above 84kg and down as far as 80.9kg. The past three months have been an exercise in incredible frustration because I'd appear to be making progress only to have it vanish in one day.

I've continued my running though, and cycling, and I've finally gone back to the gym and asked for a resistance routine to complement this, so here's hoping.

Yet again, I continue to give myself a solid "B" for this, though if I were generous, given everything else, I might consider a "B+"

2. Fitness (was Couch to 5k)

When I wrote in July, I was pleased to say that I was sub 28m on my parkrun. I'm now consistently sub 27:30, and have personal bests of 26:18 and 26:23 at two of my local parkruns.

I have started running with my colleagues once a week, and that run is a bit longer (5.8 to 7km depending on our mood) and while I've only been out with them a couple of times so far, I've enjoyed running in a small group. With the weather getting colder I've bought myself some longer sleeved tops and bottoms to hopefully allow me to run through the winter. My "Fitness age" is now in the mid 40s, rather than high 60s, so I'm also approaching a point where I'm as fit as I am old, which is nice.

So far, so good, I'm continuing with giving myself an "A+"

3. Finishing projects

This is a much more difficult point for me this year, sadly. I continued to do some work on NetSurf this quarter. We had another amazing long-weekend where we worked on a whole bunch of NS stuff, and I've even managed to give up some of my other spare time to resolve bugs, though they tend to be quite hard and I'm quite slow. I'm very pleased with how I've done with that.

Lars and I continue to work on our testing project, now called Subplot. Though, frankly, Lars does almost all of the work on this.

I did accidentally start another project (remsync) after buying a reMarkable tablet. So that bumps my score down a half point.

So over-all, this one drops to "C-", from the "C" earlier in the year - still (barely) satisfactory but could do a lot better.

4. Be a net benefit

My efforts for Debian continue to be restricted, though I hope it continues to just about be a net benefit to the project. My efforts with the Lua community have not extended again, so pretty much the same.

I remain invested in Rust stuff, and have managed (just about) to avoid starting in on any other projects, so things are fairly much the same as before. I lead the Rust installer working group and we recently released a huge update to rustup which adds a bunch of desired stuff.

While the effects of my Rust work affect both this and the next section, I am very pleased with how I did and have upgraded myself to an "A-" for this.

5. Give back to the Rust community

I have worked very hard on my Rustup work, and I have also started to review documentation and help updates for the Rust compiler itself. I've become involved in the Sequoia project, at least peripherally, and have attended a developer retreat with them which was both relaxing and productive.

I feel like the effort I'm putting into Rust is being recognised in ways I did not expect nor hope for, but that's very positive and has meant I've engaged even more with the community and feel like I'm making a valuable contribution.

I still hang around on the #wg-rustup Discord channel and other channels on that server, helping where I can, and I've been trying to teach my colleagues about Rust so that they might also contribute to the community.

So initially an 'A', I dropped to an 'A-' last time, but I feel like I've put enough effort in to give myself 'A+' this time.

6. Be better at tidying up

I've managed to do a bit more tidying, but honestly this is still pretty bad. I managed to clean up some stuff, but then it slips back into mess. The habit forming is still not happening. At this point I think I really need to grab the bull by the horns and focus on this one, so it'll be better in the next report I promise.

I'm upgrading to an 'E' because I am making some difference, just not enough.

7. Save up money for renovations

We spent those savings on our renovations, but I do continue to manage to put some away. As you can see in the next section though, I've been spending money on myself too.

I think I get to keep an 'A' here, but only just.

8. Go on a proper holiday

I spent a week with the Sequoia-PGP team in Croatia which was amazing. I have a long weekend planned with them in Barcelona for Rustfest too. Some people would say that those aren't real holidays, but I relaxed, did stuff I enjoyed, tried new things, and even went on a Zip-line in Croatia, so I'm counting it as a win.

While I've not managed to take a holiday with Rob, he's been off to do things independently, so I'm upgrading us to a 'B' here.

Summary

Last quarter I had a B+, A+, C, B, A-, F, A, C+, which, ignoring the F, was better than earlier in the year, though still not great.

This quarter I have a B+, A+, C-, A-, A+, E, A, B. The F has gone, which is nice, and I suppose I could therefore call that a fair A- average, or perhaps C+ if I count the E.

20 October, 2019 03:00PM by Daniel Silverstone

October 18, 2019

Molly de Blanc

Autonomy and consent

When I was an undergraduate, I took a course on medical ethics. The core takeaways from the class were that autonomy is necessary for consent, and consent is necessary for ethical action.

There is a reciprocal relationship between autonomy and consent. We are autonomous creatures, we are self-governing. In being self-governing, we have the ability to consent, to give permission to others to interact with us in the ways we agree on. We can only really consent when we are self-governing, otherwise, it’s not proper consent. Consent also allows us to continue to be self-governing. By giving others permission, we are giving up some control, but doing so on our own terms.

In order to actually consent, we have to grasp the situation we’re in, and as much about it as possible. Decision making needs to come from a place of understanding.

It’s a fairly straightforward path when discussing medicine: you cannot operate on someone, or force them to take medication, or any other number of things without their permission to do so, and that their permission is based on knowing what’s going on.

I cannot stress enough how important it is to transpose this idea onto technology. This is an especially valuable concept when looking at the myriad ways we interact with technology, and especially computing technology, without even being given the opportunity to consent, whether or not we come from a place of autonomy.

At the airport recently, I heard that a flight was boarding with facial recognition technology. I remembered reading an article over the summer about how hard it is to opt out. It gave me pause. I was running late for my connection and worried that I would be put in a situation where I would have to choose between the opt-out process and missing my flight. I come from a place of greater understanding than the average passenger (I assume) when it comes to facial recognition technology, but I don’t know enough about its implementation in airports to feel as though I could consent. Many people approach this from a place of even less understanding than mine.

From my perspective, there are two sides to understanding and consent: the technology itself and the way gathered data is being used. I’m going to save those for a future blog post, but I’ll link back to this one, and edit this to link forward to them.

18 October, 2019 01:35PM by mollydb

Matthew Garrett

Letting Birds scooters fly free

(Note: These issues were disclosed to Bird, and they tell me that fixes have rolled out. I haven't independently verified)

Bird produce a range of rental scooters that are available in multiple markets. With the exception of the Bird Zero[1], all their scooters share a common control board described in FCC filings. The board contains three primary components - a Nordic NRF52 Bluetooth controller, an STM32 SoC and a Quectel EC21-V modem. The Bluetooth and modem are both attached to the STM32 over serial and have no direct control over the rest of the scooter. The STM32 is tied to the scooter's engine control unit and lights, and also receives input from the throttle (and, on some scooters, the brakes).

The pads labeled TP7-TP11 near the underside of the STM32 and the pads labeled TP1-TP5 near the underside of the NRF52 provide Serial Wire Debug, although confusingly the data and clock pins are the opposite way around between the STM and the NRF. Hooking this up via an STLink and using OpenOCD allows dumping of the firmware from both chips, which is where the fun begins. Running strings over the firmware from the STM32 revealed "Set mode to Free Drive Mode". Challenge accepted.
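
As a sketch of what that dump step looks like (the interface/target config names and the image size here are illustrative, not taken from the actual board):

$ openocd -f interface/stlink.cfg -f target/stm32f1x.cfg \
    -c "init; halt; dump_image stm32.bin 0x08000000 0x40000; shutdown"
$ strings stm32.bin | grep -i "free drive"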

Working back from the code that printed that, it was clear that commands could be delivered to the STM from the Bluetooth controller. The Nordic NRF52 parts are an interesting design - like the STM, they have an ARM Cortex-M microcontroller core. Their firmware is split into two halves, one the low level Bluetooth code and the other application code. They provide an SDK for writing the application code, and working through Ghidra made it clear that the majority of the application firmware on this chip was just SDK code. That made it easier to find the actual functionality, which was just listening for writes to a specific BLE attribute and then hitting a switch statement depending on what was sent. Most of these commands just got passed over the wire to the STM, so it seemed simple enough to just send the "Free drive mode" command to the Bluetooth controller, have it pass that on to the STM and win. Obviously, though, things weren't so easy.

It turned out that passing most of the interesting commands on to the STM was conditional on a variable being set, and the code path that hit that variable had some impressively complicated looking code. Fortunately, I got lucky - the code referenced a bunch of data, and searching for some of the values in that data revealed that they were the AES S-box values. Enabling the full set of commands required you to send an encrypted command to the scooter, which would then decrypt it and verify that the cleartext contained a specific value. Implementing this would be straightforward as long as I knew the key.

Most AES keys are 128 bits, or 16 bytes. Digging through the code revealed 8 bytes worth of key fairly quickly, but the other 8 bytes were less obvious. I finally figured out that 4 more bytes were the value of another Bluetooth variable which could be simply read out by a client. The final 4 bytes were more confusing, because all the evidence made no sense. It looked like it came from passing the scooter serial number to atoi(), which converts an ASCII representation of a number to an integer. But this seemed wrong, because atoi() stops at the first non-numeric value and the scooter serial numbers all started with a letter[2]. It turned out that I was overthinking it and for the vast majority of scooters in the fleet, this section of the key was always "0".

At that point I had everything I needed to write a simple app to unlock the scooters, and it worked! For about 2 minutes, at which point the network would notice that the scooter was unlocked when it should be locked and send a lock command to force disable the scooter again. Ah well.

So, what else could I do? The next thing I tried was just modifying some STM firmware and flashing it onto a board. It still booted, indicating that there was no sort of verified boot process. Remember what I mentioned about the throttle being hooked through the STM32's analogue to digital converters[3]? A bit of hacking later and I had a board that would appear to work normally, but about a minute after starting the ride would cut the throttle. Alternative options are left as an exercise for the reader.

Finally, there was the component I hadn't really looked at yet. The Quectel modem actually contains its own application processor that runs Linux, making it significantly more powerful than any of the chips actually running the scooter application[4]. The STM communicates with the modem over serial, sending it an AT command asking it to make an SSL connection to a remote endpoint. It then uses further AT commands to send data over this SSL connection, allowing it to talk to the internet without having any sort of IP stack. Figuring out just what was going over this connection was made slightly difficult by virtue of all the debug functionality having been ripped out of the STM's firmware, so in the end I took a more brute force approach - I identified the address of the function that sends data to the modem, hooked up OpenOCD to the SWD pins on the STM, ran OpenOCD's gdb stub, attached gdb, set a breakpoint for that function and then dumped the arguments being passed to that function. A couple of minutes later and I had a full transaction between the scooter and the remote.
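
Roughly, that session looks like this (the breakpoint address is hypothetical; on 32-bit ARM the first function arguments arrive in registers r0-r3):

$ arm-none-eabi-gdb firmware.elf
(gdb) target extended-remote :3333   # OpenOCD's gdb stub
(gdb) break *0x08001234              # hypothetical address of the modem-send function
(gdb) commands
> x/s $r0                            # print the buffer argument on each hit
> continue
> end
(gdb) continue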

The scooter authenticates against the remote endpoint by sending its serial number and IMEI. You need to send both, but the IMEI didn't seem to need to be associated with the serial number at all. New connections seemed to take precedence over existing connections, so it would be simple to just pretend to be every scooter and hijack all the connections, resulting in scooter unlock commands being sent to you rather than to the scooter or allowing someone to send fake GPS data and make it impossible for users to find scooters.

In summary: Secrets that are stored on hardware that attackers can run arbitrary code on probably aren't secret, not having verified boot on safety critical components isn't ideal, devices should have meaningful cryptographic identity when authenticating against a remote endpoint.

Bird responded quickly to my reports, accepted my 90 day disclosure period and didn't threaten to sue me at any point in the process, so good work Bird.

(Hey scooter companies I will absolutely accept gifts of interesting hardware in return for a cursory security audit)

[1] And some very early M365 scooters
[2] The M365 scooters that Bird originally deployed did have numeric serial numbers, but they were 6 characters of type code followed by a / followed by the actual serial number - the number of type codes was very constrained and atoi() would terminate at the / so this was still not a large keyspace
[3] Interestingly, Lime made a different design choice here and plumb the controls directly through to the engine control unit without the application processor having any involvement
[4] Lime run their entire software stack on the modem's application processor, but because of [3] they don't have any realtime requirements so this is more straightforward

18 October, 2019 11:44AM

October 17, 2019

Ritesh Raj Sarraf

User Mode Linux 5.2

User Mode Linux version 5.2 has been uploaded to Debian Unstable and will soon be available on the supported architectures. This upload took more time than usual as I ran into a build-time failure caused by a newer PCAP library.

Thanks to active upstream developers, this got sorted out quickly. In the longer run, we may have a much better fix for it.

What is User Mode Linux a.k.a uml

It is one of the initial virtualization technologies that Linux provided, which still works and is supported/maintained. It is about running a Linux kernel as a single unprivileged user process.

There was a similar project, coLinux, that ported the Linux kernel to Microsoft Windows, which I used to recommend to people who were more familiar with the Microsoft Windows environment. Conceptually, it was very similar to UML. With coLinux, XMing and PulseAudio, it was a delight to see GNU/Linux based applications run efficiently on Microsoft Windows.

That was years ago. Unfortunately, the last time I checked on coLinux, it did not seem to be maintained any more. User Mode Linux too hasn’t had a very large user/developer base, but being part of the Linux kernel has kept it fairly well maintained.

The good news is that for the last 2 years or so, uml has had active maintainers/developers. So I expect to see some parity in terms of features and performance numbers eventually. Things like virtio could definitely bring a lot of performance improvements to UML, and much wider user acceptance ultimately.

To quote the Debian package description:

Package: user-mode-linux
Version: 5.2-1um-1
Built-Using: linux (= 5.2.17-1)
Status: install ok installed
Priority: extra
Section: kernel
Maintainer: User Mode Linux Maintainers <team+uml@tracker.debian.org>
Installed-Size: 43.7 MB
Depends: libc6 (>= 2.28)
Recommends: uml-utilities (>= 20040406-1)
Suggests: x-terminal-emulator, rootstrap, user-mode-linux-doc, slirp, vde2
Homepage: http://user-mode-linux.sourceforge.net/
Download-Size: unknown
APT-Manual-Installed: yes
APT-Sources: /var/lib/dpkg/status
Description: User-mode Linux (kernel)
 User-mode Linux (UML) is a port of the Linux kernel to its own system
 call interface.  It provides a kind of virtual machine, which runs
 Linux as a user process under another Linux kernel.  This is useful
 for kernel development, sandboxes, jails, experimentation, and many
 other things.
 .
 This package contains the kernel itself, as an executable program,
 and the associated kernel modules.

User Mode Linux networking

I have been maintaining User Mode Linux for Debian for a couple of years now, but one area where I still waste a lot of time is networking.

Today, we have these major virt technologies: virtualization, containers and UML. For simplicity, I am keeping UML as a separate type.

For my needs, I prefer keeping a single bridge on my laptop, through which all the different types of technologies can access the network. By keeping everything on the bridge, I get to focus on just one entry/exit point for firewalling, monitoring, troubleshooting etc.

As of today, easily:

  • VirtualBox can talk to my local bridge
  • Libvirt/Qemu can talk to my local bridge
  • Docker can too
  • systemd-nspawn can also

The only challenge comes with UML, where I have had to set up a separate tun interface and make UML talk through it using the tuntap daemon mode. And on the host, I do NAT to get it to talk to the internet.
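
For reference, that setup looks roughly like this (a sketch; the device name and addresses are illustrative):

$ sudo tunctl -t uml0 -u $(whoami)        # from uml-utilities
$ sudo ip addr add 10.0.0.1/24 dev uml0
$ sudo ip link set uml0 up
$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -j MASQUERADE
$ linux ubd0=rootfs.img eth0=tuntap,uml0  # boot the UML kernel against the tap device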

Ideally, I would like to simply tell UML that this is my bridge device and that it should associate itself to it for networking. I looked at vde and found a wrapper vdeq. Something like this for UML could simplify things a lot.

UML also has a =vde mode that it can attach itself to. But from the looks of it, it is similar to what we already provide through uml-utilities in tuntap daemon mode.

I am curious whether other User Mode Linux users have simpler ways to get networking set up, and ideally whether it can be set up through Debian’s networking ifup scripts or through systemd-networkd. If so, I would really appreciate it if you could drop me a note over email, until I find a simple and clean way to get comments set up for my blog.

17 October, 2019 02:04PM by Ritesh Raj Sarraf (rrs@researchut.com)

Jonathan Carter

Calamares Plans for Debian 11

Brief history of Calamares in Debian

Before Debian 9 was released, I was preparing a release for a derivative of Debian that was a bit different than other Debian systems I’ve prepared for redistribution before. This was targeted at end-users, some of whom might have used Ubuntu before, but otherwise had no Debian related experience. I needed to find a way to make Debian really easy for them to install. Several options were explored, and I found that Calamares did a great job of making it easy for typical users to get up and running fast.

After Debian 9 was released, I learned that other Debian derivatives were also using Calamares or planning to do so. It started to make sense to package Calamares in Debian so that we don’t do duplicate work in all these projects. On its own, Calamares isn’t very useful: if you ran the pure upstream version in Debian it would crash before it starts to install anything. This is because Calamares needs some configuration and helpers depending on the distribution. Most notably in Debian’s case, this means setting the location of the squashfs image we want to copy over, and some scripts to either install grub-pc or grub-efi depending on how we boot. Since I already did most of the work to figure all of this out, I created a package called calamares-settings-debian, which contains enough configuration to install Debian using Calamares so that derivatives can easily copy and adapt it to their own calamares-settings-* packages for use in their systems.

In Debian 9, the live images were released without an installer available in the live session. Unfortunately the debian-installer live session that was used in previous releases had become hard to maintain and had a growing number of bugs that didn’t make it suitable for release, so Steve from the live team suggested that we add Calamares to the Debian 10 test builds and give it a shot, which surprised me because I never thought that Calamares would actually ship on official Debian media. We tried it out, and it worked well so Debian 10 live media was released with it. It turned out great, every review of Debian 10 I’ve seen so far had very good things to say about it, and the very few problems people have found have already been fixed upstream (I plan to backport those fixes to buster soon).

Plans for Debian 11 (bullseye)

New slideshow

If I had to choose my biggest regret regarding the Debian 10 release, this slideshow would probably be it. It’s just the one slide depicted above. The time needed to create a nice slideshow was one constraint, but another was that I also didn’t have enough time to figure out how its translations work and do a proper call for translations in time for the hard freeze. I consider the slideshow a golden opportunity to explain to new users what the Debian project is about and what this new Debian system they’re installing is capable of, so this was basically a huge missed opportunity that I don’t want to repeat.

I intend to pull in some help from the web team, publicity team and anyone else who might be interested to cover slides along the topics of (just a quick braindump, final slides will likely have significantly different content):

  • The Debian project, and what it’s about
    • Who develops Debian and why
    • Cover the social contract, and perhaps touch on the DFSG
  • Who uses Debian? Mention notable users and use cases
    • Big and small commercial companies
    • Educational institutions
    • …even NASA?
  • What Debian can do
    • Explain vast package library
    • Provide some tips and tricks on what to do next once the system is installed
  • Where to get help
    • Where to get documentation
    • Where to get additional help

It shouldn’t get too heavy and shouldn’t run longer than a maximum of three minutes or so, because in some cases that might be all we have during this stage of the installation.

Try out RAID support

Calamares now has RAID support. It’s still very new and as far as I know it’s not yet widely tested. It needs to be enabled as a compile-time option and depends on kpmcore 4.0.0, which Calamares uses for its partitioning backend. kpmcore 4.0.0 just entered unstable this week, so I plan to do an upload to test this soon.

RAID support is one of the biggest features missing from Calamares, and enabling it would make it a lot more useful for typical office environments where RAID 1 is typically used on workstations. Some consider RAID on desktops somewhat less important than it used to be. With fast SSDs and network provisioning with gigabit ethernet, it’s really quick to recover from a failed disk, but you still have downtime until the person responsible pops over to replace that disk. At least with RAID 1 you can avoid or drastically decrease downtime, which makes the cost of that extra disk completely worthwhile.

Add Debian-specific options

The intent is to keep the installer simple, so adding new options is a tricky business, but it would be nice to cover some Debian-specific options in the installer just like debian-installer does. At this point I’m considering adding:

  • Choosing a mirror. Currently it just defaults to writing a sources.list file that uses deb.debian.org, which is usually just fine.
  • Add an option to enable popularity contest (popcon). As a Debian developer I find popcon stats quite useful. Even though just a small percentage of people enable it, it provides enough data to help us understand how widely packages are used, especially in relation to other packages, and I’m slightly concerned that desktop users who now use Calamares instead of d-i and forget to enable popcon after installation will skew popcon results for desktop packages compared to previous releases.

Skip files that we’re deleting anyway

At DebConf19, I gave a lightning talk titled “Is it possible to install Debian in a lightning talk slot?”. The answer was sadly “No.”. The idea is that you should be able to install a full Debian desktop system within 5 minutes. In my preparations for the talk, I got it down to just under 6 minutes. It ended up taking just under 7 minutes during my lightning talk, probably because I forgot to plug my laptop into a power source and somehow got throttled to save power. Under 7 minutes is fast, but the exercise got me looking at what wasted the most time during installation.

Of the avoidable things that happen during installation, the thing that takes up the most time by a large margin is removing packages that we don’t want on the installed system. During installation, the whole live system is copied from the installation media over to the hard disk, and then the live packages (including Calamares) are removed from that installation. APT (or actually, more specifically, dpkg) is notorious for playing it safe with filesystem operations, so removing all these live packages takes quite some time (more than even copying them there in the first place).

The contents of the squashfs image are copied over to the filesystem using rsync, so it is possible to provide an exclude list of files that we don’t want. I filed a bug in Calamares to add support for such an exclude list, which was added in version 3.2.15 that was released this week. Now we also need to add support in the live image build scripts to generate these file lists based on the packages we want to remove, but that’s part of a different long blog post altogether.

This feature also opens the door for a minimal mode option, where you could choose to skip non-essential packages such as LibreOffice and GIMP. In reality these packages will still be removed using APT in the destination filesystem, but it will be significantly faster since APT won’t have to remove any real files. The Ubuntu installer (Ubiquity) has done something similar for a few releases now.

Add a framebuffer session

As is the case with most Qt5 applications, Calamares can run directly on the Linux framebuffer without the need for Xorg or Wayland. To try it out, all you need to do is run “sudo calamares -platform linuxfb” on a live console and you’ll get Calamares right there in your framebuffer. It’s not tested upstream so it looks a bit rough. As far as I know I’m the only person so far to have installed a system using Calamares on the framebuffer.

The plan is to create a systemd unit to launch this at startup if ‘calamares’ is passed as a boot parameter. This way, derivatives who want this and who use calamares-settings-debian (or their own fork) can just create a boot menu entry to activate the framebuffer installation without any additional work. I don’t think it should be too hard to make it look decent in this mode either.

Calamares on the framebuffer might also be useful for people who ship headless appliances based on Debian but who still need a simple installer.

Document calamares-settings-debian for derivatives

As the person who put together most of calamares-settings-debian, I consider it quite easy to understand and adapt for other distributions. But even so, it takes a lot more time for someone adapting it for a derivative to delve into the code than it would to read some quick documentation on it first.

I plan to document calamares-settings-debian on the Debian wiki that covers everything that it does and how to adapt it for derivatives.

Improve Scripts

When writing helper scripts for Calamares in Debian I focused on getting them working, reliably and in time for the hard freeze. I cringed when looking at some of these again after the buster release; they’re not entirely horrible, but they could use better conventions and be easier to follow, so I want to get it right for Bullseye. Some scripts might even be eliminated if we can build better images. For example, we install either grub-efi or grub-pc from the package pool on the installation media based on the boot method used, because in the past you couldn’t have both installed at the same time, so they were just shipped as additional available packages. With changes in the GRUB packaging (for a few releases now already) it’s possible to have grub-efi and grub-pc-bin installed at the same time, so if we install both at build time it may be possible to simplify those pieces (and also save another few precious seconds of install time).

Feature Requests

I’m sure some people reading this will have more ideas, but I’m not in a position to accept feature requests right now; Calamares is just one of a whole bunch of areas in Debian I’m working on this release. If you have ideas or feature requests, rather consider filing them in Calamares’ upstream bug tracker on GitHub or get involved in the efforts. Calamares has an IRC channel on freenode (#calamares), and for Debian specific stuff you can join the Debian live channel on OFTC (#debian-live).

17 October, 2019 09:01AM by jonathan

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Montreal Subway Foot Traffic Data

Edit 2019-10-18: Following some comments (thank you!) I added a few more graphs, such as a few top10. I also cleaned the R code a bit and made the graphs a tad easier to read.

Two weeks ago, I took the Montreal subway with my SO and casually mentioned it would be nice to understand why the Joliette subway station has more foot traffic than the next one, Pie-IX. Is the part of the neighborhood served by the Joliette station denser? Would there be a correlation between the mean household income and foot traffic? Has the more aggressive gentrification around the Joliette station affected its achalandage (ridership)?

Much to my surprise, instead of sharing my urbanistical enthusiasm, my SO readily disputed what I thought was an irrefutable fact: "Pie-IX has more foot traffic than Joliette, mainly because of the superior amount of bus lines departing from it" she told me.

Shaken to the core, I decided to prove to the world I was right and asked Société de Transport de Montréal (STM) for the foot traffic data of the Montreal subway.

Turns out I was wrong (Pie-IX is about twice as big as Joliette...) and individual observations are often untrue. Shocking right?!

Visualisations

STM kindly sent me daily values for each subway station from 2001 to 2018. Armed with all this data, I decided to play a little with R and came up with some interesting graphs.

Behold this semi-interactive map of Montreal's subway! By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic.

Interactive Map of Montreal's Subway

I also made a few graphs that include data from multiple stations. Some of them (like the Orange line) are quite crowded though:

Licences

  • The subway map displayed on this page, the original dataset and my modified dataset are licensed under CC0 1.0: they are in the public domain.

  • The R code I wrote is licensed under the GPLv3+. Feel free to reuse it if you get a more up to date dataset from the STM.

17 October, 2019 04:00AM by Louis-Philippe Véronneau

October 16, 2019

Antoine Beaupré

Theory: average bus factor = 1

Two articles recently made me realize that all my free software projects basically have a bus factor of one. I am the sole maintainer of every piece of software I have ever written that I still maintain. There are projects that I have been the maintainer of which have other maintainers now (most notably AlternC, Aegir and Linkchecker), but I am not the original author of any of those projects.

Now that I have a full time job, I feel the pain. Projects like Gameclock, Monkeysign, Stressant, and (to a lesser extent) Wallabako all need urgent work: the first three need to be ported to Python 3, the first two to GTK 3, and the latter will probably die because I am getting a new e-reader. (For the record, more recent projects like undertime and feed2exec are doing okay, mostly because they were written in Python 3 from the start, and the latter has extensive unit tests. But they do suffer from the occasional bitrot (the latter in particular) and need constant upkeep.)

Now that I barely have time to keep up with just the upkeep, I can't help but think all of my projects will just die if I stop working on them. I have the same feeling about the packages I maintain in Debian.

What does that mean? Does that mean those packages are useless? That no one cares enough to get involved? That I'm not doing a good job at including contributors?

I don't think so. I think I'm a friendly person online, and I try my best at doing good documentation and followup on my projects. What I have come to understand is even more depressing and scary than this being a personal failure: that is the situation with everyone, everywhere. The LWN article is not talking about silly things like a chess clock or a feed reader: we're talking about the Linux input drivers. A very deep, core component of the vast majority of computers running on the planet, that depends on that single maintainer. And I'm not talking about whether those people are paid or not, that's related, but not directly the question here. The same realization occurred with OpenSSL and NTP, GnuPG is in a similar situation, the list just goes on and on.

A single guy maintains those projects! Is that a fluke? A statistical anomaly? Everything I feel, and read, and know in my decades of experience with free software show me a reality that I've been trying to deny for all that time: it's the average.

My theory is this: our average bus factor is one. I don't have any hard evidence to back this up, no hard research to rely on. I'd love to be proven wrong. I'd love for this to change.

But unless the economics of technology production change significantly in the coming decades, this problem will remain, and probably worsen, as we keep on scaffolding an entire civilization on the shoulders of hobbyists who are barely aware their work is being used to power phones, cars, airplanes and hospitals. A lot has been written on this, but nothing seems to be moving.

And if that doesn't scare you, it damn well should. As a user, one thing you can do is, instead of buying a bit of proprietary software, consider using free software and donating that money to free software projects. Lobby governments and research institutions to sponsor only free software projects. Otherwise this civilization will collapse in a crash of spaghetti code before it even has time to get flooded over.

16 October, 2019 07:21PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

PhD Poster

Thumbnail of the poster

Today the School of Computing organised a poster session for stage 2 & 3 PhD candidates. Here's the poster I submitted. jdowland-phd-poster.pdf (692K)

This is the first poster I've prepared for my PhD work. I opted to follow the "BetterPoster" principles established by Mike Morrison. These are best summarized in his #BetterPoster 2 minute YouTube video. I adapted this LaTeX #BetterPoster template. This template is licensed under the GPL v3.0 which requires me to provide the source of the poster, so here it is.

After browsing around other students' posters, two things I would now add to the poster are a mugshot (so people could easily determine whose poster it was, if they wanted to ask questions) and Red Hat's logo, to acknowledge their support of my work.

16 October, 2019 01:53PM

October 15, 2019

hackergotchi for Julien Danjou

Julien Danjou

Sending Emails in Python — Tutorial with Code Examples


What do you need to send an email with Python? Some basic programming and web knowledge, along with elementary Python skills. I assume you’ve already built a web app with this language and now you need to extend its functionality with notifications or other email sending. This tutorial will guide you through the most essential steps of sending emails via an SMTP server:

  1. Configuring a server for testing (do you know why it’s important?)
  2. Local SMTP server
  3. Mailtrap test SMTP server
  4. Different types of emails: HTML, with images, and attachments
  5. Sending multiple personalized emails (Python is just invaluable for email automation)
  6. Some popular email sending options like Gmail and transactional email services

Served with numerous code examples written and tested on Python 3.7!

Sending an email using SMTP

The first good news about Python is that it has a built-in module for sending emails via SMTP in its standard library. No extra installations or tricks are required. You can import the module using the following statement:

import smtplib

To make sure that the module has been imported properly and get the full description of its classes and arguments, type in an interactive Python session:

help(smtplib)

At our next step, we will talk a bit about servers: choosing the right option and configuring it.

An SMTP server for testing emails in Python

When creating a new app or adding any functionality, especially when doing it for the first time, it’s essential to experiment on a test server. Here is a brief list of reasons:

  1. You won’t hit your friends’ and customers’ inboxes. This is vital when you test bulk email sending or work with an email database.
  2. You won’t flood your own inbox with testing emails.
  3. Your domain won’t be blacklisted for spam.

Local SMTP server

If you prefer working in the local environment, the local SMTP debugging server might be an option. For this purpose, Python offers an smtpd module. It has a DebuggingServer feature, which will discard messages you are sending out and print them to stdout instead. It is compatible with all operating systems.

Set your SMTP server to localhost:1025

python -m smtpd -n -c DebuggingServer localhost:1025

In order to run an SMTP server on port 25, you’ll need root permissions:

sudo python -m smtpd -n -c DebuggingServer localhost:25

It will help you verify whether your code is working and point out the possible problems if there are any. However, it won’t give you the opportunity to check how your HTML email template is rendered.
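
As a minimal sketch (the addresses here are placeholders), sending a test message to this local debugging server looks like the regular smtplib flow, just without authentication:

import smtplib

sender = "from@example.com"
receiver = "to@example.com"
message = f"""\
Subject: Local test
To: {receiver}
From: {sender}

Hello from the local debugging server."""

# DebuggingServer does not deliver anything; it prints the message to stdout
with smtplib.SMTP("localhost", 1025) as server:
    server.sendmail(sender, receiver, message)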

Fake SMTP server

A fake SMTP server imitates the work of a real third-party SMTP server. In further examples in this post, we will use Mailtrap. Beyond testing email sending, it will let us check how the email will be rendered and displayed, review the raw message data, and provide us with a spam report. Mailtrap is very easy to set up: you just need to copy the credentials generated by the app and paste them into your code.


Here is how it looks in practice:

import smtplib

port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # your password generated by Mailtrap

Mailtrap makes things even easier. Go to the Integrations section in the SMTP settings tab and get the ready-to-use template of the simple message, with your Mailtrap credentials in it. It uses the most basic way of instructing your Python script on who sends what to whom: the sendmail() instance method:

[Screenshot: the ready-to-use message template from Mailtrap’s Integrations section]

The code looks pretty straightforward, right? Let’s take a closer look at it and add some error handling (see the comments in between). To catch errors, we use the try and except blocks.

# The first step is always the same: import all necessary components:
import smtplib
from socket import gaierror

# Now you can play with your code. Let’s define the SMTP server separately here:
port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap

# Specify the sender’s and receiver’s email addresses:
sender = "from@example.com"
receiver = "mailtrap@example.com"

# Type your message: use a blank line to separate the headers from the message body, and use 'f' to automatically insert variables in the text
message = f"""\
Subject: Hi Mailtrap
To: {receiver}
From: {sender}

This is my first message with Python."""

try:
  # Send your message with credentials specified above
  with smtplib.SMTP(smtp_server, port) as server:
    server.login(login, password)
    server.sendmail(sender, receiver, message)
except (gaierror, ConnectionRefusedError):
  # tell the script to report if your message was sent or which errors need to be fixed
  print('Failed to connect to the server. Bad connection settings?')
except smtplib.SMTPServerDisconnected:
  print('Failed to connect to the server. Wrong user/password?')
except smtplib.SMTPException as e:
  print('SMTP error occurred: ' + str(e))
else:
  print('Sent')

Once you get the “Sent” result in the shell, you should see your message in your Mailtrap inbox:

[Screenshot: the received message in the Mailtrap inbox]

Sending emails with HTML content

In most cases, you need to add some formatting, links, or images to your email notifications. We can simply include all of these as HTML content. For this purpose, Python has an email package.

We will deal with the MIME message type, which is able to combine HTML and plain text. In Python, it is handled by the email.mime module.

It is better to write a text version and an HTML version separately, and then merge them into a MIMEMultipart("alternative") instance. Such a message then has two rendering options. In case the HTML isn’t rendered successfully for some reason, the text version will still be available.

import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart

port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap

sender_email = "mailtrap@example.com"
receiver_email = "new@example.com"

message = MIMEMultipart("alternative")
message["Subject"] = "multipart test"
message["From"] = sender_email
message["To"] = receiver_email
# Write the plain text part
text = """\ Hi, Check out the new post on the Mailtrap blog: SMTP Server for Testing: Cloud-based or Local? https://blog.mailtrap.io/2018/09/27/cloud-or-local-smtp-server/ Feel free to let us know what content would be useful for you!"""

# write the HTML part
html = """\ <html> <body> <p>Hi,<br> Check out the new post on the Mailtrap blog:</p> <p><a href="https://blog.mailtrap.io/2018/09/27/cloud-or-local-smtp-server">SMTP Server for Testing: Cloud-based or Local?</a></p> <p> Feel free to <strong>let us</strong> know what content would be useful for you!</p> </body> </html> """

# convert both parts to MIMEText objects and add them to the MIMEMultipart message
part1 = MIMEText(text, "plain")
part2 = MIMEText(html, "html")
message.attach(part1)
message.attach(part2)

# send your email
with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
  server.login(login, password)
  server.sendmail( sender_email, receiver_email, message.as_string() )

print('Sent')
[Screenshot: the resulting output]

Sending Emails with Attachments in Python

The next step in mastering sending emails with Python is attaching files. Attachments are still MIME objects, but we need to encode them with the base64 module. A couple of important points about attachments:

  1. Python lets you attach text files, images, audio files, and even applications. You just need to use the appropriate email class like email.mime.audio.MIMEAudio or email.mime.image.MIMEImage. For the full information, refer to this section of the Python documentation.
  2. Keep the file size in mind: sending files over 20MB is a bad practice (a quick size check is sketched below).
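
As a small illustration of that size check (the 20MB limit and the helper name are just for this sketch), you could verify the size before attaching a file:

import os

MAX_BYTES = 20 * 1024 * 1024  # roughly 20MB, per the advice above

def check_attachment_size(filename):
    # Refuse files that are too large to send comfortably by email
    if os.path.getsize(filename) > MAX_BYTES:
        raise ValueError(f"{filename} is too large to send by email")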

In transactional emails, PDF files are the most frequently used: we usually get receipts, tickets, boarding passes, order confirmations, etc. So let’s review how to send a boarding pass as a PDF file.

import smtplib
from email import encoders
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap

subject = "An example of boarding pass"
sender_email = "mailtrap@example.com"
receiver_email = "new@example.com"

message = MIMEMultipart()
message["From"] = sender_email
message["To"] = receiver_email
message["Subject"] = subject

# Add body to email
body = "This is an example of how you can send a boarding pass in attachment with Python"
message.attach(MIMEText(body, "plain"))

filename = "yourBP.pdf"
# Open PDF file in binary mode
# We assume that the file is in the directory where you run your Python script from
with open(filename, "rb") as attachment:
    # The content type "application/octet-stream" means that a MIME attachment is a binary file
    part = MIMEBase("application", "octet-stream")
    part.set_payload(attachment.read())

# Encode to base64
encoders.encode_base64(part)
# Add header
part.add_header("Content-Disposition", f"attachment; filename= {filename}")
# Add attachment to your message and convert it to string
message.attach(part)

text = message.as_string()
# send your email
with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
  server.login(login, password)
  server.sendmail(sender_email, receiver_email, text)

print('Sent')
[Screenshot: the received email with your PDF]

To attach several files, you can call the message.attach() method several times.
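
For instance, a short sketch of such a loop over a few hypothetical PDF files, reusing the MIMEBase imports and the message object from the example above:

for filename in ["ticket.pdf", "invoice.pdf"]:  # hypothetical filenames
    with open(filename, "rb") as attachment:
        part = MIMEBase("application", "octet-stream")
        part.set_payload(attachment.read())
    encoders.encode_base64(part)
    part.add_header("Content-Disposition", f"attachment; filename= {filename}")
    message.attach(part)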

How to send an email with image attachment

Images, even if they are a part of the message body, are attachments as well. There are three types of them: CID attachments (embedded as a MIME object), base64 images (inline embedding), and linked images.

For adding a CID attachment, we will create a MIME multipart message with a MIMEImage component:

import smtplib
from email.mime.text import MIMEText
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart

port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap

sender_email = "mailtrap@example.com"
receiver_email = "new@example.com"

message = MIMEMultipart("alternative")
message["Subject"] = "CID image test"
message["From"] = sender_email
message["To"] = receiver_email

# write the HTML part
html = """\
<html>
<body>
<img src="cid:myimage">
</body>
</html>
"""
part = MIMEText(html, "html")
message.attach(part)

# We assume that the image file is in the same directory that you run your Python script from
with open('mailtrap.jpg', 'rb') as img:
  image = MIMEImage(img.read())
# Specify the  ID according to the img src in the HTML part
image.add_header('Content-ID', '<myimage>')
message.attach(image)

# send your email
with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
  server.login(login, password)
  server.sendmail(sender_email, receiver_email, message.as_string())

print('Sent')
[Screenshot: the received email with CID image]

The CID image is shown both as a part of the HTML message and as an attachment. Messages with this image type are often considered spam: check the Analytics tab in Mailtrap to see the spam rate and recommendations on its improvement. Many email clients — Gmail in particular — don’t display CID images in most cases. So let’s review how to embed a base64 encoded image instead.

Here we will use the base64 module and experiment with the same image file:

import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
import base64

port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap
sender_email = "mailtrap@example.com"
receiver_email = "new@example.com"

message = MIMEMultipart("alternative")
message["Subject"] = "inline embedding"
message["From"] = sender_email
message["To"] = receiver_email

# We assume that the image file is in the same directory that you run your Python script from
with open("image.jpg", "rb") as image:
  encoded = base64.b64encode(image.read()).decode()

html = f"""\
<html>
<body>
<img src="data:image/jpg;base64,{encoded}">
</body>
</html>
"""
part = MIMEText(html, "html")
message.attach(part)

# send your email
with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
  server.login(login, password)
  server.sendmail(sender_email, receiver_email, message.as_string())

print('Sent')
[Screenshot: a base64 encoded image]

Now the image is embedded into the HTML message and is not available as an attached file. Python has encoded our JPEG image, and if we go to the HTML Source tab, we will see the long image data string in the img src attribute.

How to Send Multiple Emails

Sending multiple emails to different recipients and making them personal is where Python is especially handy.

To add several more recipients, you can just type their addresses in, separated by commas, and add Cc and Bcc. But if you work with bulk email sending, Python will save you with loops.
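
As a minimal sketch before we get to loops (reusing the Mailtrap credentials defined earlier; the addresses are placeholders), sendmail() also accepts a list of envelope recipients, so one identical message can reach several inboxes at once:

sender = "new@example.com"
recipients = ["first@example.com", "second@example.com"]
message = f"""\
Subject: Quick update
To: {", ".join(recipients)}
From: {sender}

Same message, several inboxes."""

with smtplib.SMTP(smtp_server, port) as server:
    server.login(login, password)
    # every address in the recipients list gets a copy
    server.sendmail(sender, recipients, message)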

One of the options is to create a database in a CSV format (we assume it is saved to the same folder as your Python script).

We often see our names in transactional or even promotional emails. Here is how we can do the same with Python.

Let’s organize the list in a simple table with just two columns: name and email address. It should look like the following example:

#name,email
John Johnson,john@johnson.com
Peter Peterson,peter@peterson.com

The code below will open the file and loop over its rows line by line, replacing the {name} with the value from the “name” column.

import csv
import smtplib

port = 2525
smtp_server = "smtp.mailtrap.io"
login = "1a2b3c4d5e6f7g" # paste your login generated by Mailtrap
password = "1a2b3c4d5e6f7g" # paste your password generated by Mailtrap

message = """Subject: Order confirmation
To: {recipient}
From: {sender}

Hi {name}, thanks for your order! We are processing it now and will contact you soon"""
sender = "new@example.com"
with smtplib.SMTP("smtp.mailtrap.io", 2525) as server:
  server.login(login, password)
  with open("contacts.csv") as file:
  reader = csv.reader(file)
  next(reader)  # it skips the header row
  for name, email in reader:
    server.sendmail(
      sender,
      email,
      message.format(name=name, recipient=email, sender=sender),
    )
    print(f'Sent to {name}')

In our Mailtrap inbox, we see two messages: one for John Johnson and another for Peter Peterson, delivered simultaneously:

[Screenshot: both personalized messages in the Mailtrap inbox]


Sending emails with Python via Gmail

When you are ready to send emails to real recipients, you can configure your production server. The choice depends on your needs, goals, and preferences: your localhost or any external SMTP.

One of the most popular options is Gmail so let’s take a closer look at it.

We can often see titles like “How to set up a Gmail account for development”. In fact, it means that you will create a new Gmail account and will use it for a particular purpose.

To be able to send emails via your Gmail account, you need to provide access to it for your application. You can Allow less secure apps or take advantage of the OAuth2 authorization protocol. The latter is way more difficult, but recommended for security reasons.

Further, to use a Gmail server, you need to know:

  • the server name = smtp.gmail.com
  • port = 465 for SSL/TLS connection (preferred)
  • or port = 587 for STARTTLS connection
  • username = your Gmail email address
  • password = your password

import smtplib
import ssl

port = 465
password = input("your password")
context = ssl.create_default_context()

with smtplib.SMTP_SSL("smtp.gmail.com", port, context=context) as server:
  server.login("my@gmail.com", password)
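
The STARTTLS option on port 587 mentioned above works slightly differently: the connection starts in plain text and is then upgraded to TLS. A minimal sketch, under the same assumptions as the SSL example:

import smtplib
import ssl

port = 587
password = input("your password")
context = ssl.create_default_context()

with smtplib.SMTP("smtp.gmail.com", port) as server:
    server.starttls(context=context)
    server.login("my@gmail.com", password)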

If you tend to simplicity, then you can use Yagmail, the dedicated Gmail/SMTP client. It makes email sending really easy. Just compare the above examples with these several lines of code:

import yagmail

yag = yagmail.SMTP()
contents = [
    "This is the body, and here is just text http://somedomain/image.png",
    "You can find an audio file attached.",
    '/local/path/to/song.mp3',
]
yag.send('to@someone.com', 'subject', contents)

Next steps with Python

Those are just basic options of sending emails with Python. To get great results, review the Python documentation and experiment with your own code!

There are a bunch of Python frameworks and libraries which can make creating apps more elegant and focused. In particular, some of them can help improve your experience with building email-sending functionality. The most popular are:

  1. Flask, which offers a simple interface for email sending: Flask Mail.
  2. Django, which can be a great option for building HTML templates.
  3. Zope comes in handy for website development.
  4. Marrow Mailer is a dedicated mail delivery framework adding various helpful configurations.
  5. Plotly and its Dash can help with mailing graphs and reports.

Also, here is a handy list of Python resources sorted by their functionality.

Good luck and don’t forget to stay on the safe side when sending your emails!

This article was originally published at Mailtrap’s blog: Sending emails with Python

15 October, 2019 10:33AM by Julien Danjou

hackergotchi for Rapha&#235;l Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, September 2019

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September, 212.75 work hours have been dispatched among 12 paid contributors. Their reports are available:

  • Adrian Bunk did nothing (and got no hours assigned), but has been carrying 26h from August to October.
  • Ben Hutchings did 20h (out of 20h assigned).
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 30h (out of 23.75h assigned and 5.25h from August), thus anticipating 1h from October.
  • Hugo Lefeuvre did nothing (out of 23.75h assigned), thus is carrying over 23.75h for October.
  • Jonas Meurer did 5h (out of 10h assigned and 9.5h from August), thus carrying over 14.5h to October.
  • Markus Koschany did 23.75h (out of 23.75h assigned).
  • Mike Gabriel did 11h (out of 12h assigned + 0.75h remaining), thus carrying over 1.75h to October.
  • Ola Lundqvist did 2h (out of 8h assigned and 8h from August), thus carrying over 14h to October.
  • Roberto C. Sánchez did 16h (out of 16h assigned).
  • Sylvain Beucler did 23.75h (out of 23.75h assigned).
  • Thorsten Alteholz did 23.75h (out of 23.75h assigned).

Evolution of the situation

September was more like a regular month again, though two contributors were not able to dedicate any time to LTS work.

For October we are welcoming Utkarsh Gupta as a new paid contributor. Welcome to the team, Utkarsh!

This month, we’re glad to announce that Cloudways is joining us as a new silver level sponsor! With the reduced involvement of another long term sponsor, we are still at the same funding level (roughly 216 hours sponsored per month).

The security tracker currently lists 32 packages with a known CVE and the dla-needed.txt file has 37 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


15 October, 2019 07:20AM by Raphaël Hertzog

hackergotchi for Norbert Preining

Norbert Preining

State of Calibre in Debian

To counter some recent FUD spread about Calibre in general and Calibre in Debian in particular, here a concise explanation of the current state.

Many might have read my previous post on Calibre as a moratorium, but that was not my intention. Development of Calibre in Debian is continuing, despite the current stall.

Since it seems to be unclear what the current blockers are, here are the two orthogonal problems regarding recent Calibre in Debian: one is the update to version 4 and the switch to qtwebengine, the other is the purge of Python 2 from Debian.

Current state

Debian sid and testing currently hold Calibre 3.48 based on Python 2. Due to the ongoing purge, necessary modules (in particular python-cherrypy3) have been removed from Debian/sid, making the current Calibre package RC buggy (see this bug report). That means that, within a reasonable time frame, Calibre will be removed from testing.


Now for the two orthogonal problems we are facing:

Calibre 4 packaging

Calibre 4 is already packaged for Debian (see the master-4.0 branch in the git repository). Uploading was first blocked due to a disappearing python-pyqt5.qwebengine, which was extracted from the PyQt5 package into its own. Thanks to the maintainers, we now have a Python 2 version built from the qtwebengine-opensource-src package.

But that still doesn’t cut it for Calibre 4, because it requires Qt 5.12, but Debian still carries 5.11 (released 1.5 years ago).

So the above mentioned branch is ready for upload as soon as Qt 5.12 is included in Debian.

Python 3

The other big problem is the purge of Python 2 from Debian. Upstream Calibre has supported building Python 3 versions for some months now, with ongoing bug fixes. But including this in Debian poses some problems: the first stumbling block was a missing Python 3 version of mechanize, which I have adopted after a 7-year hiatus, updated to the newest version, and provided Python 3 modules for.

Packaging for Debian is done in the experimental branch of the git repository, and is again ready to be uploaded to unstable.

But the much bigger concern here is that practically none of Calibre’s external plugins are ready for Python 3. Paired with the fact that probably most users of Calibre use one or another external plugin (to mention just the Kepub plugin, DeDRM, …), uploading a Python 3 based version of Calibre would break usage for practically all users.


Since I put our (Debian’s) users first, I have decided to keep Calibre based on Python 2 as long as Debian allows. Unfortunately the overzealous purge spree has already introduced RC bugs, which means I am now forced to decide whether I upload a version of Calibre that breaks most users, or don’t upload and see Calibre removed from testing. Not an easy decision.

Thus, my original plan was to keep Calibre based on Python 2 as long as possible, and hope that upstream switches to Python 3 in time before the next Debian release. This would trigger a continuous update of most plugins and would allow users in Debian to have a seamless transition without complete breakage. Unfortunately, this plan no longer seems executable.

Now let us return to the FUD spread:

  • Calibre is actively developed upstream
  • Calibre in Debian is actively maintained
  • Calibre is Python 3 ready, but the plugins are not
  • Calibre 4 is ready for Debian as soon as the dependencies are updated
  • Calibre/Python3 is ready for upload to Debian, but breaks practically all users

Hope that helps everyone to gain some understanding about the current state of Calibre in Debian.

15 October, 2019 05:25AM by Norbert Preining

Sergio Durigan Junior

Installing Gerrit and Keycloak for GDB

Back in September, we had the GNU Tools Cauldron in the gorgeous city of Montréal (perhaps I should write a post specifically about it...). One of the sessions we had was the GDB BoF, where we discussed, among other things, how to improve our patch review system.

I have my own personal opinions about the current review system we use (mailing list-based, in a nutshell), and I haven't felt very confident to express it during the discussion. Anyway, the outcome was that at least 3 global maintainers have used or are currently using the Gerrit Code Review system for other projects, are happy with it, and that we should give it a try. Then, when it was time to decide who wanted to configure and set things up for the community, I volunteered. Hey, I'm already running the Buildbot master for GDB, what is the problem to manage yet another service? Oh, well.

Before we dive into the details involved in configuring and running gerrit on a machine, let me first say that I don't totally support the idea of migrating from mailing list to gerrit. I volunteered to set things up because I felt the community (or at least its most active members) wanted to try it out. I don't necessarily agree with the choice.

Ah, and I'm writing this post mostly because I want to be able to close the 300+ tabs I had to open on my Firefox during these last weeks, when I was searching how to solve the myriad of problems I faced during the set up!

The initial plan

My very initial plan after I left the session room was to talk to the sourceware.org folks and ask them if it would be possible to host our gerrit there. Surprisingly, they already have a gerrit instance up and running. It was set up back in 2016, it's running an old version of gerrit, and is pretty much abandoned. Actually, saying that it has been configured is an overstatement: it doesn't support authentication, user registration, barely supports projects, etc. It's basically what you get from a pristine installation of the gerrit RPM package in RHEL 6.

I won't go into details here, but after some discussion it was clear to me that the instance on sourceware would not be able to meet our needs (or at least what I had in mind for us), and that it would be really hard to bring it to the quality level I wanted. I decided to go look for other options.

The OSCI folks

Have I mentioned the OSCI project before? They are absolutely awesome. I really love working with them, because so far they've been able to meet every request I made! So, kudos to them! They're the folks that host our GDB Buildbot master. Their infrastructure is quite reliable (I never had a single problem), and Marc Dequénes (Duck) is very helpful, friendly and quick when replying to my questions :-).

So, it shouldn't come as a surprise that when I decided to look for another place to host gerrit, they were my first choice. And again, they delivered :-).

Now, it was time to start thinking about the gerrit set up.

User registration?

Over the course of these past 4 weeks, I had the opportunity to learn a bit more about how gerrit does things. One of the first things that negatively impressed me was the fact that gerrit doesn't handle user registration by itself. It is possible to have a very rudimentary user registration "system", but it relies on the site administration manually registering the users (via htpasswd) and managing everything by him/herself.

It was quite obvious to me that we would need some kind of access control (we're talking about a GNU project, with a copyright assignment requirement in place, after all), and the best way to implement it is by having registered users. And so my quest for the best user registration system began...

Gerrit supports some user authentication schemes, such as OpenID (not OpenID Connect!), OAuth2 (via plugin) and LDAP. I remembered hearing about FreeIPA a long time ago, and thought it made sense using it. Unfortunately, the project's community told me that installing FreeIPA on a Debian system is really hard, and since our VM is running Debian, it quickly became obvious that I should look somewhere else. I felt a bit sad at the beginning, because I thought FreeIPA would really be our silver bullet here, but then I noticed that it doesn't really offer a self-service user registration.

After exchanging a few emails with Marc, he told me about Keycloak. It's a full-fledged Identity Management and Access Management software, supports OAuth2, LDAP, and provides a self-service user registration system, which is exactly what we needed! However, upon reading the description of the project, I noticed that it is written in Java (JBOSS, to be more specific), and I was afraid that it was going to be very demanding on our system (after all, gerrit is also a Java program). So I decided to put it on hold and take a look at using LDAP...

Oh, man. Where do I start? Actually, I think it's enough to say that I just tried installing OpenLDAP, but gave up because it was too cumbersome to configure. Have you ever heard that LDAP is really complicated? I'm afraid this is true. I just didn't feel like wasting a lot of time trying to understand how it works, only to have to solve the "user registration" problem later (because of course, OpenLDAP is just an LDAP server).

OK, so what now? Back to Keycloak it is. I decided that instead of thinking that it was too big, I should actually install it and check it for real. Best decision, by the way!

Setting up Keycloak

It's pretty easy to set Keycloak up. The official website provides a .tar.gz file which contains the whole directory tree for the project, along with helper scripts, .jar files, configuration, etc. From there, you just need to follow the documentation, edit the configuration, and voilà.

For our specific setup I chose to use PostgreSQL instead of the built-in database. This is a bit more complicated to configure, because you need to download the JDBC driver, and install it in a strange way (at least for me, who is used to just editing a configuration file). I won't go into details on how to do this here, because it's easy to find on the internet. Bear in mind, though, that the official documentation is really incomplete when covering this topic! This is one of the guides I used, along with this other one (which covers MariaDB, but can be adapted to PostgreSQL as well).

Another interesting thing to notice is that Keycloak expects to be running on its own virtual domain, and not under a subdirectory (e.g., https://example.org instead of https://example.org/keycloak). For that reason, I chose to run our instance on another port. It is supposedly possible to configure Keycloak to run under a subdirectory, but it involves editing a lot of files, and I confess I couldn't make it fully work.

A last thing worth mentioning: the official documentation says that Keycloak needs Java 8 to run, but I've been using OpenJDK 11 without problems so far.

Setting up Gerrit

The fun begins now!

The gerrit project also offers a .war file ready to be deployed. After you download it, you can execute it and initialize a gerrit project (or application, as it's called). Gerrit will create a directory full of interesting stuff; the most important for us is the etc/ subdirectory, which contains all of the configuration files for the application.

After initializing everything, you can try starting gerrit to see if it works. This is where I had my first trouble. Gerrit also requires Java 8, but unlike Keycloak, it doesn't work out of the box with OpenJDK 11. I had to make a small but important addition in the file etc/gerrit.config:

[container]
    ...
    javaOptions = "--add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED"
    ...

After that, I was able to start gerrit. And then I started trying to set it up for OAuth2 authentication using Keycloak. This took a very long time, unfortunately. I was having several problems with Gerrit, and I wasn't sure how to solve them. I tried asking for help on the official mailing list, and was able to make some progress, but in the end I figured out what was missing: I had forgotten to add AllowEncodedSlashes On in the Apache configuration file! This was causing a very strange error in Gerrit (as you can see, a java.lang.StringIndexOutOfBoundsException!), which didn't make sense. In the end, my Apache config file looks like this:

<VirtualHost *:80>
    ServerName gnutoolchain-gerrit.osci.io

    RedirectPermanent / https://gnutoolchain-gerrit.osci.io/r/
</VirtualHost>

<VirtualHost *:443>
    ServerName gnutoolchain-gerrit.osci.io

    RedirectPermanent / /r/

    SSLEngine On
    SSLCertificateFile /path/to/cert.pem
    SSLCertificateKeyFile /path/to/privkey.pem
    SSLCertificateChainFile /path/to/chain.pem

    # Good practices for SSL
    # taken from: <https://mozilla.github.io/server-side-tls/ssl-config-generator/>

    # intermediate configuration, tweak to your needs
    SSLProtocol             all -SSLv3
    SSLCipherSuite          ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
    SSLHonorCipherOrder     on
    SSLCompression          off
    SSLSessionTickets       off

    # OCSP Stapling, only in httpd 2.3.3 and later
    #SSLUseStapling          on
    #SSLStaplingResponderTimeout 5
    #SSLStaplingReturnResponderErrors off
    #SSLStaplingCache        shmcb:/var/run/ocsp(128000)

    # HSTS (mod_headers is required) (15768000 seconds = 6 months)
    Header always set Strict-Transport-Security "max-age=15768000"

    ProxyRequests Off
    ProxyVia Off
    ProxyPreserveHost On
    <Proxy *>
        Require all granted
    </Proxy>

    AllowEncodedSlashes On
        ProxyPass /r/ http://127.0.0.1:8081/ nocanon
        #ProxyPassReverse /r/ http://127.0.0.1:8081/r/
</VirtualHost>

I confess I was almost giving up Keycloak when I finally found the problem...

Anyway, after that things went more smoothly. I was finally able to make the user authentication work, then I made sure Keycloak's user registration feature also worked OK...

Ah, one interesting thing: the user logout wasn't really working as expected. The user was able to logout from gerrit, but not from Keycloak, so when the user clicked on "Sign in", Keycloak would tell gerrit that the user was already logged in, and gerrit would automatically log the user in again! I was able to solve this by redirecting the user to Keycloak's logout page, like this:

[auth]
    ...
    logoutUrl = https://keycloak-url:port/auth/realms/REALM/protocol/openid-connect/logout?redirect_uri=https://gerrit-url/
    ...

After that, it was already possible to start worrying about configuring gerrit itself. I don't know if I'll write a post about that, but let me know if you want me to.

Conclusion

If you ask me if I'm totally comfortable with the way things are set up now, I can't say that I am 100%. I mean, the setup seems robust enough that it won't cause problems in the long run, but what bothers me is the fact that I'm using technologies that are alien to me. I'm used to setting up things written in Python, C, C++, with very simple yet powerful configuration mechanisms, where it's easy to discover what's wrong when something bad happens.

I am reasonably satisfied with the way Keycloak logs things, but Gerrit leaves a lot to be desired in that area. And both projects are written in languages/frameworks that I am absolutely not comfortable with. Like, it's really tough to debug something when you don't even know where the code is or how to modify it!

All in all, I'm happy that this whole adventure has come to an end, and now all that's left is to maintain it. I hope that the GDB community can make good use of this new service, and I hope that we can see a positive impact in the quality of the whole patch review process.

My final take is that this is all worth it as long as Free Software and User Freedom are the ones who benefit.

P.S.: Before I forget, our gerrit instance is running at https://gnutoolchain-gerrit.osci.io.

15 October, 2019 04:00AM by Sergio Durigan Junior

hackergotchi for Chris Lamb

Chris Lamb

Tour d'Orwell: The River Orwell

Continuing my Orwell-themed peregrination, a certain Eric Blair took his pen name "George Orwell" because of his love for a certain river just south of Ipswich, Suffolk. With sheepdog trials being undertaken in the field underneath, even the concrete Orwell Bridge looked pretty majestic from the garden centre — cum — food hall.

15 October, 2019 12:19AM

hackergotchi for Martin Pitt

Martin Pitt

Hardening Cockpit with systemd (socket activation)³

Background A major future goal for Cockpit is support for client-side TLS authentication, primarily with smart cards. I created a Proof of Concept and a demo long ago, but before this can be called production-ready, we first need to harden Cockpit’s web server cockpit-ws to be much more tamper-proof than it is today. This heavily uses systemd’s socket activation. I believe we are now using this in quite a unique and interesting way that helped us to achieve our goal rather elegantly and robustly.

15 October, 2019 12:00AM

October 14, 2019

Arturo Borrero González

What to expect in Debian 11 Bullseye for nftables/iptables

Logo

Debian 11 codename Bullseye is already in the works. It is important to make decisions early in the development cycle to give people time to accommodate and integrate accordingly, and this post brings you the latest update on the plans for Netfilter software in Debian 11 Bullseye. Mind that Bullseye is expected to be released somewhere in 2021, so there is still plenty of time ahead.

The situation with the release of Debian 10 Buster is that iptables used the -nft backend by default, and one had to explicitly select -legacy in the alternatives system in case of any problem. That was intended to help people migrate from iptables to nftables. Now the question is what to do next.

Back in July 2019, I started an email thread in the debian-devel@lists.debian.org mailing lists looking for consensus on lowering the archive priority of the iptables package in Debian 11 Bullseye. My proposal is to drop iptables from Priority: important and promote nftables instead.

In general, having such a priority level means the package is installed by default in every single Debian installation. Given that we aim to deprecate iptables, and that starting with Debian 10 Buster iptables is not even using the x_tables kernel subsystem but nf_tables, having such a priority level seems pointless and inconsistent. There was agreement, and I already made the changes to both packages.

This is another step in deprecating iptables and welcoming nftables. But it does not mean that iptables won’t be available in Debian 11 Bullseye. If you need it, you will need to use aptitude install iptables to download and install it from the package repository.

The second part of my proposal was to promote firewalld as the default ‘wrapper’ for firewalling in Debian. I think this is in line with the direction other distros are moving in. It turns out firewalld integrates pretty well with the system, includes a DBus interface, and many system daemons (like libvirt) already have native integration with firewalld. Also, I believe the days of creating custom-made scripts and hacks to handle the local firewall may be long gone, and firewalld should be very helpful here too.

14 October, 2019 05:00PM

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Bpfcc New Release

BPF Compiler Collection 0.11.0

bpfcc version 0.11.0 has been uploaded to Debian Unstable and should be accessible in the repositories by now. After the 0.8.0 release, this is the next one uploaded to Debian.

Multiple source repositories

This release brought in dependencies on another set of sources from the libbpf project. In the upstream repo, it is still a topic of discussion how to release tools where one depends on another, in unison. Right now, libbpf is configured as a git submodule in the bcc repository, so anyone using the upstream git repository should be able to build it.

Multiple source archives for a Debian package

So I had read in the past about multiple source tarballs for a single package in Debian but never tried it, because I wasn’t maintaining anything in Debian which needed it. With bpfcc, it was now a good opportunity to try it out. First, I came across this post from Raphaël Hertzog, which gives a good explanation of all that has been done. The article was very clear and concise on the topic.

Git Buildpackage

gbp is my tool of choice for packaging in Debian. So I took a quick look to check how gbp would take care of it. And everything was in place and Just Worked.

rrs@priyasi:~/rrs-home/Community/Packaging/bpfcc (master)$ gbp buildpackage --git-component=libbpf
gbp:info: Creating /home/rrs/NoBackup/Community/Packaging/bpfcc_0.11.0.orig.tar.gz
gbp:info: Creating /home/rrs/NoBackup/Community/Packaging/bpfcc_0.11.0.orig-libbpf.tar.gz
gbp:info: Performing the build
dpkg-checkbuilddeps: error: Unmet build dependencies: arping clang-format cmake iperf libclang-dev libedit-dev libelf-dev libzip-dev llvm-dev libluajit-5.1-dev luajit python3-pyroute2
W: Unmet build-dependency in source
dpkg-source: info: using patch list from debian/patches/series
dpkg-source: info: applying fix-install-path.patch
dh clean --buildsystem=cmake --with python3 --no-parallel
   dh_auto_clean -O--buildsystem=cmake -O--no-parallel
   dh_autoreconf_clean -O--buildsystem=cmake -O--no-parallel
   dh_clean -O--buildsystem=cmake -O--no-parallel
dpkg-source: info: using source format '3.0 (quilt)'
dpkg-source: info: building bpfcc using existing ./bpfcc_0.11.0.orig-libbpf.tar.gz
dpkg-source: info: building bpfcc using existing ./bpfcc_0.11.0.orig.tar.gz
dpkg-source: info: using patch list from debian/patches/series
dpkg-source: warning: ignoring deletion of directory src/cc/libbpf
dpkg-source: info: building bpfcc in bpfcc_0.11.0-1.debian.tar.xz
dpkg-source: info: building bpfcc in bpfcc_0.11.0-1.dsc
I: Generating source changes file for original dsc
dpkg-genchanges: info: including full source code in upload
dpkg-source: info: unapplying fix-install-path.patch
ERROR: ld.so: object 'libeatmydata.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
W: cgroups are not available on the host, not using them.
I: pbuilder: network access will be disabled during build
I: Current time: Sun Oct 13 19:53:57 IST 2019
I: pbuilder-time-stamp: 1570976637
I: Building the build Environment
I: extracting base tarball [/var/cache/pbuilder/sid-amd64-base.tgz]
I: copying local configuration
I: mounting /proc filesystem
I: mounting /sys filesystem
I: creating /{dev,run}/shm
I: mounting /dev/pts filesystem
I: redirecting /dev/ptmx to /dev/pts/ptmx
I: Mounting /var/cache/apt/archives/
I: policy-rc.d already exists
W: Could not create compatibility symlink because /tmp/buildd exists and it is not a directory
I: using eatmydata during job
I: Using pkgname logfile
I: Current time: Sun Oct 13 19:54:04 IST 2019
I: pbuilder-time-stamp: 1570976644
I: Setting up ccache
I: Copying source file
I: copying [../bpfcc_0.11.0-1.dsc]
I: copying [../bpfcc_0.11.0.orig-libbpf.tar.gz]
I: copying [../bpfcc_0.11.0.orig.tar.gz]
I: copying [../bpfcc_0.11.0-1.debian.tar.xz]
I: Extracting source
dpkg-source: warning: extracting unsigned source package (bpfcc_0.11.0-1.dsc)
dpkg-source: info: extracting bpfcc in bpfcc-0.11.0
dpkg-source: info: unpacking bpfcc_0.11.0.orig.tar.gz
dpkg-source: info: unpacking bpfcc_0.11.0.orig-libbpf.tar.gz
dpkg-source: info: unpacking bpfcc_0.11.0-1.debian.tar.xz
dpkg-source: info: using patch list from debian/patches/series
dpkg-source: info: applying fix-install-path.patch
I: Not using root during the build.

14 October, 2019 09:24AM by Ritesh Raj Sarraf (rrs@researchut.com)

Utkarsh Gupta

Joining Debian LTS!

Hey,

(DPL Style):
TL;DR: I joined Debian LTS as a trainee in July (during DebConf) and finally as a paid contributor from this month onward! :D


Here’s something interesting that happened last weekend!
Back during the good days of DebConf19, I finally got a chance to meet Holger! As amazing and inspiring a person as he is, it was an absolute pleasure meeting him, and I also got a chance to talk about Debian LTS in more detail.

I was introduced to Debian LTS by Abhijith during his talk in MiniDebConf Delhi. And since then, I’ve been kinda interested in that project!
But finally it was here that things got a little “official” and after a couple of mail exchanges with Holger and Raphael, I joined in as a trainee!

I had almost no idea what to do next, so the next month I stayed silent, observing the workflow as people kept committing and announcing updates.
And finally in September, I started triaging and fixing the CVEs for Jessie and Stretch (mostly the former).

Thanks to Abhijith who explained the basics of what a DLA is and how we go about fixing bugs and then announcing them.

With that, I could fix a couple of CVEs and thanks to Holger (again) for reviewing and sponsoring the uploads! :D

I mostly worked (as a trainee) on:

  • CVE-2019-10751, affecting httpie, and
  • CVE-2019-16680, affecting file-roller.

And finally this happened:
Commit that mattered!
(Though there’s a little hiccup that happened there, but that’s something we can ignore!)

So finally, I’ll be working with the team from this month on!
As Holger says, very much yay! :D

Until next time.
:wq for today.

14 October, 2019 12:20AM

October 13, 2019

hackergotchi for Debian XMPP Team

Debian XMPP Team

New Dino in Debian

Dino (dino-im in Debian), the modern and beautiful chat client for the desktop, has some nice, new features. Users of Debian testing (bullseye) might like to try them:

  • XEP-0391: Jingle Encrypted Transports (explained here)
  • XEP-0402: Bookmarks 2 (explained here)

Note that users of Dino on Debian 10 (buster) should upgrade to version 0.0.git20181129-1+deb10u1 because of a number of security issues that have been found (CVE-2019-16235, CVE-2019-16236, CVE-2019-16237).
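
If you want to double-check which version you are running and pull in the fix, something like the following should do; a minimal sketch, assuming buster's security archive is already enabled in your apt sources:

# Show the installed and candidate versions of dino-im
apt policy dino-im

# Pull in the fixed package
sudo apt update
sudo apt install dino-im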

There have been other XMPP-related updates in Debian since the release of buster, among them:

You might be interested in October's XMPP newsletter, also available in German.

13 October, 2019 10:00PM by Martin

Iustin Pop

Actually fixing a bug

One of the outcomes of my recent (last few years) sports ramp-up is that my opensource work is almost entirely left aside. Having an office job makes it very hard to spend more time sitting at the computer at home too…

So even my travis dashboard has been red for a while now, but I didn’t look into it until today. Since I didn’t change anything recently and just the travis builds started to fail, I was sure it’s just environment changes that need to be taken into account.

And indeed it was so, for two out of three projects. The third one… I actually got to fix a bug, introduced at the beginning of the year, but which gcc (the same gcc that originally passed it) started to trip on a while back. I even had to read the man page of snprintf! Was fun ☺, too bad I don’t have enough time to do this more often…

My travis dashboard is green again, and “test suite” (if you can call it that) is expanded to explicitly catch this specific problem in the future.

13 October, 2019 06:43PM

October 12, 2019

hackergotchi for Shirish Agarwal

Shirish Agarwal

Social media, knowledge and some history of Banking

First of all, Happy Dusshera to everybody. While Dusshera in India is a symbol of many things, it is a symbol of forgiveness and new beginnings. While I don't know about new beginnings, I do feel there is still a lot of baggage which needs to be left behind. I will try to share some insights I uncovered over the last few months and a few realizations I came across.

First of all, thank you to the Debian-gnome-team for continuing to work on new versions of packages. While there are still a bunch of bugs which need to be fixed, especially #895990 and #913978 among others, kudos for working on it. Hopefully, those bugs and others will be fixed soon so we can install gnome without a hiccup. I have not been on IRC because my riot-web has been broken for several days now. Also, most of the IRC and telegram channels, at least those related to Debian, become mostly echo chambers one way or the other as you do not get any serious opposition. On twitter, while it's highly toxic, you also get the urge to fight the good fight when people fight, whether out of principle or for some other reason (usually paid trolls). While I follow my own rules on twitter apart from their TOS, I feel at least new people who are going on social media in India, or perhaps elsewhere as well, could use the following –

  1. It is difficult to remain neutral and stick to the facts. If you just stick to the facts, you will be branded as an 'urban naxal' or some such name.
  2. Many times I find that if you are calm and don't react, they are curious and display an ignorance of things which you thought everyone knew. Whether that is due to lack of education, lack of knowledge or pretension, if it is pretension, you are caught sooner or later.
  3. Be civil at all times; if somebody harasses you or calls you names, report them and block them, although twitter still needs to fix its reporting process a whole lot more. When even somebody like me (with a bit of understanding of law, technology, language etc.) had a hard time figuring out twitter's reporting ways, I don't know how many people would be able to use it successfully. Maybe they make it so unhelpful so the traffic flows no matter what. I do realize they still haven't figured out their business model, but that's a question for another day. In short, they need to make it far simpler than it is today.
  4. You always have an option to block people but it has its own consequences.
  5. Be passive-aggressive if the situation demands it.
  6. Most importantly though, if somebody starts making jokes about you or starts abusing you, it is a sure sign that the person on the other side doesn't have any more arguments and you have won.

Banking

Before I start, let me share why I am putting up a blog post on the topic. The reason is pretty simple. It seems a huge number of Indians don't know the history of how banking started, the various turns it took and so on and so forth. In fact, nowadays history is being so hotly contested that it is perhaps even being re-written. Hence for some things I will be sharing some sources, but even within them there is a possibility of contestation. One of the long-standing contestations is when ancient coinage and the techniques of smelting and flattening came to India. Depending on whom you ask, you get different answers. A lot of people are waiting to get more insight from the Keezhadi excavation, which may also give some insight into the topic. There are rumors that the funding is being stopped, but I hope that isn't true and that we gain some more insight into Indian history. In fact, in South India there seems to be a lot of curiosity about and attraction towards the site. It is possible that the next time I get a chance to see South India, I may try to see this unique location if a museum gets built somewhere nearby. Sorry for deviating from the topic, but it seems that ancient coinage started anywhere between the 1st millennium BCE and the 6th century BCE, so it could be anywhere between 1500 – 2000 years old in India. While we can't say anything for sure, it's possible that there was barter before that. There has also been some history of sharing tokens in different parts of the world as well. The various timelines get all jumbled up, hence I would suggest people use the Wikipedia page on the History of Money as a starting point. While it may not give a complete picture, it would probably broaden the understanding a little bit. One of the reasons why history is so hotly contested could also perhaps lie in the destruction of the Ancient Library of Alexandria. Who knows what more we would have known of our ancients if it had not been destroyed 😦

Hundi (16th Century)

I am jumping to the 16th century as it is closer to today's modern banking; otherwise the blog post would be too long. The Hundi was a financial instrument which was used from the 16th century onwards. It could act as either a forerunner of the cheque or of the traveller's cheque. There doesn't seem to be much in the way of information on whether it was introduced by the Britishers, brought earlier by the Portuguese when they came to India, or was in prevalence before that. There is, though, a fascinating in-depth study of the Hundi between 1858-1978 done by Marina Bernadette for the London School of Economics as her dissertation paper.

Banias and Sarafs

As I had shared before, history in India is intertwined with mythology. While it is possible that a lot of the history behind this is documented somewhere, I haven't been able to find it. As I come from a Bania family, I had learnt a lot of stories about both the migratory strain that Banias had, as well as how Banias used to split their children across adjoining states. Before the Britishers ruled over India, popular history tells us that it was the Mughal empire that ruled over us. What it doesn't tell us, though, is that both during the Mughal empire and under the Britishers, Banias and Sarafs, who were moneylenders and bullion traders respectively, hedged their bets. More so if they were in royal service or bound to be close to the administration of the state or mini-kingdoms. What they used to do is make sure that one of the sons would serve the king here while the other son might serve the Muslim ruler. The idea behind it was that irrespective of whoever won, the Banias or Sarafs would be able to continue their traditions, and it was very much possible that the other brother would not be killed; even if he was, any or all wealth would pass to the victorious brother and the family name would live on. If I were to look into it, I'm sure I would find the same not only among Banias and Sarafs but perhaps in other castes and communities as well. Modern history also tells of the Rothschilds, who were and continue to be an influence on the world today.

As to why I shared how people acted in their self-interest: because nowadays on Indian social media many people choose to believe a very simplistic black-and-white narrative, and they are being fed that by today's dominant political party in power. What I'm simply trying to say is that history is much more complex than that. While you may choose to believe either of the narratives, it might open a window in at least some Indians' minds that there is a possibility that things were done, and people acted, in ways other than what is being perceived today. It is also possible this may be contested today, as a lot of people would like to appear on the 'right side' of history as it seems today.

Banking in the British Raj till nationalization

When the Britishers came, they brought the modern banking system with them. This led to the creation of various banks like the Bank of Bengal, Bank of Bombay and Bank of Madras, which were later subsumed under the Imperial Bank of India, which in turn became the State Bank of India in 1955. While I will not go into details, I am curious, and if life permits I would surely want to visit either the Banca Monte dei Paschi di Siena S.p.A of Italy or the Berenberg Bank, both of which probably have a lot more history than what is written on their Wikipedia pages. Soon there was a whole clutch of banks which continued to facilitate the British till independence, and to leak money overseas even afterwards, till the banks were nationalized in 1956 following the recommendations of the 'Gorwala Committee'. Apart from the opaqueness of private banking and the leakages, there was no provision of loans to the priority sector, i.e. farming in India; A.D. Gorwala recommended nationalization to weed out both issues with a single solution. One could debate the efficacy of the same, but history has shown us that privatization in the financial sector has many times been costly to depositors. The financial crisis of 2008 and the aftermath in many of the financial markets, more so for private banks, is a testament to it. Even the documentary Plenary's Men gives a whole lot of insight into the corruption that private banks engage in today.

The Plenary's Men, on Youtube, is at least to my mind evidence enough that India should be cautious in its dealings with private banks.

Co-operative banks and their Rise

The rise of co-operative banks in India was largely due to the rise of co-operative societies, the Co-operative Societies Act having been passed in 1904. While there were quite a few co-operative societies and banks, arguably the real fillip to co-operative banking was given by Amul when it started in 1946, along with the milk credit society that started with it. I don't know how many people saw 'Manthan', which chronicled the story and brought the story of both co-operative societies and co-operative banks to millions of Indians. It is a classic movie which a lot of today's youth probably don't know and, even if they did, would take time to identify with, although people of my generation and the earlier generations do get it. One of the things that many people don't get is that for a lot of people even today, especially marginal farmers and the like in rural areas, co-operative banks are still the only solution. While in recent times the Govt. of the day has tried something called the Jan Dhan Yojana, it hasn't been as much of a success as they were hoping for. While reams of paper have been written about it, like most policies it didn't deliver to the last person, which is what such inclusion programs try to do. Issues from design to implementation are many, but perhaps some other time. I am sharing about co-operative banks as a recent scam took place in one of them, probably one of the most respected and widely held co-operative banks. I would rather share Sucheta Dalal's excellent analysis of the PMC bank crisis, which is unfolding and will perhaps continue to unfold in days to come.

Conclusion

At the end I have to admit I took a lot of shortcuts to reach here. There is a possibility that there may be details people might want me to incorporate; if so, please let me know and I will try to add them. I did try to compress as much as possible while trying to be as exhaustive as possible. I also haven't used any data, as I wanted to keep the explanations as simple as possible, and I tried to have as little politics in it as possible, even though the biases which are there, are there.

12 October, 2019 10:58PM by shirishag75

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

GitHub Streak: Round Six

Five years ago I referenced the Seinfeld Streak used in an earlier post of regular updates to the Rcpp Gallery:

This is sometimes called Jerry Seinfeld’s secret to productivity: Just keep at it. Don’t break the streak.

and then showed the first chart of GitHub streaking

github activity october 2013 to october 2014

And four years ago a first follow-up appeared in this post:

github activity october 2014 to october 2015

And three years ago we had a followup

github activity october 2015 to october 2016

And two years ago we had another one

github activity october 2016 to october 2017

And last year another one

github activity october 2017 to october 2018

As today is October 12, here is the newest one from 2018 to 2019:

github activity october 2018 to october 2019

Again, special thanks go to Alessandro Pezzè for the Chrome add-on GithubOriginalStreak.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 October, 2019 03:53PM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Alpine MusicSafe Classic Hearing Protection Review

Yesterday, I went to a punk rock show and had tons of fun. One of the bands playing (Jeunesse Apatride) hadn't played in 5 years and the crowd was wild. The other bands playing were also great. Here's a few links if you enjoy Oi! and Ska:

Sadly, those kind of concerts are always waaaaayyyyy too loud. I mostly go to small venue concerts and for some reason the sound technicians think it's a good idea to make everyone's ears bleed. You really don't need to amplify the drums when the whole concert venue is 50m²...

So I bought hearing protection. It was the first time I wore earplugs at a concert and it was great! I can't really compare the model I got (Alpine MusicSafe Classic earplugs) to other brands since it's the only one I tried out, but:

  • They were very comfortable. I wore them for about 5 hours and didn't feel any discomfort.

  • They came with two sets of plastic tips you insert in the silicone earbuds. I tried the -17db ones but I decided to go with the -18db inserts as it was still freaking loud.

  • They fitted very well in my ears even though I was in the roughest mosh pit I've ever experienced (and I've seen quite a few). I was sweating profusely from all the heavy moshing and never once did I fear losing them.

  • My ears weren't ringing when I came back home so I guess they work.

  • The earplugs didn't distort sound, only reduce the volume.

  • They came with a handy aluminium carrying case that's really durable. You can put it on your keychain and carry them around safely.

  • They only cost me ~25 CAD with taxes.

The only thing I disliked was that I found it pretty much impossible to sing while wearing them, as I couldn't really hear myself. With a bit of practice, I was able to sing true but it wasn't great :(

All in all, I'm really happy with my purchase and I don't think I'll ever go to another concert without earplugs.

12 October, 2019 04:00AM by Louis-Philippe Véronneau

October 11, 2019

Molly de Blanc

Conferences

I think there are too many conferences.

I conducted this very scientific Twitter poll and out of 52 respondents, only 23% agreed with me. Some people who disagreed with me pointed out specifically what they think is lacking: more regional events, more in specific countries, and more “generic” FLOSS events.

Many projects have a conference, and then there are “generic” conferences, like FOSDEM, LibrePlanet, LinuxConfAU, and FOSSAsia. Some are more corporate (OSCON), while others are more community focused (e.g. SeaGL).

There are just a lot of conferences.

I average a conference a month, with most of them being more general sorts of events, and a few being project specific, like DebConf and GUADEC.

So far in 2019, I went to: FOSDEM, CopyLeft Conf, LibrePlanet, FOSS North, Linux Fest Northwest, OSCON, FrOSCon, GUADEC, and GitLab Commit. I’m going to All Things Open next week. In November I have COSCon scheduled. I’m skipping SeaGL this year. I am not planning on attending 36C3 unless my talk is accepted. I canceled my trip to DebConf19. I did not go to Camp this year. I also had a board meeting in NY, an upcoming one in Berlin, and a Debian meeting in the other Cambridge. I’m skipping LAS and likely going to SFSCon for GNOME.

So 9 so far this year,  and somewhere between 1-4 more, depending on some details.

There are also conferences that don’t happen every year, like HOPE and CubaConf. There are some that I haven’t been to yet, like PyCon, and more regional events like Ohio Linux Fest, SCALE, and FOSSCon in Philadelphia.

I think I travel too much, and plenty of people travel more than I do. This is one of the reasons why we have too many events: the same people are traveling so much.

When you’re nose deep in it, when you think that what you’re doing is important, you keep going to them as long as you’re invited. I really believe in the messages I share during my talks, and I know by speaking I am reaching audiences I wouldn’t otherwise. As long as I keep getting invited places, I’ll probably keep going.

Finding sponsors is hard(er).

It is becoming increasingly difficult to find sponsors for conferences. This is my experience, and what I’ve heard from speaking with others about it. Lower response rates to requests and people choosing lower sponsorship levels than they have in past years.

CFP responses are not increasing.

I sort of think the Tweet says it all. Some conferences aren’t having this experience. Ones I’ve been involved with, or spoken to the organizers of, are needing to extend their deadlines and generally having lower response rates.

Do I think we need fewer conferences?

Yes and no. I think smaller, regional conferences are really important to reaching communities and individuals who don’t have the resources to travel. I think it gives new speakers opportunities to share what they have to say, which is important for the growth and robustness of FOSS.

Project specific conferences are useful for those projects. It gives us a time to have meetings and sprints, to work and plan, and to learn specifically about our project and feel more connected to our collaborators.

On the other hand, I do think we have more conferences than even a global movement can actively support in terms of speakers, organizer energy, and sponsorship dollars.

What do I think we can do?

Not all of these are great ideas, and not all of them would work for every event. However, I think some combination of them might make a difference for the ecosystem of conferences.

More single-track or two-track conferences. All Things Open has 27 sessions occurring concurrently. Twenty-seven! It’s a huge event that caters to many people, but seriously, that’s too much going on at once. More 3-4 track conferences should consider dropping to 1-2 tracks, and conferences with more should consider dropping their numbers down as well. This means fewer speakers at a time.

Stop trying to grow your conference. Growth feels like a sign of success, but it’s not. It’s a sign of getting more people to show up. It helps you make arguments to sponsors, because more attendees means more people being reached when a company sponsors.

Decrease sponsorship levels. I’ve seen conferences increasing their sponsorship levels. I think we should all agree to decrease those numbers. While we’ll all have to try harder to get more sponsors, companies will be able to sponsor more events.

Stop serving meals. I appreciate a free meal. It makes it easier to attend events, but meals are expensive and difficult to logisticate. I know meals make it easier for some people, especially students, to attend. Consider offering special RSVP lunches for students, recent grads, and people looking for work.

Ditch the fancy parties. Okay, I also love a good conference party. They’re loads of fun and can be quite expensive. They also encourage drinking, which I think is bad for the culture.

Ditch the speaker dinners. Okay, I also love a good speaker dinner. It’s fun to relax, see my friends, and have a nice meal that isn’t too loud or overwhelming. These are really expensive. I’ve been trying to donate to local food banks/food insecurity charities an equal amount of the cost of dinner per person, but people are rarely willing to share that information! Paying for a nice dinner out of pocket — with multiple bottles of wine — usually runs $50-80 with tip. I know one dinner I went to was $150 a person. I think the community would be better served if we spent that money on travel grants. If you want to be nice to speakers, I enjoy a box of chocolates I can take home and share with my roommates.

 Give preference to local speakers. One of the things conferences do is bring in speakers from around the world to share their ideas with your community, or with an equally global community. This is cool. By giving preference to local speakers, you’re building expertise in your geography.

Consider combining your community conferences. Rather than having many conferences for smaller communities, consider co-locating conferences and sharing resources (and attendees). This requires even more coordination to organize, but could work out well.

Volunteer for your favorite non-profit or project. A lot of us have booths at conferences, and send people around the world in order to let you know about the work we’re doing. Consider volunteering to staff a booth, so that your favorite non-profits and projects have to send fewer people.

While most of those things are not “have fewer conferences,” I think they would help solve the problems conference saturation is causing: it’s expensive for sponsors, it’s expensive for speakers, it creates a large carbon footprint, and it increases burnout among organizers and speakers.

I must enjoy traveling because I do it so much. I enjoy talking about FOSS, user rights, and digital rights. I like meeting people and sharing with them and learning from them. I think what I have to say is important. At the same time, I think I am part of an unhealthy culture in FOSS, that encourages burnout, excessive travel, and unnecessary spending of money, that could be used for better things.

One last thing you can do, to help me, is submit talks to your local conference(s). This will help with some of these problems as well, can be a great experience, and is good for your conference and your community!

11 October, 2019 11:23PM by mollydb

October 10, 2019

hackergotchi for Markus Koschany

Markus Koschany

My Free Software Activities in September 2019

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • Reiner Herrmann investigated a build failure of supertuxkart on several architectures and prepared an update to link against libatomic. I reviewed and sponsored the new revision which allowed supertuxkart 1.0 to migrate to testing.
  • Python 3 ports: Reiner also ported bouncy, a game for small kids, to Python3 which I reviewed and uploaded to unstable.
  • I upgraded atomix to version 3.34.0 as requested, although it is unlikely that you will find a major difference from the previous version.

Debian Java

Misc

  • I packaged new upstream releases of ublock-origin and privacybadger, two popular Firefox/Chromium addons and
  • packaged a new upstream release of wabt, the WebAssembly Binary Toolkit.

Debian LTS

This was my 43rd month as a paid contributor and I have been paid to work 23.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 11.09.2019 until 15.09.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVE in libonig, bird, curl, openssl, wpa, httpie, asterisk, wireshark and libsixel.
  • DLA-1922-1. Issued a security update for wpa fixing 1 CVE.
  • DLA-1932-1. Issued a security update for openssl fixing 2 CVE.
  • DLA-1900-2. Issued a regression update for apache fixing 1 CVE.
  • DLA-1943-1. Issued a security update for jackson-databind fixing 4 CVE.
  • DLA-1954-1. Issued a security update for lucene-solr fixing 1 CVE. I triaged CVE-2019-12401 and marked Jessie as not-affected because we use the system libraries of woodstox in Debian.
  • DLA-1955-1. Issued a security update for tcpdump fixing 24 CVE by backporting the latest upstream release to Jessie. I discovered several test failures but after more investigation I came to the conclusion that the test cases were simply created with a newer version of libpcap which causes the test failures with Jessie’s older version.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 “Wheezy”. This was my sixteenth month and I have been assigned to work 15 hours on ELTS plus five hours from August. I used 15 of them for the following:

  • I was in charge of our ELTS frontdesk from 30.09.2019 until 06.10.2019 and I triaged CVE in tcpdump. There were no reports of other security vulnerabilities for supported packages in this week.
  • ELA-163-1. Issued a security update for curl fixing 1 CVE.
  • ELA-171-1. Issued a security update for openssl fixing 2 CVE.
  • ELA-172-1. Issued a security update for linux fixing 23 CVE.
  • ELA-174-1. Issued a security update for tcpdump fixing 24 CVE.

10 October, 2019 08:49PM by apo

hackergotchi for Norbert Preining

Norbert Preining

R with TensorFlow 2.0 on Debian/sid

I recently posted on getting TensorFlow 2.0 with GPU support running on Debian/sid. At that time I didn’t manage to get the tensorflow package for R running properly. It didn’t need much to get it running, though.

The biggest problem I faced was that the R/TensorFlow package recommends using install_tensorflow, which can use either auto, conda, virtualenv, or system (at least according to the linked web page). I didn’t want to set up either a conda or a virtualenv environment, since TensorFlow was already installed, so I thought system would be correct, but then, I already had it installed. Anyway, the system option is gone and no longer accepted, but I still got errors. In particular because the code mentioned on the installation page is incorrect for TF2.0!

It turned out to be a simple error on my side – the default is to use the program python which in Debian is still Python2, while I have TF only installed for Python3. The magic incantation to fix that is use_python("/usr/bin/python3") and one is set.
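A quick way to double-check which interpreter R will actually pick up is to ask the reticulate package (which the R TensorFlow package builds on) directly; a minimal sketch, assuming reticulate is installed:

# Print the Python configuration reticulate discovers by default
Rscript -e 'reticulate::py_config()'

# Alternatively, point reticulate at Python3 for the whole session
export RETICULATE_PYTHON=/usr/bin/python3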

So here is a full list of commands to get R/TensorFlow running on top of an already installed TensorFlow for Python3 (as usual either as root to be installed into /usr/local or as user to have a local installation):

devtools::install_github("rstudio/tensorflow")

And if you want to run some TF program:

library(tensorflow)
use_python("/usr/bin/python3")
tf$math$cumprod(1:5)

This gives lots of output, including a mention that it is running on my GPU.

At least for the (probably very short) time being this looks like a workable system. Now off to convert my TF1.N code to TF2.0.

10 October, 2019 06:15AM by Norbert Preining

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Trying out Sourcehut

Last month, I decided it was finally time to move a project I maintain from Github1 to another git hosting platform.

While polling other contributors (I proposed moving to gitlab.com), someone suggested moving to Sourcehut, a newish git hosting platform written and maintained by Drew DeVault. I've been following Drew's work for a while now and although I had read a few blog posts on Sourcehut's development, I had never really considered giving it a try. So I did!

Sourcehut is still in alpha and I'm expecting a lot of things to change in the future, but here's my quick review.

Things I like

Sustainable FOSS

Sourcehut is 100% Free Software. Github is proprietary and I dislike Gitlab's Open Core business model.

Sourcehut's business model also seems sustainable to me, as it relies on people paying a monthly fee for the service. You'll need to pay if you want your code hosted on https://sr.ht once Sourcehut moves into beta. As I've written previously, I like that a lot.

In comparison, Gitlab is mainly funded by venture capital and I'm afraid of the long term repercussions this choice will have.

Continuous Integration

Continuous Integration is very important to me and I'm happy to say Sourcehut's CI is pretty good! Like Travis and Gitlab CI, you declare what needs to happen in a YAML file. The CI uses real virtual machines backed by QEMU, so you can run many different distros and CPU archs!

Even nicer, you can actually SSH into a failed CI job to debug things. In comparison, Gitlab CI's Interactive Web Terminal is ... web based and thus not as nice. Worse, it seems it's still somewhat buggy as Gitlab still hasn't enabled it on their gitlab.com instance.

Here's what the instructions to SSH into the CI look like when a job fails:

This build job failed. You may log into the failed build environment within 10
minutes to examine the results with the following command:

ssh -t builds@foo.bar connect NUMBER

Sourcehut's CI is not as feature-rich or as flexible as Gitlab CI, but I feel it is more powerful than Gitlab CI's default docker executor. Folks that run integration tests or more complicated setups where Docker fails should definitely give it a try.

From the few tests I did, Sourcehut's CI is also pretty quick (it's definitely faster than Travis or Gitlab CI).

No JS

Although Sourcehut's web interface does bundle some Javascript, all features work without it. Three cheers for that!

Things I dislike

Features division

I'm not sure I like the way features (the issue tracker, the CI builds, the git repository, the wikis, etc.) are subdivided in different subdomains.

For example, when you create a git repository on git.sr.ht, you only get a git repository. If you want an issue tracker for that git repository, you have to create one at todo.sr.ht with the same name. That issue tracker isn't visible from the git repository web interface.

That's the same for all the features. For example, you don't see the build status of a merged commit when you look at it. This design choice makes you feel like the different features aren't integrated with one another.

In comparison, Gitlab and Github use a more "centralised" approach: everything is centered around a central interface (your git repository) and it feels more natural to me.

Discoverability

I haven't seen a way to search sr.ht for things hosted there. That makes it hard to find repositories, issues or even the Sourcehut source code!

Merge Request workflow

I'm a sucker for the Merge Request workflow. I really like to have a big green button I can click on to merge things. I know some people prefer a more manual workflow that uses git merge and stuff, but I find that tiresome.

Sourcehut chose a workflow based on sending patches by email. It's neat since you can submit code without having an account. Sourcehut also provides mailing lists for projects, so people can send patches to a central place.
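For the record, submitting a change that way is less exotic than it sounds; here's a minimal sketch using git's built-in send-email command (the list address below is just a made-up example):

# One-time setup: tell git where patches for this project should go
git config sendemail.to "~someuser/someproject@lists.sr.ht"

# Send the last commit as a patch to the mailing list
git send-email -1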

I find that workflow harder to work with, since to me it makes it more difficult to see what patches have been submitted. It also makes the review process more tedious, since the CI isn't run automatically on email patches.

Summary

All in all, I don't think I'll be moving ISBG to Sourcehut (yet?). At the moment it doesn't quite feel as ready as I'd want it to be, and that's OK. Most of the things I disliked about the service can be fixed by some UI work and I'm sure people are already working on it.

Github was bought by MS for 7.5 billion USD and Gitlab is currently valued at 2.7 billion USD. It's not really fair to ask Sourcehut to fully compete just yet :)

With Sourcehut, Drew DeVault is fighting the good fight and I wish him the most resounding success. Who knows, maybe I'll really migrate to it in a few years!


  1. Github is a proprietary service, has been bought by Microsoft and gosh darn do I hate Travis CI. 

10 October, 2019 04:00AM by Louis-Philippe Véronneau

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.9.800.1.0

armadillo image

Another month, another Armadillo upstream release! Hence a new RcppArmadillo release arrived on CRAN earlier today, and was just shipped to Debian as well. It brings a faster solve() method and other goodies. We also switched to the (awesome) tinytest unit test framework, and Min Kim made the configure.ac script more portable for the benefit of NetBSD and other non-bash users; see below for more details. Once again we ran two full sets of reverse-depends checks, no issues were found, and the package was auto-admitted similarly at CRAN after less than two hours despite there being 665 reverse depends. Impressive stuff, so a big Thank You! as always to the CRAN team.

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 665 other packages on CRAN.

Changes in RcppArmadillo version 0.9.800.1.0 (2019-10-09)

  • Upgraded to Armadillo release 9.800 (Horizon Scraper)

    • faster solve() in default operation; iterative refinement is no longer applied by default; use solve_opts::refine to explicitly enable refinement

    • faster expmat()

    • faster handling of triangular matrices by rcond()

    • added .front() and .back()

    • added .is_trimatu() and .is_trimatl()

    • added .is_diagmat()

  • The package now uses tinytest for unit tests (Dirk in #269).

  • The configure.ac script is now more careful about shell portability (Min Kim in #270).

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

10 October, 2019 12:59AM

October 09, 2019

hackergotchi for Adnan Hodzic

Adnan Hodzic

Hello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

09 October, 2019 03:55PM by admin

Enrico Zini

Fixed XSS issue on debtags.debian.org

Thanks to Moritz Naumann who found the issues and wrote a very useful report, I fixed a number of Cross Site Scripting vulnerabilities on https://debtags.debian.org.

The core of the issue was code like this in a Django view:

def pkginfo_view(request, name):
    pkg = bmodels.Package.by_name(name)
    if pkg is None:
        return http.HttpResponseNotFound("Package %s was not found" % name)
    # …

The default content-type of HttpResponseNotFound is text/html, and the string passed is the raw HTML with clearly no escaping, so this allows injection of arbitrary HTML/<script> code in the name variable.

I was so used to Django doing proper auto-escaping that I missed this place in which it can't do that.

There are various things that can be improved in that code.

One could introduce escaping (and while one's at it, migrate the old % to format):

from django.utils.html import escape

def pkginfo_view(request, name):
    pkg = bmodels.Package.by_name(name)
    if pkg is None:
        return http.HttpResponseNotFound("Package {} was not found".format(escape(name)))
    # …

Alternatively, set content_type to text/plain:

def pkginfo_view(request, name):
    pkg = bmodels.Package.by_name(name)
    if pkg is None:
        return http.HttpResponseNotFound("Package {} was not found".format(name), content_type="text/plain")
    # …

Even better, raise Http404:

from django.utils.html import escape

def pkginfo_view(request, name):
    pkg = bmodels.Package.by_name(name)
    if pkg is None:
        raise Http404(f"Package {name} was not found")
    # …

Even better, use standard shortcuts and model functions if possible:

from django.shortcuts import get_object_or_404

def pkginfo_view(request, name):
    pkg = get_object_or_404(bmodels.Package, name=name)
    # …

And finally, though not security related, it's about time to switch to class-based views:

class PkgInfo(TemplateView):
    template_name = "reports/package.html"

    def get_context_data(self, **kw):
        ctx = super().get_context_data(**kw)
        ctx["pkg"] = get_object_or_404(bmodels.Package, name=self.kwargs["name"])
        # …
        return ctx

I proceeded with a review of the other Django sites I maintain, in case I had reproduced this mistake there as well.

09 October, 2019 08:51AM

hackergotchi for Chris Lamb

Chris Lamb

Tour d'Orwell: Southwold

I recently read that during 1929 George Orwell returned to his family home in the Suffolk town of Southwold, but when I further learned that he had acquired a motorbike during this time to explore the surrounding villages, I could not resist visiting on such a transport mode myself.

Orwell would end up writing his first novel here ("Burmese Days") followed by his first passable one ("A Clergyman's Daughter"), but unfortunately the local bookshop only had the former in stock. He moved back to London in 1934 to work in a bookshop in Hampstead, now a «Le Pain Quotidien».

If you are thinking of visiting, Southwold has some lovely quaint beach huts and a brewery. The officially signposted A1120 "Scenic Route" I took on the way out was neither as picturesque nor as fun to ride as the A1066.

09 October, 2019 12:29AM

October 08, 2019

hackergotchi for Steve Kemp

Steve Kemp

A blog overhaul

When this post becomes public I'll have successfully redeployed my blog!

My blog originally started in 2005 as a WordPress installation; at some point I used Mephisto, and then I wrote my own solution.

My project was pretty cool; I'd parse a directory of text-files, one file for each post, and insert them into an SQLite database. From there I'd initiate a series of plugins, each one to generate something specific:

  • One plugin would output an archive page.
  • Another would generate a tag cloud.
  • Yet another would generate the actual search-results for a particular month/year, or tag-name.

All in all the solution was flexible and it wasn't too slow because finding posts via the SQLite database was pretty good.

Anyway I've come to realize that all that freedom and architecture was overkill. I don't need to do fancy presentation, I don't need a loosely-coupled set of plugins.

So now I have a simpler solution which uses my existing template, uses my existing posts - with only a few cleanups - and generates the site from scratch, including all the comments, in less than 2 seconds.

After running make clean a complete rebuild via make upload (which deploys the generated site to the remote host via rsync) takes 6 seconds.
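The Makefile itself isn't shown here, but the deploy step is essentially a single rsync invocation; a rough sketch of the kind of command make upload might run (the output directory, user, and host below are made up for illustration):

rsync -avz --delete output/ user@example.com:/var/www/blog/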

I've lost the ability to be flexible in some areas, but I've gained all the speed. The old project took somewhere between 20-60 seconds to build, depending on what had changed.

In terms of simplifying my life I've dropped the remote installation of a site-search which means I can now host this site on a static site with only a single handler to receive any post-comments. (I was 50/50 on keeping comments. I didn't want to lose those I'd already received, and I do often find valuable and interesting contributions from readers, but being 100% static had its appeal too. I guess they stay for the next few years!)

08 October, 2019 06:00PM

Antoine Beaupré

Tip of the day: batch PDF conversion with LibreOffice

Someone asked me today why they couldn't write on the DOCX document they received from a student using the pen in their Onyx Note Pro reader. The answer, of course, is that while the Onyx can read those files, it can't annotate them: that only works with PDFs.

Next question then, is of course: do I really need to open each file separately and save them as PDF? That's going to take forever, I have 30 students per class!

Fear not, shell scripting and headless mode fly in to the rescue!

As it turns out, one of the LibreOffice parameters allows you to run batch operations on files. By calling:

libreoffice --headless --convert-to pdf *.docx

LibreOffice will happily convert all the *.docx files in the current directory to PDF. But because navigating the commandline can be hard, I figured I could push this a tiny little bit further and wrote the following script:

#!/bin/sh

exec libreoffice --headless --convert-to pdf "$@"

Drop this in ~/.local/share/nautilus/scripts/libreoffice-pdf, mark it executable, and voilà! You can batch-convert basically any text file (or anything supported by LibreOffice, really) into PDF.
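If you prefer to stay on the command line anyway, installing the script boils down to a few commands; a minimal sketch, assuming you saved it as libreoffice-pdf in the current directory:

# Create the Nautilus scripts directory if it doesn't exist yet
mkdir -p ~/.local/share/nautilus/scripts

# Install the script and mark it executable
cp libreoffice-pdf ~/.local/share/nautilus/scripts/
chmod +x ~/.local/share/nautilus/scripts/libreoffice-pdf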

Now I wonder if this would be a useful addition to the Debian package, anyone?

08 October, 2019 04:28PM

Jonas Meurer

debian lts report 2019.09

Debian LTS report for September 2019

This month I was allocated 10 hours and carried over 9.5 hours from August. Unfortunately, again I didn't find much time to work on LTS issues, partially because I was travelling. I spent 5 hours on the task listed below. That means that I carry over 14.5 hours to October.

08 October, 2019 02:34PM

Jamie McClelland

Editing video without a GUI? Really?

It seems counter-intuitive - if ever there was a program in need of a graphical user interface, it's a non-linear video editing program.

However, as part of the May First board elections, I discovered otherwise.

We asked each board candidate to submit a 1 - 2 minute video introduction about why they want to be on the board. My job was to connect them all into a single video.

I had an unrealistic thought that I could find some simple tool that could concatenate them all together (like mkvmerge) but I soon realized that this approach requires that everyone use the exact same format, codec, bit rate, sample rate and blah blah blah.

I soon realized that I needed to actually make a video, not compile one. I create videos so infrequently that I often forget the name of the video editing software I used last time, so it takes some searching. This time I found that I had openshot-qt installed, but when I tried to run it, I got a backtrace (which someone else has already reported).

I considered looking for another GUI editor, but I wasn't that interested in learning what might be a complicated user interface when what I need is so simple.

So I kept searching and found melt. Wow.

I ran:

melt originals/* -consumer avformat:all.webm acodec=libopus vcodec=libvpx

And a while later I had a video. Impressive. It handled people who submitted their videos in portrait mode on their cell phones in mp4 as well as webcam submissions using webm/vp9 in landscape mode.

Thank you melt developers!

08 October, 2019 01:19PM

hackergotchi for Jonathan McDowell

Jonathan McDowell

Ada Lovelace Day: 5 Amazing Women in Tech

Ada Lovelace portrait

It’s Ada Lovelace day and I’ve been lax in previous years about celebrating some of the talented women in technology I know or follow on the interwebs. So, to make up for it, here are 5 amazing technologists.

Allison Randal

I was initially aware of Allison through her work on Perl, was vaguely aware of the fact she was working on Ubuntu, briefly overlapped with her at HPE (and thought it was impressive HP were hiring such high-calibre Free Software folk) when she was working on OpenStack, and have had the pleasure of meeting her in person due to the fact we both work on Debian. In the continuing theme of being able to do all things tech she’s currently studying a PhD at Cambridge (the real one), and has already written a fascinating paper about the security misconceptions around virtual machines and containers. She’s also been doing things with home automation, properly, with local speech recognition rather than relying on any external assistant service (I will, eventually, find the time to follow her advice and try this out for myself).

Alyssa Rosenzweig

Graphics are one of the many things I just can’t do. I’m not artistic and I’m in awe of anyone who is capable of wrangling bits to make computers do graphical magic. People who can reverse engineer graphics hardware that would otherwise only be supported by icky binary blobs impress me even more. Alyssa is such a person, working on the Panfrost driver for ARM’s Mali Midgard + Bifrost GPUs. The lack of a Free driver stack for this hardware is a real problem for the ARM ecosystem and she has been tirelessly working to bring this to many ARM based platforms. I was delighted when I saw one of my favourite Free Software consultancies, Collabora, had given her an internship over the summer. (Selfishly I’m hoping it means the Gemini PDA will eventually be able to run an upstream kernel with accelerated graphics.)

Angie McKeown

The first time I saw Angie talk it was about the user experience of Virtual Reality, and how it had an entirely different set of guidelines to conventional user interfaces. In particular the premise of not trying to shock or surprise the user while they’re in what can be a very immersive environment. Obvious once someone explains it to you! Turns out she was also involved in the early days of custom PC builds and internet cafes in Northern Ireland, and has interesting stories to tell. These days she’s concentrating on cyber security - I have her to thank for convincing me to persevere with Ghidra - having looked at Bluetooth security as part of her Masters. She’s also deeply aware of the implications of the GDPR and has done some interesting work on thinking about how it affects the computer gaming industry - both from the perspective of the author, and the player.

Claire Wilgar

I’m not particularly fond of modern web design. That’s unfair of me, but web designers seem happy to load megabytes of Javascript from all over the internet just to display the most basic of holding pages. Indeed it seems that such things now require all the includes rather than being simply a matter of HTML, CSS and some graphics, all from the same server. Claire talked at Women Techmakers Belfast about moving away from all of this bloat and back to a minimalistic approach with improved performance, responsiveness and usability, without sacrificing functionality or presentation. She said all the things I want to say to web designers, but from a position of authority, being a front end developer as her day job. It’s great to see someone passionate about front-end development who wants to do things the right way, and talks about it in a way that even people without direct experience of the technologies involved (like me) can understand and appreciate.

Karen Sandler

There aren’t enough people out there who understand law and technology well. Karen is one of the few I’ve encountered who do, and not only that, but really, really gets Free software and the impact of the four freedoms on users in a way many pure technologists do not. She’s had a successful legal career that’s transitioned into being the general counsel for the Software Freedom Law Center, been the executive director of GNOME and is now the executive director of the Software Freedom Conservancy. As someone who likes to think he knows a little bit about law and technology I found Karen’s wealth of knowledge and eloquence slightly intimidating the first time I saw her speak (I think at some event in San Francisco), but I’ve subsequently (gratefully) discovered she has an incredible amount of patience (and ability) when trying to explain the nuances of free software legal issues.

08 October, 2019 07:00AM

October 07, 2019

hackergotchi for Julien Danjou

Julien Danjou

Python and fast HTTP clients


Nowadays, it is more than likely that you will have to write an HTTP client for your application that will have to talk to another HTTP server. The ubiquity of REST APIs makes HTTP a first-class citizen. That's why knowing optimization patterns is a prerequisite.

There are many HTTP clients in Python; the most widely used and easiest to work with is requests. It is the de-facto standard nowadays.

Persistent Connections

The first optimization to take into account is the use of a persistent connection to the Web server. Persistent connections have been standard since HTTP 1.1, though many applications do not leverage them. This lack of optimization is simple to explain if you know that when using requests in its simple mode (e.g. with the get function) the connection is closed on return. To avoid that, an application needs to use a Session object that allows reusing an already opened connection.

import requests

session = requests.Session()
session.get("http://example.com")
# Connection is re-used
session.get("http://example.com")
Using Session with requests

Each connection is stored in a pool of connections (10 by default), the size of
which is also configurable:

import requests


session = requests.Session()
adapter = requests.adapters.HTTPAdapter(
    pool_connections=100,
    pool_maxsize=100)
session.mount('http://', adapter)
response = session.get("http://example.org")
Changing pool size

Reusing the TCP connection to send out several HTTP requests offers a number of performance advantages:

  • Lower CPU and memory usage (fewer connections opened simultaneously).
  • Reduced latency in subsequent requests (no TCP handshaking).
  • Exceptions can be raised without the penalty of closing the TCP connection.

The HTTP protocol also provides pipelining, which allows sending several requests on the same connection without waiting for the replies to come (think batch). Unfortunately, this is not supported by the requests library. However, pipelining requests may not be as fast as sending them in parallel. Indeed, the HTTP 1.1 protocol forces the replies to be sent in the same order as the requests were sent – first-in first-out.

Parallelism

requests also has one major drawback: it is synchronous. Calling requests.get("http://example.org") blocks the program until the HTTP server replies completely. Having the application waiting and doing nothing can be a drawback here. It is possible that the program could do something else rather than sitting idle.

A smart application can mitigate this problem by using a pool of threads like the ones provided by concurrent.futures. It allows parallelizing the HTTP requests in a very rapid way.

from concurrent import futures

import requests


with futures.ThreadPoolExecutor(max_workers=4) as executor:
    futures = [
        executor.submit(
            lambda: requests.get("http://example.org"))
        for _ in range(8)
    ]

results = [
    f.result().status_code
    for f in futures
]

print("Results: %s" % results)
Using futures with requests

This pattern being quite useful, it has been packaged into a library named requests-futures. The usage of Session objects is made transparent to the developer:

from requests_futures import sessions


session = sessions.FuturesSession()

futures = [
    session.get("http://example.org")
    for _ in range(8)
]

results = [
    f.result().status_code
    for f in futures
]

print("Results: %s" % results)
Using requests-futures

By default a worker with two threads is created, but a program can easily customize this value by passing the max_workers argument or even its own executor to the FuturesSession object – for example like this: FuturesSession(executor=ThreadPoolExecutor(max_workers=10)).

Asynchronicity

As explained earlier, requests is entirely synchronous. That blocks the application while waiting for the server to reply, slowing down the program. Making HTTP requests in threads is one solution, but threads do have their own overhead and this implies parallelism, which is not something everyone is always glad to see in a program.

Starting with version 3.5, Python offers asynchronicity as its core using asyncio. The aiohttp library provides an asynchronous HTTP client built on top of asyncio. This library allows sending requests in series but without waiting for the first reply to come back before sending the new one. In contrast to HTTP pipelining, aiohttp sends the requests over multiple connections in parallel, avoiding the ordering issue explained earlier.

import aiohttp
import asyncio


async def get(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return response


loop = asyncio.get_event_loop()

coroutines = [get("http://example.com") for _ in range(8)]

results = loop.run_until_complete(asyncio.gather(*coroutines))

print("Results: %s" % results)
Using aiohttp

All those solutions (using Session, threads, futures or asyncio) offer different approaches to making HTTP clients faster.

Performances

The snippet below is an HTTP client sending requests to httpbin.org, an HTTP API that provides (among other things) an endpoint simulating a long request (a second here). This example implements all the techniques listed above and times them.

import contextlib
import time

import aiohttp
import asyncio
import requests
from requests_futures import sessions

URL = "http://httpbin.org/delay/1"
TRIES = 10


@contextlib.contextmanager
def report_time(test):
    t0 = time.time()
    yield
    print("Time needed for `%s' called: %.2fs"
          % (test, time.time() - t0))


with report_time("serialized"):
    for i in range(TRIES):
        requests.get(URL)


session = requests.Session()
with report_time("Session"):
    for i in range(TRIES):
        session.get(URL)


session = sessions.FuturesSession(max_workers=2)
with report_time("FuturesSession w/ 2 workers"):
    futures = [session.get(URL)
               for i in range(TRIES)]
    for f in futures:
        f.result()


session = sessions.FuturesSession(max_workers=TRIES)
with report_time("FuturesSession w/ max workers"):
    futures = [session.get(URL)
               for i in range(TRIES)]
    for f in futures:
        f.result()


async def get(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            await response.read()

loop = asyncio.get_event_loop()
with report_time("aiohttp"):
    loop.run_until_complete(
        asyncio.gather(*[get(URL)
                         for i in range(TRIES)]))
Program to compare the performances of different requests usage

Running this program gives the following output:

Time needed for `serialized' called: 12.12s
Time needed for `Session' called: 11.22s
Time needed for `FuturesSession w/ 2 workers' called: 5.65s
Time needed for `FuturesSession w/ max workers' called: 1.25s
Time needed for `aiohttp' called: 1.19s

Without any surprise, the slowest result comes with the dumb serialized version, since all the requests are made one after another without reusing the connection — 12 seconds to make 10 requests.

Using a Session object and therefore reusing the connection means saving 8% in terms of time, which is already a big and easy win. Minimally, you should always use a session.

If your system and program allow the usage of threads, it is a good call to use them to parallelize the requests. However threads have some overhead, and they are not weight-less. They need to be created, started and then joined.

Unless you are stuck on an old version of Python, using aiohttp should without a doubt be the way to go nowadays if you want to write a fast and asynchronous HTTP client. It is the fastest and the most scalable solution, as it can handle hundreds of parallel requests. The alternative, managing hundreds of threads in parallel, is not a great option.

Streaming

Another speed optimization that can be efficient is streaming the requests. When making a request, by default the body of the response is downloaded immediately. The stream parameter provided by the requests library and the content attribute of aiohttp both provide a way to avoid loading the full content in memory as soon as the request is executed.

import requests


# Use `with` to make sure the response stream is closed and the connection can
# be returned back to the pool.
with requests.get('http://example.org', stream=True) as r:
    print(list(r.iter_content()))
Streaming with requests
import aiohttp
import asyncio


async def get(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.content.read()

loop = asyncio.get_event_loop()
tasks = [asyncio.ensure_future(get("http://example.com"))]
loop.run_until_complete(asyncio.wait(tasks))
print("Results: %s" % [task.result() for task in tasks])
Streaming with aiohttp

Not loading the full content is extremely important in order to avoid allocating potentially hundreds of megabytes of memory for nothing. If your program does not need to access the entire content as a whole but can work on chunks, it is probably better to just use those methods. For example, if you're going to save and write the content to a file, reading only a chunk and writing it at the same time is going to be much more memory efficient than reading the whole HTTP body, allocating a giant pile of memory, and then writing it to disk.
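
For example, a minimal sketch of that chunked copy with requests (my illustration; the file name and chunk size are arbitrary):

import requests


# Read the body in small chunks and write each one to disk immediately,
# so memory usage stays bounded by the chunk size, not the body size.
with requests.get('http://example.org', stream=True) as r:
    with open('output.bin', 'wb') as f:
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)
Saving a streamed response chunk by chunk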

I hope that'll make it easier for you to write proper HTTP clients and requests. If you know any other useful technique or method, feel free to write it down in the comment section below!

07 October, 2019 09:30AM by Julien Danjou

Antoine Beaupré

This is why native apps matter

I was just watching a web stream on Youtube today and wondering why my CPU was so busy. So I fired up top and saw my web browser (Firefox) taking up around 70% of a CPU to play the stream.

I thought, "this must be some high resolution crazy stream! how modern! such wow!" Then I thought, wait, this is the web, there must be something insane going on.

So I did a little experiment: I started chromium --temp-profile on the stream, alongside vlc (which can also play Youtube streams!). Then I took a snapshot of the top(1) command after 5 minutes. Here are the results:

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
16332 anarcat   20   0 1805160 269684 102660 S  60,2   1,7   3:34.96 chromium
16288 anarcat   20   0  974872 119752  87532 S  33,2   0,7   1:47.51 chromium
16410 anarcat   20   0 2321152 176668  80808 S  22,0   1,1   1:15.83 vlc
 6641 anarcat   20   0   21,1g 520060 137580 S  13,8   3,2  55:36.70 x-www-browser
16292 anarcat   20   0  940340  83980  67080 S  13,2   0,5   0:41.28 chromium
 1656 anarcat   20   0 1970912  18736  14576 S  10,9   0,1   4:47.08 pulseaudio
 2256 anarcat   20   0  435696  93468  78120 S   7,6   0,6  16:03.57 Xorg
16262 anarcat   20   0 3240272 165664 127328 S   6,2   1,0   0:31.06 chromium
  920 message+  20   0   11052   5104   2948 S   1,3   0,0   2:43.37 dbus-daemon
17915 anarcat   20   0   16664   4164   3276 R   1,3   0,0   0:02.07 top

To deconstruct this, you can see my Firefox process (masquerading as x-www-browser), which has been running for a long time. It's taken 55 hours of CPU time, but let's ignore that for now as it's not part of the benchmark. What I find fascinating is that there are at least 4 chromium processes running here, and they collectively take up over 7 minutes of CPU time.

Compare this to a little over one (1!!!11!!!) minute of CPU time for VLC, and you realize why people are so ranty about everything being packaged as web apps these days. It's basically using up an order of magnitude more processing power (and therefore electric power and slave labor) to watch those silly movies in your web browser than in a proper video player.

Keep that in mind next time you let Youtube go on a "autoplay Donald Drumpf" playlist...

07 October, 2019 12:15AM

October 06, 2019

Iustin Pop

IronBike Einsiedeln 2019

The previous race I did at the end of August was soo much fun—I forgot how much fun this is—that I did the unthinkable and registered to the IronBike as well, at a short (but not the shortest) distance.

Why unthinkable? Last time I was here, three years ago, I apparently told my wife I would never, ever go to this race again, since it’s not a fun one: steep up, steep down, or flat, but no flowing sections.

Well, it seems I forgot I said that; I only remembered “it’s a hard race”, so I registered at the 53km distance, where there are no age categories. Just “Herren Fun” ☺

September

The month of September was a pretty good one overall: I even managed to lose 2kg (of fat, I’m 100% sure), and continued to increase my overall fitness - 23→34 CTL in Training Peaks.

I also did the Strava Escape challenge and the Sufferfest Yoga challenge, so core fitness improved as well. And I was more rested: Training Peaks form was -8, up from -18 for the previous race.

Overall, I felt reasonably confident to complete (compete?) the shorter distance; official distance 53km, 1’400m altitude.

Race (day)

The race for this distance starts very late (around half past ten), so it was an easy morning, breakfast, drive to place, park, get dressed, etc.

The weather was actually better than I feared, it was plain shorts and t-shirt, no need for arm warmers, or anything like that. And it got sunnier as the race went on, which reminded me why on my race prep list it says “Sunscreen (always)”, and not just sunscreen.

And yes, this race as well, even at this “fun” distance, started very strong. The first 8.7km are flat (total altitude gain 28m, i.e. nothing), and I did them at an average speed of 35.5km/h! Again, this is on a full-suspension mountain bike, with thick tires. I actually drafted, a lot, and it was the first time I realised that, even on a MTB, knowing how to ride in a group is useful.

And then the first climb starts: about 8.5km, gaining 522m of altitude (only around a third of the goal…), which was tiring. In the few places where the climb flattened, I saw again that one of the few ways I can gain on people is by timing my effort well enough that when the road flattens, I can start strongly and thus gain, sometimes significantly.

I don’t know if this is related to the fact that I’m on the heavier but also stronger side (this is comparing peak 5s power with other people on Sufferfest/etc.), or just I was fresh enough, but I was able to apply this repeatedly over the first two thirds of the race.

At this moment I was still well, and the only negative aspect—my lower back pain, which started very early—was on and off, not continuous, so all good.

First descent, 4.7km, average almost -12%, thus losing almost all the gain, ~450m. And this was an interesting descent: a lot of it was on a quite wide trail, but “paved” with square wood logs. And the gaps between the wood logs, about 3-4cm, were not filled well enough with soil, so it was a very jittery ride. I saw a lot of people unable to ride there (to my surprise), and, an even bigger surprise, probably 10 or 15 people with flats. I guess they went low enough on pressure to get pinch flats (snake bites) and thus a flat. I had no issues; I was running a bit high pressure (lucky guess), which I reduced later. But here it served me very well.

Another short uphill, 1.8km/230m of altitude, but I gave up and pushed the bike. Not fancy, but when I got off the bike the gradient was 16%!! And that’s not my favourite place in the world to be. Pushing the bike was not that bad: I was going 4.3km/h versus 5.8km/h beforehand, so I wasn’t losing much time. But yes, I was losing time.

Then another downhill, thankfully not so steep, so I could enjoy going down for longer, and then a flat section. Overall almost 8km of flat, which I remembered quite well. Actually I remembered quite a few things from the previous race, despite the different route, but this flat section especially I remembered in two ways:

  • that I was already very, very tired back then
  • and that I didn’t remember at all what was coming next

So I started on this flat section, expecting good progress, only to find myself going slower and slower. I couldn’t understand why it was so tiring to go straight, until I started approaching the group ahead. I realised then it was a “false flat”; I knew the term before, but didn’t really understand what it meant; now I do ☺ About 1km at a mean grade of 4.4%, which is not much when you know it’s a climb, but it is not flat. Once I realised this, I felt better. The other ~6.5km I spent pursuing a large group ahead, which I managed to catch about half a kilometre before the end of the flat.

I enjoyed this section; then the road turned and I saw what my brain didn’t want to remember: the last climb, 4km long, only ~180m of altitude, but a mean grade of 10%, and I didn’t have the energy for all of it. Which is kind of hilarious: 180m of altitude is only twice and a bit my daily one-way commute, so peanuts, but I was quite tired from the two previous climbs and from chasing that group, so after 11 minutes I got off the bike and started pushing.

And the pushing was the same as before, actually even better: I was going 5.2km/h on average here, versus 6.1km/h before, so less than 1km/h difference. This is faster than a usual walking pace; as it was not very steep, I was walking fast. Since I was able to walk fast and felt recovered, I got back on the bike before the climb finished, only to feel a solid cramp in the right leg as soon as I tried to start pedalling. OK, let me pedal with the left; ouch, a cramp there as well. So I was going almost as slow as walking, just managing the cramps and keeping them below critical, and after about 5 minutes they went away.

It was the first time I got such cramps, or any cramps, during a race. I’m not sure I understand why; I did drink energy drinks, not just water (so I assume no acute lack of electrolytes). Reading on the internet, it seems cramps are not a solved problem, and are most likely just due to hard efforts at high intensity. It was scary for about ten seconds, as I feared a full cramping of one or both legs (and being clipped in… not funny), but once I realised I could keep it under control, it was just interesting.

I now remembered the rest of the route (ahem, see below), so I knew I was looking at a long (7km) flat-ish section (100m gain) before the last, relatively steep descent. Steep as in -24.4% max gradient, with an average of -15%. This done, I was on relatively flat roads (paved, even), and my Garmin was showing 1’365m of altitude gain since the start of the race. The difference from the official number of 1’400m was only 35m, which I was sure was due to simple measurement error; the distance so far was 50.5km, so I was looking at the remaining 2.5km being a fast run towards the finish.

Then, in a village, we exited the main road and took a small road going up a bit, and then up a bit more, and then—I was already pretty annoyed here—the road opened onto a climb. A CLIMB!!!! I could see the top of the hill and people going up, and it was not a small climb! It was not 35m, nothing near that; it was a short but real climb!

I was swearing with all my power in various languages. I had completely and totally forgotten this last climb, I had not planned for it, and I was pissed off ☺ It turned out to be 1.3km long, with a mean grade of 11.3% (!) and 123m of elevation gain. It was almost 100m more than I had planned for… I got off the bike again, especially as it was a bit tricky to pedal where the gradient was 20% (!!!), so I pushed, then got back on, then back off, and then back on. At the end of the climb there was a photographer again, so I got on the bike and hoped I might get a smiling picture, which almost happened:

Pretending I’m enjoying this, and showing how much weight I should get rid of :/

And then from here on it was clean, easy downhill, at most -20% grade, back in Einsiedeln, and back to the start with a 10m climb, which I pushed hard, overtook a few people (lol), passed the finish, and promptly found the first possible place to get off the bike and lie down. This was all behind me now:

Thanks VeloViewer!

And, my Garmin said 1’514m altitude, more than 100m more than the official number. Boo!

Aftermath

It was the first race after which I actually had to lie down. The first race I got cramps in as well ☺ My third-highest 1h30m heart rate value ever, and a lot of other 2019 peaks too. And I was dead tired.

I learned it’s very important to know well the route altitude profile, in order to be able to time your efforts well. I got reminded I suck at climbing (so I need to train this over and over), but I learned that I can push better than other people at the end of a climb, so I need to train this strength as well.

I also learned that you can’t rely on the official race data, since it can be off by more than 100m. Or maybe I can’t rely on my Garmin(s), but with a barometric altimeter, I don’t think the problem was there.

I think I ate a bit too much during the race, which was not optimal, but I was very hungry already after the first climb…

But the biggest thing overall was that, despite there being no age groups, I got a better placement than I expected. I finished in 3h37m14s, 1h21m56s behind first place, but a good enough time to finish in 325th place out of 490 finishers.

By my calculations, 490 finishers means the thirds are 1-163, 164-326, and 327-490. Which means I was in the 2nd third, not in the last one!!! Hence the subtitle of this post, moving up, since I’m usually in the bottom third, not the middle one. And there were also 11 DNF people, so I’m well in the middle third ☺

Joke aside, this made me very happy, as it’s the first time I feel like my efforts (in a bad year like this, even) amount to something. So good.

Speaking of DNFs, I was shocked at the number of people I saw either trying to fix their bike on the side, walking their bike (to the next repair post), or just shouting angrily at a broken component. I counted probably close to 20 such people, a lot of them after that first descent; but given only 11 DNFs, I guess many people do carry spare tubes. I didn’t; hope is a strategy sometimes ☺

Now, with the season really closed, time to start thinking about next year. And how to lose those 10kg of fat I still need to lose…

06 October, 2019 09:00PM

Antoine Beaupré

Calibre replacement considerations

Summary

TL;DR: I'm considering replacing those various Calibre components with...

My biggest blockers, which don't really have good alternatives, are:

  • collection browser
  • metadata editor
  • device sync

See below why and a deeper discussion on all the features.

Problems with Calibre

Calibre is an amazing piece of software: it allows users to manage ebooks on their desktop and a multitude of ebook readers. It's used by Linux geeks as well as Windows power-users and vastly surpasses any native app shipped by ebook manufacturers. I know almost exactly zero people who have an ebook reader and do not use Calibre.

However, it has had many problems over the years:

Update: a previous version of that post claimed that all of Calibre had been removed from Debian. This was inaccurate, as the Debian Calibre maintainer pointed out. What happened was Calibre 4.0 was uploaded to Debian unstable, then broke because of missing Python 2 dependencies, and an older version (3.48) was uploaded in its place. So Calibre will stay around in Debian for the foreseeable future, hopefully, but the current latest version (4.0) cannot get in because it depends on older Python 2 libraries.

The latest issue (Python 3) is the last straw, for me. While Calibre is an awesome piece of software, I can't help but think it's doing too much, and the wrong way. It's one of those tools that looks amazing on the surface, but when you look underneath, it's a monster that is impossible to maintain, a liability that is just bound to cause more problems in the future.

What does Calibre do anyways

So let's say I wanted to get rid of Calibre, what would that mean exactly? What do I actually use Calibre for anyways?

Calibre is...

  • an ebook viewer: Calibre ships with the ebook-viewer command, which allows one to browse a vast variety of ebook formats. I rarely use this feature, since I read my ebooks on an e-reader, on purpose. There is, besides, a good variety of ebook readers, on different platforms, that can replace Calibre here:

    • Atril, MATE's version of Evince, supports ePUBs (Evince doesn't seem to), but fails to load certain ebooks (book #1459 for example)
    • Bookworm looks very promising; not in Debian (Debian bug #883867), but on Flathub. It scans books on exit, and can take a loong time to scan an entire library (took 24+ hours here, and I had to kill pdftohtml a few times) without a progress bar, but it has a nice library browser, although it looks like covers are sorted randomly. Search works okay, however. It is unclear what happens when you add a book: it doesn't end up in the chosen on-disk library.
    • Buka is another "ebook" manager written in Javascript, but only supports PDFs for now.
    • coolreader is another alternative, not yet in Debian (#715470)
    • Emacs (of course) supports ebooks through nov.el
    • fbreader also supports ePUBs, but is much slower than all those others, and turned proprietary so is unmaintained
    • Foliate looks gorgeous and is built on top of the ePUB.js library, not in Debian, but Flathub.
    • GNOME Books is interesting, but relies on the GNOME search engine and doesn't find my books (and instead lots of other garbage). it's been described as "basic" and "the least mature" in this OMG Ubuntu review
    • koreader is a good alternative reader for the Kobo devices and now also has builds for Debian, but no Debian package
    • lucidor is a Firefox extension that can read and organize books, but is not packaged in Debian either (although upstream provides a .deb). It depends on older Firefox releases (or "Pale moon", a Firefox fork), see also the firefox XULocalypse for details
    • MuPDF also reads ePUBs and is really fast, but the user interface is extremely minimal, and copy-paste doesn't work so well (think "Xpdf"). it also failed to load certain books (e.g. 1359) and warns about 3.0 ePUBs (e.g. book 1162)
    • Okular supports ePUBs when okular-extra-backends is installed
    • plato is another alternative reader for Kobo readers, not in Debian
  • an ebook editor: Calibre also ships with an ebook-edit command, which allows you to do all sorts of nasty things to your ebooks. I have rarely used this tool, having found it hard to use and not giving me the results I needed, in my use case (which was to reformat ePUBs before publication). For this purpose, Sigil is a much better option, now packaged in Debian. There are also various tools that render to ePUB: I often use the Sphinx documentation system for that purpose, and have been able to produce ePUBs from LaTeX for some projects.

  • a file converter: Calibre can convert between many ebook formats, to accommodate the various readers. In my experience, this doesn't work very well: the layout is often broken and I have found it's much better to find pristine copies of ePUB books than fight with the converter. There are, however, very few alternatives to this functionality, unfortunately.

  • a collection browser: this is the main functionality I would miss from Calibre. I am constantly adding books to my library, and Calibre does have this incredibly nice functionality of just hitting "add book" and Just Do The Right Thing™ after that. Specifically, what I like is that it:

    • sort, view, and search books in folders, per author, date, editor, etc
    • quick search is especially powerful
    • allows downloading and editing metadata (like covers) easily
    • track read/unread status (although that's a custom field I had to add)

    Calibre is, as far as I know, the only tool that goes so deep in solving that problem. The Liber web server, however, does provide similar search and metadata functionality. It also supports migrating from an existing Calibre database as it can read the Calibre metadata stores. When no metadata is found, it fetches some from online sources (currently Google Books).

    One major limitation of Liber in this context is that it's solely search-driven: it will not allow you to see (for example) the "latest books added" or "browse by author". It also doesn't support "uploading" books, although it will incrementally pick up new books added by hand in the library. It somewhat assumes Calibre already exists to properly curate the library, and is more designed to be a search engine and book-sharing system between liber instances.

    This also connects with the more general "book inventory" problem I have, which involves an inventory of physical books and a directory of online articles. See also firefox (Zotero section) and ?bookmarks for a longer discussion of that problem.

  • a metadata editor: the "collection browser" is based on a lot of metadata that Calibre indexes from the books. It can magically find a lot of stuff in the multitude of file formats it supports, something that is pretty awesome and impressive. For example, I just added a PDF file, and it found the book cover, author, publication date, publisher, language and the original mobi book id (!). It also added the book to the right directory and dumped that metadata and the cover in a file next to the book. And if that's not good enough, it can poll that data from various online sources like Amazon and Google Books.

    Maybe the work Peter Keel did could be useful in creating some tool which would do this automatically? Or maybe Sigil could help? Liber can also fetch metadata from Google books, but not interactively.

    I still use Calibre mostly for this.

  • a device synchronization tool: I mostly use Calibre to synchronize books with an ebook reader. It can also automatically update the database on the ebook with relevant metadata (e.g. collection or "shelves"), although I do not really use that feature. I do like to use Calibre to quickly search and prune books from my ebook reader, however. I might be able to use git-annex for this, given that I already use it to synchronize and backup my ebook collection in the first place...

  • an RSS reader: I used this for a while to read RSS feeds on my ebook-reader, but it was pretty clunky. Calibre would be continuously generating new ebooks based on those feeds and I would never read them, because I would never find the time to transfer them to my ebook viewer in the first place. Instead, I use a regular RSS feed reader. I ended up writing my own, feed2exec, and when I find an article I like, I add it to Wallabag, which gets sync'd to my reader using wallabako, another tool I wrote.

  • an ebook web server: Calibre can also act as a web server, presenting your entire ebook collection as a website. It also supports acting as an OPDS directory, which is kind of neat. There are, as far as I know, no alternatives for such a system, although there are servers to share and store ebooks, like Trantor or Liber.

Note that I might have forgotten functionality in Calibre in the above list: I'm only listing the things I have used or am using on a regular basis. For example, you can have a USB stick with Calibre on it to carry the actual software, along with the book library, around on different computers, but I never used that feature.

So there you go. It's a colossal task! And while it's great that Calibre does all those things, I can't help but think that it would be better if Calibre was split up in multiple components, each maintained separately. I would love to use only the document converter, for example. It's possible to do that on the commandline, but it still means I have the entire Calibre package installed.

Maybe a simple solution, from Debian's point of view, would be to split the package into multiple components, with the GUI and web servers packaged separately from the commandline converter. This way I would be able to install only the parts of Calibre I need and have limited exposure to other security issues. It would also make it easier to run Calibre headless, in a virtual machine or remote server for extra isolation, for example.

Update: this post generated some activity on Mastodon, follow the conversation here or on your favorite Mastodon instance.

06 October, 2019 07:27PM

hackergotchi for Norbert Preining

Norbert Preining

RIP (for now) Calibre in Debian

The current purge of all Python2 related packages has a direct impact on Calibre. The latest version of Calibre requires Python modules that are not (anymore) available for Python 2, which means that Calibre >= 4.0 will for the foreseeable future not be available in Debian.

I have just uploaded version 3.48, which is the last version that can run on Debian. From now on, until upstream Calibre switches to Python 3, this will be the last version of Calibre in Debian.

In case you need newer features (including the occasional security fixes), I recommend switching to the upstream installer, which is rather clean (installing into /opt/calibre, creating some links to the startup programs, and installing completions for zsh and bash). It also prepares an uninstaller that reverts these changes.

Enjoy.

06 October, 2019 02:14AM by Norbert Preining

October 05, 2019

Reproducible Builds

Reproducible Builds in September 2019

Welcome to the September 2019 report from the Reproducible Builds project!

In these reports we outline the most important things that we have been up to over the past month. As a quick refresher of what our project is about: whilst anyone can inspect the source code of free software for malicious changes, most software is distributed to end users or servers as precompiled binaries. The motivation behind the reproducible builds effort is to ensure no changes have been introduced during these compilation processes. This is achieved by promising that identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
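
In concrete terms, that consensus check boils down to comparing checksums of independently built artifacts. A minimal sketch in Python (my illustration, not project tooling; the paths are placeholders):

import hashlib


def sha256(path):
    # Hash the file in chunks so large artifacts need not fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Two independent rebuilds of the same source should be bit-for-bit identical.
print(sha256("build-a/package.deb") == sha256("build-b/package.deb"))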

In September’s report, we cover:

  • Media coverage & events: more presentations, preventing Stuxnet, etc.
  • Upstream news: kernel reproducibility, grafana, systemd, etc.
  • Distribution work: reproducible images in Arch Linux, policy changes in Debian, etc.
  • Software development: yet more work on diffoscope, upstream patches, etc.
  • Misc news & getting in touch: from our mailing list, how to contribute, etc.

If you are interested in contributing to our project, please visit our Contribute page on our website.


Media coverage & events

This month Vagrant Cascadian attended the 2019 GNU Tools Cauldron in Montréal, Canada and gave a presentation entitled Reproducible Toolchains for the Win (video).

In addition, our project was highlighted as part of a presentation by Andrew Martin at the All Systems Go conference in Berlin titled Rootless, Reproducible & Hermetic: Secure Container Build Showdown, and Björn Michaelsen from the Document Foundation presented at the 2019 LibreOffice Conference in Almería in Spain on the status of reproducible builds in the LibreOffice office suite.

In academia, Anastasis Keliris and Michail Maniatakos from the New York University Tandon School of Engineering published a paper titled ICSREF: A Framework for Automated Reverse Engineering of Industrial Control Systems Binaries (PDF) that speaks to concerns regarding the security of Industrial Control Systems (ICS) such as those attacked via Stuxnet. The paper outlines their ICSREF tool for reverse-engineering binaries from such systems and furthermore demonstrates a scenario whereby a commercial smartphone equipped with ICSREF could be easily used to compromise such infrastructure.

Lastly, it was announced that Vagrant Cascadian will present a talk at SeaGL in Seattle, Washington during November titled There and Back Again, Reproducibly.


2019 Summit

Registration for our fifth annual Reproducible Builds summit that will take place between 1st → 8th December in Marrakesh, Morocco has opened and personal invitations have been sent out.

Similar to previous incarnations of the event, the heart of the workshop will be three days of moderated sessions with surrounding “hacking” days and will include a huge diversity of participants from Arch Linux, coreboot, Debian, F-Droid, GNU Guix, Google, Huawei, in-toto, MirageOS, NYU, openSUSE, OpenWrt, Tails, Tor Project and many more. If you would like to learn more about the event and how to register, please visit our dedicated event page.


Upstream news

Ben Hutchings added documentation to the Linux kernel regarding how to make the build reproducible. As he mentioned in the commit message, the kernel is “actually” reproducible but the end-to-end process was not previously documented in one place and thus Ben describes the workflow and environment needed to ensure a reproducible build.

Daniel Edgecumbe submitted a pull request which was subsequently merged to the logging/journaling component of systemd in order that the output of e.g. journalctl --update-catalog does not differ between subsequent runs despite there being no changes in the input files.

Jelle van der Waa noticed that if the grafana monitoring tool was built within a source tree devoid of Git metadata then the current timestamp was used instead, leading to an unreproducible build. To avoid this, Jelle submitted a pull request in order that it use SOURCE_DATE_EPOCH if available.
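
The SOURCE_DATE_EPOCH convention itself is simple enough to sketch in a few lines of Python (an illustration, not grafana's actual code): use the externally supplied timestamp when present, and only fall back to the current time.

import os
import time

# Prefer the externally supplied timestamp so two builds of the same source
# embed the same date; fall back to the current time otherwise.
build_time = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))
print(time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(build_time)))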

Mes (a Scheme-based compiler for our “sister” bootstrappable builds effort) announced their 0.20 release.


Distribution work

Bernhard M. Wiedemann posted his monthly Reproducible Builds status update for the openSUSE distribution. The Thunderbird and kernel-vanilla packages will be among the larger ones to become reproducible soon, and there were additional Python patches to help with reproducibility issues of modules written in this language that have C bindings.

OpenWrt is a Linux-based operating system targeting embedded devices such as wireless network routers. This month, Paul Spooren (aparcar) switched the toolchain to use GCC version 8 by default in order to support -ffile-prefix-map=, which permits a varying build path without affecting the binary result of the build []. In addition, Paul updated the kernel-defaults package to ensure that the SOURCE_DATE_EPOCH environment variable is considered when creating the /init directory.

Alexander “lynxis” Couzens began working on a set of build scripts for creating firmware and operating system artifacts in the coreboot distribution.

Lukas Pühringer prepared an upload which was sponsored by Holger Levsen of python-securesystemslib version 0.11.3-1 to Debian unstable. python-securesystemslib is a dependency of in-toto, a framework to protect the integrity of software supply chains.

Arch Linux

The mkinitcpio component of Arch Linux was updated by Daniel Edgecumbe in order that it generates reproducible initramfs images by default, meaning that two subsequent runs of mkinitcpio produces two files that are identical at the binary level. The commit message elaborates on its methodology:

Timestamps within the initramfs are set to the Unix epoch of 1970-01-01. Note that in order for the build to be fully reproducible, the compressor specified (e.g. gzip, xz) must also produce reproducible archives. At the time of writing, as an inexhaustive example, the lzop compressor is incapable of producing reproducible archives due to the insertion of a runtime timestamp.
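
The timestamp normalization itself can be sketched in a few lines (my illustration, not mkinitcpio's code), assuming a staging directory about to be archived:

import os

# Normalize every timestamp under the staging directory to the Unix epoch
# so the archive built from it does not depend on when the build ran.
for root, dirs, files in os.walk("staging"):
    for name in dirs + files:
        os.utime(os.path.join(root, name), (0, 0), follow_symlinks=False)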

In addition, a bug was created to track progress on making the Arch Linux ISO images reproducible.

Debian

In July, Holger Levsen filed a bug against the underlying tool that maintains the Debian archive (“dak”) after he noticed that .buildinfo metadata files were not being automatically propagated in the case that packages had to be manually approved in “NEW queue”. After it was pointed out that the files were being retained in a separate location, Benjamin Hof proposed a patch for the issue that was merged and deployed this month.

Aurélien Jarno filed a bug against the Debian Policy (#940234) to request a section be added regarding the reproducibility of source packages. Whilst there is already a section about reproducibility in the Policy, it only mentions binary packages. Aurélien suggests that it:

… might be a good idea to add a new requirement that repeatedly building the source package in the same environment produces identical .dsc files.

In addition, 51 reviews of Debian packages were added, 22 were updated and 47 were removed this month adding to our knowledge about identified issues. Many issue types were added by Chris Lamb including buildpath_in_code_generated_by_bison, buildpath_in_postgres_opcodes and ghc_captures_build_path_via_tempdir.

Software development

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. It is run countless times a day on our testing infrastructure and is essential for identifying fixes and causes of non-deterministic behaviour.

This month, Chris Lamb uploaded versions 123, 124 and 125 and made the following changes:

  • New features:

    • Add /srv/diffoscope/bin to the Docker image path. (#70)
    • When skipping tests due to the lack of an installed tool, print the package that might provide it. []
    • Update the “no progressbar” logging message to match the parallel missing tlsh module warnings. []
    • Update “requires foo” messages to clarify that they are referring to Python modules. []
  • Testsuite updates:

    • The test_libmix_differences ELF binary test requires the xxd tool. (#940645)
    • Build the OCaml test input files on-demand rather than shipping them with the package in order to prevent test failures with OCaml 4.08. (#67)
    • Also conditionally skip the identification and “no differences” tests as we require the OCaml compiler to be present when building the test files themselves. (#940471)
    • Rebuild our test squashfs images to exclude the character device as extracting them requires root or fakeroot. (#65)
  • Many code cleanups, including dropping some unnecessary control flow [], dropping unnecessary pass statements [] and dropping explicit inheritance from the object class as it is unnecessary in Python 3 [].

In addition, Marc Herbert completely overhauled the handling of ELF binaries particularly around many assumptions that were previously being made via file extensions, etc. [][][] and updated the testsuite to support a newer version of the coreboot utilities. []. Mattia Rizzolo then ensured that diffoscope does not crash when the progress bar module is missing but the functionality was requested [] and made our version checking code more lenient []. Lastly, Vagrant Cascadian not only updated diffoscope to versions 123 and 125, he enabled a more complete test suite in the GNU Guix distribution. [][][][][][]

Project website

There was yet more effort put into our website this month, including:

In addition, Cindy Kim added in-toto to our “Who is Involved?” page, James Fenn updated our homepage to fix a number of spelling and grammar issues [] and Peter Conrad added BitShares to our list of projects interested in Reproducible Builds [].

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from successful builds. This month, Marc Herbert made a huge number of changes including:

  • GNU ar handler:
    • Don’t corrupt the pseudo file mode of the symbols table.
    • Add test files for “symtab” (/) and long names (//).
    • Don’t corrupt the SystemV/GNU table of long filenames.
  • Add a new $File::StripNondeterminism::verbose global and, if enabled, tell the user that ar(1) could not set the symbol table’s mtime.

In addition, Chris Lamb performed some issue investigation with the Debian Perl Team regarding issues in the Archive::Zip module including a problem with corruption of members that use bzip compression as well as a regression whereby various metadata fields were not being updated that was reported in/around Debian bug #940973.

Test framework

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org.

  • Alexander “lynxis” Couzens:
    • Fix missing .xcompile in the build system. []
    • Install the GNAT Ada compiler on all builders. []
    • Don’t install the iasl ACPI power management compiler/decompiler. []
  • Holger Levsen:
    • Correctly handle the $DEBUG variable in OpenWrt builds. []
    • Refactor and notify the #archlinux-reproducible IRC channel for problems in this distribution. []
    • Ensure that only one mail is sent when rebooting nodes. []
    • Unclutter the output of a Debian maintenance job. []
    • Drop a “todo” entry as we have been varying on a merged /usr for some time now. []

In addition, Paul Spooren added an OpenWrt snapshot build script which downloads .buildinfo and related checksums from the relevant download server and attempts to rebuild and then validate them for reproducibility. []

The usual node maintenance was performed by Holger Levsen [][][], Mattia Rizzolo [] and Vagrant Cascadian [][].

reprotest

reprotest is our end-user tool to build the same source code twice in different environments and then check the binaries produced by each build for differences. This month, a change by Dmitry Shachnev was merged to not use the faketime wrapper at all when asked not to vary time [], and Holger Levsen subsequently released this as version 0.7.9 after dramatically overhauling the packaging [][].


Misc news & getting in touch

On our mailing list Rebecca N. Palmer started a thread titled Addresses in IPython output which points out and attempts to find a solution to a problem with Python packages, whereby objects that don’t have an explicit string representation have a default one that includes their memory address. This causes problems with reproducible builds if/when such output appears in generated documentation.

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:



This month’s report was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Jelle van der Waa, Mattia Rizzolo and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

05 October, 2019 08:43PM

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Setting a Lotus Pot

Experiences setting up a Lotus Pond Pot

A novice’s first-time experience setting up a Lotus pond and germinating Lotus seeds into a full plant

The trigger

Our neighbors have a very nice Lotus setup in the front of their garden, with flowers blooming in it. It is a really pleasing sight. With lifestyles limited to specifics, I’m glad to be around like-minded people. So we decided to set up a Lotus Pond Pot in our garden too.

Hunting the pot

About two-thirds of our garden has been laid out with Mexican Grass, with the exception of some odd spots where there are 2 large concrete tanks in the ground and other small tanks. The larger one measures around 3.5 x 3.5 ft. We wanted to make use of that spot. I checked out some of the available pots and found a good-looking circular pot carved in stone, of around 2 ft, but it was quite expensive and didn’t fit my budget.

With the stone pot out of my budget, the other options left out were

  • One molded in cement
  • Granite

We looked at another pot maker who made pots from cement molds. They had some small pot samples of around 1 x 1 ft, but the finished product samples didn’t look very good. From the available samples, we weren’t sure how well the pot we wanted would turn out. Also, the price difference wasn’t proportionate, and the vendor would take a month’s time to make one. With no ready-made sample of that size in place, this was something we weren’t very enthused to explore.

We instead chose to explore the possibility of building one with granite slabs. The first reason was that granite vendors were more easily available. But second, and most important, given the spot I had chosen, I felt a square granite-based setup would be an equally good fit. Also, the granite-based pot was budget-friendly. Finally, we settled on a 3 x 3 ft granite pot.

And this is what our initial Lotus Pot looked like.

Granite Lotus Pot

Note: The granite pot was quite heavy, close to 200 kilograms. It took 3 people to place it at the designated spot.

Actual Lotus pot

As you can see from the picture above, there’s another pot inside the larger granite pot itself. Lotuses grow in water and sludge, so we needed to provide the same setup. We used one of our Biryani Pots to hold the sludge. This is where the Lotus’s roots and tuber will live and grow, inside the water and sludge.

With an open pot, when you have water stored, you have other challenges to take care of.

  • Aeration of the water
  • Mosquitoes

We bought a solar water fountain to take care of the water. It is a nice device that works very well under sunlight. Remember, for your Lotus, you need a well sun-lit area. So, the solar water fountain was a perfect fit for this scenario.

But with standing water come mosquitoes. As most would do, we chose to put some fish into the pond to take care of that problem. We put in some Mollies, which we hope will breed. Also, the waste they produce should work as a supplemental fertilizer for the Lotus plant.

Aeration is very important for the fish and so far our Solar Water Fountain seems to be doing a good job there, keeping the fishes happy and healthy.

So in around a week, we had the Lotus Pot in place along with a water fountain and some fish.

The Lotus Plant itself

With the setup in place, it was now time to head to the nearby nursery and get the Lotus plant. It was an utter surprise to us to not find the Lotus plant in any of the nurseries. After quite a lot of searching around, we came across one nursery that had some lotus plants. They didn’t look in good health but that was the only option we had. But the bigger disappointment was the price, which was insanely high for a plant. We returned back home without the Lotus, a little disheartened.

Thank you Internet

The internet has connected the world so well. I looked it up and was delighted to see so many people sharing their experiences with Lotus, in the form of articles and YouTube videos. You can find all the necessary information there. With that, we were all charged up to venture into the next step: growing the Lotus from seed instead.

The Lotus seeds

First, we looked up the internet and placed an order for 15 lotus seeds. Soon, they were delivered. And this is what Lotus seeds look like.

Lotus Seeds

I had never in my life seen lotus seeds before. The shell is very, very hard. An impressive remnant of the Lotus plant.

Germinating the seed

Germinating the seed is an experience of its own, given how hard the Lotus’s shell is. There are very good articles and videos on the internet explaining the steps to germinate a seed. In a gist: you need to scratch the pointy end of the seed’s shell enough to see the inner membrane, and then submerge it in water for around 7-10 days. Every day, you’ll be able to witness the germination progress.

Lotus seed with the pointy end

Make sure to scratch the pointy end well enough, while also ensuring not to damage the inner membrane of the seed. You also need to change the water in the container on a daily basis and keep it in a decently lit area.

Here’s an interesting bit, specific to my own experience. It turned out that the seeds we purchased online weren’t of good quality. Except for one, none of the other seeds germinated. And the one that did didn’t sprout properly: it popped a single shoot, but the shoot did not grow much. In all, it didn’t work.

But while we were waiting for the seeds to be delivered, my wife looked at the pictures of the seeds I was studying online and realized that we already had Lotus seeds at home. It turns out these seeds are used in our Hindu Puja Rituals; they are called कमल गट्टा (kamal gatta) in Hindi. We had some of the seeds remaining from a puja, so in parallel we had used those seeds for germination too, and they sprouted out very well.

Unfortunately, I had not taken any pictures of them during the germination phase. Update Sat 12 Oct 2019: see the pictures at the end of this post.

The sprouting phase should be done in a tall glass. This allows the shoots to grow long, as eventually they need to be set into the actual pot. During the germination phase, the shoot grows around an inch or so every day, ultimately aiming to reach the surface of the water. Once it reaches the surface, it will eventually start developing roots at the base.

Now it is time to prepare to sow the seed into the sub-pot which holds the sludge.

Sowing your seeds

Initial Lotus growing in the sludge

This picture is from a couple of days after I sowed the seeds. When you transfer the shoots into the sludge, press your finger into the sludge to make room for the seed, then gently place the seed in. You can also cover the sub-pot with some gravel, just to keep the sludge intact and clean.

Once your shoots get accustomed to the new environment, they start to grow again. That is what you see in the picture above, where the shoot reaches the surface of the water and then starts developing into the beautiful Lotus leaf.

Lotus leaf

It is a pleasure to see the Lotus leaves floating on the water. The flowering is going to take well over a year, from what I have read on the internet. But the Lotus leaves, their veins and the water droplets on them are, even now, very soothing to watch.

Lotus Closeup

Lotus leaf veins

Lotus leaf veins and water droplets

Final Result

As of today, this is what the final setup looks like. Hopefully, in a year’s time, there’ll be flowers.

Lotus pot setup now

Update Sat 12 Oct 2019

So I was able to capture some pictures from the Lotus pot, wherein you can see the Lotus shoot while inside the water, then the Lotus shoot as it reaches the surface, and finally the Lotus shoot developing its leaf.

Lotus Shoot Inside Water

Lotus Shoot Reaching Surface

Lotus Shoot Developing Leaf

05 October, 2019 04:26PM by Ritesh Raj Sarraf (rrs@researchut.com)

October 04, 2019

John Goerzen

Resurrecting Ancient Operating Systems on Debian, Raspberry Pi, and Docker

I wrote recently about my son playing Zork on a serial terminal hooked up to a PDP-11, and how I eventually bought a vt420 (ok, some vt420s and vt510s, I couldn’t stop at one) and hooked it up to a Raspberry Pi.

This led me down another path: there is a whole set of hardware and software that I’ve never used. For some, it fell out of favor before I could read (and for others, before I was even born).

The thing is – so many of these old systems have a legacy that we live in today. So much so, in fact, that we are now seeing articles about how modern CPUs are fast PDP-11 emulators in a sense. The PDP-11, and its close association with early Unix, lives on in the sense that its design influenced microprocessors and operating systems to this day. The DEC vt100 terminal is, nowadays, known far better as that thing that is emulated, but it was, in fact, a physical thing. Some of this goes back to even mistier times; Emacs, for instance, grew out of the MIT ITS project but was later ported to TOPS-20 before being associated with Unix. vi grew up in 2BSD, and according to wikipedia, was so large it could barely fit in the memory of a PDP-11/70. Also in 2BSD, a buggy version of Zork appeared — so buggy, in fact, that the save game option was broken. All of this happened in the late 70s.

When we think about the major developments in computing, we often hear of companies like IBM, Microsoft, and Apple. Of course their contributions are undeniable, and emulators for old versions of DOS are easily available for every major operating system, plus many phones and tablets. But as the world is moving heavily towards Unix-based systems, the Unix heritage is far more difficult to access.

My plan with purchasing and setting up an old vt420 wasn’t just to do that and then leave. It was to make it useful for modern software, and also to run some of these old systems under emulation.

To that end, I have released my vintage computing collection – both a script for setting up on a system, and a docker image. You can run Emacs and TECO on TOPS-20, zork and vi on 2BSD, even Unix versions 5, 6, and 7 on a PDP-11. And for something particularly rare, RDOS on a Data General Nova. I threw in some old software compiled for modern systems: Zork, Colossal Cave, and Gopher among them. The bsdgames collection and some others are included as well.

I hope you enjoy playing with the emulated big-iron systems of the 70s and 80s. And in a dramatic turnabout of scale and cost, those machines which used to cost hundreds of thousands of dollars can now be run far faster under emulation on a $35 Raspberry Pi.

04 October, 2019 10:09PM by John Goerzen

hackergotchi for Ben Hutchings

Ben Hutchings

Kernel Recipes 2019, part 2

This conference only has a single track, so I attended almost all the talks. This time I didn't take notes but I've summarised all the talks I attended. This is the second and last part of that; see part 1 if you missed it.

XDP closer integration with network stack

Speaker: Jesper Dangaard Brouer

Details and slides: https://kernel-recipes.org/en/2019/xdp-closer-integration-with-network-stack/

Video: Youtube

The speaker introduced XDP and how it can improve network performance.

The Linux network stack is extremely flexible and configurable, but this comes at some performance cost. The kernel has to generate a lot of metadata about every packet and check many different control hooks while handling it.

The eXpress Data Path (XDP) was introduced a few years ago to provide a standard API for doing some receive packet handling earlier, in a driver or in hardware (where possible). XDP rules can drop unwanted packets, forward them, pass them directly to user-space, or allow them to continue through the network stack as normal.

He went on to talk about how recent and proposed future extensions to XDP allow re-using parts of the standard network stack selectively.

This talk was supposed to be meant for kernel developers in general, but I don't think it would be understandable without some prior knowledge of the Linux network stack.

Faster IO through io_uring

Speaker: Jens Axboe

Details and slides: https://kernel-recipes.org/en/2019/talks/faster-io-through-io_uring/

Video: Youtube. (This is part way through the talk, but the earlier part is missing audio.)

The normal APIs for file I/O, such as read() and write(), are blocking, i.e. they make the calling thread sleep until I/O is complete. There is a separate kernel API and library for asynchronous I/O (AIO), but it is very restricted; in particular it only supports direct (uncached) I/O. It also requires two system calls per operation, whereas blocking I/O only requires one.

Recently the io_uring API was introduced as an entirely new API for asynchronous I/O. It uses ring buffers, similar to hardware DMA rings, to communicate operations and completion status between user-space and the kernel, which is far more efficient. It also removes most of the restrictions of the current AIO API.

The speaker went into the details of this API and showed performance comparisons.

The Next Steps toward Software Freedom for Linux

Speaker: Bradley Kuhn

Details: https://kernel-recipes.org/en/2019/talks/the-next-steps-toward-software-freedom-for-linux/

Slides: http://ebb.org/bkuhn/talks/Kernel-Recipes-2019/kernel-recipes.html

Video: Youtube

The speaker talked about the importance of the GNU GPL to the development of Linux, in particular the ability of individual developers to get complete source code and to modify it to their local needs.

He described how, for a large proportion of devices running Linux, the complete source for the kernel is not made available, even though this is required by the GPL. So there is a need for GPL enforcement—demanding full sources from distributors of Linux and other works covered by GPL, and if necessary suing to obtain them. This is one of the activities of his employer, Software Freedom Conservancy, and has been carried out by others, particularly Harald Welte.

In one notable case, the Linksys WRT54G, the release of source after a lawsuit led to the creation of the OpenWRT project. This is still going many years later and supports a wide range of networking devices. He proposed that the Conservancy's enforcement activity should, in the short term, concentrate on a particular class of device where there would likely be interest in creating a similar project.

Suricata and XDP

Speaker: Eric Leblond

Details and slides: https://kernel-recipes.org/en/2019/talks/suricata-and-xdp/

Video: Youtube

The speaker described briefly how an Intrusion Detection System (IDS) interfaces to a network, and why it's important to be able to receive and inspect all relevant packets.

He then described how the Suricata IDS uses eXpress Data Path (XDP, explained in an earlier talk) to filter and direct packets, improving its ability to handle very high packet rates.

CVEs are dead, long live the CVE!

Speaker: Greg Kroah-Hartman

Details and slides: https://kernel-recipes.org/en/2019/talks/cves-are-dead-long-live-the-cve/

Video: Youtube

Common Vulnerabilities and Exposures Identifiers (CVE IDs) are a standard, compact way to refer to specific software and hardware security flaws.

The speaker explained problems with the way CVE IDs are currently assigned and described, including assignments for bugs that don't impact security, lack of assignment for many bugs that do, incorrect severity scores, and missing information about the changes required to fix the issue. (My work on CIP's kernel CVE tracker addresses some of these problems.)

The average time between assignment of a CVE ID and a fix being published is apparently negative for the kernel, because most such IDs are being assigned retrospectively.

He proposed to replace CVE IDs with "change IDs" (i.e. abbreviated git commit hashes) identifying bug fixes.

Driving the industry toward upstream first

Speaker: Enric Balletbo i Serra

Details and slides: https://kernel-recipes.org/en/2019/talks/driving-the-industry-toward-upstream-first/

Video: Youtube

The speaker talked about how the Chrome OS developers have tried to reduce the difference between the kernels running on Chromebooks, and the upstream kernel versions they are based on. This has succeeded to the point that it is possible to run a current mainline kernel on at least some Chromebooks (which he demonstrated).

Formal modeling made easy

Speaker: Daniel Bristot de Oliveira

Details and slides: https://kernel-recipes.org/en/2019/talks/formal-modeling-made-easy/

Video: Youtube

The speaker explained how formal modelling of (parts of) the kernel could be valuable. A formal model will describe how some part of the kernel works, in a way that can be analysed and proven to have certain properties. It is also necessary to verify that the model actually matches the kernel's implementation.

He explained the methodology he used for modelling the real-time scheduler provided by the PREEMPT_RT patch set. The model used a number of finite state machines (automata), with conditions on state transitions that could refer to other state machines. He added (I think) tracepoints for all state transitions in the actual code and a kernel module that verified that at each such transition the model's conditions were met.

In the process of this he found a number of bugs in the scheduler.
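The report above describes the approach only in prose, so here is a deliberately tiny illustration of the core idea in Python: express the model as an automaton (a table of allowed transitions) and check an observed trace of events against it. The state and event names below are invented for the example and are not the actual PREEMPT_RT model.

# Toy verifier: check a trace of events against a small automaton.
# States and events are made up; the real models describe PREEMPT_RT scheduling.
TRANSITIONS = {
    ("runnable", "schedule_in"): "running",
    ("running", "schedule_out"): "runnable",
    ("running", "block"): "sleeping",
    ("sleeping", "wakeup"): "runnable",
}

def verify(trace, start="runnable"):
    state = start
    for event in trace:
        nxt = TRANSITIONS.get((state, event))
        if nxt is None:
            raise ValueError(f"model violation: {event!r} not allowed in state {state!r}")
        state = nxt
    return state

# A valid trace returns the final state; an invalid one raises, which is
# roughly what the in-kernel verifier does when code and model disagree.
print(verify(["schedule_in", "block", "wakeup", "schedule_in", "schedule_out"]))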

Kernel documentation: past, present, and future

Speaker: Jonathan Corbet

Details and slides: https://kernel-recipes.org/en/2019/kernel-documentation-past-present-and-future/

Video: Youtube

The speaker is the maintainer of the Linux kernel's in-tree documentation. He spoke about how the documentation has been reorganised and reformatted in the past few years, and what work is still to be done.

GNU poke, an extensible editor for structured binary data

Speaker: Jose E Marchesi

Details and slides: https://kernel-recipes.org/en/2019/talks/gnu-poke-an-extensible-editor-for-structured-binary-data/

Video: Youtube

The speaker introduced and demonstrated his project, the "poke" binary editor, which he thinks is approaching a first release. It has a fairly powerful and expressive language which is used for both interactive commands and scripts. Type definitions are somewhat C-like, but poke adds constraints, offset/size types with units, and types of arbitrary bit width.

The expected usage seems to be that you write a script ("pickle") that defines the structure of a binary file format, use poke interactively or through another script to map the structures onto a specific file, and then read or edit specific fields in the file.

04 October, 2019 01:30PM

hackergotchi for Norbert Preining

Norbert Preining

TensorFlow 2.0 with GPU on Debian/sid

Some time ago I wrote about how to get TensorFlow (1.x) running on Debian/sid. It turned out that those instructions aren’t correct anymore and need an update, so here it is: getting the most up-to-date TensorFlow 2.0 with nVidia GPU support running on Debian/sid.

Step 1: Install CUDA 10.0

Follow more or less the instructions here and do

wget -O- https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub | sudo tee /etc/apt/trusted.gpg.d/nvidia-cuda.asc
echo "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /" | sudo tee /etc/apt/sources.list.d/nvidia-cuda.list
sudo apt-get update
sudo apt-get install cuda-libraries-10-0

Warning! Don’t install the 10-1 version since the TensorFlow binaries need 10.0.

This will install lots of libs into /usr/local/cuda-10.0 and add the respective directory to the ld.so path by creating a file /etc/ld.so.conf.d/cuda-10-0.conf.

Step 2: Install CUDA 10.0 CuDNN

One dependency that is difficult to satisfy is the cuDNN libraries; in our case we need the version 7 library for CUDA 10.0. To download these files one needs an NVIDIA developer account, which is quick and painless to create. After that, go to the cuDNN page, select Download cuDNN v7.N.N (xxxx NN, YYYY), for CUDA 10.0, and then cuDNN Runtime Library for Ubuntu18.04 (Deb).

At the moment (as of this writing) this will download the file libcudnn7_7.6.4.38-1+cuda10.0_amd64.deb, which needs to be installed with dpkg -i libcudnn7_7.6.4.38-1+cuda10.0_amd64.deb.
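Before moving on to TensorFlow itself, it can be worth checking that the dynamic linker actually finds both the CUDA 10.0 runtime and cuDNN 7. This little check is my own addition, not part of the original instructions; the sonames are the ones shipped by the packages installed above (if it fails right after installation, running sudo ldconfig first may help):

# Sanity check that the CUDA 10.0 runtime and cuDNN 7 can be loaded.
import ctypes

for lib in ("libcudart.so.10.0", "libcudnn.so.7"):
    try:
        ctypes.CDLL(lib)
        print(lib, "OK")
    except OSError as err:
        print(lib, "NOT FOUND:", err)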

Step 3: Install Tensorflow for GPU

This is the easiest one and can be done as explained on the TensorFlow installation page using

pip3 install --upgrade tensorflow-gpu

This will install several other dependencies, too.

Step 4: Check that everything works

Last but not least, make sure that TensorFlow can be loaded and find your GPU. This can be done with the following one-liner, and in my case gives the following output:

$ python3 -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
....(lots of output)
2019-10-04 17:29:26.020013: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3390 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
tf.Tensor(444.98087, shape=(), dtype=float32)
$
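If you only want to confirm that TensorFlow sees the GPU, without running an actual computation, the following shorter check should also work (my addition, using the tf.config API that TensorFlow 2.0 provides):

import tensorflow as tf

print(tf.__version__)
# Should print a list with one PhysicalDevice entry per usable GPU.
print(tf.config.experimental.list_physical_devices("GPU"))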

I haven’t tried to get R working with the newest TensorFlow/Keras combination, though. Hope the above helps.

04 October, 2019 08:37AM by Norbert Preining

hackergotchi for Matthew Garrett

Matthew Garrett

Investigating the security of Lime scooters

(Note: to be clear, this vulnerability does not exist in the current version of the software on these scooters. Also, this is not the topic of my Kawaiicon talk.)

I've been looking at the security of the Lime escooters. These caught my attention because:
(1) There's a whole bunch of them outside my building, and
(2) I can see them via Bluetooth from my sofa
which, given that I'm extremely lazy, made them more attractive targets than something that would actually require me to leave my home. I did some digging. Limes run Linux and have a single running app that's responsible for scooter management. They have an internal debug port that exposes USB and which, until this happened, ran adb (as root!) over this USB. As a result, there's a fair amount of information available in various places, which made it easier to start figuring out how they work.

The obvious attack surface is Bluetooth (Limes have wifi, but only appear to use it to upload lists of nearby wifi networks, presumably for geolocation if they can't get a GPS fix). Each Lime broadcasts its name as Lime-12345678 where 12345678 is 8 digits of hex. They implement Bluetooth Low Energy and expose a custom service with various attributes. One of these attributes (0x35 on at least some of them) sends Bluetooth traffic to the application processor, which then parses it. This is where things get a little more interesting. The app has a core event loop that can take commands from multiple sources and then makes a decision about which component to dispatch them to. Each command is of the following form:

AT+type,password,time,sequence,data$

where type is one of either ATH, QRY, CMD or DBG. The password is a TOTP derived from the IMEI of the scooter, the time is simply the current date and time of day, the sequence is a monotonically increasing counter and the data is a blob of JSON. The command is terminated with a $ sign. The code is fairly agnostic about where the command came from, which means that you can send the same commands over Bluetooth as you can over the cellular network that the Limes are connected to. Since locking and unlocking is triggered by one of these commands being sent over the network, it ought to be possible to do the same by pushing a command over Bluetooth.
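To make the frame format a little more concrete, here is a rough sketch in Python of how such a command could be assembled. The TOTP derivation from the IMEI and the exact timestamp format aren't spelled out in the post, so the password and time below are placeholders and guesses:

import json
import time

def build_command(cmd_type, password, sequence, payload):
    # cmd_type is ATH, QRY, CMD or DBG; password would be the IMEI-derived
    # TOTP, which is not reproduced here. The timestamp format is a guess.
    assert cmd_type in ("ATH", "QRY", "CMD", "DBG")
    now = time.strftime("%Y-%m-%d %H:%M:%S")
    return f"AT+{cmd_type},{password},{now},{sequence},{json.dumps(payload)}$"

print(build_command("QRY", "<totp-placeholder>", 1, {"example": True}))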

Unfortunately for nefarious individuals, all commands sent over Bluetooth are ignored until an authentication step is performed. The code I looked at had two ways of performing authentication - you could send an authentication token that was derived from the scooter's IMEI and the current time and some other stuff, or you could send a token that was just an HMAC of the IMEI and a static secret. Doing the latter was more appealing, both because it's simpler and because doing so flipped the scooter into manufacturing mode at which point all other command validation was also disabled (bye bye having to generate a TOTP). But how do we get the IMEI? There's actually two approaches:

1) Read it off the sticker that's on the side of the scooter (obvious, uninteresting)
2) Take advantage of how the scooter's Bluetooth name is generated

Remember the 8 digits of hex I mentioned earlier? They're generated by taking the IMEI, encrypting it using DES and a static key (0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88), discarding the first 4 bytes of the output and turning the last 4 bytes into 8 digits of hex. Since we're discarding information, there's no way to immediately reverse the process - but IMEIs for a given manufacturer are all allocated from the same range, so we can just take the entire possible IMEI space for the modem chipset Lime use, encrypt all of them and end up with a mapping of name to IMEI (it turns out this doesn't guarantee that the mapping is unique - for around 0.01%, the same name maps to two different IMEIs). So we now have enough information to generate an authentication token that we can send over Bluetooth, which disables all further authentication and enables us to send further commands to disconnect the scooter from the network (so we can't be tracked) and then unlock and enable the scooter.

(Note: these are actual crimes)
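As a rough sketch of the name derivation described above (and of the brute-force mapping back to IMEIs), here it is in Python using pycryptodome for DES. The post doesn't say how the 15 IMEI digits are packed into the DES input, so the ASCII-plus-zero-padding and the upper-case hex below are purely assumptions on my part:

# Sketch of the Lime-name derivation as described above (pycryptodome needed).
from Crypto.Cipher import DES

STATIC_KEY = bytes([0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88])

def name_for_imei(imei):
    data = imei.encode().ljust(16, b"\0")             # assumed packing/padding
    ct = DES.new(STATIC_KEY, DES.MODE_ECB).encrypt(data)
    return "Lime-" + ct[-4:].hex().upper()            # keep only the last 4 bytes

def build_mapping(prefix, count):
    # Enumerate a (small, hypothetical) slice of the IMEI space and invert it;
    # collisions end up as lists, matching the ~0.01% ambiguity mentioned above.
    mapping = {}
    for i in range(count):
        imei = prefix + str(i).zfill(15 - len(prefix))
        mapping.setdefault(name_for_imei(imei), []).append(imei)
    return mapping

print(name_for_imei("490154203237518"))               # documentation example IMEI, not a real scooter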

This all seemed very exciting, but then a shock twist occurred - earlier this year, Lime updated their authentication method and now there's actual asymmetric cryptography involved and you'd need to engage in rather more actual crimes to obtain the key material necessary to authenticate over Bluetooth, and all of this research becomes much less interesting other than as an example of how other companies probably shouldn't do it.

In any case, congratulations to Lime on actually implementing security!

comment count unavailable comments

04 October, 2019 06:04AM

October 03, 2019

hackergotchi for Joey Hess

Joey Hess

Project 62 Valencia Floor Lamp review

From Target, this brass finish floor lamp evokes 60's modernism, updated for the mid-Anthropocene with a touch plate switch.

The integrated microcontroller consumes a mere 2.2 watts while the lamp is turned off, in order to allow you to turn the lamp on with a stylish flick. With a 5 watt LED bulb (sold separately), the lamp will total a mere 7.2 watts while on, making it extremely energy efficient. While off, the lamp consumes a mere 19 kilowatt-hours per year.

[Photo: clamp multimeter reading 0.02 amps AC, connected to a small circuit board with a yellow capacitor, a coil, and a heat-sinked IC]

[Photo: lamp from rear; a small round rocker switch has been added to the top of its half-globe shade]

Though the lamp shade at first appears perhaps flimsy, while you are drilling a hole in it to add a physical switch, you will discover metal, though not brass all the way through. Indeed, this lamp should last for generations, should the planet continue to support human life for that long.

As an additional bonus, the small plastic project box that comes free in this lamp will delight any electrical enthusiast. As will the approximately 1 hour conversion process to delete the touch switch phantom load. The 2 cubic feet of styrofoam packaging is less delightful.

Two allen screws attach the pole to the base; one was missing in my lamp. Also, while the base is very heavily weighted, the lamp still rocks a bit when using the aftermarket switch. So I am forced to give it a mere 4 out of 5 stars.

[Photo: front view of lit lamp beside a bookcase]

03 October, 2019 08:30PM

Molly de Blanc

Free software activities (September 2019)

September marked the end of summer and the end of my summer travel.  Paid and non-paid activities focused on catching up with things I fell behind on while traveling. Towards the middle of September, the world of FOSS blew up, and then blew up again, and then blew up again.

[Photo: a river with the New York skyline in the background]

Free software activities: Personal

  • I caught up on some Debian Community Team emails I’ve been behind on. The CT is in search of new team members. If you think you might be interested in joining, please contact us.
  • After much deliberation, the OSI decided to appoint two directors to the board. We will decide who they will be in October, and are welcoming nominations.
  • On that note, the OSI had a board meeting.
  • Wrote a blog post on rights and freedoms to create a shared vocabulary for future writing concerning user rights. I also wrote a bit about leadership in free software.
  • I gave out a few pep talks. If you need a pep talk, hmu.

Free software activities: Professional

  • Wrote and published the September Friends of GNOME Update.
  • Interviewed Sammy Fung for the GNOME Engagement Blog.
  • Did a lot of behind the scenes work for GNOME, that you will hopefully see more of soon!
  • I spent a lot of time fighting with CiviCRM.
  • I attended GitLab Commit on behalf of GNOME, to discuss how we implement GitLab.

 

03 October, 2019 04:49PM by mollydb

Thorsten Alteholz

My Debian Activities in September 2019

FTP master

This month I accepted 246 packages and rejected 28. The overall number of packages that got accepted was 303.

Debian LTS

This was my sixty-third month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 23.75 hours. During that time I did LTS uploads of:

  • [DLA 1911-1] exim4 security update for one CVE
  • [DLA 1936-1] cups security update for one CVE
  • [DLA 1935-1] e2fsprogs security update for one CVE
  • [DLA 1934-1] cimg security update for 8 CVEs
  • [DLA 1939-1] poppler security update for 3 CVEs

I also started to work on opendmarc and spip but did not finish testing yet.
Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the sixteenth ELTS month.

During my allocated time I uploaded:

  • ELA-160-1 of exim4
  • ELA-166-1 of libpng
  • ELA-167-1 of cups
  • ELA-169-1 of openldap
  • ELA-170-1 of e2fsprogs

I also did some days of frontdesk duties.

Other stuff

This month I uploaded new packages of …

I also uploaded new upstream versions of …

I improved packaging of …

On my Go challenge I uploaded golang-github-rivo-uniseg, golang-github-bruth-assert, golang-github-xlab-handysort, golang-github-paypal-gatt.

I also sponsored the following packages: golang-gopkg-libgit2-git2go.v28.

03 October, 2019 03:08PM by alteholz

Birger Schacht

Installing and running Signal on Tails

Because the topic comes up every now and then, I thought I’d write down how to install and run Signal on Tails. These instructions are based on the 2nd Beta of Tails 4.0 - the 4.0 release is scheduled for October 22nd. I’m not sure if these steps also work on Tails 3.x; I seem to remember having some problems installing flatpaks on Debian Stretch.

The first thing to do is to enable the Additional Software feature of Tails persistence (the Personal Data feature is also required, but that one is enabled by default when configuring persistence). Don’t forget to reboot afterwards. When logging in after the reboot, please set an Administration Password.

My approach to running Signal on Tails uses flatpak, so install flatpak either via Synaptic or via the command line:

sudo apt install flatpak

Tails then asks if you want to add flatpak to your additional software and I recommend doing so. The list of additional software can be checked via Applications → System Tools → Additional Software. The next thing you need to do is set up the directories: flatpak installs the software packages either system-wide in $prefix/var/lib/flatpak/[1] or per user in $HOME/.local/share/flatpak/ (the latter lets you manage your flatpaks without having to use elevated permissions). User-specific data of the apps goes into $HOME/.var/app. This means we have to create directories in our Persistent folder for those two locations and then link them to their targets in /home/amnesia.

I recommend putting these commands into a script (i.e. /home/amnesia/Persistent/flatpak-setup.sh) and making it executable (chmod +x /home/amnesia/Persistent/flatpak-setup.sh):

#!/bin/sh

mkdir -p /home/amnesia/Persistent/flatpak
mkdir -p /home/amnesia/.local/share
ln -s /home/amnesia/Persistent/flatpak /home/amnesia/.local/share/flatpak
mkdir -p /home/amnesia/Persistent/app
mkdir -p /home/amnesia/.var
ln -s /home/amnesia/Persistent/app /home/amnesia/.var/app

Now you need to add a flatpak remote and install signal:

amnesia@amnesia:~$ torify flatpak remote-add --user --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
amnesia@amnesia:~$ torify flatpak install flathub org.signal.Signal

This will take a couple of minutes.

To show Signal the way to the next whiskey bar through Tor, the HTTP_PROXY and HTTPS_PROXY environment variables have to be set. Again I recommend putting this into a script (i.e. /home/amnesia/Persistent/signal.sh):

#!/bin/sh

export HTTP_PROXY=socks://127.0.0.1:9050
export HTTPS_PROXY=socks://127.0.0.1:9050
flatpak run org.signal.Signal

[Screenshot: Signal running on Tails 4]

Yay, it works!

To update Signal you have to run

amnesia@amnesia:~$ torify flatpak update

To make the whole thing a bit more comfortable, the folder symlinks can be created automatically on login using a Gnome autostart script. For that to work you have to have the Dotfiles feature of Tails enabled. Then you can create a /live/persistence/TailsData_unlocked/dotfiles/.config/autostart/FlatpakSetup.desktop file:

[Desktop Entry]
Name=FlatpakSetup
GenericName=Setup Flatpak on Tails
Comment=This script runs the flatpak-setup.sh script on start of the user session
Exec=/live/persistence/TailsData_unlocked/Persistent/flatpak-setup.sh
Terminal=false
Type=Application

By adding a /live/persistence/TailsData_unlocked/dotfiles/.local/share/applications/Signal.desktop file to the dotfiles folder, Signal also shows up among the Gnome applications with a nice Signal icon:

[Desktop Entry]
Name=Signal
GenericName=Signal Desktop Messenger
Exec=/home/amnesia/Persistent/signal.sh
Terminal=false
Type=Application
Icon=/home/amnesia/.local/share/flatpak/app/org.signal.Signal/current/active/files/share/icons/hicolor/128x128/apps/org.signal.Signal.png

[Screenshot: the Signal application icon on Tails 4]


  1. It is also possible to configure additional system-wide installation locations; details are documented in flatpak-installation(5).

03 October, 2019 08:52AM