February 22, 2018

Jonathan Dowland

A Nice looking Blog

I stumbled across this rather nicely-formatted blog by Alex Beal and thought I'd share it. It's a particular kind of minimalist style that I like, because it puts the content first. It reminds me of Mark Pilgrim's old blog.

I can't remember which post in particular I came across first, but the one that I thought I would share was this remarkably detailed personal research project on tracking mood.

That would have been the end of it, but I then stumbled across this great review of "Type Driven Development with Idris", a book by Edwin Brady. I bought this book during the Christmas break but I haven't had much of a chance to deep dive into it yet.

Coincidentally I am hoping to invite Dr. Brady to present a Colloquium at my School. This all reminds me that I've written nothing about my PhD on this blog yet. I will hopefully address that very soon!

22 February, 2018 02:38PM

Russell Coker

Dell PowerEdge T30

I just did a Debian install on a Dell PowerEdge T30 for a client. The Dell web site is a bit broken at the moment; it didn’t list the price of that server or give useful specs when I was ordering it. I was under the impression that the server was limited to 8G of RAM. That’s unusually small, but it wouldn’t be the first time a vendor crippled a low-end model to drive sales of more expensive systems. It turned out that the T30 model I got has 4*DDR4 sockets with only one used for an 8G DIMM; it apparently can handle up to 64G of RAM.

It has space for 4*3.5″ SATA disks but only 4*SATA connectors on the motherboard, one of which is needed for the DVD drive. As I never use the DVD in a server this isn’t a problem for me, but if you want 4 disks and a DVD then you need to buy a PCI or PCIe SATA card.

Compared to the PowerEdge T130 I’m using at home the new T30 is slightly shorter and thinner while seeming to have more space inside. This is partly due to better design and partly due to having 2 hard drives in the top near the DVD drive which are a little inconvenient to get to. The T130 I have (which isn’t the latest model) has 4*3.5″ SATA drive bays at the bottom which are very convenient for swapping disks.

It has two PCIe*16 slots (one of which is apparently quad speed), one shorter PCIe slot, and a PCI slot. For a cheap server a PCI slot is a nice feature: it means I can use an old PCI Ethernet card instead of buying a PCIe Ethernet card. The T30 cost $1002, so using an old Ethernet card saved 1% of the overall cost.

The T30 seems designed to be more of a workstation or personal server than a straight server. The previous iterations of the low-end tower servers from Dell didn’t have built-in sound and had PCIe slots that were adequate for a RAID controller but vastly inadequate for video. This one has built-in line in and out for audio and has two DisplayPort connectors on the motherboard (presumably for dual-head support). Apart from the CPU (an E3-1225, which is slower than some systems people are throwing out nowadays) it would make a decent gaming system.

It has lots of USB ports, which is handy for a file server: I can attach lots of backup devices. Also most of the ports support “super speed”; I haven’t yet tested out USB devices that support such speeds but I’m looking forward to it. It’s a pity that there are no USB-C ports.

One deficiency of the T30 is the lack of a VGA port. It has one HDMI and two DisplayPort sockets on the motherboard, which is really great for a system on or under your desk: any monitor you would want on your desk will support at least one of those interfaces. But in a server room you tend to have an old VGA monitor that’s there because no-one wants it on their desk. Not supporting VGA may force people to buy a $200 monitor for their server room, which increases the effective cost of the system by 20%. It has a PC serial port on the motherboard, which is a nice server feature, but that doesn’t make up for the lack of VGA.

The BIOS configuration has an option displayed for enabling charging devices from USB sockets when a laptop is in sleep mode. It’s disappointing that they didn’t either make a BIOS build for non-laptop hardware or have the BIOS detect at run-time that it’s not running on a laptop and hide that option.

Conclusion

The PowerEdge T30 is a nice low-end workstation. If you want a system with ECC RAM because you need it to be reliable and you don’t need the greatest performance then it will do very well. It has Intel video on the motherboard with HDMI and DisplayPort connectors; this won’t be the fastest video but should do for most workstation tasks. It has a PCIe*16 quad speed slot in case you want to install a really fast video card. The CPU is slow by today’s standards, but Dell sells plenty of tower systems that support faster CPUs.

It’s nice that it has a serial port on the motherboard. That could be used for a serial console or could be used to talk to a UPS or other server-room equipment. But that doesn’t make up for the lack of VGA support IMHO.

One could say that a tower system is designed to be a desktop or desk-side system, not to be run in any sort of server room. However it is cheaper than any rack-mounted system from Dell, so it will be deployed in lots of small businesses that have one server for everything – I will probably install them in several other small businesses this year. Also tower servers do end up being deployed in server rooms: all it takes is a small business moving to a serviced office that has a proper server room, and the old tower servers end up in a rack.

Rack vs Tower

One reason for small businesses to use tower servers when rack servers are more appropriate is the issue of noise. If your “server room” is the room that has your printer and fax then it typically won’t have a door, and you just can’t have the noise of a rack-mounted server in there. 1RU systems are inherently noisy because the small diameter of the fans means that they have to spin fast. 2RU systems can be made relatively quiet if you don’t have high-end CPUs, but no-one seems to be trying to do that.

I think it would be nice if a company like Dell sold low-end servers in a rack-mount form-factor (19 inches wide and 2RU high) that were designed to be relatively quiet. Then, instead of starting with a tower server and ending up with tower systems in racks, a small business could start with a 19 inch wide system on a shelf that gets bolted into a rack if they move into a better office. Any laptop CPU from the last 10 years is capable of running a file server with 8 disks in a ZFS array. Any modern laptop CPU is capable of running a file server with 8 SSDs in a ZFS array. This wouldn’t be difficult to design.

22 February, 2018 02:06PM by etbe

February 21, 2018

Renata D'Avila

How to use the EventCalendar ical

Hello!

If you follow this blog, you should probably know by now that I have been working with my mentors to contribute to the MoinMoin EventCalendar macro, adding the possibility to export the events' data to an icalendar file.

A screenshot of the code, with the function definition for creating the ical file from events from the macro

The code (which can be found in this GitHub repository) isn't quite ready yet, because I'm still working to convert the recurrence rule to the icalendar format, but other than that, it should be working. Hopefully.
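
For readers who haven't used it, the conversion itself can be done with the python icalendar library. Below is a minimal sketch of the idea, not the actual macro code; the event data is made up for illustration and the recurrence handling is left out, since that is the part still in progress:

from datetime import datetime
from icalendar import Calendar, Event

# Hypothetical event data; the real macro collects these from the wiki page.
events = [
    {'title': 'Team meeting',
     'start': datetime(2018, 3, 1, 14, 0),
     'end': datetime(2018, 3, 1, 15, 0)},
]

cal = Calendar()
cal.add('prodid', '-//EventCalendar sketch//')
cal.add('version', '2.0')

for ev in events:
    entry = Event()
    entry.add('summary', ev['title'])
    entry.add('dtstart', ev['start'])
    entry.add('dtend', ev['end'])
    cal.add_component(entry)

# to_ical() returns the bytes that would end up in the .ics attachment.
ics_data = cal.to_ical()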

The icalendar file is now generated as an attachment the moment the macro is loaded. I created an "ical" link at the bottom of the calendar. When activated, this link prompts the download of the ical attachment of the page. Since it is an attachment, it is still possible to just view the ical file using the "attachment" menu if the user wishes to do so.

Wiki page showing the calendar, with the 'ical' link at the bottom

There are two ways of importing this calendar on Thunderbird. The first one is to download the file by clicking on the link and then proceeding to import it manually to Thunderbird.

Thunderbird screenshot, with the menus "Events and Tasks" and "Import" selected

The second option is to "Create a new calendar / On the network" and to use the URL address from the ical link as the "location", as it is shown below:

Thunderbird screenshot, showing the new calendar dialog and the ical URL pasted into the "location" textboxd

As usual, it's possible to customize the name for the calendar, the color for the events and such...

Thunderbird screenshot, showing the new calendar with its events

I noticed a few Wikis that use the EventCalendar, such as the Debian wiki itself and the FSFE wiki. The Python wiki also seems to be using MoinMoin and EventCalendar, but it seems that they use a Google service to export the event data to iCal.

If you read this and are willing to try the code in your wiki and give me feedback, I would really appreciate it. You can find the ways to contact me in my Debian Wiki profile.

21 February, 2018 10:49PM by Renata

Jonathan McDowell

Getting Debian booting on a Lenovo Yoga 720

I recently got a new work laptop, a 13” Yoga 720. It proved difficult to install Debian on; pressing F12 would get a boot menu allowing me to select a USB stick I have EFI GRUB on, but after GRUB loaded the kernel and the initrd it would just sit there never outputting anything else that indicated the kernel was even starting. I found instructions about Ubuntu 17.10 which helped but weren’t the complete picture. What seems to be the situation is that the kernel won’t happily boot if “Legacy Support” is not enabled - enabling this (and still booting as EFI) results in a happier experience. However in order to be able to enable legacy boot you have to switch the SATA controller from RAID to AHCI, which can cause Windows to get unhappy about its boot device going away unless you warn it first.

  • Fire up an admin shell in Windows (right click on the start menu)
  • bcdedit /set safeboot minimal
  • Reboot into the BIOS
  • Change the SATA Controller mode from RAID to AHCI (there are dire warnings that “All data will be erased”. It’s not true, but you’ve backed up first, right?) and set “Boot Mode” to “Legacy Support”.
  • Save changes and let Windows boot to Safe Mode
  • Fire up an admin shell in Windows (right click on the start menu again)
  • bcdedit /deletevalue safeboot
  • Reboot again and Windows will load in normal mode with the AHCI drivers

Additionally I had problems getting the GRUB entry added to the BIOS; efibootmgr shows it fine but it never appears in the BIOS boot list. I ended up using Windows to add it as the primary boot option using the following (<guid> gets replaced with whatever the new “Debian” section guid is):

bcdedit /enum firmware
bcdedit /copy "{bootmgr}" /d "Debian"
bcdedit /set "{<guid>}" path \EFI\Debian\grubx64.efi
bcdedit /set "{fwbootmgr}" displayorder "{<guid>}" /addfirst

Even with that, at one point the BIOS managed to “forget” about the GRUB entry and required me to re-do the final “displayorder” command.

Once you actually have the thing installed and booting it seems fine. I’m running Buster because it’s a Skylake machine with lots of bits that seem to want a newer kernel, but the claimed battery life is impressive, the screen is very shiny (though sometimes a little too shiny and reflective) and the NVMe SSD seems pretty nippy, as you’d expect.

21 February, 2018 09:46PM

MJ Ray

How hard can typing æ, ø and å be?

Petter Reinholdtsen: How hard can æ, ø and å be? comments on the rubbish state of till printers and their mishandling of foreign characters.

Last week, I was trying to type an email, on a tablet, in Dutch. The tablet was running something close to Android and I was using a Bluetooth keyboard, which seemed to be configured correctly for my location in England.

Dutch doesn’t even have many accents. I wanted an e acute (é). If you use the on screen keyboard, this is actually pretty easy, just press and hold e and slide to choose the accented one… but holding e on a Bluetooth keyboard? eeeeeeeeeee!

Some guides suggest Alt and e, then e. Apparently that works, but not on keyboards set to Great British… because, I guess, we don’t want any of that foreign muck since the Brexit vote, or something(!)

Even once I figured out that madness and switched the keyboard back to international, which also enables alt i, u, n and so on to do other accents, I couldn’t find grave, check, breve or several other accents. I managed to send the emails in Dutch but I’d struggle with various other languages.

Have I missed a trick or what are the Android developers thinking? Why isn’t there a Compose key by default? Is there any way to get one?

21 February, 2018 04:14PM by mjr

Sam Hartman

Tools of Love

From my spiritual blog

I have been quiet lately. My life has been filled with gentle happiness, work, and less gentle wedding planning. How do you write about quiet happiness without sounding like the least contemplative aspects of Facebook? How do I share this part of the journey in a way that others can learn from? I was offering thanks the other day and was reminded of one of my early experiences at Fires of Venus. Someone was talking about how they were there working to do the spiritual work they needed in order to achieve their dream of opening a restaurant. I'll admit that when I thought of going to a multi-day retreat focused on spiritual connection to love, opening a restaurant had not been at the forefront of my mind. And yet, this was their dream, and surely dreams are the stuff of love. As they continued, they talked about finding self love deep enough to have the confidence to believe in dreams.



As I recalled this experience, I offered thanks for all the tools I've found to use as a lover. Every time I approach something with joy and awe, I gain new insight into the beauty of the world around us. In my work within the IETF I saw the beauty of the digital world we're working to create. Standing on sacred land, I can find the joy and love of nature and the moment.



I can share the joy I find and offer it to others. I've been mentoring someone at work. They're at a point where they're appreciating some of the great mysteries of computing like “Reflections on Trusting Trust” or two's complement arithmetic. I’ve had the pleasure of watching their moments of discovery and also helping them understand the complex history behind how we’ve built the digital world we have. Each moment of delight reinforces the idea that we live in a world where we expect to find this beauty and connect with it. Each experience reinforces the idea that we live in a world filled with things to love.



And so, I’ve turned even my experiences as a programmer into tools for teaching love and joy. I’ve been learning another new tool lately. I’ve been putting together the dance mix for my wedding. Between that and a project last year, I’ve learned a lot about music. I will never be a professional DJ or song producer. However, I have always found joy in music and dance, and I absolutely can be good enough to share that with my friends. I can be good enough to let music and rhythm be tools I use to tell stories and share joy. In learning skills and improving my ability with music, I better appreciate the music I hear.



The same is true with writing: both my work here and my fiction. I’m busy enough with other things that I am unlikely to even attempt writing as my livelihood. Even so, I have more tools for sharing the love I find and helping people find the love and joy in their world.



These are all just tools. Words and song won’t suddenly bring us all together any more than physical affection and our bodies. However, words, song, and the joy we find in each other and in the world we build can help us find connection and empathy. We can learn to see the love that is there between us. All these tools can help us be vulnerable and open together. And that—the changes we work within ourselves using these tools—can bring us to a path of love. And so how do I write about happiness? I give thanks for the things it allows me to explore. I find value in growing and trying new things. In my best moments, each seems a lens through which I can grow as a lover as I walk Venus’s path.


21 February, 2018 01:00AM

February 20, 2018

Reproducible builds folks

Reproducible Builds: Weekly report #147

Here's what happened in the Reproducible Builds effort between Sunday February 11 and Saturday February 17 2018:

Media coverage

Reproducible work in other projects

Packages reviewed and fixed, and bugs filed

Various previous patches were merged upstream:

Reviews of unreproducible packages

38 package reviews have been added, 27 have been updated and 13 have been removed this week, adding to our knowledge about identified issues.

4 issue types have been added:

One issue type has been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (24)
  • Boyuan Yang (1)
  • Cédric Boutillier (1)
  • Jeremy Bicha (1)
  • Matthias Klose (1)

diffoscope development

  • Chris Lamb:
    • Add support for comparing Berkeley DB files (#890528). This is currently incomplete because the Berkeley DB libraries do not return the same uid/hash reliably (they return “random” memory contents), so we must strip those from the human-readable output.

Website development

Misc.

This week's edition was written by Chris Lamb and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

20 February, 2018 08:56PM

Benjamin Mako Hill

Lookalikes

Hippy/mako lookalikes

Did I forget a period of my life when I grew a horseshoe mustache and dreadlocks, walked around topless, and illustrated this 2009 article in the Economist on the economic boon that hippy festivals represent to rural American communities?


Previous lookalikes are here.

20 February, 2018 07:45PM by Benjamin Mako Hill

Raphaël Hertzog

Time to Join Extended Long Term Support for Debian 7 Wheezy

Debian 7 Wheezy's LTS period ends on May 31st, and some companies have asked Freexian whether they could get security support past this date. Since about half of the current team of paid LTS contributors is willing to continue to provide security updates for Wheezy, I have started to work on making this possible.

I just initiated a discussion on debian-devel with multiple Debian teams to see whether it is possible to continue to use debian.org infrastructure to host the wheezy security updates that would be prepared in this extended LTS period.

From the sponsor side, this extended LTS will not work like the regular LTS. It is unrealistic to continue to support all packages and all architectures, so only the packages/architectures requested by sponsors will be supported. The amount invoiced to each sponsor will be directly related to the package list that they ask us to support. We made an estimation (based on history) of how much it costs to support each package and we split that cost between all the sponsors requesting support for that package. The cost is re-evaluated quarterly and will likely increase over time as sponsors drop their support (for example when they have finished migrating all their machines).

This extended LTS will also have some restrictions in terms of packages that we can support. For instance, we will no longer support the linux kernel from wheezy; you will have to switch to the kernel used in jessie (or maybe we will maintain a backport ourselves in wheezy). It is also not yet clear whether we can support OpenJDK, since upstream support of version 7 stops at the end of June, and switching to OpenJDK 8 is likely non-trivial. There are likely other unsupportable packages too.

Anyway, if your company needs wheezy security support past the end of May, now is the time to worry about it. Please send us a mail with the list of source packages that you would like to see supported. The more companies get involved, the less it will cost to each of them. Our plan is to gather the required data from interested companies in the next few weeks and make a first estimation of the price they will have to pay for the first quarter by mid-March. Then they confirm that they are OK with the offer and we will issue invoices in April so that they can be paid before the end of May.

Note however that we decided that it would not be possible to sponsor extended wheezy support (and thus influence which packages are supported) if you are not among the regular LTS sponsors (at bronze level at least). Extended LTS would not be possible without the regular LTS so if you need the former, you have to support the latter too.


20 February, 2018 04:57PM by Raphaël Hertzog

Michal Čihař

Weblate 2.19.1

Weblate 2.19.1 has been released today. This is a bugfix-only release, mostly to fix the problematic migration from 2.18 which some users have observed.

Full list of changes:

  • Fixed migration issue on upgrade from 2.18.
  • Improved file upload API validation.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, and the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either by commenting or by providing a bounty for them.


20 February, 2018 02:00PM

Nicolas Dandrimont

Listing and loading of Debian repositories: now live on Software Heritage

Software Heritage is the project for which I’ve been working during the past two and a half years now. The grand vision of the project is to build the universal software archive, which will collect, preserve and share the Software Commons.

Today, we’ve announced that Software Heritage is archiving the contents of Debian daily. I’m reposting this article on my blog as it will probably be of interest to readers of Planet Debian.

TL;DR: Software Heritage now archives all source packages of Debian as well as its security archive daily. Everything is ready for archival of other Debian derivatives as well. Keep on reading to get details of the work that made this possible.

History

When we first announced Software Heritage, back in 2016, we had archived the historical contents of Debian as present on the snapshot.debian.org service, as a one-shot proof of concept import.

This code was then left in a drawer and never touched again, until last summer when Sushant came to do an internship with us. We’ve had the opportunity to rework the code that was originally written, and to make it more generic: instead of being tied to the specifics of snapshot.debian.org, the code can now work with any Debian repository, which means that we could now archive any of the numerous Debian derivatives that are available out there.

This has been live for a few months, and you can find Debian package origins in the Software Heritage archive now.

Mapping a Debian repository to Software Heritage

The main challenge in listing and saving Debian source packages in Software Heritage is mapping the content of the repository to the generic source history data model we use for our archive.

Organization of a Debian repository

Before we start looking at a bunch of unpacked Debian source packages, we need to know how a Debian repository is actually organized.

At the top level of a Debian repository lies a set of suites, representing versions of the distribution, that is to say sets of packages that have been tested and are known to work together. For instance, Debian currently has 6 active suites, from wheezy (the “old old stable” version) all the way up to experimental; Ubuntu has 8, from precise (12.04 LTS) up to bionic (the future 18.04 release), as well as a devel suite. Each of those suites also has a bunch of “overlay” suites, such as backports, which are made available in the archive alongside full suites.

Under the suites, there’s another level of subdivision, which Debian calls components, and Ubuntu calls areas. Debian uses its components to segregate packages along licensing terms (main, contrib and non-free), while Ubuntu uses its areas to denote the level of support of the packages (main, universe, multiverse, …).

Finally, components contain source packages, which merge upstream sources with distribution-specific patches, as well as machine-readable instructions on how to build the package.

Organization of the Software Heritage archive

The Software Heritage archive is project-centric rather than version-centric. What this means is that we are interested in keeping the history of what was available in software origins, which can be thought of as a URL of a repository containing software artifacts, tagged with a type representing the means of access to the repository.

For instance, the origin for the GitHub mirror of the Linux kernel repository has the following data:

  • type: git
  • url: https://github.com/torvalds/linux

For each visit of an origin, we take a snapshot of all the branches (and tagged versions) of the project that were visible during that visit, complete with their full history. See for instance one of the latest visits of the Linux kernel. For the specific case of GitHub, pull requests are also visible as virtual branches, so we fetch those as well (as branches named refs/pull/<pull request number>/head).

Bringing them together

As we’ve seen, Debian archives (just like archives for other “traditional” Linux distributions) are release-centric rather than package-centric. Mapping distributions to the Software Heritage archive therefore takes a little bit of gymnastics, to transpose the list of source packages available in each suite into a list of available versions per source package. We do this step by step:

  1. Download the Sources indices for all the suites and components known in the Debian repository
  2. Parse the Sources indices, listing all source packages inside
  3. For each source package, tell the Debian loader to load all the available versions (grouped by name), generating a complete snapshot of the state of the source package across the Debian repository
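
As an illustration of steps 2 and 3, the Sources indices are in the usual Debian control-file format and can be parsed with the python-debian module. This is only a sketch of the idea, not the actual lister code; the file path and the grouping logic are assumptions:

from collections import defaultdict
from debian import deb822

# Map each source package name to all the versions seen in this index.
versions = defaultdict(list)
with open('Sources') as index:  # e.g. an uncompressed dists/stretch/main/source/Sources
    for paragraph in deb822.Sources.iter_paragraphs(index):
        versions[paragraph['Package']].append(paragraph['Version'])

# Each source package can then be handed to the loader with its full version list.
for name, package_versions in sorted(versions.items()):
    print(name, package_versions)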

The source packages are mapped to origins using the following format:

  • type: deb
  • url: deb://<repository name>/packages/<source package name> (e.g. deb://Debian/packages/linux)

We use a repository name rather than the actual URL to a repository so that links can persist even if a given mirror disappears.

Loading Debian source packages

To load Debian source packages into the Software Heritage archive, we have to convert them: Debian-based distributions distribute source packages as a set of files, a dsc (Debian Source Control) and a set of tarballs (usually, an upstream tarball and a Debian-specific overlay). On the other hand, Software Heritage only stores version-control information such as revisions, directories, files.

Unpacking the source packages

Our philosophy at Software Heritage is to store the source code of software in the precise form that allows a developer to start working on it. For Debian source packages, this is the unpacked source code tree, with all patches applied. After checking that the files we have downloaded match the checksums published in the index files, we simply use dpkg-source -x to extract the source package, with patches applied, ready to build. This also means that we currently fail to import packages that don’t extract with the version of dpkg-source available in Debian Stretch.
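
A rough sketch of that extraction step, assuming the .dsc file and the tarballs it references have already been downloaded and checksum-verified (the file names are purely illustrative):

import subprocess

def extract_source_package(dsc_path, dest_dir):
    # dpkg-source -x unpacks the upstream tarball, applies the Debian patches
    # and leaves a ready-to-work-on source tree in dest_dir.
    subprocess.run(['dpkg-source', '-x', dsc_path, dest_dir], check=True)

extract_source_package('glibc_2.24-10.dsc', 'glibc-2.24-10')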

Generating a synthetic revision

After walking the extracted source package tree, computing identifiers for all its contents, we get the identifier of the top-level tree, which we will reference in the synthetic revision.

The synthetic revision contains the “reproducible” metadata that is completely intrinsic to the Debian source package. With the current implementation, this means:

  • the author of the package, and the date of modification, as referenced in the last entry of the source package changelog (referenced as author and committer)
  • the original artifact (i.e. the information about the original source package)
  • basic information about the history of the package (using the parsed changelog)

However, we never set parent revisions in the synthetic commits, for two reasons:

  • there is no guarantee that packages referenced in the changelog have been uploaded to the distribution, or imported by Software Heritage (our update frequency is lower than that of the Debian archive)
  • even if this guarantee existed, and all versions of all packages were available in Software Heritage, there would be no guarantee that the version referenced in the changelog is indeed the version we imported in the first place

This makes the information stored in the synthetic revision fully intrinsic to the source package, and reproducible. In turn, this allows us to keep a cache, mapping the original artifacts to synthetic revision ids, to avoid loading packages again once we have loaded them once.

Storing the snapshot

Finally, we can generate the top-level object in the Software Heritage archive, the snapshot. For instance, you can see the snapshot for the latest visit of the glibc package.

To do so, we generate a list of branches by concatenating the suite, the component, and the version number of each detected source package (e.g. stretch/main/2.24-10 for version 2.24-10 of the glibc package available in stretch/main). We then point each branch to the synthetic revision that was generated when loading the package version.

In case a version of a package fails to load (for instance, if the package version disappeared from the mirror between the moment we listed the distribution, and the moment we could load the package), we still register the branch name, but we make it a “null” pointer.
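
Putting those two paragraphs together, here is a minimal sketch of how the branch list of a snapshot might be assembled; the package data and revision identifiers are placeholders, not real loader output:

def branch_name(suite, component, version):
    # e.g. 'stretch/main/2.24-10' for version 2.24-10 of glibc in stretch/main
    return '{}/{}/{}'.format(suite, component, version)

# Hypothetical results of loading each detected version of a source package;
# the second entry failed to load, so its branch becomes a "null" pointer.
loaded_versions = [
    ('stretch', 'main', '2.24-10', 'synthetic-revision-id-1'),
    ('buster', 'main', '2.26-6', None),
]

snapshot_branches = {
    branch_name(suite, component, version): revision
    for suite, component, version, revision in loaded_versions
}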

There are still some improvements to make to the lister specific to Debian repositories: it currently hardcodes the list of components/areas in the distribution, as the repository format provides no programmatic way of eliciting them. Currently, only Debian and its security repository are listed.

Looking forward

We believe that the model we developed for the Debian use case is generic enough to capture not only Debian-based distributions, but also RPM-based ones such as Fedora, Mageia, etc. With some extra work, it should also be possible to adapt it for language-centric package repositories such as CPAN, PyPI or Crates.

Software Heritage is now well on the way to providing the foundations for a generic and unified source browser for the history of traditional package-based distributions.

We’ll be delighted to welcome contributors that want to lend a hand to get there.

20 February, 2018 01:52PM by olasd

Daniel Pocock

Hacking at EPFL Toastmasters, Lausanne, tonight

As mentioned in my earlier blog post, I am giving a talk about Hacking at the Toastmasters club at EPFL tonight. Please feel free to join us, and remember to turn off your mobile device or leave it at home; you never know when it might ring or become part of a demonstration.

20 February, 2018 11:39AM by Daniel.Pocock

February 19, 2018

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, January 2018

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In January, about 160 work hours have been dispatched among 11 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased slightly to 187 hours per month. It would be nice if the slow growth could continue, as the amount of work seems to be slowly growing too.

The security tracker currently lists 23 packages with a known CVE and the dla-needed.txt file lists 23 as well. The number of open issues seems to be stable compared to last month, which is a good sign.

Thanks to our sponsors

New sponsors are in bold.


19 February, 2018 05:18PM by Raphaël Hertzog

Steve Kemp

How we care for our child

This post is a departure from the regular content, which is supposed to be "Debian and Free Software", but has accidentally turned into a hardware blog recently!

Anyway, we have a child who is now about 14 months old. The way that my wife and I care for him seems logical to us, but often amuses local people. So in the spirit of sharing this is what we do:

  • We divide the day into chunks of time.
  • At any given time one of us is solely responsible for him.
    • The other parent might be nearby, and might help a little.
    • But there is always a designated person who will be changing nappies, feeding, and playing at any given point in the day.
  • The end.

So our weekend routine, covering Saturday and Sunday, looks like this:

  • 07:00-08:00: Husband
  • 08:01-13:00: Wife
  • 13:01-17:00: Husband
  • 17:01-18:00: Wife
  • 18:01-19:30: Husband

Our child, Oiva, seems happy enough with this and he sometimes starts walking from one parent to the other at the appropriate time. But the real benefit is that each of us gets some time off - in my case I get "the morning" off, and my wife gets the afternoon off. We can hide in our bedroom, go shopping, eat cake, or do anything we like.

Week-days are similar, but with the caveat that we both have jobs. I take the morning and the evenings; in exchange, if he wakes up overnight my wife helps him sleep and settle between 8PM and 5AM, and if he wakes up later than 5AM I deal with him.

Most of the time our child sleeps through the night, but if he does wake up it tends to be in the 4:30AM/5AM timeframe. I'm "happy" to wake up at 5AM and stay up until I go to work because I'm a morning person and I tend to go to bed early these days.

Day-care is currently a complex process. There are three families with small children, and ourselves. Each day of the week one family hosts all the children, and the baby-sitter arrives there too (all the families live within a few blocks of each other).

All of the parents go to work, leaving one carer in charge of 4 babies for the day, from 08:15-16:15. On the days when we're hosting the children I greet the carer then go to work - on the days the children are at a different family's house I take him there in the morning, on my way to work, and then my wife collects him in the evening.

At the moment things are a bit terrible because most of the children have been a bit sick, and the carer too. When a single child is sick it's mostly OK, unless it is the child whose family is supposed to be hosting. If that child is sick we have to panic and pick another house for that day.

Unfortunately if the child-carer is sick then everybody is screwed, and one parent has to stay home from each family. I guess this is the downside compared to sending the children to public-daycare.

This is private day-care, Finnish-style. The social services (Kela) will reimburse each family €700/month if you're in such a scheme, and carers are limited to a maximum of 4 children. The net result is that prices are stable, averaging €900-€1000 per child, per month.

(The €700 is refunded after a month or two, so in real terms people like us pay €200-€300/month for Monday-Friday day-care, plus a bit of bureaucracy over deciding which family is hosting and which parents are providing food. With the size being capped, and the fees being pretty standard, the carers earn €3600-€4000/month, which is a good amount. To be a school-teacher you need to be very qualified, but to do this caring is much simpler. It turns out that being an English-speaker can be a bonus too, for some families ;)

Currently our carer has a sick-note for three days, so I'm staying home today, and will likely stay tomorrow too. Then my wife will skip work on Wednesday. (We usually take it in turns but sometimes that can't happen easily.)

But all of this is due to change in the near future, because we've had too many sick days, and both of us have missed too much work.

More news on that in the future, unless I forget.

19 February, 2018 10:15AM

February 18, 2018

Daniel Pocock

SwissPost putting another nail in the coffin of Swiss sovereignty

A few people have recently asked me about the SwissID, as SwissPost has just been sending spam emails out to people telling them "Link your Swiss Post user account to SwissID".

This coercive new application of technology demands users' email addresses and mobile phone numbers "for security". A web site coercing people to use text messages "for security" has quickly become a red flag for most people, and many blogs have already covered why it is only an illusion of security, putting your phone account at risk so companies can profit from another vector for snooping on you.

SwissID is not the only digital identity solution in Switzerland but as it is run by SwissPost and has a name similar to another service it is becoming very well known.

In 2010 they began offering a solution which they call SuisseID (notice the difference? They are pronounced the same way.) based on digital certificates and compliant with Swiss legislation. Public discussion focussed on the obscene cost with little comment about the privacy consequences and what this means for Switzerland as a nation.

Digital certificates often embed an email address in the certificate.

With SwissID, however, they have a web site that looks like little more than vaporware, giving no details at all about whether certificates are used. It appears they are basically promoting an app that is designed to harvest the email addresses and phone numbers of any Swiss people who install it, lulling them into that folly by using a name that looks like their original SuisseID. If it looks like phishing, if it feels like phishing and if it smells like phishing to any expert who takes a brief sniff of their FAQ, then what else is it?

The thing is, the original SuisseID runs on a standalone smartcard so it doesn't need to have your mobile phone number, have permissions to all the data in your phone and be limited to working in areas with mobile phone signal.

The emails currently being sent by SwissPost tell people they must "Please use a private e-mail address for this purpose" but they don't give any information about the privacy consequences of creating such an account or what their app will do when it has access to read all the messages and contacts in your phone.

The actions you can take that they didn't tell you about

  • You can post a registered letter to SwissPost and tell them that for privacy reasons, you are immediately retracting the email addresses and mobile phone numbers they currently hold on file and that you are exercising your right not to give an email address or mobile phone number to them in future.
  • If you do decide you want a SwissID, create a unique email address for it and only use that email address with SwissPost so that it can't be cross-referenced with other companies. This email address is also like a canary in a coal mine: if you start receiving spam on that email address then you know SwissPost/SwissID may have been hacked or the data has been leaked or sold.
  • Don't install their app and if you did, remove it and you may want to change your mobile phone number.

Oddly enough, none of these privacy-protecting ideas were suggested in the email from SwissPost. Whose side are they on?

Why should people be concerned?

SwissPost, like every postal agency, has seen traditional revenues drop and so they seek to generate more revenue from direct marketing and they are constantly looking for ways to extract and profit from data about the public. They are also a huge company with many employees: when dealing with vast amounts of data in any computer system, it only takes one employee to compromise everything: just think of how Edward Snowden was able to act alone to extract many of the NSA's most valuable secrets.

SwissPost is going to great lengths to get accurate data on every citizen and resident in Switzerland, including deploying an app to get your mobile phone number and demanding an email address when you use their web site. That also allows them to cross-reference with your IP addresses.

  • Any person or organization who has your email address or mobile number may find it easier to get your home address.
  • Any person or organization who has your home address may be able to get your email address or mobile phone number.
  • When you call a company from your mobile phone and their system recognizes your phone number, it becomes easier for them to match it to your home address.
  • If SwissPost and the SBB successfully convince a lot of people to use a SwissID, some other large web sites may refuse to allow access without getting you to link them to your SwissID and all the data behind it too. Think of how many websites already try to coerce you to give them your mobile phone number and birthday to "secure" your account, but worse.

The Google factor

The creepiest thing is that over seventy percent of people are apparently using Gmail addresses in Switzerland and these will be a dependency of their registration for SwissID.

Given that SwissID is being promoted as a solution compliant with ZertES legislation that can act as an interface between citizens and the state, the intersection with such a powerful foreign actor as Gmail is extraordinary. For example, if people are registering to vote in Switzerland's renowned referendums and their communication is under the surveillance of a foreign power like the US, that is a mockery of democracy and it makes the allegations of Russian election hacking look like child's play.

Switzerland's referendums, decentralized system of Government, part-time army and privacy regime are all features that maintain a balance between citizen and state: by centralizing power in the hands of SwissID and foreign IT companies, doesn't it appear that the very name SwissID is a mockery of the Swiss identity?

Yellow in motion

No canaries were harmed in the production of this blog.

18 February, 2018 10:17PM by Daniel.Pocock

Petter Reinholdtsen

The SysVinit upstream project just migrated to git

Surprising as it might sound, there are still computers using the traditional Sys V init system, and there probably will be until systemd starts working on Hurd and FreeBSD. The upstream project still exists, though, and up until today, the upstream source was available from Savannah via subversion. I am happy to report that this just changed.

The upstream source is now in Git, and consists of three repositories:

I do not really spend much time on the project these days, and I have mostly retired from it, but I found it best to migrate the source to a good version control system to help those willing to move it forward.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

18 February, 2018 08:20AM

February 17, 2018

Joey Hess

futures of distributions

Seems Debian is talking about why they are unable to package whole categories of modern software, such as anything using npm. It's good they're having a conversation about that, and I want to give a broader perspective.

Lars Wirzenius's blog post about it explains the problem well from the Debian perspective. In short: The granularity at which software is built has fundamentally changed. It's now typical for hundreds of small libraries to be used by any application, often pegged to specific versions. Language-specific tools manage all the resulting complexity automatically, but distributions can't muster the manpower to package a fraction of this stuff.

Lars lists some ideas for incremental improvements, but the space within which a Linux distribution exists has changed, and that calls not for incremental changes, but for a fundamental rethink from the ground up. Whether Debian is capable of making such fundamental changes at this point in its lifecycle is up to its developers to decide.

Perhaps other distributions are dealing with the problem better? One way to evaluate this is to look at how a given programming language community feels about a distribution's handling of their libraries. Do they generally see the distribution as a road block that must be worked around, or is the distribution a useful part of their workflow? Do they want their stuff included in the distribution, or does that seem like a lot of pointless bother?

I can only speak about the Haskell community. While there are some exceptions, it generally is not interested in Debian containing Haskell packages, and indeed system-wide installations of Haskell packages can be an active problem for development. This is despite Debian having done a much better job at packaging a lot of Haskell libraries than it has at, say, npm libraries. Debian still only packages one version of anything, and there is lag and a complex process involved, and so friction with the Haskell community.

On the other hand, there is a distribution that the Haskell community broadly does like, and that's Nix. A subset of the Haskell community uses Nix to manage and deploy Haskell software, and there's generally a good impression of it. Nix seems to be doing something right, that Debian is not doing.

It seems that Nix also has pretty good support for working with npm packages, including ingesting a whole dependency chain into the package manager with a single command, and thousands of npm libraries included in the distribution. I don't know how the npm community feels about Nix, but my guess is they like it better than Debian.

Nix is a radical rethink of the distribution model. And it's jettisoned a lot of things that Debian does, like manually packaging software, or extreme license vetting. It's interesting that Guix, which uses the same technologies as Nix, but seems in many ways more Debian-like with its care about licensing etc, has also been unable to manage npm packaging. This suggests to me that at least some of the things that Nix has jettisoned need to be jettisoned in order to succeed in the new distribution space.

But. Nix is not really exploding in popularity from what I can see. It seems to have settled into a niche of its own, and is perhaps expanding here and there, but not rapidly. It's insignificant compared with things like Docker, that also radically rethink the distribution model.

We could easily end up with some nightmare of lithification, as described by Robert "r0ml" Lefkowitz in his Linux.conf.au talk. Endlessly copied and compacted layers of code, contained or in the cloud. Programmer-archeologists right out of a Vinge SF novel.

r0ml suggests that we assume that's where things are going (or indeed where they already are outside little hermetic worlds like Debian), and focus on solving technical problems, like deployment of modifications of cloud apps, that prevent users from exercising software freedoms.

In a way, r0ml's ideas are what led me to thinking about extending Scuttlebutt with Annah, and indeed if you squint at that right, it's an idea for a radically different kind of distribution.

Well, that's all I have. No answers of course.

17 February, 2018 08:51PM

John Goerzen

The downfall of… Trump or Democracy?

The future of the United States as a democracy is at risk. That’s plenty scary. More scary is that many Americans know this, but don’t care. And even more astonishing is that this same thing happened 45 years ago.

I remember it clearly. January 30, just a couple weeks ago. On that day, we had the news that FBI deputy director McCabe — a frequent target of apparently-baseless Trump criticism — had been pushed out. The Trump administration refused to enforce the bipartisan set of additional sanctions on Russia. And the House Intelligence Committee voted on party lines to release what we all knew then, and have since seen confirmed, was a memo filled with errors designed to smear people investigating the president, but which nonetheless contained enough classified material to cause an almighty kerfuffle in Washington.

I told my wife that evening, “I think today will be remembered as a turning point. Either to the downfall of Trump, or the downfall of our democracy, but I don’t know which.”

I have not written much about this scandal, because so many quality words have already been written. But it is time to add something.

I was interested in Watergate years ago. Back in middle school, I read All the President’s Men. I wondered what it must have been like to live through those events — corruption at the highest level of government, dirty tricks, not knowing how it would play out. I wished I could have experienced it.

A couple of decades later, I have got my wish and I am not amused. After all:

“If these allegations prove to be true, what they were seeking to steal was not the jewels, money or other property of American citizens, but something much more valuable — their most precious heritage, the right to vote in a free election…

If the allegations… are substantiated, there has been a very serious subversion of the integrity of the electoral process, and the committee will be obliged to consider the manner in which such a subversion affects the continued existence of this nation as a representative democracy, and how, if we are to survive, such subversions may be prevented in the future.”

Sen. Sam Ervin Jr, May 17, 1973

That statement from 45 years ago captures accurately my contemporary fears. If foreign interference in our elections is not only tolerated but embraced, where does that leave us? Are we really a republic anymore?

I have been diving back into Watergate. In One Man Against The World: The Tragedy of Richard Nixon, written by Tim Weiner in 2015, he dives into the Nixon story in unprecedented detail, thanks to the release of many more files from that time. In his very first page, he writes:

[Nixon] made war in pursuit of peace. He committed crimes in the name of the law. He tore the country apart while trying to unite it. He sabotaged his presidency by violating the Constitution. He destroyed himself and damaged the nation through deliberate acts of folly…

He practiced geopolitics without subtlety; he preferred subterfuge and brutality. He dropped bombs and napalm without remorse; he believed they delivered a political message beyond flood and fire. He charted the course of the war without a strategy; he delivered victory to his adversaries.

His gravest decisions undermined his allies abroad. His grandest delusions armed his enemies at home…

The truth was not in him; secrecy and deception were his touchstones.

That these words describe another American president, one that I’m sure Weiner had not foreseen, is jarring. The parallels between Nixon and Trump in the pages of Weiner’s book are so strong that one sometimes wonders if Weiner has a more accurate story of Trump than Wolff got – and also if the pages of his book let us see what’s in store for us this year.

Today I started listening to the excellent podcast Slow Burn. If you have time for nothing else, listen to episode 5: True Believers. It discusses the politicization of the Senate Watergate committee, and more ominously, the efforts of reporters to understand the people that still supported Nixon — despite all the damning testimony already out there.

Gail Sheehy went to a bar where Nixon supporters gathered, wanting to get their reaction to the Watergate hearings. The supporters didn’t want to watch. They thought the hearings were just an attempt by liberals to take down Nixon. Sheehy found the president’s people to be “angry, demoralized, and disconcertingly comfortable with the idea of a police state run by Richard Nixon.”

These guys felt they were nobodies… except Richard Nixon gave them an identity. He was a tough guy who was “going to get rid of all those anti-war people, anarchists, terrorists… the people that were tearing down our country!”

Art Buchwald’s tongue-in-cheek handy excuses for Nixon backers seem to be copied almost verbatim by Fox News (substitute Hillary’s emails for Chappaquiddick).

And what happened to the scum of Richard Nixon’s era? Yes, some went to jail, but not all.

  • Steve King, one of Nixon’s henchmen that kidnapped Martha Mitchell (wife of Attorney General and Nixon henchman John Mitchell) for a week to keep her from spilling the beans on Watergate, beat her up, and had her drugged — well he was appointed by Trump to be ambassador to the Czech Republic and confirmed by the Senate.
  • The man that said that the Watergate burglars were “not criminal at heart” because “their only aim was to re-elect the president” later got elected president himself, and pardoned one of the burglars. (Ronald Reagan)
  • The man that said “just let the president do his job!” was also elected president (George H. W. Bush)
  • The man that finally carried out Nixon’s order to fire special prosecutor Archibald Cox was nominated to the Supreme Court, but his nomination was blocked in the Senate. (Robert Bork) He was, however, on the United States Court of Appeals for 6 years.
  • And in an odd conspiracy-laden introduction to a reprint of a youth’s history book on Watergate, none other than Roger Stone, wrapped up in Trump’s shenanigans, was trying to defend Nixon. Oh, and he was a business partner with Paul Manafort and lobbyist for Ferdinand Marcos.

One comfort from all of this is the knowledge that we had been there before. We had lived through an era of great progress in civil rights, and right after that elected a dictatorial crook president. We survived the president’s fervent supporters refusing to believe overwhelming evidence of his crookedness. We survived.

And yet, that is no guarantee. After all, as John Dean put it, Nixon “might have survived if there’d been a Fox News.”

17 February, 2018 08:36PM by John Goerzen

Lars Wirzenius

What is Debian all about, really? Or: friction, packaging complex applications

Another weekend, another big mailing list thread

This weekend, those interested in Debian development have been having a discussion on the debian-devel mailing list about "What can Debian do to provide complex applications to its users?". I'm commenting on that in my blog rather than the mailing list, since this got a bit too long to be usefully done in an email.

directhex's recent blog post "Packaging is hard. Packager-friendly is harder." is also relevant.

The problem

To start with, I don't think the email that started this discussion poses the right question. The problem is not really about complex applications; we already have those in Debian. See, for example, LibreOffice. The discussion is really about how Debian should deal with the way some types of applications are developed upstream these days. They're not all complex, and they're not all big, but as usual, things only get interesting when n is big.

A particularly clear example is the whole nodejs ecosystem, but it's not limited to that and it's not limited to web applications. This is also not the first time this topic arises, but we've never come to any good conclusion.

My understanding of the problem is as follows:

A current trend in software development is to use programming languages, often interpreted high level languages, combined with heavy use of third-party libraries, and a language-specific package manager for installing libraries for the developer to use, and sometimes also for the sysadmin installing the software for production to use. This bypasses the Linux distributions entirely. The benefit is that it has allowed ecosystems for specific programming languages where there is very little friction for using libraries written in that language to be used by developers, speeding up development cycles a lot.

When I was young(er) the world was horrible

In comparison, in the old days, which for me means the 1990s, and before Debian took over my computing life, the cycle was something like this:

I would be writing an application, and would need to use a library to make some part of my application easier to write. To use that library, I would download the source code archive of the latest release, and laboriously decipher and follow the build and installation instructions, fix any problems, rinse, repeat. After getting the library installed, I would get back to developing my application. Often the installation of the dependency would take hours, so not a thing to be undertaken lightly.

Debian made some things better

With Debian, and apt, and having access to hundreds upon hundreds of libraries packaged for Debian, this became a much easier process. But only for the things packaged for Debian.

For those developing and publishing libraries, Debian didn't make the process any easier. They would still have to publish a source code archive, but also hope that it would eventually be included in Debian. And updates to libraries in the Debian stable release would not get into the hands of users until the next Debian stable release. This is a lot of friction. For C libraries, that friction has traditionally been tolerable. The effort of making the library in the first place is considerable, so any friction added by Debian is small by comparison.

The world has changed around Debian

In the modern world, developing a new library is much easier, and so also the friction caused by Debian is much more of a hindrance. My understanding is that things now happen more like this:

I'm developing an application. I realise I could use a library. I run the language-specific package manager (pip, cpan, gem, npm, cargo, etc), it downloads the library, installs it in my home directory or my application source tree, and in less than the time it takes to have a sip of tea, I can get back to developing my application.

This has a lot less friction than the Debian route. The attraction to application programmers is clear. For library authors, the process is also much more streamlined. Writing a library, especially in a high-level language, is fairly easy, and publishing it for others to use is quick and simple. This can lead to a virtuous cycle where I write a useful little library, you use it and tell me about a bug or a missing feature, I add it, publish the new version, you use it, and we're both happy as can be. Where this might have taken weeks or months in the old days, it can now happen in minutes.

The big question: why Debian?

In this brave new world, why would anyone bother with Debian anymore? Or any traditional Linux distribution, since this isn't particularly specific to Debian. (But I mention Debian specifically, since it's what I know best.)

A number of things have been mentioned or alluded to in the discussion mentioned above, but I think it's good for the discussion to be explicit about them. As a computer user, software developer, system administrator, and software freedom enthusiast, I see the following reasons to continue to use Debian:

  • The freeness of software included in Debian has been vetted. I have a strong guarantee that software included in Debian is free software. This goes beyond the licence of that particular piece of software and includes practical considerations, like whether the software can actually be built using free tooling, and whether I have access to that tooling, because the tooling, too, is included in Debian.

    • There was a time when Debian debated (with itself) whether it was OK to include a binary that needed to be built using a proprietary C compiler. We decided that it isn't, or not in the main package archive.

    • These days we have the question of whether "minimised Javascript" is OK to be included in Debian, if it can't be produced using tools packaged in Debian. My understanding is that we have already decided that it's not, but the discussion continues. To me, this seems equivalent to the above case.

  • I have a strong guarantee that software in a stable Debian release won't change underneath me in incompatible ways, except in special circumstances. This means that if I'm writing my application and targeting Debian stable, the library API won't change, at least not until the next Debian stable release. Likewise for every other bit of software I use. Having things continue to work without having to worry is a good thing.

    • Note that a side-effect of the low friction of library development in current ecosystems is that library APIs sometimes change. This would mean my application would need to change to adapt to the API change. That's friction for my work.

  • I have a strong guarantee that a dependency won't just disappear. Debian has a large mirror network of its package archive, and there are easy tools to run my own mirror, if I want to. While running my own mirror is possible for other package management systems, each one adds to the friction.

    • The nodejs NPM ecosystem seems to be especially vulnerable to this. More than once, packages have gone missing, causing other projects that depend on them to start failing.

    • The way the Debian project is organised, it is almost impossible for this to happen in Debian. Not only are package removals carefully co-ordinated; packages that are depended on by other packages aren't removed.

  • I have a strong guarantee that a Debian package I get from a Debian mirror is the official package from Debian: either the actual package uploaded by a Debian developer or a binary package built by a trusted Debian build server. This is because Debian uses cryptographic signatures of the package lists and I have a trust path to the Debian signing key.

    • At least some of the language-specific package managers lack such a trust path. This means that I have no guarantee that the library package I download today is the same code that was uploaded by the library author.

    • Note that https does not help here. It protects the transfer from the package manager's web server to me, but makes absolutely no guarantees about the validity of the package. There have been enough cases of package repositories being attacked that this matters to me. Debian's signatures protect against malicious changes on mirror hosts. (For a concrete sense of what that trust path looks like, see the sketch after this list.)

  • I have a reasonably strong guarantee that any problem I find can be fixed, by me or someone else. This is not a strong guarantee, because Debian can't do anything about insanely complicated code, for example, but at least I can rely on being able to rebuild the software. That's a basic requirement for fixing a bug.

  • I have a reasonably strong guarantee that, after upgrading to the next Debian stable release, my stuff continues to work. Upgrades may always break, but at least Debian tests them and treats it as a bug if an upgrade doesn't work, or loses user data.
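
As a concrete illustration of the signature point above (not from the original post; the suite name and paths are merely illustrative), the trust path can be checked by hand roughly like this:

    # fetch the signed package list metadata for a Debian suite
    wget -q https://deb.debian.org/debian/dists/stretch/Release
    wget -q https://deb.debian.org/debian/dists/stretch/Release.gpg

    # verify the detached signature against the Debian archive keyring
    # (shipped in the debian-archive-keyring package)
    gpgv --keyring /usr/share/keyrings/debian-archive-keyring.gpg Release.gpg Release

apt performs essentially the same check on every update, and then verifies the hashes of the Packages files listed in Release before using them.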

These are the reasons why I think Debian and the way it packages and distributes software is still important and relevant. (You may disagree. I'm OK with that.)

What about non-Linux free operating systems?

I don't have much personal experience with non-Linux systems, so I've only talked about Linux here. I don't think the BSD systems, for example, are actually all that different from Linux distributions. Feel free to substitute "free operating system" for "Linux" throughout.

What is it Debian tries to do, anyway?

The previous section is one level of abstraction too low. It's important, but it's beneficial to take a further step back and consider what it is Debian actually tries to achieve. Why does Debian exist?

The primary goal of Debian is to enable its users to use their computers using only free software. The freedom aspect is fundamentally important and a principle that Debian is not willing to compromise on.

The primary approach to achieving this goal is to produce a "distribution" of free software, to make installing a free software operating system and applications, and maintaining such a computer, feasible for our users.

This leads to secondary goals, such as:

  • Making it easy to install Debian on a computer. (For values of easy that should be compared to toggling boot sector bytes manually.)

    We've achieved this, though of course things can always be improved.

  • Making it easy to install applications on a computer with Debian. (Again, compared to the olden days, when that meant configuring and compiling everything from scratch, with no guidance.)

    We've achieved this, too.

  • A system with Debian installed is reasonably secure, and easy to keep reasonably secure.

    This means Debian will provide security support for software it distributes, and has ways in which to install security fixes. We've achieved this, though this, too, can always be improved.

  • A system with Debian installed should keep working for extended periods of time. This is important to make using Debian feasible. If it takes too much effort to have a computer running Debian, it's not feasible for many people to do that, and then Debian fails its primary goal.

    This is why Debian has stable releases with years of security support. We've achieved this.

The disconnect

On the one hand, we have Debian, which has pretty much achieved what I declare to be its primary goal. On the other hand, a lot of developers now expect much less friction than what Debian offers. This disconnect is, I believe, the cause of the debian-devel discussion, and of variants of that discussion all over the open source landscape.

These discussions often go one of two ways, depending on which community is talking.

  • In the distribution and more old-school communities, the low-friction approach of language-specific package managers is often considered to be a horror, and an abandonment of all the good things that the Linux world has achieved. "Young saplings, who do they think they are, all agile and bendy and with no principles at all, get off our carefully cultivated lawn."

  • In the low-friction communities, Linux distributions are something only old, stodgy, boring people care about. "Distributions are dead, they only get in the way, nobody bothers with them anymore."

This disconnect will require effort by both sides to close the gap.

On the one hand, so much new software is being written by people using the low-friction approach, that Linux distributions may fail to attract new users and especially new developers, and this will hurt them and their users.

On the other hand, the low-friction people may be sawing off the tree branch they're sitting on. If distributions suffer, the base on which low-friction development relies will wither away, and we'll be left with running low-friction free software on proprietary platforms.

Things for low-friction proponents to improve

Here's a few things I've noticed that go wrong in the various communities oriented towards the low-friction approach.

  • Not enough care is given to copyright licences. This is a boring topic, but it's the legal basis that all of free software and open source is based on. If copyright licences are violated, or copyrights are not respected, or copyrights or licences are not expressed well enough, or incompatible licences are mixed, the result is very easily not actually either free software or open source.

    It's boring, but be sufficiently pedantic here. It's not even all that difficult.

  • Do provide actual source. It seems quite a number of Javascript projects only distribute "minimised" versions of code. That's not actually source code, any more than, say, Java byte code is, even if a de-compiler can make it kind of editable. If source isn't available, it's not free software or open source.

  • Please try to be careful with API changes. What used to work should still work with a new version of a library. If you need to make an API change that breaks compatibility, find a way to still support those who rely on the old API, using whatever mechanisms are available to you. Ideally, support the old API for a long time, years. Two weeks is really not enough.

  • Do be careful with your dependencies. Locking down dependencies to a specific version makes things difficult for distributions, because they often can only provide one or a very small number of versions of any one package. Likewise, avoid embedding dependencies in your own source tree, because that explodes the amount of work distributions have to do to patch security holes. (No, distributions can't rely on tens of thousands of upstreams to each do the patching correctly and promptly.)

Things for Debian to improve

There are many sources of friction that come from Debian itself. Some of them are unavoidable: if upstream projects don't take care of copyright licence hygiene, for example, then Debian will impose that on them and that can't be helped. Other things are more avoidable, however. Here's a list off the top of my head:

  • A lot of stuff in Debian happens over email, which might happen using a web application, if it were not for historical reasons. For example, the Debian bug tracking system (bugs.debian.org) requires using email, and spam filtering can delay messages by more than fifteen minutes. This is a source of friction that could be avoided.

  • Likewise, Debian voting happens over email, which can cause friction from delays.

  • Debian lets its package maintainers use any version control system, any packaging helper tooling, and any packaging workflow they want. This means that every package is, to some extent, new territory for someone other than its primary maintainers. Even when the same tools are used, they can be used in a variety of different ways. Consistency should reduce friction.

  • There's too little infrastructure to do things like collecting copyright information into debian/copyright. This really shouldn't be a manual task.

  • Debian packaging uses arcane file formats, loosely based on email headers. More standard formats might make things easier, and reduce friction.

  • There's not enough automated testing, or it's too hard to use, making it too hard to know whether a new package will work, or whether a modified package breaks anything that used to work.

  • Overall, making a Debian package tends to require too much manual work. Packaging helpers like dh certainly help, but not enough. I don't have a concrete suggestion how to reduce it, but it seems like an area Debian should work on.

  • Maybe consider supporting installing multiple versions of a package, even if only for, say, Javascript libraries. Possibly with a caveat that only specific versions will be security supported, and a way to alert the sysadmin if vulnerable packages are installed. Dunno, this is a difficult one.

  • Maybe consider providing something where the source package gets automatically updated to every new upstream release (or commit), with binary packages built from that, and those automatically tested. This might be a separate section of the archive, and packages would be included into the normal part of the archive only by manual decision.

  • There's more, but mostly not relevant to this discussion, I think. For example, Debian is a big project, and the mere size is a cause of friction.

Comments?

I don't allow comments on my blog, and I don't want to debate this in private. If you have comments on anything I've said above, please post to the debian-devel mailing list. Thanks.

Baits

To ensure I get some responses, I will leave these bait here:

Anyone who's been programming less than 12332 days is a young whipper-snapper and shouldn't be taken seriously.

Depending on the latest commit of a library is too slow. The proper thing to do for really fast development is to rely on the version in the unsaved editor buffer of the library developer.

You shouldn't have read any of this. I'm clearly a troll.

17 February, 2018 05:13PM

hackergotchi for Martín Ferrari

Martín Ferrari

OSM in IkiWiki

For about 15 years, I have been thinking of creating a geo-referenced wiki of pubs, with loads of structured data to help searching. I don't know if that would be useful for anybody else, but I know I would use it!

Sadly, every time I started coding something towards that goal, I ended up blocked by something, and I kept postponing my dream project.

Independently of that, for the past two years I have been driving a regular social meeting in Dublin for CouchSurfers, called the Dublin Mingle. The idea is pretty simple: to go every week to a different pub, and make friends.

I wanted to make a map marking all the places visited. Completely useless, but pretty! So, I went back to looking into IkiWiki internals, as the current osm plugin would not fulfill all my needs, and has a few annoying bugs.

After a few days of work, I made it: a refurbished osm plugin that uses the modern and pretty Leaflet library. If the javascript is not lost along the way (because you are reading this from an aggregator, for example), below you should see the result. Otherwise, you can see it in action on its own page: Mingle.


The code is still not ready for merging into Ikiwiki, as I need to write tests and documentation. But you can find the changes in my GitHub repo.

There is still a long way to go before I can create my pubs wiki, but it is the first building block! Now I need a way to easily import and sync data from OSM, and then to create a structured search function.

Comment

17 February, 2018 03:11PM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Downloading all the Critical Role podcasts in one batch

I've been watching Critical Role[1] for a while now and since I've started my master's degree I haven't had much time to sit down and watch the show on YouTube as I used to do.

I thus started listening to the podcasts instead; that way, I can listen to the show while I'm doing other productive tasks. Pretty quickly, I grew tired of manually downloading every episode each time I finished the last one. To make things worse, the podcast is hosted on PodBean and they won't let you download episodes on a mobile device without their app. Grrr.

After the 10th time opening the terminal on my phone to download the podcast using some wget magic I decided enough was enough: I was going to write a dumb script to download them all in one batch.

I'm a little ashamed to say it took me more time than I had intended... The PodBean website uses semi-randomized URLs, so I could not figure out a way to guess the paths to the hosted audio files. I considered using youtube-dl to get the DASH version of the show on YouTube, but Google has been heavily throttling DASH streams recently. Not cool Google.

I then had the idea of using iTunes' RSS feed to get the audio files. Surely they would somehow be included there? Of course Apple doesn't give you a simple RSS feed link on the iTunes podcast page, so I had to rummage around and eventually found out this is the link you have to use:

https://itunes.apple.com/lookup?id=1243705452&entity=podcast

Surprise surprise: from the json file this link points to, I found out the main Critical Role podcast page has a proper RSS feed. In my defense, the RSS button on the main podcast page brings you to some PodBean crap page.

Anyway, once you have the RSS feed, it's only a matter of using grep and sed until you get what you want.

Around 20 minutes later, I had downloaded all the episodes, for a total of 22 GB! Victory dance!

Script

Here's the bash script I wrote. You will need recode to run it, as the RSS feed includes some HTML entities.

# Get the whole RSS feed
wget -qO /tmp/criticalrole.rss http://criticalrolepodcast.geekandsundry.com/feed/

# Extract the URLS and the episode titles
mp3s=( $(grep -o "http.\+mp3" /tmp/criticalrole.rss) )
titles=( $(tail -n +45 /tmp/criticalrole.rss | grep -o "<title>.\+</title>" \
           | sed -r 's@</?title>@@g; s@ @\\@g' | recode html..utf8) )

# Download all the episodes under their titles
for i in ${!titles[*]}
do
  wget -qO "$(sed -e "s@\\\@\\ @g" <<< "${titles[$i]}").mp3" ${mp3s[$i]}
done

1 - For those of you not familiar with Critical Role, it's a web series where a group of voice actresses and actors from LA play Dungeons & Dragons. It's so good even people like me who never played D&D can enjoy it.

17 February, 2018 05:00AM by Louis-Philippe Véronneau

Sergio Durigan Junior

Hello, Planet Debian

Hey, there. This is long overdue: my entry in Planet Debian! I’m creating this post because, until now, I didn’t have a debian tag in my blog! Well, not anymore.

Stay tuned!

17 February, 2018 05:00AM

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

My Kuro5hin Diary Entries

Kuro5hin logo

Kuro5hin (pronounced “corrosion” and abbreviated K5) was a website created in 1999 that was popular in the early 2000s. K5 users could post stories to be voted upon as well as entries to their personal diaries.

I posted a couple dozen diary entries between 2002 and 2003 during my final year of college and the months immediately after.

K5 was taken off-line in 2016 and the Internet Archive doesn’t seem to have snagged comments or full texts of most diary entries. Luckily, someone managed to scrape most of them before they went offline.

Thanks to this archive, you can now once again hear from 21-year-old-me in the form of my old K5 diary entries which I’ve imported to my blog Copyrighteous. I fixed the obvious spelling errors but otherwise restrained myself and left them intact.

If you’re interested in preserving your own K5 diaries, I wrote some Python code to parse the K5 HTML files for diary pages and import them into WordPress using it’s XML-RPC API. You’ll need to tweak the code to use it but it’s pretty straightforward.

17 February, 2018 03:23AM by Benjamin Mako Hill

February 16, 2018

hackergotchi for Steve Kemp

Steve Kemp

Updated my package-repository

Yesterday I overhauled my Debian package-hosting repository, in response to user-complaints.

I started down the rabbit hole due to:

  W: No Hash entry in Release file /.._._Release which is considered strong enough for security purposes

I fixed that by changing my hashes from SHA1 to SHA256 + SHA512, but I was only making a little progress due to the more serious problem: my repository-signing key was DSA-based and "small". I replaced it with a modern key, then changed how I generate my packages, and all is well.
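
Something like the following creates and exports a suitable modern key (an illustrative sketch, not necessarily the exact commands used):

    # generate a new RSA 4096-bit signing key (interactive; gpg --gen-key on older GnuPG)
    gpg --full-generate-key

    # export the public half so repository users can verify signatures
    gpg --armor --export "YOUR-KEY-ID" > repo-signing-key.asc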

In the past I was generating the Release files manually, via a silly shell-script. Anyway here is my trivial Makefile for making the per-project and per-distribution archive, no doubt it could be improved:

   all: repo

   clean:
       @rm -f InRelease Packages Sources Packages.gz Sources.gz Release Release.gpg

   Packages: $(wildcard *.deb)
       @apt-ftparchive packages . > Packages 2>/dev/null
       @gzip -c Packages > Packages.gz

   Sources: $(wildcard *.tar.gz)
       @apt-ftparchive sources . > Sources 2>/dev/null
       @gzip -c Sources > Sources.gz

   repo: Packages Sources
       @apt-ftparchive release . > Release
       @gpg --yes --clearsign -o InRelease Release
       @gpg --yes -abs -o Release.gpg Release

In conclusion, in the unlikely event you're using my packages, please see the GPG-instructions. I've also hidden any packages which were solely for Squeeze and Wheezy, but they continue to exist to avoid breaking links.
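
By way of illustration (the real URL and key live on the GPG-instructions page; everything below is a placeholder), a client would wire up such a signed repository roughly like this:

    # import the repository signing key into a dedicated keyring
    wget -qO- https://packages.example.com/signing-key.asc | \
        gpg --dearmor | sudo tee /usr/share/keyrings/example-repo.gpg >/dev/null

    # /etc/apt/sources.list.d/example.list
    deb [signed-by=/usr/share/keyrings/example-repo.gpg] https://packages.example.com/debian stretch main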

16 February, 2018 10:00PM

February 15, 2018

hackergotchi for Erich Schubert

Erich Schubert

Disable Web Notification Prompts

Recently, tons of websites ask you for permission to display browser notifications. 99% of the time, you will not want these. In fact, all these notifications increase stress, so you should try to get rid of them for your own productivity. Eliminate distractions.

I find even the prompt for these notifications very annoying. With Chrome/Chromium it is even worse than with Firefox.

In Chrome, you can disable the functionality by going to the location chrome://settings/content/notifications and toggling the switch (the label will turn to “blocked”, from “ask”).

In Firefox, going to about:config and toggling dom.webnotifications.enabled is supposed to help, but it did not disable the prompts here. I had to disable dom.push.enabled completely. That may break some services that you want, but I have not yet noticed anything.
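
To make the change stick, the same settings can be pinned in a user.js file in the Firefox profile directory (a small sketch using the pref names mentioned above):

    // user.js in the Firefox profile directory; applied on every startup
    user_pref("dom.webnotifications.enabled", false);
    user_pref("dom.push.enabled", false);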

15 February, 2018 09:41PM by Erich Schubert

hackergotchi for Joachim Breitner

Joachim Breitner

Interleaving normalizing reduction strategies

A little, not very significant, observation about lambda calculus and reduction strategies.

A reduction strategy determines, for every lambda term with redexes left, which redex to reduce next. A reduction strategy is normalizing if this procedure terminates for every lambda term that has a normal form.

A fun fact is: If you have two normalizing reduction strategies s1 and s2, consulting them alternately may not yield a normalizing strategy.

Here is an example. Consider the lambda term o = (λx.xxx), and note that oo → ooo → oooo → …. Let M_i = (λx.(λx.x))(o…o) (with i occurrences of o). M_i has two redexes, and reduces to either (λx.x) or M_{i+1}. In particular, M_i has a normal form.

The two reduction strategies are:

  • s1, which picks the second redex if given M_i for an even i, and the first (left-most) redex otherwise.
  • s2, which picks the second redex if given M_i for an odd i, and the first (left-most) redex otherwise.

Both strategies are normalizing: if during a reduction we come across M_i, then the reduction terminates in one or two steps; otherwise we are just doing left-most reduction, which is known to be normalizing.

But if we alternately consult s1 and s2 while trying to reduce M_2, we get the sequence


M_2 → M_3 → M_4 → …

which shows that this strategy is not normalizing.
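
A tiny Python simulation of the counterexample (not from the original post; terms are represented abstractly, with M_i as the integer i and the normal form as a string):

    NORMAL = "normal form"

    def s1(i):
        # second redex on even i (grow the argument), left-most otherwise
        return i + 1 if i % 2 == 0 else NORMAL

    def s2(i):
        # second redex on odd i, left-most otherwise
        return i + 1 if i % 2 == 1 else NORMAL

    # each strategy alone normalizes any M_i in at most two steps,
    # but alternating them starting from M_2 never reaches the normal form
    term, strategies = 2, [s1, s2]
    for step in range(10):
        term = strategies[step % 2](term)
        print(step, term)   # prints 3, 4, 5, ...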

Afterthought: the interleaved strategy is not actually a reduction strategy in the usual definition, as it is not a pure (stateless) function from lambda term to redex.

15 February, 2018 07:17PM by Joachim Breitner (mail@joachim-breitner.de)

hackergotchi for Holger Levsen

Holger Levsen

20180215-mini-debconf-hamburg

Everything about the Mini-DebConf in Hamburg in May 2018

Moin!

With great joy we are finally officially announcing the Debian MiniDebConf which will take place in Hamburg (Germany) from May 16 to 20, with three days of DebCamp-style hacking, followed by two days of talks, workshops and more hacking. And then, Monday the 21st is also a holiday in Germany, so you might choose to extend your stay by a day! (Though there will not be an official schedule for the 21st.)

tl;dr: We're having a MiniDebConf in Hamburg on May 16-20. It's going to be awesome. You should all come! Register now!

the longer version:

Registration

Please register now; registration is free and open until May 1st.

In order to register, add your name and details to the registration page in the Debian wiki.

There's space for approximately 150 people due to limited space in the main auditorium.

Please register ASAP, as we need this information for planning food and hacking space size calculations.

Talks wanted (CfP)

We have assembled a content team (consisting of Margarita Manterola, Michael Banck and Lee Garrett), who soon will publish an extra post for the CfP. Though you don't need to wait for that and can already send your proposals to

    cfp@minidebconfhamburg.debian.net

We will have talks on Saturday and Sunday, the exact slots are yet to be determined by the content team.

We expect submissions and talks to be held in English, as this is the working language in Debian and at this event.

Debian Sprints

The MiniDebCamp from Wednesday to Friday is a perfect opportunity to host Debian sprints. We would welcome it if teams assembled and worked together on their projects.

Location

The event will be hosted in the former Victoria Kaserne, now called Fux (or Frappant), which is a collective art space located in a historical monument. It is located between S-Altona and S-Holstenstraße, so there is a direct subway connection to/from the Hamburg Airport (HAM) and Altona is also a long distance train station.

There's a Gigabit-Fiber uplink connection and wireless coverage (almost) everywhere in the venue and in the outside areas. (And then, we can also fix locations without wireless coverage.)

Within the venue, there are three main areas we will use, plus the garden and corridors:

dock europe

dock europe is an international educational centre with a meeting space within the venue which offers three rooms which can be combined into one big one. During the Mini-DebCamp from Wednesday to Friday we will probably use the rooms in the split configuration, while on Saturday and Sunday it will be one big room hosting presentations and such stuff. There are also two small rooms we can use as small hacklabs for 4-6 people.

dock europe also provides accommodation for some of us, see further below.

CCCHH hackerspace

Just down two corridors, on the same floor and in the same building as dock europe, there is the CCC Hamburg Hackerspace, which will be open for us on all five days and which can be used for "regular Debian hacking" or, if you find some nice CCCHH members to help you, you might also be able to use the lasercutter, 3d printer, regular printer and many other tools and devices. It's definitely also suitable for smaller ad-hoc workshops but beware, it will also somewhat be the noisy hacklab, as it will also be open to regular CCC folks when we are there.

fux und ganz

The Fux also has a cantina called "fux und ganz" which will serve us (and other visitors of the venue) lunch and dinner. Please register by May 1st to ease their planning as well!

Accommodation

The Mini-DebConf will take place in the center of Hamburg, so there are many accommodation options available. Some suggestions for housing options are given in the wiki and you might want to share your findings there too.

There is also limited on-site accommodation available: dock europe provides 36 beds in double rooms in the venue. The rooms are nice, small, clean, have a locker, wireless and are just one floor away from our main spaces. There's also a sufficient number of showers and toilets, and breakfast is available (for those 36 people) as well.

Thankfully nattie has agreed to be in charge of distributing these 36 beds, so please mail her if you want a bed. The beds will be distributed from two buckets on a first come, first served basis:

  • 24 beds for anyone, first come, first served, costing 27€/night.
  • 12 beds for the video team, frontdesk, talk meisters, etc., also first come, first served; nattie decides whether you indeed qualify. Those also cost 27€/night.

Sponsors wanted

Making a Mini DebConf happen costs money: we need to rent the venue and video gear, we hope to pay for hard-working volunteers' lunch and dinner, and maybe also sponsor some travel. So we really appreciate companies willing to support this meeting!

We have three sponsor categories:

  • 1000€ = sponsor, listed as such in all material.

  • 2500€ = gold sponsor, listed as such in all material, logo featured in the videos.

  • 5000€ = platinum sponsor, listed as such prominently in all material, logo featured prominently in the videos

Plus, there's corporate registration as an option too, where we will charge you 250€ for the registration. Please contact us if you are interested in that!

More volunteers wanted

Some things still need more helping hands:

So far we thankfully have Nattie volunteering for frontdesk duties. In turn, she'd be very thankful if some people joined her in staffing frontdesk, because shared work is more joyful!

The same goes for the video team. So far, we know the gear will arrive, and probably a person who knows how to operate it, but that's it. Please consider making sure we'll have videos released! ;) (And hopefully streams too.)

Also, please consider submitting a talk or holding a workshop! cfp@minidebconfhamburg.debian.net is waiting for you!

Finally, we would also very much welcome a nice logo, and t-shirts printed with it. Can you come up with a logo? Print shirts?

Contact

If you want to help, need help, have comments or want to contact us for other reasons, there are several ways:

Looking forward to see you in Hamburg!

Holger, for the 2018 Mini DebConf Hamburg team

15 February, 2018 04:16PM

hackergotchi for Michal Čihař

Michal Čihař

Weblate 2.19

Weblate 2.19 has been released today. The biggest improvement is probably the addons to customize translation workflows, but there are some other enhancements as well.

Full list of changes:

  • Fixed imports across some file formats.
  • Display human friendly browser information in audit log.
  • Added TMX exporter for files.
  • Various performance improvements for loading translation files.
  • Added option to disable access management in Weblate in favor of Django one.
  • Improved glossary lookup speed for large strings.
  • Compatibility with django_auth_ldap 1.3.0.
  • Configuration errors are now stored and reported persistently.
  • Honor ignore flags in whitespace autofixer.
  • Improved compatibility with some Subversion setups.
  • Improved built in machine translation service.
  • Added support for SAP Translation Hub service.
  • Added support for Microsoft Terminology service.
  • Removed support for advertisement in notification mails.
  • Improved translation progress reporting at language level.
  • Improved support for different plural formulas.
  • Added support for Subversion repositories not using stdlayout.
  • Added addons to customize translation workflows.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, the code is hosted on Github. If you are curious how it looks, you can try it out on demo server. Weblate is also being used on https://hosted.weblate.org/ as official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared, you can influence this by expressing support for individual issues either by comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

15 February, 2018 03:30PM

Arturo Borrero González

New round of GSoC: 2018

GSoC goodies

The other day Google published the list of accepted projects for this year's round of Google Summer of Code. Many organizations were accepted, and there are 3 that are especially interesting to me: Netfilter, Wikimedia Foundation and Debian.

The GSoC initiative is a great opportunity to enter the professional FLOSS world, get to know more about your favorite project, have a mentor and earn a stipend along the way.

The Netfilter project (check the dashboard) has published a list of ideas for students to work on. I will likely be mentoring here. Be aware that students who submit patches during the warmup period are more likely to be selected.

The Debian project (check the dashboard) also has a great list of proposals in a variety of different technologies, from packaging to Android, also with some web and backend project ideas. It's great to see Debian participating in GSoC again; last year we weren't present.

The Wikimedia Foundation (check the dashboard) has about 8 projects for students to work on, also with different scopes, including an interesting project for improving Toolforge.

So, students, don’t be afraid to participate! There are a lot of projects, different technologies and people to work with, so there should be one waiting for you.

15 February, 2018 09:22AM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Xerox printers on Debian - an update

This blog post is 90% rant and 10% update on how I made our new Xerox Altalink printer work on Debian. Skip the rant by clicking here.

I think the lamest part of my current job is that we heavily rely on multifunction printers. We need to print a high volume of complicated documents on demand. You know, 1500 copies of a color booklet printed on 11x17 paper folded in 3 stapled in the middle kind of stuff.

Pardon my French, but printers suck big time. The printer market is an oligopoly clusterfuck and it seems it keeps getting worse (looking at you, Fuji-Xerox merger). None of the drivers support Linux properly, all the printers are big piles of proprietary code and somehow the corporations selling them keep adding features no one needs.

My reaction when I learnt we needed to replace our current printer

Good job Xerox (er, Fuji Xerox), the shiny new printer you forced us to rent[1] comes with an app that lets you print directly from Dropbox. I guess they expect people:

  • not to use a password manager
  • not to use long randomly generated passwords
  • not to use 2FA
  • to use proprietary services like Dropbox

Oh wait, I guess that's what people actually do. My bad.

As for fixing their stupid bugs, try again.

Xerox Altalink C8045

Rant aside, here's a short follow-up on the blog post I wrote two years ago on how to install Xerox printers on Debian.

As far as I can see, the Xerox Altalink C8045 seems to be working properly with the x86_64-5.20.606.3946 version of the Xerox driver for Debian. Make sure that you use the bi-directional setup or else you might have trouble. Sadly, all the gimmicks I wrote about two years ago still stand.

If you find that interesting, I also rewrote our Puppet module that manages all of this for you to be compatible with Puppet 4. Yay?


1 - Long story short, we used to have a Xerox ColorQube printer that used wax instead of toner to print, but Xerox doesn't support them anymore and bought back our support contract. E-waste FTW.

15 February, 2018 05:00AM by Louis-Philippe Véronneau

February 14, 2018

Bits from Debian

hackergotchi for Daniel Silverstone

Daniel Silverstone

Epic Journey in my Ioniq

This weekend just-gone was my father's 90th birthday, so since we don't go to Wales very often, we figured we should head down to visit. As this would be our first major journey in the Ioniq (I've done Manchester to Cambridge a few times now, but this is almost 3 times further) we took an additional day off (Friday) so that we could easily get from our home in southern Manchester to my parent's house in St Davids, Pembrokeshire.

I am not someone to enter into these experiences lightly. I spent several hours consulting with zap-map and also Google maps, looking at chargers en-route. In the UK there's a significant number of chargers on the motorway system provided by Ecotricity but this infrastructure is not pervasive and doesn't really extend beyond the motorway service stations (and some IKEAs). I made my plan for the journey to Wales, ensuring that each planned stop was simply the first in a line of possible stops in order that if something went wrong, I'd have enough charge to move forwards from there.

First leg took us from our home to the Ecotricity charger at Hilton Park Southbound services. My good and dear friend Tim very kindly offered to charge us for free and he used one of his fifty-two free charges to top us up. This went flawlessly and set us in a very good mood for the journey to come. Since we would then have a very long jump from the M5 to the M4, we decided that our second charge would be to top-up at Chateau Impney which has a Polar charger. Unfortunately by this point the wind and rain were up and the charger failed to work properly, eventually telling us that its input voltages were unbalanced and then powering itself off entirely. We decided to head to the other Polar charger at Webbs of Wychbold. That charger started up fine so we headed in, had a loo visit, grabbed some lunch, watched the terrapins swimming around, and when a sufficient time had passed for the car to charge, headed back only to discover that it had emergency-stopped mere moments after we'd left the car, so we had no charge for the entire time we were there. No matter we thought - we'd sit in the car while it charged, and eat our lunch. Sadly we were defeated, the charger repeatedly e-stopped, so we gave up.

Our fallback position was to charge at the Strensham services at the M5/M50 junction. Sadly the southbound services have no chargers at all (they're under a lot of building work right now, so perhaps that's part of it) so we had to get to the northbound services and charge there. That charge went fine, and with a £2.85 bill from Ecotricity automatically paid, we snuck our way along back-roads and secret junctions to the southbound services, and headed off down the M50. Sadly we're now a lot later than we should have been, having lost about ninety minutes in total to the wasted time at the two Polar chargers, which meant that we hit a lot of congestion at Monmouth and around Newport on the M4.

We made it to Cardiff Gate where we plugged in, set it charging, and then headed into the service area where we happened to meet my younger brother who was heading home too. He went off, and I looked at the Ecotricity app on my phone which had decided at that point that I wasn't charging at all. I went out to check, the charger was still delivering current, so, chalking it up to a bit of a de-sync, we went in, had a coffee and a relax, and then headed out to the car to wait for it to finish charging. It finished, we unplugged, and headed out. But to this day I've not been charged by Ecotricity for that so "yay".

Our final stop along the M4 was Swansea West. Unfortunately the Pont Abraham services don't have a rapid charger compatible with my car so we have to stop earlier. Fortunately there are three chargers at Swansea West. Unfortunately the CCS was plugged into an i3 which wasn't charging but was set to keep the connector locked in so I couldn't snarf it. I plugged into a slower (AC) charger to get a bit of juice while we went in to wait for the i3 owner to leave. I nipped out after 10 minutes and conveniently they'd gone, so I swapped the car over to the CCS charger and set it going. 37 minutes later and that charger had actually worked, charged me up, and charged me a princely £5.52 for the privilege.

From here we nipped along the A48/A40, dropped in on my sister-in-law to collect a gift for my father, and then got to St Davids at around nine pm. A mere eleven hours after we left Manchester. By comparison, when I drove a Passat, I would leave Manchester at 3pm, drive 100 fewer miles, and arrive at around 9pm, having had a few nice stops for loo breaks and dinner.

Saturday it had been raining quite hard overnight, St Davids has one (count it, ONE) charger compatible with my car (type 2 in this instance) but fortunately it's free to use (please make donation in the tourist-information-office). Unfortunately after the rain, the parking space next to the charger was under a non-trivial amount of water, so poor Rob had to mountaineer next to the charger to plug in without drowning. We set the car charging and went to have a nice breakfast in St Davids. A few hours later, I wandered back up to the car park with Rob and we unplugged and retrieved the car. Top marks for the charger, but a pity the space was a swimming pool.

Sunday morning dawned bright and early and we headed out to Llandewi Velfrey to visit my brother who runs Silverstone Green Energy. We topped up there and then headed to Sarn Parc at his suggestion. It's a nice service area; unfortunately the AC/Chademo charger was giving 'Remote Start Error' so the Leaf there was on the Chademo/CCS charger. However, as luck would have it, that charger was on free-vend, so once we got on the charger (30m later or so) we got to charge for free. Thanks Ecotricity.

From Sarn Parc, we decided that since we'd had such a good experience at Strensham North, we'd go directly there. We arrived with 18m to spare in the "tank" but unfortunately the CCS/Chademo charger was broken (with an error along the lines of PWB1 is 0x0008) and there was an eGolf there which also had wanted to use CCS but had to charge slowly in order to get enough range to get to another charger. As a result we had to sit there for an hour to wait for him to have enough in his 'tank' that he was prepared to let us charge. We then got a "full" 45 minute charge (£1.56, 5.2kWh) which gave us enough to get north again to Chateau Impney (which had been marked working again on Zap-map).

The charge there worked fine (yay) so we drove on north to Keele services. We arrived in the snow/hail/rain (yay northern weather) found the charger, plugged in, tried to set it going using the app, and we were told "Unable to contact charger". So I went through the process again and we were told "Charger in use". It bloody well wasn't in use, because I was plugged into it and it definitely wasn't charging my car. We waited for the rain to die down again and looked at the charger, which at that moment said "Connect vehicle" and then it started up charging the car (yay). We headed in for a loo and dinner break. Unfortunately the app couldn't report on progress but it had started charging so we were confident we'd be fine. More fool us. It had stopped charging moments after we'd left the car and once again we wasted time because it wasn't charging when we thought it was. We returned, discovered the car hadn't charged, but then discovered the charger had switched to free-vend so we charged up again for free, but that was another 40 minute wait.

Finally we got home (via a short stop at the pub) and on Monday I popped along to a GMEV rapid charger, and it worked perfectly as it has every single time I've used it.

So, in conclusion, the journey was reasonably cheap, which is nice, but we had two failed charge attempts on Polar, and several Ecotricity cockups (though they did mostly end up in our favour in terms of money) which cost us around 90 to 120 minutes in each direction. The driving itself (in the Ioniq) was fine and actually meant I wasn't frazzled and unhappy the whole time, but the charging infrastructure is simply not good enough. It's unreliable, Ecotricity don't have support lines at the weekend (or evenings/early mornings), and is far too sparse to be useful when one wishes to travel somewhere not on the motorway network. If I'd tried to drive my usual route, I'd have had to spend four hours in Aberystwyth using my granny charger to put about 40 miles in the tank from a public 3 pin socket.

14 February, 2018 09:18PM by Daniel Silverstone

hackergotchi for Erich Schubert

Erich Schubert

Online Dating Cannot Work Well

Daniel Pocock (via planet.debian.org) points out what tracking services online dating services expose you to. This certainly is an issue, and of course to be expected by a free service (you are the product – advertisers are the customer). Oh, and in case you forgot already: some sites employ fake profiles to retain you as long as possible on their site… But I’d like to point out how deeply flawed online dating is. It is surprising that some people meet successfully there; and I am not surprised that so many dates turn out to not work: they earn money if you remain single, and waste time on their site, not if you are successful.

I am clearly not an expert on online dating, because I am happily married. I met my wife in a very classic setting: offline, in my extended social circle. The motivation for this post is that I am concerned about seeing people waste their time. If you want to improve your life, eliminate apps and websites that are just distraction! And these days, we see more online/app distraction than ever. Smartphone zombie apocalypse.

There are some obvious issues with online dating:

  • you treat people as if they were an object in an online shop. If you want to find a significant other, don’t treat him/her like a shoe.
  • you get too many choices. So if one turns out to be just 99% okay, then you will ignore this in favor of another 100% potential match.
  • you get to choose exactly what you want. No need to tolerate. And of course you know exactly what fits to you, don’t you? No, actually we are pretty bad at that, and a good relationship will require you to be tolerant.
  • inflated expectations: in reality, the 100s turn out to be more like 55% matches, because the image was photoshopped, they are too nervous, and their profile was written by a ghostwriter. Oh, and some of them will simply be chatbots, or employees, or already married, or …. So they don’t even exist.
  • because you are also just 99%, everybody seems to prefer someone else, and you are only the second choice, if chosen at all. You don’t get picked.
  • you will never be comfortable on the actual first date. Because of inflated expectations, it will be disappointing, and you just want to get away.
  • the companies earn money if you are online at their site, not if you are successful.

And yes, there is scientific research backing up these things. For example:

Online Dating: A Critical Analysis From the Perspective of Psychological Science
Eli J. Finkel, Paul W. Eastwick, Benjamin R. Karney, Harry T. Reis, Susan Sprecher, Psychological Science in the Public Interest, 13(1), 3-66.
“the ready access to a large pool of potential partners can elicit an evaluative, assessment-oriented mindset that leads online daters to objectify potential partners and might even undermine their willingness to commit to one of them”

and

Dating preferences and meeting opportunities in mate choice decisions
Belot, Michèle, and Marco Francesconi, Journal of Human Resources 48.2 (2013): 474-508.
“[in speed dating] suggesting that a highly popular individual is almost 5 times more likely to get a date with another highly popular mate than with a less popular individual”

which means that if you are not in the top most attractive accounts, you probably just get swiped away.

If you want to maximize your chances of meeting someone, you probably have to use this approach (vimeo.com).

And you can find many more reports on “Generation Tinder” and its hard time to find partners because of inflated expectations. It is also because these apps and online services make you unhappy, and that makes you unattractive.

Instead, I suggest you extend your offline social circle.

For example, I used to go dancing a lot. Not the “drunken, and too loud music to talk” kind, but ballroom. Not only this can drastically improve your social and communication skills (in particular, non-verbal communication, but also just being natural rather than nervous), but it also provides great opportunities to meet new people with a shared interest. And quite a lot of my friends in dancing got married to a partner they first met at a dance.

For others, other social sport does this job (although many find chit chat at the gym or yoga annoying). Walk your dog in a new area - you may meet some new faces there. But it is best if you get to talk. Apparently, some people love meeting strangers for cooking (where you’d cook and eat antipasti, main dishes, and dessert in different places). Go to some board game nights, etc. I think anything will do that lets you meet new people with at least some shared interest or social connection, and where you are not just going because of dating (because then you’ll be stressed out), but where you can relax. If you are authentically relaxed and happy, this will make you attractive. And hey, maybe someone will want to meet you a second time.

Spending all that time online chatting or swiping certainly will not improve your social skills when you actually have to face someone face-to-face… it is the worst thing to do, if you aren’t already a very open person that easily chats up strangers (and then you won’t need it).

Forget all that online crap you get pulled into all the time. Don't let technology hijack your social life, and make you addicted to scrolling through online profiles of people you are not going to meet. Don't be the product, and neither should your significant other be.

They earn money if you spend time on their website, not if you meet your significant other.

So don’t expect them to work. They don’t need to, and they don’t intend to. Dating is something you need to do offline.

14 February, 2018 07:46PM by Erich Schubert

hackergotchi for Sean Whitton

Sean Whitton

A better defence against the evil maid attack on a laptop

The attack

Laptops need full disc encryption. Indeed, my university has explicitly banned us from keeping any information on our students' grades on our laptops unless we use FDE. Not even comments on essays, apparently, as that counts as grade information.

There must, though, exist unencrypted code that tells the computer how to decrypt everything else. Otherwise you can’t turn your laptop on. If you’re only trying to protect your data from your laptop being permanently stolen, it’s fine for this to be in an unencrypted partition on the laptop’s HDD: when your laptop is stolen, the data you are trying to protect remains encrypted.

An evil maid attack involves the replacement of this unencrypted code with something malicious – perhaps it e-mails data from the encrypted partition to someone who wants it. Of course, if someone has access to your laptop without your knowledge, they can always work around any security scheme you might develop. They might add a hardware keylogger, for example. So why might we want to try to protect against the evil maid attack – haven’t we already lost if someone is able to modify the contents of the unencrypted partition of our hard drive?

Well, the thing about the evil maid attack is that it is very quick and easy to modify the contents of a laptop’s hard drive, as compared to other security-bypassing hardware modifications, which take much longer to perform without detection. Users expect to be able to easily replace their hard drives, so they are usually accessible with the removal of just a single screw. It could take less than five minutes to deploy the evil maid payload.

Laptops are often left unattended for the two or three minutes it would take to deliver an evil maid payload; they are less often left for long enough that deeper hardware modifications could be made. So it is worth taking steps to prevent evil maid attacks.

The best solution

UEFI Secure Boot. But

  • Debian does not support this yet; and
  • my laptop does not have the hardware support anyway.

My current solution

The standard solution is to put the unencrypted hard drive partition on a USB key, and keep that on one’s keychain. Then there is no unencrypted code on the laptop at all; you boot from the USB, which decrypts the root partition, and then you unmount the USB key.

Problem with this solution

The big problem with this is kernel and bootloader upgrades. You have to ensure your USB key is mounted before your package manager upgrades the kernel. This effectively rules out using unattended-upgrades to get security upgrades for the kernel. They must be applied manually. Further, you probably want a backup USB key with the kernel and bootloader on it. Now you have to upgrade both, using commands like apt-get --reinstall.

This is a real maintenance burden and is likely to delay your security upgrades. And the whole point of putting /boot on a USB key was to improve security!

Something better

Recent GRUB is able to decrypt partitions itself. So /boot can reside within your encrypted root partition. GRUB’s setup scripts are smart enough that you can switch over to this in just a few steps:

  1. Move contents of /boot from USB drive into root partition.
  2. Remove/comment /boot from /etc/fstab.
  3. Set GRUB_ENABLE_CRYPTODISK=y in /etc/default/grub.
  4. grub-install /dev/sda
  5. update-grub

It’s still true that there must be unencrypted code that knows how to decrypt the root partition. Where does that go? grub-install is the command that installs that code; where does it put it? The ArchLinux wiki has the answer. If you’re using EFI, it will go in the EFI system partition (ESP). Under BIOS, if your drive is formatted with an MBR, it goes in the “post-MBR gap” between the MBR and the first partition (on drive partitioned with very old tools, this post-MBR gap might be too small to accommodate the larger GRUB image that contains the decryption code; however, drives partitioned with recent tools that “support 1 MiB partition alignment” (including the Debian stretch installer) will be fine – to check fdisk -l and look at where your first partition starts). Under BIOS, if your drive is formatted with a GPT, you have to add a 1MiB BIOS boot partition, and the code goes there.

We’ve resolved the issue of package updates modifying /boot, which now resides in the encrypted root partition. However, this is not all of the picture. If we are using EFI, now we have unencrypted code in the EFI system partition which is subject to the evil maid attack. And if we respond by moving the EFI system partition onto a USB drive, the package update problem reoccurs: the EFI system partition will not always be mounted. If we are using BIOS, the evil maid reoccurs since it is not that much harder to modify the code in the post-MBR gap or the BIOS boot partition.

My proposed solution, pending UEFI Secure Boot, is to use BIOS boot with an MBR partition table, keep /boot in the encrypted root partition, and grub-install to the USB drive. Then dpkg-reconfigure grub-pc and tell it never to grub-install to anything. Finally, set the laptop’s boot order to never try to boot from the HDD, only from USB. (There’s no real advantage of GPT with my simple partitioning setup but I think that would also work fine.)
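
A sketch of that setup, assuming the internal disk is /dev/sda and the USB drive shows up as /dev/sdb (device names are assumptions):

  $ sudo grub-install /dev/sdb        # bootloader goes onto the USB drive only
  $ sudo dpkg-reconfigure grub-pc     # and deselect every install device
  # finally, set the firmware boot order to USB only, never the internal HDD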

How does this solve the various issues I’ve raised? Well, the amount of code on the USB drive is very small (less than 1MiB) so it is much less likely to require manual updates. Kernel updates will modify /boot; only bootloader updates could require me to manually run grub-install to refresh the contents of the USB drive’s post-MBR gap, and those are very infrequent.

Of course, the BIOS could be cracked such that the laptop will boot from the HDD no matter what USB I have plugged in, or even only when some USB is plugged in, but that’s a hardware modification beyond the evil maid, against which we are not trying to protect.

As a nice bonus, the USB drive’s single FAT32 partition is now usable for sneakernet.

14 February, 2018 07:22PM

Renata D'Avila

Debugging MoinMoin and using an IDE

Debugging

When I was creating the cal_action, I didn't quite know how to debug MoinMoin. Could I use pudb with the wiki? I wasn't sure how. To figure out if the code I was writing worked, I ended up consulting the error logs from Apache. It sort of worked, but of course that was very far from ideal. What if I wanted to check something that wasn't an error?

Well, MoinMoin supposedly has a logging module, which lives in moin-1.V.V/wiki/config/logging/, but I simply couldn't get it to work as I wanted.

I searched some more and found a guide on setting up Winpdb Source Level Debugger, but I don't use Windows (really, where is the GNU/Linux guide to debug?), so that didn't help. 😭

But... MoinMoin does offer a guide on setting up a development environment with Eclipse that I ended up following.

Using an IDE

Up until this point, most of the code I had created in Python consisted of simple scripts that could be run and debugged in the terminal. I had used IDLE while I was taking the Python para Zumbis (Python for Zombies) course, but other than that, I just used a code editor (Sublime, then Vim and, finally, Atom) when programming in Python.

When I was taking a tech vocational course, I used Eclipse, an Integrated Development Environment (IDE), to code in Java, but that was it. After I passed the course and didn't need to code in Java again, I simply let go of the IDE.

As it turns out, going back to Eclipse, along with the PyDev plugin - both free software - was what actually helped me in debugging and figuring my way around the MoinMoin macro.

The steps I had to take:

  1. Install eclipse-pydev and its dependencies using Synaptic (the Debian package manager)
  2. Define Python 2.7 as the interpreter in preferences
  3. Create a new workspace
  4. Create a new project
  5. Import the installed MoinMoin into the new project
  6. Configure the new wiki
  7. Run wikiserver.py

To develop the plugins (macro and actions):

  1. Create a new workdir for the Plugins, which goes alongside Moin
  2. Copy the contents from the plugin directory of the wiki to the new directory

On step 2, though, instead of copying I just created a symbolic link to the files I had been working on that were in another directory. It would make no sense to have two copies of the same file in different places on the computer - besides, it would just complicate tracking what changes had been made and where. To create a symbolic link:

$ ln -s PATH-TO-THE-ORIGINAL-FILE PATH-TO-THE-DESTINATION/FILE_ON_DESTINATION
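
For instance (the paths here are purely hypothetical, just to show the shape of the command):

$ ln -s ~/projects/moin-plugins/cal_action.py ~/moin-wiki/data/plugin/action/cal_action.py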

More on symbolic links can be found using the command man ln on Debian's terminal.

With the Eclipse console, I could use print help(request) to figure out what methods would be available to me with the request provided by the macro. With this, I finally began to figure out how to create the response we want (without returning the whole wiki page with it, just the event information in the icalendar format).

Eclipse IDE running the wikiserver.py from MoinMoin and showing the debug output in a terminal

If you don't know what I mean by request/response: in simple terms, when you click something on a webpage (for instance, my ical link at the bottom of the calendar) in your internet browser, you are requesting a resource (the icalendar file). It's up to the server to respond with the appropriate resource (the file) or with a status code explaining why it can't fulfill your request (for instance, you get a 404 error when the page - resource - you're trying to access - requesting - can't be found).

A simplified diagram of a static web server showing the request-response cycle.

Here you can find more information on client-Server overview, by Mozilla web docs.

So now I'm working on constructing that response. Thanks to the Eclipse console, I now know that when I try to use the response.write() method with the return value of my method, I get a TypeError: Expected bytes. I will probably have to transform the result of the method that generates the icalendar into bytes instead of InstanceClass. Well, at least I can say that the choices that were made when writing the ExportPDF macro make more sense to me now.
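
If that is indeed the problem, the fix might be as small as encoding the generated text before writing it; a minimal sketch with made-up helper names:

ical_text = build_icalendar_for(event)        # assumed helper returning a unicode string
response.write(ical_text.encode('utf-8'))     # hand over bytes, as the TypeError demands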

14 February, 2018 07:19PM by Renata

hackergotchi for Daniel Pocock

Daniel Pocock

What is the best online dating site and the best way to use it?

Somebody recently shared this with me: this is what happens when you attempt to access Parship, an online dating site, from the anonymous Tor Browser.

Experian is basically a private spy agency. Their website boasts about how they can:

  • Know who your customers are regardless of channel or device
  • Know where and how to reach your customers with optimal messages
  • Create and deliver exceptional experiences every time

Is that third objective, an "exceptional experience", what you were hoping for with their dating site honey trap? You are out of luck: you are not the customer, you are the product.

When the Berlin wall came down, people were horrified at what they found in the archives of the Stasi. Don't companies like Experian and Facebook gather far more data than this?

So can you succeed with online dating?

There are only three strategies that are worth mentioning:

  • Access sites you can't trust (which includes all dating sites, whether free or paid for) using anonymous services like Tor Browser and anonymous email addresses. Use fake photos and fake all other data. Don't send your real phone number through the messaging or chat facility in any of these sites because they can use that to match your anonymous account to a real identity: instead, get an extra SIM card that you pay for and top up with cash. One person told me they tried this for a month as an experiment, expediently cutting and pasting a message to each contact to arrange a meeting for coffee. At each date they would give the other person a card that apologized for the completely fake profile photos and offered to start over now that they could communicate beyond the prying eyes of the corporation.
  • Join online communities that are not primarily about dating and if a relationship comes naturally, it is a bonus.
  • If you really care about your future partner and don't want your photo to be a piece of bait used to exploit and oppress them, why not expand your real-world activities?

14 February, 2018 05:25PM by Daniel.Pocock

hackergotchi for Jo Shields

Jo Shields

Packaging is hard. Packager-friendly is harder.

Releasing software is no small feat, especially in 2018. You could just upload your source code somewhere (a Git, Subversion, CVS, etc, repo – or tarballs on Sourceforge, or whatever), but it matters what that source looks like and how easy it is to consume. What does the required build environment look like? Are there any dependencies on other software, and if so, which versions? What if the versions don’t match exactly?

Most languages feature solutions to the build environment dependency – Ruby has Gems, Perl has CPAN, Java has Maven. You distribute a manifest with your source, detailing the versions of the dependencies which work, and users who download your source can just use those.

Then, however, we have distributions. If openSUSE or Debian wants to include your software, then it’s not just a case of calling into CPAN during the packaging process – distribution builds need to be repeatable, and work offline. And it’s not feasible for packagers to look after 30 versions of every library – generally a distribution will contain 1-3 versions of a given library, and all software in the distribution will be altered one way or another to build against their version of things. It’s a long, slow, arduous process.

Life is easier for distribution packagers, the more the software released adheres to their perfect model – no non-source files in the distribution, minimal or well-formed dependencies on third parties, swathes of #ifdefs to handle changes in dependency APIs between versions, etc.

Problem is, this can actively work against upstream development.

Developers love npm or NuGet because it’s so easy to consume – asking them to abandon those tools is a significant impediment to developer flow. And it doesn’t scale – maybe a friendly upstream can drop one or two dependencies. But 10? 100? If you’re consuming a LOT of packages via the language package manager, as a developer, being told “stop doing that” isn’t just going to slow you down – it’s going to require a monumental engineering effort. And there’s the other side effect – moving from Yarn or Pip to a series of separate download/build/install steps will slow down CI significantly – and if your project takes hours to build as-is, slowing it down is not going to improve the project.

Therein lies the rub. When a project has limited developer time allocated to it, spending that time on an effort which will literally make development harder and worse, for the benefit of distribution maintainers, is a hard sell.

So, a concrete example: MonoDevelop. MD in Debian is pretty old. Why isn’t it newer? Well, because the build system moved away from a packager ideal so far it’s basically impossible at current community & company staffing levels to claw it back. Build-time dependency downloads went from a half dozen in the 5.x era (somewhat easily patched away in distributions) to over 110 today. The underlying build system changed from XBuild (Mono’s reimplementation of Microsoft MSBuild, a build system for Visual Studio projects) to real MSbuild (now FOSS, but an enormous shipping container of worms of its own when it comes to distribution-shippable releases, for all the same reasons & worse). It’s significant work for the MonoDevelop team to spend time on ensuring all their project files work on XBuild with Mono’s compiler, in addition to MSBuild with Microsoft’s compiler (and any mix thereof). It’s significant work to strip out the use of NuGet and Paket packages – especially when their primary OS, macOS, doesn’t have “distribution packages” to depend on.

And then there’s the integration testing problem. When a distribution starts messing with your dependencies, all your QA goes out the window – users are getting a combination of literally hundreds of pieces of software which might carry your app’s label, but you have no idea what the end result of that combination is. My usual anecdote here is when Ubuntu shipped Banshee built against a new, not-regression-tested version of SQLite, which caused a huge performance regression in random playback. When a distribution ships a broken version of an app with your name on it – broken by their actions, because you invested significant engineering resources in enabling them to do so – users won’t blame the distribution, they’ll blame you.

Releasing software is hard.

14 February, 2018 11:21AM by directhex

Petter Reinholdtsen

Using VLC to stream bittorrent sources

A few days ago, a new major version of VLC was announced, and I decided to check whether it now supported streaming over bittorrent and webtorrent. Bittorrent is one of the most efficient ways to distribute large files on the Internet, and Webtorrent is a variant of Bittorrent using WebRTC as its transport channel, allowing web pages to stream and share files using the same technique. The network protocols are similar but not identical, so a client supporting one of them cannot talk to a client supporting the other. I was a bit surprised with what I discovered when I started to look. Looking at the release notes did not help answer this question, so I started searching the web. I found several news articles from 2013, most of them tracing the news from Torrentfreak ("Open Source Giant VLC Mulls BitTorrent Streaming Support"), about an initiative to pay someone to create a VLC patch for bittorrent support. To figure out what happened with this initiative, I headed over to the #videolan IRC channel and asked if there were some bug or feature request tickets tracking such a feature. I got an answer from lead developer Jean-Baptiste Kempf, telling me that there was a patch but neither he nor anyone else knew where it was. So I searched a bit more, and came across an independent VLC plugin to add bittorrent support, created by Johan Gunnarsson in 2016/2017. Again according to Jean-Baptiste, this is not the patch he was talking about.

Anyway, to test the plugin, I made a working Debian package from the git repository, with some modifications. After installing this package, I could stream videos from The Internet Archive using VLC commands like this:

vlc https://archive.org/download/LoveNest/LoveNest_archive.torrent

The plugin is supposed to handle magnet links too, but since The Internet Archive does not have magnet links and I did not want to spend time tracking down another source, I have not tested it. It can take quite a while before the video starts playing, without any indication from VLC of what is going on. It took 10-20 seconds when I measured it. Sometimes the plugin seems unable to find the correct video file to play, and shows the metadata XML file name in the VLC status line. I have no idea why.

I have created a request for a new package in Debian (RFP) and asked if the upstream author is willing to help make this happen. Now we wait to see what comes out of this. I do not want to maintain a package that is not maintained upstream, nor do I really have time to maintain more packages myself, so I might leave it at this. But I really hope someone steps up to do the packaging, and hope upstream is still maintaining the source. If you want to help, please update the RFP request or the upstream issue.

I have not found any traces of webtorrent support for VLC.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

14 February, 2018 07:00AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

BH 1.66.0-1

A new release of the BH package arrived on CRAN a little earlier: now at release 1.66.0-1. BH provides a sizeable portion of the Boost C++ libraries as a set of template headers for use by R, possibly with Rcpp as well as other packages.

This release upgrades the version of Boost to the Boost 1.66.0 version released recently, and also adds one exciting new library: Boost compute which provides a C++ interface to multi-core CPU and GPGPU computing platforms based on OpenCL.

Besides the usual small patches we need to make (i.e., cannot call abort() etc pp to satisfy CRAN Policy) we made one significant new change in response to a relatively recent CRAN Policy change: compiler diagnostics are no longer suppressed for clang and g++. This may make builds somewhat noisy, so we all may want to keep our ~/.R/Makevars finely tuned, suppressing a bunch of warnings...
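
For example, a ~/.R/Makevars along these lines (the particular warning flags are just an illustration, pick whichever ones your builds actually trip over) keeps things quieter:

  CXXFLAGS   += -Wno-unused-variable -Wno-deprecated-declarations
  CXX11FLAGS += -Wno-unused-variable -Wno-deprecated-declarations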

Changes in version 1.66.0-1 (2018-02-12)

  • Upgraded to Boost 1.66.0 (plus the few local tweaks)

  • Added Boost compute (as requested in #16)

Via CRANberries, there is a diffstat report relative to the previous release.

Comments and suggestions are welcome via the mailing list or the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 February, 2018 01:37AM

February 13, 2018

hackergotchi for Gunnar Wolf

Gunnar Wolf

Is it an upgrade, or a sidegrade?

I first bought a netbook shortly after the term was coined, in 2008. I got one of the original 8.9" Acer Aspire One. Around 2010, my Dell laptop was stolen, so the AAO ended up being my main computer at home — And my favorite computer for convenience, not just for when I needed to travel light. Back then, Regina used to work in a national park and had to cross her province (~6hr by a combination of buses) twice a week, so she had one as well. When she came to Mexico, she surely brought it along. Over the years, we bought new batteries and chargers, as they died over time...

Five years later, it started feeling too slow, and I remember to start having keyboard issues. Time to change.

Sadly, 9" computers were no longer to be found. Even though I am a touch typist, and a big person, I miss several things about the Acer's tiny keyboard (such as being able to cover the diagonal with a single hand, something useful when you are typing while standing). But, anyway, I got the closest I could to it — In July 2013, I bought the successor to the Acer Aspire One: An 10.5" Acer Aspire One Nowadays, the name that used to identify just the smallest of the Acer Family brethen covers at least up to 15.6" (which is not exactly helpful IMO).

Anyway, for close to five years I was also very happy with it. A light laptop that wasn't a burden to me. Also, very important: a computer I could take with me without ever thinking twice. I often tell people I use a computer I got at a supermarket, and that, bought as new, cost me under US$300. That way, were I to lose it (say, if it falls from my bike, if somebody steals it, if it gets in any way damaged, whatever), it's not a big blow. Quite a difference from my two former laptops, both over US$1000.

I enjoyed this computer a lot. So much, I ended up buying four of them (mine, Regina's, and two for her family members).

Over the last few months, I have started being nagged by unresponsiveness, mainly in the browser (blame me, as I typically keep ~40 tabs open). Some keyboard issues... I had started thinking about changing my trusty laptop. Would I want a newfangled laptop-and-tablet-in-one? Just thinking about fiddling with the OS to recognize stuff was a sort-of-turnoff...

This weekend we had an incident with spilled water. After opening and carefully ensuring the computer was dry, it would not turn on. Waited an hour or two, and no changes. Clear sign, a new computer is needed ☹

I went to a nearby store, looked at the offers... And, in part due to the attitude of the salesguy, I decided not to (installing Linux will void any warranty, WTF‽ In 2018‽). Came back home, and... My Acer works again!

But, I know five years are enough. I decided to keep looking for a replacement. After some hesitation, I decided to join what seems to be the elite group in Debian, and go for a refurbished Thinkpad X230.

And that's why I feel this is some sort of "sidegrade" — I am replacing a five year old computer with another five year old computer. Of course, a much sturdier one, built to last, originally sold as an "Ultrabook" (that means, meant for a higher user segment) much more expandable... I'm paying ~US$250, which I'm comfortable with. Looking at several online forums, it is a model quite popular with "knowledgeable" people AFAICT even now. I was hoping, just for the sake of it, to find a X230t (foldable and usable as tablet)... But I won't put too much time into looking for it.

The Thinkpad is 12", which I expect will still fit in the smallish satchel I take to my classes. The machine looks as tweakable as I can expect. Spare parts for replacement are readily available. I have the 4GB I bought for the Acer, which I will probably be able to carry over to this machine, so I'm ready with 8GB. I'm eager to feel the keyboard, as it's often repeated that it's the best in the laptop world (although it's not the classic one anymore). I'm also considering popping ~US$100 more for an SSD drive, and... Well, let's see how much this new sidegrade makes me smile!

13 February, 2018 07:43PM by gwolf

hackergotchi for Riku Voipio

Riku Voipio

Making sense of /proc/cpuinfo on ARM

Ever stared at output of /proc/cpuinfo and wondered what the CPU is?

...
processor : 7
BogoMIPS : 2.40
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant : 0x0
CPU part : 0xd03
CPU revision : 3

Or maybe like:

$ cat /proc/cpuinfo
processor : 0
model name : ARMv7 Processor rev 2 (v7l)
BogoMIPS : 50.00
Features : half thumb fastmult vfp edsp thumbee vfpv3 tls idiva idivt vfpd32 lpae
CPU implementer : 0x56
CPU architecture: 7
CPU variant : 0x2
CPU part : 0x584
CPU revision : 2
...
The bits "CPU implementer" and "CPU part" could be mapped to human understandable strings. But the Kernel developers are heavily against the idea. Therefor, to the next idea: Parse in userspace. Turns out, there is a common tool almost everyone has installed does similar stuff. lscpu(1) from util-linux. So I proposed a patch to do ID mapping on arm/arm64 to util-linux, and it was accepted! So using lscpu from util-linux 2.32 (hopefully to be released soon) the above two systems look like:

Architecture: aarch64
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 1
Vendor ID: ARM
Model: 3
Model name: Cortex-A53
Stepping: r0p3
CPU max MHz: 1200.0000
CPU min MHz: 208.0000
BogoMIPS: 2.40
L1d cache: unknown size
L1i cache: unknown size
L2 cache: unknown size
NUMA node0 CPU(s): 0-7
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid

And

$ lscpu
Architecture: armv7l
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Vendor ID: Marvell
Model: 2
Model name: PJ4B-MP
Stepping: 0x2
CPU max MHz: 1333.0000
CPU min MHz: 666.5000
BogoMIPS: 50.00
Flags: half thumb fastmult vfp edsp thumbee vfpv3 tls idiva idivt vfpd32 lpae
As we can see, lscpu is quite versatile and can show more information than just what is available in cpuinfo.
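
To illustrate what the new mapping amounts to, here is a toy sketch (these are not lscpu's actual tables, just the two entries visible in the examples above):

# Toy lookup tables keyed on the raw cpuinfo IDs; lscpu ships much larger
# tables covering many vendors and cores.
IMPLEMENTERS = {0x41: "ARM", 0x56: "Marvell"}
PARTS = {(0x41, 0xd03): "Cortex-A53", (0x56, 0x584): "PJ4B-MP"}

def decode(implementer, part):
    vendor = IMPLEMENTERS.get(implementer, "unknown")
    model = PARTS.get((implementer, part), "unknown")
    return vendor, model

print(decode(0x41, 0xd03))   # ('ARM', 'Cortex-A53')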

13 February, 2018 07:18PM by Riku Voipio (noreply@blogger.com)

Reproducible builds folks

Reproducible Builds: Weekly report #146

Here's what happened in the Reproducible Builds effort between Sunday February 4 and Saturday February 10 2018:

Media coverage

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

63 package reviews have been added, 26 have been updated and 19 have been removed this week, adding to our knowledge about identified issues.

A new issue type has been added:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (34)
  • Antonio Terceiro (1)
  • James Cowgill (1)
  • Matthias Klose (1)

diffoscope development

In addition, Juliana—our Outreachy intern—continues her work on parallel processing.

disorderfs development

jenkins.debian.net development

Misc.

This week's edition was written by Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

13 February, 2018 06:00PM

Petter Reinholdtsen

Version 3.1 of Cura, the 3D print slicer, is now in Debian

A new version of the 3D printer slicer software Cura, version 3.1.0, is now available in Debian Testing (aka Buster) and Debian Unstable (aka Sid). I hope you find it useful. It was uploaded over the last few days, and the latest upload will enter testing tomorrow. See the release notes for the list of bug fixes and new features. Version 3.2 was announced 6 days ago. We will try to get it into Debian as well.

More information related to 3D printing is available on the 3D printing and 3D printer wiki pages in Debian.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

13 February, 2018 05:20AM

February 12, 2018

Jeremy Bicha

GNOME Tweaks 3.28 Progress Report 2

GNOME 3.28 has reached its 3.27.90 milestone. This milestone is important because it means that GNOME is now at API Freeze, Feature Freeze, and UI Freeze. From this point on, GNOME shouldn’t change much, but that’s good because it allows for distros, translators, and documentation writers to prepare for the 3.28 release. It also gives time to ensure that new features are working correctly and as many important bugs as possible are fixed. GNOME 3.28 will be released in approximately one month.

If you haven’t read my last 3.28 post, please read it now. So what else has changed in Tweaks this release cycle?

Desktop

As has been widely discussed, Nautilus itself will no longer manage desktop icons in GNOME 3.28. The intention is for this to be handled in a GNOME Shell extension. Therefore, I had to drop the desktop-related tweaks from GNOME Tweaks since the old methods don’t work.

If your Linux distro will be keeping Nautilus 3.26 a bit longer (like Ubuntu), it’s pretty easy for distro maintainers to re-enable the desktop panel so you’ll still get all the other 3.28 features without losing the convenient desktop tweaks.

As part of this change, the Background tweaks have been moved from the Desktop panel to the Appearance panel.

Touchpad

Historically, laptop touchpads had two or three physical hardware buttons just like mice. Nowadays, it’s common for touchpads to have no buttons. At least on Windows, the historical convention was a click in the bottom left would be treated as a left mouse button click, and a click in the bottom right would be treated as a right mouse button click.

Macs are a bit different in handling right click (or secondary click as it’s also called). To get a right-click on a Mac, just click with two fingers simultaneously. You don’t have to worry about whether you are clicking in the bottom right of the touchpad so things should work a bit better when you get used to it. Therefore, this is even used now in some Windows computers.

My understanding is that GNOME used Windows-style “area” mouse-click emulation on most computers, but there was a manually updated list of computers where the Mac style “fingers” mouse-click emulation was used.

In GNOME 3.28, the default is now the Mac style for everyone. For the past few years, you could change the default behavior in the GNOME Tweaks app, but I’ve redesigned the section now to make it easier to use and understand. I assume there will be some people who prefer the old behavior so we want to make it easy for them!

GNOME Tweaks 3.27.90 Mouse Click Emulation

For more screenshots (before and after), see the GitLab issue.
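
If you end up preferring the old behaviour, the underlying GSettings key can also be flipped from a terminal; assuming I have the schema name right, something like:

  gsettings set org.gnome.desktop.peripherals.touchpad click-method 'areas'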

Other

There is one more feature pending for Tweaks 3.28, but it’s incomplete so I’m not going to discuss it here yet. I’ll be sure to link to a blog post about it when it’s ready though.

For more details about what’s changed, see the NEWS file or the commit log.

12 February, 2018 05:35PM by Jeremy Bicha

hackergotchi for Julien Danjou

Julien Danjou

Scaling a polling Python application with asyncio

This article is a follow-up of my previous blog post about scaling a large number of connections. If you don't remember, I was trying to solve one of my followers' problem:

It so happened that I'm currently working on scaling some Python app. Specifically, now I'm trying to figure out the best way to scale SSH connections - when one server has to connect to thousands (or even tens of thousands) of remote machines in a short period of time (say, several minutes).

How would you write an application that does that in a scalable way?

In the first article, we wrote a program that could handle large scale of this problem by using multiple threads. While this worked pretty well, this had some severe limitations. This time, we're going to take a different approach.

The job

The job has not changed and is still about connecting to a remote server via ssh. This time, rather than faking it by using ping instead, we are going to connect for real to an ssh server. Once connected to the remote server, the mission will be to run a single command. For the sake of this example, the command that will be run here is just a simple "echo hello world".

Using an event loop

This time, rather than leveraging threads, we are using asyncio. Asyncio is the leading Python event loop system implementation. It allows executing multiple functions (named coroutines) concurrently. The idea is that each time a coroutine performs an I/O operation, it yields back the control to the event loop. As the input or output might be blocking (e.g., the socket has no data yet to be read), the event loop will reschedule the coroutine as soon as there is work to do. In the meantime, the loop can schedule another coroutine that has something to do – or wait for that to happen.

Not all libraries are compatible with the asyncio framework. In our case, we need an ssh library that has support for asyncio. It happens that AsyncSSH is a Python library that provides ssh connection handling support for asyncio. It is particularly easy to use, and the documentation has plenty of examples.

Here's the function that we're going to use to execute our command on a remote host:

import asyncssh

async def run_command(host, command):
    async with asyncssh.connect(host) as conn:
        result = await conn.run(command)
        return result.stdout


The function run_command runs a command on a remote host once connected to it via ssh. It then returns the standard output of the command. The function uses the async and await keywords, introduced in Python 3.5 for use with asyncio. They indicate that the called functions are coroutines that might block, and that control is yielded back to the event loop.

As I don't own hundreds of servers that I can connect to, I will be using a single remote server as the target – but the program will connect to it multiple times. The server is at a latency of about 6 ms, so that'll magnify the results a bit.

The first version of this program is simple and stupid. It'll run the run_command function N times serially, by providing the tasks one at a time to the asyncio event loop:

import asyncio

loop = asyncio.get_event_loop()

outputs = [
    loop.run_until_complete(
        run_command("myserver", "echo hello world %d" % i))
    for i in range(200)
]
print(outputs)


Once executed, the program prints the following:

$ time python3 asyncssh-test.py
['hello world 0\n', 'hello world 1\n', 'hello world 2\n', … 'hello world 199\n']
python3 asyncssh-test.py 6.11s user 0.35s system 15% cpu 41.249 total


It took 41 seconds to connect 200 times to the remote server and execute a simple printing command.

To make this faster, we're going to schedule all the coroutines at the same time. We just need to feed the event loop with the 200 coroutines at once. That will give it the ability to schedule them efficiently.

outputs = loop.run_until_complete(asyncio.gather(
    *[run_command("myserver", "echo hello world %d" % i)
      for i in range(200)]))
print(outputs)


By using asyncio.gather, it is possible to pass a list of coroutines and wait for all of them to be finished. Once run, this program prints the following:

$ time python3 asyncssh-test.py
['hello world 0\n', 'hello world 1\n', 'hello world 2\n', … 'hello world 199\n']
python3 asyncssh-test.py 4.90s user 0.34s system 35% cpu 14.761 total


This version only took ⅓ of the original execution time to finish! As a fun note, the main limitation here is that my remote server has trouble handling more than 150 connections in parallel, so this program alone is a bit tough on it.
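
If the target cannot cope with that much parallelism, a natural refinement (not part of the program above, just a sketch) is to cap the number of in-flight connections with an asyncio.Semaphore:

sem = asyncio.Semaphore(100)   # allow at most 100 simultaneous ssh sessions

async def run_command_limited(host, command):
    async with sem:            # waits here once 100 coroutines hold the semaphore
        return await run_command(host, command)

Feeding run_command_limited to asyncio.gather works exactly like before, but the server only ever sees 100 connections at a time.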

Scalability

To show how great this method is, I've built a chart below that shows the difference of execution time between the two approaches, depending on the number of hosts the application has to connect to.

The trend lines highlight the difference of execution time and how important the concurrency is here. For 10,000 nodes, the time needed for a serial execution would be around 40 minutes whereas it would be only 7 minutes with a cooperative approach – quite a difference. The concurrent approach allows executing one command 205 times a day rather than only 36 times!

That was the second step

Using an event loop for tasks that can run concurrently due to their I/O intensive nature is really a great way to maximize the throughput of a program. This simple change made the program 6× faster.

Anyhow, this is not the only way to scale a Python program. There are a few other options available on top of this mechanism – I've covered those in my book Scaling Python, if you're interested in learning more!

Until then, stay tuned for the next article of this series!

12 February, 2018 04:58PM by Julien Danjou

hackergotchi for Jo Shields

Jo Shields

Long-term distribution support?

A question: how long is reasonable for an ISV to keep releasing software for an older distribution? When is it fair for them to say “look, we can’t feasibly support this old thing any more”.

For example, Debian 7 is still considered supported, via the Debian LTS project. Should ISV app vendors keep producing builds built for Debian 7, with its ancient versions of GCC or CMake, rudimentary C++11 support, ARM64 bugs, etc? How long is it fair to expect an ISV to keep spitting out builds on top of obsolete toolchains?

Let’s take Mono as an example, since, well, that’s what I’m paid to care about. Right now, we do builds for:

  • Debian 7 (oldoldstable, supported until May 2018)
  • Debian 8 (oldstable, supported until April 2020)
  • Debian 9 (stable, supported until June 2022)
  • Raspbian 8 (oldstable, supported until June 2018)
  • Raspbian 9 (stable, supported until June 2020)
  • Ubuntu 12.04 (EOL unless you pay big bucks to Canonical – but was used by TravisCI long after it was EOL)
  • Ubuntu 14.04 (LTS, supported until April 2019)
  • Ubuntu 16.04 (LTS, supported until April 2021)
  • CentOS 6 (LTS, supported until November 2020)
  • CentOS 7 (LTS, supported until June 2024)

Supporting just these is a problem already. CentOS 6 builds lack support for TLS 1.2+, as that requires GCC 4.7+ – but I can’t just drop it, since Amazon Linux (used by a surprising number of people on AWS) is based on CentOS 6. Ubuntu 12.04 support requires build-dependencies on a secret Mozilla-team maintained copy of GCC 4.7 in the archive, used to keep building Firefox releases.

Why not just use the CDN analytics to form my opinion? Well, it seems most people didn’t update their sources.list after we switched to producing per-distribution binaries some time around May 2017 – so they’re still hardcoding wheezy in their sources. And I can’t go by user agent to determine their OS, as Azure CDN helpfully aggregates all of them into “Debian APT-HTTP/1.x” rather than giving me the exact version numbers I’d need to cross-reference to determine OS release.

So, with the next set of releases coming on the horizon (e.g. Ubuntu 18.04), at what point is it okay to say “no more, sorry” to an old version?

Answers on a postcard. Or the blog comments. Or Twitter. Or Gitter.

12 February, 2018 03:55PM by directhex

hackergotchi for Michal Čihař

Michal Čihař

New projects on Hosted Weblate

Hosted Weblate also provides free hosting for free software projects. The hosting requests queue had grown too long and waited for more than a month, so it's time to process it and include new projects. I hope that gives you good motivation to spend the Christmas break translating free software.

This time, the newly hosted projects include:

If you want to support this effort, please donate to Weblate, especially recurring donations are welcome to make this service alive. You can do that easily on Liberapay or Bountysource.

Filed under: Debian English SUSE Weblate

12 February, 2018 11:00AM

Russ Allbery

February Haul

Most of this is the current StoryBundle: Black Narratives, in honor of Black History Month in the United States. But there's also a random selection of other things that I couldn't resist.

(I'm still reading this year too! Just a touch behind on reviews at the moment.)

Alicia Wright Brewster — Echo (sff)
T. Thorne Coyle — To Raise a Clenched Fist to the Sky (sff)
T. Thorne Coyle — To Wrest Our Bodies from the Fire (sff)
Julie E. Czerneda — Riders of the Storm (sff)
Julie E. Czerneda — Rift in the Sky (sff)
Terah Edun — Blades of Magic (sff)
Terah Edun — Blades of Illusion (sff)
L.L. Farmer — Black Borne (sff)
Jim C. Hines — Goblin Quest (sff)
Jim C. Hines — The Stepsister Scheme (sff)
Nalo Hopkinson — The Salt Roads (sff)
S.L. Huang — Root of Unity (sff)
Ursula K. Le Guin — Steering the Craft (nonfiction)
Nnedi Okorafor — Kabu-Kabu (sff collection)
Karen Lord — Redemption in Indigo (sff)
L. Penelope — Angelborn (sff)
Elizabeth Wein — The Pearl Thief (mainstream)

I'm slowly reading through the Czerneda that I missed, since I liked the Species Imperative series so much. Most of it isn't that good, and Czerneda has a few predictable themes, but it's fun and entertaining.

The Wein is a prequel to Code Name Verity, so, uh, yes please.

12 February, 2018 03:49AM

February 11, 2018

pgpcontrol 2.6

This is the legacy bundle of Usenet control message signing and verification tools, distributed primarily via ftp.isc.org (which hasn't updated yet as I write this). You can see the files for the current release at archives.eyrie.org.

This release adds support for using gpg for signature verification, provided by Thomas Hochstein, since gpgv may no longer support insecure digest algorithms.

Honestly, all the Perl Usenet control message code I maintain is a mess and needs some serious investment in effort, along with a major migration for the Big Eight signing key (and probably the signing key for various other archives). A lot of this stuff hasn't changed substantially in something like 20 years now, still supports software that no one has used in eons (like the PGP 2.6.3i release), doesn't use modern coding idioms, doesn't have a working test suite any longer, and is full of duplicate code to mess about with temporary files to generate signatures.

The formal protocol specification is also a pretty old and scanty description from the original project, and really should be a proper RFC.

I keep wanting to work on this, and keep not clearing the time to start properly and do a decent job of it, since it's a relatively large effort. But this could all be so much better, and I could then unify about four different software distributions I currently maintain, or at least layer them properly, and have something that would have a modern test suite and could be packaged properly. And then I could start a migration for the Big Eight signing key, which has been needed for quite some time.

Not sure when I'm going to do this, though, since it's several days of work to really get started. Maybe my next vacation?

(Alternately, I could just switch everything over to Julien's Python code. But I have a bunch of software already written in Perl of which the control message processing is just a component, so it would be easier to have a good Perl implementation.)

11 February, 2018 08:26PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Chess960 opening position analysis

Magnus Carlsen and Hikaru Nakamura are playing an unofficial Chess960 world championship, so I thought I'd have a look at what the white advantage is for the 960 different positions. Obviously, you can't build anything like a huge opening book, but I let Stockfish run on the positions for increasing depths until I didn't have time anymore (in all, it was a little over a week, multiplied by 20 cores plus hyperthreading).

I've been asked to publish the list, so here it is. It's calculated deterministically using a prerelease version of Stockfish 9 (git from about a month before release), using only a single thread and consistently cleared 256 MB hash. All positions are calculated to depth 39, which is about equivalent to looking at the position for 2–3 hours, but a few are at depth 40. (At those depths, the white advantage varies from 0.01 to 0.57 pawns.) Unfortunately, I didn't save the principal variation, so it can be hard to know exactly why it thinks one position is particularly good for white, but generally, the best positions for white contain early attacks that are hard to counter without putting the pieces in awkward positions.

One thing you may notice is that the evaluation varies quite a lot between depths. This means you shouldn't take the values as absolute gospel; it's fairly clear that the +0.57 position is better than the +0.01 position, but the difference between +0.5 and +0.4 is much less clear-cut, as you can easily see one position varying between e.g. +0.2 and +0.5.

Note that my analysis page doesn't use this information, since Stockfish doesn't have persistent hash; it calculates from scratch every game.

11 February, 2018 06:54PM

Petter Reinholdtsen

How hard can æ, ø and å be?

We write 2018, and it is 30 years since Unicode was introduced. Most of us in Norway have come to expect the use of our alphabet to just work with any computer system. But it is apparently beyond the reach of the computers printing receipts at a restaurant. Recently I visited a Peppes pizza restaurant, and noticed a few details on the receipt. Notice how 'ø' and 'å' are replaced with strange symbols in 'Servitør', 'Å BETALE', 'Beløp pr. gjest', 'Takk for besøket.' and 'Vi gleder oss til å se deg igjen'.
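
One plausible mechanism for this kind of garbling (purely an assumption on my part, I have no idea what Peppes' receipt printer actually does) is UTF-8 text being pushed through software that still assumes a legacy 8-bit encoding:

print("Servitør".encode("utf-8").decode("latin-1"))   # prints 'ServitÃ¸r'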

I would say that this state of affairs is past sad and well into embarrassing.

I removed personal and private information to be nice.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

11 February, 2018 04:10PM

hackergotchi for Shirish Agarwal

Shirish Agarwal

A hack and a snowflake

This would be a long post. Before starting, I would like to share or explain that I am not a native English speaker. I say this as what I’m going to write, may or may not be the same terms or meaning that others understand as I’m an Indian who uses American English + British English.

So it’s very much possible that I have picked up all the bad habits of learning and understanding and not any of the good ones in writing English as bad habits as elsewhere in life are the easier ones to pick up. Also I’m not a trained writer or have taken any writing lessons ever apart from when I was learning English in school as a language meant to communicate.

A few days back, I was reading an opinion piece (I have tried to find it again but failed; if anybody finds it, please share it in the comments and I will link it here). A feminist author described how some poets preached or portrayed violence against women in their writings and poems, including some of the most famous poets we admire today. The author of the article was talking about poets and artists like William Wordsworth and others. She picked out particular poems from their body of work which seemed to convey that message. Going further than that, she chose to separate the poet from their larger body of work. I wish she had cared enough to also look a bit more deeply into the poet's life rather than labeling him from one poem among the perhaps tens or hundreds he may have written. I confess I haven't read much of Wordsworth beyond what was in school, and from those poems he seemed to be a nature lover rather than the sexual predator he was/is being made out to be. It is possible that I have been misinformed.

 

Meaning of author

– Courtesy bluediamondgallery.com – CC-by-SA

The reason I say this is because I’m a hack. A hack in the writing department or ‘business’ is somebody who is not supposed to tell any back story and just go away. Writers though, even those who write short stories need to have a backstory at least in the back of their mind about each character that s/he introduces into the story. Because it’s a short story s/he cannot reveal where they come from but only point at the actions they are doing. I had started the process two times and two small stories got somewhat written through me but I stopped both the times midway.

While I was hammering through the keyboard for the stories, it was as if the characters themselves were taking me on a journey which was dark and into which I didn't want to venture further. I had heard this from quite a few authors, a few of them published as well, and I had dismissed it as a kind of sales pitch or something.

When I did write those stories for the sake of argument, I realized the only thing that the author has is an idea and the overall arc of the story. You have to have complete faith in your characters even if they lead you astray or into unexpected places. The characters speak to you and through you rather than the other way around. It is the maddest and most mysterious of journeys, and it seemed the characters liked the darker tones more than the lighter ones. I do not know whether it is the same for all writers/hacks (at least in the beginning) or just me? Or maybe it's a cathartic expression. I still hope to do more stories and even complete them, even if they have dark overtones, just to understand the process. By dark I mean violence here.

That is why I asked that maybe if the author of the opinion piece had taken the pain and shared more of the context surrounding the poem themselves as to when did Mr. Wordsworth wrote that poem or other poets did, perhaps I could identify with that as well as many writers/authors themselves.

I was disappointed with the article in two ways, in one way they were dismissing the poet/the artist and yet they seemed or did not want to critique/ban all the other works because

a. either they liked the major part of the work or

b. they knew the audience to whom they were serving the article to probably likes the other works and chose not to provoke them.

Another point I felt was: when you are pushing and punishing poets, are you doing so because they are softer targets now more than ever? Almost all the poets she talked about are unfortunately not in this mortal realm anymore. On the business side of things, the publishing industry is in for grim times. The poets and the poems are perhaps the easiest targets at the moment, as they are not the center of the world anymore as they used to be. Both in the United States as well as here in India, literature, or even fiction for that matter, has been booted out of the educational system. The point I'm trying to make here is that publishers are not, and would not be, in a position to protect authors or even themselves when such articles are written and opinions are being formed. Also see https://scroll.in/article/834652/debut-authors-heres-why-publishers-are-finding-it-difficult-to-market-your-books for an Indian viewpoint of the same.

I also did not understand what the author wanted when she named and shamed the poets. If you really want to name and shame people who have committed and are committing acts of violence against women, then the horror film genre, apart from the action genre, would be an easy target. In many horror movies, in Hollywood, Bollywood and perhaps other countries as well, the female protagonist/lead is often molested, sexually assaulted, maimed, killed, cannibalized and so on and so forth. Should we ban such movies forthwith?

Also does ‘banning’ a work of art really work ? The movie ‘Padmavaat‘ has been mired in controversies due to a cultural history where as the story/myth goes ‘Rani Padmavati’ (whether she is real or an imaginary figure is probably fodder for another blog post) when confronted with Khilji committed ‘Jauhar’ or self-immolation so that she remains ‘pure’. The fanatics rally around her as she is supposed to have paid the ultimate price, sacrificing herself. But if she were really a queen, shouldn’t she have thought of her people and lived to lead the fight, run away and fight for another day or if she was cunning enough to worm her way into Khilji’s heart and topple him from within. The history and politics of those days had all those options in front of her if she were a real character, why commit suicide ?

Because of the violence being perpetuated around Rani Padmavati there hasn’t been either a decent critique either about the movie or the historical times in which she lived. It perhaps makes the men of the land secure in the knowledge that the women then and even now should kill themselves than either falling in love with the ‘other’ (a Muslim) romantically thought of as the ‘invader’ a thought which has and was perpetuated by the English ever since the East India company came for their own political gains. Another idea being women being pure, oblivious and not ‘devious’ could also be debated.

(sarcasm) Of course, the idea that Khilji and Rani Padmavati living in the same century is not possible by actual historians is too crackpot to believe as the cultural history wins over real history. (/sarcasm)

The reason this whole thing got triggered was the ‘snowflake’ comments on https://lwn.net/Articles/745817/ . The article itself is a pretty good read as even though I’m an outsider to how the kernel comes together and although I have the theoretical knowledge about how the various subsystem maintainers pull and push patches up the train and how Linus manages to eke out a kernel release every 3-4 months, I did have an opportunity to observe how fly-by-contributors are ignored by subsystem-maintainers.

About a decade or less ago, my 2-button wheel Logitech mouse at the time was going down and I had no idea why sometimes the mouse used to function and why sometimes it didn’t. A hacker named ‘John Hill’ put up a patch. What the patch did essentially was trigger warnings on the console when the system was unable to get signal from my 2-button wheel mouse. I did comment and try to get it pushed into the trunk but it didn’t and there was also no explanation by anyone why the patch was discarded. I did come to know while building the mouse module as to how many types and models of mouse there were which was a revelation to me at that point in time. By pushing I had commented on where the patch was posted and the mailing list where threads for patches are posted and posted couple of times that the patch by John Hill should be considered but nobody either got back to me or him.

It’s been a decade since then and still we do not have any proper error reporting process AFAIK if the mouse/keyboard fails to transmit messages/signals to the system.

That apart the real long thread was about the term ‘snowflake’. I had been called that in the past but had sort of tuned it out as I didn’t know what the term means/meant.

When I went to Wikipedia and looked up 'snowflake', it came with three meanings for the same word.

a. A unique crystalline shape of white

b. A person who believes that s/he is unique and hence entitled

c. A person who is weak or thin-skinned (overly sensitive)

I believe we are all of the above; the only difference is perhaps one of degree. If we weren't meant to be unique we wouldn't have been given a personality, a body type, a sense of reasoning and logic and, perhaps most important, a sense of what is right or wrong. With being thick-skinned also comes the inability to love and have empathy with others.

To round off on a somewhat hopeful note, I was re-reading, maybe for the umpteenth time, 'Sacred Stone', an action thriller in which four Hindus, along with a corrupt, wealthy and hurt billionaire, try to blow up the most sacred sites of the Muslims, Mecca and Medina. While I don't know whether it would be possible or not, I would for sure like to see people using the pious days for reflection. I don't have to do anything, just be.

Similarly with the Spanish pilgrimage as shown in The Way. I don’t think any of my issues would be resolved by being in either of the two places, but it may open paths within me which I have not yet explored or have long forgotten.

To end, I would like to share two interesting articles that I saw/read over the week: the first one is about the ‘Alphonso’ and the other about Samarkhand. I hope you enjoy both articles.

11 February, 2018 11:04AM by shirishag75

February 10, 2018

hackergotchi for Steve Kemp

Steve Kemp

Decoding 433MHz transmissions with software-defined radio

This blog post is a bit of a diversion, and builds upon my previous entry about using 433MHz radio transmitters and receivers with Arduino and/or ESP8266 devices.

As mentioned in that post, I've recently been overhauling my in-house IoT buttons, and I decided to go down the route of using commercially-available buttons which broadcast signals via radio, rather than using IR or WiFi. The advantage is that I don't need to build any devices, or worry about 3D-printing a case - the commercially available buttons are cheap, water-proof, portable, and reliable, so why not use them? Ultimately I bought around ten buttons, along with radio-receiver and radio-transmitter modules for my ESP8266 device. I wrote code to run on my device to receive the transmissions, decode the device-ID, and take different actions based upon the specific button pressed.

In the gap between buying the buttons (read: radio transmitters) and waiting for the transmitter/receiver modules I intended to connect to my ESP8266/Arduino device(s), I remembered that I'd previously bought a software-defined radio receiver, and figured I could use it to receive and react to the transmissions directly on my PC.

The dongle I'd bought in the past was a simple USB-device which identifies itself as follows when inserted into my desktop:

  [17844333.387774] usb 3-9: New USB device found, idVendor=0bda, idProduct=2838
  [17844333.387777] usb 3-9: New USB device strings: Mfr=1, Product=2, SerialNumber=3
  [17844333.387778] usb 3-9: Product: RTL2838UHIDIR
  [17844333.387778] usb 3-9: Manufacturer: Realtek
  [17844333.387779] usb 3-9: SerialNumber: 00000001

At the time I bought it I wrote a brief blog post, which described tracking aircraft, and I said "I know almost nothing about SDR, except that it can be used to let your computer do stuff with radio."

So my first step was finding some suitable software to listen on the right frequency and ideally decode the transmissions. A brief search led me to the following repository:

The RTL_433 project is pretty neat as it allows receiving transmissions and decoding them. Of course it can't decode everything, but it has the ability to recognize a bunch of commonly-used hardware, and when it does it outputs the payload in a useful way, rather than just dumping a bitstream/bytestream.

Once you've got your USB dongle plugged in, and you've built the project, you can start receiving and decoding all discovered broadcasts like so:

  skx@deagol ~$ ./build/src/rtl_433 -U -G
  trying device  0:  Realtek, RTL2838UHIDIR, SN: 00000001
  Found Rafael Micro R820T tuner
  Using device 0: Generic RTL2832U OEM
  Exact sample rate is: 250000.000414 Hz
  Sample rate set to 250000.
  Bit detection level set to 0 (Auto).
  Tuner gain set to Auto.
  Reading samples in async mode...
  Tuned to 433920000 Hz.
  ...

Here we've added flags:

  • -G
    • Enable all decoders. So we're not just listening for traffic at 433MHz, but we're actively trying to decode the payload of the transmissions.
  • -U
    • Timestamps are in UTC

Leaving this running for a few hours, I noted that several nearby cars are transmitting data about their tyre pressure:

  2018-02-10 11:53:33 :      Schrader       :      TPMS       :      25
  ID:          1B747B0
  Pressure:    2.500 bar
  Temperature: 6 C
  Integrity:   CRC

The second log is from running with "-F json", which causes the output to be generated in JSON format:

  {"time" : "2018-02-10 09:51:02",
   "model" : "Toyota",
   "type" : "TPMS",
   "id" : "5e7e0637",
   "code" : "63e6026d",
   "mic" : "CRC"}

In both cases we see "TPMS", which according to Wikipedia stands for Tyre Pressure Monitoring System. I'm a little shocked to receive this data, unencrypted!

Other events also became visible when I left the scanner running; this one is presumably from some kind of temperature sensor a neighbour has running:

  2018-02-10 13:19:08 : RF-tech
     Id:              0
     Battery:         LOW
     Button:          0
     Temperature:     0.0 C

Anyway I have a bunch of remote-controlled sockets, branded "NEXA", which look like this:

Radio-Controlled Sockets

When I press the remote I can see the transmissions and program my PC to react to them:

  2018-02-11 07:31:20 : Nexa
    House Code:  39920705
    Group:  1
    Channel: 3
    State:   ON
    Unit:    2
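
To actually react to these presses from a script, one approach is to run rtl_433 with JSON output and match on the decoded fields. Below is a minimal sketch of that idea, not the code I actually run: the binary name, the JSON field names emitted by the Nexa decoder, the house code, and the action taken are all assumptions you would need to adapt.

  import { spawn } from "child_process";
  import { createInterface } from "readline";

  // Run rtl_433 with all decoders enabled and JSON output, one event per line.
  const rtl = spawn("rtl_433", ["-U", "-G", "-F", "json"],
                    { stdio: ["ignore", "pipe", "inherit"] });
  const lines = createInterface({ input: rtl.stdout! });

  // Placeholder values: substitute the house code / unit of your own remote.
  const MY_HOUSE_CODE = "39920705";
  const MY_UNIT = 2;

  lines.on("line", (line: string) => {
    let event: any;
    try {
      event = JSON.parse(line);
    } catch (err) {
      return; // ignore any non-JSON status output
    }

    // Field names are assumed from the JSON output shown above; other
    // decoders (and rtl_433 versions) may name them differently.
    if (event.model === "Nexa" &&
        String(event.id) === MY_HOUSE_CODE &&
        event.unit === MY_UNIT) {
      console.log(`button state: ${event.state}`);
      // ...take whatever action you like here, e.g. toggle a light.
    }
  });

With something like this running, each button press can kick off an arbitrary command on the PC.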

In conclusion:

  • SDR can be used to easily sniff & decode cheap and commonly available 433MHz-based devices.
  • "Modern" cars transmit their tyre-pressure, apparently!
  • My neighbours can probably overhear my button presses.

10 February, 2018 10:00PM

hackergotchi for Norbert Preining

Norbert Preining

In memoriam Staszek Wawrykiewicz

We have lost a dear member of our community, Staszek Wawrykiewicz. I got notice that our friend died in an accident the other day. My heart stopped for an instant when I read the news, it cannot be – one of the most friendly, open, heart-warming friends has passed.

Staszek was an active member of the Polish TeX community, and an incredibly valuable TeX Live team member. His insistence and perseverance saved TeX Live from many disasters and bugs. Although I had been in contact with Staszek over the TeX Live mailing lists for some years, I met him in person for the first time at my first ever BachoTeX, EuroBachoTeX 2007. His friendliness, his openness to all new things, his inquisitiveness – all took a great place in my heart.

I dearly remember the evenings with Staszek and our Polish friends in one of the Bachotek huts, around the bonfire, him playing the guitar and singing traditional and not-so-traditional Polish music, inviting everyone to join in and enjoy it together. Rarely have technical and social abilities found such a nice combination as they did in Staszek.

Despite his age he often felt like someone in his twenties, always ready for a joke, always ready to party, always ready to have fun. It is this kind of attitude I would like to carry with me as I get older. Thanks for giving me a great example.

The few times I managed to come to BachoTeX from far-away Japan, Staszek was as welcoming as ever – it is the mark of close friends that even if you haven’t seen each other for a long time, the moment you meet it feels like it was just yesterday. And wherever you went during a BachoTeX conference, his traces and his humour were always present.

It is a very sad loss for all of those who knew Staszek. If I could, I would board a plane right now and join the final service for a great man, a great friend.

Staszek, we will miss you. BachoTeX will miss you, TeX Live will miss you, I will miss you badly. A good friend has passed away. May you rest in peace.

Photo credit goes to various people attending BachoTeX conferences.

10 February, 2018 01:05PM by Norbert Preining

hackergotchi for Junichi Uekawa

Junichi Uekawa

Writing chrome extensions.

Writing chrome extensions. I am writing JavaScript with lots of async/await, and I forget. It's also a bit annoying that many system-provided functions don't support promises yet.
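
One common workaround is to wrap the callback-style chrome.* functions in Promises so they compose with async/await. Here is a minimal sketch, assuming the @types/chrome typings and using chrome.storage.local.get as the example; the "theme" key is purely illustrative, and newer Chrome releases may already offer Promise-based variants.

  // Wrap a callback-style chrome.* call in a Promise so it can be awaited.
  // chrome.runtime.lastError must be checked inside the callback.
  function getLocal(keys: string | string[]): Promise<{ [key: string]: any }> {
    return new Promise((resolve, reject) => {
      chrome.storage.local.get(keys, (items) => {
        if (chrome.runtime.lastError) {
          reject(new Error(chrome.runtime.lastError.message));
        } else {
          resolve(items);
        }
      });
    });
  }

  // Usage inside an async function, e.g. in a background script:
  async function restoreSettings(): Promise<void> {
    const items = await getLocal("theme");   // "theme" is a hypothetical key
    console.log("theme is", items["theme"]);
  }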

10 February, 2018 12:46PM by Junichi Uekawa

February 09, 2018

John Goerzen

The Big Green Weasel Song

One does not normally get into one’s car intending to sing about barfing green weasels to the tune of a Beethoven symphony for a quarter of an hour. And yet, somehow, that’s what I wound up doing today.

Little did I know that when Jacob started band last fall, it would inspire him to sing about weasels to the tune of Beethoven’s 9th. That may come as a surprise to his band teacher, too, who likely didn’t expect that having the kids learn the theme to the Ninth Symphony would inspire them to sing about large weasels.

But tonight, as we were driving, I mentioned that I knew the original German words. He asked me to sing. I did.

Then, of course, Jacob and Oliver tried to sing it back in German. This devolved into gibberish and a fit of laughter pretty quickly, and ended with something sounding like “schneezel”. So of course, I had to be silly and added, “I have a big green weasel!”

From then on, they were singing about big green weasels. It wasn’t long before they decided they would sing “the chorus” and I was supposed to improvise verse after verse about these weasels. Improvising to the middle of the 9th Symphony isn’t the easiest thing, but I had verses about giving weasels a hug, about weasels smelling like a bug, about a green weasel on a chair, about a weasel barfing on the stair. And soon, Jacob wanted to record the weasel song to make a CD of it. So we did, before we even got to town. Here it is:

[Youtube link]

I love to hear children delight in making music. And I love to hear them enjoying Beethoven. Especially with weasels.

I just wonder what will happen when they learn about Mozart.

09 February, 2018 09:08PM by John Goerzen

hackergotchi for Erich Schubert

Erich Schubert

Booking.com Genius Nonsense & Spam

Booking.com just spammed me with an email claiming that I am a “frequent traveller” (which I am not), and thus would get “Genius” status and rebates (which really means they are going to hide some non-partner search results from me…) - I hate such marketing spam.

What a big rip-off.

I have rarely ever used Booking.com, and in fact I last used it in 2015.

That is certainly not what you would call a “frequent traveler”.

But Booking.com sells this to its hotel customers as reaching their “most loyal guests”. As I am clearly not a “loyal guest”, I consider this claim by Booking.com to be borderline fraud. And beware: since this is a partner programme, it comes with a downside for the user - the partner results will be “boosted in our search results”. In other words, your search results will be biased. They will hide other results in order to boost their partners, results that would otherwise come first (for example, because they are closer to your desired location, or even cheaper).

Forget Booking.com and their “Genius program”. It’s a marketing fake.

Going to report this as spam, and kill my account there now.

Pro tip: use incognito mode whenever possible for surfing. For Chromium (or Google Chrome), add the option --incognito to your launcher icon, for Firefox use --private-window. On a smartphone, you may want to switch to Firefox Focus, or the DuckDuckGo browser.

Looks like those hotel booking brokers (who are in fierce competition) are getting quite desperate. We are certainly heading into the second big dot-com bubble, and it is probably going to burst sooner rather than later. Maybe the current stock market fragility will finally trigger this. If some parts of the “old” economy have to cut their advertising budgets, this will have a very immediate effect on Google, Facebook, and many others.

09 February, 2018 03:01PM by Erich Schubert

hackergotchi for Lars Wirzenius

Lars Wirzenius

Qvisqve - an authorisation server, first alpha release

My company, QvarnLabs Ab, has today released the first alpha version of our new product, Qvisqve. Below is the press release. I wrote pretty much all the code, and it's free software (AGPL+).


Helsinki, Finland - 2018-02-09. QvarnLabs Ab is happy to announce the first public release of Qvisqve, an authorisation server and identity provider for web and mobile applications. Qvisqve aims to be secure, lightweight, fast, and easy to manage. "We have big plans for Qvisqve, and helping customers manage cloud identities" says Kaius Häggblom, CEO of QvarnLabs.

In this alpha release, Qvisqve supports the OAuth2 client credentials grant, which is useful for authenticating and authorising automated systems, including IoT devices. Qvisqve can be integrated with any web service that can use OAuth2 and JWT tokens for access control.

Future releases will provide support for end-user authentication by implementing the OpenID Connect protocol, with a variety of authentication methods, including username/password, U2F, TOTP, and TLS client certificates. Multi-factor authentication will also be supported. "We will make Qvisqve be flexible for any serious use case", says Lars Wirzenius, software architect at QvarnLabs. "We hope Qvisqve will be useful to the software freedom ecosystem in general" Wirzenius adds.

Qvisqve is developed and supported by QvarnLabs Ab, and works together with the Qvarn software, which is award-winning free and open-source software for managing sensitive personal information. Qvarn is in production use in Finland and Sweden and manages over a million identities. Both Qvisqve and Qvarn are released under the Affero General Public Licence.
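
As a quick illustration of the client credentials grant mentioned above, here is a rough sketch of what the flow looks like from a client's point of view. This is generic OAuth2, not Qvisqve's specific API; the token endpoint URL, client id and secret below are invented for the example.

  // Generic OAuth2 client credentials flow: exchange a client id/secret for
  // an access token, then present it as a Bearer token to a protected API.
  // All URLs and credentials are placeholders, not Qvisqve specifics.
  async function fetchAccessToken(): Promise<string> {
    const response = await fetch("https://auth.example.com/token", {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams({
        grant_type: "client_credentials",
        client_id: "my-iot-device",
        client_secret: "s3cret",
      }),
    });
    if (!response.ok) {
      throw new Error(`token request failed: ${response.status}`);
    }
    const token = await response.json();
    return token.access_token;   // typically a signed JWT
  }

  async function callProtectedApi(): Promise<void> {
    const accessToken = await fetchAccessToken();
    const result = await fetch("https://api.example.com/resource", {
      headers: { Authorization: `Bearer ${accessToken}` },
    });
    console.log(await result.json());
  }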

09 February, 2018 02:41PM

hackergotchi for Olivier Berger

Olivier Berger

A review of Virtual Labs virtualization solutions for MOOCs

I’ve just uploaded a new memo, A review of Virtual Labs virtualization solutions for MOOCs, in the form of a page on my blog, before I eventually publish something more elaborate (and validated by peer review).

The subtitle is “From Virtual Machines running locally or on IaaS, to containers on a PaaS, up to hypothetical ports of tools to WebAssembly for serverless execution in the Web browser”.

Excerpt from the intro :

In this memo, we try to draw an overview of some benefits and concerns with existing approaches to using virtualization techniques for running Virtual Labs, as distributions of tools made available to distant learners.

We describe 3 main technical architectures: (1) running Virtual Machine images locally on a virtual machine manager, (2) displaying the remote execution of similar virtual machines on an IaaS cloud, and (3) the potential of connecting to the remote execution of minimized containers on a remote PaaS cloud.

We then elaborate on some perspectives for locally running ports of applications to the WebAssembly virtual machine of modern Web browsers.

I hope this will be of some interest to some of you.

Feel free to comment on this blog post.

09 February, 2018 02:17PM by Olivier Berger