September 19, 2017

Sylvain Beucler

dot-zed extractor

Following last week's reverse-engineered specification of the .zed format, Loïc Dachary contributed a proof-of-concept extractor!
It's available at http://www.dachary.org/loic/zed/. It can list non-encrypted metadata without a password, and extract files given the password (or the .pem file).
Leveraging python-olefile and pycrypto, only 500 lines of code (test cases excluded) were enough to implement it :)
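
Since .zed archives are apparently OLE2 containers (hence python-olefile), listing the container's streams is the first step towards reading the metadata. As a rough illustration only (this is not the extractor's actual code, and the file name is made up):

import olefile

ole = olefile.OleFileIO("example.zed")   # hypothetical sample archive
for stream in ole.listdir():             # each entry is a list of path components
    print("/".join(stream))
ole.close()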

19 September, 2017 07:29PM

Reproducible builds folks

Reproducible Builds: Weekly report #125

Here's what happened in the Reproducible Builds effort between Sunday September 10 and Saturday September 16 2017:

Upcoming events

Reproducibility work in Debian

devscripts/2.17.10 was uploaded to unstable, fixing #872514. This adds a script, written by Chris Lamb, to report on the reproducibility status of installed packages.

#876055 was opened against Debian Policy to decide the precise requirements we should have on a build's environment variables.

Bugs filed:

Non-maintainer uploads:

  • Holger Levsen:

Reproducibility work in other projects

Patches sent upstream:

  • Bernhard M. Wiedemann:

Reviews of unreproducible packages

16 package reviews have been added, 99 have been updated and 92 have been removed this week, adding to our knowledge about identified issues.

1 issue type has been updated:

diffoscope development

  • Juliana Oliveira Rodrigues:
    • Fix comparisons between different container types, which were not comparing the files inside; this was caused by falling back to binary comparison for different file types even for unextracted containers.
    • Add many tests for the fixed behaviour.
    • Other code quality improvements.
  • Chris Lamb:
    • Various code quality and style improvements, some of it using Flake8.
  • Mattia Rizzolo:
    • Add a check to prevent installation with Python < 3.4.

reprotest development

  • Ximin Luo:
    • Split up the very large __init__.py and remove obsolete earlier code.
    • Extend the syntax for the --variations flag to support parameters to certain variations like user_group, and document examples in the README.
    • Add a --vary flag for the new syntax and deprecate --dont-vary (an illustrative invocation follows this list).
    • Heavily refactor internals to support > 2 builds.
    • Support >2 builds using a new --extra-build flag.
    • Properly sanitize artifact_pattern to avoid arbitrary shell execution.
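
An illustrative invocation of the new interface might look like this (a guess at the shape of the syntax, not canonical usage; the README documents the real forms):

$ reprotest --vary=-time,-user_group 'dpkg-buildpackage -b -us -uc' '../*.deb'

Here --vary=-time,-user_group would disable two variations that each needed a separate --dont-vary option before, and an additional --extra-build flag could request a third build with its own variation settings.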

trydiffoscope development

Version 65 was uploaded to unstable by Chris Lamb, including these contributions:

  • Chris Lamb:
    • Packaging maintenance updates.
    • Developer documentation updates.

Reproducible websites development

tests.reproducible-builds.org

  • Vagrant Cascadian and Holger Levsen:
    • Added two armhf boards to the build farm. #874682
  • Holger also:
    • Use timeout to limit the diffing of the two build logs to 30 minutes, which greatly reduced Jenkins load again.

Misc.

This week's edition was written by Ximin Luo, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen and Daniel Shahaf & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

19 September, 2017 05:45PM

September 18, 2017

Carl Chenet

The Github threat

Many voices rise now and then against the risks linked to the use of Github by Free Software projects. Yet the infatuation with the collaborative forge of the Californian Octocat start-up doesn't seem to fade away.

In recent years, Github and its services have taken an important role in software engineering: they are seen as easy to use and efficient for daily work, with interesting features for collaborative workflows, whether in an enterprise or in a Free Software project. What are the arguments against using its services, and are they valid? We will list them first, then examine their validity.

1. Critical points

1.1 Centralization

The Github platform belongs to a single entity, GitHub Inc., a US company which manages it alone. So a single company under US legislation manages access to most Free Software source code, which may become a problem for the groups using it when a code base is no longer available, for political or technical reasons.

The Octocat, the Github mascot


This centralization leads to another problem: having reached critical mass, it becomes more and more difficult not to have a Github account. People who don't use Github, by choice or not, are becoming a silent minority. It is now fashionable to use Github, and not doing so is seen as being “out of date”. The same phenomenon is classic, and even the norm, on proprietary social networks (Facebook, Twitter, Instagram).

1.2 A Proprietary Software

When you interact with Github, you are using proprietary software, with no access to its source code, and which may not work the way you think it does. This is a problem at several levels: first ideologically, but foremost in practice. In the Github case, we send it code we can still control outside of its interface, but we also send it personal information (profile, Github interactions). And above all, Github forces any project which goes through the US platform to use a crucial proprietary tool: its bug tracking system.

Windows, the epitome of proprietary software, even if others took the same path


1.3 Uniformization

Working with the Github interface seems easy and intuitive to most. Lots of companies now use it as a source repository, and many developers leaving a company find the same Github working environment at the next one. This pervasive presence of Github in Free Software development environments is part of the uniformization of said developers' working space.

Uniforms always bring the Army to my mind; here, the Clone Army

2. Critical points cross-examination

2.1 Regarding the centralization

2.1.1 Service availability rate

As said above, Github is nowadays the main repository of Free Software source code. As such it is a favorite target for cyberattacks. DDoS attacks hit it in March and August 2015. On December 15, 2015, an outage led to the inaccessibility of 5% of the repositories. The same occurred on November 15. And these are only the incidents reported by Github itself. One can imagine that the mean outage rate of the platform is underestimated.

2.1.2 Chain reaction could block Free Software development

Today many dependency management tools, such as npm for JavaScript, Bundler for Ruby or even pip for Python, can fetch an application's source code directly from Github. With Free Software projects becoming more and more interlinked and codependent, if one component is down, the whole development process stops.

One of the best examples is the npmgate affair. Any company can legally demand that Github take down source code from its platform, which can create a chain reaction blocking the development of many Free Software projects, as the Node.js community suffered from the decisions of npm, Inc., the company managing npm.

2.2 A historical precedent: SourceForge

Github didn't appear out of the blue. In its time, its predecessor, SourceForge, was also extremely popular.

Heavily centralized and based on strong interaction with the community, SourceForge is now seen as an aging SaaS (Software as a Service) and has seen most of its users flee to Github, which creates lots of hurdles for those who stayed. The Gimp project suffered from spam and terrible advertising, which led to the departure of the VLC project, then from installers corrupted with adware in place of the official Gimp installer for Windows. And finally, the Gimp project's SourceForge account was hijacked by… the SourceForge team itself!

These are very recent examples of what a commercial entity can do when it is under pressure from its stakeholders. It is vital to really understand what it means to trust them with the centralization of data and exchanges, when that can have tremendous repercussions on the day-to-day life and habits of the Free Software and open source community.

2.3 Regarding proprietary software

2.3.1 One community, several opinions on proprietary software

Based mostly on ideology, this point deals with the definition every member of the community gives to Free Software and open source. It mostly comes down to one thing: is it viral or not? In other words, GPL versus MIT/BSD.

Those on the side of viral Free Software will have trouble using proprietary software, as in their view it shouldn't even exist. It must be assimilated, to quote Star Trek, because it is a connected black box endangering privacy, corrupting our uses for profit, restraining our freedom to use what we own as we please, etc.

Those on the side of complete freedom have no qualms about using proprietary software, as its very existence is a consequence of freedom without restriction. They even accept that code they developed may become part of proprietary software, which is quite a common occurrence. This part of the Free Software community has no qualms about using Github, which is well within its ideological parameters. Just take a look at the Janson amphitheater during FOSDEM and check how many Apple laptops running macOS are around.

FreeBSD, the main BSD project under the BSD license

2.3.2 Data loss and data restrictions linked to proprietary software use

Even without ideological considerations, and just focusing on the Github infrastructure, the bug tracking system is a major problem in itself.

Bug reports build the memory of Free Software projects. They are the entry point for new contributors, the place to find bug reports, feature requests, etc. A project's history can't be limited to its code alone. It's very common to find bug reports when you copy and paste an error message into a search engine. Their historical importance is precious not only for the project itself, but also for its present and future users.

Github does give the ability to extract bug reports through its API. But what would happen if Github went down, or if the platform no longer supported this feature? In my opinion, not that many projects have ever thought about this outcome. How could they move all the data generated inside Github into a new bug tracking system? A rough sketch of such an export follows.
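
As a hedged illustration of such an export (the endpoint and parameters are the public GitHub REST API v3; OWNER/REPO are placeholders, and a real export would also need to fetch comments, labels and events):

import json
import urllib.request

issues, page = [], 1
while True:
    url = ("https://api.github.com/repos/OWNER/REPO/issues"
           "?state=all&per_page=100&page=%d" % page)
    with urllib.request.urlopen(url) as response:   # unauthenticated: heavily rate-limited
        batch = json.loads(response.read().decode("utf-8"))
    if not batch:                                   # an empty page means we are done
        break
    issues.extend(batch)
    page += 1

with open("issues-backup.json", "w") as backup:
    json.dump(issues, backup)

Even with such a dump in hand, mapping it onto another bug tracker's data model remains the hard part.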

One older example is Astrid, a TODO-list application bought by Yahoo a few years ago. Very popular, it grew fast until it was closed overnight, with only a few weeks for its users to extract their data. And it was only a to-do list. The same situation with Github would be tremendously difficult to manage for several projects, if they even had the ability to deal with it. The code would still be available and could live somewhere else, but the project memory would be lost. A project like Debian has today more than 800,000 bug reports, which are a treasure trove of data about problems solved, feature requests and where the development stands on each. The developers of the CPython project have anticipated the problem and decided not to use Github's bug tracking system.

Issues, the Github proprietary bug tracking system

Another thing we could lose if Github suddenly disappeared: all the work currently in the pull requests (aka PRs). This Github function gives you the ability to clone a project's Github repository, modify it to fit your needs, then offer your modifications back to the original repository. The original repository's owner will then review said modifications and, if he or she agrees with them, merge them into the original repository. As such, it's one of the main advantages of Github, since it can all be done easily through its graphical interface.

However, reviewing all the PRs can take quite a long time, and most successful projects have several PRs ongoing at any time. These PRs and/or the proprietary bug tracking system are commonly used as a platform for comments and discussion between developers.

The code itself is not lost if Github goes down (except in one specific situation, as seen below), but the peer-review work materialized in the PRs and the bug tracking system is lost. Let's remember that the PR mechanism lets you clone and modify a project and then generate PRs directly from Github's proprietary web interface, without downloading a single line of code to your computer. In this particular case, if Github goes down, all that code and work in progress is lost.

Some also use Github as a bookmarking service, following their favorite projects' activity through the Watch function. This style of technology watch would also be lost if Github went down.

Debian, one of the main Free Software projects with at least a thousand official contributors

2.4 Uniformization

The Free Software community is walking a tightrope between the standardization needed for easier interoperability between its products and an attraction to novelty, driven by a strong need to differentiate from what already exists.

Github popularized the use of Git, a great tool now used in various sectors far away from its original field of programming. Step by step, Git has become so prominent that it's almost impossible to even think of another source control manager, even though awesome alternative solutions, unfortunately not as popular, exist, such as Mercurial.

A new Free Software project is now a Git repository on Github with a README.md added as a quick description. All the other solutions are ostracized. How? Few if any potential contributors will ever notice such projects. It now seems very difficult to get potential contributors to learn a new source control manager AND a new forge for every project they want to contribute to, even though that was a basic requirement a few years ago.

It's quite sad, because Github, by offering one particular experience to its users, cuts them off from a whole realm of possibilities. Maybe Github is one of the best web version control services. But being the main one leaves no room for a new competitor to grow, and it lets Github initiate development newcomers into a narrow feature set, totally unrelated to the strength of the Git tool itself.

3. Centralization, uniformization, proprietary software… What’s next? Laziness?

The fight against centralization is a core part of the Free Software ideology, as centralization strengthens the power of those who manage it and who, through it, control those who are managed by it. The allergy to uniformization, born in reaction to the main software companies and their wish to impose a closed, commercial software world, was for a long time the main fuel for the thirst for innovation and the development of intelligent alternatives. As we said above, part of the Free Software community was built as a reaction to proprietary software and the threat it represents. The other part, without hoping for its disappearance, still chose a development model opposite to that of proprietary software, at least in the beginning; nowadays there are more and more bridges between the two.

The Github effect is a morbid one because of its consequences: centralization, uniformization, and the use of proprietary software as a bug tracking system, at the very least. But some years ago the Dear GitHub buzz showed one more side effect, one I had never thought about: laziness. For those who don't know what it is about, this letter is a complaint from spokespersons of several Free Software projects demanding that the Github team finally implement, after years of polite asking, new features.

Since when do Free Software projects facing a roadblock ask for clemency rather than building the path they need themselves? When Torvalds was involved in the BitKeeper problem and the Linux kernel development team could no longer use its revision control software, he developed Git. The mere fact of not being able to use a tool, or of a feature lacking, is the main motivation to seek alternative solutions and, as such, is at the heart of the Free Software movement. Every Free Software community member able to code should have this reflex. You don't like what Github offers? Switch to GitLab. You don't like GitLab? Improve it or build your own solution.

The Gitlab logo

Let's be crystal clear: I've never said that every blocked Free Software developer should code his or her own alternative. We all have our own priorities, and some of us even like our beauty sleep, me included. But seeing that this open letter to Github has 1,340 names attached to it, among them spokespersons for major Free Software projects, showed me that the need, willpower and strength to code a replacement are there. Maybe said replacement will be born from this letter; that would be the best outcome of this buzz.

In the end, Github usage is just another example of the massification of Internet usage. Just as Internet users flock to massively centralized social networks like Facebook or Twitter, developers are following the same path with Github. Even if a large fraction of developers realize the threat linked to this centralized and proprietary organization, the whole community is following the centralization and uniformization trend. The Github service is useful, free or reasonably priced (depending on the features you need), easy to use, and up most of the time. Why would we try something else? Maybe because others are using us while we are savoring the convenience? The Free Software community seems quite sleepy to me.

The lion enjoying the warmth of the hearth

About Me

Carl Chenet, Free Software Indie Hacker, founder of the French-speaking Hacker News-like Journal du hacker.


Translated from French by Stéphanie Chaptal. Original article written in 2015.

18 September, 2017 10:00PM by Carl Chenet

Russ Allbery

Consolidation haul

My parents are less fond than I am of filling every available wall in their house with bookshelves and did a pruning of their books. A lot of them duplicated other things that I had, or didn't sound interesting, but I still ended up with two boxes of books (and now have to decide which of my books to prune, since I'm out of shelf space).

Also included is the regular accumulation of new ebook purchases.

Mitch Albom — Tuesdays with Morrie (nonfiction)
Ilona Andrews — Clean Sweep (sff)
Catherine Asaro — Charmed Sphere (sff)
Isaac Asimov — The Caves of Steel (sff)
Isaac Asimov — The Naked Sun (sff)
Marie Brennan — Dice Tales (nonfiction)
Captain Eric "Winkle" Brown — Wings on My Sleeve (nonfiction)
Brian Christian & Tom Griffiths — Algorithms to Live By (nonfiction)
Tom Clancy — The Cardinal of the Kremlin (thriller)
Tom Clancy — The Hunt for Red October (thriller)
Tom Clancy — Red Storm Rising (thriller)
April Daniels — Sovereign (sff)
Tom Flynn — Galactic Rapture (sff)
Neil Gaiman — American Gods (sff)
Gary J. Hudson — They Had to Go Out (nonfiction)
Catherine Ryan Hyde — Pay It Forward (mainstream)
John Irving — A Prayer for Owen Meany (mainstream)
John Irving — The Cider House Rules (mainstream)
John Irving — The Hotel New Hampshire (mainstream)
Lawrence M. Krauss — Beyond Star Trek (nonfiction)
Lawrence M. Krauss — The Physics of Star Trek (nonfiction)
Ursula K. Le Guin — Four Ways to Forgiveness (sff collection)
Ursula K. Le Guin — Words Are My Matter (nonfiction)
Richard Matheson — Somewhere in Time (sff)
Larry Niven — Limits (sff collection)
Larry Niven — The Long ARM of Gil Hamilton (sff collection)
Larry Niven — The Magic Goes Away (sff)
Larry Niven — Protector (sff)
Larry Niven — World of Ptavvs (sff)
Larry Niven & Jerry Pournelle — The Gripping Hand (sff)
Larry Niven & Jerry Pournelle — Inferno (sff)
Larry Niven & Jerry Pournelle — The Mote in God's Eye (sff)
Flann O'Brien — The Best of Myles (nonfiction)
Jerry Pournelle — Exiles to Glory (sff)
Jerry Pournelle — The Mercenary (sff)
Jerry Pournelle — Prince of Mercenaries (sff)
Jerry Pournelle — West of Honor (sff)
Jerry Pournelle (ed.) — Codominium: Revolt on War World (sff anthology)
Jerry Pournelle & S.M. Stirling — Go Tell the Spartans (sff)
J.D. Salinger — The Catcher in the Rye (mainstream)
Jessica Amanda Salmonson — The Swordswoman (sff)
Stanley Schmidt — Aliens and Alien Societies (nonfiction)
Cecilia Tan (ed.) — Sextopia (sff anthology)
Lavie Tidhar — Central Station (sff)
Catherynne Valente — Chicks Dig Gaming (nonfiction)
J.E. Zimmerman — Dictionary of Classical Mythology (nonfiction)

This is an interesting tour of a lot of stuff I read as a teenager (Asimov, Niven, Clancy, and Pournelle, mostly in combination with Niven but sometimes his solo work).

I suspect I will no longer consider many of these books to be very good, and some of them will probably go back into used bookstores after I've re-read them for memory's sake, or when I run low on space again. But all those mass market SF novels were a big part of my teenage years, and a few (like Mote In God's Eye) I definitely want to read again.

Also included is a random collection of stuff my parents picked up over the years. I don't know what to expect from a lot of it, which makes it fun to anticipate. Fall vacation is coming up, and with it a large amount of uninterrupted reading time.

18 September, 2017 12:34AM

September 17, 2017

hackergotchi for Sean Whitton

Sean Whitton

Debian Policy call for participation -- September 2017

Here’s a summary of the bugs against the Debian Policy Manual. Please consider getting involved, whether or not you’re an existing contributor.

Consensus has been reached and help is needed to write a patch

#172436 BROWSER and sensible-browser standardization

#273093 document interactions of multiple clashing package diversions

#299007 Transitioning perms of /usr/local

#314808 Web applications should use /usr/share/package, not /usr/share/doc/…

#425523 Describe error unwind when unpacking a package fails

#452393 Clarify difference between required and important priorities

#476810 Please clarify 12.5, “Copyright information”

#484673 file permissions for files potentially including credential informa…

#491318 init scripts “should” support start/stop/restart/force-reload - why…

#556015 Clarify requirements for linked doc directories

#568313 Suggestion: forbid the use of dpkg-statoverride when uid and gid ar…

#578597 Recommend usage of dpkg-buildflags to initialize CFLAGS and al.

#582109 document triggers where appropriate

#587991 perl-policy: /etc/perl missing from Module Path

#592610 Clarify when Conflicts + Replaces et al are appropriate

#613046 please update example in 4.9.1 (debian/rules and DEB_BUILD_OPTIONS)

#614807 Please document autobuilder-imposed build-dependency alternative re…

#628515 recommending verbose build logs

#664257 document Architecture name definitions

#682347 mark ‘editor’ virtual package name as obsolete

#685506 copyright-format: new Files-Excluded field

#685746 debian-policy Consider clarifying the use of recommends

#688251 Built-Using description too aggressive

#749826 [multiarch] please document the use of Multi-Arch field in debian/c…

#757760 please document build profiles

#759316 Document the use of /etc/default for cron jobs

#761219 document versioned Provides

#767839 Linking documentation of arch:any package to arch:all

#770440 policy should mention systemd timers

#773557 Avoid unsafe RPATH/RUNPATH

#780725 PATH used for building is not specified

#793499 The Installed-Size algorithm is out-of-date

#810381 Update wording of 5.6.26 VCS-* fields to recommend encryption

#823256 Update maintscript arguments with dpkg >= 1.18.5

#833401 virtual packages: dbus-session-bus, dbus-default-session-bus

#835451 Building as root should be discouraged

#838777 Policy 11.8.4 for x-window-manager needs update for freedesktop menus

#845715 Please document that packages are not allowed to write outside thei…

#853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai…

#874019 Note that the ’-e’ argument to x-terminal-emulator works like ’--’

#874206 allow a trailing comma in package relationship fields

Wording proposed, awaiting review from anyone and/or seconds by DDs

#515856 remove get-orig-source

#542288 Versions for native packages, NMU’s, and binary only uploads

#582109 document triggers where appropriate

#610083 Remove requirement to document upstream source location in debian/c…

#645696 [copyright-format] clearer definitions and more consistent License:…

#649530 [copyright-format] clearer definitions and more consistent License:…

#662998 stripping static libraries

#682347 mark ‘editor’ virtual package name as obsolete

#683222 say explicitly that debian/changelog is required in source packages

#688251 Built-Using description too aggressive

#737796 copyright-format: support Files: paragraph with both abbreviated na…

#756835 Extension of the syntax of the Packages-List field.

#786470 [copyright-format] Add an optional “License-Grant” field

#810381 Update wording of 5.6.26 VCS-* fields to recommend encryption

#835451 Building as root should be discouraged

#845255 Include best practices for packaging database applications

#850729 Documenting special version number suffixes

#874090 Clarify wording of some passages

#874095 copyright-format: Use the “synopsis” term established in the de…

Merged for the next release

#661928 recipe for determining shlib package name

#679751 please clarify package account and home directory location in policy

#683222 say explicitly that debian/changelog is required in source packages

#870915 [5.6.30] Testsuite: There are much more defined values

#872893 Chapters, sections, appendices and numbering

#872895 Include multi-page HTML in package

#872896 An html.tar.gz has leaked into the .deb?

#872900 Very generic info file name

#872950 Too much indirection in info file menus

#873819 upgrading-checklist.txt: typo pgpsignurlmangle in section 4.11 of V…

#874411 missing line breaks in summary of ways maintainers scripts are call…

17 September, 2017 11:04PM

Russ Allbery

Free software log (July and August 2017)

I've wanted to start making one of these posts for a few months but have struggled to find the time. But it seems like a good idea, particularly since I get more done when I write down what I do, so you all get a rather belated one. This covers July and August; hopefully the September one will come closer to the end of September.

Debian

August was DebConf, which included a ton of Policy work thanks to Sean Whitton's energy and encouragement. During DebConf, we incorporated work from Hideki Yamane to convert Policy to reStructuredText, which has already made it far easier to maintain. (Thanks also to David Bremner for a lot of proofreading of the result.) We also did a massive bug triage and closed a ton of older bugs on which there had been no forward progress for many years.

After DebConf, as expected, we flushed out various bugs in the reStructuredText conversion and build infrastructure. I fixed a variety of build and packaging issues and started doing some more formatting cleanup, including moving some footnotes to make the resulting document more readable.

During July and August, partly at DebConf and partly not, I also merged wording fixes for seven bugs and proposed wording (not yet finished) for three more, as well as participated in various Policy discussions.

Policy was nearly all of my Debian work over these two months, but I did upload a new version of the webauth package to build with OpenSSL 1.1 and drop transitional packages.

Kerberos

I still haven't decided my long-term strategy with the Kerberos packages I maintain. My personal use of Kerberos is now fairly marginal, but I still care a lot about the software and can't convince myself to give it up.

This month, I started dusting off pam-krb5 in preparation for a new release. There's been an open issue for a while around defer_pwchange support in Heimdal, and I spent some time on that and tracked it down to an upstream bug in Heimdal as well as a few issues in pam-krb5. The pam-krb5 issues are now fixed in Git, but I haven't gotten any response upstream from the Heimdal bug report. I also dusted off three old Heimdal patches and submitted them as upstream merge requests and reported some more deficiencies I found in FAST support. On the pam-krb5 front, I updated the test suite for the current version of Heimdal (which changed some of the prompting) and updated the portability support code, but haven't yet pulled the trigger on a new release.

Other Software

I merged a couple of pull requests in podlators, one to fix various typos (thanks, Jakub Wilk) and one to change the formatting of man page references and function names to match the current Linux manual page standard (thanks, Guillem Jover). I also documented a bad interaction with line-buffered output in the Term::ANSIColor man page. Neither of these have seen a new release yet.

17 September, 2017 08:08PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppClassic 0.9.7

A rather boring and otherwise uneventful release 0.9.7 of RcppClassic is now at CRAN. This package provides a maintained version of the otherwise deprecated first Rcpp API; no new projects should use it.

Once again there are no changes in user-facing code. But this makes it the first package to use the very new and shiny pinp package as the backend for its vignette, now converted to Markdown---see here for this new version. We also updated three source files for tabs versus spaces as the current g++ version complained (correctly !!) about misleading indents. Otherwise a file src/init.c was added for dynamic registration, the Travis CI runner script was updated to using run.sh from our r-travis fork, and we now strip the libraries after they have been built. Again, no user code changes.

And to reiterate: nobody should use this package. Rcpp is so much better in so many ways---this one is simply available as we (quite strongly) believe that APIs are contracts, and as such we hold up our end of the deal.

Courtesy of CRANberries, there are changes relative to the previous release.

Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

17 September, 2017 07:28PM

hackergotchi for Iain R. Learmonth

Iain R. Learmonth

Free Software Efforts (2017W37)

I’d like to start making weekly reports again on my free software efforts. Part of the reason for these reports is for me to see how much time I’m putting into free software. Hopefully I can keep these reports up.

Debian

I have updated txtorcon (a Twisted-based asynchronous Tor control protocol implementation used by ooniprobe, magic-wormhole and tahoe-lafs) to its latest upstream version. I’ve also added two new binary packages that are built by the txtorcon source package: python3-txtorcon and python-txtorcon-doc for Python 3 support and generated HTML documentation respectively.

I have gone through the scapy (Python module for the forging and dissection of network packets) bugs and closed a couple that seem to have been silently fixed by new upstream releases and not been caught in the BTS. I’ve uploaded a minor revision to include a patch that fixes the version number reported by scapy.

I have prepared and uploaded a new package for measurement-kit (a portable C++11 network measurement library) from the Open Observatory of Network Interference, which at time of writing is still in the NEW queue. I have also updated ooniprobe (probe for the Open Observatory of Network Interference) to its latest upstream version.

I have updated the Swedish debconf strings in the xastir (X Amateur Station Tracking and Information Reporting) package, thanks to the translators.

I have updated the direwolf (soundcard terminal node controller for APRS) package to its latest upstream version and fixed the creation of the system user to run direwolf with systemd to happen at the time the package is installed. Unfortunately, it has been necessary to drop the PDF documentation from the package as I was unable to contact the upstream author and acquire the Microsoft Word sources for this release.

I have reviewed and sponsored the uploads of the new packages comptext (GUI based tool to compare two text streams), comptty (GUI based tool to compare two radio teletype streams) and flnet (amateur radio net control station software) in the hamradio team. Thanks to Ana Custura for preparing those packages, comptext and comptty are now available in unstable.

I have updated the Debian Hamradio Blend metapackages to include cubicsdr (a software defined radio receiver). This build also refreshes the list of packages that can now be included as they had not been packaged at the time of the last build.

I have produced and uploaded an initial package for python-azure-devtools (development tools for Azure SDK and CLI for Python) and have updated python-azure (the Azure SDK for Python) to a recent git snapshot. Due to some issues with python-vcr it is currently not possible to run the test suite as part of the build process and I’m watching the situation. I have also fixed the auto dependency generation for python3-azure, which had previously been broken.

Bugs closed (fixed/wontfix): #873036, #871940, #869566, #873083, #867420, #861753, #855385, #855497, #684727, #683711

Tor Project

I have been working through tickets for Atlas (a tool for looking up details about Tor relays and bridges) and have merged and deployed a number of fixes. Some highlights include: bandwidth sorting in search results is now semantically correct (not just an alphanumeric sort ignoring units), the date a relay was first seen is now shown on the details page, along with the host name if a reverse DNS record has been found for the IP address of the relay, and support was added for the NoEdConsensus flag (although happily no relays had this flag at the time this support was added).

The metrics team has been working on merging projects into the metrics team website to give a unified view of information about the Tor network. This week I have been working towards a prototype of a port of Atlas to the metrics website’s style and this work has been published in my personal Atlas git repository. If you’d like to have a click around, you can do so.

A relay operators meetup will be happening in Montreal on the 14th of October. I won’t be present, but I have taken this opportunity to ask operators if there’s anything that they would like from Atlas that they are not currently getting. Some feedback has already been received and turned into code and trac tickets.

I also attended the weekly metrics team meeting in #tor-dev.

Bugs closed (fixed/wontfix): #6787, #9814, #21958, #21636, #23296, #23160

Sustainability

I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

I continue to be happy to spend my time on this work, however I do find myself in a position where it may not be sustainable when it comes to hardware. My desktop, a Sun Ultra 24, is now 10 years old and I’m starting to see random reboots which so far have not been explained. It is incredibly annoying to have this happen during a long build. Further, the hard drives in my NAS which are used for the local backups and for my local Debian mirror are starting to show SMART errors. It is not currently within my budget to replace any of this hardware. Please contact me if you believe you can help.

This week's energy was provided by Club Mate

17 September, 2017 03:45PM

Uwe Kleine-König

IPv6 in my home network

I am lucky and get both IPv4 (without CGNAT) and IPv6 from my provider. Recently, after upgrading my desk router (a Netgear WNDR3800 that serves the network on my desk) from OpenWRT to the latest LEDE, I looked into what could be improved in the IPv6 setup for both my home network (served by a FRITZ!Box) and my desk network.

Unfortunately I was unable to improve the situation compared to what I already had before.

Things that work

Making IPv6 work in general was easy, just a few clicks in the configuration of the FRITZ!Box and it mostly worked. After that I have:

  • IPv6 connectivity in the home net
  • IPv6 connectivity in the desk net

Things that don't work

There are a few things, however, that I'd like to have which don't seem that easy:

ULA for both nets

I let the two routers announce a ULA prefix each. Unfortunately I was unable to make the LEDE box announce its net on the wan interface for clients in the home net. So the hosts in the desk net know how to reach the hosts in the home net, but not the other way round, which makes it quite pointless. (It works fine as long as the FRITZ!Box announces a global net, but I'd like local communication to work independently of the global connectivity.)

To fix this I'd need something like radvd on my LEDE router, but that isn't provided by LEDE (or OpenWRT) any more, as odhcpd is supposed to be used instead, which AFAICT is unable to send RAs on the wan interface. Ok, I could probably install bird, but that seems a bit oversized; a sketch of the kind of radvd configuration I have in mind follows. I created an entry in the LEDE forum but have had no reply up to now.
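
For the record, that radvd configuration would look roughly like this (untested, since radvd isn't available in LEDE; the wan interface name and the desk net's ULA prefix are placeholders):

interface eth0.2
{
    AdvSendAdvert on;
    # announce only a route to the desk net's ULA prefix,
    # not a prefix for address autoconfiguration
    route fdaa:bbbb:cccc::/64
    {
    };
};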

Alternatively (but less pretty) I could setup an IPv6 route in the FRITZ!Box, but that only works with a newer firmware and as this router is owned by my provider I cannot update it.

Firewalling

The FRITZ!Box has a firewall that is not very configurable. I can punch a hole in it for hosts with a given interface ID, but that only works for hosts in the home net, not for the machines in the delegated subnet behind the LEDE router. In fact I think the FRITZ!Box should delegate firewalling for a delegated net to the router of that subnet. (Hello AVM, do you hear me? At least a checkbox for that would be nice.)

So having a global address on the machines on my desk doesn't allow me to reach them from the internet.

17 September, 2017 10:15AM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, August 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, about 189 work hours have been dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours is the same as last month.

The security tracker currently lists 59 packages with a known CVE and the dla-needed.txt file lists 60. The number of packages with open issues decreased slightly compared to last month, but we're not yet back to the usual situation. The number of CVEs to fix per package tends to increase due to the growing use of fuzzers.

Thanks to our sponsors

New sponsors are in bold.


17 September, 2017 08:24AM by Raphaël Hertzog

September 16, 2017

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

pinp 0.0.1: pinp is not PNAS

A brand-new and very exciting (to us, at least) package called pinp just arrived on CRAN, following a somewhat unnecessarily long passage out of incoming. It is based on the PNAS LaTeX style offered by the Proceedings of the National Academy of Sciences of the United States of America, or PNAS for short. And there is already a Markdown version in the wonderful rticles package.

But James Balamuta and I thought we could do one better when we were looking to typeset our recent PeerJ preprint as an attractive-looking vignette for use within the Rcpp package.

And so we did, changing a few things (font, color, use of natbib and Chicago.bst for references, removal of a bunch of extra PNAS-specific formalities from the frontpage) and customizing a number of other things for easier use by vignettes directly from the YAML header (draft-mode watermark, doi or url for packages, easier author naming in the footer, bibtex file and more).

We are quite pleased with the result which seems ready for the next Rcpp release---see e.g., these two teasers:

[two teaser images]

The pinp package page and the GitHub repo have the full (four double-)pages of what turned a rather dull-looking 27-page manuscript into eight crisp two-column pages.

We have a few more things planned (e.g., switching to single-column mode, turning on line numbers at least in one-column mode).

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

16 September, 2017 08:07PM

drat 0.1.3

A new version of drat arrived earlier today on CRAN as another no-human-can-delay-this automatic upgrade directly from the CRAN prechecks. It is mostly a maintenance release ensuring that PACKAGES.rds is also updated, plus some other minor edits.

drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users because repositories with marked releases are the better way to distribute code.

The NEWS file summarises the release as follows:

Changes in drat version 0.1.3 (2017-09-16)

  • Ensure PACKAGES.rds, if present, is also inserted in repo

  • Updated 'README.md' removing stale example URLs (#63)

  • Use https to fetch Travis CI script from r-travis

Courtesy of CRANberries, there is a comparison to the previous release. More detailed information is on the drat page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

16 September, 2017 04:13PM

September 15, 2017

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, August 2017

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 1 hour from July. I only worked 10 hours, so I will carry over 6 hours to the next month.

I prepared and released an update on the Linux 3.2 longterm stable branch (3.2.92), and started work on the next update. I rebased the Debian linux package on this version, but didn't yet upload it.

15 September, 2017 05:56PM

hackergotchi for Chris Lamb

Chris Lamb

Which packages on my system are reproducible?

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process.

As part of this project I wrote a script to determine which packages installed on your system are "reproducible" or not:

$ apt install devscripts
[…]

$ reproducible-check
[…]
W: subversion (1.9.7-2) is unreproducible (libsvn-perl, libsvn1, subversion) <https://tests.reproducible-builds.org/debian/subversion>
W: taglib (1.11.1+dfsg.1-0.1) is unreproducible (libtag1v5, libtag1v5-vanilla) <https://tests.reproducible-builds.org/debian/taglib>
W: tcltk-defaults (8.6.0+9) is unreproducible (tcl, tk) <https://tests.reproducible-builds.org/debian/tcltk-defaults>
W: tk8.6 (8.6.7-1) is unreproducible (libtk8.6, tk8.6) <https://tests.reproducible-builds.org/debian/tk8.6>
W: valgrind (1:3.13.0-1) is unreproducible <https://tests.reproducible-builds.org/debian/valgrind>
W: wavpack (5.1.0-2) is unreproducible (libwavpack1) <https://tests.reproducible-builds.org/debian/wavpack>
W: x265 (2.5-2) is unreproducible (libx265-130) <https://tests.reproducible-builds.org/debian/x265>
W: xen (4.8.1-1+deb9u1) is unreproducible (libxen-4.8, libxenstore3.0) <https://tests.reproducible-builds.org/debian/xen>
W: xmlstarlet (1.6.1-2) is unreproducible <https://tests.reproducible-builds.org/debian/xmlstarlet>
W: xorg-server (2:1.19.3-2) is unreproducible (xserver-xephyr, xserver-xorg-core) <https://tests.reproducible-builds.org/debian/xorg-server>
282/4494 (6.28%) of installed binary packages are unreproducible.

Whether a package is "reproducible" or not is determined by querying the Debian Reproducible Builds testing framework.
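
Under the hood this amounts to downloading the JSON export published by the testing framework and matching it against the locally installed packages. A simplified sketch (the URL is, as far as I can tell, the one the script uses; the field names are my assumption, so check the live export):

import json
import urllib.request

URL = "https://tests.reproducible-builds.org/debian/reproducible.json"
with urllib.request.urlopen(URL) as response:
    results = json.loads(response.read().decode("utf-8"))

# assumed schema: one record per source package/suite/arch,
# with "package" and "status" fields
unreproducible = sorted({r["package"] for r in results
                         if r.get("status") == "unreproducible"})
print(len(unreproducible), "source packages currently unreproducible")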



The --raw command-line argument lets you play with the data in more detail. For example, you can see who maintains your unreproducible packages:

$ reproducible-check --raw | dd-list --stdin
Alec Leamas <leamas.alec@gmail.com>
   lirc (U)

Alessandro Ghedini <ghedo@debian.org>
   valgrind

Alessio Treglia <alessio@debian.org>
   fluidsynth (U)
   libsoxr (U)
[…]


reproducible-check is available in devscripts since version 2.17.10, which landed in Debian unstable on 14th September 2017.

15 September, 2017 08:29AM

September 14, 2017

Lior Kaplan

Public money? Public Code!

An open letter published today to the EU government says:

Why is software created using taxpayers’ money not released as Free Software?
We want legislation requiring that publicly financed software developed for the public sector be made publicly available under a Free and Open Source Software licence. If it is public money, it should be public code as well.

Code paid by the people should be available to the people!

See https://publiccode.eu/ for the campaign details.

This makes me think of starting an Israeli version…



14 September, 2017 09:30AM by Kaplan

hackergotchi for James McCoy

James McCoy

devscripts needs YOU!

Over the past 10 years, I've been a member of a dwindling team of people maintaining the devscripts package in Debian.

Nearly two years ago, I sent out a "Request For Help" since it was clear I didn't have adequate time to keep driving the maintenance.

In the meantime, Jonas split licensecheck out into its own project and took over development. Osamu has taken on much of the maintenance for uscan, uupdate, and mk-origtargz.

Although that has helped spread the maintenance costs, there's still a lot that I haven't had time to address.

Since Debian is still fairly early in the development cycle for Buster, I've decided this is as good a time as any for me to officially step down from active involvement in devscripts. I'm willing to keep moderating the mailing list and other related administrivia (which is fairly minimal given the repo is part of collab-maint), but I'll be unsubscribing from all other notifications.

I think devscripts serves as a good funnel for useful scripts to get in front of Debian (and its derivatives) developers, but Jonas may also be onto something by pulling scripts out to stand on their own. One of the troubles with "bucket" packages like devscripts is the lack of visibility into when to retire scripts. Breaking scripts out on their own, and possibly creating multiple binary packages, certainly helps with that. Maybe uscan and friends would be a good next candidate.

At the end of the day, I've certainly enjoyed being able to play my role in helping simplify the life of all the people contributing to Debian. I may come back to it some day, but for now it's time to let someone else pick up the reins.

If you're interested in helping out, you can join #devscripts on OFTC and/or send a mail to <devscripts-devel@lists.alioth.debian.org>.

14 September, 2017 03:18AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppMsgPack 0.2.0

A new and much enhanced version of RcppMsgPack arrived on CRAN a couple of days ago. It came together following this email to the r-package-devel list, which made it apparent that Travers Ching had been working on MessagePack converters for R which required the very headers I had for use from, inter alia, the RcppRedis package.

So we joined our packages. I updated the headers in RcppMsgPack to the current upstream version 2.1.5 of MessagePack, and Travers added his helper functions allowing direct packing/unpacking of MessagePack objects at the R level, as well as tests and a draft vignette. Very exciting, and great to have a coauthor!

So now RcppMsgPack provides R with both MessagePack header files for use via C++ (or C, if you must) packages such as RcppRedis --- and direct conversion routines at the R prompt.

MessagePack itself is an efficient binary serialization format. It lets you exchange data among multiple languages like JSON. But it is faster and smaller. Small integers are encoded into a single byte, and typical short strings require only one extra byte in addition to the strings themselves.
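
To make the size claim concrete, here is a quick round trip using the Python msgpack implementation; the binary format is language-independent, so the R functions above should produce the same bytes:

import msgpack  # the reference Python implementation of the format

packed = msgpack.packb([5, "abc"])
# 6 bytes in total: one array header, a single byte for the integer 5,
# and one length byte plus three payload bytes for "abc"
print(len(packed), packed.hex())
print(msgpack.unpackb(packed))  # strings may come back as bytes, depending on version/options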

Changes in version 0.2.0 (2017-09-07)

  • Added support for building on Windows

  • Upgraded to MsgPack 2.1.5 (#3)

  • New R functions to manipulate MsgPack objects: msgpack_format, msgpack_map, msgpack_pack, msgpack_simplify, msgpack_unpack (#4)

  • New R functions also available as msgpackFormat, msgpackMap, msgpackPack, msgpackSimplify, msgpackUnpack (#4)

  • New vignette (#4)

  • New tests (#4)

Courtesy of CRANberries, there is also a diffstat report for this release. More information is on the RcppMsgPack page. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 September, 2017 01:28AM

RcppRedis 0.1.8

A new minor release of RcppRedis arrived on CRAN last week, following the release 0.2.0 of RcppMsgPack which brought the MsgPack headers forward to release 2.1.5. This required a minor and rather trivial change in the code. When the optional RcppMsgPack package is used, we now require this version 0.2.0 or later.

We made a few internal updates to the package as well.

Changes in version 0.1.8 (2017-09-08)

  • A new file init.c was added with calls to R_registerRoutines() and R_useDynamicSymbols()

  • Symbol registration is enabled in useDynLib

  • Travis CI was updated to using run.sh

  • The (optional MessagePack) code was updated for MsgPack 2.*

Courtesy of CRANberries, there is also a diffstat report for this release. More information is on the RcppRedis page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 September, 2017 01:26AM

September 13, 2017

hackergotchi for Shirish Agarwal

Shirish Agarwal

Android, Android marketplace and gaming addiction.

This will be a longish piece, so please bear with me and settle in with tea, coffee, beer or anything stronger you desire while reading below 🙂

I bought an Android phone, a Samsung J5, just before going to DebConf 2016. It was more about being in trend than really using it. The one I linked to is the upgraded, recentish version; the one I have is the 2 GB model, for which I paid around double the list price. The only reason I bought this model is that it had a removable battery at a price point I was willing to pay. I did see that Samsung has the same ham-handed issues with audio as Nokia devices used to: the speakers and microphone are probably the cheapest you can get on the market. Nokia was the same, at least at the lower end of the market, while Oppo has loud ringtones and loud music, perfect for those who are a bit hard of hearing (as yours truly is).

I had been pleasantly surprised by the quality of photos the Samsung J5 was churning out, even though I'm a less-than-average shooter and have never really been into photography, so it was a sort of wake-up call to where camera sensor technology is heading. And of course, the kind of detail newer phones can capture is mesmerizing to say the least, although wide-angle shots will probably still take some time to get right, I guess.

If memory serves me right, some time back Laura Arjona Reina (who handles part of debian-publicity and part of debian-women, among other responsibilities) shared a blog post on p.d.o. about the troubles she had while exporting data from her phone. I lack the time and energy to find it again, but that specific entry is really worth bookmarking.

Interestingly, a few years ago I went to Bangalore, where there is an organization I like and admire, CIS, which is great for researchers. They had done a project taking between 10 and 20 phones from the market, all of Chinese origin (for almost all mobiles sold in India, the fabrication of the CPU, APU etc. is done in China/Taiwan, and they are then assembled here; what is done locally is at most assembly, which for all political purposes is called 'manufacturing'). All the mobiles kept quite a bit of info on the device even after you wiped them clean or put some other ROM on them. The CIS site is more than a bit cluttered, otherwise I would have shared the direct link. I do hope to send an e-mail to CIS; hopefully they will respond with the report and I will share it here as and when. It would be interesting to know whether, after people flash a custom ROM, the status quo is the same as it was before. I suspect it would be, as flashing ROMs on phones is still a bit of a specialized subject, at least here in India, with even an average phone costing a month or two's salary or more, and the idea of bricking the phone scaring most people (including yours truly).

Anyway, for a long time I was in bed and had the phone. I used two games from the Android marketplace which both mum and I enjoy and enjoyed: Real Jigsaw and Jigsaw Puzzle HD. The permissions dialog which Real Jigsaw, among other games, has is horrible, and part of me freaks out that all such apps have unrestricted access to my storage area. Ideally, Android should either partition the storage or give the user a private space for their photos and whatever other media they have, with the rest of the area being like a public park. If anybody has any thoughts on partitioning on Android phones, I would like to hear them.

One game which really hooked mumma and me, though, is The Island Experiment. It reminded me of my younger days, when gaming addiction was not treated as a disease but thankfully now is. I would call myself somewhat of a 'functional addict', as in I do my everyday things, work etc., but I do dream about what the game will show me next. Part of it is that the game is web-based (which means it needs a constant internet connection) and web access is somewhat pricey, although with Reliance Jio, an upcoming data network operator with bundles of money and promising the moon, network issues, at least for the low-bandwidth game mum and I are playing, will hopefully not be a problem. I haven't used tshark or any such tool to analyze the traffic, but I guess it probably just sends short messages with the number of clicks in a time period and things like that, with all the rest (I guess) happening on the mobile itself. I know that at some point I will probably try to put a custom ROM on it, but which one is the question, as there are so many, and also which is most compatible with my device. It seems I will have to do a lot of homework before I can make any choices.

A couple of months back, a friend of mine, Akshat, who has been using Android for a few years, enabled Developer Options, which I didn't know about till he shared that info with me. I do hope people check out Akshat's repo, as he has made quite a few useful scripts, especially if you are into digital photography. I shared some gimp scripting with him a few days back, so along with imagemagick you might see him doing some rough scripts in it. Of course, if people use them and give feedback he might clean the scripts up a bit so they give useful error messages and statements like 'gimp is not installed on your system, please install it or ask for a specific version', but as it works in free software, that is somewhat directly proportional to the number of users and the bugs and users behind it.

A good example of what I mean is youtube-dl. I filed 873853, where I shared the upstream ticket. Apparently YouTube changed things again a few days back, and while upstream has fixed it, the youtube-dl maintainer probably needs to find time to get the new version up. The relevant file apparently lies in:

$ dpkg -L youtube-dl | grep youtube.py
/usr/lib/python3/dist-packages/youtube_dl/extractor/youtube.py

Hopefully somebody does the needful.

By the way, I find f-droid extremely useful, and especially osmand, but sadly neither of them is shared or talked about much by people 😦

The reason I shared the bit about Developer Options in Android is that a few days back I noticed the phone wonking out and showing unpredictable behaviour, such as not letting me browse the web or do additions and deletions via the Google Play store and the like. Things came to a head a few days back when I saw a fibre-optic splicing operation being carried out near my home by workers of the state operator, which elated me; I wanted to shoot a video of it, but the battery died/there was no power even though I hadn't used the phone much. I have deliberately shared the Hindi version, which shows how that knowledge is now coming to the masses. I had seen fibre-optic splicing more than a decade and a half back at some net conference where it was said to be coming to your neighbourhood soonish; hopefully it will happen soon 🙂

I had my suspicions for quite some time that all the issues with the phone were due to improper charging. During the course of my investigation, I found out that in Developer Options there is an option called USB Configuration, and changing that from the default MTP (Media Transfer Protocol), which is basically used to put or take movies, music or any file from the phone to the computer or vice-versa, resulted in much better behaviour on my Android phone. But this caused an unexpected side-effect: I got pretty aggressive polling of the phone by the computer, even after saying I did not want to share the phone details with the computer. This I filed as 874216. The phone, and I am guessing most Samsung phones today, comes with an adaptor with a USB male socket which goes into the phone’s USB port. There is the classical port for electricity, but like most people I rely heavily on USB charging, even for fully charging a deeply powered-down phone.

One interesting project which I saw come into Debian some days back is dummydroid. I did file a bug about it. I do hope the maintainer adds some more documentation. I am sure many people might use it and add to the resource if the documentation were there. I did take a look at the package, and the profile seems to be like an XML key-value kind of database. Having more profiles shouldn’t be hard if we knew what info needs to be added and how to find that info.

Lastly, I am slowly transferring all the above knowledge to my mum as well, although in small doses. She, just like me, has had problems coming from a resistive touchscreen to a capacitive touchscreen. You can call me wrong, but resistive touchscreens seemed to be superior and not as error-prone or liable to mistakes as capacitive touchscreens are. There may be a setting to raise/lower the threshold for touching which I have not been able to find as of yet.

Hope somebody finds something useful in there. I do hope that Debian does become a replacement to be used on such mobiles, but then it would have to duplicate/also have some sort of mainstream content with editors to help people find stuff, something that Debian is not so good at currently. Also I’m not sure Synaptic is a good fit as a mobile store.


Filed under: Miscellenous Tagged: #Android, #capacitive touchscreen, #custom ROMs, #digital photography, #dummydroid, #f-droid, #fabrication, #flashing, #game addiction, #Google Play Store, #Mainstreaming Debian, #mobile connectivity, #Oppo, #osmand, #planet-debian, #resistive touchscreen, #Samsung Galaxy J5, #scripting, #USB charging, #USB configuration, #youtube-dl, gaming

13 September, 2017 02:44PM by shirishag75

hackergotchi for Vincent Bernat

Vincent Bernat

Route-based IPsec VPN on Linux with strongSwan

A common way to establish an IPsec tunnel on Linux is to use an IKE daemon, like the one from the strongSwan project, with a minimal configuration1:

conn V2-1
  left        = 2001:db8:1::1
  leftsubnet  = 2001:db8:a1::/64
  right       = 2001:db8:2::1
  rightsubnet = 2001:db8:a2::/64
  authby      = psk
  auto        = route

The same configuration can be used on both sides. Each side will figure out if it is “left” or “right”. The IPsec site-to-site tunnel endpoints are 2001:db8:1::1 and 2001:db8:2::1. The protected subnets are 2001:db8:a1::/64 and 2001:db8:a2::/64. As a result, strongSwan configures the following policies in the kernel:

$ ip xfrm policy
src 2001:db8:a1::/64 dst 2001:db8:a2::/64
        dir out priority 399999 ptype main
        tmpl src 2001:db8:1::1 dst 2001:db8:2::1
                proto esp reqid 4 mode tunnel
src 2001:db8:a2::/64 dst 2001:db8:a1::/64
        dir fwd priority 399999 ptype main
        tmpl src 2001:db8:2::1 dst 2001:db8:1::1
                proto esp reqid 4 mode tunnel
src 2001:db8:a2::/64 dst 2001:db8:a1::/64
        dir in priority 399999 ptype main
        tmpl src 2001:db8:2::1 dst 2001:db8:1::1
                proto esp reqid 4 mode tunnel
[…]

This kind of IPsec tunnel is a policy-based VPN: encapsulation and decapsulation are governed by these policies. Each of them contains the following elements:

  • a direction (out, in or fwd2),
  • a selector (source subnet, destination subnet, protocol, ports),
  • a mode (transport or tunnel),
  • an encapsulation protocol (esp or ah), and
  • the endpoint source and destination addresses.

When a matching policy is found, the kernel will look for a corresponding security association (using reqid and the endpoint source and destination addresses):

$ ip xfrm state
src 2001:db8:1::1 dst 2001:db8:2::1
        proto esp spi 0xc1890b6e reqid 4 mode tunnel
        replay-window 0 flag af-unspec
        auth-trunc hmac(sha256) 0x5b68[…]8ba2904 128
        enc cbc(aes) 0x8e0e377ad8fd91e8553648340ff0fa06
        anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
[…]

If no security association is found, the packet is put on hold and the IKE daemon is asked to negotiate an appropriate one. Otherwise, the packet is encapsulated. The receiving end identifies the appropriate security association using the SPI in the header. Two security associations are needed to establish a bidirectional tunnel:

$ tcpdump -pni eth0 -c2 -s0 esp
13:07:30.871150 IP6 2001:db8:1::1 > 2001:db8:2::1: ESP(spi=0xc1890b6e,seq=0x222)
13:07:30.872297 IP6 2001:db8:2::1 > 2001:db8:1::1: ESP(spi=0xcf2426b6,seq=0x204)

All IPsec implementations are compatible with policy-based VPNs. However, some configurations are difficult to implement. For example, consider the following proposition for redundant site-to-site VPNs:

Redundant VPNs between 3 sites

A possible configuration between V1-1 and V2-1 could be:

conn V1-1-to-V2-1
  left        = 2001:db8:1::1
  leftsubnet  = 2001:db8:a1::/64,2001:db8:a6::cc:1/128,2001:db8:a6::cc:5/128
  right       = 2001:db8:2::1
  rightsubnet = 2001:db8:a2::/64,2001:db8:a6::/64,2001:db8:a8::/64
  authby      = psk
  keyexchange = ikev2
  auto        = route

Each time a subnet is modified on one site, the configurations need to be updated on all sites. Moreover, overlapping subnets (2001:db8:a6::/64 on one side and 2001:db8:a6::cc:1/128 on the other) can also be problematic.

The alternative is to use route-based VPNs: any packet traversing a pseudo-interface will be encapsulated using a security policy bound to the interface. This brings two features:

  1. Routing daemons can be used to distribute routes to be protected by the VPN. This decreases the administrative burden when many subnets are present on each side.
  2. Encapsulation and decapsulation can be executed in a different routing instance or namespace. This enables a clean separation between a private routing instance (where VPN users are) and a public routing instance (where VPN endpoints are).

Route-based VPN on Juniper

Before looking at how to achieve that on Linux, let’s have a look at the way it works with a JunOS-based platform (like a Juniper vSRX). This platform has a long-standing history of supporting route-based VPNs (a feature already present in the Netscreen ISG platform).

Let’s assume we want to configure the IPsec VPN from V3-2 to V1-1. First, we need to configure the tunnel interface and bind it to the “private” routing instance containing only internal routes (with IPv4, they would have been RFC 1918 routes):

interfaces {
    st0 {
        unit 1 {
            family inet6 {
                address 2001:db8:ff::7/127;
            }
        }
    }
}
routing-instances {
    private {
        instance-type virtual-router;
        interface st0.1;
    }
}

The second step is to configure the VPN:

security {
    /* Phase 1 configuration */
    ike {
        proposal IKE-P1 {
            authentication-method pre-shared-keys;
            dh-group group20;
            encryption-algorithm aes-256-gcm;
        }
        policy IKE-V1-1 {
            mode main;
            proposals IKE-P1;
            pre-shared-key ascii-text "d8bdRxaY22oH1j89Z2nATeYyrXfP9ga6xC5mi0RG1uc";
        }
        gateway GW-V1-1 {
            ike-policy IKE-V1-1;
            address 2001:db8:1::1;
            external-interface lo0.1;
            general-ikeid;
            version v2-only;
        }
    }
    /* Phase 2 configuration */
    ipsec {
        proposal ESP-P2 {
            protocol esp;
            encryption-algorithm aes-256-gcm;
        }
        policy IPSEC-V1-1 {
            perfect-forward-secrecy keys group20;
            proposals ESP-P2;
        }
        vpn VPN-V1-1 {
            bind-interface st0.1;
            df-bit copy;
            ike {
                gateway GW-V1-1;
                ipsec-policy IPSEC-V1-1;
            }
            establish-tunnels on-traffic;
        }
    }
}

We get a route-based VPN because we bind the st0.1 interface to the VPN-V1-1 VPN. Once the VPN is up, any packet entering st0.1 will be encapsulated and sent to the 2001:db8:1::1 endpoint.

The last step is to configure BGP in the “private” routing instance to exchange routes with the remote site:

routing-instances {
    private {
        routing-options {
            router-id 1.0.3.2;
            maximum-paths 16;
        }
        protocols {
            bgp {
                preference 140;
                log-updown;
                group v4-VPN {
                    type external;
                    local-as 65003;
                    hold-time 6;
                    neighbor 2001:db8:ff::6 peer-as 65001;
                    multipath;
                    export [ NEXT-HOP-SELF OUR-ROUTES NOTHING ];
                }
            }
        }
    }
}

The export filter OUR-ROUTES needs to select the routes to be advertised to the other peers. For example:

policy-options {
    policy-statement OUR-ROUTES {
        term 10 {
            from {
                protocol ospf3;
                route-type internal;
            }
            then {
                metric 0;
                accept;
            }
        }
    }
}

The configuration needs to be repeated for the other peers. The complete version is available on GitHub. Once the BGP sessions are up, we start learning routes from the other sites. For example, here is the route for 2001:db8:a1::/64:

> show route 2001:db8:a1::/64 protocol bgp table private.inet6.0 best-path

private.inet6.0: 15 destinations, 19 routes (15 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2001:db8:a1::/64   *[BGP/140] 01:12:32, localpref 100, from 2001:db8:ff::6
                      AS path: 65001 I, validation-state: unverified
                      to 2001:db8:ff::6 via st0.1
                    > to 2001:db8:ff::14 via st0.2

It was learnt both from V1-1 (through st0.1) and V1-2 (through st0.2). The route is part of the private routing instance but encapsulated packets are sent/received in the public routing instance. No route-leaking is needed for this configuration. The VPN cannot be used as a gateway from internal hosts to external hosts (or vice-versa). This could also have been done with JunOS’ security policies (stateful firewall rules), but doing the separation with routing instances also ensures that routes from different domains are not mixed and that a simple policy misconfiguration won’t lead to a disaster.

Route-based VPN on Linux

Starting from Linux 3.15, a similar configuration is possible with the help of a virtual tunnel interface3. First, we create the “private” namespace:

# ip netns add private
# ip netns exec private sysctl -qw net.ipv6.conf.all.forwarding=1

Any “private” interface needs to be moved to this namespace (no IP is configured as we can use IPv6 link-local addresses):

# ip link set netns private dev eth1
# ip link set netns private dev eth2
# ip netns exec private ip link set up dev eth1
# ip netns exec private ip link set up dev eth2

Then, we create vti6, a tunnel interface (similar to st0.1 in the JunOS example):

# ip tunnel add vti6 \
   mode vti6 \
   local 2001:db8:1::1 \
   remote 2001:db8:3::2 \
   key 6
# ip link set netns private dev vti6
# ip netns exec private ip addr add 2001:db8:ff::6/127 dev vti6
# ip netns exec private sysctl -qw net.ipv4.conf.vti6.disable_policy=1
# ip netns exec private sysctl -qw net.ipv4.conf.vti6.disable_xfrm=1
# ip netns exec private ip link set vti6 mtu 1500
# ip netns exec private ip link set vti6 up

The tunnel interface is created in the initial namespace and moved to the “private” one. It will remember its original namespace where it will process encapsulated packets. Any packet entering the interface will temporarily get a firewall mark of 6 that will be used only to match the appropriate IPsec policy4 below. The kernel sets a low MTU on the interface to handle any possible combination of ciphers and protocols. We set it to 1500 and let PMTUD do its work.

We can then configure strongSwan5:

conn V3-2
  left        = 2001:db8:1::1
  leftsubnet  = ::/0
  right       = 2001:db8:3::2
  rightsubnet = ::/0
  authby      = psk
  mark        = 6
  auto        = route
  keyexchange = ikev2
  keyingtries = %forever
  ike         = aes256gcm16-prfsha384-ecp384!
  esp         = aes256gcm16-prfsha384-ecp384!
  mobike      = no

The IKE daemon configures the following policies in the kernel:

$ ip xfrm policy
src ::/0 dst ::/0
        dir out priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:1::1 dst 2001:db8:3::2
                proto esp reqid 1 mode tunnel
src ::/0 dst ::/0
        dir fwd priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:3::2 dst 2001:db8:1::1
                proto esp reqid 1 mode tunnel
src ::/0 dst ::/0
        dir in priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:3::2 dst 2001:db8:1::1
                proto esp reqid 1 mode tunnel
[…]

Those policies are used for any source or destination as long as the firewall mark is equal to 6, which matches the mark configured for the tunnel interface.

The last step is to configure BGP to exchange routes. We can use BIRD for this:

router id 1.0.1.1;
protocol device {
   scan time 10;
}
protocol kernel {
   persist;
   learn;
   import all;
   export all;
   merge paths yes;
}
protocol bgp IBGP_V3_2 {
   local 2001:db8:ff::6 as 65001;
   neighbor 2001:db8:ff::7 as 65003;
   import all;
   export where ifname ~ "eth*";
   preference 160;
   hold time 6;
}

Once BIRD is started in the “private” namespace, we can check routes are learned correctly:

$ ip netns exec private ip -6 route show 2001:db8:a3::/64
2001:db8:a3::/64 proto bird metric 1024
        nexthop via 2001:db8:ff::5  dev vti5 weight 1
        nexthop via 2001:db8:ff::7  dev vti6 weight 1

The above route was learnt from both V3-1 (through vti5) and V3-2 (through vti6). As with the JunOS version, there is no route-leaking between the “private” namespace and the initial one. The VPN cannot be used as a gateway between the two namespaces, only for encapsulation. This also prevents a misconfiguration (for example, the IKE daemon not running) from allowing packets to leave the private network.

As a bonus, unencrypted traffic can be observed with tcpdump on the tunnel interface:

$ ip netns exec private tcpdump -pni vti6 icmp6
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vti6, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
20:51:15.258708 IP6 2001:db8:a1::1 > 2001:db8:a3::1: ICMP6, echo request, seq 69
20:51:15.260874 IP6 2001:db8:a3::1 > 2001:db8:a1::1: ICMP6, echo reply, seq 69

You can find all the configuration files for this example on GitHub. The documentation of strongSwan also features a page about route-based VPNs.


  1. Everything in this post should work with Libreswan

  2. fwd is for incoming packets on non-local addresses. It only makes sense in transport mode and is a Linux-only particularity. 

  3. Virtual tunnel interfaces (VTI) were introduced in Linux 3.6 (for IPv4) and Linux 3.12 (for IPv6). Appropriate namespace support was added in 3.15. KLIPS, an alternative out-of-tree stack available since Linux 2.2, also features tunnel interfaces. 

  4. The mark is set right before doing a policy lookup and restored after that. Consequently, it doesn’t affect other possible uses (filtering, routing). However, as Netfilter can also set a mark, one should be careful for conflicts. 

  5. The ciphers used here are the strongest ones currently possible while keeping compatibility with JunOS. The documentation for strongSwan contains a complete list of supported algorithms as well as security recommendations to choose them. 

13 September, 2017 08:20AM by Vincent Bernat

Reproducible builds folks

Reproducible Builds: Weekly report #124

Here's what happened in the Reproducible Builds effort between Sunday September 3 and Saturday September 9 2017:

Media coverage

GSoC and Outreachy updates

Debian will participate in this year's Outreachy initiative and the Reproducible Builds is soliciting mentors and students to join this round.

For more background please see the following mailing list posts: 1, 2 & 3.

Reproducibility work in Debian

In addition, the following NMUs were accepted:

Reproducibility work in other projects

Patches sent upstream:

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

3 package reviews have been added, 2 have been updated and 2 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (15)

diffoscope development

Development continued in git, including the following contributions:

Mattia Rizzolo also uploaded the version 86 released last week to stretch-backports.

reprotest development

tests.reproducible-builds.org

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

13 September, 2017 07:48AM

September 12, 2017

hackergotchi for Markus Koschany

Markus Koschany

My Free Software Activities in August 2017

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in  Java, Games and LTS topics, this might be interesting for you.

DebConf 17 in Montreal

I traveled to DebConf 17 in Montreal/Canada. I arrived on 4 August and met a lot of different people whom I had only known by name so far. I think this is definitely one of the best aspects of real-life meetings, putting names to faces and getting to know someone better. I totally enjoyed my stay and I would like to thank all the people who were involved in organizing this event. You rock! I also gave a talk about “The past, present and future of Debian Games”, listened to numerous other talks and got a nice sunburn which luckily turned into a more brownish color when I returned home on 12 August. The only negative experience I had was with my airline which was supposed to fly me home to Frankfurt again. They decided to cancel the flight one hour before check-in for unknown reasons and just gave me a telephone number to sort things out. No support whatsoever. Fortunately (probably not for him) another DebConf attendee suffered the same fate, and together we could find another flight with Royal Air Maroc the same day. And so we made a short trip to Casablanca/Morocco and eventually arrived at our final destination in Frankfurt a few hours later. So which airline should you avoid at all costs (they still haven’t responded to my refund claims)? It’s WoW-Air from Iceland. (just wow)

Debian Games

Debian Java

Debian LTS

This was my eighteenth month as a paid contributor and I have been paid to work 20.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 31 July until 6 August I was in charge of our LTS frontdesk. I triaged bugs in tinyproxy, mantis, sox, timidity, ioquake3, varnish, libao, clamav, binutils, smplayer, libid3tag, mpg123 and shadow.
  • DLA-1064-1. Issued a security update for freeradius fixing 6 CVE.
  • DLA-1068-1. Issued a security update for git fixing 1 CVE.
  • DLA-1077-1. Issued a security update for faad2 fixing 11 CVE.
  • DLA-1083-1. Issued a security update for openexr fixing 3 CVE.
  • DLA-1095-1. Issued a security update for freerdp fixing 5 CVE.

Non-maintainer upload

  • I uploaded a security fix for openexr (#864078) to fix CVE-2017-9110, CVE-2017-9112 and CVE-2017-9116.

Thanks for reading and see you next time.

12 September, 2017 11:04PM by Apo

Arturo Borrero González

Google Hangouts in Debian testing (Buster)

debian-suricata logo

Google offers a lot of software components packaged specifically for Debian and Debian-like Linux distributions. Examples are: Chrome, Earth and the Hangouts plugin. Also, there are many other Internet services doing the same: Spotify, Dropbox, etc. I’m really grateful for them, since this make our life easier.

The problem is that our ecosystem is rather complex, with many distributions and many versions out there. I guess it’s not an easy task for them to keep up with such a big variety of supported variations.

In this particular case, it seems Google doesn’t support Debian testing in their .deb packages; at the moment, testing means Debian Buster. And the same happens with the official Spotify client package.

I’ve identified several issues with them, to name a few:

  • packages depend on lsb-core, no longer present in Buster testing.
  • packages depend on libpango1.0-0, however testing contains libpango-1.0-0

I’m in need of using Google Hangout so I’ve been forced to solve this situation by editing the .deb package provided by Google.

Simple steps:

  • 1) create a temporal working directory
% user@debian:~ $ mkdir pkg
% user@debian:~ $ cd pkg/
  • 2) get the original .deb package, the Google Hangout talk plugin.
% user@debian:~/pkg $ wget https://dl.google.com/linux/direct/google-talkplugin_current_amd64.deb
[...]
  • 3) extract the original .deb package
% user@debian:~/pkg $ dpkg-deb -R google-talkplugin_current_amd64.deb google-talkplugin_current_amd64/
  • 4) edit the control file, replace libpango1.0-0 with libpango-1.0-0
% user@debian:~/pkg $ nano google-talkplugin_current_amd64/DEBIAN/control
  • 5) rebuild the package and install it!
% user@debian:~/pkg $ dpkg -b google-talkplugin_current_amd64
% user@debian:~/pkg $ sudo dpkg -i google-talkplugin_current_amd64.deb

I have yet to investigate how to work around the lsb-core thing, so I still can’t use Google Earth.

12 September, 2017 07:37PM

September 11, 2017

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

rANS encoding of signed coefficients

I'm currently trying to make sense of some still image coding (more details to come at a much later stage!), and for a variety of reasons, I've chosen to use rANS as the entropy coder. However, there's an interesting little detail that I haven't actually seen covered anywhere; maybe it's just because I've missed something, or maybe because it's too blindingly obvious, but I thought I would document what I ended up with anyway. (I had hoped for something even more elegant, but I guess the obvious would have to do.)

For those that don't know rANS coding, let me try to handwave it as much as possible. Your state is typically a single word (in my case, a 32-bit word), which is refilled from the input stream as needed. The encoder and decoder work in reverse order; let's just talk about the decoder. Basically it works by looking at the lowest 12 (or whatever) bits of the decoder state, mapping each of those 2^12 slots to a decoded symbol. More common symbols are given more slots, proportionally to the frequency. Let me just write a tiny, tiny example with 2 bits and three symbols instead, giving four slots:

Lowest bits Symbol
00 0
01 0
10 1
11 2

Note that the zero coefficient here maps to one out of two slots (ie., a range); you don't choose which one yourself, the encoder stashes some information in there (which is used to recover the next control word once you know which symbol it is).

Now for the actual problem: When storing DCT coefficients, we typically want to also store a sign (ie., not just 1 or 2, but also -1/+1 and -2/+2). The statistical distribution is symmetrical, so the sign bit is incompressible (except that of course there's no sign bit needed for 0). We could have done this by introducing new symbols -1 and -2 in addition to our three other ones, but this means we'll need more bits of precision, and accordingly larger look-up tables (which is negative for performance). So let's find something better.

We could also simply store it separately somehow; if the coefficient is non-zero, store the bits in some separate repository. Perhaps more elegantly, you can encode a second symbol in the rANS stream with probability 1/2, but this is more expensive computationally. But both of these have the problem that they're divergent in terms of control flow; nonzero coefficients potentially need to do a lot of extra computation and even loads. This isn't nice for SIMD, and it's not nice for GPU. It's generally not really nice.

The solution I ended up with was simulating a larger table with a smaller one. Simply rotate the table so that the zero symbol has the top slots instead of the bottom slots, and then replicate the rest of the table. For instance, take this new table:

Lowest bits Symbol
000 1
001 2
010 0
011 0
100 0
101 0
110 -1
111 -2

(The observant reader will note that this doesn't describe the exact same distribution as last time—zero has twice the relative frequency as in the other table—but ignore that for the time being.)

In this case, the bottom half of the table doesn't actually need to be stored! We know that if the three bottom bits are >= 110 (6 in decimal), we have a negative value, can subtract 6, and then continue decoding. If we go past the end of our 2-bit table despite that, we know we are decoding a zero coefficient (which doesn't have a sign), so we can just clamp the read; or for a GPU, reads out-of-bounds on a texture will typically return 0 anyway. So it all works nicely, and the divergent I/O is gone.
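
To make this concrete, here is a small Python sketch of the decode-side lookup (the function name is mine, and the rest of the rANS machinery is omitted; only the stored 2-bit table is kept in memory):

TABLE = [1, 2, 0, 0]  # stored 2-bit table: +1, +2, and the start of zero's range

def decode_signed(state):
    # map the lowest 3 bits of the rANS state to a signed symbol
    s = state & 0b111
    sign = 1
    if s >= 6:  # the top slots hold the mirrored negative symbols
        sign = -1
        s -= 6
    # out-of-bounds reads are clamped: anything past the stored table is zero
    return sign * TABLE[min(s, len(TABLE) - 1)]

# reproduces the full 8-slot table above: [1, 2, 0, 0, 0, 0, -1, -2]
assert [decode_signed(s) for s in range(8)] == [1, 2, 0, 0, 0, 0, -1, -2]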

If this piqued your interest, you probably want to read up on rANS in general; Fabian Giesen (aka ryg) has some notes that work as a good starting point, but beware; some of this is pretty confusing. :-)

11 September, 2017 11:00PM

September 10, 2017

hackergotchi for Steve Kemp

Steve Kemp

Debian-Administration.org is closing down

After 13 years the Debian-Administration website will be closing down towards the end of the year.

The site will go read-only at the end of the month, and will slowly be stripped back from that point towards the end of the year - leaving only a static copy of the articles, and content.

This is largely happening due to lack of content. There were only two articles posted last year, and every time I consider writing more content I lose my enthusiasm.

There was a time when people contributed articles, but these days they tend to post such things on their own blogs, on medium, on Reddit, etc. So it seems like a good time to retire things.

An official notice has been posted on the site-proper.

10 September, 2017 09:00PM

hackergotchi for Adnan Hodzic

Adnan Hodzic

Secure traffic to ZNC on Synology with Let’s Encrypt

I’ve been using IRC since late 1990’s, and I continue to do so to this day due to it (still) being one of the driving development forces in various open source communities. Especially in Linux development … and some of my acquintances I can only get in touch with via IRC :)

My Setup

On my Synology NAS I run ZNC (IRC bouncer/proxy) to which I connect using various IRC clients (irssi/XChat Azure/AndChat) from various platforms (Linux/Mac/Android). In this case ZNC serves as a gateway, and no matter which device/client I connect from, I’m always connected to the same IRC servers/chat rooms/settings where I left off.

This is all fine and dandy, but connecting from external networks to ZNC means you will hand in your ZNC credentials in plain text. Which is a problem for me, even though we’re “only” talking about an IRC bouncer/proxy.

With that said, how do we encrypt external traffic to our ZNC?

HowTo: Chat securely with ZNC on Synology using Let’s Encrypt SSL certificate

For reference or more thorough explanation of some of the steps/topics please refer to: Secure (HTTPS) public access to Synology NAS using Let’s Encrypt (free) SSL certificate

Requirements:

  • Synology NAS running DSM >= 6.0
  • Sub/domain name with ability to update DNS records
  • SSH access to your Synology NAS

1: DNS setup

Create an A record for the sub/domain you’d like to use to connect to your ZNC and point it to your Synology NAS external (WAN) IP. For your reference, the subdomain I’ll use is: irc.hodzic.org

2: Create Let’s Encrypt certificate

DSM: Control Panel > Security > Certificates > Add

Followed by:

Add a new certificate > Get a certificate from Let's Encrypt

Followed by adding the domain name the A record was created for in Step 1, i.e:

Get a certificate from Let's Encrypt for irc.hodzic.org

After the certificate is created, don’t forget to configure the newly created certificate to point to the correct domain name, i.e:

Configure Let's Encrypt Certificate

3: Install ZNC

In case you already have ZNC installed, I suggest you remove it and do a clean install, mainly due to some problems with the package in the past, where ZNC wouldn’t start automatically on boot, which led to projects like synology-znc-autostart. In the latest version, all of these problems have been fixed and a couple of new features have been added.

ZNC can be installed using Synology’s Package Center, if community package sources are enabled, which can simply be done by adding a new package source:

Name: SynoCommunity
Location: http://packages.synocommunity.com

Enable Community package sources in Synology Package Center

To successfully authenticate the newly added source, under the “General” tab, “Trust Level” should be set to “Any publisher”.

As part of the installation process, the ZNC config will be run with the most sane/useful options and an admin user will be created, allowing you access to the ZNC webadmin.

4: Secure access to ZNC webadmin

Now we want to bind our sub/domain created in “Step 1” to ZNC webadmin, and secure external access to it. This can be done by creating a reverse proxy.

As part of this, you need to know which port has been allocated for SSL in ZNC Webadmin, i.e:

ZNC Webadmin > Global settings - Listen Ports

In this case, we can see it’s 8251.

Reverse Proxy can be created in:

DSM: Control Panel > Application Portal > Reverse Proxy > Create

Where the sub/domain created in “Step 1” needs to point to the ZNC SSL port on localhost, i.e:

Reverse proxy: irc.hodzic.org setup

The ZNC webadmin is now available via HTTPS on the external network for the sub/domain you set up as part of Step 1, or in my case:

ZNC webadmin (HTTPS)

As part of this step, in ZNC webadmin I’d advise you to create IRC servers and chatrooms you would like to connect to later.

5: Create a .pem file from the Let’s Encrypt certificate for ZNC to use

On Synology, Let’s Encrypt certificates are stored and located on:

/usr/syno/etc/certificate/_archive/

In case you have multiple certificates, based on the date your certificate was created you can determine in which directory your newly generated certificate is stored, i.e:

drwx------ 2 root root 4096 Sep 10 12:57 JeRh3Y

Once it’s determined which certifiate is the one we want use, generate .pem by running following:

sudo cat /usr/syno/etc/certificate/_archive/JeRh3Y/{privkey,cert,chain}.pem > /usr/local/znc/var/znc.pem

After this restart ZNC:

sudo /var/packages/znc/scripts/start-stop-status stop && sudo /var/packages/znc/scripts/start-stop-status start

6: Configure IRC client

In this example I’ll use XChat Azure on MacOS, and same procedure should be identical for HexChat/XChat clients on any other platform.

Although all information is picked up from ZNC itself, user details will need to be filled in.

In my setup I automatically connect to the freenode and oftc networks, so I created two servers for local network and two for external network usage; the latter is what we’re concentrating on.

On the “General” tab of our newly created server, the hostname should be the sub/domain we set up as part of “Step 1”, and the port number should be the one we found in “Step 4”; the SSL checkbox must be checked.

Xchat Azure: Network list - General tab

On “Connecting” tab “Server password” field needs to be filled in following format:

johndoe/freenode:password

Where, “johndoe” is ZNC username. “freenode” is ZNC network name, and “password” is ZNC password.

Xchat Azure: Network list - Connecting tab

“freenode” in this case must first be created as part of the ZNC webadmin configuration mentioned in “Step 4”. The same goes for the oftc network configuration.

As part of establishing the connection, information about our Let’s Encrypt certificate will be displayed, after which the connection will be established and you’ll be automatically logged into all chatrooms.

Happy hacking!

10 September, 2017 04:40PM by Adnan Hodzic

intrigeri

Can you reproduce this Tails ISO image?

Thanks to a Mozilla Open Source Software award, we have been working on making the Tails ISO images build reproducibly.

We have made huge progress: for a few months now, ISO images built by Tails core developers and our CI system have always been identical. But we're not done yet and we need your help!

Our first call for testing build reproducibility in August uncovered a number of remaining issues. We think that we have fixed them all since, and we now want to find out what other problems may prevent you from building our ISO image reproducibly.

Please try to build an ISO image today, and tell us whether it matches ours!

Build an ISO

These instructions have been tested on Debian Stretch and testing/sid. If you're using another distribution, you may need to adjust them.

If you get stuck at some point in the process, see our more detailed build documentation and don't hesitate to contact us.

Setup the build environment

You need a system that supports KVM, 1 GiB of free memory, and about 20 GiB of disk space.

  1. Install the build dependencies:

    sudo apt install \
        git \
        rake \
        libvirt-daemon-system \
        dnsmasq-base \
        ebtables \
        qemu-system-x86 \
        qemu-utils \
        vagrant \
        vagrant-libvirt \
        vmdebootstrap && \
    sudo systemctl restart libvirtd
    
  2. Ensure your user is in the relevant groups:

    for group in kvm libvirt libvirt-qemu ; do
       sudo adduser "$(whoami)" "$group"
    done
    
  3. Logout and log back in to apply the new group memberships.

Build Tails 3.2~alpha2

This should produce a Tails ISO image:

git clone https://git-tails.immerda.ch/tails && \
cd tails && \
git checkout 3.2-alpha2 && \
git submodule update --init && \
rake build

Send us feedback!

No matter how your build attempt turned out we are interested in your feedback.

Gather system information

To gather the information we need about your system, run the following commands in the terminal where you've run rake build:

sudo apt install apt-show-versions && \
(
  for f in /etc/issue /proc/cpuinfo
  do
    echo "--- File: ${f} ---"
    cat "${f}"
    echo
  done
  for c in free locale env 'uname -a' '/usr/sbin/libvirtd --version' \
            'qemu-system-x86_64 --version' 'vagrant --version'
  do
    echo "--- Command: ${c} ---"
    eval "${c}"
    echo
  done
  echo '--- APT package versions ---'
  apt-show-versions qemu:amd64 linux-image-amd64:amd64 vagrant \
                    libvirt0:amd64
) | bzip2 > system-info.txt.bz2

Then check that the generated file doesn't contain any sensitive information you do not want to leak:

bzless system-info.txt.bz2

Next, please follow the instructions below that match your situation!

If the build failed

Sorry about that. Please help us fix it by opening a ticket:

  • set Category to Build system;
  • paste the output of rake build;
  • attach system-info.txt.bz2 (this will publish that file).

If the build succeeded

Compute the SHA-512 checksum of the resulting ISO image:

sha512sum tails-amd64-3.2~alpha2.iso

Compare your checksum with ours:

9b4e9e7ee7b2ab6a3fb959d4e4a2db346ae322f9db5409be4d5460156fa1101c23d834a1886c0ce6bef2ed6fe378a7e76f03394c7f651cc4c9a44ba608dda0bc

If the checksums match: success, congrats for reproducing Tails 3.2~alpha2! Please send an email to tails-dev@boum.org (public) or tails@boum.org (private) with the subject "Reproduction of Tails 3.2~alpha2 successful" and system-info.txt.bz2 attached. Thanks in advance! Then you can stop reading here.

Else, if the checksums differ: too bad, but really it's good news as the whole point of the exercise is precisely to identify such problems :) Now you are in a great position to help improve the reproducibility of Tails ISO images by following these instructions:

  1. Install diffoscope version 83 or higher and all the packages it recommends. For example, if you're using Debian Stretch:

    sudo apt remove diffoscope && \
    echo 'deb http://ftp.debian.org/debian stretch-backports main' \
      | sudo tee /etc/apt/sources.list.d/stretch-backports.list && \
    sudo apt update && \
    sudo apt -o APT::Install-Recommends="true" \
             install diffoscope/stretch-backports
    
  2. Download the official Tails 3.2~alpha2 ISO image.

  3. Compare the official Tails 3.2~alpha2 ISO image with yours:

    diffoscope \
           --text diffoscope.txt \
           --html diffoscope.html \
           --max-report-size 262144000 \
           --max-diff-block-lines 10000 \
           --max-diff-input-lines 10000000 \
           path/to/official/tails-amd64-3.2~alpha2.iso \
           path/to/your/own/tails-amd64-3.2~alpha2.iso
    bzip2 diffoscope.{txt,html}
    
  4. Send an email to tails-dev@boum.org (public) or tails@boum.org (private) with the subject "Reproduction of Tails 3.2~alpha2 failed", attaching:

    • system-info.txt.bz2;
    • the smallest file among diffoscope.txt.bz2 and diffoscope.html.bz2, except if they are larger than 100 KiB, in which case better upload the file somewhere (e.g. share.riseup.net) and share the link in your email.

Thanks a lot!

Credits

Thanks to Ulrike & anonym who authored a draft on which this blog post is based.

10 September, 2017 02:27PM

Sylvain Beucler

dot-zed archive file format

TL,DR: I reverse-engineered the .zed encrypted archive format.
Following a clean-room design, I'm providing a description that can be implemented by a third-party.
Interested? :)

(reference version at: https://www.beuc.net/zed/)

.zed archive file format

Introduction

Archives with the .zed extension are conceptually similar to an encrypted .zip file.

In addition to a specific format, .zed files support multiple users: files are encrypted using the archive master key, which itself is encrypted for each user and/or authentication method (password, RSA key through certificate or PKCS#11 token). Metadata such as filenames is partially encrypted.

.zed archives are used as stand-alone or attached to e-mails with the help of a MS Outlook plugin. A variant, which is not covered here, can encrypt/decrypt MS Windows folders on the fly like ecryptfs.

In the spirit of academic and independent research this document provides a description of the file format and encryption algorithms for this encrypted file archive.

See the conventions section for conventions and acronyms used in this document.

Structure overview

The .zed file format is composed of several layers.

  • The main container uses the Compound File Binary format (MS-CFB), which is notably used by MS Office 97-2003 .doc files. It contains several streams:

    • Metadata stream: in OLE Property Set format (MS-OLEPS), contains 2 blobs in a specific Type-Length-Value (TLV) format:

      • _ctlfile: global archive properties and access list
        It is obfuscated by means of static-key AES encryption.
        The properties include archive initial filename and a global IV.
        A global encryption key is itself encrypted in each user entry.

      • _catalog: file list
        Contains each file's metadata, indexed with a 15-byte identifier.
        Directories are supported.
        Full filename is encrypted using AES.
        File extension is (redundantly) stored in clear, and so are file metadata such as modification time.

    • Each file in the archive is compressed with zlib and encrypted with the standard AES algorithm, in a separate stream.
      Several encryption schemes and key sizes are supported.
      The file stream is split in chunks of 512 bytes, individually encrypted.

    • Optional streams, contain additional metadata as well as pictures to display in the application background ("watermarks"). They are not discussed here.

Or as a diagram:

+----------------------------------------------------------------------------------------------------+
| .zed archive (MS-CBF)                                                                              |
|                                                                                                    |
|  stream #1                         stream #2                       stream #3...                    |
| +------------------------------+  +---------------------------+  +---------------------------+     |
| | metadata (MS-OLEPS)          |  | encryption (AES)          |  | encryption (AES)          |     |
| |                              |  | 512-bytes chunks          |  | 512-bytes chunks          |     |
| | +--------------------------+ |  |                           |  |                           |     |
| | | obfuscation (static key) | |  | +-----------------------+ |  | +-----------------------+ |     |
| | | +----------------------+ | |  |-| compression (zlib)    |-|  |-| compression (zlib)    |-|     |
| | | |_ctlfile (TLV)        | | |  | |                       | |  | |                       | | ... |
| | | +----------------------+ | |  | | +---------------+     | |  | | +---------------+     | |     | 
| | +--------------------------+ |  | | | file contents |     | |  | | | file contents |     | |     |
| |                              |  | | |               |     | |  | | |               |     | |     |
| | +--------------------------+ |  |-| +---------------+     |-|  |-| +---------------+     |-|     |
| | | _catalog (TLV)           | |  | |                       | |  | |                       | |     |
| | +--------------------------+ |  | +-----------------------+ |  | +-----------------------+ |     |
| +------------------------------+  +---------------------------+  +---------------------------+     |
+----------------------------------------------------------------------------------------------------+

Encryption schemes

Several AES key sizes are supported, such as 128 and 256 bits.

The Cipher Block Chaining (CBC) block cipher mode of operation is used to decrypt multiple AES 16-byte blocks, which means an initialisation vector (IV) is stored in clear along with the ciphertext.

All filenames and file contents are encrypted using the same encryption mode, key and IV (e.g. if you remove and re-add a file in the archive, the resulting stream will be identical).

No cleartext padding is used during encryption; instead, several end-of-stream handlers are available, so the ciphertext has exactly the size of the cleartext (e.g. the size of the compressed file).

The following variants were identified in the 'encryption_mode' field.

STREAM

This is the end-of-stream handler for:

  • obfuscated metadata encrypted with static AES key
  • filenames and files in archives with 'encryption_mode' set to "AES-CBC-STREAM"
  • any AES ciphertext of size < 16 bytes, regardless of encryption mode

This end-of-stream handler is apparently specific to the .zed format, and applied when the cleartext does not end on a 16-byte boundary; in this case special processing is performed on the last partial 16-byte block.

The encryption and decryption phases are identical: let's assume the last partial block of cleartext (for encryption) or ciphertext (for decryption) was appended after all the complete 16-byte blocks of ciphertext:

  • the second-to-last block of the ciphertext is encrypted in AES-ECB mode (i.e. block cipher encryption only, without XORing with the IV)

  • then XOR-ed with the last partial block (hence truncated to the length of the partial block)

In either case, if the full ciphertext is less than one AES block (< 16 bytes), then the IV is used instead of the second-to-last block.
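
By way of illustration, here is a minimal Python sketch of this handler using PyCryptodome (the function name and argument layout are mine; the complete blocks are assumed to go through regular AES-CBC processing):

from Crypto.Cipher import AES

def stream_tail(key, iv, complete_ct_blocks, partial_block):
    # complete_ct_blocks: ciphertext of all the complete 16-byte blocks;
    # partial_block: the trailing partial block (cleartext when encrypting,
    # ciphertext when decrypting) -- the operation is its own inverse
    prev = complete_ct_blocks[-16:] if complete_ct_blocks else iv
    pad = AES.new(key, AES.MODE_ECB).encrypt(prev)  # ECB: no XOR with the IV
    return bytes(a ^ b for a, b in zip(partial_block, pad))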

CTS

CTS or CipherText Stealing is the end-of-stream handler for:

  • filenames and files in archives with 'encryption_mode' set to "AES-CBC-CTS".
    • exception: if the size of the ciphertext is < 16 bytes, then "STREAM" is used instead.

It matches the CBC-CS3 variant as described in Recommendation for Block Cipher Modes of Operation: Three Variants of Ciphertext Stealing for CBC Mode.

Empty cleartext

Since empty filenames or metadata are invalid, and since all files are compressed (resulting in a minimum 8-byte zlib cleartext), no empty cleartext was encrypted in the archive.

metadata stream

It is named 05356861616161716149656b7a6565636e576a33317a7868304e63 (hexadecimal), i.e. the character with code 5 followed by '5haaaaqaIekzeecnWj31zxh0Nc' (ASCII).

The format used is OLE Property Set (MS-OLEPS).

It introduces 2 property names "_ctlfile" (index 3) and "_catalog" (index 4), and 2 instances of said properties each containing an application-specific VT_BLOB (type 0x0041).

_ctlfile: obfuscated global properties and access list

This subpart is stored under index 3 ("_ctlfile") of the MS-OLEPS metadata.

It consists of:

  • static delimiter 0765921A2A0774534752073361719300 (hexadecimal) followed by 0100 (hexadecimal) (18 bytes total)
  • 16-byte IV
  • ciphertext
  • 1 uint32be representing the length of all the above
  • static delimiter 0765921A2A0774534752073361719300 (hexadecimal) followed by "ZoneCentral (R)" (ASCII) and a NUL byte (32 bytes total)

The ciphertext is produced with AES-CBC "STREAM" mode, using the 128-bit static key 37F13CF81C780AF26B6A52654F794AEF (hexadecimal) and the prepended IV, so as to obfuscate the access list. The ciphertext is continuous and not split in chunks (unlike files), even when it is larger than 512 bytes.
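
As an illustration, deobfuscating this blob could be sketched in Python as follows (a sketch, reusing the hypothetical stream_tail helper sketched in the STREAM section above):

from Crypto.Cipher import AES

STATIC_KEY = bytes.fromhex('37F13CF81C780AF26B6A52654F794AEF')

def deobfuscate_ctlfile(blob):
    # layout: 18-byte delimiter, 16-byte IV, ciphertext,
    # uint32be length, 32-byte trailing delimiter
    iv = blob[18:34]
    ciphertext = blob[34:-36]
    n = len(ciphertext) - len(ciphertext) % 16
    clear = AES.new(STATIC_KEY, AES.MODE_CBC, iv=iv).decrypt(ciphertext[:n])
    # the trailing partial block, if any, goes through the "STREAM" handler
    return clear + stream_tail(STATIC_KEY, iv, ciphertext[:n], ciphertext[n:])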

The decrypted text contains properties in a TLV format as described in _ctlfile TLV:

  • global archive properties as a 'fileprops' structure,

  • extra archive properties as a 'archive_extraprops' structure

  • users access list as a series of 'passworduser' and 'rsauser' entries.

Archives may include "mandatory" users that cannot be removed. They are typically used to add an enterprise-wide recovery RSA key to all archives. Extreme care must be taken to protect these keys, as they can decrypt all past archives generated from within that company.

_catalog: file list

This subpart is stored under index 4 ("_catalog") of the MS-OLEPS metadata.

It contains a series of 'fileprops' TLV structures, one for each file or directory.

The file hierarchy can be reconstructed by checking the 'parent_directory_id' field of each file entry. If 'parent_directory_id' is 0 then the file is located at the top-level of the hierarchy, otherwise it's located under the directory with the matching 'file_id'.
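
A minimal Python sketch of that reconstruction (the entry layout is hypothetical, with 0 standing for the top-level parent identifier):

def print_tree(entries, parent=0, depth=0):
    # entries: list of dicts with 'file_id', 'parent_directory_id' and the
    # decrypted 'filename', as recovered from the 'fileprops' structures
    for e in entries:
        if e['parent_directory_id'] == parent:
            print('  ' * depth + e['filename'])
            print_tree(entries, e['file_id'], depth + 1)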

TLV format

This format is a series of fields:

  • 4 bytes for Type (specified as a 4-bytes hexadecimal below)
  • 4 bytes for value Length (uint32be)
  • Value

Value semantics depend on its Type. It may contain an uint32be integer, a UTF-16LE string, a character sequence, or an inner TLV structure.

Unless otherwise noted, TLV structures appear once.

Some fields are optional and may not be present at all (e.g. 'archive_createdwith').

Some fields are unique within a structure (e.g. 'files_iv'), other may be repeated within a structure to form a list (e.g. 'fileprops' and 'passworduser').
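
By way of illustration, a minimal Python sketch of a reader for this format (the function name is mine; nested TLV structures can be parsed by recursing on the value):

import struct

def parse_tlv(buf):
    # yield (type, value) pairs from a TLV byte sequence
    off = 0
    while off < len(buf):
        tag = buf[off:off + 4].hex()                           # 4-byte Type
        (length,) = struct.unpack('>I', buf[off + 4:off + 8])  # uint32be Length
        yield tag, buf[off + 8:off + 8 + length]
        off += 8 + length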

The following top-level types have been identified, and are detailed in the next sections:

  • 80110600: fileprops, used for the file list as well as for the global archive properties
  • 001b0600: archive_extraprops
  • 80140600: accesslist

Some additional unidentified types may be present.

_ctlfile TLV

  • 80110600: fileprops (TLV structure): global archive properties
    • 00230400: archive_pathname (UTF-16LE string): initial archive filename (past versions also leaked the full pathname of the initial archive)
    • 80270200: encryption_mode (uint32be): 103 for "AES-CBC-STREAM", 104 for "AES-CBC-CTS"
    • 80260200: encryption_strength (uint32be): AES key size, in bytes (e.g. 32 means AES with a 256-bit key)
    • 80280500: files_iv (sequence of bytes): global IV for all filenames and file contents
  • 001b0600: archive_extraprops (TLV structure): additional archive properties (optional)
    • 00c40500: archive_creationtime (FILETIME): date and time when archive was initially created (optional)
    • 00c00400: archive_createdwith (UTF-16LE string): uuid-like structure describing the application that initialized the archive (optional)
      {00000188-1000-3CA8-8868-36F59DEFD14D} is Zed! Free 1.0.188.
  • 80140600: accesslist (TLV structure): describe the users, their key encryption and their permissions
    • 80610600: passworduser (TLV structure): user identified by password (0 or more)
    • 80620600: rsauser (TLV structure): user identified by RSA key (via file or PKCS#11 token) (0 or more)
    • Fields common to passworduser and rsauser:
      • 80710400: login (UTF-16LE string): user name
      • 80720300: login_md5 (sequence of bytes): used by the application to search for a user name
      • 807e0100: priv1 (uchar): user privileges; present and set to 1 when user is admin (optional)
      • 00830200: priv2 (uint32be): user privileges; present and set to 2 when user is admin, present and set to 5 when user is a marked as mandatory, e.g. for recovery keys (optional)
      • 80740500: files_key_ciphertext (sequence of bytes): the archive encryption key, itself encrypted
      • 00840500: user_creationtime (FILETIME): date and time when the user was added to the archive
    • passworduser-specific fields:
      • 80760500: pbe_salt (sequence of bytes): salt for PBE
      • 80770200: pbe_iter (uint32be): number of iterations for PBE
      • 80780200: pkcs12_hashfunc (uint32be): hash function used for PBE and PBA key derivation
      • 80790500: pba_checksum (sequence of bytes): password derived with PBA to check for password validity
      • 807a0500: pba_salt (sequence of bytes): salt for PBA
      • 807b0200: pba_iter (uint32be): number of iterations for PBA
    • rsauser-specific fields:
      • 807d0500: certificate (sequence of bytes): user X509 certificate in DER format

_catalog TLV

  • 80110600: fileprops (TLV structure): describe the archive files (0 or more)
    • 80300500: file_id (sequence of bytes): a 16-byte unique identifier
    • 80310400: filename_halfanon (UTF-16LE string): half-anonymized filename, e.g. File1.txt (leaking filename extension)
    • 00380500: filename_ciphertext (sequence of bytes): encrypted filename; may have a trailing NUL byte once decrypted
    • 80330500: file_size (uint64le): decompressed file size in bytes
    • 80340500: file_creationtime (FILETIME): file creation date and time
    • 80350500: file_lastwritetime (FILETIME): file last modification date and time
    • 80360500: file_lastaccesstime (FILETIME): file last access date and time
    • 00370500: parent_directory_id (sequence of bytes): file_id of the parent directory, 0 is top-level
    • 80320100: is_dir (uint32be): 1 if entry is directory (optional)

Decrypting the archive AES key

rsauser

The user accessing the archive will be authenticated by comparing his/her X509 certificate with the one stored in the 'certificate' field using DER format.

The 'files_key_ciphertext' field is then decrypted using the PKCS#1 v1.5 encryption mechanism, with the private key that matches the user certificate.
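
For illustration, this step could be sketched with PyCryptodome as follows (a sketch, assuming the matching private key is available as a PEM file; the function name is mine):

from Crypto.Cipher import PKCS1_v1_5
from Crypto.PublicKey import RSA

def decrypt_archive_key_rsa(private_key_pem, files_key_ciphertext):
    key = RSA.import_key(private_key_pem)
    # PKCS#1 v1.5 decryption; the second argument is returned on failure
    return PKCS1_v1_5.new(key).decrypt(files_key_ciphertext, None)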

passworduser

An intermediary user key, a user IV and an integrity checksum will be derived from the user password, using the deprecated PKCS#12 method as described at rfc7292 appendix B.

Note: this is not PKCS#5 (nor PBKDF1/PBKDF2), this is an incompatible method from PKCS#12 that notably does not use HMAC.

The 'pkcs12_hashfunc' field defines the underlying hash function. The following values have been identified:

  • 21: SHA-1
  • 22: SHA-256

PBA - Password-based authentication

The user accessing the archive will be authenticated by deriving an 8-byte sequence from his/her password.

The parameters for the derivation function are:

  • ID: 3
  • 'pba_salt': the salt, typically an 8-byte random sequence
  • 'pba_iter': the iteration count, typically 200000

The derivation is checked against 'pba_checksum'.
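
A sketch of this check in Python, assuming a hypothetical pkcs12_kdf(password, salt, iterations, purpose_id, size, hashfunc) helper implementing the RFC 7292 appendix B derivation (not shown here):

def check_password(password, pba_salt, pba_iter, pba_checksum, hashfunc):
    # pkcs12_kdf is a hypothetical helper implementing the PKCS#12
    # derivation; purpose_id=3 and size=8 as specified above
    derived = pkcs12_kdf(password, pba_salt, pba_iter, purpose_id=3,
                         size=8, hashfunc=hashfunc)
    return derived == pba_checksum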

PBE - Password-based encryption

Once the user is identified, 2 new values are derived from the password with different parameters to produce the IV and the key decryption key, with the same hash function:

  • 'pbe_salt': the salt, typically an 8-byte random sequence
  • 'pbe_iter': the iteration count, typically 100000

The parameters specific to user key are:

  • ID: 1
  • size: 32

The user key needs to be truncated to a length of 'encryption_strength', as specified in bytes in the archive properties.

The parameters specific to user IV are:

  • ID: 2
  • size: 16

Once the key decryption key and the IV are derived, 'files_key_ciphertext' is decrypted using AES CBC, with PKCS#7 padding.
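
Assuming the PKCS#12 derivation above has produced the key decryption key and the user IV, this last step could be sketched in Python as (function and variable names are mine):

from Crypto.Cipher import AES

def decrypt_archive_key(kdk, user_iv, files_key_ciphertext, encryption_strength):
    # kdk: the 32-byte user key derived with ID 1, truncated to the key size
    kdk = kdk[:encryption_strength]
    clear = AES.new(kdk, AES.MODE_CBC, iv=user_iv).decrypt(files_key_ciphertext)
    return clear[:-clear[-1]]  # strip the PKCS#7 padding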

Identifying file streams

The name of the MS-CFB stream is derived by shuffling the bytes from the 'file_id' field and then encoding the result as hexadecimal.

The reordering is:

Initial  offset: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Shuffled offset: 3 2 1 0 5 4 7 6 8 9 10 11 12 13 14 15

The 16th byte is usually a NUL byte, hence the stream identifier is a 30-character-long string.
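
In Python, the derivation could be sketched as follows (note the permutation consists of byte swaps, so it is its own inverse):

SHUFFLE = [3, 2, 1, 0, 5, 4, 7, 6, 8, 9, 10, 11, 12, 13, 14, 15]

def stream_name(file_id):
    # derive the MS-CFB stream name from the 16-byte 'file_id' field
    shuffled = bytes(file_id[i] for i in SHUFFLE)
    # the trailing NUL byte is dropped, giving a 30-character hex string
    return shuffled.rstrip(b'\x00').hex()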

Decrypting files

The compressed stream is split in chunks of 512 bytes, each of them encrypted separately using AES CBC and the global archive encryption scheme. Decryption uses the global AES key (retrieved using the user credentials), and the global IV (retrieved from the deobfuscated archive metadata).

The IV for each chunk is computed by:

  • expressing the current chunk number as little endian on 16 bytes
  • XORing it with the global IV
  • encrypting with the global AES key in ECB mode (without IV).

Each chunk is an independent stream and the decryption process involves end-of-stream handling even if this is not the end of the actual file. This is particularly important for the CTS handler.

Note: this is not to be confused with the CTR block cipher mode of operation, which operates differently and requires a nonce.
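
A minimal Python sketch of this chunk IV derivation, using PyCryptodome (the function name is mine):

from Crypto.Cipher import AES

def chunk_iv(files_key, files_iv, chunk_number):
    # express the chunk number as little endian on 16 bytes
    counter = chunk_number.to_bytes(16, 'little')
    # XOR it with the global IV
    xored = bytes(a ^ b for a, b in zip(counter, files_iv))
    # encrypt with the global AES key in ECB mode (without IV)
    return AES.new(files_key, AES.MODE_ECB).encrypt(xored)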

Decompressing files

Compressed streams are zlib streams with default compression options and can be decompressed following the zlib format.

Test cases

Excluded for brevity, cf. https://www.beuc.net/zed/#test-cases.

Conventions and references

Feedback

Feel free to send comments at beuc@beuc.net. If you have .zed files that you think are not covered by this document, please send them as well (replace sensitive files with other ones). The author's GPG key can be found at 8FF1CB6E8D89059F.

Copyright (C) 2017 Sylvain Beucler

Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. This file is offered as-is, without any warranty.

10 September, 2017 01:50PM

hackergotchi for Charles Plessy

Charles Plessy

Summary of the discussion on off-line keys.

Last month, there was an interesting discussion about off-line GnuPG keys and their storage systems on the debian-project@l.d.o mailing list. I tried to summarise it in the Debian wiki, in particular by creating two new pages.

10 September, 2017 12:06PM

hackergotchi for Joachim Breitner

Joachim Breitner

Less parentheses

Yesterday, at the Haskell Implementers Workshop 2017 in Oxford, I gave a lightning talk titled “syntactic musings”, where I presented three possibly useful syntactic features that one might want to add to a language like Haskell.

The talk caused quite some heated discussion, and since the Internet likes heated discussion, I will happily share these ideas with you.

Context aka. Sections

This is probably the most relevant of the three proposals. Consider a bunch of related functions, say analyseExpr and analyseAlt, like these:

analyseExpr :: Expr -> Expr
analyseExpr (Var v) = change v
analyseExpr (App e1 e2) =
  App (analyseExpr e1) (analyseExpr e2)
analyseExpr (Lam v e) = Lam v (analyseExpr e)
analyseExpr (Case scrut alts) =
  Case (analyseExpr scrut) (analyseAlt <$> alts)

analyseAlt :: Alt -> Alt
analyseAlt (dc, pats, e) = (dc, pats, analyseExpr e)

You have written them, but now you notice that you need to make them configurable, e.g. to do different things in the Var case. You thus add a parameter to all these functions, and hence an argument to every call:

type Flag = Bool

analyseExpr :: Flag -> Expr -> Expr
analyseExpr flag (Var v) = if flag then change1 v else change2 v
analyseExpr flag (App e1 e2) =
  App (analyseExpr flag e1) (analyseExpr flag e2)
analyseExpr flag (Lam v e) = Lam v (analyseExpr (not flag) e)
analyseExpr flag (Case scrut alts) =
  Case (analyseExpr flag scrut) (analyseAlt flag <$> alts)

analyseAlt :: Flag -> Alt -> Alt
analyseAlt flag (dc, pats, e) = (dc, pats, analyseExpr flag e)

I find this code problematic. The intention was: “flag is a parameter that an external caller can use to change the behaviour of this code, but when reading and reasoning about this code, flag should be considered constant.”

But this intention is neither easily visible nor enforced. And in fact, in the above code, flag does “change”, as analyseExpr passes something else in the Lam case. The idiom is indistinguishable from the environment idiom, where a locally changing environment (such as “variables in scope”) is passed around.

So we are facing exactly the same problem as when reasoning about a loop in an imperative program with mutable variables. And we (pure functional programmers) should know better: We cherish immutability! We want to bind our variables once and have them scope over everything we need to scope over!

The solution I’d like to see in Haskell is common in other languages (Gallina, Idris, Agda, Isar), and this is what it would look like here:

type Flag = Bool
section (flag :: Flag) where
  analyseExpr :: Expr -> Expr
  analyseExpr (Var v) = if flag then change1 v else change2 v
  analyseExpr (App e1 e2) =
    App (analyseExpr e1) (analyseExpr e2)
  analyseExpr (Lam v e) = Lam v (analyseExpr e)
  analyseExpr (Case scrut alts) =
    Case (analyseExpr scrut) (analyseAlt <$> alts)

  analyseAlt :: Alt -> Alt
  analyseAlt (dc, pats, e) = (dc, pats, analyseExpr e)

Now the intention is clear: Within a clearly marked block, flag is fixed and when reasoning about this code I do not have to worry that it might change. Either all variables will be passed to change1, or all to change2. An important distinction!

Therefore, inside the section, the type of analyseExpr does not mention Flag, whereas outside its type is Flag -> Expr -> Expr. This is a bit unusual, but not completely: You see precisely the same effect in a class declaration, where the type signatures of the methods do not mention the class constraint, but outside the declaration they do.

Note that idioms like implicit parameters or the Reader monad do not give the guarantee that the parameter is (locally) constant.

More details can be found in the GHC proposal that I prepared, and I invite you to raise concern or voice support there.

Curiously, this problem must have bothered me for longer than I remember: I discovered that seven years ago, I wrote a Template Haskell based implementation of this idea in the seal-module package!

Less parentheses 1: Bulleted argument lists

The next two proposals are all about removing parentheses. I believe that Haskell's tendency to express complex code with no or few parentheses is one of its big strengths, as it makes it easier to visually parse programs. A common idiom is to use the $ operator to separate a function from a complex argument without parentheses, but it does not help when there are multiple complex arguments.

For that case I propose to steal an idea from the surprisingly successful markup language markdown, and use bulleted lists to indicate multiple arguments:

foo :: Baz
foo = bracket
        • some complicated code
          that is evaluated first
        • other complicated code for later
        • even more complicated code

I find this very easy to visually parse and navigate.

It is actually possible to do this now, if one defines (•) = id with infixl 0 •. A dedicated syntax extension (-XArgumentBullets) is preferable:

  • It only really adds readability if the bullets are nicely vertically aligned, which the compiler should enforce.
  • I would like to use $ inside these complex arguments, and multiple operators of precedence 0 do not mix. (infixl -1 • would help).
  • It should be possible to nest these, and distinguish different nesting levels based on their indentation.

Less parentheses 2: Whitespace precedence

The final proposal is the most daring. I am convinced that it improves readability and should be considered when creating a new language. As for Haskell, I am at the moment not proposing this as a language extension (but could be convinced to do so if there is enough positive feedback).

Consider this definition of append:

(++) :: [a] -> [a] -> [a]
[]     ++ ys = ys
(x:xs) ++ ys = x : (xs++ys)

Imagine you were explaining the last line to someone orally. How would you speak it? One common way to do so is to not read the parentheses out aloud, but rather to speak parenthesised expressions more quickly and add pauses otherwise.

We can do the same in syntax!

(++) :: [a] -> [a] -> [a]
[]   ++ ys = ys
x:xs ++ ys = x : xs++ys

The rule is simple: A sequence of tokens without any space is implicitly parenthesised.

The reaction I got in Oxford was horror and disgust. And that is understandable – we are very used to ignore spacing when parsing expressions (unless it is indentation, of course. Then we are no longer horrified, as our non-Haskell colleagues are when they see our code).

But I am convinced that once you let the rule sink in, you will have no problem parsing such code with ease, and soon even with greater ease than the parenthesised version. It is a very natural thing to look at the general structure, identify “compact chunks of characters”, mentally group them, and then go and separately parse the internals of the chunks and how the chunks relate to each other. More natural than first scanning everything for ( and ), matching them up, building a mental tree, and then digging deeper.

Incidentally, there was a non-programmer present during my presentation, and while she did not openly contradict the dismissive groan of the audience, I later learned that she found this variant quite obvious to understand and easier to read than the parenthesised code.

Some FAQs about this:

  • What about an operator with space on one side but not on the other? I’d simply forbid that, and hence enforce readable code.
  • Do operator sections still require parenthesis? Yes, I’d say so.
  • Does this overrule operator precedence? Yes! a * b+c == a * (b+c).
  • What is a token? Good question, and I am not yet decided. In particular: Is a parenthesised expression a single token? If so, then (Succ a)+b * c parses as ((Succ a)+b) * c, otherwise it should probably simply be illegal.
  • Can we extend this so that one space binds tighter than two spaces, and so on? Yes we can, but really, we should not.
  • This is incompatible with Agda’s syntax! Indeed it is, and I really like Agda’s mixfix syntax. Can’t have everything.
  • Has this been done before? I have not seen it in any language, but Lewis Wall has blogged this idea before.

Well, let me know what you think!

10 September, 2017 10:10AM by Joachim Breitner (mail@joachim-breitner.de)

Lior Kaplan

PHP 7.2 is coming… mcrypt extension isn’t

Early September: it's about 3 months before PHP 7.2 is expected to be released (schedule here). One of the changes is the removal of the mcrypt extension, after it was deprecated in PHP 7.1. The main problem with the mcrypt extension is that it is based on libmcrypt, which has been abandoned by its upstream since 2007. That's 10 years of keeping a library alive, moving the burden to distributions' security teams. But this isn't new; Remi already wrote about this two years ago: "About libmcrypt and php-mcrypt".

But with the removal of the extension from the PHP code base (about F**king time), what was until now only a "nice" recommendation becomes forced. And forcing people means some noise, although an alternative exists in PHP's own openssl extension. But as with many migrations that require code changes – it's going slowly.

The goal of this post is to reach out to the PHP ecosystem, map the components (mostly frameworks and applications) that still require/recommend mcrypt, and pressure them to fix it before PHP 7.2 is released. I'll appreciate the readers' help with this mapping in the comments.

For example, Laravel‘s release notes for 5.1:

In previous versions of Laravel, encryption was handled by the mcrypt PHP extension. However, beginning in Laravel 5.1, encryption is handled by the openssl extension, which is more actively maintained.

Or, on the other hand Joomla 3 requirements still mentions mcrypt.

mcrypt safe:

mcrypt dependent:

For those who really need mcrypt, it is part of PECL, PHP's extensions repository. You're welcome to compile it at your own risk.


Filed under: Debian GNU/Linux, PHP

10 September, 2017 08:56AM by Kaplan

September 09, 2017

Russell Coker

Observing Reliability

Last year I wrote about how great my latest Thinkpad is [1] in response to a discussion about whether a Thinkpad is still the “Rolls Royce” of laptops.

It was a few months after writing that post that I realised that I omitted an important point. After I had that laptop for about a year the DVD drive broke and made annoying clicking sounds all the time in addition to not working. I removed the DVD drive and the result was that the laptop was lighter and used less power without missing any feature that I desired. As I had installed Debian on that laptop by copying the hard drive from my previous laptop I had never used the DVD drive for any purpose. After a while I got used to my laptop being like that and the gaping hole in the side of the laptop where the DVD drive used to be didn’t even register to me. I would prefer it if Lenovo sold Thinkpads in the T series without DVD drives, but it seems that only the laptops with tiny screens are designed to lack DVD drives.

For my use of laptops this doesn’t change the conclusion of my previous post. Now the T420 has been in service for almost 4 years which makes the cost of ownership about $75 per year. $1.50 per week as a tax deductible business expense is very cheap for such a nice laptop. About a year ago I installed a SSD in that laptop, it cost me about $250 from memory and made it significantly faster while also reducing heat problems. The depreciation on the SSD about doubles the cost of ownership of the laptop, but it’s still cheaper than a mobile phone and thus not in the category of things that are expected to last for a long time – while also giving longer service than phones usually do.

One thing that’s interesting to consider is the fact that I forgot about the broken DVD drive when writing about this. I guess every review has an unspoken caveat of “this works well for me but might suck badly for your use case”. But I wonder how many other things that are noteworthy I’m forgetting to put in reviews because they just don’t impact my use. I don’t think that I am unusual in this regard, so reading multiple reviews is the sensible thing to do.

09 September, 2017 10:18AM by etbe

François Marier

TLS Authentication on Freenode and OFTC

In order to easily authenticate with IRC networks such as OFTC and Freenode, it is possible to use client TLS certificates (also known as SSL certificates). In fact, it turns out that it's very easy to set up both on irssi and on znc.

Generate your TLS certificate

On a machine with good entropy, run the following command to create a keypair that will last for 10 years:

openssl req -nodes -newkey rsa:2048 -keyout user.pem -x509 -days 3650 -out user.pem -subj "/CN=<your nick>"

Then extract your key fingerprint using this command:

openssl x509 -sha1 -noout -fingerprint -in user.pem | sed -e 's/^.*=//;s/://g'

Share your fingerprints with NickServ

On each IRC network, do this:

/msg NickServ IDENTIFY Password1!
/msg NickServ CERT ADD <your fingerprint>

in order to add your fingerprint to the access control list.

Configure ZNC

To configure znc, start by putting the key in the right place:

cp user.pem ~/.znc/users/<your nick>/networks/oftc/moddata/cert/

and then enable the built-in cert plugin for each network in ~/.znc/configs/znc.conf:

<Network oftc>
    ...
    LoadModule = cert
    ...
</Network>

<Network freenode>
    ...
    LoadModule = cert
    ...
</Network>

Configure irssi

For irssi, do the same thing but put the cert in ~/.irssi/user.pem and then change the OFTC entry in ~/.irssi/config to look like this:

{
  address = "irc.oftc.net";
  chatnet = "OFTC";
  port = "6697";
  use_tls = "yes";
  tls_cert = "~/.irssi/user.pem";
  tls_verify = "yes";
  autoconnect = "yes";
}

and the Freenode one to look like this:

{
  address = "chat.freenode.net";
  chatnet = "Freenode";
  port = "7000";
  use_tls = "yes";
  tls_cert = "~/.irssi/user.pem";
  tls_verify = "yes";
  autoconnect = "yes";
}

That's it. That's all you need to replace password authentication with a much stronger alternative.

09 September, 2017 04:52AM

September 08, 2017

Vincent Fourmond

Extract many attachement from many mails in one go using ripmime

I was recently looking for a way to extract many attachments from a series of emails. I first had a look at the AttachmentExtractor thunderbird plugin, but it seems very old and not maintained anymore. So I've come up with another very simple solution that also works with any other mail client.

Just copy all the mails you want to extract attachments from to a single (temporary) mail folder, find out which file holds the mail folder and use ripmime on that file (ripmime is packaged for Debian). For my case, it looked like:

~ ripmime -i .icedove/XXXXXXX.default/Mail/pop.xxxx/tmp -d target-directory

Simple solution, but it saved me quite some time. Hope it helps !

08 September, 2017 09:42PM by Vincent Fourmond (noreply@blogger.com)

Sven Hoexter

munin with TLS

Primarily a note for my future self so I don't have to find out what I did in the past once more.

If you're running some smaller systems scattered around the internet, without connecting them with a VPN, you might want your munin master and nodes to communicate with TLS and validate certificates. If you remember what to do, it's a rather simple and straightforward process. To manage the PKI I'll utilize the well-known easyrsa script collection. For this special-purpose CA I'll go with a flat layout, so it's one root certificate issuing all server and client certificates directly. Some very basic docs can also be found in the munin wiki.

master setup

For your '/etc/munin/munin.conf':

tls paranoid
tls_verify_certificate yes
tls_private_key /etc/munin/master.key
tls_certificate /etc/munin/master.crt
tls_ca_certificate /etc/munin/ca.crt
tls_verify_depth 1

A node entry with TLS will look like this:

[node1.stormbind.net]
    address [2001:db8::]
    use_node_name yes

Important points here:

  • "tls_certificate" is a Web Client Authentication certificate. The master connects to the nodes as a client.
  • "tls_ca_certificate" is the root CA certificate.
  • If you'd like to disable TLS connections, for example for localhost, set "tls disabled" in the node block.

For easy-rsa the following command invocations are relevant:

./easyrsa init-pki
./easyrsa build-ca
./easyrsa gen-req master
./easyrsa sign-req client master
./easyrsa set-rsa-pass master nopass

node setup

For your '/etc/munin/munin-node.conf':

tls paranoid
tls_verify_certificate yes
tls_private_key /etc/munin/node1.key
tls_certificate /etc/munin/node1.crt
tls_ca_certificate /etc/munin/ca.crt
tls_verify_depth 1

For easy-rsa the following command invocations are relevant:

./easyrsa gen-req node1
./easyrsa sign-req server node1
./easyrsa set-rsa-pass node1 nopass

Important points here:

  • "tls_certificate" on the node must be a server certificate.
  • You have to provide the CA here as well, so the node can verify the client certificate provided by the munin master.

08 September, 2017 02:41PM

September 07, 2017

Steinar H. Gunderson

Licensing woes

On releasing modified versions of GPLv3 software in binary form only (quote anonymized):

And in my opinion it's perfectly ok to give out a binary release of a project, that is a work in progress, so that people can try it out and coment on it. It's easier for them to have it as binary and not need to compile it themselfs. If then after a (long) while the code is still only released in binary form, then it's ok to start a discussion. But only for a quick test, that is unneccessary. So people, calm down and enjoy life!

I wonder at what point we got here.

07 September, 2017 11:37PM

Gunnar Wolf

It was thirty years ago today... (and a bit more): My first ever public speech!

I came across a folder with the most unexpected treasure trove: The text for my first ever public speech! (and some related materials)
In 1985, being nine years old, I went to the IDESE school, to learn Logo. I found my diploma over ten years ago and blogged about it in this same space. Of course, I don't expect any of you to remember what I wrote twelve years ago about a (then) twenty years old piece of paper!

I add to this very old stuff about Gunnar the four pages describing my game, Evitamono ("Avoid the monkey", approximately). I still remember the game quite vividly, including traumatic issues which were quite common back then; I wrote that «the sprites were accidentally deleted twice and the game once». I remember several of my peers telling about such experiences. Well, that is good if you account for the second system syndrome!

I also found the amazing course material for how to program sound and graphics in the C64 BASIC. That was a course taken by ten-year-old kids. Kids that understood that you had to write [255,129,165,244,219,165,0,102] (see pages 3-5) into a memory location starting at 53248 to redefine a character so it looked like the graphic element you wanted. Of course, it was done with a set of POKEs, as everything in C64. Or that you could program sound by setting the seven SID registers for each of the three voices containing low frequency, high frequency, low pulse, high pulse, wave control, wave length, wave amplitude in memory locations 54272 through 54292... And so on and on and on...

And as a proof that I did take the course:

...I don't think I could make most of my current BSc students make sense out of what is in the manual. But, being a kid in the 1980s, that was the only way to get a computer to do what you wanted. Yay for primitivity! :-D

Attachments:
  • Speech for "Evitamono" (1.29 MB)
  • Course material for sound and graphics programming in C64 BASIC (15.82 MB)
  • Proof that I was there! (4.86 MB)

07 September, 2017 06:35PM by gwolf

Lior Kaplan

FOSScamp Syros 2017 – day 3

The 3rd day should have started with a Debian sprint and then a LibreOffice one, taking advantage of me still attending, as that was my last day. But plans don't always work out, and we started 2 hours later. When everybody arrived we got everyone together for a short daily meeting (scrum style). The people were divided into 3 teams for translating: Debian Installer, LibreOffice and Gnome. For each team we made a short list of what was left and what to start with. And in the end – who does what, so there would be no toe stepping. I was really proud of this and felt it was time well spent.

The current translation percentage for Albanian in LibreOffice is 60%. So my recommendation to the team is to translate master only and not touch the help translation. My plan ahead is to improve the translation as much as possible for LibreOffice 6.0 and, near the branching point (set to November 20th by the release schedule), decide whether it's doable within the 6.0 lifetime or to set the goal at 6.1. In the 2nd case, we might try to backport translations to 6.0.

For the translation itself, I mentioned the KeyID language pack to the team and referred them to the nightly builds. These tools should help with keeping the translation quality high.

For the Debian team, after deciding who works on what, I asked Silva to review the others' work, as doing it myself started to take more and more of my time. It's also good that the reviewer knows the target language and, unlike me, can catch more than syntax-only mistakes. Another point is that she's more easily available to the team while I'm leaving soon, so I hope this reviewer role will stay part of the team.

With the time left I mostly worked on my own tasks, which were packaging the Albanian dictionary, resulting in https://packages.debian.org/sid/myspell-sq and making sure the dictionary is also part of LibreOffice resulting in https://gerrit.libreoffice.org/#/c/41906/ . When it is accepted, I want to upload it to the LibreOffice repository so all users can download and use the dictionary.

During the voyage home (ferry, bus, plane and train), I mailed Sergio Durigan Junior, my NM applicant, with a set of questions. My first action as an AM (:

Overall FOSScamp results for Albanian translation were very close to the goal I set (100%):

  • Albanian (sq) level1 – 99%
  • Albanian (sq) level2 – 25% (the rest is pending at #874497)
  • Albanian (sq) level3 – 100%

That’s the result of work by Silva Arapi, Eva Vranici, Redon Skikuli, Anisa Kuci and Nafie Shehu.


Filed under: Debian GNU/Linux, i18n & l10n, LibreOffice

07 September, 2017 03:13PM by Kaplan

Thomas Lange

My recent FAI activities

During DebConf 17 in Montréal I had a FAI demo session (video), where I showed how to create a customized installation CD and how to create a diskimage using the same configuration. This diskimage is ready for use with a VM software or can be booted inside a cloud environment.

During the last weeks I was working on FAI 5.4, which will be released in a few weeks. If you want to test it, use

deb https://fai-project.org/download beta-testing koeln

in your sources.list file.

The most important new feature will be cross-architecture support. I managed to create an ARM64 diskimage on an x86 host and boot it inside Qemu. Currently I am learning how to flash images onto my new Hikey960 board for booting my own Debian images on real hardware. The embedded world is still new for me and very different with respect to the boot process.

At DebConf, I also worked on debootstrap. I produced a set of patches which can speedup debootstrap by a factor of 2. See #871835 for details.

FAI debootstrap ARM

07 September, 2017 03:03PM

Reproducible builds folks

Reproducible Builds: Weekly report #123

Here's what happened in the Reproducible Builds effort between Sunday August 27 and Saturday September 2 2017:

Talks and presentations

Holger Levsen talked about our progress and our still-far goals at BornHack 2017 (Video).

Toolchain development and fixes

The Debian FTP archive will now reject changelogs where different entries have the same timestamps.

UDD now uses reproducible-tracker.json (~25MB), which ignores our tests for Debian unstable, instead of our full set of results in reproducible.json. Our tests for Debian unstable use a stricter definition of "reproducible" than what was recently added to Debian policy, and these stricter tests are currently more unreliable.

Packages reviewed and fixed, and bugs filed

Patches sent upstream:

Debian bugs filed:

Debian packages NMU-uploaded:

Reviews of unreproducible packages

25 package reviews have been added, 50 have been updated and 86 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (46)
  • Martín Ferrari (1)
  • Steve Langasek (1)

diffoscope development

Version 86 was uploaded to unstable by Mattia Rizzolo. It included previous weeks' contributions from:

  • Mattia Rizzolo
    • tests/binary: skip a test if the 'distro' module is not available.
    • Some code quality and style improvements.
  • Guangyuan Yang
    • tests/iso9660: support both cdrtools' and genisoimage's versions of isoinfo.
  • Chris Lamb
    • comparators/xml: Use name attribute over path to avoid leaking comparison full path in output.
    • Tidy diffoscope.progress a little.
  • Ximin Luo
    • Add a --tool-prefix-binutils CLI flag. Closes: #869868
    • On non-GNU systems, prefer some tools that start with "g". Closes: #871029
    • presenters/html: Don't traverse children whose parents were already limited. Closes: #871413
  • Santiago Torres-Arias
    • diffoscope.progress: Support the new fork of python-progressbar. Closes: #873157

reprotest development

Development continued in git with contributions from:

  • Ximin Luo:
    • Add -v/--verbose which is a bit more popular.
    • Make it possible to omit "auto" when building packages.
    • Refactor how the config file works, in preparation for new features.
    • chown -h for security.

Misc.

This week's edition was written by Ximin Luo, Chris Lamb, Bernhard M. Wiedemann and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

07 September, 2017 09:54AM

John Goerzen

Switching to xmonad + Gnome – and ditching a Mac

I have been using XFCE with xmonad for years now. I’m not sure exactly how many, but at least 6 years, if not closer to 10. Today I threw in the towel and switched to Gnome.

More recently, at a new job, I was given a Macbook Pro. I wasn’t entirely sure what to think of this, but I thought I’d give it a try. I found MacOS to be extremely frustrating and confining. It had no real support for a tiling window manager, and although projects like amethyst tried to approximate what xmonad can do on Linux, they were just too limited by the platform and were clunky. Moreover, the entire UI was surprisingly sluggish; maybe that was an induced effect from animations, but I don’t think that explains it. A Debian stretch install, even on inferior hardware, was snappy in a way that MacOS never was. So I have requested to swap for a laptop that will run Debian. The strange use of Command instead of Control for things, combined with the overall lack of configurability of keybindings, meant that I was going to always be fighting muscle memory moving from one platform to another. Not only that, but being back in the world of a Free Software OS means a lot.

Now then, back to xmonad and XFCE situation. XFCE once worked very well with xmonad. Over the years, this got more challenging. Around the jessie (XFCE 4.10) time, I had to be very careful about when I would let it save my session, because it would easily break. With stretch, I had to write custom scripts because the panel wouldn’t show up properly, and even some application icons would be invisible, if things were started in a certain order. This took much trial and error and was still cumbersome.

Gnome 3, with its tightly-coupled Gnome Shell, has never been compatible with other window managers — at least not directly. A person could have always used MATE with xmonad — but a lot of people that run XFCE tend to have some Gnome 3 apps (for instance, evince) anyhow. Cinnamon also wouldn’t work with xmonad, because it is simply another tightly-coupled shell instead of Gnome Shell. And then today I discovered gnome-flashback. gnome-flashback is a Gnome 3 environment that uses the traditional X approach with a separate window manager (metacity of yore by default). Sweet.

It turns out that Debian’s xmonad has built-in support for it, if you know the secret: apt-get install gnome-session-flashback (OK, it’s not so secret; it’s even in xmonad’s README.Debian these days). Install that, plus gnome and gdm3, and things are nice. Configure xmonad with GNOME support and poof – goodness right out of the box, selectable from the gdm sessions list.

I still have some gripes about Gnome’s configurability (or lack thereof). But I’ve got to say: This environment is the first one I’ve ever used that got external display switching very nearly right without any configuration, and I include MacOS in that. Plug in an external display, and poof – it’s configured and set up. You can hit a toggle key (Windows+P by default) to change the configurations, or use the Display section in gnome-control-center. Unplug it, and it instantly reconfigures itself to put everything back on the laptop screen. Yessss! I used to have scripts to do this in the wheezy/jessie days. XFCE in stretch had numerous annoying failures in this area which rendered the internal display completely dark until the next reboot – very frustrating. With Gnome, it just works. And, even if you have “suspend on lid closed” turned on, if the system is powered up and hooked up to an external display, it will keep running even if the lid is closed, figuring you must be using it on the external screen. Another thing the Mac wouldn’t do well.

All in all, some pretty good stuff here. I continue to be impressed by stretch. It is darn impressive to put this OS on generic hardware and have it outshine the closed-ecosystem Mac!

07 September, 2017 02:43AM by John Goerzen

September 06, 2017

Mike Gabriel

MATE 1.18 landed in Debian testing

This is to announce that finally all MATE Desktop 1.18 components have landed in Debian testing (aka buster).

Credits

Again a big thanks to the packaging team (esp. Vangelis Mouhtsis and Martin Wimpress, but also to Jeremy Bicha for constant advice and Aron Xu for joining the Debian+Ubuntu MATE Packaging Team and merging all the Ubuntu zesty and artful branches back to master).

Fully Available on all Debian-supported Architectures

The very special thing about this MATE 1.18 release for Debian is that MATE is now available on all Debian hardware architectures. See "Buildd" column on our DDPO overview page [1]. Thanks to all the people from the Debian porters realm for providing feedback to my porting questions.

References

06 September, 2017 09:04AM by sunweaver

September 05, 2017

Kees Cook

security things in Linux v4.13

Previously: v4.12.

Here’s a short summary of some of the interesting security things in Sunday’s v4.13 release of the Linux kernel:

security documentation ReSTification
The kernel has been switching to formatting documentation with ReST, and I noticed that none of the Documentation/security/ tree had been converted yet. I took the opportunity to take a few passes at formatting the existing documentation and, at Jon Corbet’s recommendation, split it up between end-user documentation (which is mainly how to use LSMs) and developer documentation (which is mainly how to use various internal APIs). A bunch of these docs need some updating, so maybe with the improved visibility, they’ll get some extra attention.

CONFIG_REFCOUNT_FULL
Since Peter Zijlstra implemented the refcount_t API in v4.11, Elena Reshetova (with Hans Liljestrand and David Windsor) has been systematically replacing atomic_t reference counters with refcount_t. As of v4.13, there are now close to 125 conversions with many more to come. However, there were concerns over the performance characteristics of the refcount_t implementation from the maintainers of the net, mm, and block subsystems. In order to assuage these concerns and help the conversion progress continue, I added an “unchecked” refcount_t implementation (identical to the earlier atomic_t implementation) as the default, with the fully checked implementation now available under CONFIG_REFCOUNT_FULL. The plan is that for v4.14 and beyond, the kernel can grow per-architecture implementations of refcount_t that have performance characteristics on par with atomic_t (as done in grsecurity’s PAX_REFCOUNT).

CONFIG_FORTIFY_SOURCE
Daniel Micay created a version of glibc’s FORTIFY_SOURCE compile-time and run-time protection for finding overflows in the common string (e.g. strcpy, strcmp) and memory (e.g. memcpy, memcmp) functions. The idea is that since the compiler already knows the size of many of the buffer arguments used by these functions, it can already build in checks for buffer overflows. When all the sizes are known at compile time, this can actually allow the compiler to fail the build instead of continuing with a proven overflow. When only some of the sizes are known (e.g. destination size is known at compile-time, but source size is only known at run-time) run-time checks are added to catch any cases where an overflow might happen. Adding this found several places where minor leaks were happening, and Daniel and I chased down fixes for them.

One interesting note about this protection is that it only examines the whole object for its size (via __builtin_object_size(..., 0)). If you have a string within a structure, CONFIG_FORTIFY_SOURCE as currently implemented will make sure only that you can’t copy beyond the structure (but therefore, you can still overflow the string within the structure). The next step in enhancing this protection is to switch from 0 (above) to 1, which will use the closest surrounding subobject (e.g. the string). However, there are a lot of cases where the kernel intentionally copies across multiple structure fields, which means more fixes before this higher level can be enabled.

NULL-prefixed stack canary
Rik van Riel and Daniel Micay changed how the stack canary is defined on 64-bit systems to always make sure that the leading byte is zero. This provides a deterministic defense against overflowing string functions (e.g. strcpy), since they will either stop an overflowing read at the NULL byte, or be unable to write a NULL byte, thereby always triggering the canary check. This does reduce the entropy from 64 bits to 56 bits for overflow cases where NULL bytes can be written (e.g. memcpy), but the trade-off is worth it. (Besides, x86_64’s canary was 32 bits until recently.)

IPC refactoring
Partially in support of allowing IPC structure layouts to be randomized by the randstruct plugin, Manfred Spraul and I reorganized the internal layout of how IPC is tracked in the kernel. The resulting allocations are smaller and much easier to deal with, even if I initially missed a few needed container_of() uses.

randstruct gcc plugin
I ported grsecurity’s clever randstruct gcc plugin to upstream. This plugin allows structure layouts to be randomized on a per-build basis, providing a probabilistic defense against attacks that need to know the location of sensitive structure fields in kernel memory (which is most attacks). By moving things around in this fashion, attackers need to perform much more work to determine the resulting layout before they can mount a reliable attack.

Unfortunately, due to the timing of the development cycle, only the “manual” mode of randstruct landed in upstream (i.e. marking structures with __randomize_layout). v4.14 will also have the automatic mode enabled, which randomizes all structures that contain only function pointers.

A large number of fixes to support randstruct have been landing from v4.10 through v4.13, most of which were already identified and fixed by grsecurity, but many were novel, either in newly added drivers, as whitelisted cross-structure casts, refactorings (like IPC noted above), or in a corner case on ARM found during upstream testing.

lower ELF_ET_DYN_BASE
One of the issues identified from the Stack Clash set of vulnerabilities was that it was possible to collide stack memory with the highest portion of a PIE program’s text memory since the default ELF_ET_DYN_BASE (the lowest possible random position of a PIE executable in memory) was already so high in the memory layout (specifically, 2/3rds of the way through the address space). Fixing this required teaching the ELF loader how to load interpreters as shared objects in the mmap region instead of as a PIE executable (to avoid potentially colliding with the binary it was loading). As a result, the PIE default could be moved down to ET_EXEC (0x400000) on 32-bit, entirely avoiding the subset of Stack Clash attacks. 64-bit could be moved to just above the 32-bit address space (0x100000000), leaving the entire 32-bit region open for VMs to do 32-bit addressing, but late in the cycle it was discovered that Address Sanitizer couldn’t handle it moving. With most of the Stack Clash risk only applicable to 32-bit, fixing 64-bit has been deferred until there is a way to teach Address Sanitizer how to load itself as a shared object instead of as a PIE binary.

early device randomness
I noticed that early device randomness wasn’t actually getting added to the kernel entropy pools, so I fixed that to improve the effectiveness of the latent_entropy gcc plugin.

That’s it for now; please let me know if I missed anything. As a side note, I was rather alarmed to discover that due to all my trivial ReSTification formatting, and tiny FORTIFY_SOURCE and randstruct fixes, I made it into the most active 4.13 developers list (by patch count) at LWN with 76 patches: a whopping 0.6% of the cycle’s patches. ;)

Anyway, the v4.14 merge window is open!

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

05 September, 2017 11:01PM by kees

Gunnar Wolf

Made with Creative Commons: Over half translated, yay!

An image speaks for a thousand words...

And our translation project is worth several thousand words!
I am very happy and surprised to say we have surpassed the 50% mark of the Made with Creative Commons translation project. We have translated 666 out of 1210 strings (yay for 3v1l numbers)!
I have to really thank Weblate for hosting us and allowing for collaboration to happen there. And, of course, I have to thank the people that have jumped on board and helped the translation — we are over halfway there! Let's keep pushing!

Translation status

PS - If you want to join the project, just get in Weblate and start translating right away, either to Spanish or other languages! (Polish, Dutch and Norwegian Bokmål are on their way) If you translate into Spanish, *please* read and abide by the specific Spanish translation guidelines.

05 September, 2017 07:05PM by gwolf

Chris Lamb

Ask the dumb questions

In the same way that it is vital to ask the "smart questions", it is equally important to ask the dumb ones.

Whilst your milieu might be—say—comparing and contrasting the finer details of commission structures between bond brokers, if you aren't quite sure of the topic, be bold and confident enough to ask: I'm sorry, but what actually is a bond?

Don't consider this to be an all-or-nothing affair. After all, you might have at least some idea about what a bond is. Rather, adjust your tolerance to also ask for clarification when you are merely slightly unsure or uncertain about a concept, term or reference.


So why do this? Most obviously, you are learning something and expanding your knowledge about the world, but a clarification can avoid problems later if you were mistaken in your assumptions.

Not only that: asking "can you explain that?" or admitting "I don't follow…" is being honest with yourself, and the vulnerability you show when admitting your ignorance opens you up to others, leading to closer friendships and working relationships.

We clearly have a tendency to want to come across as knowledgeable or―perhaps more honestly―we don't want to appear dumb or uninformed as it will bruise our ego. But the precise opposite is true: nodding and muddling your way through conversations you only partly understand is unlikely to cultivate true feelings of self-respect and a healthy self-esteem.

Since adopting this approach I have found I've rarely derailed the conversation. In fact, speaking up not only encourages and flatters others that you care about their subject, it has invariably led to related matters which are not only more inclusive but actually novel and interesting to all present.


So push through the voice in your head and be that elephant in the room. After all, you might not be the only person thinking it. If it helps, try reframing it to yourself as helping others…

You'll be finding it effortless soon enough. Indeed, asking the dumb question is actually a positive feedback loop where each question you pose helps you ask others in the future. Excellence is not an act, but a habit.

05 September, 2017 11:51AM

Junichi Uekawa

It's already September.

It's already September. I haven't written much code last month. I wrote a CSV parser and felt a little depressed after reading rfc4180. None of my CSV files were in CRLF.

05 September, 2017 12:26AM by Junichi Uekawa

September 04, 2017

Jonathan Dowland

Sortpaper: 16:9 edition

sortpaper 16:9

Back in 2011 I stumbled across a file "sortpaper.png", which was a hand-crafted wallpaper I'd created some time in the early noughties to help me organise icons on my computer's Desktop. I published it at the time in the blog post sortpaper.

Since then I rediscovered the blog post, and since I was looking for an excuse to try out the Processing software, I wrote a Processing Sketch to re-create it, but with the size and colours parameterized: sortpaper.pde.txt. The thumbnail above links to an example 1920x1080 rendering.

04 September, 2017 09:34PM

Dominique Dumont

cme: some read-write backend features are being deprecated

Hello

Config::Model and cme read and write configuration data with a set of “backend” classes, like Config::Model::Backend::IniFile. These classes are managed by Config::Model::BackendMgr.

Well, that’s the simplified view. Actually, the backend manager can handle several different backends to read and write data: read backends are tried until one of them succeeds in reading configuration data. And the write backend can be different from the read backend, thus offering the possibility to migrate from one format to another. This feature came at the beginning of the project, back in 2005. This felt like a good idea to let users migrate from one data format to another.

More than 10 years later, this feature has never been used and is handled by a bunch of messy code that hampers further evolution of the backend classes.

So, without further ado, I’m going to deprecate the following features in order to simplify the backend manager:

  • The “custom” backend that can be easily replaced with more standard backend based on Config::Model::Backend::Any. This feature has been deprecated with Config::Model 2.107
  • The possibility to specify more than one backend. Soon, only the first read backend will be taken into account. This will simplify the declaration of backends. The “read_config” parameter, which is currently a list of backend specifications, will become a single backend specification. The command cme meta edit will handle the migration of existing models to the new scheme.
  • the “write_config” parameter will be removed.

Unless someone objects, actual removal of these features will be done in the next few months, after a quite short deprecation period.

All the best


Tagged: cme, config-model, Config::Model, configuration

04 September, 2017 05:39PM by dod

Alessio Treglia

MeteoSurf: a free App for the Mediterranean Sea

MeteoSurf is a free multi-source weather forecasting App designed to provide wind and wave conditions of the Mediterranean Sea. It is an application for smartphones and tablets, built as a Progressive Web App able to supply detailed and updated maps and data showing heights of sea waves (and other information) in the Central Mediterranean. It is mainly targeted at surfers and wind-surfers, but anyone who needs to know the sea conditions will benefit from this app.

Data can be displayed as animated graphical maps, or as detailed table data. The maps refer to the whole Mediterranean Sea, while the table data is able to provide specific information for any of the major surf spots in the Med.

As of the current version, MeteoSurf shows data collected from 3 different forecasting systems…

Read More… [by Fabio Marzocca]

04 September, 2017 10:14AM by Fabio Marzocca

Lior Kaplan

FOSScamp Syros 2017 – day 2

The morning started by taking the bus to Kini beach. After some time enjoying the water (which was still cold in the morning), we sat down to talk about the local Debian community and how we can help it grow. The main topic was localization (l10n), but we soon started to check other options. I reminded them that l10n isn't only translation, and we also talked about dictionaries for spell checking, fonts and local software which might be relevant (e.g. hdate for the Jewish/Hebrew calendar or Jcal for the Jalali calendar). For example, it seems that regular Latin fonts are missing two Albanian characters.

We also talked about how to use Open Labs to better work together wearing two hats – as members of the local FOSS community and also as members of various open source projects (not forgetting open content / data projects as well). So people can cooperate on the local level, on the international level, or mix the two (using the other project's international resources). In short: connections, connections, connections.

Another aspect I tried to push the guys toward is cooperating with local companies on open source, whether in the local market, the municipal or the general government. Such cooperation can take many forms: sponsoring events, giving resources (computers, physical space or employees' time) and of course creating more jobs for open source people, which in turn will support more people doing open source for a longer period.

One of the guys thought the local community would benefit from a mirror server, but that also requires looking at the network topology of Albania to make sure it makes sense to invest in one (resources and effort).

We continued to how best to contribute to open source: mostly that Debian, although great, isn't always the best target, and they should always try to work with the relevant upstream. It's better to translate GNOME upstream than to send the Debian maintainer the translation to be included in the package. That shortcut can work for something urgent, like a really problematic typo, or something that, unless done before the release, would require a long, long wait (e.g. the next Debian release). I gave an example: for important RTL bugs in LibreOffice I asked Rene Engelhard to include the patch instead of waiting for the next release and its inclusion in Debian.

When I started the conversation I mentioned that we had 33% females out of the 12 participants. And that's considered good compared to other computer/technical events, especially open source ones. To my surprise the guys told me that in the Open Labs hackerspace the situation is the opposite: they have more female members than male (14 female to 12 male). Also, in their last OSCAL event they had 220 women and 100 men. I think there are grounds to learn what happens there, as the gals do something damn right over there. Maybe Outreachy rules for Albania should be different (:

Later that day I did another session with Redon Skikuli to be more practical, so I started to search for an Albanian dictionary for spell checking, found an old one and asked Redon to check its current status with the guy. And also to check info about such technical stuff with the Social Sciences and Albanological Section of the Academy of Sciences of Albania, which is officially the regulator for Albanian.

In parallel I started to check how to include the dictionary in LibreOffice, and asked Rene Engelhard to enable the Albanian language pack in Debian (as upstream already provides one). While checking the dictionaries I took the opportunity to update the Hebrew one. It took me a little longer as I needed to get the rust off my LibreOffice repositories (dictionaries live in a different repository) and also my gerrit setup. But in the end: https://gerrit.libreoffice.org/#/c/41864/

With today's talks and starting to combine Debian and LibreOffice work (although much of it was talking), I felt like the right person in the right place. I'm happy to be here and to contribute to two projects in parallel (:


Filed under: Debian GNU/Linux, i18n & l10n, LibreOffice

04 September, 2017 09:44AM by Kaplan

Daniel Pocock

Spyware Dolls and Intel's vPro

Back in February, it was reported that a "smart" doll with wireless capabilities could be used to remotely spy on children and was banned for breaching German laws on surveillance devices disguised as another object.

Would you trust this doll?

For a number of years now there has been growing concern that the management technologies in recent Intel CPUs (ME, AMT and vPro) also conceal capabilities for spying, either due to design flaws (no software is perfect) or backdoors deliberately installed for US spy agencies, as revealed by Edward Snowden. In a 2014 interview, Intel's CEO offered to answer any question, except this one.

The LibreBoot project provides a more comprehensive and technical analysis of the issue, summarized in the statement "the libreboot project recommends avoiding all modern Intel hardware. If you have an Intel based system affected by the problems described below, then you should get rid of it as soon as possible" - eerily similar to the official advice German authorities are giving to victims of Cayla the doll.

All those amateur psychiatrists suggesting LibreBoot developers suffer from symptoms of schizophrenia have had to shut their mouths since May when Intel confirmed a design flaw (or NSA backdoor) in every modern CPU had become known to hackers.

Bill Gates famously started out with the mission to put a computer on every desk and in every home. With more than 80% of new laptops based on an Intel CPU with these hidden capabilities, can you imagine the NSA would not have wanted to come along for the ride?

Four questions everybody should be asking

  • If existing laws can already be applied to Cayla the doll, why haven't they been used to alert owners of devices containing Intel's vPro?
  • Are exploits of these backdoors (either Cayla or vPro) only feasible on a targeted basis, or do the intelligence agencies harvest data from these backdoors on a wholesale level, keeping a mirror image of every laptop owner's hard disk in one of their data centers, just as they already do with phone and Internet records?
  • How long will it be before every fast food or coffee chain with a "free" wifi service starts dipping into the data exposed by these vulnerabilities as part of their customer profiling initiatives?
  • Since Intel's admissions in May, has anybody seen any evidence that anything is changing, either in what vendors are offering or in terms of how companies and governments outside the US buy technology?

Share your thoughts

This issue was recently raised on the LibrePlanet mailing list. Please feel free to join the list and click here to reply on the thread.

04 September, 2017 06:09AM by Daniel.Pocock

September 03, 2017

Lior Kaplan

FOSScamp Syros 2017 – day 1

During Debconf17 I was asked by Daniel Pocock if I could attend FOSScamp Syros to help with Debian's l10n in the Balkans. I said I would be happy to, although my visit would be short (2.5 days) due to previous plans. The main idea of the camp is to have FOSS people meet for about 1 week near a beach. So it's sun, water and free software. This year it takes place in Syros, Greece.

After taking the morning ferry, I met with the guys at noon. I didn't know how it would be, as it was my first time with this group/meeting, but they were very nice and welcoming. 10 minutes after my arrival I found myself sitting with two of the female attendees, starting to work on the Albanian (sq) translation of the Debian Installer.

It took me a few minutes to find where to check out the current level1 files, as I thought they weren't in SVN anymore, but I ended up learning the PO files are the only part of the installer still in SVN. As the girls were quick with the assigned level1 sublevels, I started to look for the level2 and level3 files, and it was annoying to have the POT files very accessible but no links to the relevant git repositories. I do want to have all the relevant links in one central place, so people who want to help with translation can do that.

While some of the team members just used a text editor to edit the files, I suggested using either poedit or gtranslator, both of which I used a few years ago. Yaron Shahrabani also recommended Virtaal to me, but after trying it for a while I didn't like it (except its great feature of showing the diff for fuzzy messages). For the few people who also have Windows on their machine, both poedit and Virtaal have Windows binaries for download. So you don't have to have Linux in order to help with translations.

In parallel, I used the “free” time to work on the Hebrew translation of level1, as it had been a while since either I or Omer Zak worked on it. Quite soon the guys started to send me the files for review, and I did find some errors using diff, especially when not everyone uses a PO editor. I also missed a few strings during the review, which got fixed later on by Christian Perrier. Teamwork indeed (:

I found it interesting to see the team's reactions and problems working with the PO files; most projects now use some system (e.g. Pootle) for online web translation, which saves some of the headache, but also prevents doing some review and quality checking before submitting the files. It's a good idea to explore this option for Debian as well.

A tip for those who do want to work with PO files: either use git's diff features or use colordiff to check your changes (note that less will require the -R parameter to keep the color).
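
For example (the file names here are placeholders):

colordiff sq_old.po sq_new.po | less -R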

Although I met the guys only at noon, the day was very fruitful for Debian Installer l10n:

  • Albanian (sq) level1 – from 78% to 82% (Eva Vranici, Silva Arapi)
  • Albanian (sq) level2 – from 20% to 24% (Nafie Shehu)
  • Hebrew (he) level1 – from 96% to 97% (me)
  • Greek (el) level1 – from 96% to 97% (Sotirios Vrachas)

Some files are still work in progress and will be completed tomorrow. My goal is to have Albanian at 100% during the camp and ready for the next d-i alpha.

I must admit that I remember d-i having many more strings as part of the 3 levels, especially levels 2+3, which were huge (e.g. the ISO codes).

Besides all the work and FOSS-related conversations, I found a great group who welcomed me quickly, made me feel comfortable, and taught me a thing or two about Greece and Syros specifically.

TIP: try the dark chocolate with red hot chili pepper in the ice cream shop.


Filed under: Debian GNU/Linux, i18n & l10n

03 September, 2017 07:09PM by Kaplan

Thorsten Alteholz

My Debian Activities in August 2017

FTP assistant

This month I accepted 217 packages and rejected 16 uploads. Though this might seem to be a low number this month, I am very pleased about the total number of packages that have been accepted: 558.

Debian LTS

This was my thirty-eighth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 20.25h. During that time I did LTS uploads of:

  • [DLA 1055-1] libgd2 security update for one CVE
  • [DLA 1060-1] libxml2 security update for two CVEs
  • [DLA 1062-1] curl security update for one CVE
  • [DLA 1063-1] extplorer security update for one CVE
  • [DLA 1065-1] fontforge security update for 8 CVEs
  • [DLA 1082-1] graphicsmagick security update for 8 CVEs

I also did the upload and sent the DLA for [DLA 1061-1] newsbeuter security update

Last but not least I also spent some time for frontdesk duties.

Other stuff

As announced last month I finished uploading all dependencies of glewlwyd and now we have an oauth2 server available in Debian. This month I am trying to really use it and will tell you about my experiences.

As the severity of the gcc-7 bugs has been raised, I took care of: #853501, #853304, #853305, #853306, #853307, #853308 and #871088

I also uploaded a new version of duktape and now try to also provide a library that can be used in other packages.

Last but not least my DOPOM of this month has been oysttyer. Actually it is a new package, but as ttytter has been abandoned upstream, this is the replacement. It is a fork, so you only need to get a new authorization key and can then use it just as you used ttytter before.

03 September, 2017 06:10PM by alteholz

September 02, 2017

David Bremner

Indexing Debian's buildinfo

Introduction

Debian is currently collecting buildinfo files, but they are not very conveniently searchable. Eventually Chris Lamb's buildinfo.debian.net may solve this problem, but in the meantime, I decided to see how practical indexing the full set of buildinfo files is with sqlite.

Hack

  1. First you need a copy of the buildinfo files. This is currently about 2.6G, and unfortunately you need to be a Debian developer to fetch it.

     $ rsync -avz mirror.ftp-master.debian.org:/srv/ftp-master.debian.org/buildinfo .
    
  2. Indexing takes about 15 minutes on my 5-year-old machine (with an SSD). If you index all dependencies, you get a database of about 4G, probably because of my natural genius for database design. Restricting to debhelper and dh-elpa, it's about 17M. (A sketch of what such an indexer might look like follows after this list.)

     $ python3 index.py
    

    You need at least python3-debian installed

  3. Now you can do queries like

     $ sqlite3 depends.sqlite "select * from depends where depend='dh-elpa' and depend_version<='0106'"
    

    where 0106 is some ad-hoc normalization of 1.6
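
index.py itself is not shown here, but a minimal sketch of what such an indexer might look like is below. This is a hypothetical reconstruction, not the actual script: the depends(source, version, depend, depend_version) schema is inferred from the query above, and the normalize() helper implements the zero-padding hinted at by the "0106" example.

#!/usr/bin/python3
# Hypothetical sketch of an index.py-like indexer (not the actual
# script): walk the rsynced buildinfo tree, parse each file with
# python3-debian, and record Installed-Build-Depends entries in sqlite.
import glob
import re
import sqlite3

from debian import deb822

def normalize(version):
    # Ad-hoc zero-padding so that string comparison roughly follows
    # version order, e.g. "1.6" -> "0106". Fragile, as noted below.
    return ''.join('%02d' % int(p) for p in re.findall(r'\d+', version)[:2])

db = sqlite3.connect('depends.sqlite')
db.execute('CREATE TABLE IF NOT EXISTS depends '
           '(source TEXT, version TEXT, depend TEXT, depend_version TEXT)')

for path in glob.glob('buildinfo/**/*.buildinfo', recursive=True):
    with open(path) as f:
        info = deb822.Deb822(f)
    for dep in info.get('Installed-Build-Depends', '').split(','):
        m = re.match(r'\s*(\S+) \(= (\S+)\)', dep)
        if m:
            db.execute('INSERT INTO depends VALUES (?, ?, ?, ?)',
                       (info.get('Source'), info.get('Version'),
                        m.group(1), normalize(m.group(2))))
db.commit()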

Conclusions

The version number hackery is pretty fragile, but good enough for my current purposes. A more serious limitation is that I don't currently have a nice (and you see how generous my definition of nice is) way of limiting to builds currently available e.g. in Debian unstable.

02 September, 2017 08:41PM

Antoine Beaupré

My free software activities, August 2017

Debian Long Term Support (LTS)

This is my monthly Debian LTS report. This month I worked on a few major packages that took a long time instead of multiple smaller issues. Affected packages were Mercurial, libdbd-mysql-perl and Ruby.

Mercurial updates

Mercurial was vulnerable to two CVEs: CVE-2017-1000116 (command injection on clients through malicious ssh URLs) and CVE-2017-1000115 (path traversal via symlink). The former is an issue that actually affects many other similar software like Git (CVE-2017-1000117), Subversion (CVE-2017-9800) and even CVS (CVE-2017-12836). The latter symlink issue is a distinct issue that came up during an internal audit.

The fix, shipped as DLA-1072-1, involved a rather difficult backport, especially because the Mercurial test suite takes a long time to complete. This reminded me of the virtues of DEB_BUILD_OPTIONS=parallel=4, which sped up the builds considerably. I also discovered that the Wheezy build chain doesn't support sbuild's --source-only-changes flag which I had hardcoded in my sbuild.conf file. This seems to be simply because sbuild passes --build=source to dpkg-buildpackage, an option that is supported only in jessie or later.

libdbd-mysql-perl

I have worked on fixing two issues with the libdbd-mysql-perl package, CVE-2017-10788 and CVE-2017-10789, which resulted in the DLA-1079-1 upload. Behind this mysteriously named package sits a critical piece of infrastructure, namely the mysql commandline client, which is probably used and abused by hundreds if not thousands of home-made scripts, but also all of Perl's MySQL support, which is probably used by an even larger base of software.

Through the Debian bug reports (Debian bug #866818 and Debian bug #866821), I have learned that the patches existed in the upstream tracker but were either ignored or even reverted in the latest 4.043 upstream release. It turns out that there are talks of forking that library because of maintainership issues. It blows my mind that such an important part of MySQL is basically unmaintained.

I ended up backporting the upstream patches, which was also somewhat difficult because of the long-standing issues with SSL support in MySQL. The backport there was particularly hard to test, as you need to run that test suite by hand, twice: once with a server configured with a (valid!) SSL certificate and once without (!). I'm wondering how much time it is really worth spending on trying to fix SSL in MySQL, however. It has been badly broken forever, and while the patch is an improvement, I would still never trust SSL transports in MySQL over an untrusted network. The few people I know who use such transports wrap their connections in a simpler stunnel instead.

The other issue was easier to fix so I submitted a pull request upstream to make sure that work isn't lost, although it is not clear what the future of that patch (or project!) will be at this point.

Rubygems

I also worked on the rubygems issues, which, thanks to the "vendoring" practice of the Ruby community, also affect the ruby1.9 package. 4 distinct CVEs were triaged here (CVE-2017-0899, CVE-2017-0900, CVE-2017-0901 and CVE-2017-0902) and I determined the latter issue didn't affect wheezy, as rubygems doesn't do its own DNS resolution there (later versions look up SRV records).

This is another package where the test suite takes a long time to run. Worse, the packages in Wheezy actually fail to build from source: the test suites just fail at various steps, particularly because of "dh key too small" errors for Rubygems, but also other errors for Ruby. I also had trouble backporting one test, which I had to simply skip for Rubygems. I uploaded and announced test packages and hopefully I'll be able to complete this work soon, although I would certainly appreciate any help on this...

Triage

I took a look at the sox, libvorbis and exiv2 issues. None had fixes available. sox and exiv2 were basically a list of fuzzing issues, which are often minor or at least of unknown severity. Those would have required a significant amount of work and I figured I would prioritize other work first.

I also triaged CVE-2017-7506, which doesn't seem to affect the spice package in wheezy, after doing a fairly thorough audit of the code. The vulnerability is specifically bound to the reds_on_main_agent_monitors_config function, which is simply not present in our older version. A hostile message would fall through the code and not provoke memory allocation or out of bounds access, so I simply marked the wheezy version as not-affected, something which usually happens during the original triage but can also happen during the actual patching work, as in this case.

Other free software work

This describes the volunteer work I do on various free software projects. This month, again, my internal reports show that I spent about the same time on volunteer and paid time, but this is probably a wrong estimate because I spent a lot of time at Debconf which I didn't clock in...

Debconf

So I participated in the 17th Debian Conference in Montreal. It was great to see (and make!) so many friends from all over the world in person again, and I was happy to work on specific issues together with other Debian developers. I am especially thankful to David Bremner for fixing the syncing of the flagged tag when added to new messages (patch series). This allows me to easily sync the one tag (inbox) that is not statically assigned during notmuch new, by using flagged as a synchronization tool. This allows me to use notmuch more easily across multiple machines without having to sync all tags with dump/restore or using muchsync which wasn't working for me (although a new release came out which may fix my issues). The magic incantation looks something like this:

notmuch tag -inbox tag:inbox and not tag:flagged
notmuch tag +inbox not tag:inbox and tag:flagged

However, most of my time in the first week (Debcamp) was spent trying to complete the networking setup: configuring switches, setting up wiring and so on. I also configured an apt-cacher-ng proxy to serve packages to attendees during the conference. I set it up with Avahi so that clients are configured automatically, which led me to discover (and fix) Debian bug #870321, although there are more issues with the autodiscovery mechanism... I spent extra time documenting the (somewhat simple) configuration of such a server in the Debian wiki because it was not the first time I had researched that procedure...

I somehow thought this was a great time to upgrade my laptop to stretch. Normally, I keep that device running stable because I don't use it often and I don't want to have major traumatizing upgrades every time I leave with it on a trip. But this time was special: there were literally hundreds of Debian developers to help me out if there was trouble. And there was, of course, trouble as it turns out! I had problems with the fonts on my display, because, well, I had suspended (twice) my laptop during the install. The fix was simply to flush the fontconfig cache, and I tried to document this in the fonts wiki page and my upgrades page.

I also gave a short training called Debian packaging 101 which was pretty successful. Like the short presentation I made at the last Montreal BSP, the workshop was based on my quick debian development guide. I'm thinking of expanding this to a larger audience with a "102" course that would discuss more complex packaging problems. But my secret plan (well, secret until now I guess) is to make packaging procedures more uniform in Debian by training new Debian packagers using that same training for the next 2 decades. But I will probably start by just trying to do this again at the next Debconf, if I can attend.

Debian uploads

I also sponsored two packages during Debconf: one was a "scratch an itch" upload (elpa-ivy) which I requested (Debian bug #863216) as part of a larger effort to ship the Emacs elisp packages as Debian packages. The other was an upload of diceware to build the documentation in a separate package and fix other issues I have found in the package during a review.

I also uploaded a bunch of other fixes to the Debian archive.

Signing keys rotation

I also started the process of moving my main OpenPGP certification key by adding a signing subkey. The subkey is stored in a cryptographic token so I can sign things on more than one machine without storing that critical key on all those devices physically.

Unfortunately, this meant that I needed to do some shenanigans when I want to sign content in my Debian work, because the new subkey takes time to propagate to the Debian archive. For example, I have to specify the primary key with a "bang" when signing packages (debsign -k '792152527B75921E!' ...) or use inline signatures in emails sent for security announcements (since that trick doesn't work in Mutt or Notmuch). I tried to figure out how to better coordinate this next time by reading up on the keyring.debian.org documentation, but there is no fixed date for key changes on the rsync interface. There are "monthly changes", so one's best bet is to look for the last change in their git repository.

GitLab.com and LFS migration

I finally turned off my src.anarc.at git repository service by moving the remaining repos to GitLab. Unfortunately, GitLab removed support for git-annex recently, so I had to migrate my repositories to Git-LFS, which was an interesting experience. LFS is pretty easy to use, definitely simpler than git-annex. It also seems to be a good match for the use-case at hand, which is to store large files (videos, namely) as part of slides for presentations.

It turns out that their migration guide could have been made much simpler. I tried to submit those changes to the documentation but couldn't fork the GitLab EE project to make a patch, so I just documented the issue in the original MR for now. While I was there I filed a feature request to add a new reference shortcut (GL-NNN) after noticing a similar token used on GitHub. This would be a useful addition because I often have numbering conflicts between Debian BTS bug numbers and GitLab issues in packages I maintain there. In particular, I have problems using GitLab issue numbers in Monkeysign, because commit logs end up in Debian changelogs and will be detected by the Debian infrastructure even though those are GitLab bug numbers. Using such a shortcut would avoid detection and such a conflict.

Numpy-stats

I wrote a small tool to extract numeric statistics from a given file. I often do ad-hoc benchmarks where I store a bunch of numbers in a file and then try to make averages and so on. As an exercise in learning NumPy, I figured I would write such a simple tool, called numpy-stats, which probably sounds naive to seasoned Python scientists.

My incentive was that I was trying to figure out what was the distribution of password length in a given password generator scheme. So I wrote this simple script:

for i in $(seq 10000) ; do
    shuf -n4 /usr/share/dict/words | tr -d '\n' | wc -c
done > lengths

And then fed that data into the tool:

$ numpy-stats lengths 
{
  "max": 60, 
  "mean": 33.883293722913464, 
  "median": 34.0, 
  "min": 14, 
  "size": 143060, 
  "std": 5.101490225062775
}

I am surprised that there isn't such a tool already: hopefully I am wrong and will just be pointed towards the better alternative in the comments here!
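
In the meantime, here is a minimal sketch of what such a tool might look like. This is a hypothetical reimplementation, not the actual numpy-stats code, and it assumes one number per line on input:

#!/usr/bin/python3
# Hypothetical numpy-stats-like sketch (not the actual tool): read one
# number per line from the file given as argument and print summary
# statistics as JSON, similar to the output shown above.
import json
import sys

import numpy as np

data = np.loadtxt(sys.argv[1])
print(json.dumps({
    "max": float(data.max()),
    "mean": float(data.mean()),
    "median": float(np.median(data)),
    "min": float(data.min()),
    "size": int(data.size),
    "std": float(data.std()),
}, indent=2, sort_keys=True))

NumPy does all the heavy lifting here; the tool itself is mostly input parsing and JSON formatting.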

Safe Eyes

I added screensaver support to the new SafeEyes project, which I am considering as a replacement for the workrave project I have been using for years. I really like how the interruptions basically block the whole screen: way more effective than only blocking the keyboard, because all potential distractions go away.

One feature that is missing is keystroke and mouse movement counting, and of course an official Debian package, although the latter would be easy to fix because upstream already has an unofficial build. I am thinking of writing my own little tool to count keystrokes, since the overlap between SafeEyes and such a counter isn't strictly necessary. This is something that workrave does, but there are "idle time" extensions in Xorg that do not need to count keystrokes. There are already certain tools to count input events, but none seem to do what I want (most of them are basically keyloggers). It would be an interesting test to see if it's possible to write something that would work both for Xorg and Wayland at the same time. Unfortunately, preliminary research shows that:

  1. in Xorg, the only way to implement this is to sniff all events, i.e. to implement a keylogger

  2. in Wayland, this is completely unsupported. It seems some compositors could implement such a counter, but that would make it compositor-specific or, in other words, unportable

So there is little hope here, which brings to my mind "painmeter" as an appropriate name for this future programming nightmare.

Ansible

I sent my first contribution to the ansible project with a small documentation fix. I had an eye opener recently when I discovered a GitLab ansible prototype that would manipulate GitLab settings. When I first discovered Ansible, I was frustrated by the YAML/Jinja DSL: it felt silly to write all this code in YAML when you are a Python developer. It was great to see reasonably well-written Python code that would do things and delegate the metadata storage (and only that!) to YAML, as opposed to using YAML as a DSL.

So I figured I would look at the Ansible documentation on how this works, but unfortunately, the Ansible documentation is severely lacking in this area. There are broken links (I only fixed one page) and missing pieces. For example, the developing plugins page doesn't explain how to program a plugin at all.

I was told on IRC that: "documentation around developing plugins is sparse in general. the code is the best documentation that exists (right now)". I didn't get a reply when asking which code in particular could provide good examples either. In comparison, Puppet has excellent documentation on how to create custom types, functions and facts. That is definitely a turn-off for a new contributor, but at least my pull request was merged in and I can only hope that seasoned Ansible contributors expand on this critical piece of documentation eventually.

Misc

As you can see, I'm all over the place, as usual. GitHub tells me I "Opened 13 other pull requests in 11 repositories" (emphasis mine), which I guess means on top of the "9 commits in 5 repositories" mentioned earlier. My profile probably tells a more detailed story than what would be useful to mention here. I should also mention how difficult it is to write those reports: I basically do a combination of looking into my GitHub and GitLab profiles, the last 30 days of emails (!) and filesystem changes (!!). In no particular order, a list of changes which may be of interest:

  • font-large (and its alias, font-small): shortcut to send the right escape sequence to rxvt so it changes its font
  • fix-acer: short script to hardcode the modeline (you remember those?!) for my screen which has a broken EDID pin (so autodetection fails, yay Xorg log files...)
  • ikiwiki-pandoc-quickie: fake ikiwiki renderer that (ab)uses pandoc to generate an HTML file with the right stylesheet to preview Markdown as it may look in this blog (the basic template is still missing)
  • git-annex-transfer: a command I've often been missing in git-annex, which is a way to transfer files between remotes without having to copy them locally (upstream feature request)
  • I linked the graphics of the Debian archive software architecture in the Debian wiki in the hope more people notice it.
  • I did some tweaks on my Taffybar to introduce a battery meter, hoping to also get temperature sensors, which mostly failed. There's a pending pull request that may bring some sense into this, hopefully.
  • I made two small patches in Monkeysign to fix gpg.conf handling and multiple email output, a dumb bug I cannot believe no one noticed or reported until now. Thanks Valerie for the bug report! The upload of this in Debian is pending a review from the release team.

02 September, 2017 08:16PM

Wouter Verhelst

Playing with Moose and FFmpeg

As I've blogged before, I've been on and off working on SReview, a video review system. Development over the past year has been mostly driven by the need to have something up and running, first for FOSDEM 2017 and then for DebConf17, and so I've cut corners left and right, which made the system, while functional, not quite entirely perfect everywhere. For instance, the backend scripts were done in ad-hoc perl, each reinventing their own wheel. Doing so made it easier for me to experiment with things and figure out where I want them to go, without immediately creating a lot of baggage that is not necessarily something I want to be stuck with. This flexibility has already paid off, in that I've redone the state machine between FOSDEM and DebConf17, and all it needed was updating a few SQL statements here and there. Well, and adding a few of them, too.

It was always the intent to replace most of the ad-hoc perl with something better, however, once the time was ripe. One place where historical baggage is not so much of a problem, and where in fact abstracting away the complexity would now be an asset, is in the area of FFmpeg command lines. Currently, these are built by simple string expansion. For example, we do something like this (shortened for brevity):

system("ffmpeg -y -i $outputdir/$slug.ts -pass 1 -passlogfile ...");

inside an environment where the $outputdir and $slug variables are set in a perl environment. That works, but it has some downsides; e.g., adding or removing options based on which codecs we're using is not so easy. It would be much more flexible if the command lines were generated dynamically based on requested output bandwidth and codecs, rather than hardcoded in the file. Case in point: currently there are multiple versions of some of the backend scripts that differ only in details, mostly the chosen codec on the ffmpeg command line. Obviously this is suboptimal.

Instead, we want a way where video file formats can be autodetected, so that I can just say "create a file that uses encoder etc settings of this other file here". In addition, we also want a way where we can say "create a file that uses encoder etc settings of this other file here, except for these one or two options that I want to fine-tune manually". When I first thought about doing that about a year ago, that seemed complicated and not worth it—or at least not to that extent.

Enter Moose.

The Moose OO system for Perl 5 is an interesting way to do object orientation in Perl. I knew Perl supports OO, and I had heard about Moose, but never had looked into it, mostly because the standard perl OO features were "good enough". Until now.

Moose has a concept of adding 'attributes' to objects. Attributes can be set at object construction time, or can be accessed later on by way of getter/setter functions, or even simply functions named after the attribute itself (the default). For more complicated attributes, where the value may not be known until some time after the object has been created, Moose borrows the concept of "lazy" variables from Perl 6:

package Object;

use Moose;

has 'time' => (
    is => 'rw',
    builder => 'read_time',
    lazy => 1,
);

sub read_time {
    return localtime();
}

The above object has an attribute 'time', which will not have a value initially. However, upon first read, the localtime() function will be called, the result is cached, and then (on this and all further calls of the same function) the cached result will be returned. In addition, since the attribute is read/write, the time can also be written to. In that case, any cached value that may exist will be overwritten, and if no cached value exists yet, the read_time function will never be called. (It is also possible to clear values if need be, so that the function will be called again.)

We use this with the following pattern:

package SReview::Video;

use Moose;
use JSON;

has 'url' => (
    is => 'rw',
);

has 'video_codec' => (
    is => 'rw',
    builder => '_probe_videocodec',
    lazy => 1,
);

has 'videodata' => (
    is => 'bare',
    reader => '_get_videodata',
    builder => '_probe_videodata',
    lazy => 1,
);

has 'probedata' => (
    is => 'bare',
    reader => '_get_probedata',
    builder => '_probe',
    lazy => 1,
);

sub _probe_videocodec {
    my $self = shift;
    return $self->_get_videodata->{codec_name};
}

sub _probe_videodata {
    my $self = shift;
    if(!exists($self->_get_probedata->{streams})) {
        return {};
    }
    foreach my $stream(@{$self->_get_probedata->{streams}}) {
        if($stream->{codec_type} eq "video") {
            return $stream;
        }
    }
    return {};
}

sub _probe {
    my $self = shift;

    open JSON, "ffprobe -print_format json -show_format -show_streams " . $self->url . "|";
    my $json = "";
    while(<JSON>) {
        $json .= $_;
    }
    close JSON;
    return decode_json($json);
}

The videodata and probedata attributes are internal-use only attributes, and are therefore of the 'bare' type; that is, they can be neither read from nor written to directly. However, we do add 'reader' functions that can be used from inside the object, so that the object itself can access them. These reader functions are generated, so they're not part of the object source. The probedata attribute's builder simply calls ffprobe with the right command-line arguments to retrieve data in JSON format, and then decodes that JSON output.

Since the parsed JSON data contains an array with (at least) two streams, one for video and one for audio, and since the ordering of those streams depends on the file and is therefore not guaranteed, we have to loop over them. Since doing so in each and every attribute we might be interested in would be tedious, we add a videodata attribute that just returns the data for the first video stream found (the actual source also contains a similar one for audio streams).

So, if you create an SReview::Video object, pass it a filename in the url attribute, and then immediately run print $object->video_codec, the object will

  • call ffprobe, and cache the (decoded) output for further use
  • from that, extract the video stream data, and cache that for further use
  • from that, extract the name of the used codec, cache it, and then return that name to the caller.

However, if the caller first calls $object->video_codec('h264'), then the ffprobe call and most of the caching will be skipped, and instead h264 will be returned as the video codec name.

Okay, so with a reasonably small amount of code, we now have a bunch of attributes that have defaults based on actual files but can be overwritten when necessary. Useful, right? Well, you might also want to care about the fact that sometimes you want to generate a video file that uses the same codec settings of this other file here. That's easy. First, we add another attribute:

has 'reference' => (
    is => 'ro',
    isa => 'SReview::Video',
    predicate => 'has_reference'
);

which we then use in the _probe method like so:

sub _probe {
    my $self = shift;

    if($self->has_reference) {
        return $self->reference->_get_probedata;
    }
    # original code remains here
}

With that, we can create an object like so:

my $video = SReview::Video->new(url => 'file.ts');
my $generated = SReview::Video->new(url => 'file2.ts', reference => $video);

Now if we ask the $generated object what the value of its video_codec setting is, without telling it ourselves first, it will use the $video object's probed data.

That only misses generating the ffmpeg command line, but that's all fairly straightforward and therefore left as an exercise to the reader. Or you can cheat, and look it up.

02 September, 2017 04:12PM

Daniel Silverstone

F/LOSS activity, August 2017

Shockingly enough, my focus started out on Gitano once more. We managed a 1.1 release of Gitano during the Debian conference's "camp" which occurs in the week before the conference. This was a joint effort of myself, Richard Maw, and Richard Ipsum. I have to take my hat off to Richard Maw, because without his dedication to features, 1.1 would lack some of the ruleset support for basic readers/writers that Richard Ipsum proposed, and frankly 1.1 would be a weaker release without it.

Because of the debconf situation, we didn't have a Gitano developer day which, while sad, didn't slow us down much...

  • Once again, we reviewed our current task state
  • I submitted a series which fixed our test suite for Git 2.13 which was an FTBFS bug submitted against the Debian package for Gitano. Richard Maw reviewed and merged it.
  • Richard Maw supplied a series to add testing for dangling HEAD syndrome. I reviewed and merged that.
  • Richard Maw submitted a patch to improve the auditability of the 'as' command and I reviewed and merged that.
  • Richard Ipsum submitted a patch to add reader/writer configs to ease simple project management in Gitano. I'm not proud to say that I was too busy to look at this and ended up saying it was unlikely it'd get in. Richard Maw, quite rightly, took umbrage at that and worked on the patch, eventually submitting a new series with tests which I then felt obliged to review and I merged the series eventually.

    This is an excellent example of how just because one person is too busy doesn't mean that a good idea should be dropped, and I am grateful to Richard Maw for getting this work mergeable and effectively guilt-tripping me into reviewing/merging it. This is a learnable moment for me and I hope to do better in the future.

  • During all that time, I was working on a plugin to support git-multimail.py in Gitano. This work ranged across hooks and caused me to spend a long time thinking about the semantics of configuration overriding etc. Fortunately I got there in the end, and with a massive review effort from Richard Maw, we got it merged into Gitano.
  • Finally I submitted a patch which caused the tests we run in Gitano to run from an 'install' directory which ensures that we catch bugs such as those which happened in earlier series where we missed out rules files for installation etc. Richard Maw reviewed and merged that.
  • And then we released the new version of Gitano and subsidiary libraries.

    There was Luxio version 13 which switched us to readdir() from readdir_r() thanks to Richard Ipsum; Gall 1.3 which contained a bunch of build cleanups, and also a revparse_single() implementation in the C code to speed things up thanks to Richard Maw; Supple 1.0.8 which improved wrapper environment cleanups thanks to Richard Ipsum, allowed baking of paths in which means Nix is easier to support (again thanks to Richard Ipsum), fixed setuid handling so that Nix is easier to support (guess what? Richard Ipsum again); Lace 1.4 which now verifies definition names in allow/deny/anyof/allof and also produces better error messages from nested includes.

    And, of course, Gitano 1.1 whose changes were somewhat numerous and so you are invited to read them in the Gitano NEWS file for the release.

Not Gitano

Of course, not everything I did in August was Gitano related. In fact, once I had completed the 1.1 release and uploaded everything to Debian, I decided that I was going to take a break from Gitano until the next developer day. (There are even some patch series still unread on the mailing list which I will get to when I start the developer day.)

I have long been interested in STM32 microcontrollers, using them in a variety of projects including the Entropy Key which some of you may remember. Jorge Aparicio was working on Cortex-M3 support (among other microcontrollers) in Rust; he then extended that to include a realtime framework called RTFM, and from there I got interested in what I might be able to do with Rust on STM32. I noticed that there weren't any pure Rust implementations of the USB device stack which would be necessary in order to make a device, programmed in Rust, appear on a USB port for a computer to control/use. This tweaked my interest.

As many of my readers are aware, I am very bad at doing things without some external motivation. As such, I then immediately offered to give a talk at a conference which should be happening in November, just so that I'd be forced to get on with learning and implementing the stack. I have been chronicling my work on this blog, and you're encouraged to go back and read those posts if you have similar interests. I'm sure that as my work progresses, I'll be doing more and more of that and less of Gitano, for at least the next two months.

To bring that into context as F/LOSS work, I did end up submitting some patches to Jorge's STM32F103xx repository to support a couple more clock configuration entries so that USB and ADCs can be set up cleanly. So at least there's that.

02 September, 2017 10:00AM by Daniel Silverstone

Clint Adams

Litigants

Bronwyn’s mom got hit by a semi. She was on the passenger side of the car, the side of impact, and she did not rebound with extreme resilience. The family sued the trucking company and came away with a settlement of roughly $10 million. The lawyers took $6.5 million of that: quite a deal.

Bronwyn learned two things from this, and neither one was about Christopher Lloyd.

Posted on 2017-09-02
Tags: mintings

02 September, 2017 01:21AM

September 01, 2017

Good night moon and spoon balloon

“Hello,” said Adrian, but Adrian was lying.

“My name is Adrian,” said Adrian, but Adrian was lying.

Posted on 2017-09-01
Tags: bgs

01 September, 2017 11:17PM