April 23, 2018


Vincent Bernat

A more privacy-friendly blog

When I started this blog, I embraced some free services, like Disqus or Google Analytics. These services are quite invasive for users’ privacy. Over the years, I have tried to correct this to reach a point where I do not rely on any “privacy-hostile” services.

Analytics🔗

Google Analytics is the ubiquitous way to get powerful analytics for free. It’s also a great way to provide data about your visitors to Google—also for free. There are self-hosted alternatives like Matomo—previously Piwik.

I opted for a simpler solution: no analytics. It also enables me to think that my blog attracts thousands of visitors every day.

Fonts🔗

Google Fonts is a very popular font library and hosting service, which relies on the generic Google Privacy Policy. The google-webfonts-helper service makes it easy to self-host any font from Google Fonts. Moreover, with help from pyftsubset, I include only the characters used in this blog. The font files are lighter and more complete: no problem spelling “Antonín Dvořák”.
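For the record, a pyftsubset invocation looks roughly like this; the file names are placeholders and the exact flags depend on your needs:

$ pyftsubset merriweather.ttf \
    --text-file=site-content.txt \
    --flavor=woff \
    --output-file=merriweather.woff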

Videos🔗

  • Before: YouTube
  • After: self-hosted

Some articles are supported by a video (like “OPL2LPT: an AdLib sound card for the parallel port”). In the past, I was using YouTube, mostly because it was the only free platform with an option to disable ads. Streaming on-demand videos is usually deemed quite difficult. For example, if you just use the <video> tag, you may push too large a video to people with a slow connection. However, it is not that hard, thanks to hls.js, which makes it possible to deliver the video sliced into segments available at different bitrates. Users with JavaScript disabled are still served a medium-quality progressive version.
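A minimal sketch of this setup (file names are placeholders; the hls.js calls are its documented API):

<video id="video" controls>
  <!-- progressive fallback for browsers without JavaScript -->
  <source src="video-medium.mp4" type="video/mp4">
</video>
<script src="hls.min.js"></script>
<script>
  var video = document.getElementById("video");
  if (Hls.isSupported()) {
    var hls = new Hls();
    hls.loadSource("video.m3u8"); // playlist pointing to the per-bitrate segments
    hls.attachMedia(video);
  } else if (video.canPlayType("application/vnd.apple.mpegurl")) {
    video.src = "video.m3u8";     // Safari plays HLS natively
  }
</script>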

In “Self-hosted videos with HLS”, I explain this approach in more detail.

Comments🔗

Disqus is a popular comment solution for static websites. It was recently acquired by Zeta Global, a marketing company, and its business model relies entirely on advertising. On the technical side, Disqus also loads several hundred kilobytes of resources. Therefore, many websites load Disqus on demand. That’s what I did. This doesn’t solve the privacy problem, and I had the feeling people were less eager to leave a comment if they had to take an additional action.

For some time, I thought about implementing my own comment system around Atom feeds. Each page would get its own feed of comments. A piece of JavaScript would turn these feeds into HTML, and comments could still be read without JavaScript, thanks to the default rendering provided by browsers. People could also subscribe to these feeds: no need for mail notifications! The feeds would be served as static files and updated on new comments by a small piece of server-side code. Again, this could work without JavaScript.

Fowl Language Comics: Day Planner, or the real reason why I didn't code a new comment system.

I still think this is a great idea. But I didn’t feel like developing and maintaining a new comment system. There are several self-hosted alternatives, notably Isso and Commento. Isso is a bit more featureful, notably with an (imperfect) Disqus import. Both are struggling with maintenance and are trying to become sustainable through a hosted version. Commento is more privacy-friendly as it doesn’t use cookies at all. However, the cookies from Isso are not essential and can be filtered with nginx:

proxy_hide_header Set-Cookie;
proxy_hide_header X-Set-Cookie;
proxy_ignore_headers Set-Cookie;
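For context, here is a minimal sketch of where these directives sit, assuming Isso listens on localhost:8080 (the socket and the path are assumptions, not my actual setup):

location /comments/ {
    proxy_pass http://localhost:8080/;
    proxy_hide_header Set-Cookie;
    proxy_hide_header X-Set-Cookie;
    proxy_ignore_headers Set-Cookie;
}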

Isso currently has no mail notifications, but I have added an Atom feed for each comment thread.

Another option would have been to drop comments altogether. However, I have had some great contributions as comments in the past, and I also think they can work as a kind of peer review for blog articles: they are a weak guarantee that the content is not totally wrong.

Search engine🔗

A way to provide a search engine for a personal blog is to put up a form for a public search engine, like Google. That’s what I did. I also slapped some JavaScript on top of that to make it look like it was not Google.

The solution here is easy: switch to DuckDuckGo, which lets you customize the search experience a bit:

<form id="lf-search" action="https://duckduckgo.com/">
  <input type="hidden" name="kf" value="-1">
  <input type="hidden" name="kaf" value="1">
  <input type="hidden" name="k1" value="-1">
  <input type="hidden" name="sites" value="vincent.bernat.im/en">
  <input type="submit" value="">
  <input type="text" name="q" value="" autocomplete="off" aria-label="Search">
</form>

The JavaScript part is also removed, as DuckDuckGo doesn’t provide an API. As it is unlikely that more than three people a year will use the search engine, not spending too much time on this non-essential feature seems reasonable.

Newsletter🔗

  • Before: RSS feed
  • After: still RSS feed but also a MailChimp newsletter

Nowadays, RSS feeds are far less popular than they used to be. I am still baffled as to why a technical audience wouldn’t use RSS, but some readers prefer to receive updates by mail.

MailChimp is a common solution to send newsletters. It provides a simple integration with RSS feeds to trigger a mail each time new items are added to the feed. From a privacy point of view, MailChimp seems a good citizen: data collection is mainly limited to the amount needed to operate the service. Privacy-conscious users can still avoid this service and use the RSS feed.

Less JavaScript🔗

  • Before: third-party JavaScript code
  • After: self-hosted JavaScript code

Many privacy-conscious people disable JavaScript or use extensions like uMatrix or NoScript. Except for comments, I was using JavaScript only for non-essential things:

For mathematical formulae, I have switched from MathJax to KaTeX. The latter is faster and also enables server-side rendering: it produces the same output regardless of the browser, so client-side JavaScript is no longer needed.
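Server-side, the rendering boils down to a call like this; a sketch using katex.renderToString, KaTeX’s documented API, with the surrounding build step assumed:

// At build time, in Node.js: turn TeX source into static HTML + MathML.
const katex = require("katex");
const html = katex.renderToString("c = \\sqrt{a^2 + b^2}", {
  throwOnError: false // render bad input in red instead of aborting the build
});
// `html` is embedded in the generated page; the client only
// needs the KaTeX stylesheet, no JavaScript.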

For sidenotes, I have turned the JavaScript code doing the transformation into Python code, using pyquery. No more client-side JavaScript for this aspect either.
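The gist of such a build-time pass, as a sketch (the class names and markup are assumptions, not my actual code):

from pyquery import PyQuery as pq

def transform_sidenotes(html):
    """Build-time pass: tag footnote spans so CSS can render them as sidenotes."""
    d = pq(html)
    for i, note in enumerate(d("span.footnote").items(), 1):
        note.addClass("sidenote")
        note.attr("id", "sidenote-{}".format(i))
    return str(d)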

The remaining code is still here but is self-hosted.

Memento: CSP🔗

The HTTP Content-Security-Policy header controls the resources that a user agent is allowed to load for a given page. It is a safeguard and a memento for the external resources a site will use. Mine is moderately complex and shows what to expect from a privacy point of view:²

Content-Security-Policy:
  default-src 'self' blob:;
  script-src  'self' blob: https://d1g3mdmxf8zbo9.cloudfront.net/js/;
  object-src  'self' https://d1g3mdmxf8zbo9.cloudfront.net/images/;
  img-src     'self' data: https://d1g3mdmxf8zbo9.cloudfront.net/images/;
  frame-src   https://d1g3mdmxf8zbo9.cloudfront.net/images/;
  style-src   'self' 'unsafe-inline' https://d1g3mdmxf8zbo9.cloudfront.net/css/;
  font-src    'self' about: data: https://d1g3mdmxf8zbo9.cloudfront.net/fonts/;
  worker-src  blob:;
  media-src   'self' blob: https://luffy-video.sos-ch-dk-2.exo.io;
  connect-src 'self' https://luffy-video.sos-ch-dk-2.exo.io https://comments.luffy.cx;
  frame-ancestors 'none';
  block-all-mixed-content;

I am quite happy having been able to reach this result. 😊


  1. You may have noticed I am a footnote sicko and use them all the time for pointless stuff. ↩︎

  2. I don’t have an issue with using a CDN like CloudFront: it is a paid service and Amazon AWS is not in the business of tracking users. ↩︎

23 April, 2018 06:16AM by Vincent Bernat

April 22, 2018


Joachim Breitner

Verifying local definitions in Coq

TL;DR: We can give top-level names to local definitions, so that we can state and prove stuff about them without having to rewrite the programs.

When a Haskeller writes Coq

Imagine you teach Coq to a Haskell programmer, and give them the task of pairing each element in a list with its index. The Haskell programmer might have

addIndex :: [a] -> [(Integer, a)]
addIndex xs = go 0 xs
  where go n [] = []
        go n (x:xs) = (n,x) : go (n+1) xs

in mind and write this Gallina function (Gallina is the programming language of Coq):

Require Import Coq.Lists.List.
Import ListNotations.

Definition addIndex {a} (xs : list a) : list (nat * a) :=
  let fix go n xs := match xs with
                     | []    => []
                     | x::xs => (n, x) :: go (S n) xs
                     end
  in go 0 xs.

Alternatively, imagine you are using hs-to-coq to mechanically convert the Haskell definition into Coq.

When a Coq user tries to verify that

Now your task is to prove something about this function, for example:

Theorem addIndex_spec:
  forall {a} n (xs : list a),
  nth n (map fst (addIndex xs)) n = n.

If you just have learned Coq, you will think “I can do this, this surely holds by induction on xs.” But if you have a bit more experience, you will already see a problem with this (if you do not see the problem yet, I encourage you to stop reading, copy the few lines above, and try to prove it).

The problem is that – as so often – you have to generalize the statement for the induction to go through. The theorem as stated says something about addIndex or, in other words, about go 0. But in the inductive case, you will need some information about go 1. In fact, you need a lemma like this:

Lemma go_spec:
  forall {a} n m k (xs : list a), k = n + m ->
  nth n (map fst (go m xs)) k = k.

But go is not a (top-level) function! How can we fix that?

  • We can try to awkwardly work around not having a name for go in our proofs, and essentially prove go_spec inside the proof of addIndex_spec. This might work in a small case like this, but it does not scale up to larger proofs.
  • We can ask the programmer to avoid using local functions, and first define go as a top-level fixed point. But maybe we don’t want to bother them because of that. (Or, more likely, we are using hs-to-coq and that tool stubbornly tries to make the output as similar to the given Haskell code as possible.)
  • We can copy’n’paste the definition of go and make a separate, after-the-fact top-level definition. But this is not nice from a maintenance point of view: If the code changes, we have to update this copy.
  • Or we apply this one weird trick...

The weird trick

We can define go after-the-fact, but instead of copy’n’pasting the definition, we can use Coq’s tactics to define it. Here it goes:

Definition go {a} := ltac:(
  let e := eval cbv beta delta [addIndex] in (@addIndex a []) in
  (* idtac e; *)
  lazymatch e with | let x := ?def in _ =>
    exact def
  end).

Let us take it apart:

  1. We define go, and give the parameters that go depends upon. Note that of the two parameters of addIndex, the definition of go only depends on (“captures”) a, but not xs.
  2. We do not give a type to go. We could, but that would again just be copying information that is already there.
  3. We define go via an ltac expression: Instead of a term we give a tactic that calculates the term.
  4. This tactic first binds e to the body of addIndex. To do so, it needs to pass enough arguments to addIndex. The concrete value of the list argument does not matter, so we pass []. The term @addIndex a [] is now evaluated with the evaluation flags eval cbv beta delta [addIndex], which say “unfold addIndex and do beta reduction, but nothing else”. In particular, we do not do zeta reduction, which would reduce the let go := … definition. (The user manual very briefly describes these flags.)
  5. The idtac e line can be used to peek at e, for example when the next tactic fails. We can use this to check that e really is of the form let fix go := … in ….
  6. The lazymatch line matches e against the pattern let x := ?def in _, and binds the definition of go to the name def.
  7. And the exact def tactic tells Coq to use def as the definition of go.

We now have defined go, of type go : forall {a}, nat -> list a -> list (nat * a), and can state and prove the auxiliary lemma:

Lemma go_spec:
  forall {a} n m k (xs : list a), k = n + m ->
  nth n (map fst (go m xs)) k = k.
Proof.
  intros ?????.
  revert n m k.
  induction xs; intros; destruct n; subst; simpl.
  1-3:reflexivity.
  apply IHxs; lia.
Qed.

When we come to the theorem about addIndex, we can play a little trick with fold to make the proof goal pretty:

Theorem addIndex_spec:
  forall {a} n (xs : list a),
  nth n (map fst (addIndex xs)) n = n.
Proof.
  intros.
  unfold addIndex.
  fold (@go a).
  (* goal here: nth n (map fst (go 0 xs)) n = n *)
  apply go_spec; lia.
Qed.

Multiple local definitions

The trick extends to multiple local definitions, but needs some extra considerations to ensure that terms are closed. A bit contrived, but let us assume that we have this function definition:

Definition addIndex' {a} (xs : list a) : list (nat * a) :=
  let inc := length xs in
  let fix go n xs := match xs with
                     | []    => []
                     | x::xs => (n, x) :: go (inc + n) xs
                     end in
  go 0 xs.

We now want to give names to inc and to go. I like to use a section to collect the common parameters, but that is not essential here. The trick above works flawlessly for inc:

Section addIndex'.
Context {a} (xs : list a).

Definition inc := ltac:(
  let e := eval cbv beta delta [addIndex'] in (@addIndex' a xs) in
  lazymatch e with | let x := ?def in _ =>
    exact def
  end).

But if we try it for go', like so:

Definition go' := ltac:(
  let e := eval cbv beta delta [addIndex'] in (@addIndex' a xs) in
  lazymatch e with | let x := _ in let y := ?def in _ =>
    exact def
  end).

we get “Ltac variable def depends on pattern variable name x which is not bound in current context”. To fix this, we write

    exact (let x := inc in def)

instead.
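Putting the pieces together, the working definition is the previous attempt with just the exact line patched:

Definition go' := ltac:(
  let e := eval cbv beta delta [addIndex'] in (@addIndex' a xs) in
  lazymatch e with | let x := _ in let y := ?def in _ =>
    exact (let x := inc in def)
  end).

We have now defined both inc and go' and can use them in proofs about addIndex':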

Theorem addIndex_spec':
  forall n, nth n (map fst (addIndex' xs)) n = n * length xs.
Proof.
  intros.
  unfold addIndex'.
  fold inc go'. (* order matters! *)
  (* goal here: nth n (map fst (go' 0 xs)) n = n * inc *)

Reaching into a match

This trick also works when the local definition we care about is inside a match statement. Consider:

Definition addIndex_weird {a} (oxs : option (list a))
  := match oxs with
     | None => []
     | Some xs =>
       let fix go n xs := match xs with
                          | []    => []
                          | x::xs => (n, x) :: go (S n) xs
                          end in
       go 0 xs
     end.

Definition go_weird {a} := ltac:(
  let e := eval cbv beta match delta [addIndex_weird]
           in (@addIndex_weird a (Some [])) in
  idtac e;
  lazymatch e with | let x := ?def in _ =>
    exact def
  end).

Note the addition of match to the list of evaluation flags passed to cbv.

Conclusion

While local definitions are idiomatic in Haskell (in particular thanks to the where syntax), they are usually avoided in Coq, because they get in the way of verification. If, for some reason, one is stuck with such definitions, then this trick presents a reasonable way out.

22 April, 2018 09:47PM by Joachim Breitner (mail@joachim-breitner.de)

Sam Hartman

Shaving the DJ Software Yak

I'm getting married this June. (For the Debian folks, the Ghillie shirt and vest just arrived to go with the kilt. My thanks go out to the lunch table at Debconf that made that suggestion. Formal Scottish dress would not have fit, but I wanted something to go with the kilt.)
Music and dance have been an important part of my spiritual journey. Dance has also been an important part of the best weddings I attended. So I wanted dance to be a special part of our celebration. I put together a playlist for my 40th birthday; it was special and helped set the mood for the event. Unfortunately, as I started looking at what I wanted to play for the wedding, I realized I needed to do better. Some of the songs were too long. Some of them really felt like they needed a transition. I wanted a continuous mix, not a playlist.
I'm blind. I certainly could use two turntables and a mixer--or at least I could learn how to do so. However, I'm a kid of the electronic generation, and that's not my style. So I started looking at DJ software. With one exception, everything I found was too visual for me to use.
I've used Nama before to put together a mashup. It seemed like Nama offered almost everything I needed. Unfortunately, there were a couple of problems. Nama would not be a great fit for a live mix: you cannot add tracks or effects into the chain without restarting the engine. I didn't strictly need live production for this project, but I wanted to look at that long-term. At the time of my analysis, I thought that Nama didn't support tempo-scaling tracks. For that reason, I decided I was going to have to write my own software. Later I learned that you can adjust the sample rate on a track import, which is more or less good enough for tempo scaling. By that point I already had working code.
I wanted a command line interface. I wanted BPM and key detection; it looked like Mixxx was open-source software with good support for that. Based on my previous work, I chose Csound as the realtime sound backend.

Where I got


I'm proud of what I came up with. I managed to stay focused on my art rather than falling into the trap of focusing too much on the software. I got something that allows me to quickly explore the music I want to mix, but also managed to accomplish my goal and come up with a mix that I'm really happy with. As a result, at the current time, my software is probably only useful to me. However, it is available under the GPL V3. If anyone else would be interested in hacking on it, I'd be happy to spend some time documenting and working with them.
Here's a basic description of the approach; a small sketch of the data model follows the list.

  • You are editing a timeline that stores the transformations necessary to turn the input tracks into the output mix.
  • There are 10 stereo mixer channels that will be mixed down into a master output.
  • There are an unlimited number of input tracks. Each track is associated with a given mixer channel. Tracks are placed on the timeline at a given start point (starting from a given cue point in the track) and run for a given length. During this time, the track is mixed into the mixer channel. Associated with each track is a gain (volume) that controls how the track is mixed into its mixer channel. Volumes are constant per track.
  • Between the mixer channel and the master is a volume fader and an effect chain.
  • Effects are written in Csound code. Being able to easily write Csound effects is one of the things that made me more interested in writing my own than in looking at adding better tempo scaling/BPM detection to Nama.
  • Associated with each effect are three sliders that give inputs to the effect. Changing the number of mixer channels and effect sliders is an easy code change; however, it'd be somewhat tricky to change that dynamically during a performance. Effects also get an arbitrary number of constant parameters.
  • Sliders and volume faders can be manipulated on the timeline. You can ask for a linear change from the current value to a target over a given duration starting at some point. So I can ask for the amplitude to move from 0 to 127, say across 2 seconds, at the point where I want to mix in a track. You express slider manipulation in terms of the global timeline; however, it is stored relative to the start of the track. That is, if you have a track fade out at a particular musical phrase, the fade-out will stay with that phrase even if you move the cue point of the track or move where the track starts on the timeline. This is not what you want all the time, but my experience with Nama (which does it using global time) suggests that I at least save a lot of time with this approach.
  • There is a global effect chain between the output of the final mixer and the master output. This allows you to apply distortion, equalization or compression to the mix as a whole. The sliders for effects on this effect chain are against global time not a specific track.
  • There's a hard-coded compressor on the final output. I'm lazy and I needed it before I had effect chains.
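To make the shape of this concrete, here is a hedged sketch of the core timeline objects; the names are mine for illustration, not the actual classes in the software:

from dataclasses import dataclass, field
from typing import List

@dataclass
class SliderMove:
    """A linear fader/slider ramp, stored relative to the track start."""
    offset: float    # seconds after the track's placement starts
    duration: float  # seconds over which the value ramps linearly
    target: float    # value to reach, e.g. 0-127 for amplitude

@dataclass
class Track:
    """One input track placed on the global timeline."""
    path: str        # input audio file
    channel: int     # which of the 10 stereo mixer channels it feeds
    cue: float       # start point inside the file, in seconds
    start: float     # placement on the global timeline, in seconds
    length: float    # how long the track runs
    gain: float = 1.0                    # constant per-track volume
    moves: List[SliderMove] = field(default_factory=list)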

There's some preliminary support for a MIDI controller I was given, but I realized that coding that wasn't actually going to save me time, so I left it. This was a really fun project. I managed to tell a story for my wedding that is really important for me to tell. I learned a lot about what goes into putting together a mix. It's amazing how many decisions go into even simple things like a pan slider. It was also great that there is free software for me to build on top of. I got to focus on the part of the problem I wanted to solve. I was able to reuse components for the realtime sound work and for analysis like BPM detection.

22 April, 2018 08:55PM


Wouter Verhelst

host detection in bash

There are many tools to implement this, and yeah, this is not the fastest. But the advantage is that you don't need extra tools beyond "bash" and "ping"...

for i in $(seq 1 254); do
  # -W 1: wait at most one second; -c 1: send a single probe
  if ping -W 1 -c 1 "192.168.0.$i" >/dev/null 2>&1; then
    HOST[$i]=1
  fi
done
echo "${!HOST[@]}"

will give you the final octets of the addresses of the machines that are live on a given network...
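If speed matters, here is a hedged sketch of a parallel variant, still using nothing beyond bash and ping. Background subshells cannot set an array in the parent shell, so results are collected through a temporary directory:

tmp=$(mktemp -d)
for i in $(seq 1 254); do
  # probe all hosts concurrently; leave a marker file for each answer
  ( ping -W 1 -c 1 "192.168.0.$i" >/dev/null 2>&1 && touch "$tmp/$i" ) &
done
wait
shopt -s nullglob
for f in "$tmp"/*; do HOST[${f##*/}]=1; done
echo "${!HOST[@]}"
rm -r "$tmp"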

22 April, 2018 10:04AM


Norbert Preining

Specification and Verification of Software with CafeOBJ – Part 2 – Basics of CafeOBJ

This blog continues Part 1 of our series on software specification and verification with CafeOBJ.

Availability of CafeOBJ

CafeOBJ can be obtained from the website cafeobj.org. The site provides binary packages built for Linux, macOS, and Windows, as well as the source code for those who want to build the interpreter themselves. It also provides tutorial pages and all kinds of documentation (reference manual, wiki, user manual).

What is CafeOBJ

Let us recall some of the items mentioned in the previous blog. CafeOBJ is an algebraic specification language, as well as a verification and programming language. This means that specifications written in CafeOBJ can be verified right within the system, without resorting to external utilities.

As an algebraic specification language it is built upon the logical foundation formed by the following items: (i) order sorted algebras, (ii) co-algebras (or hidden algebras), and (iii) rewriting logic. As a verification and programming language it provides the user with an executable semantics of the equational theory, a rewrite engine that supports conditional, order-sorted, AC (associative and commutative) rewriting, a sophisticated module system including parametrization and inheritance, and, last but not least, a completely free syntax.

The algebraic semantics can be represented by the CafeOBJ cube, which exhibits the various extensions starting from many sorted algebras.

For the algebraically inclined audience we just mention that all the systems and morphisms are formalized as institutions and institution morphisms.

Let us now go through some of the logical foundations of CafeOBJ:

Term rewriting

Term rewriting is concerned with systems of rules to replace certain parts of an expression with another expression. A very simple example of a rewrite system is:

  append(nil, ys)    → ys
  append(x : xs, ys) → x : append(xs, ys)

Here the first rule says that an expression append(nil, ys), where ys can be any list, can be rewritten to ys itself. The second rule states how to rewrite an expression when the first list is not empty.

A typical reduction sequence – that is application of these rules – would be:

append(1 ∶ 2 ∶ 3 ∶ nil, 4 ∶ 5 ∶ nil) → 1 ∶ append(2 ∶ 3 ∶ nil, 4 ∶ 5 ∶ nil)
                                     → 1 ∶ 2 ∶ append(3 ∶ nil, 4 ∶ 5 ∶ nil)
                                     → 1 ∶ 2 ∶ 3 ∶ append(nil, 4 ∶ 5 ∶ nil)
                                     → 1 ∶ 2 ∶ 3 ∶ 4 ∶ 5 ∶ nil

Term rewriting is used in two different ways in CafeOBJ: first, as the execution engine, which treats equations as directed rules and uses them to reduce expressions; second, rewriting logic is part of the language specification itself, allowing reasoning about transitions.
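As a rough preview of how such rules could be written down for the interpreter (a sketch only; the module syntax is introduced properly in the next post, and a prefix cons is used instead of the infix ∶ to keep the parsing simple):

mod! APPEND {
  [ Elt List ]
  op nil : -> List
  op cons : Elt List -> List
  op append : List List -> List
  ops a b c : -> Elt
  var X : Elt
  vars XS YS : List
  eq append(nil, YS) = YS .
  eq append(cons(X, XS), YS) = cons(X, append(XS, YS)) .
}

open APPEND .
red append(cons(a, cons(b, nil)), cons(c, nil)) .
** expected: cons(a, cons(b, cons(c, nil))) : List
close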

Order sorted algebras

Most algebras we learn in school or even at university are single sorted, that is, all objects in the algebra are of the same type (e.g., integers, reals, a function space). In this case an operation is determined by its arity, that is, the number of its arguments.

In the many sorted and order sorted case, the number of arguments alone is not enough: for each argument we need to know its sort, as well as the sort of the value the function returns. Thus, we assume given a signature (S,F), where S is a set of sorts, or simply sort names, and F is a set of operations f: s1, s2, ..., sk → s where all the si and s are sorts.

As an example assume we have two sorts, String and Int, one possible function would be

  substr: String, Int, Int → String

which would tell us that the function substr takes three arguments, the first of sort String, the others of sort Int, and it returns again a value of sort String.

In case the sorts are partially ordered, we call the corresponding algebra an order sorted algebra.

Using order sorted algebras has several advantages compared to other algebraic systems:

  • polymorphism (parametric, subsort) and overloading are natural consequences of ordered sorts;
  • error definition and handling via subsorts;
  • multiple inheritance;
  • rigorous model-theoretic semantics based on institutions;
  • operational semantics that executes equations as rewrite rules (executable specifications).

We want to close this blog post with a short history of CafeOBJ and a short sample list of specifications that have been carried out with CafeOBJ.

History, background, relatives, and examples

CafeOBJ, as an algebraic specification language based on equational theory, has its roots in CLEAR (Burstall and Goguen, early 70s) and the OBJ language (Goguen et al., 70s–80s, SRI and UC San Diego). The successor OBJ2 was developed by Futatsugi, Goguen, Jouannaud, and Meseguer at UC San Diego in 1984, based on Horn logic, sub-sorts, and parametrized modules.

The developers then moved on to different languages or extensions: Meseguer started to develop Maude, Jouannaud moved on to Coq, an unrelated language, and Futatsugi built upon the OBJ3 language by Kirchner et al. to create CafeOBJ.

Example specifications carried out in CafeOBJ include authentication protocols (NSLPK, STS, Otway-Rees), the key secrecy PACE protocol (German passport), e-commerce protocols (SET), real-time algorithms (Fischer’s mutual exclusion protocol), UML semantics, and formal fault tree analysis.


In the next blog post we will make first steps with the CafeOBJ interpreter and see how to define modules, the basic building blocks, and carry out simple computations.

22 April, 2018 02:47AM by Norbert Preining

April 21, 2018


Benjamin Mako Hill

Mako Hate

I recently discovered a prolific and sustained community of meme-makers on Tumblr dedicated to expressing their strong dislike for “Mako.”

Two tags with examples are #mako hate and #anti mako but there are many others.

“even Mako hates Mako…” meme. Found on this forum thread.

I’ve also discovered Tumblrs entirely dedicated to the topic!

For example, Let’s Roast Mako describes itself as “A place to beat up Mako. In peace. It’s an inspiration to everyone!”

The second is the Fuck Mako Blog, which describes itself with a series of taglines including “Mako can fuck right off and we’re really not sorry about that,” “Welcome aboard the SS Fuck-Mako;” and “Because Mako is unnecessary.”

I’ll admit I’m a little disquieted.

21 April, 2018 09:31PM by Benjamin Mako Hill


Joey Hess

my haskell controlled offgrid fridge

I'm preparing for a fridge upgrade, away from the tiny propane fridge to a chest freezer conversion. My home computer will be monitoring the fridge temperature and the state of my offgrid energy system, and turning the fridge on and off using a relay and the inverter control board I built earlier.

This kind of automation is a perfect fit for Functional Reactive Programming (FRP) since it's all about time-varying behaviors and events being combined together.

Of course, I want the control code to be as robust as possible, well tested, and easy to modify without making mistakes. Pure functional Haskell code.

There are many Haskell libraries for FRP, and I have not looked at most of them in any detail. I settled on reactive-banana because it has a good reputation and amazing testimonials.

"In the programming-language world, one rule of survival is simple: dance or die. This library makes dancing easy." – Simon Banana Jones

But, it's mostly used for GUI programming, or maybe some musical live-coding. There were no libraries for using reactive-banana for the more staid task of home automation, or anything like that. Also, using it involves a whole lot of IO code, so not great for testing.

So I built reactive-banana-automation on top of it to address my needs. I think it's a pretty good library, although I don't have a deep enough grokking of FRP to say that for sure.

Anyway, it's plenty flexible for my fridge automation needs, and I also wrote a motion-controlled light automation with it to make sure it could be used for something else (and to partly tackle the problem of using real-world time events when the underlying FRP library uses its own notion of time).

The code for my fridge is a work in progress since the fridge has not arrived yet, and because the question of in which situations an offgrid fridge should optimally run and not run is really rather complicated.

Here's a simpler example, for a non-offgrid fridge.

fridge :: Automation Sensors Actuators
fridge sensors actuators = do
        -- Create a Behavior that reflects the most recently reported
        -- temperature of the fridge.
        btemperature <- sensedBehavior (fridgeTemperature sensors)
        -- Calculate when the fridge should turn on and off.
        let bpowerchange = calcpowerchange <$> btemperature
        onBehaviorChangeMaybe bpowerchange (actuators . FridgePower)
  where
        calcpowerchange (Sensed temp)
                | temp `belowRange` allowedtemp = Just PowerOff
                | temp `aboveRange` allowedtemp = Just PowerOn
                | otherwise = Nothing
        calcpowerchange SensorUnavailable = Nothing
        allowedtemp = Range 1 4

And here the code is being tested in a reproducible fashion:

> runner <- observeAutomation fridge mkSensors
> runner $ \sensors -> fridgeTemperature sensors =: 6
[FridgePower PowerOn]
> runner $ \sensors -> fridgeTemperature sensors =: 3
[]
> runner $ \sensors -> fridgeTemperature sensors =: 0.5
[FridgePower PowerOff]
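The motion-controlled light mentioned above has the same shape. A minimal sketch reusing only the primitives visible in the fridge example; motionSensor and LightPower are hypothetical names, and a real version would also want a timer to switch off after inactivity:

light :: Automation Sensors Actuators
light sensors actuators = do
        -- Behavior reflecting the most recently reported motion state.
        bmotion <- sensedBehavior (motionSensor sensors)
        let bpower = calcpower <$> bmotion
        onBehaviorChangeMaybe bpower (actuators . LightPower)
  where
        calcpower (Sensed True) = Just PowerOn
        calcpower _             = Nothing -- switching off is left to a timeout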

BTW, building a 400 line library and writing reams of control code for a fridge that has not been installed yet is what we Haskell programmers call "laziness".

21 April, 2018 08:45PM


Lisandro Damián Nicanor Pérez Meyer

moving Qt 4 from Debian testing (aka Buster): some statistics, update II

As in my previous blogpost I'm taking a look at our Qt4 removal wiki page.

Of a total of 438 filed bugs:

  • 181 bugs (41.32%) have already been fixed by either porting the app/library to Qt 5 or removing it from the archive. In most cases the code has been ported; most of the removals are due to Qt 5 replacements already available in the archive, and some to dead upstreams (i.e., no Qt 5 port available).
  • 257 bugs (58.68%) still need a fix or are fixed in experimental.
  • 35 bugs (8% of the total, 13% of the remaining) are maintained inside the Qt/KDE team.

We started filing bugs around September 9. That means roughly 32 weeks, which gives us around 5.65 packages fixed per week, or about 0.8 packages per day. Obviously not as good as when we started (the remaining bugs tend to be more complicated), but still quite good.

So, how can you help?

If you are a maintainer of any of the packages still affected try to get upstream to make a port and package it.

If you are not a maintainer you might want to take a look at the list of packages on our wiki page and try to create a patch for them. If you can submit it directly to upstream, even better. Or maybe it's time for you to become the package's upstream or maintainer!



21 April, 2018 12:47PM by Lisandro Damián Nicanor Pérez Meyer (noreply@blogger.com)


Vincent Bernat

OPL2 Audio Board: an AdLib sound card for Arduino

In a previous article, I presented the OPL2LPT, a sound card for the parallel port featuring a Yamaha YM3812 chip, also known as OPL2—the chip of the AdLib sound card. The OPL2 Audio Board for Arduino is another indie sound card using this chip. However, instead of relying on a parallel port, it uses a serial interface, which can be driven from an Arduino board or a Raspberry Pi. While the OPL2LPT targets retrogamers with real hardware, the OPL2 Audio Board cannot be used in the same way. Nonetheless, it can also be operated from ScummVM and DOSBox!

The OPL2 Audio Board over a “Grim Fandango” box.

Unboxing🔗

The OPL2 Audio Board can be purchased on Tindie, either as a kit or fully assembled. I have paired it with a cheap clone of the Arduino Nano. A library to drive the board is available on GitHub, along with some examples.

One of them is DemoTune.ino. It plays a short tune on three channels. It can be compiled and uploaded to the Arduino with PlatformIO—installable with pip install platformio—using the following command:¹

$ platformio ci \
    --board nanoatmega328 \
    --lib ../../src \
    --project-option="targets=upload" \
    --project-option="upload_port=/dev/ttyUSB0" \
    DemoTune.ino
[...]
PLATFORM: Atmel AVR > Arduino Nano ATmega328
SYSTEM: ATMEGA328P 16MHz 2KB RAM (30KB Flash)
Converting DemoTune.ino
[...]
Configuring upload protocol...
AVAILABLE: arduino
CURRENT: upload_protocol = arduino
Looking for upload port...
Use manually specified: /dev/ttyUSB0
Uploading .pioenvs/nanoatmega328/firmware.hex
[...]
avrdude: 6618 bytes of flash written
[...]
===== [SUCCESS] Took 5.94 seconds =====

Immediately after the upload, the Arduino plays the tune. 🎶

The next interesting example is SerialIface.ino. It turns the audio board into a sound card over the serial port. Once the code has been pushed to the Arduino, you can use the play.py program in the same directory to play VGM files. VGM is a sample-accurate sound format for many sound chips: it logs the exact commands sent to the chip. There are many VGM files on VGMRips. Be sure to choose the ones for the YM3812/OPL2! Here is a small selection:

The OPL2 Audio Board playing some VGM files. It is connected to an Arduino Nano. You can see the LEDs blinking when the Arduino receives the commands from the serial port.

Usage with DOSBox & ScummVM🔗

Notice

The support for the serial protocol used in this section has not been merged yet. In the meantime, grab SerialIface.ino from the pull request: git checkout 50e1717.

When the Arduino is flashed with SerialIface.ino, the board can be driven through a simple protocol over the serial port. By patching DOSBox and ScummVM, we can make them use this unusual sound card. Here are some examples of games:

  • 0:00, with DOSBox, the first level of Doom 🎮
  • 1:06, with DOSBox, the introduction of Loom 🎼
  • 2:38, with DOSBox, the first level of Lemmings 🐹
  • 3:32, with DOSBox, the introduction of Legend of Kyrandia 🃏
  • 6:47, with ScummVM, the introduction of Day of the Tentacle ☢️
  • 11:10, with DOSBox, the introduction of Another World² 🐅

DOSBox🔗

The serial protocol is described in the SerialIface.ino file:

/*
 * A very simple serial protocol is used.
 *
 * - Initial 3-way handshake to overcome reset delay / serial noise issues.
 * - 5-byte binary commands to write registers.
 *   - (uint8)  OPL2 register address
 *   - (uint8)  OPL2 register data
 *   - (int16)  delay (milliseconds); negative -> pre-delay; positive -> post-delay
 *   - (uint8)  delay (microseconds / 4)
 *
 * Example session:
 *
 * Arduino: HLO!
 * PC:      BUF?
 * Arduino: 256 (switches to binary mode)
 * PC:      0xb80a014f02 (write OPL register and delay)
 * Arduino: k
 *
 * A variant of this protocol is available without the delays. In this
 * case, the BUF? command should be sent as B0F? The binary protocol
 * is now using 2-byte binary commands:
 *   - (uint8)  OPL2 register address
 *   - (uint8)  OPL2 register data
 */

Adding support for this protocol in DOSBox is relatively simple (patch). For best performance, we use the 2-byte variant (5000 ops/s). The binary commands are pipelined and a dedicated thread collects the acknowledgments. A semaphore captures the number of free slots in the receive buffer. As it is not possible to read registers, we rely on DOSBox to emulate the timers, which are mostly used to let the various games detect the OPL2.

The patch is tested only on Linux but should work on any POSIX system—not Windows. To test it, you need to build DOSBox from source:

$ sudo apt build-dep dosbox
$ git clone https://github.com/vincentbernat/dosbox.git -b feature/opl2audioboard
$ cd dosbox
$ ./autogen.sh
$ ./configure && make

Replace the sblaster section of ~/.dosbox/dosbox-SVN.conf:

[sblaster]
sbtype=none
oplmode=opl2
oplrate=49716
oplemu=opl2arduino
opl2arduino=/dev/ttyUSB0

Then, run DOSBox with ./src/dosbox. That’s it!

You will likely get the “OPL2Arduino: too slow, consider increasing buffer” message a lot. To fix this, you need to recompile SerialIface.ino with a bigger receive buffer:

$ platformio ci \
    --board nanoatmega328 \
    --lib ../../src \
    --project-option="targets=upload" \
    --project-option="upload_port=/dev/ttyUSB0" \
    --project-option="build_flags=-DSERIAL_RX_BUFFER_SIZE=512" \
    SerialIface.ino

ScummVM🔗

The same code can be adapted for ScummVM (patch). To test, build it from source:

$ sudo apt build-dep scummvm
$ git clone https://github.com/vincentbernat/scummvm.git -b feature/opl2audioboard
$ cd scummvm
$ ./configure --disable-all-engines --enable-engine=scumm && make

Then, you can start ScummVM with ./scummvm. Select “AdLib Emulator” as the music device and “OPL2 Arduino” as the AdLib emulator.³ Like for DOSBox, watch the console to check if you need a larger receive buffer.

Enjoy! 😍


  1. This command is valid for an Arduino Nano. For another board, take a look at the output of platformio boards arduino↩︎

  2. Another World (also known as Out of This World), released in 1991 and designed by Éric Chahi, uses sampled sounds at 5 kHz or 10 kHz. With a serial port operating at 115,200 bits/s, the 5 kHz option is just within our reach. However, I have no idea if the rendering is faithful. It doesn’t sound like a SoundBlaster, but it sounds analogous to the rendering of the OPL2LPT, which sounds similar to the SoundBlaster when using the 10 kHz option. DOSBox’s AdLib emulation using Nuked OPL3—which is considered to be the best—sounds worse. ↩︎

  3. If you need to specify a serial port other than /dev/ttyUSB0, add a line opl2arduino_device= in the ~/.scummvmrc configuration file. ↩︎

21 April, 2018 09:19AM by Vincent Bernat

April 20, 2018


Benjamin Mako Hill

Hyak on Hyak

I recently fulfilled a yearslong dream of launching a job on Hyak* on Hyak.

Hyak on Hyak

 


* Hyak is the University of Washington’s supercomputer which my research group uses for most of our computation-intensive research.
M/V Hyak is a Super-class ferry operated by the Washington State Ferry System.

20 April, 2018 10:26PM by Benjamin Mako Hill


Jonathan Dowland

Twitter 10th anniversary

Tomorrow marks my 10th anniversary on Twitter. I have mixed feelings about the occasion. Twitter has been both a terrific success and a horrific failure. I've enjoyed it, I've discovered interesting people via Twitter and had some great interactions. I certainly prefer it to Facebook, but that's not a high benchmark.

Back in the early days I tried to engage with Twitter the way a hacker would. I worked out a scheme to archive my own tweets. I wrote a twitter bot. But Twitter became more and more hostile to that kind of interaction, so I no longer bother. Anything I put on Twitter I consider ephemeral. I've given up backing up my own tweets, conversations, or favourites. I deleted the bot. I keep a "sliding window" of recent tweets, outside of which I delete (via tweetdelete). My window started out a year wide; now it's down to three months.

Aside from the general hostility to third parties wanting to build on the Twitter platform, they've also done a really poor job of managing bad actors. Of the tools they do offer, they save the best for people with "verified" status: ostensibly a system for preventing fakes, now considered by some a status symbol. Twitter has done nothing to counter this; in fact they've actively encouraged it, by withdrawing it in at least one case from a notorious troll as an ad-hoc form of punishment. For the rest of us, the tools are woefully inadequate. If you find yourself on the receiving end of even a small pocket of bad attention, Twitter becomes effectively unusable for hours or days on end. Finally, the troll-in-chief (and now President of the US) is inexplicably still permitted on Twitter despite repeatedly and egregiously violating their terms of service, demonstrating that there are different rules for some folks than for the rest of us.

(By the way, I thoroughly recommend looking at Block Lists/Bots. I'm blocking thousands of accounts, although the system I've been using appears to have been abandoned. It might be worth a look at blocktogether.org; I intend to at some point.)

To some extent Twitter is responsible for—if not the death, then the mortal wounding—of blogging. Back in the dim and distant past, we'd write blog posts for idle thoughts (e.g.), and those have migrated quite comfortably to tweets, but it seems to have had a sapping effect on people writing even longer-form stuff. Twitter isn't the only culprit: Google sunsetting Reader in 2013 was an even bigger blow, and I've still not managed to find something to replace it. (Plenty of alternatives exist, but the habit has died.)

One of the well-meaning, spontaneous things that came from the Twitter community was the notion of "Follow Friday": on Fridays, folks would nominate other interesting folks that you might like to follow. In that spirit, and wishing to boost the idea of blogging again, I'd like to nominate some interesting blogs that you might enjoy. (Feel free to recommend me some more blogs to read in the comments!)

  • Vicky Lai first came up on my radar via Her One Bag, documenting her nomadic lifestyle (Hello UltraNav keyboard, and Stanley travel mug!), but her main site is worth following, too. Most recently she's written up how she makes her twitter ephemeral using AWS Lambda.
  • Alex Beal, who I have already mentioned.
  • Chris Siebenmann, a UNIX systems administrator at the University of Toronto. Siebenmann's blog feels to me like it comes from a parallel Universe where I stuck it out as a sysadmin, and got institutional support to do the job justice (I didn't, and I didn't.)
  • Darren Wilkinson writes about Statistics, computing, data science, Bayes, stochastic modelling, systems biology and bioinformatics
  • Friend of the family Mina writes candidly and brilliantly about her journey beating Lymphoma as a new mum at Lymphoma, Raphi and me
  • Ashley Pomeroy writes infrequently, eclectically (and surreally) on a range of topics, from the history of the Playstation 3, running old games on modern machines, photography and Thinkpads.

A couple of blogs from non-Debian/Linux OS developers. It's always nice to see what the other grass is like.

Finally, a more pleasing decennial: this year marks 10 years since my first uploaded package for Debian.

20 April, 2018 01:25PM


Gunnar Wolf

15.010958904109589041

Gregor's post made me think...

And yes! On April 15, I passed the 15-year-mark as a Debian Developer.

So, today I am 15.010958904109589041 years old in the project, give or take some seconds.

And, quoting my dear and admired friend, I deeply feel I belong to this community. Being part of Debian has defined the way I have shaped my career, has brought me beautiful friendships I will surely keep for many many more years, has helped me decide in which direction I should push to improve the world. I feel welcome and very recognized among people I highly value and admire, and that's the best collective present I could get.

Debian has grown and matured tremendously since the time I decided to join, and I'm very proud to be a part of that process.

Thanks, and let's keep it going for the next decade.

20 April, 2018 04:10AM by gwolf


Kees Cook

UEFI booting and RAID1

I spent some time yesterday building out a UEFI server that didn’t have on-board hardware RAID for its system drives. In these situations, I always use Linux’s md RAID1 for the root filesystem (and/or /boot). This worked well for BIOS booting since BIOS just transfers control blindly to the MBR of whatever disk it sees (modulo finding a “bootable partition” flag, etc, etc). This means that BIOS doesn’t really care what’s on the drive, it’ll hand over control to the GRUB code in the MBR.

With UEFI, the boot firmware is actually examining the GPT partition table, looking for the partition marked with the “EFI System Partition” (ESP) UUID. Then it looks for a FAT32 filesystem there, and does more things like looking at NVRAM boot entries, or just running BOOT/EFI/BOOTX64.EFI from the FAT32. Under Linux, this .EFI code is either GRUB itself, or Shim which loads GRUB.

So, if I want RAID1 for my root filesystem, that’s fine (GRUB will read md, LVM, etc), but how do I handle /boot/efi (the UEFI ESP)? In everything I found answering this question, the answer was “oh, just manually make an ESP on each drive in your RAID and copy the files around, add a separate NVRAM entry (with efibootmgr) for each drive, and you’re fine!” I did not like this one bit since it meant things could get out of sync between the copies, etc.

The current implementation of Linux’s md RAID puts metadata at the front of a partition. This solves more problems than it creates, but it means the RAID isn’t “invisible” to something that doesn’t know about the metadata. In fact, mdadm warns about this pretty loudly:

# mdadm --create /dev/md0 --level 1 --raid-disks 2 /dev/sda1 /dev/sdb1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90

Reading from the mdadm man page:

-e, --metadata=
    ...
    1, 1.0, 1.1, 1.2 default
        Use the new version-1 format superblock. This has fewer
        restrictions. It can easily be moved between hosts with
        different endian-ness, and a recovery operation can be
        checkpointed and restarted. The different sub-versions store
        the superblock at different locations on the device, either
        at the end (for 1.0), at the start (for 1.1) or 4K from the
        start (for 1.2). "1" is equivalent to "1.2" (the commonly
        preferred 1.x format). "default" is equivalent to "1.2".

First we toss a FAT32 on the RAID (mkfs.fat -F32 /dev/md0), and looking at the results, the first 4K is entirely zeros, and file doesn’t see a filesystem:

# dd if=/dev/sda1 bs=1K count=5 status=none | hexdump -C
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00001000  fc 4e 2b a9 01 00 00 00  00 00 00 00 00 00 00 00  |.N+.............|
...
# file -s /dev/sda1
/dev/sda1: Linux Software RAID version 1.2 ...

So, instead, we’ll use --metadata 1.0 to put the RAID metadata at the end:

# mdadm --create /dev/md0 --level 1 --raid-disks 2 --metadata 1.0 /dev/sda1 /dev/sdb1
...
# mkfs.fat -F32 /dev/md0
# dd if=/dev/sda1 bs=1 skip=80 count=16 status=none | xxd
00000000: 2020 4641 5433 3220 2020 0e1f be77 7cac    FAT32   ...w|.
# file -s /dev/sda1
/dev/sda1: ... FAT (32 bit)

Now we have a visible FAT32 filesystem on the ESP. UEFI should be able to boot whatever disk hasn’t failed, and grub-install will write to the RAID mounted at /boot/efi.

However, we’re left with a new problem: on (at least) Debian and Ubuntu, grub-install attempts to run efibootmgr to record which disk UEFI should boot from. This fails, though, since it expects a single disk, not a RAID set. In fact, it returns nothing, and tries to run efibootmgr with an empty -d argument:

Installing for x86_64-efi platform.
efibootmgr: option requires an argument -- 'd'
...
grub-install: error: efibootmgr failed to register the boot entry: Operation not permitted.
Failed: grub-install --target=x86_64-efi
WARNING: Bootloader is not properly installed, system may not be bootable

Luckily my UEFI boots without NVRAM entries, and I can disable the NVRAM writing via the “Update NVRAM variables to automatically boot into Debian?” debconf prompt when running: dpkg-reconfigure -p low grub-efi-amd64

So, now my system will boot with both or either drive present, and updates from Linux to /boot/efi are visible on all RAID members at boot-time. HOWEVER there is one nasty risk with this setup: if UEFI writes anything to one of the drives (which this firmware did when it wrote out a “boot variable cache” file), it may lead to corrupted results once Linux mounts the RAID (since the member drives won’t have identical block-level copies of the FAT32 any more).

To deal with this “external write” situation, I see some solutions:

  • Make the partition read-only when not under Linux. (I don’t think this is a thing.)
  • Create higher-level knowledge of the root-filesystem RAID configuration, keeping a collection of filesystems manually synchronized instead of doing block-level RAID. (Seems like a lot of work and would need a redesign of /boot/efi into something like /boot/efi/booted, /boot/efi/spare1, /boot/efi/spare2, etc.)
  • Prefer one RAID member’s copy of /boot/efi and rebuild the RAID at every boot. If there were no external writes, there’s no issue. (Though what’s really the right way to pick the copy to prefer?)

Since mdadm has the “--update=resync” assembly option, I can actually do the latter. This required updating /etc/mdadm/mdadm.conf to add <ignore> on the RAID’s ARRAY line to keep it from auto-starting:

ARRAY <ignore> metadata=1.0 UUID=123...

(Since it’s ignored, I’ve chosen /dev/md100 for the manual assembly below.) Then I added the noauto option to the /boot/efi entry in /etc/fstab:

/dev/md100 /boot/efi vfat noauto,defaults 0 0

And finally I added a systemd oneshot service that assembles the RAID with resync and mounts it:

[Unit]
Description=Resync /boot/efi RAID
DefaultDependencies=no
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/sbin/mdadm -A /dev/md100 --uuid=123... --update=resync
ExecStart=/bin/mount /boot/efi
RemainAfterExit=yes

[Install]
WantedBy=sysinit.target

(And don’t forget to run “update-initramfs -u” so the initramfs has an updated copy of /etc/mdadm/mdadm.conf.)

If mdadm.conf supported an “update=” option for ARRAY lines, this would have been trivial. Looking at the source, though, that kind of change doesn’t look easy. I can dream!

And if I wanted to keep a “pristine” version of /boot/efi that UEFI couldn’t update I could rearrange things more dramatically to keep the primary RAID member as a loopback device on a file in the root filesystem (e.g. /boot/efi.img). This would make all external changes in the real ESPs disappear after resync. Something like:

# truncate --size 512M /boot/efi.img
# losetup -f --show /boot/efi.img
/dev/loop0
# mdadm --create /dev/md100 --level 1 --raid-disks 3 --metadata 1.0 /dev/loop0 /dev/sda1 /dev/sdb1

And at boot just rebuild it from /dev/loop0, though I’m not sure how to “prefer” that partition…

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

20 April, 2018 12:34AM by kees

April 19, 2018

Enrico Zini

Detect a UEFI partition

Today I had to implement a check to see if a disk contains a UEFI ESP partition.

Here it is, it also works on disk image files instead of devices:

def get_uefi_partition(self, disk_dev):
    """
    Return the partition device of the UEFI ESP partition for the device in
    disk_dev.

    Returns None if disk_dev contains no UEFI ESP partition.
    """
    import parted
    pdev = parted.getDevice(disk_dev)
    pdisk = parted.newDisk(pdev)
    if pdisk.type != "gpt":
        log.error("device %s has partition table type %s instead of gpt", disk_dev, pdisk.type)
        return None
    for part in pdisk.partitions:
        # Flag 18 is PED_PARTITION_ESP ("esp") in libparted; older pyparted
        # versions do not export a named constant for it.
        if part.getFlag(18):
            log.info("Found ESP partition in %s", part.path)
            return part.path
    log.info("No ESP partition found in %s", disk_dev)
    return None
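Hypothetical usage, assuming the method lives on some partitioning object not shown here:

# checker is whatever object this method is defined on
esp = checker.get_uefi_partition("/dev/sda")      # a block device...
if esp is None:
    esp = checker.get_uefi_partition("usb.img")   # ...or a disk image file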

19 April, 2018 09:04PM


Rhonda D'Vine

Diversity Update

I have to apologize for being silent for so long. Way too many things happened. In fact I had already written most of this last fall, but then something happened that impacted me too much to finalize this entry. With that, I want to go into a bit of detail on how I write my blog entries:
I start writing them in English, I like to cross-reference things, and after I'm done I go over the text and write it again in German. That process helps me proofread the English part, but it also means that it takes a fair amount of time. And the longer the entries get, the more energy the translation and proofreading take, too. That's mostly also the reason why I tend to write longer entries when I find the energy and time for it.

Anyway, the first thing that I want to mention here finally happened last June: I officially got my name and gender/sex marker changed in my papers! That was a very happy moment in so many ways. A week later I got my new passport, and finally managed to book my flight to Debconf in my name. Yay me, I exist!

Then, Stretch was released. I have to admit I had very little to do, wasn't involved in the release process, neither from the website team nor anywhere else because ...

... because I was packing my stuff that weekend, because on June 21st a second thing finally happened: I got the keys to my flat in the Que[e]rbau!! Yes, I'm aware that we still need to work on the website. The building company actually made a big event out of it, called every single person onto the stage and handed over the keys. And it made me happy to be able to receive my key in my name, and not one I haven't related to in a long while. It did hurt seeing that happen to someone else from our house, even though they knew what the Que[e]rbau is about ... And: I moved in right that same day. I gave up my old flat the following week; even though I didn't have much furniture nor a kitchen, I had been waiting way too long not to be there. And just watch that sunset from my balcony. <3

And I mentioned it in the last blog post already, the European Lesbian* Conference organization needed more and more work, too. The program for it started to finalize, but there were still more than enough things to do. I totally fell into this, this was the first time I really felt what intersectionality means and that it's not just a label but an internal part of this conference. The energy going on in the team on that grounds is really outstanding, and I'm totally happy to be part of this effort.

And then came along Debconf17 in Montreal. It was nice to be with a fair amount of people that grew on me like a family over the years. And interestingly I got the notice that there was a Trans March going on, so I joined that. It was a pleasure meeting Sophie LaBelle and Chase Ross there. I wasn't aware that Chase was from Montreal, so that part was a surprise. Sophie I knew, and I brought her back to Vienna in November, right before the Transgender Day of Remembrance. :)

But one of the two moving speeches at the march were from Charlie Rose titled My Gender Is Black. I managed to get a recording of this and another great speech from another Black Lives Matters activist, and hope I'll be able to put them online at some point. For the time being the link to the text should be able to help.

And then Debconf itself started. And I held the Debian Diversity Round Table. While the title might have been misleading, because this group isn't officially formed yet, it turned out to attract a fair amount of interest. I started off with why I called for it, and that I intentionally chose not to have it videotaped so that people would be able to speak more freely. After a short introduction round with names, pronouns and other things people wanted to share, we had some interesting discussions on why people think this is a good idea and what direction to move in. A few ideas sprang up, and then ... time ran out. So we scheduled a continuation BoF to develop the topic further. At the end of that we came up with a pretty good consensual view on how to move forward. Unfortunately I haven't yet managed to follow up on that, and I feel quite bad about it. :/

Because, after returning, getting back into work, and needing a bit more time for EL*C, I started to feel serious pain in my back and my leg, which seems to be a slipped disc, and I was on sick leave for about two months. The pain was too much; I even had to stay at the hospital for two weeks because my stomach acted up too.

At the end of October we had a grand opening: we have a community space in our Que[e]rbau in which we built a sort of bar, with cooking facilities and hi-fi equipment. And we intentionally opened it up to the public. Its name is Yella Yella! Nachbar_innentreff. We named it after Yella Hertzka, who was an important feminist at the start of the 20th century. The park on the other side of the street is called Yella Hertzka park, so the pun in the name, with the connection to the Arabic expression Yalla Yalla, is intentional.

With the Yella Yella a fair amount of internal discussions emerged; we had all only just started living together, so naturally this took a fair amount of energy and discussion. It takes time to get a feeling for all the people. Several interviews were given, and events had to be organized to get it running.

And then, all of a sudden, it was 2018 and I still hadn't published this post. I'm sorry 'bout that, but sometimes there are other things that need time. And here I am. Time moves on even if we don't look at it.

A recent project that I had the honor to be part of is my movement is limitless [trans_non-binary short]. It was interesting to think about whether gender identity affects the way you dance. And to see and hear other people's approaches to it.

At the upcoming Linuxtage Graz there will be a session about Common misconceptions about names and spaces and communities, because they were enforcing a realname policy – at a community event. Not only is this a huge issue for trans people, it also works against privacy researchers or people from the community that no one really knows by the name in their papers. The discussions that happened on Twitter or in the background were partly a fair bit disturbing. Let's hope that we'll manage to put together a good panel.

Which brings us to a panel for the upcoming Debconf in Taiwan. There is a suggestion to have a Gender Forum at the Open Day. I'm still not completely sure what it should cover or what is expected of it, and I guess it's still open for suggestions. There will be a plan; let's see to it that it turns out diverse and great!

I won't promise to send the next update sooner, but I'll try to get back into it. Right now I'm also working on a (German language) submission for a non-binary YouTube project and it would be great to see that thing lift off. I'll be more verbose on that front.

Thanks for reading so far, and read you soon. :)


19 April, 2018 07:53PM by Rhonda

hackergotchi for Gregor Herrmann

Gregor Herrmann

10 years + 1 day

yesterday 10 years ago I became a Debian Developer.
& I still feel that I belong to this community.
& it took me one more day to write this tiny blog post about it.
so tonight I can celebrate 10 years plus 1 day :)

19 April, 2018 06:21PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

MySQL 8.0 released

(This post is not endorsed by Oracle, and I do not speak for them.)

MySQL 8.0.11 GA (General Availability) is out today—for those not used to Oracle's idiosyncratic versioning, this essentially means “MySQL 8.0 is released” (8.0.1 and so forth were various stages of alpha and beta). This marks the end of three years of development, of which I've been on board for two or so of them.

It feels a bit strange to be working on a product you don't use yourself (my personal datastore of choice firmly remains PostgreSQL), but that's how things go—back when I worked at Google, there were also products that I felt more or less deeply about, although I generally felt more connected to them and less like “it's just a job”. I do pride myself on having a neutral assessment of my employer's products, though—I don't use a product just because my employer makes it (e.g., I didn't use Gmail or Google Plus personally when I worked at Google, as I don't find them very good products, but I did use Maps and Search, both of which I find excellent).

Being new on the team, it's hard to dive directly in and make major contributions—MySQL is a product with a lot of legacy, despite extensive cleanups over the last few years (especially after Oracle bought Sun and the original developers left), and the amount of documentation varies rather a lot. So what did I do? I removed stuff. Tons of it.

I removed warnings for newer compilers (in many rounds, and so did many others). I removed tons of unneeded includes. I removed unneeded usages of Boost. I removed the abomination that was my_global.h (there used to be a single global header file that everything was supposed to #include, and in turn #included half the world). I removed home-grown atomics, since C++11 includes its own in the standard. I removed DTrace probes that nobody used. I removed bad names. I removed the custom bool type (yes, seriously, MySQL had its own type for bool, which caused rather subtle bugs). I removed PAD SPACE behavior from the default collation, which enabled a few important optimizations (and NO PAD makes so much more sense in a Unicode world). I removed lots of internal header files from the default installation. I removed the home-grown TLS system (which sped up everything by a few percent). I removed the home-grown quicksort code. I removed radixsort, which was only a win because the old home-grown quicksort code was so slow. I removed the home-grown hash table HASH which was not type-safe, slower than the C++11 unordered_map and fairly buggy. I removed the dreaded query cache! (I didn't remove the embedded server, which doubled the compile time and nobody ever used, but I pushed pretty hard for doing so.) I removed ambiguous includes. I removed unused code from binaries, shrinking the distribution by a few megabytes. I removed compiler flag ricing that doesn't help in 2018. I removed a home-grown printf that was buggy. I removed a lot of C legacy, since MySQL no longer needs to be bound to a world without C++. I removed Sql_alloc, a magical class you'd inherit from and get surprising memory-allocation behavior. I removed some waiting on the compiler. I removed MySQL's custom and weird coding style (well, at least its formatting). And I removed obsolete SQL modes (together with others), which as far as I know is the first time anyone's removed an SQL mode in MySQL.
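
As a quick illustration of the PAD SPACE point (my own example, not from the release notes; it assumes a MySQL 8.0 client and server with the default utf8mb4 charset): under the old PAD SPACE collations trailing spaces are ignored in comparisons, while the new default utf8mb4_0900_ai_ci is NO PAD:

$ mysql -N -e "SELECT 'abc' = 'abc ' COLLATE utf8mb4_general_ci"  # PAD SPACE: prints 1
$ mysql -N -e "SELECT 'abc' = 'abc ' COLLATE utf8mb4_0900_ai_ci"  # NO PAD: prints 0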

Of course, I didn't only remove stuff; I also added things like more efficient sorting of strings, added a new microbenchmark framework, sped up the new Unicode 9.0 collations by 20x, and probably added some legacy on my own. :-) But sometimes, it's good to remove. Remove some code today!

19 April, 2018 04:47PM

hackergotchi for Julien Danjou

Julien Danjou

Lessons from OpenStack Telemetry: Deflation

This post is the second and final episode of Lessons from OpenStack Telemetry. If you have missed the first post, you can read it here.

Splitting

At some point, the rules on adding new projects were relaxed with the Big Tent initiative, allowing us to rename ourselves the OpenStack Telemetry team and to split Ceilometer into several subprojects: Aodh (alarm evaluation functionality) and Panko (event storage). Gnocchi was able to join the OpenStack Telemetry party for its first anniversary.

Finally being able to split Ceilometer into several independent pieces of software allowed us to tackle technical debt more rapidly. We built autonomous teams for each project and gave them the same liberty they had in Ceilometer. The cost of migrating the code base to several projects was higher than we wanted it to be, but we managed to build a clear migration path nonetheless.

Gnocchi Shamble

With Gnocchi in town, we stopped all efforts on the Ceilometer storage and API and expected people to adopt Gnocchi. What we underestimated was the unwillingness of many operators to think about telemetry. They did not want to deploy anything to get telemetry features in the first place, so adding yet another component (a timeseries database) to get proper metric features was seen as a burden – and sometimes not seen at all.
Indeed, we also did not communicate enough about our vision for that transition. After two years of existence, many operators were still asking what Gnocchi was and what they needed it for. They deployed Ceilometer and its bogus storage and API and were confused about needing yet another piece of software.

It took us more than two years to deprecate the Ceilometer storage and API, which is way too long.

Deflation

In the meantime, people were leaving the OpenStack boat. Soon enough, we started to feel the shortage of human resources. Smartly, we never followed the OpenStack trend of imposing blueprints, specs, bug reports or any other process on contributors, in keeping with my list of open source best practices. This flexibility allowed us to iterate more rapidly; compared to other OpenStack projects, we were moving faster relative to the size of our contributor base.

Nonetheless, it felt like bailing out a sinking ship. Our contributors were disappearing while we were swamped with technical debt: half-baked features, unfinished migrations, legacy choices and temporary hacks. After the big party, we had to wash the dishes and sweep the floor.

Being part of OpenStack started to feel like a burden in many ways. The inertia of OpenStack being a big project was beginning to surface, so we put in a lot of effort to dodge most of its implications. Consequently, the team was perceived as an outlier, which does not help, especially when you have to interact a lot with your neighbors.

The OpenStack Foundation never understood the organization of our team. They would refer to us as "Ceilometer" whereas we had formally renamed ourselves to "Telemetry", since we encompassed four server projects and a few libraries. For example, while Gnocchi was an OpenStack project for two years before leaving, it was never listed on the project navigator maintained by the foundation.

That's a funny anecdote that demonstrates the peculiarity of our team, and how it has been both a strength and a weakness.

Competition

Nobody was trying to do what we were doing when we started Ceilometer. We filled the space of metering OpenStack. However, as the number of companies involved increased, and the friction along with it, some people grew unhappy. The race to have a seat at the table of the feast and become a Project Team Leader was strong, so some people preferred to create their own projects rather than playing the contribution game. In many areas, including ours, that divided the effort, up to the ridiculous point where several teams were doing exactly the same thing, or were trying to step on each other's toes to kill the competitors.

We spent a significant amount of time trying to bring other teams into the Telemetry scope, to unify our efforts, without much success. Some companies were not embracing open source because of their cultural differences, while others had no interest in joining a project where they would not be seen as the leader.

That fragmentation did not help us, but it also did not do much harm in the end. Most of those projects are now either dead or becoming irrelevant as the rest of the world catches up on what they were trying to do.

Epilogue

As of 2018, I'm the PTL for Telemetry – because nobody else ran. The official list of maintainers for the telemetry projects is five people: two are inactive, and three are part-time. During the latest development cycle (Queens), 48 people committed to Ceilometer, though only three developers made impactful contributions. The code size has been halved since the peak: Ceilometer is now 25k lines of code.

Panko and Aodh have no active developers. A Red Hat colleague and I are keeping the projects afloat so that they continue to work.

Gnocchi has humbly thrived since it left OpenStack. The stains from having been part of OpenStack are not yet all gone. It has a small community, but users see its real value and enjoy using it.

Those last six years have been intense, and riding the OpenStack train has been amazing. As I concluded in the first blog post of this series, most of us had a great time overall; the point of those writings is not to complain, but to reflect.

I find it fascinating to see how the evolution of a piece of software and the metamorphosis of its community are entangled. The amount of politics that a corporately-backed project of this size generates is majestic and has a prominent influence on the outcome of software development.

So, what's next? Well, as far as Ceilometer is concerned, we still have ideas and plans to keep shrinking its footprint to a minimum. We hope that one day Ceilometer will become irrelevant – at least that's what we're trying to achieve, so that we don't have anything left to maintain. That mainly depends on how the myriad of OpenStack projects choose to address their metering.

We don't see any future for either Panko or Aodh.

Gnocchi, now blooming outside of OpenStack, is still young and promising. We have plenty of ideas, and every new release brings fancy new features. Storing timeseries at large scale is exciting. Users are happy, and the ecosystem is growing.

We'll see how all of that turns out, but I'm sure there will be new lessons to learn and write about in six years!

19 April, 2018 11:55AM by Julien Danjou

Sven Hoexter

logstash 5.6.9 logstash-input-udp 3.3.1 br0ken

While the memory leak is fixed in logstash 5.6.9, the logstash-input-udp plugin is broken. A fixed plugin was released as version 3.3.2.

The code change is https://github.com/logstash-plugins/logstash-input-udp/commit/7ecec49a3f1a0f8b51c77bd9243b8cc0dbebaeb8.

The discussion is at https://discuss.elastic.co/t/udp-input-is-crashing/128485.

So instead of fiddling again with plugin updates and offline bundles, we decided to just go down the ugly road of abusing ansible and installing a file copy of the udp.rb file. This is horrible, but it works.

- name: check for br0ken logstash udp input plugin version
  shell: /usr/share/logstash/bin/logstash-plugin list --verbose logstash-input-udp | grep -E '3\.3\.1'
  register: logstash_udp_plugin_check
  ignore_errors: True
  tags:
    - "skip_ansible_lint"

- name: install fixed udp input plugin
  copy:
    src: "hacks/udp.rb"
    dest: "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-3.3.1/lib/logstash/inputs/udp.rb"
    owner: "root"
    group: "root"
    mode: 0644
  when: logstash_udp_plugin_check.rc == 0
  notify: restart logstash

Kudos to Martin and Paul for handling this one swiftly.

19 April, 2018 11:00AM

hackergotchi for Norbert Preining

Norbert Preining

Analysing Debian packages with Neo4j – Part 2 UDD and Graph DB Schema

In the first part of this series of articles on analyzing Debian packages with Neo4j we gave a short introduction to Debian and the lifetime and structure of Debian packages.

This second part first describes the Ultimate Debian Database UDD and how to map the information presented in the first part from the UDD into a graph database, by developing the database schema, that is, the set of nodes and relations, together with their attributes, from the inherent properties of Debian packages.

The next part will describe how to get the data from the UDD into Neo4j, give some sample queries, and discuss further work.

The Ultimate Debian Database UDD

The Ultimate Debian Database UDD

gathers a lot of data about various aspects of Debian in the same SQL database. It allows users to easily access and combine all these data.

Data currently being imported include: Packages and Sources files, from Debian and Ubuntu, Bugs from the Debian BTS, Popularity contest, History of uploads, History of migrations to testing, Lintian, Orphaned packages, Carnivore, Debtags, Ubuntu bugs (from Launchpad), Packages in NEW queue, DDTP translations.
— Debian Wiki

Collecting all this information, and having obviously grown over time, the database exhibits a highly de-normalized structure with ample duplication of the same information. As a consequence, the SQL code fetching data from the UDD and presenting it in a coherent interface tends to be highly convoluted.

This led us to the project of putting (parts of) the UDD into a graph database, removing all the duplication along the way and representing the connections between the entities in a natural graph way.

Developing the database schema

Recall from the first part that there are source packages and binary packages in Debian, and that the same binary package can be built in different versions from different source packages. Thus we decided to have both source and binary packages as separate entities, that is, nodes of the graph, with the two being connected via a binary relation builds.

Considering dependencies between Debian packages, we recall the fact that there are versioned and unversioned dependencies. We thus decided to again have different entities for versioned source and binary packages, and for unversioned source and binary packages.

The above considerations lead to the following sets of nodes and relations:


vsp -[:is_instance_of]-> sp
vbp -[:is_instance_of]-> bp
vsp -[:builds]-> vbp
vbp -[:next]-> vbp
vsp -[:next]-> vsp

where vsp stands for versioned source package, sp for (unversioned) source package, and analogously for binary packages. The versioned variants carry, besides the name attribute, also a version attribute in the node.

The relations are is_instance_of between versioned and unversioned packages, builds between versioned source and versioned binary packages, and next that defines an order on the versions.
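
To make this concrete, here is a rough sketch (my own, not from the article, with made-up package data and credentials) of how such nodes and relations could be created via cypher-shell:

cat <<'EOF' | cypher-shell -u neo4j -p secret
CREATE (sp:sp   {name: 'luasseq'}),
       (bp:bp   {name: 'luasseq'}),
       (vsp:vsp {name: 'luasseq', version: '2.4-1'}),
       (vbp:vbp {name: 'luasseq', version: '2.4-1'}),
       (vsp)-[:is_instance_of]->(sp),
       (vbp)-[:is_instance_of]->(bp),
       (vsp)-[:builds]->(vbp);
EOF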

As an example, consider the simple graph for the binary package luasseq, which was originally built from the source package luasseq but was then taken over into the TeX Live packages and built from a different source.

Next we want to register suites, that is associating which package has been included in which release of Debian. Thus we add a new node type suite and a new relation contains which connects suites and versioned binary packages vbp:

suite -[:contains]-> vbp

Nodes of type suite contain only one attribute, name. We could add release dates etc., but refrained from doing so for now. Adding the suites to the above diagram we obtain the following:

Next we add maintainers. The new node type mnt has two attributes: name and email. Here it would be nice to add alternative email addresses as well as alternative spellings of the name, something that is quite common. We add a relation maintains to versioned source and binary packages only since, as we have seen, the maintainership can change over the history of a package:

mnt -[:maintains]-> vbp
mnt -[:maintains]-> vsp

This leads us to the following graph:

This concludes the first (easy) part with basic node types and relations. We now turn to the more complicated part to represent dependencies between packages in the graph.

Representing dependencies

For simple dependencies (versioned or unversioned, but no alternatives) we represent the dependency with a relation carrying two attributes, reltype and relversion, which specify the relation type (<<, <=, ==, >=, >>) and the version as a string. For unversioned relations we use reltype=none and relversion=1:


vbp -[:depends reltype: TYPE, relversion: VERS]-> bp
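
For instance, a single versioned dependency might be instantiated like this (a hypothetical sketch with invented versions, reusing the cypher-shell setup above):

MATCH (v:vbp {name: 'pmx', version: '2.7.0-6'}), (b:bp {name: 'musixtex'})
CREATE (v)-[:depends {reltype: '>=', relversion: '1:0.98-1'}]->(b);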

Adding all the dependencies to the above graph, we obtain the following graph:

Our last step is dealing with alternative dependencies. Recall from the first blog that a relation between two Debian packages can have alternative targets like in

Depends: musixtex (>= 1:0.98-1) | texlive-music
which means that either musixtex or texlive-music needs to be installed to satisfy this dependency.

We treat this kind of dependency by introducing a new node type altdep and a new relation is_satisfied_by between altdep nodes and versioned or unversioned binary packages (vbp, bp).

The following slice of our graph shows the binary package pmx, which has alternative dependencies as above:
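
In the same sketched Cypher notation (again with invented versions, and with my own guess as to where the version constraints attach), this could be encoded as:

MATCH (v:vbp {name: 'pmx', version: '2.7.0-6'}),
      (m:bp {name: 'musixtex'}), (t:bp {name: 'texlive-music'})
CREATE (a:altdep {name: 'musixtex | texlive-music'}),
       (v)-[:depends {reltype: 'none', relversion: '1'}]->(a),
       (a)-[:is_satisfied_by {reltype: '>=', relversion: '1:0.98-1'}]->(m),
       (a)-[:is_satisfied_by {reltype: 'none', relversion: '1'}]->(t);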

Summary of nodes, relations, and attributes

Let us summarize the node types, relations types, and respective attributes we have deduced from the requirements and data in the Debian packages:

Nodes and attributes

  • mnt: name, email
  • bp, sp, suite, altdep: name
  • vbp, vsp: name, version

Relations and attributes

  • breaks, build_conflicts, build_conflicts_indep, build_depends, build_depends_indep, conflicts, depends, enhances, is_satisfied_by, pre_depends, provides, recommends, replaces, suggests
    Attributes: reltype, relversion
  • builds, contains, is_instance_of, maintains, next: no attributes

Next is …

Now that we have the graph database schema set up, we need to pull data from the UDD and put them into the Graph database. This will be discussed in the next entry of this series.

19 April, 2018 05:21AM by Norbert Preining

Hideki Yamane

Improve debootstrap time a bit, without local mirror

I've introduced two features to improve debootstrap times: auto proxy detection via squid-deb-proxy-client (by Michael Vogt) and cache directory support. They reduce the time needed to create a chroot environment without a huge local mirror.

Let's create chroot environment without any new features.
$ time sudo debootstrap sid sid-chroot
I: Target architecture can be executed     
I: Retrieving InRelease                     
I: Checking Release signature           
I: Valid Release signature (key id 126C0D24BD8A2942CC7DF8AC7638D0442B90D010)
I: Retrieving Packages             
I: Validating Packages             
(snip)
I: Base system installed successfully. 
real    8m27.624s
user    1m52.732s
sys     0m10.786s

Then, let's try the --cache-dir option.
$ time sudo debootstrap --cache-dir=/home/henrich/tmp/cache sid sid-chroot
E: /home/henrich/tmp/cache: No such directory

Yes, we should create the cache directory first.
$ mkdir ~/tmp/cache
Let's go.
$ time sudo debootstrap --cache-dir=/home/henrich/tmp/cache sid sid-chroot
I: Target architecture can be executed
I: Retrieving InRelease             
I: Checking Release signature         
I: Valid Release signature (key id 126C0D24BD8A2942CC7DF8AC7638D0442B90D010)
I: Retrieving Packages                 
I: Validating Packages                   
(snip)
I: Base system installed successfully. 
real    2m10.180s
user    1m47.428s
sys     0m8.196s
It cuts about 6 minutes! (Of course, it depends on the mirror you choose.) Then, let's try the proxy feature.
$ sudo apt install squid-deb-proxy-client
(snip)
$ time sudo debootstrap sid sid-chroot
Using auto-detected proxy: http://192.168.10.13:8000/
I: Target architecture can be executed     
I: Retrieving InRelease                     
I: Checking Release signature           
I: Valid Release signature (key id 126C0D24BD8A2942CC7DF8AC7638D0442B90D010)
I: Retrieving Packages             
I: Validating Packages             
(snip)
I: Configuring systemd...
I: Base system installed successfully.
Can you see the words "Using auto-detected proxy: http://192.168.10.13:8000/"? It detects the package proxy and uses it. And the result is
real    2m15.995s
user    1m49.737s
sys     0m8.778s

Conclusion: if you already run squid-deb-proxy on some machine in your local network, then install squid-deb-proxy-client and debootstrap will automatically use it; alternatively, you can use the --cache-dir option to speed up creating chroot environments via debootstrap. Especially if you don't have good network connectivity, both features will help without much effort.
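
The two features can also be combined; a sketch based on the commands above (paths as in the earlier examples):
$ sudo apt install squid-deb-proxy-client
$ time sudo debootstrap --cache-dir=$HOME/tmp/cache sid sid-chroot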


Oh, and one more thing... Thomas Lange has proposed patches to improve debootstrap that make it much faster. If you're interested, please look into them.

19 April, 2018 05:14AM by Hideki Yamane (noreply@blogger.com)

hackergotchi for Steve Kemp

Steve Kemp

A filesystem for known_hosts

The other day I had an idea that wouldn't go away, a filesystem that exported the contents of ~/.ssh/known_hosts.

I can't think of a single useful use for it, beyond simple shell-scripting, and yet I couldn't resist.

 $ go get -u github.com/skx/knownfs
 $ go install github.com/skx/knownfs

Now make it work:

 $ mkdir ~/knownfs
 $ knownfs ~/knownfs

Beneath our mount-point we can expect one directory for each known host. So we'll see entries:

 ~/knownfs $ ls | grep \.vpn
 builder.vpn
 deagol.vpn
 master.vpn
 www.vpn

 ~/knownfs $ ls | grep steve
 blog.steve.fi
 builder.steve.org.uk
 git.steve.org.uk
 mail.steve.org.uk
 master.steve.org.uk
 scatha.steve.fi
 www.steve.fi
 www.steve.org.uk

The host-specific entries will each contain a single file, fingerprint, with the fingerprint of the remote host:

 ~/knownfs $ cd www.steve.fi
 ~/knownfs/www.steve.fi $ ls
 fingerprint
 frodo ~/knownfs/www.steve.fi $ cat fingerprint
 98:85:30:f9:f4:39:09:f7:06:e6:73:24:88:4a:2c:01

I've used it in a few shell-loops to run commands against hosts matching a pattern, but beyond that I'm struggling to think of a use for it.
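
For example, something like this (a throwaway sketch, not part of knownfs):

 for h in ~/knownfs/*.steve.org.uk; do
     echo "== $(basename $h) =="
     ssh "$(basename $h)" uptime
 done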

If you like the idea, I guess, have a play with it.

It was perhaps more useful and productive than my other recent work - which involves porting an existing network-testing program from Ruby to golang, and in the process making it much more uniform and self-consistent.

The resulting network tester is pretty good, and can now notify via MQ to provide better decoupling too. The downside is of course that nobody changes network-testing solutions on a whim, and so these things are basically always in-house only.

19 April, 2018 03:45AM

hackergotchi for Shirish Agarwal

Shirish Agarwal

getting libleveldb1v5 fixed

Please treat this as a child's fantasy until the information is approved or corrected by a DD/DM, who obviously have much more info and experience in dealing with what is below.

It had been quite a few years since I last played Minetest, a voxel-based game similar to, and yet different from, its more famous sibling Minecraft.

I wanted to install and play it, but found that one of the libraries it needs is libleveldb1v5, a fast key-value storage library, which according to #877773 has a bug of grave severity filed against it because of missing info on the soname bump.

I saw that somebody had also reported it upstream, and the bug has been fixed, along with some more optimizations to the library. From its description the library reminded me a lot of sqlite, which has almost the same feature set (and is used by Mozilla for bookmarks and password management, if I'm not mistaken).

I was wondering, if this had been fixed quite some time back, why the maintainer didn't put the fixed version into sid and then testing. I realized it might be because the new version has a soname bump, which means it would need to be transitioned, probably with proper Breaks and everything.

A quick check via

$ apt-rdepends -r libleveldb1v5 | wc -l
Reading package lists... Done
Building dependency tree
Reading state information... Done
195

revealed that almost 190 packages will be directly or indirectly affected by the transition. I then tried to find where the VCS is located by doing:

$ apt-cache showsrc libleveldb1v5 | grep Vcs-Git
Vcs-Git: git://anonscm.debian.org/collab-maint/leveldb.git
Vcs-Git: git://anonscm.debian.org/collab-maint/leveldb.git

Then I cloned the repo to my system to see if the maintainer had made any recent changes, and saw:


$ git log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr)' --abbrev-commit | head -15
* 7465515 - (HEAD -> master, tag: debian/1.20-2, origin/master, origin/HEAD) Packaging cleanup (4 months ago)
* f85b876 - Remove libleveldb-dbg package and use the auto-generated one (4 months ago)
* acac71f - Update Standards-Version to 4.1.2 (4 months ago)
* e281654 - Update debhelper level to 11 (4 months ago)
* df015eb - Don't run self-test parallel (4 months ago)
* ba81cc9 - (tag: debian/1.20-1) Update debhelper level to 10 (7 months ago)
* cb84f26 - Update Standards-Version to 4.1.0 (7 months ago)
* be0ef22 - Convert markdown documentation to HTML (7 months ago)
* ab8faa7 - Start 1.20-1 changelog (7 months ago)
* 03641f7 - Updated version 1.20 from 'upstream/1.20' (7 months ago)
|\
| * 59c75ca - (tag: upstream/1.20, origin/upstream) New upstream version 1.20 (7 months ago)
* | a21bcbc - (tag: debian/1.19-2) Add the missing ReadMemoryBarrier and WriteMemoryBarrier functions for mips* (1 year, 5 months ago)
* | 70c6e38 - Add myself to debian/copyright (1 year, 5 months ago)
* | 1ba7231 - Update source URL (1 year, 5 months ago)

There is probably a much simpler way to get the same output but for now that would have to suffice.

Anyway, there are many variations of the code I used, with git log --pretty, git log --decorate and so on. Maybe one of those could give the same output; it would need the time diff as shared above.
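
For example, the following gives roughly the same fields (hash, refs, subject, relative date) without --graph, the colour codes, or the pipe through head:

$ git log -15 --pretty='%h -%d %s (%cr)'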

Trivia – I am usually more interested in commit messages and the times when commits were made, and I know enough git to find the author of a particular commit, even if only the abbreviated commit hash is there, and to thank her(im) for the work done on that package or on a particular commit which addresses some annoying bug that I had. /Trivia

Although the best I have hankered for is some sort of visualization tool for projects that I like –

something like an Andrews plot or a C-chart for visualization purposes, but to date I haven't found anything which would render a repository into those visuals straight away. Maybe a feature for a future git version, who knows 🙂

I know that in itself is a Pandora's box, as some people might just like a visualization of when releases of an upstream project were made, while others would enjoy and be fascinated to see the amount of time between each commit on a project. I have seen quite a few projects rise, wane and rise again, and having such visualizations may possibly help in getting people more involved with a project/library.

Andrews plot example - Wikipedia - CC-0

All the commits for the said library were done by the maintainer, Laszlo Boszormenyi, so it seems that the maintainer is interested in maintaining it. At least the last 10-12 commit messages, going back almost 1.5 years, show that he was active until at least 4 months ago, which brings me to another one of my pet issues.

There aren’t any ways to figure out how recently a DD or DM committed on Debian somewhere. People usually try the MIA team (Missing in Action) and many a times you feel you are taking the team’s time especially when it turns out to be a false positive. If users had more tools than probably MIA’s workload would be much lesser than before.

The only other way is to look at all the packages a particular DD/DM maintains, and if you are lucky then s(he) has made a release of a package or something else that you can look into, so you know for certain that the person is active.

The longer way is to download all the VCS repositories of a DD/DM, cycle through all of them using something like the above to see when the last commit was made on each of her(is) repos, and then come to a conclusion one way or the other. If s(he) is really MIA, then tell the MIA team so they can try to connect with the person concerned, and if s(he) doesn't respond in a reasonable time-frame, orphan the packages.

If a DD/DM has not committed anything for more than a year or two on any of her(is) projects, I guess it's reasonable to expect that the person concerned is MIA.

Anyway, it would be nice if the present maintainer were able to get the new release out, so that the roughly 190 dependent packages could also work. While I was churning this over in my head, I thought: why couldn't the DDs have some sort of CI infrastructure which might automate things a bit and make life somewhat easier?

I have seen the Debian Travis CI instance, but I know that's limited to upstream projects hosted on GitHub.

For those who might not know, Travis CI is one of many such solutions. They are continuous integration services, and there are quite a few of them.

What they do is try to build the project/application/library etc. after each and every commit, taking into account any parameters told/programmed into them. There may be times when upstream makes an incompatible change or some mistake while committing; because the application is built automatically after each commit, a failing build forces the developer to see where they messed up. In the end you have a slightly better application, as at least the obvious bugs are ironed out.

I do remember reading about gitlab-ci somewhere, maybe in the thread where DDs were discussing the various alternatives to alioth, or somewhere else. I don't know whether it would be just a matter of turning it on or whether that part is still not open-sourced yet; no idea.

If that happens, it would probably save the DDs/DMs some computational time, apart from letting them know whether things are going well or not.

I know GitLab had shared (paraphrasing here) that they may make some of the things more open source if Debian were to adopt the product. Now that Debian has, I, and I guess most of the community, would be hoping they do, as a lot of hard work and tears have gone into getting things ported from alioth to salsa, especially in the last month or so.

I do know that we have the autobuilder network, but from what I understand it's for a slightly different use-case: it is more to see if the package builds on all the 10-11 official architectures, and maybe some of the unofficial architectures.

While I was reading about it, I was unable to find out whether, just as people all around the world run mirrors (full or partial, depending on the resources they have and the kind of pickup they are seeing), people can be part of the autobuilder network to give additional computational power to it. The name does say 'autobuilder network', so maybe that possibility exists, maybe it does not.

I did consult the documentation on the topic, and it seems it's a fair bit of work; see the workflow shared in the wiki for transitions.

After reading that, you really wonder at the patience of the people who slog through all this.

I did try to contact him via the bug mentioned, but he hasn't got back to me; perhaps he's busy IRL.

https://bugs.debian.org/cgi-bin/pkgreport.cgi?package=release.debian.org#_0_17_4

Till later.

Note – I have not talked about */debian/control, */debian/changelog.Debian or */debian/changelog, or any of the other files, because once those are made they probably just need to be fiddled with a bit. The control file will probably list newer versions of dependencies and may or may not have newer build dependencies. changelog.Debian will document the changes the DD/DM had to make in order for the binary to be built successfully and enter the archive, and the changelog will just document the point up to which upstream's work was taken.

19 April, 2018 01:23AM by shirishag75

Jeremy Bicha

gksu removed from Ubuntu

Today, gksu was removed from Ubuntu 18.04, four weeks after it was removed from Debian.

19 April, 2018 12:49AM by Jeremy Bicha

April 18, 2018

hackergotchi for Vincent Bernat

Vincent Bernat

Self-hosted videos with HLS

Note

This article was first published on Exoscale blog with some minor modifications.

Hosting videos on YouTube is convenient for several reasons: pretty good player, free bandwidth, mobile-friendly, network effect and, at your discretion, no ads.1 On the other hand, this is one of the least privacy-friendly solutions. Most other providers share the same characteristics—except the ability to disable ads for free.

With the <video> tag, self-hosting a video is simple:2

<video controls>
  <source src="../videos/big_buck_bunny.webm" type="video/webm">
  <source src="../videos/big_buck_bunny.mp4" type="video/mp4">
</video>

However, while it is possible to provide different videos depending on the screen width, adapting the video to the available bandwidth is trickier. There are two solutions:

  • HLS (HTTP Live Streaming), and
  • MPEG-DASH.

They are both adaptive bitrate streaming protocols: the video is sliced in small segments and made available at a variety of different bitrates. Depending on current network conditions, the player automatically selects the appropriate bitrate to download the next segment.

HLS was initially implemented by Apple but is now also supported natively by Microsoft Edge and Chrome on Android. hls.js is a JavaScript library bringing HLS support to other browsers. MPEG-DASH is technically superior (codec-agnostic) but only works through a JavaScript library, like dash.js. In both cases, support of the Media Source Extensions is needed when native support is absent. Safari on iOS doesn’t have this feature and cannot use MPEG-DASH. Consequently, the most compatible solution is currently HLS.

Encoding🔗

To serve HLS videos, you need three kinds of files:

  • the media segments (encoded with different bitrates/resolutions),
  • a media playlist for each variant, listing the media segments, and
  • a master playlist, listing the media playlists.

Media segments can come in two formats:

  • MPEG-2 Transport Streams (TS), or
  • Fragmented MP4.

Fragmented MP4 media segments are supported since iOS 10. They are a bit more efficient and can be reused to serve the same content as MPEG-DASH (only the playlists are different). Also, they can be served from the same file with range requests. However, if you want to target older versions of iOS, you need to stick with MPEG-2 TS.3

FFmpeg is able to convert a video to media segments and generate the associated media playlists. Peer5’s documentation explains the suitable commands. I have put together a handy (Python 3.6) script, video2hls, stitching together all the steps. After executing it on your target video, you get a directory containing:

  • media segments for each resolution (1080p_1_001.ts, 720p_2_001.ts, …)
  • media playlists for each resolution (1080p_1.m3u8, 720p_2.m3u8, …)
  • master playlist (index.m3u8)
  • progressive (streamable) MP4 version of your video (progressive.mp4)
  • poster (poster.jpg)
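
A minimal invocation just takes the target video; treat this as a sketch and check the script's --help for the actual interface:

$ ./video2hls --debug ../2018-self-hosted-videos.mp4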

The script accepts a lot of options for customization. Use the --help flag to discover them. Run it with --debug to get the ffmpeg commands executed with an explanation for each flag. For example, the poster is built with this command:

ffmpeg \
  `# seek to the given position (5%)` \
   -ss 4 \
  `# load input file` \
   -i ../2018-self-hosted-videos.mp4 \
  `# take only one frame` \
   -frames:v 1 \
  `# filter to select an I-frame and scale` \
   -vf 'select=eq(pict_type\,I),scale=1280:720' \
  `# request a JPEG quality ~ 10` \
   -qscale:v 28 \
  `# output file` \
   poster.jpg

Serving🔗

So, we got a bunch of static files we can upload anywhere. Yet two details are important:

  • When serving from another domain, CORS needs to be configured to allow GET requests. Adding Access-Control-Allow-Origin: * to response headers is enough.4
  • Some clients may be picky about the MIME types. Ensure files are served with the ones in the table below.
Kind Extension MIME type
Playlists .m3u8 application/vnd.apple.mpegurl
MPEG2-TS segments .ts video/mp2t
fMP4 segments .mp4 video/mp4
Progressive MP4 .mp4 video/mp4
Poster .jpg image/jpeg

Let’s host our files on Exoscale’s Object Storage which is compatible with S3 and located in Switzerland. As an example, the Caminandes 3: Llamigos video is about 213 MiB (five sizes for HLS and one progressive MP4). It would cost us less than 0.01 € per month for storage and 1.42 € for bandwidth if 1000 people watch the 1080p version from beginning to end—unlikely.5

We use s3cmd to upload files. First, you need to recover your API credentials from the portal and put them in ~/.s3cfg:

[default]
host_base = sos-ch-dk-2.exo.io
host_bucket = %(bucket)s.sos-ch-dk-2.exo.io
access_key = EXO.....
secret_key = ....
use_https = True
bucket_location = ch-dk-2

The second step is to create a bucket:

$ s3cmd mb s3://hls-videos
Bucket 's3://hls-videos/' created

You need to configure the CORS policy for this bucket. First, define the policy in a cors.xml file (you may want to restrict the allowed origin):

<CORSConfiguration>
 <CORSRule>
   <AllowedOrigin>*</AllowedOrigin>
   <AllowedMethod>GET</AllowedMethod>
 </CORSRule>
</CORSConfiguration>

Then, apply it to the bucket:

$ s3cmd setcors cors.xml s3://hls-videos

The last step is to copy the static files. Playlists are served compressed to save a bit of bandwidth. For each video, inside the directory containing all the generated files, use the following command:

while read extension mime gz; do
  [ -z "$gz" ] || {
    # gzip compression (if not already done)
    for f in *.${extension}; do
      ! gunzip -t $f 2> /dev/null || continue
      gzip $f
      mv $f.gz $f
    done
  }
  s3cmd --no-preserve -F -P \
        ${gz:+--add-header=Content-Encoding:gzip} \
        --mime-type=${mime} \
        --encoding=UTF-8 \
        --exclude=* --include=*.${extension} \
        --delete-removed \
    sync . s3://hls-videos/video1/
done <<EOF
m3u8  application/vnd.apple.mpegurl true
jpg   image/jpeg
mp4   video/mp4
ts    video/mp2t
EOF

The files are now available at https://hls-videos.sos-ch-dk-2.exo.io/video1/.
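
As a quick sanity check (illustrative, not part of the original workflow), the response headers should show the configured MIME type for playlists as well as the gzip encoding:

$ curl -sI https://hls-videos.sos-ch-dk-2.exo.io/video1/index.m3u8 | \
    grep -iE '^content-(type|encoding)'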

HTML🔗

We can insert our video in a document with the following markup:

<video poster="https://hls-videos.sos-ch-dk-2.exo.io/video1/poster.jpg"
       controls preload="none">
  <source src="https://hls-videos.sos-ch-dk-2.exo.io/video1/index.m3u8"
          type="application/vnd.apple.mpegurl">
  <source src="https://hls-videos.sos-ch-dk-2.exo.io/video1/progressive.mp4"
          type='video/mp4; codecs="avc1.4d401f, mp4a.40.2"'>
</video>

Browsers with native support use the HLS version while others would fall back to the progressive MP4 version. However, with the help of hls.js, we can ensure most browsers benefit from the HLS version too:

<script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
<script>
    if(Hls.isSupported()) {
        var selector = "video source[type='application/vnd.apple.mpegurl']",
            videoSources = document.querySelectorAll(selector);
        videoSources.forEach(function(videoSource) {
            // Keep the playlist URL around: the clone below drops the sources.
            var m3u8 = videoSource.src,
                once = false;

            // Clone the video to remove any source
            var oldVideo = videoSource.parentNode,
                newVideo = oldVideo.cloneNode(false);

            // Replace video tag with our clone.
            oldVideo.parentNode.replaceChild(newVideo, oldVideo);

            // On play, initialize hls.js, once.
            newVideo.addEventListener('play',function() {
                if (once) return;
                once = true;

                var hls = new Hls({ capLevelToPlayerSize: true });
                hls.loadSource(m3u8);
                hls.attachMedia(newVideo);
                hls.on(Hls.Events.MANIFEST_PARSED, function() {
                    newVideo.play();
                });
            }, false);
        });
    }
</script>

Here is the result, featuring Caminandes 3: Llamigos, a video created by Pablo Vasquez, produced by the Blender Foundation and released under the Creative Commons Attribution 3.0 license:

Most JavaScript attributes, methods and events work just like with a plain <video> element. For example, you can seek to an arbitrary position, like 1:00 or 2:00—but you would need to enable JavaScript to test.

The player is different from one browser to another but provides the basic needs. You can upgrade to a more advanced player, like video.js or MediaElement.js. They also handle HLS videos through hls.js.

Hosting your videos on YouTube is not unavoidable: serving them yourself while offering quality delivery is technically affordable. If bandwidth requirements are modest and the network effect not important, self-hosting makes it possible to regain control of the published content and not to turn over readers to Google. In the same spirit, PeerTube offers a video sharing platform. Decentralized and federated, it relies on BitTorrent to reduce bandwidth requirements.

Addendum🔗

Preloading🔗

In the above example, preload="none" was used for two reasons:

  • Most readers won’t play the video as it is an addon to the main content. Therefore, bandwidth is not wasted by downloading a few segments of video, at the expense of slightly increased latency on play.
  • We do not want non-native HLS clients to start downloading the non-HLS version while hls.js is loading and taking over the video. This could also be done by declaring the progressive MP4 fallback from JavaScript, but this would make the video unplayable for users without JavaScript. If preloading is important, you can remove the preload attribute from JavaScript—and not wait for the play event to initialize hls.js.

CSP🔗

Setting up CSP correctly can be quite a pain. For browsers with native HLS support, you need the following policy, in addition to your existing policy:

  • img-src https://hls-videos.sos-ch-dk-2.exo.io for the posters,
  • media-src https://hls-videos.sos-ch-dk-2.exo.io for the playlists and media segments.

With hls.js, things are more complex. Ideally, the following policy should also be applied:

  • worker-src blob: for the transmuxing web worker,
  • media-src blob: for the transmuxed segments,
  • connect-src https://hls-videos.sos-ch-dk-2.exo.io to fetch playlists and media segments from JavaScript.

However, worker-src is quite recent. The expected fallbacks are child-src (deprecated), script-src (but not everywhere) and then default-src. Therefore, for broader compatibility, you also need to append blob: to default-src as well as to script-src and child-src if you already have them. Here is an example policy—assuming the original policy was just default-src 'self' and media, XHR and workers were not needed:

HTTP/1.0 200 OK
Content-Security-Policy: 
  default-src 'self' blob:;
  img-src 'self' https://hls-videos.sos-ch-dk-2.exo.io;
  media-src blob: https://hls-videos.sos-ch-dk-2.exo.io;
  connect-src https://hls-videos.sos-ch-dk-2.exo.io;
  worker-src blob:;

  1. YouTube gives you the choice to not display ads on your videos. In advanced settings, you can unselect “Allow advertisements to be displayed alongside my videos.” Alternatively, you can also monetize your videos. ↩︎

  2. Nowadays, everything supports MP4/H.264. It usually also brings hardware acceleration, which improves battery life on mobile devices. WebM/VP9 provides a better quality at the same bitrate. ↩︎

  3. You could generate both formats and use them as variants in the master playlist. However, a limitation in hls.js prevents this option. ↩︎

  4. Use https://example.org instead of the wildcard character to restrict access to your own domain. ↩︎

  5. There is no need to host those files behind a (costly) CDN. Latency doesn’t matter much as long as you can sustain the appropriate bandwidth. ↩︎

18 April, 2018 07:19PM by Vincent Bernat

hackergotchi for Chris Lamb

Chris Lamb

Re-elected as Debian Project Leader

I have been extremely proud to have served as the Debian Project Leader since my election in early 2017. During this time I've learned a great deal about the inner workings of the Project as well as about myself. I have grown as a person thanks to all manner of new interactions and fresh experiences.

I believe it is a privilege simply to be a Debian Developer, let alone to be selected as their representative. It was therefore an even greater honour to learn that I have been re-elected by the community for another year. I profoundly and wholeheartedly thank everyone for placing their trust in me for another term.



Being the "DPL" is a hard job. It is even difficult to even communicate exactly how and any statistics somehow fail to capture it. However, I now understand the look in previous Leaders' eyes when they congratulated me on my appointment and future candidates should not nominate themselves lightly.

Indeed, I was unsure whether I would stand for re-appointment and I might not have done had it not been for some touching and encouraging words from some close confidants. They underlined to me that a year is not a long time, further counselling that I should consider myself just getting started and only now prepared to start to take on the bigger picture.



Debian itself will always face challenges but I sincerely believe that the Project remains as healthy as ever. We are uniquely cherished and remain remarkably poised to improve the free software ecosystem as a whole. Moreover, our stellar reputation for technical excellence, stability and software freedom remains highly respected and without peer. It is truly an achievement to be proud of.



I thank everyone who had the original confidence, belief and faith in me, but I offer my further sincere and humble thanks to all those who have felt they could extend this to a second term, especially with such a high turnout. I am truly excited and looking forward to the year ahead.



18 April, 2018 06:24PM

Sven Hoexter

logstash 5.6.9 released with logstash-filter-grok 4.0.3

In case you're using logstash 5.6.x from Elastic, version 5.6.9 has been released with logstash-filter-grok 4.0.3. It fixes a bad memory leak that was a cause of frequent logstash crashes since logstash 5.5.6. Reference: https://github.com/logstash-plugins/logstash-filter-grok/issues/135

I hope this is again a decent logstash 5.x release. I've heard some rumours that the 6.x versions are also a bit plagued by memory leaks. :-/

18 April, 2018 04:11PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

simple

Every now and then, for one reason or another, I am sat in front of a Linux-powered computer with the graphical user interface disabled, instead using an old-school text-only mode.

shell prompt

There's a strange, peaceful quality about these environments.

When I first started using computers in the 90s, the Text Mode was the inferior, non-multitasking system that you generally avoided unless you were trying to do something specific (like run Doom without any other programs eating up your RAM).

On a modern Linux (or BSD) machine, unless you are specifically trying to do something graphical, the power and utility of the machine is hardly diminished at all in this mode. The surface looks calm: there's nothing much visibly going on, just the steady blink of the command prompt, as if the whole machine is completely dedicated to you, and is waiting poised to do whatever you ask of it next. Yet most of the same background tasks are running as normal, doing whatever they do.

One difference, however, is the distractions. Rather like when you drive out of a city to the countryside and suddenly notice the absence of background noise, background light, etc., working at a text terminal — relative to a regular graphical desktop — can be a very calming experience.

So I decided to take a fresh look at my desktop and see whether there were unwelcome distractions. For some time now I've been using a flat background colour to avoid visual clutter. After some thought I realised that most of the time I didn't need to see what was in GNOME3's taskbar. I found and installed this hide-top-bar extension and now it's tucked away unless I mouse up to the top. Now that it's out of the way by default, I actually put more information into it: the full date in the time display; and (via another extension, TopIcons Plus) the various noisy icons that apps like Dropbox, OpenBox, VLC, etc. provide.

GNOME 3 Desktop

There's still some work to do, notably in my browser (Firefox), but I think this is a good start.

18 April, 2018 03:47PM

hackergotchi for Laura Arjona Reina

Laura Arjona Reina

Kubb 2018 season has just begun

Since last year I have been playing kubb with my son. It's a sport/game of marksmanship and patience. It's a quite inclusive game, and it's played outside, on a grass or sand field.

It happens that the Spanish kubb association is based in the town where I live (even in my neighbourhood!), so several family gatherings with tournaments happen in the parks near my house. Last year we attended for the first time and learned how to play, and since then we have participated in 2 or 3 more events.

As kubb is played in the open air, the season starts in March/April, when the weather is good enough to have a nice morning in the park. I was surprised that, for such a minority game, about 50-100 people gather at each local tournament, grouped in teams of any kind: individuals, couples or teams of up to 6 people, mothers and daughters, kids-only teams, teams formed by people of 3 different generations… as strength or speed (or even experience) are not relevant to winning this game, almost anybody can play with anybody.

image

Enjoying kubb also makes me think about how communities around a non-mainstream topic are formed and maintained, and how to foster diversity and good relationships among participants. I've noted down some ideas about what I think the kubb association does well:

  • No matter how big or small you are, always take possible newcomers into account: setting aside a slot at the start of the event to welcome them and explain “how the day will work” makes those newcomers feel less stressed.
  • Designing events where the whole family can participate (or at least “be together”, not only “events with childcare”), without making it mandatory that all of them participate, helps people get involved for the longer term.
  • The format of the event has to be kept simple to keep the organisers from burning out. If the organisers are so overwhelmed taking care of things that they cannot taste the result of their work, that means the organisation team should grow and balance the load.
  • Having a “break” during the year so everybody can rest and do other things also helps people feel more motivated when the next season/event starts.

Thinking about kubb, particularly together with (and in contrast to) the other sport that my kid plays (football), I find similarities and contrasts with another “couple” of activities that we also experience in our family: the “free software way of life” versus the “mainstream use” of computers/devices nowadays. It's good to know both (not to live apart from the world in our warm bubble), and it's good to have the humble, but creative, more human-focused and good-values-loaded one as the big reference for the type of future that we want to live in and that we build every day with our small actions.

Comments?

You can comment on this post using this pump.io thread.

18 April, 2018 07:40AM by larjona

hackergotchi for Norbert Preining

Norbert Preining

TeX Live 2018 for Debian

TeX Live 2018 has hit Debian/unstable today. The packages are based on what will be (most likely, barring any late disasters) on the TeX Live DVD, which is going to press this week. This brings the newest and shiniest version of TeX Live to Debian.

The packages that have been uploaded are:

The changes listed in the TeX Live documentation that are relevant for Debian are:

  • Kpathsea: Case-insensitive filename matching now done by default in non-system directories; set texmf.cnf or environment variable texmf_casefold_search to 0 to disable. Full details in the Kpathsea manual.
  • epTEX, eupTEX: New primitive \epTeXversion.
  • LuaTEX: Preparation for moving to Lua 5.3 in 2019: a binary luatex53 is available on most platforms, but must be renamed to luatex to be effective. Or use the ConTEXt Garden files; more information there.
  • MetaPost: Fixes for wrong path directions, TFM and PNG output.
  • pdfTEX: Allow encoding vectors for bitmap fonts; current directory not hashed into PDF ID; bug fixes for \pdfprimitive and related.
  • XeTEX: Support /Rotate in PDF image inclusion; exit nonzero if the output driver fails; various obscure UTF-8 and other primitive fixes.
  • tlmgr: new front-ends tlshell (Tcl/Tk) and tlcockpit (Java); JSON output; uninstall now a synonym for remove; new action/option print-platform-info.

And above all, the most important change: We switched to CMSS, a font designed by DEK, for our logo and banners 😉

Enjoy.

18 April, 2018 01:33AM by Norbert Preining

April 17, 2018

hackergotchi for Alexander Wirt

Alexander Wirt

alioth deprecation - next steps

As you should be aware, alioth.debian.org will be decommissioned with the EOL of wheezy, which is at the end of May. The replacement for the main part of alioth, git, is alive and out of beta, you know it as salsa.debian.org. If you did not move your git repository yet, hurry up, time is running out.
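
For a simple repository, the move can be as small as the following sketch (the alioth and salsa paths are placeholders; the salsa project must exist first):

git clone https://anonscm.debian.org/git/collab-maint/foo.git
cd foo
git remote set-url origin git@salsa.debian.org:debian/foo.git
git push origin --all && git push origin --tags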

The other important service from the alioth set, lists, moved to a new host and is now live at https://alioth-lists.debian.net with the lists which opted into migration. All public list archives moved over too and will continue to exist under the old URL.

decommissioning timeline

2018-05-01 DISABLE registration of new users on alioth. Until an improved SSO (GSOC project) is ready, new user registrations needed for SSO services will be handled manually. More details on this will follow in a separate announcement.
2018-05-10 - 2018-05-13 darcs, bzr and mercurial repositories will be exported as tarballs and made available read-only from a new archive host; details on that will follow.
2018-05-17 - 2018-05-18 During the Mini-DebConf Hamburg any existing cron jobs will be turned off, and websites still on alioth will be disabled.
2018-05-31 All remaining repositories (cvs, svn and git) will be archived similarly to the ones above. The host moszumanska, the home of alioth, will go offline!

17 April, 2018 10:19PM

hackergotchi for Norbert Preining

Norbert Preining

Specification and Verification of Software with CafeOBJ – Part 1 – Introducing CafeOBJ


Software bugs are everywhere – the feared Blue Screen of Death, the mobile phone rebooting at the most inconvenient moment, games crashing. Most of these bugs are not serious problems, but there are other cases that are far more serious.

While bugs will always remain, the question is how to deal with these kinds of bugs. There is an insurmountable amount of literature on this topic, but it generally falls into one of the following categories:

  • program testing: subject the program to be checked to a large set of tests, trying to exhaust all possible code paths
  • post-coding formal verification – model checking: given program code, model the behavior of the program in temporal logic and prove necessary properties
  • pre-coding specification and verification: start with a formal specification of what the program should do, and verify that the specification is correct, that is, that it satisfies desirable properties

The first two items above are extremely successful and well developed. In this blog series we want to discuss the third item, specifications and their verification.

Overview on the blog series

This blog post will introduce some general concepts about software and specifications, and introduce CafeOBJ as an algebraic specification language that is executable and thus can be used to verify the specification at the same time.

Further blog entries will introduce the CafeOBJ language in a bit more detail, go through a simple example of a cloud synchronization specification, and discuss more involved techniques and automated theorem proving using CafeOBJ.

Why should we verify specifications?

The value of formal specifications of software has been recognized since the early 80s, and formal systems have been in development since then (Z, Larch and OBJ all originate from that time). On the other hand, actual use of these techniques remained mostly within academia – engineers and developers were mostly reluctant to learn highly mathematical languages to write specifications instead of writing code.

With the growth of interactivity and the explosion of the number of communication protocols (from low-level TCP to high-level SSL) with handshakes and data exchange sequences, the need for formal verification of these protocols, especially if they guard crucial data, has been increasing steadily.

The CafeOBJ approach

CafeOBJ is a member of the OBJ family and thus uses algebraic methods to describe specifications. This is in contrast to the Z system which uses set theory and lambda calculus.

Our aims in developing the language (as well as the system) CafeOBJ can be summarized as follows:

  • provide a reasonable blend of user and machine capabilities
  • allow intuitive modeling while preserving a rigorous formal background
  • allow for various levels of modelling – from high-level to hard-core
  • do not try to fully automate everything – understanding of design and problems is necessary

We believe that we achieve this through the combination of a rigid formal background, the incorporation of order-sorted equational theory, an executable semantics via rewriting, high-level programming facilities (inheritance, templates and instantiations, …), and last but not least complete freedom to redefine the language of the specification (postfix, infix, mixfix, syntax overloading, …).

More specifically, the logical foundations are formed by the following four elements:

  • Order sorted algebras: partial order of sorts
  • Hidden algebras: co-algebraic methods, infinite objects
  • Rewriting logic: transitions as first class objects
  • Order sorted rewriting: executable semantics
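
To make these ingredients concrete, here is a minimal toy sketch of an order-sorted CafeOBJ module whose equations are directly executable by rewriting (the module is purely illustrative, not taken from any real specification):

mod! SIMPLE-NAT {
  -- order-sorted signature: Zero and NzNat are subsorts of Nat
  [ Zero NzNat < Nat ]
  op 0 : -> Zero
  op s : Nat -> NzNat
  op _+_ : Nat Nat -> Nat
  vars M N : Nat
  eq 0 + N = N .
  eq s(M) + N = s(M + N) .
}

open SIMPLE-NAT .
red s(s(0)) + s(0) .  -- rewrites to s(s(s(0))) : NzNat
close

The red command evaluates a term by rewriting with the equations; this is the executable semantics that lets specifications double as their own test bed.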

Our vision

Our vision for safety aware software development can be summarized as follows:

  • Step 1: Model and describe a system in order-sorted algebraic specification
    The domain/design engineers construct proof scores hand-in-hand with formal specification;
  • Step 2: Construct proof scores and verify the specification by rewriting
    The proof scores (CafeOBJ code) are executable instructions which, when evaluated, provide proofs of desirable properties of the specification.

This concludes the first part of this series on CafeOBJ. We will dive into the language of CafeOBJ in the next blog.

17 April, 2018 12:29AM by Norbert Preining

April 16, 2018

hackergotchi for Rapha&#235;l Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, March 2018

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In March, about 214 work hours have been dispatched among 13 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change.

The security tracker currently lists 31 packages with a known CVE and the dla-needed.txt file 26. Thanks to a few extra hours dispatched this month (accumulated backlog of a contributor), the number of open issues came back to a more usual value.

Thanks to our sponsors

New sponsors are in bold.


16 April, 2018 02:07PM by Raphaël Hertzog

hackergotchi for Holger Levsen

Holger Levsen

20180416-LTS-march

My LTS work in March

So in March I resumed contributing to LTS again, after 2 years of taking a break, due to being overwhelmed with work on Reproducible Builds... Reproducible Builds is still eating a lot of my time, but as we currently are unfunded I had to pick up some other sources of funding.

And then, due to Reproducible Builds still requiring a lot of my attention (both actual work as well as work on getting funded again) and other stuff happening in my life, I was mostly unable to find time to really dive into LTS again. So while I managed to renew my knowledge of the procedures etc., I only managed to find 1.5h of work to be done :/ Which in turn made me feel quite bad, so I also postponed writing about this until now.

So, in March I only managed to mark libcdio as no-DSA and upload samba to fix CVE-2018-1050.

On the plus side and despite the above, I'm very happy to be able to work on LTS again, because a.) I consider it interesting (to fix bugs in old packages, yes!), b.) I use LTS myself and c.) the LTS crowd is actually a nice and helpful one.

And now let's see how much LTS work I'll manage in April...!

16 April, 2018 01:01PM

April 15, 2018

Hideki Yamane

Of course it runs NetBSD


“Of course it runs NetBSD”

Note: It runs Debian squeeze chroot on NetBSD 7.1.2, with Linux emulation

15 April, 2018 10:56PM by Hideki Yamane (noreply@blogger.com)

hackergotchi for Thorsten Glaser

Thorsten Glaser

mksh on Jehanne, a guest post by Shamar

Giacomo Tesio referenced mksh(1) in his annual Jehanne report and provided a guest post (dated 2018-01-09, sorry for posting it this late only) for us on his journey on porting mksh to Jehanne, his Plan 9 derivative operating system. Read on for his story!

(read more…)

15 April, 2018 09:11PM by MirOS Developer tg (tg@mirbsd.org)

Russ Allbery

Free software log (March 2018)

I did get a few software releases out this month, although not as much as I'd planned and I still have a lot of new releases pending that are waiting for me to have a bit more free time.

control-archive got a 1.8.0 release, which catches up with accumulated changes over the past year plus and falls back to GnuPG v1 for signature processing. One of the projects that I'd like to find time for is redoing all of my scattered code for making and checking Usenet control messages.

DocKnot 1.03 adds more support for SPDX license identifiers, which I've started using in my projects, and then 1.04 was a quick bug fix release for something I broke in the test suite on Windows systems.

I also redid the Kerberos authentication plugin for INN to use modern Kerberos APIs, which cleared up some build issues when pointing at non-system Kerberos libraries.

15 April, 2018 06:21PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

#18: Adding Intel MKL easily via a simple script

Welcome to the eighteenth post in the rarely riveting R ramblings series of posts, or R4 for short.

The Intel Math Kernel Library (MKL) is a well-known high(er)-performance math library tailored for Intel CPUs, offering best-in-class numerical performance on a number of low-level operations (BLAS, LAPACK, ...). It is not open source and used to be available only under commercial or research-only licenses --- but it can now be had (still subject to license terms you should study) via apt-get (and even yum). This page describes the installation of the MKL (and other components) in detail (but stops short of the system integration aspect we show here).

Here we present one short script, discussed in detail below, to add the MKL to your Debian or Ubuntu system. Its main advantages are

  • clean standard code using package management tools;
  • additional steps to make it the system default; and
  • an option for clean removal, leaning again on the package management system.

We put the script and a README.md largely identical to this writeup into this GitHub repo where issues, comments, questions, ... should be filed.

MKL for .deb-based systems: An easy recipe

This post describes how to easily install the Intel Math Kernel Library (MKL) on a Debian or Ubuntu system. Very good basic documentation is provided by Intel at their site. The discussion here is narrower as it focusses just on the MKL.

The tl;dr version: Use this script which contains the commands described here.

First Step: Set up apt

We download the GnuPG key first and add it to the keyring:

cd /tmp
wget https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB
apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB

To add all Intel products we would run the first, commented-out command below, but here we focus just on the MKL. The website above lists the other suboptions (TBB, DAAL, MPI, ...).

## all products:
#wget https://apt.repos.intel.com/setup/intelproducts.list -O /etc/apt/sources.list.d/intelproducts.list

## just MKL
sh -c 'echo deb https://apt.repos.intel.com/mkl all main > /etc/apt/sources.list.d/intel-mkl.list'

We then update our lists of what is available in the repositories.

apt-get update

As a personal aside, I still use the awesome wajig frontend to dpkg, apt and more by Graham Williams (of rattle fame). Among other tricks, wajig keeps state and therefore "knows" what packages are new. Here, we see a lot:

edd@rob:/tmp$ wajig update
Hit:1 http://us.archive.ubuntu.com/ubuntu artful InRelease
Ign:2 http://dl.google.com/linux/chrome/deb stable InRelease
Hit:3 http://us.archive.ubuntu.com/ubuntu artful-updates InRelease
Hit:4 https://download.docker.com/linux/ubuntu artful InRelease
Hit:5 http://us.archive.ubuntu.com/ubuntu artful-backports InRelease
Ign:6 https://cloud.r-project.org/bin/linux/ubuntu artful/ InRelease
Hit:7 https://cloud.r-project.org/bin/linux/ubuntu artful/ Release
Hit:8 http://security.ubuntu.com/ubuntu artful-security InRelease
Hit:9 https://apt.repos.intel.com/mkl all InRelease
Hit:10 http://dl.google.com/linux/chrome/deb stable Release
Hit:12 https://packagecloud.io/slacktechnologies/slack/debian jessie InRelease
Reading package lists... Done
This is 367 up on the previous count with 367 new packages.
edd@rob:/tmp$ wajig new
Package                  Description
========================-===================================================
intel-mkl-gnu-f-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-gnu-f-174   Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-cluster-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-gnu-c-196      Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-cluster-c-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-core-f-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-cluster-rt-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-gnu-239        Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-openmp-l-ps-libs-32bit-jp-174 OpenMP for Intel(R) Compilers 17.0 Update 2 for Linux*
intel-mkl-doc-ps-2018    Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-pgi-rt-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-ss-tbb-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-mic-cluster-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-openmp-l-ps-libs-jp-196 OpenMP for Intel(R) Compilers 17.0 Update 4 for Linux*
intel-mkl-ps-mic-rt-174  Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-openmp-l-ps-libs-jp-239 OpenMP for Intel(R) Compilers 17.0 Update 5 for Linux*
intel-mkl-common-f-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-f95-mic-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-common-64bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-f95-common-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-psxe-common-2018.2-046 Intel(R) Parallel Studio XE 2018 Update 2 for Linux*
intel-mkl-ps-mic-cluster-rt-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-cluster-64bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-f95-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-cluster-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-gnu-rt-196     Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-tbb-libs-2018.0-128 Intel(R) Threading Building Blocks 2018 for Linux*
intel-comp-l-all-vars-196 Intel(R) Compilers 17.0 Update 4 for Linux*
intel-mkl-common-ps-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-pgi-f-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-f95-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-gnu-rt-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-openmp-18.0.0-128  OpenMP for Intel(R) Compilers 18.0 for Linux*
intel-mkl-common-c-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-core-ps-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-f95-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-f95-mic-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-common-c-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-cluster-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-doc-f-jp    Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-common-f-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-32bit-2018.1-038 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-common-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-openmp-l-all-196   OpenMP for Intel(R) Compilers 17.0 Update 4 for Linux*
intel-mkl-pgi-rt-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-common-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-comp-nomcu-vars-18.0.0-128 Intel(R) Compilers 18.0 for Linux*
intel-mkl-common-c-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-common-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-f-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-common-c-64bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-32bit-174  Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-common-ps-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-cluster-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-gnu-f-rt-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-cluster-f-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-common-c-174   Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-ss-tbb-196  Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-tbb-libs-32bit-2018.0-128 Intel(R) Threading Building Blocks 2018 for Linux*
intel-mkl-gnu-c-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-tbb-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-tbb-libs-2018.1-163 Intel(R) Threading Building Blocks 2018 Update 1 for Linux*
intel-mkl-ps-common-f-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-ss-tbb-rt-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-196            Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-pgi-196     Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-psxe-2018.2-046 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-doc-c          Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-f95-196     Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-cluster-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-tbb-libs-174       Intel(R) Threading Building Blocks 2017 Update 4 for Linux*
intel-comp-l-all-vars-174 Intel(R) Compilers 17.0 Update 2 for Linux*
intel-mkl-gnu-f-rt-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-gnu-rt-174     Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-openmp-l-ps-libs-32bit-jp-196 OpenMP for Intel(R) Compilers 17.0 Update 4 for Linux*
intel-mkl-gnu-f-rt-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-openmp-18.0.1-163  OpenMP for Intel(R) Compilers 18.0 Update 1 for Linux*
intel-mkl-ps-cluster-64bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-core-c-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-pgi-rt-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-2018.2-046     Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-gnu-rt-239     Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-gnu-rt-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-comp-l-all-vars-18.0.0-128 Intel(R) Compilers 18.0 for Linux*
intel-mkl-ps-common-jp-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-openmp-32bit-18.0.0-128 OpenMP for Intel(R) Compilers 18.0 for Linux*
intel-mkl-f95-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-core-ps-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-gnu-f-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-tbb-mic-rt-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-core-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-psxe-2018.1-038 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-gnu-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-mic-174     Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-64bit-2017.4-061 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-f95-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-mic-rt-jp-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-196        Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-psxe-common-doc-2018 Intel(R) Parallel Studio XE 2018 Update 2 for Linux*
intel-mkl-ps-tbb-mic-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-core-c-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-core-c-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-cluster-rt-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-rt-jp-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-core-c-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-ss-tbb-rt-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-core-f-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-psxe-050       Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-64bit-2018.2-046 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-tbb-rt-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-ss-tbb-rt-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-doc-f       Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-gnu-c-174      Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-f95-common-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-rt-jp-196   Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-gnu-f-rt-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-pgi-f-196   Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-tbb-libs-32bit-2018.1-163 Intel(R) Threading Building Blocks 2018 Update 1 for Linux*
intel-mkl-common-c-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-gnu-rt-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-64bit-2018.0-033 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-mic-c-239   Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-ss-tbb-239  Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-common-64bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-openmp-32bit-18.0.2-199 OpenMP for Intel(R) Compilers 18.0 Update 2 for Linux*
intel-mkl-ps-rt-jp-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-gnu-f-rt-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-common-jp-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-tbb-mic-rt-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-psxe-common-061    Intel(R) Parallel Studio XE 2017 Update 5 for Linux*
intel-mkl-gnu-rt-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-common-f-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-mic-cluster-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-common-f-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-tbb-libs-196       Intel(R) Threading Building Blocks 2017 Update 6 for Linux*
intel-mkl-cluster-rt-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-cluster-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-pgi-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-ss-tbb-rt-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-openmp-l-all-174   OpenMP for Intel(R) Compilers 17.0 Update 2 for Linux*
intel-mkl-tbb-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-pgi-c-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-64bit-2018.1-038 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-f95-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-gnu-c-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-239            Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-pgi-rt-174  Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-cluster-f-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-f95-common-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-common-f-64bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-cluster-common-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-cluster-f-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-common-jp-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-openmp-l-all-32bit-196 OpenMP for Intel(R) Compilers 17.0 Update 4 for Linux*
intel-mkl-tbb-rt-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-psxe-common-056    Intel(R) Parallel Studio XE 2017 Update 4 for Linux*
intel-mkl-32bit-2018.0-033 Intel(R) Math Kernel Library 2018 for Linux*
intel-comp-l-all-vars-18.0.2-199 Intel(R) Compilers 18.0 Update 2 for Linux*
intel-mkl-common-ps-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-core-rt-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-pgi-c-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-common-c-ps-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-gnu-f-rt-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-f95-common-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-openmp-l-all-239   OpenMP for Intel(R) Compilers 17.0 Update 5 for Linux*
intel-mkl-174            Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-f-rt-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-tbb-libs-239       Intel(R) Threading Building Blocks 2017 Update 8 for Linux*
intel-mkl-common-f-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-common-f-64bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-cluster-common-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-comp-nomcu-vars-18.0.2-199 Intel(R) Compilers 18.0 Update 2 for Linux*
intel-mkl-32bit-196      Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-gnu-f-196   Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-core-ps-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-common-196     Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-rt-174         Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-core-rt-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-common-f-64bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-cluster-rt-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-cluster-c-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-gnu-rt-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-doc         Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-rt-32bit-239   Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-openmp-l-ps-libs-174 OpenMP for Intel(R) Compilers 17.0 Update 2 for Linux*
intel-mkl-ps-cluster-rt-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-common-64bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-c-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-pgi-c-196   Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-core-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-gnu-f-239   Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-pgi-f-239   Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-gnu-f-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-core-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-common-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-pgi-c-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-rt-jp-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-common-c-64bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-gnu-f-rt-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-pgi-c-174   Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-core-f-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-32bit-239      Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-rt-jp-174   Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-mic-239     Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-tbb-libs-2018.2-199 Intel(R) Threading Building Blocks 2018 Update 2 for Linux*
intel-mkl-f95-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-openmp-l-ps-libs-239 OpenMP for Intel(R) Compilers 17.0 Update 5 for Linux*
intel-mkl-rt-239         Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-core-c-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-openmp-l-all-32bit-174 OpenMP for Intel(R) Compilers 17.0 Update 2 for Linux*
intel-mkl-ps-pgi-239     Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-gnu-f-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-gnu-f-rt-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-tbb-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-rt-32bit-196   Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-mic-196     Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-gnu-f-rt-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-32bit-239  Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-common-174  Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-common-c-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-gnu-f-rt-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-f95-common-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-mic-f-239   Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-common-196  Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-core-ps-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-cluster-64bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-64bit-2017.3-056 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-ss-tbb-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-32bit-2017.4-061 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-tbb-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-64bit-2017.2-050 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-mic-rt-196  Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-cluster-c-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-gnu-c-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-gnu-f-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-gnu-c-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-gnu-rt-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-core-rt-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-common-239     Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-2017.3-056     Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-tbb-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-pgi-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-pgi-f-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-common-c-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-tbb-rt-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-openmp-l-all-32bit-239 OpenMP for Intel(R) Compilers 17.0 Update 5 for Linux*
intel-mkl-rt-196         Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-cluster-f-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-2017.4-061     Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-common-c-239   Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-gnu-c-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-psxe-common-doc    Intel(R) Parallel Studio XE 2017 Update 5 for Linux*
intel-tbb-libs-32bit-2018.2-199 Intel(R) Threading Building Blocks 2018 Update 2 for Linux*
intel-mkl-2017.2-050     Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-tbb-mic-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-2018.1-038     Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-gnu-c-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-core-ps-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-tbb-mic-rt-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-tbb-rt-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-cluster-f-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-psxe-061       Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-ss-tbb-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-mic-rt-jp-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-common-c-ps-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-doc-jp      Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-gnu-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-core-rt-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-common-f-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-c-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-cluster-rt-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-mic-rt-239  Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-openmp-l-ps-libs-196 OpenMP for Intel(R) Compilers 17.0 Update 4 for Linux*
intel-mkl-common-c-196   Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-gnu-32bit-196  Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-openmp-18.0.2-199  OpenMP for Intel(R) Compilers 18.0 Update 2 for Linux*
intel-mkl-ps-common-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-gnu-rt-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-openmp-32bit-18.0.1-163 OpenMP for Intel(R) Compilers 18.0 Update 1 for Linux*
intel-mkl-ps-pgi-174     Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-comp-l-all-vars-239 Intel(R) Compilers 17.0 Update 5 for Linux*
intel-mkl-ps-mic-c-196   Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-core-f-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-ss-tbb-174  Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-f-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-mic-c-174   Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-mic-rt-jp-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-psxe-common-2018.0-033 Intel(R) Parallel Studio XE 2018 for Linux*
intel-mkl-ps-f95-mic-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-common-c-64bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-psxe-056       Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-gnu-rt-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-core-c-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-common-c-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-comp-l-all-vars-18.0.1-163 Intel(R) Compilers 18.0 Update 1 for Linux*
intel-mkl-psxe-2018.0-033 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-f95-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-openmp-l-ps-libs-jp-174 OpenMP for Intel(R) Compilers 17.0 Update 2 for Linux*
intel-mkl-tbb-rt-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-mic-f-174   Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-2018.0-033     Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-f95-239     Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-doc            Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-common-c-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-common-f-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-gnu-f-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-cluster-c-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-common-c-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-tbb-mic-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-core-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-sta-common-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-core-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-core-f-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-32bit-2018.2-046 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-mic-f-196   Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-gnu-c-239      Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-doc-c-jp    Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-gnu-rt-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-doc-2018       Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-pgi-c-239   Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-core-rt-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-common-239  Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-f95-common-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-gnu-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-cluster-c-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-mic-cluster-rt-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-cluster-f-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-pgi-f-174   Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-common-c-ps-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-cluster-common-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-32bit-2017.3-056 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-core-rt-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-gnu-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-eula-174       Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-ss-tbb-rt-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-pgi-rt-196  Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-gnu-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-psxe-common-2018.1-038 Intel(R) Parallel Studio XE 2018 Update 1 for Linux*
intel-mkl-pgi-f-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-core-ps-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-common-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-rt-jp-239   Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-pgi-rt-239  Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-32bit-2017.2-050 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-core-f-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-pgi-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-tbb-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-comp-nomcu-vars-18.0.1-163 Intel(R) Compilers 18.0 Update 1 for Linux*
intel-mkl-common-f-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-tbb-rt-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-common-174     Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-c-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-mic-cluster-rt-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-f95-174     Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-core-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-gnu-f-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-f95-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-mic-cluster-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-psxe-common-050    Intel(R) Parallel Studio XE 2017 Update 2 for Linux*
intel-mkl-cluster-c-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-rt-32bit-174   Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-32bit-174      Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-openmp-l-ps-libs-32bit-jp-239 OpenMP for Intel(R) Compilers 17.0 Update 5 for Linux*
intel-mkl-ps-ss-tbb-rt-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-gnu-f-rt-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-gnu-174        Intel(R) Math Kernel Library 2017 Update 2 for Linux*
edd@rob:/tmp$

Install MKL

Now that we have everything set up, installing the MKL is as simple as:

apt-get install intel-mkl-64bit-2018.2-046

This picks the 64-bit-only variant of the (currently) most recent build.

There is a slight cost: a 500 MB download of 39 packages which installs to 1.9 GB! Other than that it is easy: one command! Compare that with the days of yore when we fetched shar archives from NETLIB...

Integrate MKL

One of the key advantages of a Debian or Ubuntu system is the overall integration providing a raft of useful features. One of these is the seamless and automatic selection of alternatives: by declaring a particular set of BLAS and LAPACK libraries the default, all applications linked against this interface will use the default. Better still, users can switch between these as well.

So here we can make the MKL default for BLAS and LAPACK:

## update alternatives
update-alternatives --install /usr/lib/x86_64-linux-gnu/libblas.so     \
                    libblas.so-x86_64-linux-gnu      /opt/intel/mkl/lib/intel64/libmkl_rt.so 50
update-alternatives --install /usr/lib/x86_64-linux-gnu/libblas.so.3   \
                    libblas.so.3-x86_64-linux-gnu    /opt/intel/mkl/lib/intel64/libmkl_rt.so 50
update-alternatives --install /usr/lib/x86_64-linux-gnu/liblapack.so   \
                    liblapack.so-x86_64-linux-gnu    /opt/intel/mkl/lib/intel64/libmkl_rt.so 50
update-alternatives --install /usr/lib/x86_64-linux-gnu/liblapack.so.3 \
                    liblapack.so.3-x86_64-linux-gnu  /opt/intel/mkl/lib/intel64/libmkl_rt.so 50
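
Should you later want to switch interactively between the registered BLAS and LAPACK providers, the same mechanism offers a chooser; for example, for one of the four links just registered:

update-alternatives --config libblas.so.3-x86_64-linux-gnu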

Next, we have to tell the dynamic linker about two directories used by the MKL, and have it update its cache:

echo "/opt/intel/lib/intel64"     >  /etc/ld.so.conf.d/mkl.conf
echo "/opt/intel/mkl/lib/intel64" >> /etc/ld.so.conf.d/mkl.conf
ldconfig
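
As a quick sanity check, the linker cache should now resolve the MKL runtime:

ldconfig -p | grep mkl_rt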

Use the MKL

Now the MKL is 'known' and the default. If we start R, its sessionInfo() shows the MKL:

# Matrix products: default                            
# BLAS/LAPACK: /opt/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64_lin/libmkl_rt.so        

Benchmarks

# Vanilla r-base Rocker with default reference BLAS 
> n <- 1e3 ; X <- matrix(rnorm(n*n),n,n);  system.time(svd(X)) 
   user  system elapsed 
  2.239   0.004   2.266 
> 

# OpenBlas added to r-base Rocker
>  n <- 1e3 ; X <- matrix(rnorm(n*n),n,n);  system.time(svd(X)) 
   user  system elapsed 
  1.367   2.297   0.353 
> 

# MKL added to r-base Rocker
> n <- 1e3 ; X <- matrix(rnorm(n*n),n,n)  
> system.time(svd(X))                               
   user  system elapsed                             
  1.772   0.056   0.350                             
>  

So plain R (with reference BLAS) is slow. (Docker is used here to allow clean comparisons while not altering the outer host system; the impact of running Docker on Linux should be minimal.) Adding OpenBLAS already helps quite a bit by offering multi-core processing, and MKL does not yet improve materially over OpenBLAS. Now, this of course was not any serious benchmarking---we just ran one SVD. More to do as time permits...

Removal, if needed

Another rather nice benefit of the package management is that clean removal is also possible:

root@c9f8062fbd93:/tmp# apt-get autoremove intel-mkl-64bit-2018.2-046
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be REMOVED:
  intel-comp-l-all-vars-18.0.2-199 intel-comp-nomcu-vars-18.0.2-199 intel-mkl-64bit-2018.2-046 
  intel-mkl-cluster-2018.2-199 intel-mkl-cluster-c-2018.2-199 intel-mkl-cluster-common-2018.2-199 
  intel-mkl-cluster-f-2018.2-199 intel-mkl-cluster-rt-2018.2-199 intel-mkl-common-2018.2-199 
  intel-mkl-common-c-2018.2-199 intel-mkl-common-c-ps-2018.2-199 intel-mkl-common-f-2018.2-199 
  intel-mkl-common-ps-2018.2-199 intel-mkl-core-2018.2-199 intel-mkl-core-c-2018.2-199 
  intel-mkl-core-f-2018.2-199 intel-mkl-core-ps-2018.2-199 intel-mkl-core-rt-2018.2-199 
  intel-mkl-doc-2018 intel-mkl-doc-ps-2018 intel-mkl-f95-2018.2-199 intel-mkl-f95-common-2018.2-199 
  intel-mkl-gnu-2018.2-199 intel-mkl-gnu-c-2018.2-199 intel-mkl-gnu-f-2018.2-199 intel-mkl-gnu-f-rt-2018.2-199 
  intel-mkl-gnu-rt-2018.2-199 intel-mkl-pgi-2018.2-199 intel-mkl-pgi-c-2018.2-199 intel-mkl-pgi-f-2018.2-199 
  intel-mkl-pgi-rt-2018.2-199 intel-mkl-psxe-2018.2-046 intel-mkl-tbb-2018.2-199 intel-mkl-tbb-rt-2018.2-199 
  intel-openmp-18.0.2-199 intel-psxe-common-2018.2-046 intel-psxe-common-doc-2018 intel-tbb-libs-2018.2-199 
  intel-tbb-libs-32bit-2018.2-199 libisl15
0 upgraded, 0 newly installed, 40 to remove and 0 not upgraded.
After this operation, 1,904 kB disk space will be freed.
Do you want to continue? [Y/n] n                    
Abort.                                              
root@c9f8062fbd93:/tmp#  

where we said 'no' just to illustrate the option.

Summary

Package management systems are fabulous. Kudos to Intel for supporting apt (and also yum in case you are on an rpm-based system). We can install the MKL with just a few commands (which we regrouped in this script).

The MKL has a serious footprint, with an installed size of just under 2 GB. But for those doing extended amounts of numerical analysis, installing this library may well be worth it.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 April, 2018 01:54PM

Hideki Yamane

Update desktop components for released version

I found that the RHEL 7.5 desktop has been rebased to GNOME 3.26. I wish a Debian stable release could do such a thing; so what's the blocker for it?

15 April, 2018 01:25PM by Hideki Yamane (noreply@blogger.com)

April 14, 2018

Russ Allbery

INN 2.6.2

(As usual, Julien finished this release a bit back, and then I got busy with life stuff and hadn't gotten the announcement out. And yes, I copied and pasted this parenthetical from the last announcement. Tradition!)

In the feature department, this release adds a new syntaxchecks parameter to inn.conf that can be used to disable message ID syntax checking, better header sanitization support in mailpost, support for TLS 1.3, and support for using GnuPG v1 (which is unfortunately important for control messages and NoCeM on Usenet still).

In the bug-fix department, this release always uses the OVDB helper server with OVDB to avoid various stability problems, fixes a header checking bug in inews that was incorrectly rejecting some long headers, fixes some control command reporting in the daily status report, and hopefully fixes buffindexed on systems with a native page size larger than 16KB.

As always, thanks to Julien ÉLIE for preparing this release and doing most of the maintenance work on INN!

You can get the latest version from the official ISC download page (although that download link still points to INN 2.6.1 as of this writing) or from my personal INN pages. The latter also has links to the full changelog and the other INN documentation.

14 April, 2018 11:54PM

hackergotchi for Norbert Preining

Norbert Preining

Analysing Debian packages with Neo4j – Part 1 – Debian

Overview on the blog series

The Ultimate Debian Database UDD collects a variety of data around Debian and Ubuntu: Packages and sources, bugs, history of uploads, just to name a few.

The database scheme reveals a highly de-normalized RDB. In this on-going work we extract (some) data from UDD and represent it as a graph database.

In the following series of blog entries we will report on this work. Part 1 (this one) will give a short introduction to Debian and the lifetime and structure of Debian packages. Part 2 will develop the graph database scheme (nodes and relations) from the inherent properties of Debian packages. The final part 3 will describe how to get the data from the UDD into Neo4j, give some sample queries, and discuss further work.

This work has been presented at the Neo4j Online Meetup and a video recording of the presentation is available on YouTube.

Part 1 – Debian

Debian is an open source Linux distribution, developed mostly by volunteers. With a history of more than 20 years, Debian is one of the oldest Linux distributions. It sets itself apart from many other Linux distributions by a strict set of license rules that guarantees that everything within Debian is free according to the Debian Free Software Guidelines.

Debian has also given rise to a large set of offspring, the most widely known being Ubuntu.

Debian contains not only the underlying operating system (Linux) and the necessary tools, but also a huge set of programs and applications, currently about 50000 software packages. All of these packages come with full source code but are already pre-compiled for easy consumption.

To understand what information we have transferred into Neo4j, we need to take a look at how Debian is structured and how a package lives within this environment.

Debian releases

Debian employs release-based software management, that is, a new Debian version is released in more or less regular intervals. The current stable release is Debian stretch (Debian 9.2); it was first released in June 2017, with the latest point release on October 7th, 2017.

To prepare packages for the next stable release, they have to go through a set of suites to make sure they conform to quality assurance criteria. These suites are:

  • Development (sid): the entrance point for all packages, where the main development takes place;
  • Testing: packages that are ready to be released as the next stable release;
  • Stable: the status of the current stable release.

There are a few other suites, like experimental or those targeting security updates, but we leave their discussion out here.

Package and suite transitions

Packages have a certain life cycle within Debian. Consider the following image (by Youhei Sasaki, CC-NC-SA):

Packages and Suites (Youhei Sasaki, CC-NC-SA)

Packages are normally uploaded into the unstable suite and remain there for at least 5 days. If no release-critical bug has been reported, after these 5 days the package transitions automatically from unstable into the testing suite, which will be released as stable by the release managers at some point in the future.

Structure of Debian packages

Debian packages come as source packages and binary packages. Binary packages are available for a variety of architectures: amd64, i386, powerpc, just to name a few.

Debian developers upload source packages (and often their own architecture's binary packages), and for the other architectures auto-builders compile and package the binary packages.

Debian auto-builders (from Debian Administrator’s Handbook, GPL)

Components of a package

Debian packages are not only a set of files; they contain a lot more information. Let us list a few important fields:

  • Maintainer: the entity (person, mailing list) responsible for the package
  • Uploaders: other developers who can upload a new version of the package
  • Version: a Debian version number (see below)
  • Dependency declarations (see below)

There are many further fields, but we want to concentrate here on the fields that we are representing in the graph database.

The Maintainer and Uploaders are standard email addresses, most commonly including a name. In the case of the packages I maintain, the maintainer is set to a mailing list (debian-tex-maint AT ...) and I put myself into the Uploaders field. This way bug reports go not only to me but to the whole list – a very common pattern in Debian.

Next let us look at the version numbers: since for a specific upstream release we sometimes do several packages in Debian (to fix packaging bugs, or for different suites), the Debian version string is a bit more complicated than just the simple upstream version:

[epoch:]upstream_version[-debian_revision]

Here the upstream_version is the usual version under which a program is released. Taking for example one of the packages I maintain, asymptote: it currently has version number 2.41-4, indicating that the upstream version is 2.41 and that there have been four Debian revisions of it. A slightly more complicated example would be musixtex, which currently has the version 1:1.20.ctan20151216-4.
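
These version strings are not just a convention; dpkg can compare them directly, with the epoch taking precedence over everything else. A quick sketch (the version values here are arbitrary):

# exit status 0 means the relation holds
dpkg --compare-versions 1:1.0-1 gt 2.41-4 && echo "epoch wins"
dpkg --compare-versions 2.41-4 lt 2.41-5 && echo "later revision wins"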

Some caveats concerning source and binary packages, and versions:

  • one source package can build many different binary packages
  • the names of the source package and the binary packages are not necessarily the same (and necessarily different when building multiple binary packages)
  • binary packages of the same name (but different version) can be built from different source packages

Let us finally look at the most complicated part of the package meta-fields, the dependencies: There are two different sets of dependencies, one for source packages and one for binary packages:

  • source package relations: Build-Depends, Build-Depends-Indep, Build-Depends-Arch, Build-Conflicts, Build-Conflicts-Indep, Build-Conflicts-Arch
  • binary package relations: Depends, Pre-Depends, Recommends, Suggests, Enhances, Breaks, Conflicts

The former specify package relations during package build, while the latter specify package dependencies on the installed system.

A single package relation can take a variety of different forms providing various constraints on the relation:

  • Relation: pkg: no constraints at all
  • Relation: pkg (<< version): constraints on the version, can be strictly less, less or equal, etc
  • Relation: pkg | pkg: alternative relations
  • Relation: pkg [arch1 arch2]: constraints on the architectures

When properly registered for a package, these relations allow Debian to provide smooth upgrades between releases and guarantee functionality if a package is installed.
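
Putting several of these forms together, a hypothetical debian/control excerpt could look like this (package names and versions are made up):

# source package relations (used at build time)
Build-Depends: debhelper (>= 10), gcc-multilib [amd64 i386]
# binary package relations (used on the installed system)
Depends: libc6 (>= 2.17), default-mta | mail-transport-agent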


This concludes the short introduction to Debian and its packages. In the next blog entry we will describe the Ultimate Debian Database UDD and how to map the information presented here from the UDD into a Graph Database.

14 April, 2018 02:33PM by Norbert Preining

hackergotchi for Vasudev Kamath

Vasudev Kamath

Docker Private Registry and Self Signed Certificates

I was recently experimenting with hosting a private registry on an internal LAN for publishing private Docker images. I found out that docker pull works only with a TLS-secured registry. It is possible to run an insecure registry by editing the daemon.json file (see below), but it is better to use self-signed certificates instead.
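
For reference, the insecure-registry escape hatch mentioned above looks roughly like this in /etc/docker/daemon.json; the registry address is a placeholder:

{
    "insecure-registries": ["registry.local:5000"]
}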

Once I followed the steps and started the registry, I tried docker pull and it started complaining about the certificate not having any valid names. Yet this same certificate worked fine with browsers; of course you need to add an exception, but no other errors were encountered.

The Docker documentation does not mention any specific settings that need to be done prior to generating a self-signed certificate, so I was a bit confused at the beginning. A bit of searching turned up the following issue filed against Docker and later re-assigned to *Golang* for its method of handling x509 certificates. It appears that when a valid Subject Alternative Name is present, the Go crypto library ignores the Common Name.

From a thread on Security Stack Exchange I found the command to create a self-signed certificate containing a Subject Alternative Name. The command in the accepted answer does not work until you add the -extensions option to it, as mentioned in one of the comments. The full command is shown below.

openssl req -new -x509 -sha256 -days 365 -key domain.key \
        -subj "/C=US/ST=CA/O=Acme, Inc./CN=example.com" \
        -reqexts SAN -extensions SAN \
        -config <(cat /etc/ssl/openssl.cnf \
                  <(printf "[SAN]\nsubjectAltName=DNS:example.com,DNS:www.example.com")) \
        -out domain.crt

You would need to replace the values in -subj and under the [SAN] extension. The benefit of this command is that you need not modify the /etc/ssl/openssl.cnf file.

If you do not have a domain name for the registry and are using an IP address instead, consider replacing the [SAN] section in the above command to use IP:<ip-address> entries instead of DNS.
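
As a quick sanity check, you can verify that the SAN really ended up in the generated certificate (file names as in the command above):

$ openssl x509 -in domain.crt -noout -text | grep -A1 'Subject Alternative Name'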

Happy hacking!

14 April, 2018 02:17PM by copyninja

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Match day

We're live!

Edit: Day 1 is over, and the videos are up, although not quite cut yet. We had some issues here and there, but overall, things seem to have come out well. More fun with the playoffs tomorrow :-)

14 April, 2018 08:31AM

hackergotchi for Vasudev Kamath

Vasudev Kamath

Docker container as Development Environment

When you have a distributed team working on a project, you need to make sure that everyone uses a similar development environment. This is critical if you are working on an embedded systems project. There are multiple possibilities for this scenario:

  1. Use a common development server and provide every developer in your team an account in it.
  2. Provide description to every one how to setup their development environment and trust them they will do so.
  3. Use Docker to provide the developer with ready made environment and use build server (CI) for creating final deployment binaries.

The 1st approach used to be the most common one, but it has some drawbacks. Since it is a shared system, you have to make sure that not everyone is able to install software or modify the system, and hence have a single administrator, so that no one accidentally breaks the development environment. Other problems include being forced to use an older OS due to specific requirements of compilers etc.

The 2nd approach makes you put your trust in your developers to set up the correct development environment. Of course you need to trust your team, but everyone is a human being, and humans make mistakes.

Enter Docker

A Dockerfile is the best way to document and set up a development environment. A developer can read the Dockerfile and understand what is being done to set up the environment, and simply execute the docker build command to generate his/her development environment. Better yet, you build the image yourself and publish it in a public registry, or, if that is not possible, in a private registry, and ask developers to pull an image of the development environment.

Don't be scared by a private registry; setting one up is not a humongous task! It's just a couple of docker commands, and there is pretty good documentation available; a minimal invocation is sketched below.
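
For illustration, this is the canonical one-liner from the Docker documentation for running a local registry; the port and container name are just the defaults:

$ docker run -d -p 5000:5000 --restart=always --name registry registry:2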

While setting up a development environment, you need to make sure the last instruction in your Dockerfile executes the shell of your choice. This is because when you start a container, this last instruction is what docker actually runs, and in our case we need to provide the developer with a shell, plus all the build toolchain and libraries.

...
CMD ["/bin/bash"]

Now the developer just needs to get or build the image, start the container, and use it for their work! To summarize, the commands below are sufficient for a developer to run a fresh environment.

$ docker build -t devenv . # If they are building it

# If they are pulling it from say private registry
$ docker pull private-registry-ip/path/to/devenv

$ docker run -itd --name development devenv
$ docker attach development

When the container is started, it will execute the shell, and the subsequent attach command will attach your shell's input/output to the container. Now it can be used just like a normal shell to build the application.

Another good thing which can be done is sharing the workspace with the container, so that the container only contains the toolchain and libraries that are needed, and all version control, code editing and the like can be done on the host machine. One thing you need to make sure of is that your UID on the host and the UID of the user inside the container are the same. This can easily be done by creating a separate user in the container with the same UID as your user on the host system; a sketch follows below.
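
A rough sketch of both ideas, where the UID (1000), user name and paths are assumptions you would adapt to your setup:

# In the Dockerfile: create an unprivileged user matching the host UID
RUN useradd -m -u 1000 dev
USER dev

# On the host: bind-mount the current directory as the workspace
$ docker run -itd --name development -v "$PWD":/home/dev/workspace devenv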

Advantages

Some advantages of using docker container include

  1. They are easy to set up and save a lot of time for the team as a whole, compared to the traditional approaches.
  2. Easy to throw away and start fresh: if developers think they did something wrong with the container, they can prune it and create a fresh one based on the development environment image. This gives developers a lot of freedom to experiment.
  3. Uniformity: you can be sure that your whole team is using the same environment.

So you might ask, what about other container technologies like systemd-nspawn or lxc? Of course they can also be used in a similar fashion; in fact, before experimenting with Docker I was an avid user of systemd-nspawn containers. You might have seen my previous blog posts on systemd-nspawn too. The only reason I switched to Docker is that it is easy to set up, unlike systemd-nspawn, which needs a lot of tweaking and tuning and, besides, does not have a Dockerfile-like approach, which makes things more time consuming. So for me Docker won the war, and I shifted to using Docker more.

This entire post was based on my experiments and experience with Docker. If you feel something can be done in a better way, please feel free to write to me.

14 April, 2018 07:45AM by copyninja

hackergotchi for Shirish Agarwal

Shirish Agarwal

cleartext passwords and transparency

I had originally thought of talking about the recent autonomous car project which killed a homeless lady in Tempe, but I guess that will have to wait for another day. I saw Lars Wirzenius's blog post, which led me to change direction a bit.

So let me just jump in with Lars' blog post, where he talks about cleartext passwords. While he has surmised and shared what a security problem they are, the pity is that we come to know of this only because the people in question tacitly admitted to bad practices. How many more such bad actors there are, developers putting user credentials in cleartext, God only knows. There was even an April Fool's joke in 2014 which shared why putting passwords in cleartext is bad.

This is one lesson which web developers are neither taught nor learn. Most web development courses in India may talk about web frameworks, CSS, front-end and back-end development, and may even talk about UX, but security is something which is supposedly gained magically while you do the above things. Please note I said most, not all; a whole lot of awakening is needed in terms of safe web development practices, but that's a topic for another day and another tale. Casual interactions with course publishers suggest that most students are looking for buzzwords, and employers do not look for 'security' as a strong point either.

There have even been casual studies which shared that only 0.01 of financial crimes are reported in India. I myself am guilty of this: when a bank misappropriates something or does something stupid, my only concern is to get the transaction rectified or corrected, rather than worry about whether some small, medium or large-scale conspiracy is happening in the bank. But that malaise has too many factors to fit in this small blog post.

A few years back the EFF did a tremendous job of pursuing and getting everyday users and vendors like Mozilla and Chromium to adopt HTTPS globally, but to my knowledge many Indian websites, including some of the biggest behemoths with whom we have day-to-day dealings, keep all their user passwords in cleartext. What may or may not be a shocker to many people is that many ATMs, at least in India, don't work over HTTPS even today. Is it any wonder that skimmers are still able to cheat honest people and taxpayers?

The reasons for all of the above could range from sheer incompetence to laziness to not being regulated at all. I am not going to share anecdotes here, and I also do not have INR 100 crores, or INR 1 billion rupees (that figure will become clear in a while). But developers have casually shared that they do neither one-way encryption nor salting nor any of the other methods of securing passwords, either because financial companies don't demand it or don't know about it, even though they should know better.

I can, however, share an anecdote which resulted in a lawsuit that a media house won some time back. It isn't so much about unsafe web practices as about companies' lack of morals in pursuit of financial gain, and our (the commons') own lack of understanding of such matters.

I had to search my blog before sharing this, and it turns out I hadn't shared this anecdote before; surprise, surprise.

Since 2008, I have known of a media house called Moneylife which is run by a beautiful, very intelligent woman called Sucheta Dalal and her husband Debasis. I believe Debasis is more into the admin side of things, while Sucheta bears both the investigative and editorial responsibilities on her shoulders. While I have never met her whole team, to have the kind of breadth and depth of news you often find on moneylife.in, you do need a strong and competent team, which I guess she has.

Sucheta Dalal with the compensation cheque

Copyright – Moneylife.in

I have met her twice, and have been a fan of her work since she started reporting in the Indian Express on the frauds happening in SEBI, from where she was consequently fired for having too many ethics. Each time I met her I was dumbfounded, as you are when you meet someone you admire so much. I might have managed a flustered thank-you for the work she does, but could never muster the courage to say anything more to her face.

Anyways, fast-forward to a couple of years back: Sucheta wrote a column in Moneylife reporting that unauthorized algorithmic trading was happening at the National Stock Exchange and that some traders were profiting from it. The tip apparently came from a whistle-blower (a Singapore-based trader and hedge fund owner), and Sucheta and her team confirmed it and then printed the story. Interestingly, SEBI, which regulates how financial intermediaries (brokers, stock analysts, stock exchanges) and companies share their expansion plans or any other news, chose to keep mum, although this was happening right under its nose. Please keep in mind that this happened under the present government dispensation, which had the mottos of 'being the most transparent' and 'we will not eat and will not let corrupt people eat', to paraphrase their election sloganeering.

Before starting with the story, it is worth saying a bit about NSE. IIRC, the BSE was for a long time a monopoly for share trading. There was the Kolkata Stock Exchange too, but due to political winds in Kolkata and many other factors it couldn't keep up with changes in technology and faded from the national scene over the years.

Due to BSE’s bullish ways or being the only action in town, quite a few private and public institutions came together and formed NSE. The Harshad Mehta stock manipulation scandal probably also accelerated the formation of the institute. The goals at formation were laudable but as it happens in institutes which work and value money over everything else, it’s possible to be corroded as will be seen shortly.

NSE is in many ways a strange beast, having investors from public and private companies who supposedly counsel it, and coming under the finance ministry and SEBI (as most of its investors are government institutions, including the finance ministry). There were also talks of making NSE a publicly listed company, but I don't know what happened to that.

What has never been made public is whether NSE filed the suit on its own behalf or was persuaded to do so by the finance ministry, SEBI, or the traders who were doing the illegal trading; I guess this is something we will never know. The significance of this 'why' will become clear at the end of the blog post. AFAIK these algo traders control 40-50% of the daily trading, so they have a huge grip on the market.

I believe NSE filed the first case in the Bombay small causes court, which Moneylife won, and subsequently they even tried the Bombay High Court.

Unfortunately for them, Sucheta and her team were no cub reporters. She had years of experience working with both the Times of India and then the Indian Express, and hence had hard documentary proof which she was able to show in court, and to which, as far as I know, the opposing counsel had no answers.

To cut a long story short, NSE had to withdraw their suit and even pay damages of INR 50 lakh, or INR 5 million rupees.

There are many things about the case which I have not covered, some of which can be understood from Lokeshwarri SK's excellent article posted in the Hindu Business Line years ago, which frames many questions that remain open even today.

The reason I shared this story is pretty simple: only a very tiny fraction of people invest in the share market, I would say 1-2% of the population. Almost all of these people are highly literate and somewhat financially literate as well. If they didn't know such things were happening, then how can the common person on the street question or know whether their data is being kept safe or not? All the contracts and terms and conditions, especially those which touch population or finance data, or really anything at all, can be put under 'National Security'.

The best part, the irony, is that algorithmic trading in India is now a legal activity, and apparently was also legal in 2015 when the suit was filed. AFAIK, that change could only have been made by SEBI. The whole affair has also been covered in an article on Indian Legal Live which raises a whole host of disquieting questions. There seems to be a lot of back-dating happening, but as mere spectators we can't even talk about that.

Even the judgement focussed narrowly on some of the questions raised, as can be inferred from the article, but in the present dispensation judicial activism is on the wane.

While I can’t help in the above, I can share about a tor meetup which probably may help in some direct or indirect way,

I do hope to go there, gain as much as I can, and share whatever little I can.

14 April, 2018 06:54AM by shirishag75

April 13, 2018

Silva Arapi

Digital Born Media Carnival July 2017

As described on their website, the Digital Born Media Carnival was a gathering of hundreds of online media representatives, information explorers and digital rights enthusiasts. The event took place on 14 – 18 July in Kotor, Montenegro. I found out about it when one of the members of Open Labs Hackerspace shared the news on our forum. While torn about whether I should attend because of a very busy period at work and at the University, the whole thing sounded very interesting and intriguing, so I decided to join the group of people who were planning to go and to apply with a workshop session too. No regrets at all! This turned out to be one of the greatest events I've attended so far, and it had a great impact on what I decided to do next in my work as a hacktivist and digital rights enthusiast.


The organizers of the Carnival had announced on the website that they were looking for online media representatives, journalists, bloggers, content creators, human rights defenders, hacktivists, new media startups etc., and as a hacktivist I found myself eager to join and learn more about some topics which had intrigued me for a while. I also saw this as an opportunity to meet other people with interests similar to mine.

I applied with a workshop where I was going to introduce some simple tools for people to better preserve their privacy online. The session was accepted and I was invited to lead, together with Andrej Petrovski, the sessions of the Digital Security track, located in the sailing club “Lahor”. I held my workshop there late on Saturday morning and really enjoyed it. Most of the attendees were journalists or people without a technical background, and they showed a lot of interest, asked me many questions and shared some stories. I also received very good feedback on the workshop, and it gave me really good vibes, since this was my first time speaking on cyber security at an important event of this kind like DBMC'17.

I spent the other days of the Carnival attending different workshops and talks, meeting new people, discussing with friends and enjoying the sun. We would go to the beach in the afternoon, and we had a very cool drone photo shoot 😉

DBMC drone photo shooting – Kotor, Montenegro

This was great work by the SHARE Foundation, and hopefully there will be other events like this in the near future; I would totally recommend attending! If you are new to the topics discussed there, this is a great way to start. If you have been in the field for a while, this is the place to meet other professionals like you. If you are looking for an event which you can combine with some days of vacation while staying in touch with causes you care about, this would once again be the place to go.

13 April, 2018 08:50PM by Silva

hackergotchi for Junichi Uekawa

Junichi Uekawa

Wishing the pollen season to end.

Wishing the pollen season to end.

13 April, 2018 01:23AM by Junichi Uekawa

hackergotchi for Kees Cook

Kees Cook

security things in Linux v4.16

Previously: v4.15

Linux kernel v4.16 was released last week. I really should write these posts in advance, otherwise I get distracted by the merge window. Regardless, here are some of the security things I think are interesting:

KPTI on arm64

Will Deacon, Catalin Marinas, and several other folks brought Kernel Page Table Isolation (via CONFIG_UNMAP_KERNEL_AT_EL0) to arm64. While most ARMv8+ CPUs were not vulnerable to the primary Meltdown flaw, the Cortex-A75 does need KPTI to be safe from memory content leaks. It’s worth noting, though, that KPTI does protect other ARMv8+ CPU models from having privileged register contents exposed. So, whatever your threat model, it’s very nice to have this clean isolation between kernel and userspace page tables for all ARMv8+ CPUs.

hardened usercopy whitelisting

While whole-object bounds checking was implemented in CONFIG_HARDENED_USERCOPY already, David Windsor and I finished another part of the porting work of grsecurity’s PAX_USERCOPY protection: usercopy whitelisting. This further tightens the scope of slab allocations that can be copied to/from userspace. Now, instead of allowing all objects in slab memory to be copied, only the whitelisted areas (where a subsystem has specifically marked the memory region allowed) can be copied. For example, only the auxv array out of the larger mm_struct.

As mentioned in the first commit from the series, this reduces the scope of slab memory that could be copied out of the kernel in the face of a bug to under 15%. As can be seen below, one area of work remaining is the kmalloc regions. Those are regularly used for copying things in and out of userspace, but they’re also used for small simple allocations that aren’t meant to be exposed to userspace. Working to separate these kmalloc users needs some careful auditing.

Total Slab Memory:        48074720
Usercopyable Memory:       6367532  13.2%
    task_struct             0.2%      4480/1630720
    RAW                     0.3%       300/96000
    RAWv6                   2.1%      1408/64768
    ext4_inode_cache        3.0%    269760/8740224
    dentry                 11.1%    585984/5273856
    mm_struct              29.1%     54912/188448
    kmalloc-8             100.0%     24576/24576
    kmalloc-16            100.0%     28672/28672
    kmalloc-32            100.0%     81920/81920
    kmalloc-192           100.0%     96768/96768
    kmalloc-128           100.0%    143360/143360
    names_cache           100.0%    163840/163840
    kmalloc-64            100.0%    167936/167936
    kmalloc-256           100.0%    339968/339968
    kmalloc-512           100.0%    350720/350720
    kmalloc-96            100.0%    455616/455616
    kmalloc-8192          100.0%    655360/655360
    kmalloc-1024          100.0%    812032/812032
    kmalloc-4096          100.0%    819200/819200
    kmalloc-2048          100.0%   1310720/1310720

This series took quite a while to land (you can see David’s original patch date as back in June of last year). Partly this was due to having to spend a lot of time researching the code paths so that each whitelist could be explained for commit logs, partly due to making various adjustments from maintainer feedback, and partly due to the short merge window in v4.15 (when it was originally proposed for merging) combined with some last-minute glitches that made Linus nervous. After baking in linux-next for almost two full development cycles, it finally landed. (Though be sure to disable CONFIG_HARDENED_USERCOPY_FALLBACK to gain enforcement of the whitelists — by default it only warns and falls back to the full-object checking.)
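
If you are curious whether your running kernel has these options enabled, the usual distribution convention of shipping the config under /boot makes it easy to check; a quick sketch (output will vary by kernel and distribution):

$ grep HARDENED_USERCOPY /boot/config-$(uname -r)
CONFIG_HARDENED_USERCOPY=y
CONFIG_HARDENED_USERCOPY_FALLBACK=y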

automatic stack-protector

While the stack-protector features of the kernel have existed for quite some time, it has never been enabled by default. This was mainly due to needing to evaluate compiler support for the feature, and Kconfig didn’t have a way to check the compiler features before offering CONFIG_* options. As a defense technology, the stack protector is pretty mature. Having it on by default would have greatly reduced the impact of things like the BlueBorne attack (CVE-2017-1000251), as fewer systems would have lacked the defense.

After spending quite a bit of time fighting with ancient compiler versions (*cough*GCC 4.4.4*cough*), I landed CONFIG_CC_STACKPROTECTOR_AUTO, which is default on, and tries to use the stack protector if it is available. The implementation of the solution, however, did not please Linus, though he allowed it to be merged. In the future, Kconfig will gain the knowledge to make better decisions which lets the kernel expose the availability of (the now default) stack protector directly in Kconfig, rather than depending on rather ugly Makefile hacks.

That’s it for now; let me know if you think I should add anything! The v4.17 merge window is open. :)

Edit: added details on ARM register leaks, thanks to Daniel Micay.

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

13 April, 2018 12:04AM by kees

April 12, 2018

Enrico Zini

ansible nspawn connection plugin

I have been playing with system images using ansible and chroots, and I figured that using systemd-nspawn to handle the chroots would make things nice, giving ansible commands the benefit of a running system.

There has been an attempt which was rejected.

Here is my attempt. It boots the machine, then runs commands inside it, and it works nicely. The only thing I miss is a way of shutting down the machine at the end, since ansible seems to call close() at the end of each command, and I do not know enough ansible internals to do this right.

I hope this can serve as inspiration for something that works well.

# Based on chroot.py (c) 2013, Maykel Moya <mmoya@speedyrails.com>
# Based on chroot.py (c) 2015, Toshio Kuratomi <tkuratomi@ansible.com>
# (c) 2018, Enrico Zini <enrico@debian.org>
#
# This is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import distutils.spawn
import os
import os.path
import pipes
import subprocess
import time
import hashlib

from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.plugins.connection import ConnectionBase, BUFSIZE
from ansible.module_utils.basic import is_executable

try:
    from __main__ import display
except ImportError:
    from ansible.utils.display import Display
    display = Display()


class Connection(ConnectionBase):
    ''' Local chroot based connections '''

    transport = 'nspawn'
    has_pipelining = True
    # su currently has an undiagnosed issue with calculating the file
    # checksums (so copy, for instance, doesn't work right)
    # Have to look into that before re-enabling this
    become_methods = frozenset(C.BECOME_METHODS).difference(('su',))

    def __init__(self, play_context, new_stdin, *args, **kwargs):
        super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)

        self.chroot = self._play_context.remote_addr
        # We need short and fast rather than secure
        m = hashlib.sha1()
        m.update(os.path.abspath(self.chroot))
        self.machine_name = "ansible-" + m.hexdigest()

        if os.geteuid() != 0:
            raise AnsibleError("nspawn connection requires running as root")

        # we're running as root on the local system so do some
        # trivial checks for ensuring 'host' is actually a chroot'able dir
        if not os.path.isdir(self.chroot):
            raise AnsibleError("%s is not a directory" % self.chroot)

        chrootsh = os.path.join(self.chroot, 'bin/sh')
        # Want to check for a usable bourne shell inside the chroot.
        # is_executable() == True is sufficient.  For symlinks it
        # gets really complicated really fast.  So we punt on finding that
        # out.  As long as it's a symlink we assume that it will work
        if not (is_executable(chrootsh) or (os.path.lexists(chrootsh) and os.path.islink(chrootsh))):
            raise AnsibleError("%s does not look like a chrootable dir (/bin/sh missing)" % self.chroot)

        self.nspawn_cmd = distutils.spawn.find_executable('systemd-nspawn')
        if not self.nspawn_cmd:
            raise AnsibleError("systemd-nspawn command not found in PATH")
        self.machinectl_cmd = distutils.spawn.find_executable('machinectl')
        if not self.machinectl_cmd:
            raise AnsibleError("machinectl command not found in PATH")
        self.run_cmd = distutils.spawn.find_executable('systemd-run')
        if not self.run_cmd:
            raise AnsibleError("systemd-run command not found in PATH")

        existing = subprocess.call([self.machinectl_cmd, "show", self.machine_name], stdout=open("/dev/null", "wb"))
        self.machine_exists = existing == 0

    def set_host_overrides(self, host, hostvars=None):
        super(Connection, self).set_host_overrides(host, hostvars)

    def _connect(self):
        ''' connect to the chroot; nothing to do here '''
        super(Connection, self)._connect()
        if not self._connected:
            if not self.machine_exists:
                display.vvv("Starting nspawn machine", host=self.chroot)
                self.chroot_proc = subprocess.Popen([self.nspawn_cmd, "-D", self.chroot, "-M", self.machine_name, "--register=yes", "--boot"], stdout=open("/dev/null", "w"))
                time.sleep(0.5)
            else:
                self.chroot_proc = None
                display.vvv("Reusing nspawn machine", host=self.chroot)
            self._connected = True

    def _local_run_cmd(self, cmd, stdin=None):
        display.vvv(" -exec %s" % repr(cmd), host=self.chroot)
        display.vvv(" -  or %s" % " ".join(pipes.quote(x) for x in cmd), host=self.chroot)
        p = subprocess.Popen(cmd, shell=False, stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = p.communicate(stdin)
        display.vvv(" - got %d" % p.returncode, host=self.chroot)
        display.vvv(" - out %s" % repr(stdout), host=self.chroot)
        display.vvv(" - err %s" % repr(stderr), host=self.chroot)
        return p.returncode, stdout, stderr

    def _systemd_run_cmd(self, cmd, stdin=None):
        local_cmd = [self.run_cmd, "-M", self.machine_name, "-q", "--pipe", "--wait", "-E", "HOME=/root", "-E", "USER=root", "-E", "LOGNAME=root"] + cmd
        local_cmd = [x.encode("utf8") if isinstance(x, unicode) else x for x in local_cmd]
        return self._local_run_cmd(local_cmd, stdin=stdin)

    def exec_command(self, cmd, in_data=None, sudoable=False):
        ''' run a command on the chroot '''
        super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)

        display.vvv("cmd: %s" % repr(cmd), host=self.chroot)
        return self._systemd_run_cmd(["/bin/sh", "-c", cmd], stdin=in_data)

    def _prefix_login_path(self, remote_path):
        ''' Make sure that we put files into a standard path

            If a path is relative, then we need to choose where to put it.
            ssh chooses $HOME but we aren't guaranteed that a home dir will
            exist in any given chroot.  So for now we're choosing "/" instead.
            This also happens to be the former default.

            Can revisit using $HOME instead if it's a problem
        '''
        if not remote_path.startswith(os.path.sep):
            remote_path = os.path.join(os.path.sep, remote_path)
        return os.path.normpath(remote_path)

    def put_file(self, in_path, out_path):
        ''' transfer a file from local to chroot '''
        super(Connection, self).put_file(in_path, out_path)
        display.vvv("PUT %s TO %s" % (in_path, out_path), host=self.chroot)

        out_path = pipes.quote(self._prefix_login_path(out_path))
        p = subprocess.Popen([self.machinectl_cmd, "-q", "copy-to", self.machine_name, in_path, out_path], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = p.communicate()
        if p.returncode != 0:
            raise AnsibleError("failed to transfer file %s to %s:\n%s\n%s" % (in_path, out_path, stdout, stderr))

    def fetch_file(self, in_path, out_path):
        ''' fetch a file from chroot to local '''
        super(Connection, self).fetch_file(in_path, out_path)
        display.vvv("FETCH %s TO %s" % (in_path, out_path), host=self.chroot)

        in_path = pipes.quote(self._prefix_login_path(in_path))
        p = subprocess.Popen([self.machinectl_cmd, "-q", "copy-from", self.machine_name, in_path, out_path], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = p.communicate()
        if p.returncode != 0:
            raise AnsibleError("failed to transfer file %s from %s:\n%s\n%s" % (out_path, in_path, stdout, stderr))

    def close(self):
        super(Connection, self).close()

# FIXME: how can we power off the machine? close and __del__ seem to be called after each command
#    def __del__(self):
#        ''' terminate the connection; nothing to do here '''
#        # super(Connection, self).close()
#        display.vvv("CLOSE", host=self.chroot)
#        if self._connected:
#            p, stdout, stderr = self._local_run_cmd([self.machinectl_cmd, "poweroff", self.machine_name])
#            if p == 0 and self.chroot_proc:
#                self.chroot_proc.wait()
#            self._connected = False
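
To try a custom connection plugin like this, ansible has to be told where to find it and which connection to use. A minimal sketch, assuming the code above is saved as connection_plugins/nspawn.py next to the playbook; the chroot path is a made-up example:

# ansible.cfg, next to the playbook
[defaults]
connection_plugins = ./connection_plugins

# inventory: the "host" is the chroot directory on the local machine
[containers]
/srv/chroots/stretch ansible_connection=nspawn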

12 April, 2018 11:39PM

hackergotchi for Shirish Agarwal

Shirish Agarwal

meetup.com, summer cleaning and talking about creative writing-Taiwan

Around the week-end, I took some time to clean up my mail inbox: I replied to some new and old mails which I had forgotten about, and deleted the ones which were long past their expiry date. While doing it, I came to know that I had joined meetup.com circa 2009. Almost a decade has gone by, and I didn't even realize where it went.

Looking back, I realized I had forgotten what the early days were like on meetup.com. In those days, the only meetups were of the dating kind; now of course it is much more fleshed out and caters to everything from information technology, writing and cooking to even salsa dancing. I just saw somebody start a salsa group in my hometown.

Anyways, last week, after almost 6 months, I went to a meetup. I had seen 'How to start writing a short novel' being hosted by my friend Dr. Swati Shome.

Interaction with Novelist Tanushree Podder

Copyright – Arun Paria

I had some other engagements, and it being a Sunday, and knowing most Puneites and the laid-back culture, I came at 11:30. Meetup had shown me around 20-odd attendees, so I was expecting a small group of people, but it turns out there were a few writers in the making and many wannabe hacky writers like yours truly, no offence meant to anyone 🙂

There were lots of interesting questions and anecdotes shared by Mme. Tanushree Podder.

There were a couple of young chaps who had gone travelling and had magical experiences. I was tempted to ask whether it was in Triund (somewhere in the Himalayas 😉 ) or the Parvati Valley. There are lots of places out there where you feel the magic. Also, if you are with friends and are safe, you can try to heighten the experience using hallucinogens like magic mushrooms. I can't explain it, but you really feel everything is communicating with you and it all makes sense. The best part is you don't get addicted, while understanding what some people who might be attuned to nature might be feeling. It's a sacred feeling.

Coming back to the meeting, I realized how much I had missed talking and chatting with fellow writers, bloggers etc. I, and probably a few others, helped Swati with reviewing and plugging holes in her non-fiction book on sexuality for teenagers. She showed me an initial print copy the publisher had shared with her; I guess it still has to go through a few iterations, as the final thing will be available in June.

While I love hardbacks, in this case I would make an exception. I do hope the publisher prices it properly, and that more and more children and their parents use the book to be able to talk about sex without shame. While in some spheres we in Indian society have become bolder, talks between children and parents still stay within the old boundaries, while technology has marched on. I am talking here about the middle class only. Kids as young as 6-8 years know things about sex which we didn't know even after reaching majority (i.e. 18), but that's what Swati's book will talk about.

Incidentally, Swati chided me about sharing something like 8 A4 pages of feedback with her. What she didn't know is that I probably shared less than half of what I could have, as sex is more a mental thing than anything else.

Anyways, during the interaction and while talking with others, I realized again that there are so many people like me who feel the need to write. While everybody pays homage to success, most of us are driven by a need to explore ourselves through our characters. In a fictional setting, we can explore any character we want; the only thing that stops us from doing so is our own judgemental self.

Having access to a larger vocabulary also helps, so that I/we/you don't become repetitive, which happens easily as we get habituated to words, something like 'comfort words' which we use over and over.

At the interaction, I was also transported back to the South African wharf/dock which I had visited during DebConf 2016 as part of the day trip. I have read so many books where the dock and its people were described in great detail, especially in the late 19th and early 20th centuries. Going to the dock, I felt transported to that earlier era.

There was even a bar just next to the wharf, and entering it I did see quite a few people with tattoos. The only things missing were perhaps a black eye patch, a peg leg and a talking parrot, and I would have been left wondering whether I was in a Peter Pan movie 🙂 Even the shingle on top of the bar was something I had read about in dozens of books: rusted, with sea birds sitting on top of it as if they were its rightful owners, and so on. It was one of those times when you are perplexed whether you should take a photo or leave it, because your mind captures it with so much essence and vitality that a photo or even a video would be a pale copy at best.

I don’t think any experience is going to top that. Of course never say never but still 🙂

Isn’t it fascinating how words, associations, ideas and images relate with each other and take you on a journey.

Talking of journeys, a couple of months from now I may or may not travel to Taiwan for a technical conference. While whatever happens is in the future, I have been having a ball of a time exploring Taiwan on the web.

One of the things which puzzled me when I started reading about Taiwan was hearing the term 'Formosa' over and over again. Sometime later I opened the Wikipedia page and learned that Formosa means 'beautiful island'. It has been interesting to read about and find out how Taiwan sees itself: some people feel it's part of China while many don't. Of course the name ROC, or Republic of China, doesn't make it any less confusing. The two countries also share the same language (Mandarin), and Taiwan does fall into the Chinese sphere of influence.

What has made it more fascinating is that if China thinks of Taiwan as being part of China, then why don't any of the Chinese travel books mention Taiwan? I have had more luck finding books about English counties like Chesterfield and many others than finding books about Taiwan. I usually look in seconds sales, as nowadays there are lots of seconds sales in my hometown.

Two of the books I would like to read are 'Forbidden Nation'

Forbidden Nation: A History of Taiwan - Jonathan Manthorpe

and 'From Far Formosa'.

FROM FAR FORMOSA: The Island, its People and Missions - George Leslie. Mackay 1895

Apparently both these books give a lay reader an idea of how Taiwan came about.

For practical matters, I did come to know that Taiwan has an embassy in New Delhi, and perhaps one in Chennai, if one needs to pursue visa matters.

As somebody shared a long time ago, we are indeed living in some interesting times 🙂

At the very end, somebody had asked me to share a Hindi rock song. So if you want to impress your girlfriend or boyfriend with an Indian rock song, here goes nothing.

For p.d.o. people https://www.youtube.com/watch?v=6SncLwWFrJ0

12 April, 2018 09:03PM by shirishag75

hackergotchi for Julien Danjou

Julien Danjou

Lessons from OpenStack Telemetry: Incubation

Lessons from OpenStack Telemetry: Incubation

It was mostly around that time in 2012 that I and a couple of fellow open-source enthusiasts started working on Ceilometer, the first piece of software from the OpenStack Telemetry project. Six years have passed since then. I've been thinking about this blog post for several months (even years, maybe), but lacked the time and the hindsight needed to lay out my thoughts properly. In a series of posts, I would like to share my observations about the Ceilometer development history.

To understand the full picture here, I think it is fair to start with a small retrospective on the project. I'll try to keep it short, and it will be unmistakably biased, even if I'll do my best to stay objective – bear with me.

Incubation

Early 2012, I remember discussing with the first Ceilometer developers the right strategy to solve the problem we were trying to address. The company I worked for wanted to run a public cloud, and billing the resources usage was at the heart of the strategy. The fact that no components in OpenStack were exposing any consumption API was a problem.

We debated how to implement those metering features in the cloud platform. There were two natural solutions: either implement some resource accounting and reporting in each OpenStack project, or build new software on the side to cover for the lack of that functionality.

At that time there were fewer than a dozen OpenStack projects. Still, the burden of patching every project seemed like an infinite task. Having code reviewed and merged in the most significant projects took several weeks, which, considering our timeline, was a show-stopper. We wanted to go fast.

Pragmatism won, and we started implementing Ceilometer using the features each OpenStack project was offering to help us: very little.

Our first and obvious candidate for usage retrieval was Nova, from which Ceilometer aimed to retrieve statistics about virtual machine instance utilization. Nova offered no API to retrieve those data – and still doesn't. Since waiting several months to have such an API exposed was out of the question, we took the shortcut of polling libvirt, Xen or VMware directly from Ceilometer.

That's precisely how temporary hacks become historical design. Implementing this design broke the basis of the abstraction layer that Nova aims to offer.

As time passed, several leads were followed to mitigate those trade-offs in better ways. But with each development cycle, getting anything merged in OpenStack became harder and harder. It went from patches taking long to review, to having a long list of requirements to merge anything. Soon, you'd have to create a blueprint to track your work, and write a full specification linked to that blueprint, with the specification itself being reviewed by a bunch of the so-called core developers. The specification had to be a thorough document covering every aspect of the work, from the problem being solved to the technical details of the implementation. Once the specification was approved, which could take an entire cycle (6 months), you'd have to make sure that the Nova team would make your blueprint a priority. To make sure it was, you would have to fly a few thousand kilometers from home to an OpenStack Summit, and orally argue with developers in a room filled with hundreds of other folks about the urgency of your feature compared to other blueprints.

Lessons from OpenStack Telemetry: Incubation

An OpenStack design session in Hong-Kong, 2013

Even if you passed all of those ordeals, the code you'd send could be rejected, and you'd get back to updating your specification to shed light on some particular points that confused people. Back to square one.

Nobody wanted to play that game. Not in the Telemetry team at least.

So Ceilometer continued to grow, surfing the OpenStack hype curve. More developers were joining the project every cycle – each with their own list of ideas, features or requirements cooked up by their in-house product manager.

But many features did not belong in Ceilometer. They should have been in different projects. Ceilometer was the first OpenStack project to pass through the OpenStack Technical Committee incubation process that existed before the rules were relaxed.

This incubation process was uncertain, long, and painful. We had to justify the existence of the project and many of the technical choices that had been made. Where we were expecting the committee to challenge us on fundamental decisions, such as breaking abstraction layers, it was mostly nit-picking about Web frameworks or database storage.

Consequences

The rigidity of the process discouraged anyone from starting a new project for anything related to telemetry. Therefore, everyone went ahead and started dumping their ideas into Ceilometer itself. With more than ten companies interested, the friction was high, and the project was at some point pulled apart in all directions. This phenomenon was happening to every OpenStack project anyway.

On the one hand, many contributions brought marvelous pieces of technology to Ceilometer. We implemented several features you still don't find in any other metering system. Dynamically sharded, automatically horizontally scalable polling? Ceilometer has had that for years, whereas you can't have it in, e.g., Prometheus.

On the other hand, there were tons of crappy features. Half-baked code merged because somebody needed to ship something. As the project grew further, some of us developers started to feel that this was getting out of control and could be disastrous. The technical debt was growing as fast as the project was.

Several technical choices made were definitely bad. The architecture was a mess; the messaging bus was easily overloaded, the storage engine was non-performant, etc. People would come to me (as I was the Project Team Leader at that time) and ask why the REST API would need 20 minutes to reply to an autoscaling request. The willingness to solve everything for everyone was killing Ceilometer. It's around that time that I decided to step out of my role of PTL and started working on Gnocchi to, at least, solve one of our biggest challenges: efficient data storage.

Ceilometer was also suffering from the poor quality of many OpenStack projects. As Ceilometer retrieves data from a dozen other projects, it has to use their interfaces for data retrieval (API calls, notifications) – or sometimes work around their lack of any interface. Users were complaining about Ceilometer malfunctioning while the root of the problem was actually on the other side, in the polled project. The polling agent would try to retrieve the list of virtual machines running on Nova, but just listing and retrieving this information required several HTTP requests to Nova. And those basic retrieval requests would overload the Nova API, which does not offer any genuine interface from which the data could be retrieved in a small number of calls, and which had terrible performance.
From the point of view of the users, the load was generated by Ceilometer. Therefore, Ceilometer was the problem. We had to imagine new ways of circumventing tons of limitations in our siblings. That was exhausting.

At its peak, during the Juno and Kilo releases (early 2015), the code size of Ceilometer reached 54k lines of code, and the number of committers reached 100 individuals (20 regulars). We had close to zero happy users, operators hated us, and everybody was wondering what the hell was going on in those developers' minds.

Nonetheless, despite the impediments, most of us had a great time working on Ceilometer. Nothing's ever perfect. I've learned tons of things during that period, which were actually mostly non-technical. Community management, social interactions, human behavior and politics were at the heart of the adventure, offering a great opportunity for self-improvement.

In the next blog post, I will cover what happened in the years that followed that booming period, up until today. Stay tuned!

12 April, 2018 12:50PM by Julien Danjou

Bits from Debian

Bursary applications for DebConf18 are closing in 48 hours!

If you intend to apply for a DebConf18 bursary and have not yet done so, please proceed as soon as possible!

Bursary applications for DebConf18 will be accepted until April 13th at 23:59 UTC. Applications submitted after this deadline will not be considered.

You can apply for a bursary when you register for the conference.

Remember that giving a talk or organising an event is taken into consideration for your bursary; if you have a submission to make, submit it even if it is only sketched out. You will be able to detail it later. DebCamp plans can be entered in the usual Sprints page at the Debian wiki.

Please make sure to double-check your accommodation choices (dates and venue). Details about accommodation arrangements can be found on the wiki.

See you in Hsinchu!

DebConf18 logo

12 April, 2018 10:30AM by Laura Arjona Reina

April 11, 2018

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Streaming the Norwegian ultimate championships

As the Norwegian indoor frisbee season is coming to a close, the Norwegian ultimate nationals are coming up, too. Much like in Trøndisk 2017, we'll be doing the stream this year, replacing a single-camera Windows/XSplit setup with a multi-camera free software stack based on Nageru.

The basic idea is the same as in Trøndisk; two cameras (one wide and one zoomed) for the main action and two static ones above the goal zones. (The hall has more amenities for TV productions than the one in Trøndisk, so a basic setup is somewhat simpler.) But there are so many tweaks:

  • We've swapped out some of the cameras for more suitable ones (the DSLRs didn't do too well under the flicker of the fluorescent tubes, for instance, and newer GoPros have rectilinear modes). And there's a camera on the commentators now, with side-by-side view as needed.

  • There are tally lights on the two human-operated cameras (new Nageru feature).

  • We're doing CEF directly in Nageru (new Nageru feature) instead of through CasparCG, to finally get those 60 fps buttery smooth transitions (and less CPU usage!).

  • HLS now comes directly out of Cubemap (new Cubemap feature) instead of being generated by a shell script using FFmpeg.

  • Speaking of CPU usage, we now have six cores instead of four, for more x264 oomph (we wanted to do 1080p60 instead of 720p60, but alas, even x264 at nearly superfast can't keep up when there's too much motion).

  • And of course, a ton of minor bugfixes and improvements based on our experience with Trøndisk—nothing helps as much as battle-testing.

For extra bonus, we'll be testing camera-over-IP from Android for interviews directly on the field, which will be a fun challenge for the wireless network. Nageru does have support for taking in IP streams through FFmpeg (incidentally, a feature originally added for the now-obsolete CasparCG integration), but I'm not sure if the audio support is mature enough to run in production yet—most likely, we'll do the reception with a laptop and use that as a regular HDMI input. But we'll see; thankfully, it's a non-essential feature this time, so we can afford to have it break. :-)

Streaming starts Saturday morning CEST (UTC+2), will progress until late afternoon, and then restart on Sunday with the playoffs (the final starts at 14:05). There will be commentary in a mix of Norwegian and English depending on the mood of the commentators, so head over to www.plastkast.no if you want to watch :-) Exact schedule on the page.

11 April, 2018 11:36PM

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, March 2018

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 2 hours from February. I worked 15 hours and will again carry over 2 hours to April.

I made another two releases on the Linux 3.2 longterm stable branch (3.2.100 and 3.2.101), the latter including mitigations for Spectre on x86. I rebased the Debian package onto 3.2.101 but didn't upload an update to Debian this month. We will need to add gcc-4.9 to wheezy before we can enable all the mitigations for Spectre variant 2.

11 April, 2018 08:41PM

Joerg Jaspert

Debian SecureBoot Sprint 2018

Monday morning I gave back the keys to Office Factory Fulda, who sponsored the location for the SecureBoot Sprint from Thursday, 4th April to Sunday, 8th April. Apparently we left a pretty positive impression (we managed to clean up), so we are welcome again for future sprints.

The goal of this sprint was enabling SecureBoot in/for Debian, so that users who have SecureBoot-enabled machines do not need to turn it off to be able to run Debian. That requires us to handle the signing of a certain set of packages in a defined way, in as automated a way as possible, while ensuring that everything is done in a safe/secure way.

Now add details like secure handling of keys, only signing pre-approved sets (to make abuse harder), revocations, key rollovers; combine it all with the infrastructure and situation we have in Debian, say dak, buildd, a security archive with somewhat different rules of visibility, reproducibility, a huge set of architectures only some of which do SecureBoot, proper audit logging of signatures; and you end up with 7 people from different teams taking the whole first day just discussing and hashing out a specification. Plus some joining in virtually.

I’m not going into actual details of all that, as a sprint report will follow soon.

Friday to Sunday was used for the actual implementation of the agreed solution. The actual dak changes turned out to not be too large, and thankfully Ansgar was on them, so I could take time to push the FTPTeam's move to the new Salsa service forward. I still have a few of our less-important repositories to move, but that's a simple process I will be doing during this week; the most important step was coming up with a sane way of using Salsa.

That does not mean the actual web interface, but getting code changes from there to the various Debian hosts we run our services on. In the past, we pushed to the hosts directly, so any code change appearing on them meant that someone who was in the right unix group on that machine had made it appear.1 “Verified by ssh login”, basically.

With Salsa, we now add a service that has a different set of administrators on top. And a big piece of software too, with a huge possibility of bugs, the worst case being random users getting access to our repositories. Which is a far larger problem area than “git push via ssh” as in the past, and as such more likely to go bad. If we blindly pull from a repository on such a shared space, the confirmation “an FTPMaster said this code is good” is gone.

So it needs a way of adding that confirmation back, while still being able to use all the nice features that Salsa offers. Within Debian, what's better for trusting something than an already established mechanism: gnupg-created signatures?!

So how to go forward? I was lucky, I did not need to invent this entirely on my own: Enrico had similar concerns for the New-Maintainer web pages. He set up CI to test his stuff and, if successful, install the tested code on the NM machine, provided that the commit is signed by a key from a defined set.

Unfortunately, for me, he deals with a Django app that listens somewhere and can be pushed to. No such thing for me, neither do I have Django nor do I have a service listening that I can tell about changes to fetch.

We also have to take care when a database schema upgrade needs to be done, no automatic deployment on database-using FTPMaster hosts for that, a human needs to trigger this.

So the actual implementation that I developed for us, and which is in use on all hosts that we maintain code on, is implemented in our standard framework for regular jobs, cronscript.2

It turns out to live in multiple files (as usual with cronscript), where the actual code is in deploy.functions, deploy.variables, and the order to call things is defined in deploy.tasks.

cronscript around it takes care of setting up the environment and keeping logs, and we now run the deploy every few minutes, securely getting our code deployed. A rough sketch of the core check follows below.
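
The core trick, refusing to update a checkout unless the fetched commit carries a trusted signature, can be sketched roughly like this. This is not our actual deploy.functions; the keyring path, branch and directory are made-up examples:

#!/bin/sh
set -e

# Keys allowed to deploy live in a dedicated keyring (hypothetical path)
export GNUPGHOME=/srv/deploy-keyring

cd /srv/our-code
git fetch origin

# Only fast-forward when the tip of the fetched branch is properly signed
if git verify-commit origin/master >/dev/null 2>&1; then
    git merge --ff-only origin/master
else
    echo "refusing to deploy: unsigned or untrusted commit" >&2
    exit 1
fi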

  1. Or someone abused root rights, but if you do not trust root, you lost anyways, and there is no reason to think that any DSA-member would do this. 

  2. A framework for FTPMaster scripts that ensures the same basic setup everywhere and makes it easy to call functions and stuff, with or without error checking, in the background or foreground. Also easy to restart in the middle of a script run after breakage, as it keeps track of where it was. 

11 April, 2018 03:01PM

hackergotchi for Olivier Berger

Olivier Berger

Preventing resume immediately after suspend on Dell Latitude 5580 (Debian testing)

I’ve installed Debian buster (testing at the time of writing) on a new Dell Latitude 5580 laptop, and one annoyance I’ve found is that the laptop would almost always resume as soon as it was suspended.

AFAIU, the culprit seems to be the network card (Ethernet controller: Intel Corporation Ethernet Connection (4) I219-LM), which is configured with Wake-on-LAN (wol) set to the "magic packet" mode (ethtool enp0s31f6 | grep Wake-on returns 'g'). Another hint is that grep enabled /proc/acpi/wakeup returns GLAN.

There are many ways to change that for the rest of the session with a command like ethtool -s enp0s31f6 wol d.

But I had a hard time figuring out, among the many hits in so many tutorials and forum posts, whether there was a preferred way to make this persistent.

My best hit so far is to add a file named /etc/systemd/network/50-eth0.link containing:

[Match]
Driver=e1000e

[Link]
WakeOnLan=off

The driver can be found by checking the udev settings reported by udevadm info -a /sys/class/net/enp0s31f6.
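For double-checking, something along these lines should work (a small sketch, assuming the interface is still enp0s31f6):

# Confirm which driver the interface actually uses:
udevadm info -a /sys/class/net/enp0s31f6 | grep DRIVERS

# After a reboot (or after re-triggering the link setup), Wake-on should
# report 'd' (disabled) rather than 'g' (magic packet):
ethtool enp0s31f6 | grep Wake-on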

There are other ways to do this with systemd, but so far this one seems to be working for me. Hope this helps!

11 April, 2018 11:14AM by Olivier Berger

hackergotchi for Steve Kemp

Steve Kemp

Bread and data

For the past two weeks I've mostly been baking bread. I'm not sure what made me decide to make some the first time, but it actually turned out pretty good, so I've been baking every day or two ever since.

This is the first time I've made bread in the past 20 years or so - I recall in the past I got frustrated that it never rose, or didn't turn out well. I can't see that I'm doing anything differently, so I'll just write it off as younger-Steve being daft!

No doubt I'll get bored of the delicious bread in the future, but for the moment I've got a good routine going - juggling going to the shops, child-care, and making bread.

Bread I've made includes the following:

Beyond that I've spent a little while writing a simple utility to embed resources in golang projects, after discovering that the tool I'd previously been using, go-bindata, had been abandoned.

In short, you feed it a directory of files and it generates a file, static.go, with contents like this:

files[ "data/index.html" ] = "<html>....
files[ "data/robots.txt" ] = "User-Agent: * ..."

It's a bit more complex than that, but not much. As expected, getting at the embedded data at runtime is trivial, and it allows you to distribute a single binary even if you want/need some configuration files, templates, or media to run.
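Roughly, the workflow looks like this (the command name and flags below are placeholders for illustration, not the real CLI):

# Generate static.go from everything under data/ (hypothetical invocation):
embed-tool -input data/ -output static.go

# static.go is just another source file, so a normal build bakes the
# resources into the binary:
go build .

# The resulting executable can be shipped on its own; data/ is no longer
# needed at runtime.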

For example, in the project I discussed in my previous post there is an HTTP-server which serves a user-interface based upon bootstrap. I want the HTML-files which make up that user-interface to be embedded in the binary, rather than distributing them separately.

Anyway, it's not unique, but it was a fun experience to write, and I've switched to using it now.

11 April, 2018 09:01AM

hackergotchi for Gunnar Wolf

Gunnar Wolf

DRM, DRM, oh how I hate DRM...

I love flexibility. I love when the rules of engagement are not set in stone and allow us to lead a full, happy, simple life. (Apologies to Felipe and Marianne for using their very nice sculpture for this rant. At least I am not desperately carrying a brick! ☺)

I have been very, very happy since I switched to a Thinkpad X230. This is the first computer I have with an option for a cellular modem, so after thinking about it a bit, I got myself one.

After waiting for a couple of weeks, it arrived in an unexciting little envelope straight from Hong Kong. If you look closely, you can even see a line (just below the smaller barcode) that reads "Lenovo". I soon found out how to open this laptop (kudos to Lenovo for a very sensible and easy opening process, and great documentation... So far, it's the "openest" computer I have had!) and installed my new card!

The process was decently easy, and after patting myself on the back, I eagerly turned on my computer... only to find the BIOS halting with the following message:

1802: Unauthorized network card is plugged in - Power off and remove the miniPCI network card (1199/6813).

System is halted

So... I got everything back to its original state. Stupid DRM in what I felt was the openest laptop I have ever had. Gah.

Anyway... As you can see, I have a brand new cellular modem. I am willing to give it to the first person that offers me a nice beer in exchange, here in Mexico or wherever you happen to cross my path (just tell me so I bring the little bugger along!)

Of course, I even tried to get one of the nice volunteers to install Libreboot on my computer now that I was at Libreplanet, which would have solved the issue. But they informed me that Libreboot is supported only on the (quite a bit older) X200 machines, not on the X230.


11 April, 2018 04:43AM by gwolf

April 09, 2018

hackergotchi for Markus Koschany

Markus Koschany

My Free Software Activities in March 2018

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

Debian Java

  • I spent most of my free time on Java packages because… OpenJDK 9 is now the default Java runtime environment in Debian! As of today I count 319 RC bugs (bugs with severity normal would be serious now as well), of which 227 are already resolved. That means one third of the Java team's packages have to be adjusted for the new OpenJDK version. Java 9 comes with a new module system called Jigsaw. Undoubtedly it represents a lot of new and interesting ideas, but it is also a major paradigm shift. For us mere packagers it means more work than any other version upgrade in the past. Let's say we are a handful of regular contributors (I'm being generous), and we spend most of our time stabilizing the Java ecosystem in Debian to the point where we can build all of our packages again. Repeat for every new Debian release. Unfortunately, not much time is actually spent on packaging new and cool applications or libraries unless they are strictly required to fix a specific Java 9 issue. It just doesn't feel right at the moment. Most upstreams are rather indifferent or relaxed when it comes to porting their applications to Java 9 because they can still use Java 8, so why can't we? They don't have to provide security support for five years and can make the switch to Java 9 much later. They can also cherry-pick certain versions of libraries, whereas we have to ensure that everything works with one specific version of a library. But that's not all: Java 9 will not be shipped with Buster; we even aim for OpenJDK 11! Releases of OpenJDK will be more frequent from now on, expect a new release every six months, and certain versions, like OpenJDK 11, will receive extended security support. One thing we can look forward to: apparently more commercial features of the Oracle JDK will be merged into OpenJDK, and it appears the long-term goal is to make Oracle JDK and OpenJDK builds completely interchangeable. So maybe one day there will be only one free software JDK for everything and everyone? I hope so.
  • I worked on the following packages to address Java 9 or other bugs: activemq, snakeyaml, libjchart2d-java, jackson-dataformat-yaml, jboss-threads, jboss-logmanager, jboss-logging-tools, qdox2, wildfly-common, activemq-activeio, jackson-datatype-joda, antlr, axis, libitext5-java, libitext1-java, libitext-java, jedit, conversant-disruptor, beansbinding, cglib, undertow, entagged, jackson-databind, libslf4j-java, proguard, libhtmlparser-java, libjackson-json-java and sweethome3d (patch by Emmanuel Bourg)
  • New upstream versions: jboss-threads, okio, libokhttp-java, snakeyaml, robocode.
  • I NMUed jtb and applied a patch from Tiago Stürmer Daitx.

Debian LTS

This was my twenty-fifth month as a paid contributor and I have been paid to work 23.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 19.03.2018 until 25.03.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVE in imagemagick, libvirt, freeplane, exempi, calibre, gpac, ipython, binutils, libraw, memcached, mosquitto, sdl-image1.2, slurm-llnl, graphicsmagick, libslf4j-java, radare2, sam2p, net-snmp, apache2, ldap-account-manager, librelp, ruby-rack-protection, libvncserver, zsh and xerces-c.
  • DLA-1310-1. Issued a security update for exempi fixing 6 CVE.
  • DLA-1315-1. Issued a security update for libvirt fixing 2 CVE.
  • DLA-1316-1. Issued a security update for freeplane fixing 1 CVE.
  • DLA-1322-1. Issued a security update for graphicsmagick fixing 6 CVE.
  • DLA-1325-1. Issued a security update for drupal7 fixing 1 CVE.
  • DLA-1326-1. Issued a security update for php5 fixing 1 CVE.
  • DLA-1328-1. Issued a security update for xerces-c fixing 1 CVE.
  • DLA-1335-1. Issued a security update for zsh fixing 2 CVE.
  • DLA-1340-1. Issued a security update for sam2p fixing 5 CVE. I also prepared a security update for Jessie. (#895144)
  • DLA-1341-1. Issued a security update for sdl-image1.2 fixing 6 CVE.

Misc

  • I triaged all open bugs in imlib2 and forwarded the issues upstream. The current developer of imlib2 was very responsive and helpful. Thanks to Kim Woelders, several longstanding bugs could be fixed.
  • There was also a new upstream release for xarchiver. Check it out!

Thanks for reading and see you next time.

09 April, 2018 09:58PM by Apo

Lucas Kanashiro

Migrating PET features to distro-tracker

Since joining the Debian Perl Team some time ago, PET has helped me a lot to find work to do in the team context, and it has also helped the whole team in our workflow. For those who do not know what PET is: “a collection of scripts that gather information about your (or your group’s) packages. It allows you to see in a bird’s eye view the health of hundreds of packages, instantly realizing where work is needed.” PET became an important project since about 20 Debian teams were using it, including the Perl and Ruby teams, in which I am more active.

In Cape Town, during DebConf16, I had a conversation with Raphael Hertzog about the possibility of migrating PET features to distro-tracker. He is one of the distro-tracker maintainers, and we found some similarities between the two tools. However, after that I did not have enough time to push it forward. Then, with the migration from Alioth to Salsa, PET became almost unusable, because a lot of it was built on top of Alioth. This gave me the motivation to get the migration idea off the drawing board and to support the PET features in distro-tracker's team visualization.

In the meantime, the Debian Outreach team published a GSoC call for mentors for this year. I was a Debian GSoC student in 2014 and 2015, and it was a great opportunity for me to join the community. With that in mind, and wishing to give this opportunity to others, I decided to become a mentor this year and proposed a project to implement the PET features in distro-tracker, called “Improving distro-tracker to better support Debian Teams”. We are now at the student selection phase and I have received great proposals. I am looking forward to the start of the program and to finally having the PET features available in tracker.debian.org. And, of course, to bringing new blood to the Debian Project, since this is the idea behind those outreach programs.

09 April, 2018 01:30PM