
News from Logilab and our Free Software projects, as well as on topics dear to our hearts (Python, Debian, Linux, the semantic web, scientific computing...)

  • ngReact: getting angular and react to work together

    2016/08/03 by Nicolas Chauvat

    ngReact is an Angular module that allows React components to be used in AngularJS applications.

    I had to work on enhancing an Angular-based application and wanted to provide the additional functionality as an isolated component that I could develop and test without messing with a large Angular controller that several other people were working on.

    Here is my Angular+React "Hello World", with a couple of gotchas that were not underlined in the documentation and took me some time to figure out.

    To set things up, just run:

    $ mkdir angulareacthello && cd angulareacthello
    $ npm init && npm install --save angular ngreact react react-dom

    Then write into index.html:

    <!doctype html>
    <html>
      <head>
        <title>my angular react demo</title>
      </head>
      <body ng-app="app" ng-controller="helloController">
        <input type="text" ng-model="person.name" placeholder="Enter a name here">
        <h1><react-component name="HelloComponent" props="person" /></h1>
        <script src="node_modules/angular/angular.js"></script>
        <script src="node_modules/react/dist/react.js"></script>
        <script src="node_modules/react-dom/dist/react-dom.js"></script>
        <script src="node_modules/ngreact/ngReact.js"></script>
        <script>
        // include the ngReact module as a dependency for this Angular app
        var app = angular.module('app', ['react']);

        // define a controller that sets the person attribute on the scope
        app.controller('helloController', function($scope) {
                $scope.person = { name: 'you' };
        });

        // define a React component that displays "Hello {name}"
        var HelloComponent = React.createClass({
                render: function() {
                        return React.DOM.span(null, "Hello " + this.props.person.name);
                }
        });

        // tell Angular about this React component
        app.value('HelloComponent', HelloComponent);
        </script>
      </body>
    </html>

    It took me some time to get a couple of things clear in my mind.

    <react-component> is not a React component, but an Angular directive that delegates to a React component. Therefore, you should not expect the interface of this tag to be the same as that of a React component. More precisely, you can only use the props attribute and cannot set your React properties by adding more attributes to this tag. If you want to be able to write something like <react-component firstname="person.firstname" lastname="person.lastname">, you will have to use reactDirective to create a specific Angular directive.

    You have to set an object as the props attribute of the react-component tag, because it will be used as the value of this.props in the code of your React class. For example, if you set the props attribute to a string (person.name instead of person in the above example), you will have trouble using it on the React side, because you will get an object built from the enumeration of the string. Therefore, the above example cannot be made simpler: if we had written $scope.person = 'you', we could not have passed it correctly to the React component.

    The above was tested with angular@1.5.8, ngreact@0.3.0, react@15.3.0 and react-dom@15.3.0.

    All in all, it worked well. Thank you to all the developers and contributors of these projects.

  • Testing salt formulas with testinfra

    2016/07/21 by Philippe Pepiot

    In a previous post we talked about an environment to develop salt formulas. To add some spicy requirements: the formula must now handle multiple target OSes (Debian and CentOS), have tests, and have a continuous integration (CI) server setup.

    A year ago I started writing a framework for this purpose. It's called testinfra and is used to execute commands on remote systems and make assertions about the state and behavior of the system. The modules API provides a Pythonic way to inspect the system. It integrates smoothly with pytest, which adds some useful features out of the box, like parametrization to run tests against multiple systems.
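    To give an idea of the modules API, here is a minimal sketch of what a testinfra test module can look like. The ntp package and file paths are purely hypothetical examples; the Package, Service and File fixtures are provided by testinfra:

    # test/test_base.py -- minimal sketch of a testinfra test module
    def test_ntp_installed(Package, Service):
        assert Package("ntp").is_installed   # package present on the target
        assert Service("ntp").is_running     # daemon currently running
        assert Service("ntp").is_enabled     # daemon started at boot

    def test_ntp_configured(File):
        # the configuration declares at least one server
        assert File("/etc/ntp.conf").contains("^server ")

    Running testinfra --hosts=ssh://host1,ssh://host2 on such a file executes each test once per host, thanks to the pytest parametrization mentioned above.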

    Writing useful tests is not an easy task; my advice is to test code that triggers implicit actions, code that has caused issues in the past, or simply to check that the application is working correctly, the way you would do it in a shell.

    For instance, this is one of the tests I wrote for the saemref formula:

    def test_saemref_running(Process, Service, Socket, Command):
        assert Service("supervisord").is_enabled
        supervisord = Process.get(comm="supervisord")
        # Supervisor runs as root
        assert supervisord.user == "root"
        assert supervisord.group == "root"
        cubicweb = Process.get(ppid=supervisord.pid)
        # Cubicweb should run as the saemref user
        assert cubicweb.user == "saemref"
        assert cubicweb.group == "saemref"
        assert cubicweb.comm == "uwsgi"
        # Should have 2 worker processes with 8 threads each and 1 http process with 1 thread
        child_threads = sorted([c.nlwp for c in Process.filter(ppid=cubicweb.pid)])
        assert child_threads == [1, 8, 8]
        # uwsgi should bind on all ipv4 addresses
        assert Socket("tcp://0.0.0.0:8080").is_listening
        html = Command.check_output("curl http://localhost:8080")
        assert "<title>accueil (Référentiel SAEM)</title>" in html

    Now we can run tests against a running container by giving its name or docker id to testinfra:

    % testinfra --hosts=docker://1a8ddedf8164
    test/test_saemref.py::test_saemref_running[docker:/1a8ddedf8164] PASSED

    The immediate advantage of writing such tests is that they can be reused for monitoring purposes; testinfra can behave like a nagios plugin:

    % testinfra -qq --nagios --hosts=ssh://prod
    TESTINFRA OK - 1 passed, 0 failed, 0 skipped in 2.31 seconds

    We can now integrate the test suite into our dev.py script (from the previous post) by adding some code to build and run a provisioned docker image, plus a test command that runs the testinfra tests against it.

    # These commands extend the dev.py script from the previous post,
    # which defines cli, formula, BASEDIR and image_choice.
    provision_option = click.option('--provision', is_flag=True, help="Provision the container")


    @cli.command(help="Build an image")
    @image_choice
    @provision_option
    def build(image, provision=False):
        dockerfile = "test/{0}.Dockerfile".format(image)
        tag = "{0}-formula:{1}".format(formula, image)
        if provision:
            dockerfile_content = open(dockerfile).read()
            dockerfile_content += "\n" + "\n".join([
                "ADD test/minion.conf /etc/salt/minion.d/minion.conf",
                "ADD {0} /srv/formula/{0}".format(formula),
                "RUN salt-call --retcode-passthrough state.sls {0}".format(formula),
            ]) + "\n"
            dockerfile = "test/{0}_provisioned.Dockerfile".format(image)
            with open(dockerfile, "w") as f:
                f.write(dockerfile_content)
            tag += "-provisioned"
        subprocess.check_call(["docker", "build", "-t", tag, "-f", dockerfile, "."])
        return tag


    @cli.command(help="Spawn an interactive shell in a new container")
    @image_choice
    @provision_option
    @click.pass_context
    def dev(ctx, image, provision=False):
        tag = ctx.invoke(build, image=image, provision=provision)
        subprocess.call([
            "docker", "run", "-i", "-t", "--rm", "--hostname", image,
            "-v", "{0}/test/minion.conf:/etc/salt/minion.d/minion.conf".format(BASEDIR),
            "-v", "{0}/{1}:/srv/formula/{1}".format(BASEDIR, formula),
            tag, "/bin/bash",
        ])


    @cli.command(help="Run tests against a provisioned container",
                 context_settings={"allow_extra_args": True})
    @image_choice
    @click.pass_context
    def test(ctx, image):
        import pytest
        tag = ctx.invoke(build, image=image, provision=True)
        docker_id = subprocess.check_output([
            "docker", "run", "-d", "--hostname", image,
            "-v", "{0}/test/minion.conf:/etc/salt/minion.d/minion.conf".format(BASEDIR),
            "-v", "{0}/{1}:/srv/formula/{1}".format(BASEDIR, formula),
            tag, "tail", "-f", "/dev/null",
        ]).strip()
        try:
            ctx.exit(pytest.main(["--hosts=docker://" + docker_id] + ctx.args))
        finally:
            subprocess.check_call(["docker", "rm", "-f", docker_id])

    Tests can be run on a local CI server or on travis; they "just" require a docker server. Here is an example .travis.yml:

    sudo: required
    services:
      - docker
    language: python
    python:
      - "2.7"
    env:
      - IMAGE=centos7
      - IMAGE=jessie
    install:
      - pip install testinfra
    script:
      - python dev.py test $IMAGE -- -v  # dev.py: the development script described above (name assumed)

    I wrote a dummy formula with the above code; feel free to use it as a template for your own formulas, or open pull requests and break some tests.

    There is a highly enhanced version of this code in the saemref formula repository, including:

    • Building a provisioned docker image with custom pillars; we use it to run an online demo
    • Destructive tests, where each test is run in a dedicated "fresh" container
    • Running Systemd in the containers to get a system close to the production one (this enables the use of the Salt service module)
    • Running a postgresql container linked to the tested container, for specific tests like upgrading a Cubicweb instance.

    Destructive tests rely on advanced pytest features that may produce weird bugs when mixed together; there is too much magic involved here. Also, handling Systemd in docker is really painful and adds a lot of complexity; for instance, some systemctl commands require a running systemd as PID 1, which is not the case during the docker build phase. So the trade-off between complexity and these features may not be worth it.

    There are also quite a few new tools for developing and testing infrastructure code that you could include in your stack, like test-kitchen, serverspec, and goss. Choose your weapon and go test your infrastructure code.

  • Developing salt formulas with docker

    2016/07/21 by Philippe Pepiot

    While developing salt formulas I was looking for a simple and reproducible environment to allow faster development, fewer bugs and more fun. The formula must handle multiple target OSes (Debian and CentOS).

    The first barrier is the master/minion installation of Salt, but fortunately Salt has a masterless mode. The idea is quite simple: bring up a virtual machine, install a Salt minion on it, expose the code inside the VM and call Salt states.

    At Logilab we like to work with docker, a lightweight OS-level virtualization solution. One of its key features is docker volumes, which share local files inside the container. So I started to write a simple Python script to build a container with a Salt minion installed and run it with the formula states and a few config files shared inside the container.

    The formula I was working on is used to deploy the saemref project, which is a Cubicweb based application:

    % cat test/centos7.Dockerfile
    FROM centos:7
    RUN yum -y install epel-release && \
        yum -y install && \
        yum clean expire-cache && \
        yum -y install salt-minion
    % cat test/jessie.Dockerfile
    FROM debian:jessie
    RUN apt-get update && apt-get -y install wget
    RUN wget -O - | apt-key add -
    RUN echo "deb jessie main" > /etc/apt/sources.list.d/saltstack.list
    RUN apt-get update && apt-get -y install salt-minion
    % cat test/minion.conf
    file_client: local
    file_roots:
      base:
        - /srv/salt
        - /srv/formula

    And finally the development script itself, here called dev.py, using the beautiful click module:

    #!/usr/bin/env python
    # dev.py -- development script (the file name is assumed here)
    import os
    import subprocess

    import click

    formula = "saemref"
    BASEDIR = os.path.abspath(os.path.dirname(__file__))

    image_choice = click.argument("image", type=click.Choice(["centos7", "jessie"]))


    @click.group()
    def cli():
        pass


    @cli.command(help="Build an image")
    @image_choice
    def build(image):
        dockerfile = "test/{0}.Dockerfile".format(image)
        tag = "{0}-formula:{1}".format(formula, image)
        subprocess.check_call(["docker", "build", "-t", tag, "-f", dockerfile, "."])
        return tag


    @cli.command(help="Spawn an interactive shell in a new container")
    @image_choice
    @click.pass_context
    def dev(ctx, image):
        tag = ctx.invoke(build, image=image)
        subprocess.call([
            "docker", "run", "-i", "-t", "--rm", "--hostname", image,
            "-v", "{0}/test/minion.conf:/etc/salt/minion.d/minion.conf".format(BASEDIR),
            "-v", "{0}/{1}:/srv/formula/{1}".format(BASEDIR, formula),
            tag, "/bin/bash",
        ])


    if __name__ == "__main__":
        cli()

    Now I can quickly run multiple containers and test my Salt states inside them while editing the code locally:

    % ./dev.py dev centos7
    [root@centos7 /]# salt-call state.sls saemref
    [ ... ]
    [root@centos7 /]# ^D
    % # The container is destroyed when it exits

    Notice that we could add some custom pillars and state files simply by adding specific docker shared volumes.
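    For instance, here is a hypothetical sketch of how the dev command above could share a local pillar tree (the test/pillar path is an assumption):

    # Share a local pillar tree inside the container by extending
    # the volume list passed to docker run (hypothetical paths).
    extra_volumes = [
        "-v", "{0}/test/pillar:/srv/pillar".format(BASEDIR),
    ]
    subprocess.call([
        "docker", "run", "-i", "-t", "--rm", "--hostname", image,
        "-v", "{0}/test/minion.conf:/etc/salt/minion.d/minion.conf".format(BASEDIR),
        "-v", "{0}/{1}:/srv/formula/{1}".format(BASEDIR, formula),
    ] + extra_volumes + [tag, "/bin/bash"])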

    With a few lines we created a lightweight vagrant-like environment that is faster, using docker instead of virtualbox, and that remains fully customizable for future needs.

  • Introduction to thesauri and SKOS

    2016/06/27 by Yann Voté

    Recently, I've faced the problem of importing the European Union thesaurus, Eurovoc, into CubicWeb using the SKOS cube. Eurovoc doesn't follow the SKOS data model, and I'll show here how I managed to adapt Eurovoc to fit into SKOS.

    This article is in two parts:

    • this is the first part where I introduce what a thesaurus is and what SKOS is,
    • the second part will show how to convert Eurovoc to plain SKOS.

    The whole text assumes familiarity with RDF, as describing RDF would require more than a blog entry and is out of scope.

    What is a thesaurus ?

    A common need in our digital lives is to attach keywords to documents, web pages, pictures, and so on, so that search is easier. For example, you may want to add two keywords:

    • lily,
    • lilium

    in a picture's metadata about this flower. If you have a large collection of flower pictures, this will make your life easier when you want to search for a particular species later on.

    free-text keywords on a picture

    In this example, keywords are free: you can choose whatever keyword you want, very general or very specific. For example you may just use the keyword:

    • flower

    if you don't care about species. You are also free to use lowercase or uppercase letters, and to make typos...

    free-text keyword on a picture

    On the other hand, sometimes you have to select keywords from a list. Such a constrained list is called a controlled vocabulary. For instance, a very simple controlled vocabulary with only two keywords is the one describing a person's gender:

    • male (or man),
    • female (or woman).
    a simple controlled vocabulary

    But there are more complex examples: think about how a library organizes books by theme. There are very general themes (eg. Science), then more and more specific ones (eg. Computer science -> Software -> Operating systems). There may also be synonyms (eg. Computing for Computer science) or referrals (eg. a "see also" link between the keywords Algebra and Geometry). Such a controlled vocabulary, where keywords are organized in a tree structure with relations like synonym and referral, is called a thesaurus.

    an example thesaurus with a tree of keywords
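    To make this structure concrete, here is a purely illustrative sketch of such a tree as a Python dictionary (this is not a format used by any library, just a picture of the structure):

    # A tiny thesaurus: a tree of keywords with synonym
    # and "see also" (referral) relations.
    thesaurus = {
        "Science": {
            "narrower": {
                "Computer science": {
                    "synonyms": ["Computing"],
                    "narrower": {
                        "Software": {"narrower": {"Operating systems": {}}},
                    },
                },
                "Algebra": {"see_also": ["Geometry"]},
                "Geometry": {"see_also": ["Algebra"]},
            },
        },
    }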

    For the sake of simplicity, in the following we will call thesaurus any controlled vocabulary, even a simple one with two keywords like male/female.


    What is SKOS ?

    SKOS (Simple Knowledge Organization System), from the World Wide Web Consortium (W3C), is an ontology for the semantic web describing thesauri. To make it simple, it is a common data model for thesauri that can be used on the web. If you have a thesaurus and publish it on the web using SKOS, then anyone can understand how your thesaurus is organized.

    SKOS is very versatile. You can use it to produce very simple thesauri (like male/female) and very complex ones, with a tree of keywords, even in multiple languages.

    To cope with this complexity, the SKOS data model splits each keyword into two entities: a concept and its labels. For example, the concept of a male person has multiple labels: male and man in English, homme and masculin in French. The concept of a lily flower also has multiple labels: lily in English, lilium in Latin, lys in French.

    Among all the labels of a given concept, some can be preferred, while others are alternative. There may be only one preferred label per language. In the person's gender example, man may be the preferred label in English and male an alternative one, while in French homme would be the preferred label and masculin an alternative one. In the flower example, lily (resp. lys) is the preferred label in English (resp. French), and lilium is an alternative label in Latin (there is no preferred label in Latin).

    SKOS concepts and labels

    And of course, in SKOS, it is possible to say that a concept is broader than another one (just like topic Science is broader than topic Computer science).

    So to summarize: in SKOS, a thesaurus is a tree of concepts, and each concept has one or more labels, preferred or alternative. A thesaurus is also called a concept scheme in SKOS.

    Also, please note that the SKOS data model is slightly more complicated than what we've shown here, but this will be sufficient for our purpose.

    RDF URIs defined by SKOS

    In order to publish a thesaurus in RDF using the SKOS ontology, SKOS introduces the skos: namespace, associated with the URI http://www.w3.org/2004/02/skos/core#.

    Within that namespace, SKOS defines some classes and predicates corresponding to what has been described above. For example:

    • the triple (<uri>, rdf:type, skos:ConceptScheme) says that <uri> belongs to class skos:ConceptScheme (that is, is a concept scheme),
    • the triple (<uri>, rdf:type, skos:Concept) says that <uri> belongs to class skos:Concept (that is, is a concept),
    • the triple (<uri>, skos:prefLabel, <literal>) says that <literal> is a preferred label for concept <uri>,
    • the triple (<uri>, skos:altLabel, <literal>) says that <literal> is an alternative label for concept <uri>,
    • the triple (<uri1>, skos:broader, <uri2>) says that concept <uri2> is a broader concept of <uri1>.
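    As an illustration, here is a minimal sketch using the RDFLib Python library (the same library used in the second part of this article) that produces exactly these kinds of triples for the gender example; the http://example.org/ URIs are made up:

    import rdflib
    from rdflib import Literal, Namespace, RDF
    from rdflib.namespace import SKOS

    EX = Namespace('http://example.org/')
    g = rdflib.Graph()
    # a concept scheme containing a single concept with its labels
    g.add((EX.gender, RDF.type, SKOS.ConceptScheme))
    g.add((EX.male, RDF.type, SKOS.Concept))
    g.add((EX.male, SKOS.inScheme, EX.gender))
    g.add((EX.male, SKOS.prefLabel, Literal('man', lang='en')))
    g.add((EX.male, SKOS.altLabel, Literal('male', lang='en')))
    g.add((EX.male, SKOS.prefLabel, Literal('homme', lang='fr')))
    g.add((EX.male, SKOS.altLabel, Literal('masculin', lang='fr')))
    print(g.serialize(format='turtle'))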

  • One way to convert Eurovoc into plain SKOS

    2016/06/27 by Yann Voté

    This is the second part of an article where I show how to import the Eurovoc thesaurus from the European Union into an application using a plain SKOS data model. I've recently faced the problem of importing Eurovoc into CubicWeb using the SKOS cube, and the solution I've chosen is discussed here.

    The first part was an introduction to thesauri and SKOS.

    The whole article assumes familiarity with RDF, as describing RDF would require more than a blog entry and is out of scope.

    Difficulties with Eurovoc and SKOS


    Eurovoc is the main thesaurus covering European Union business domains. It is published and maintained by the EU commission. It is quite big and complex, structured as a tree of keywords.

    You can see Eurovoc keywords and browse the tree from the Eurovoc homepage using the link Browse the subject-oriented version.

    For example, when publishing statistics about education in the EU, you can tag the published data with the broadest keyword, Education and communications. Or you can be more precise and use the following narrower keywords, in increasing order of precision: Education, Education policy, Education statistics.

    Problem: hierarchy of thesauri

    The EU commission uses SKOS to publish its Eurovoc thesaurus, so importing Eurovoc into our own application should be straightforward. But things are not that simple...

    For some reason, Eurovoc uses a hierarchy of concept schemes. For example, Education and communications is a sub-concept scheme of Eurovoc (it is called a domain), and Education is a sub-concept scheme of Education and communications (it is called a micro-thesaurus). Education policy is (a label of) the first concept in this hierarchy.

    But with SKOS this is not possible: a concept scheme cannot be contained in another concept scheme.

    Possible solutions

    So to import Eurovoc into our SKOS application without losing data, one solution is to turn the sub-concept schemes into concepts. We have two strategies:

    • keep only one concept scheme (Eurovoc) and turn domains and micro-thesauri into concepts,
    • keep domains as concept schemes, drop Eurovoc concept scheme, and only turn micro-thesauri into concepts.

    Here we will discuss the latter solution.

    Let's get to work

    The Eurovoc thesaurus can be downloaded at the following URL:

    The ZIP archive contains only one XML file named eurovoc_skos.rdf. Put it somewhere where you can find it easily.

    To read this file easily, we will use the RDFLib Python library. This library makes it really convenient to work with RDF data. It has only one drawback: it is very slow. Reading the whole Eurovoc thesaurus with it takes a very long time, so making the process faster is the first thing to consider for later improvements.

    Reading the Eurovoc thesaurus is as simple as creating an empty RDF graph and parsing the file. As said above, this takes a long, long time (from half an hour to two hours).

    import rdflib
    eurovoc_graph = rdflib.Graph()
    eurovoc_graph.parse('<path/to/eurovoc_skos.rdf>', format='xml')
    <Graph identifier=N52834ca3766d4e71b5e08d50788c5a13 (<class 'rdflib.graph.Graph'>)>

    We can see that Eurovoc contains more than 2 million triples.
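    The count itself can be checked with a simple len() on the graph:

    print(len(eurovoc_graph))   # more than 2 million triples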


    Now, before actually converting Eurovoc to plain SKOS, let's introduce some helper functions:

    • the first one, uriref(), will allow us to build RDFLib URIRef objects from simple prefixed URIs like skos:prefLabel or dcterms:title,
    • the second one, capitalized_eurovoc_domain(), is used to convert Eurovoc domain names, which are all uppercase (eg. 32 EDUCATION ET COMMUNICATION), into a string where only the first letter is uppercase (eg. 32 Education et communication):
    import re
    from rdflib import Literal, Namespace, RDF, URIRef
    from rdflib.namespace import DCTERMS, SKOS

    # Namespace URIs for the Eurovoc ontology (eu:) and the
    # ISO-25964 SKOS extension (thes:) -- both values assumed here.
    eu_ns = Namespace('http://eurovoc.europa.eu/schema#')
    thes_ns = Namespace('http://purl.org/iso25964/skos-thes#')

    prefixes = {
        'dcterms': DCTERMS,
        'skos': SKOS,
        'eu': eu_ns,
        'thes': thes_ns,
    }

    def uriref(prefixed_uri):
        prefix, value = prefixed_uri.split(':', 1)
        ns = prefixes[prefix]
        return ns[value]

    def capitalized_eurovoc_domain(domain):
        """Return the given Eurovoc domain name with only the first letter uppercase."""
        return re.sub(r'^(\d+\s)(.)(.+)$',
                      lambda m: u'{0}{1}{2}'.format(m.group(1), m.group(2), m.group(3).lower()),
                      domain, flags=re.UNICODE)
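    A quick check of the second helper on a domain name:

    >>> capitalized_eurovoc_domain(u'32 EDUCATION ET COMMUNICATION')
    u'32 Education et communication'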

    Now the actual work. After setting up variables to reference the URIs, the loop will examine each triple of the original graph and:

    • discard it if it contains deprecated data,
    • if triple is like (<uri>, rdf:type, eu:Domain), replace it with (<uri>, rdf:type, skos:ConceptScheme),
    • if triple is like (<uri>, rdf:type, eu:MicroThesaurus), replace it with (<uri>, rdf:type, skos:Concept) and add triple (<uri>, skos:inScheme, <domain_uri>),
    • if triple is like (<uri>, rdf:type, eu:ThesaurusConcept), replace it with (<uri>, rdf:type, skos:Concept),
    • if triple is like (<uri>, skos:topConceptOf, <microthes_uri>), replace it with (<uri>, skos:broader, <microthes_uri>),
    • if triple is like (<uri>, skos:inScheme, <microthes_uri>), replace it with (<uri>, skos:inScheme, <domain_uri>),
    • keep triples like (<uri>, skos:prefLabel, <label_uri>), (<uri>, skos:altLabel, <label_uri>), and (<uri>, skos:broader, <concept_uri>),
    • discard all other non-deprecated triples.

    Note that, to replace a micro thesaurus with a domain, we have to build a mapping between each micro thesaurus and its containing domain (microthes2domain dict).

    This loop is also quite long.

    eurovoc_ref = URIRef(u'http://eurovoc.europa.eu/100141')  # URI of the Eurovoc scheme itself (assumed)
    deprecated_ref = URIRef(u'http://publications.europa.eu/resource/authority/status/deprecated')  # (assumed)
    title_ref = uriref('dcterms:title')
    status_ref = uriref('thes:status')
    class_domain_ref = uriref('eu:Domain')
    rel_domain_ref = uriref('eu:domain')
    microthes_ref = uriref('eu:MicroThesaurus')
    thesconcept_ref = uriref('eu:ThesaurusConcept')
    concept_scheme_ref = uriref('skos:ConceptScheme')
    concept_ref = uriref('skos:Concept')
    pref_label_ref = uriref('skos:prefLabel')
    alt_label_ref = uriref('skos:altLabel')
    in_scheme_ref = uriref('skos:inScheme')
    broader_ref = uriref('skos:broader')
    top_concept_ref = uriref('skos:topConceptOf')

    microthes2domain = dict((mt, next(eurovoc_graph.objects(mt, uriref('eu:domain'))))
                            for mt in eurovoc_graph.subjects(RDF.type, uriref('eu:MicroThesaurus')))

    new_graph = rdflib.ConjunctiveGraph()
    for subj_ref, pred_ref, obj_ref in eurovoc_graph:
        # Discard deprecated data
        if deprecated_ref in list(eurovoc_graph.objects(subj_ref, status_ref)):
            continue
        # Convert eu:Domain into a skos:ConceptScheme
        if obj_ref == class_domain_ref:
            new_graph.add((subj_ref, RDF.type, concept_scheme_ref))
            for title in eurovoc_graph.objects(subj_ref, pref_label_ref):
                if title.language == u'en':
                    new_graph.add((subj_ref, title_ref,
                                   Literal(capitalized_eurovoc_domain(title))))
        # Convert eu:MicroThesaurus into a skos:Concept
        elif obj_ref == microthes_ref:
            new_graph.add((subj_ref, RDF.type, concept_ref))
            scheme_ref = next(eurovoc_graph.objects(subj_ref, rel_domain_ref))
            new_graph.add((subj_ref, in_scheme_ref, scheme_ref))
        # Convert eu:ThesaurusConcept into a skos:Concept
        elif obj_ref == thesconcept_ref:
            new_graph.add((subj_ref, RDF.type, concept_ref))
        # Replace <concept> topConceptOf <MicroThesaurus> by <concept> broader <MicroThesaurus>
        elif pred_ref == top_concept_ref:
            new_graph.add((subj_ref, broader_ref, obj_ref))
        # Replace <concept> skos:inScheme <MicroThes> by <concept> skos:inScheme <Domain>
        elif pred_ref == in_scheme_ref and obj_ref in microthes2domain:
            new_graph.add((subj_ref, in_scheme_ref, microthes2domain[obj_ref]))
        # Keep label triples
        elif (subj_ref != eurovoc_ref and obj_ref != eurovoc_ref
              and pred_ref in (pref_label_ref, alt_label_ref)):
            new_graph.add((subj_ref, pred_ref, obj_ref))
        # Keep existing skos:broader relations and existing concepts
        elif pred_ref == broader_ref or obj_ref == concept_ref:
            new_graph.add((subj_ref, pred_ref, obj_ref))

    We can check that we now have far fewer triples than before.
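    Again, a simple len() on each graph shows the difference:

    print(len(eurovoc_graph))   # more than 2 million triples
    print(len(new_graph))       # far fewer triples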


    Now we dump this new graph to disk. We choose the Turtle format, as it is far more readable than RDF/XML for humans, and slightly faster to parse for machines. This file will contain plain SKOS data that can be directly imported into any application able to read SKOS.

    with open('eurovoc.n3', 'w') as f:
        new_graph.serialize(f, format='n3')

    With CubicWeb using the SKOS cube, it is a one-command step:

    cubicweb-ctl skos-import --cw-store=massive <instance_name> eurovoc.n3

  • Installing Debian Jessie on a "pure UEFI" system

    2016/06/13 by David Douard

    At the core of the Logilab infrastructure is a highly-available pair of small machines dedicated to our main directory and authentication services: LDAP, DNS, DHCP, Kerberos and Radius.

    The machines are small fanless boxes powered by a 1GHz Via Eden processor, with 512MB of RAM and 2GB of storage on a CompactFlash module.

    They have served us well for many years, but now is the time for an upgrade. We've bought a pair of Lanner FW-7543B machines that have the same form factor. They are not fanless, but they are much more powerful. They are pretty nice, but have one major drawback: their firmware cannot boot from a legacy BIOS-mode device when set up in UEFI mode. Another hard point is that they do not have a video connector (there is a VGA output on the motherboard, but the connector is optional), so everything must be done via the serial console.

    I knew the Debian Jessie installer would provide everything required to handle a UEFI-based system, but it took me a few tries to get it to boot.

    First, I tried the standard netboot image, but the firmware did not want to boot from the USB stick, probably because the image requires an MBR-based bootloader.

    Then I tried to boot from the Refind bootable image, and it worked! At least I had proof that this little beast could boot in UEFI mode. But, although it is probably possible, I could not figure out how to tweak the Refind config file to make it properly boot the Debian installer kernel and initrd.

    Finally I gave a try to something I know much better: Grub. Here is what I did to get a working UEFI Debian installer on a USB key.


    First, in the UEFI world, you need a GPT partition table with a FAT partition typed "EFI System":

    david@laptop:~$ sudo fdisk /dev/sdb
    Welcome to fdisk (util-linux 2.25.2).
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.
    Command (m for help): g
    Created a new GPT disklabel (GUID: 52FFD2F9-45D6-40A5-8E00-B35B28D6C33D).
    Command (m for help): n
    Partition number (1-128, default 1): 1
    First sector (2048-3915742, default 2048): 2048
    Last sector, +sectors or +size{K,M,G,T,P} (2048-3915742, default 3915742):  +100M
    Created a new partition 1 of type 'Linux filesystem' and of size 100 MiB.
    Command (m for help): t
    Selected partition 1
    Partition type (type L to list all types): 1
    Changed type of partition 'Linux filesystem' to 'EFI System'.
    Command (m for help): p
    Disk /dev/sdb: 1.9 GiB, 2004877312 bytes, 3915776 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 52FFD2F9-45D6-40A5-8E00-B35B28D6C33D
    Device     Start    End Sectors  Size Type
    /dev/sdb1   2048 206847  204800  100M EFI System
    Command (m for help): w

    Install Grub

    Now we need to install a grub-efi bootloader in this partition:

    david@laptop:~$ pmount sdb1
    david@laptop:~$ sudo grub-install --target x86_64-efi --efi-directory /media/sdb1/ --removable --boot-directory=/media/sdb1/boot
    Installing for x86_64-efi platform.
    Installation finished. No error reported.

    Copy the Debian Installer

    Our next step is to copy the Debian's netboot kernel and initrd on the USB key:

    david@laptop:~$ mkdir /media/sdb1/EFI/debian
    david@laptop:~$ wget -O /media/sdb1/EFI/debian/linux
    --2016-06-13 18:40:02--  /images/netboot/debian-installer/amd64/linux
    Resolving (, 2a01:e0c:1:1598::2
    Connecting to (||:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 3120416 (3.0M) [text/plain]
    Saving to: ‘/media/sdb1/EFI/debian/linux’
    /media/sdb1/EFI/debian/linux      100%[========================================================>]   2.98M      464KB/s   in 6.6s
    2016-06-13 18:40:09 (459 KB/s) - ‘/media/sdb1/EFI/debian/linux’ saved [3120416/3120416]
    david@laptop:~$ wget -O /media/sdb1/EFI/debian/initrd.gz
    --2016-06-13 18:41:30--
    Resolving (, 2a01:e0c:1:1598::2
    Connecting to (||:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 15119287 (14M) [application/x-gzip]
    Saving to: ‘/media/sdb1/EFI/debian/initrd.gz’
    /media/sdb1/EFI/debian/initrd.g    100%[========================================================>]  14.42M    484KB/s   in 31s
    2016-06-13 18:42:02 (471 KB/s) - ‘/media/sdb1/EFI/debian/initrd.gz’ saved [15119287/15119287]

    Configure Grub

    Then, we must write a decent grub.cfg file to load these:

    david@laptop:~$ cat >/media/sdb1/boot/grub/grub.cfg <<EOF
    menuentry "Jessie Installer" {
      insmod part_msdos
      insmod ext2
      insmod part_gpt
      insmod fat
      insmod gzio
      echo  'Loading Linux kernel'
      linux /EFI/debian/linux --- console=ttyS0,115200
      echo 'Loading InitRD'
      initrd /EFI/debian/initrd.gz
    }
    EOF

    Et voilà, piece of cake!

  • Our work for the OpenDreamKit project during the 77th Sage days

    2016/04/18 by Florent Cayré

    Logilab is part of OpenDreamKit, a Horizon 2020 European Research Infrastructure project that will run until 2019 and provides substantial funding to the open source computational mathematics ecosystem.

    One of the goals of this project is to improve the packaging and documentation of SageMath, the open source alternative to Maple and Mathematica.

    The core developers of SageMath organised the 77th Sage days last week, and Logilab took part, represented by David Douard, Julien Cristau and me, Florent Cayré.

    David and Julien have been working on packaging SageMath for Debian. This is a huge task (several man-months of work), split into two sub-tasks for now:

    • building SageMath with Debian-packaged versions of its dependencies, if available;
    • packaging some of the missing dependencies, starting with the most expected ones, like the latest releases of Jupyter and IPython.

    As a first result, the following packages have been pushed into Debian experimental:

    There is still a lot of work to be done, and packaging the notebook is the next task on the list.

    One hiccup along the way was a Python crash involving multiple inheritance from Cython extension classes. Having people nearby who knew the SageMath codebase well (or even wrote the relevant parts) was invaluable for debugging, and allowed us to blame a recent CPython change.

    Julien also gave a hand to Florent Hivert and Robert Lehmann, who were trying to understand why building SageMath's documentation needed so much memory.

    As far as I am concerned, I made a prototype of structured HTML documentation produced with Sphinx and containing executable Python code, run thanks to the Thebe JavaScript library, which interfaces statically delivered HTML pages with a Jupyter notebook server.

    The Sage days were an excellent opportunity to work efficiently on the technical tasks with skillful and enthusiastic people. We would like to thank the OpenDreamKit core team for the organization and their hard work. We look forward to the next workshop.

  • 3D Visualization of simulation data with x3dom

    2016/02/16 by Yuanxiang Wang

    X3DOM Plugins

    As part of the Open Dream Kit project, we are working at Logilab on the creation of tools for mesh data visualization and analysis in a web application. Our goal was to create widgets to be used in the Jupyter notebook (formerly IPython) for 3D visualization and analysis.

    We found two interesting technologies for 3D rendering: ThreeJS and X3DOM. ThreeJS is a large JavaScript 3D library, and X3DOM an HTML5 framework for 3D. After working with both, we chose to use X3DOM because of its high-level architecture. With X3DOM, the 3D model is defined in the DOM in HTML5, so the parameters of the nodes can be changed easily with the setAttribute DOM function. This makes the creation of user interfaces and widgets much easier.

    We worked to create new DOM nodes that integrate nicely in a standard X3DOM tree, namely IsoColor, Threshold and ClipPlane.

    We had two goals in mind:

    1. create an X3DOM plugin API that allows one to create new DOM nodes which extend X3DOM functionality;
    2. keep a simple X3DOM-like interface for the final users.

    Example of the plugins Threshold and IsoColor:


    The Threshold and IsoColor nodes work like any X3DOM node and react to attribute changes performed with the setAttribute method. This makes it easy to use HTML widgets like sliders / buttons to drive the plugin's parameters.

    X3DOM API

    The goal is to create custom nodes that affect the rendering based on data (positions, pressure, temperature...). The idea is to manipulate the shaders, since this gives low-level control over the 3D rendering. Shaders give more freedom and efficiency compared to reusing other X3DOM nodes. (Reminder: shaders are GPU programs written in GLSL.)

    X3DOM has a native node that allows users to write shaders: the ComposedShader node. The problem with this node is that it overwrites the shaders generated by X3DOM. For example, nodes like ClipPlane are disabled when a ComposedShader node is in the DOM. Another example is image texturing: the computation of the color from the texture coordinates has to be written within the ComposedShader.

    In order to add parts of a shader to the generated shader without overwriting it, I created a new node: CustomAttributeNode. This node is a generic node to add uniforms, varyings and shader parts to X3DOM. The data of the geometry (attributes) are set using the X3DOM node named FloatVertexAttribute.

    Example of CustomAttributeNode to create a threshold node:


    The CustomAttributeNode is the entry point in X3DOM for the JavaScript API.

    JavaScript API

    The idea of the API is to create a new node inheriting from CustomAttributeNode. We wrote some functions to make the implementation of such nodes easier.

    Ideas for future improvement

    There are still some points that need improvement:

    • Create a tree widget using the grouping nodes in X3DOM
    • Add high-level functions to X3DGeometricPropertyNode to set the values. For instance, the IsoColor node is only a node that sets the values of the TextureCoordinate node from the FloatVertexAttribute node.
    • Add a high-level function to return the variable name needed to overwrite basic attributes like positions in a Geometry. With my API, IsoColor uses a varying defined in X3DOM to overwrite the values of the texture coordinates. Because there is no documentation, it is hard for users to find the varying names. On the other hand, there is no specification of the varying names, so they might need to be maintained.
    • Maybe the CustomAttributeNode should be an X3DChildNode instead of an X3DGeometricPropertyNode.


    This structure might allow the "use" attribute in X3DOM. That way, X3DOM avoids data duplication and writing too much HTML. The following code illustrates what I expect.


  • We went to cfgmgmtcamp 2016 (after FOSDEM)

    2016/02/09 by Arthur Lutz

    Following a day at FOSDEM (see our other post about it), we spent two days at cfgmgmtcamp in Gent. At cfgmgmtcamp we obviously spent some time in the Salt track, since it's our tool of choice, as you might have noticed. But checking out how some of the other tools and communities are finding solutions to similar problems is also great.

    cfgmgmtcamp logo

    I presented Roll out active Supervision with Salt, Graphite and Grafana (mirrored on slideshare); you can find the code on bitbucket.

    We saw:

    Day 1

    • Mark Shuttleworth from Canonical presenting Juju and its ecosystem of software modelling. MAAS (Metal As A Service) was demoed on the nice "OrangeBox"; it promises to spin up an OpenStack infrastructure in 15 minutes. One of the interesting things with charms and bundles of charms is the interfaces that need to be established between the different service bricks. In the salt community we have salt-formulas, but they lack maturity in the sense that there's no possibility to plug in multiple formulas that interact with each other... yet.
    juju deploy of openstack
    • Mitch Michell from Hashicorp presented Vault. Vault stores your secrets (certificates, passwords, etc.) and we will probably be trying it out in the near future. A lot of concepts in Vault are really well thought out and resonate with some of the things we want to do and automate in our infrastructure. The use of the Shamir Secret Sharing technique (also used by the Debian infrastructure team) for the N-man challenge to unvault the secrets is quite nice. David is already looking into automating it with Salt and adding GSSAPI (Kerberos) authentication.

    Day 2

    • Gareth Rushgrove from PuppetLabs talked about the importance of metadata in docker images and docker containers by explaining how these greatly benefit tools like dpkg and rpm and that the container community should be inspired by the amazing skills and experience that has been built by these package management communities (think of all the language-specific package managers that each reinvent the wheel one after the other).
    • Testing Immutable Infrastructure: we found some inspiration from test-kitchen and running the tests inside a docker container instead of a vagrant virtual machine. We'll have to take a look at the SaltStack provisioner for test-kitchen. We already do some of that stuff in docker and OpenStack using salt-cloud, but maybe we can take it further with such tools (or with testinfra, whose author will be joining Logilab next month).
    coreos, rkt, kubernetes
    • How CoreOS is built, modified, and updated: From repo sync to Omaha by Brian "RedBeard" Harrington. An interesting presentation of the CoreOS system. Brian also revealed that CoreOS is now capable of using the TPM to enforce a signed OS, and also signed containers. Official CoreOS images shipped through Omaha are now signed with a root key that can be installed in the TPM of the host (i.e. they didn't use a pre-installed Microsoft key), along with a modified TPM-aware version of GRUB. For now, the Omaha platform is not open source, so it may not be that easy to build one's own CoreOS images signed with a personal root key, but it is theoretically possible. Brian also said that he expects their Omaha server implementation to become open source some day.
    • The use of Salt in Foreman was presented and demoed by Stephen Benjamin. We'll have to retry using that tool with the newest features of the smart proxy.
    • Jonathan Boulle from CoreOS presented "rkt and Kubernetes: What’s new with Container Runtimes and Orchestration". In this last talk, Jonathan gave a tour of the rkt project and how it is used, coupled with kubernetes, to build a comprehensive, secure container-running infrastructure (which uses saltstack!). He named the result "rktnetes". The idea is to use rkt as the container runtime of the kubelet (the primary node agent) in a kubernetes cluster powered by CoreOS. Along with the new CoreOS support for a TPM-based trust chain, it makes it possible to ensure completely secured execution, from the bootloader to the container! The possibility to run fully secured containers is one of the reasons why CoreOS developed the rkt project.

    We would like to thank the cfgmgmtcamp organisation team: it was a great conference and we highly recommend it. Thanks for the speakers' event the night before the conference, and for the social event on Monday evening (and thanks for the chocolate!).

  • We went to FOSDEM 2016 (and cfgmgmtcamp)

    2016/02/09 by Arthur Lutz

    David & I went to FOSDEM and cfgmgmtcamp this year to see some conferences, do two presentations, and discuss with the members of the open source communities we contribute to.

    At FOSDEM, we started early by doing a presentation at 9.00 am in the "Configuration Management devroom", which to our surprise was a large room that was almost full. The presentation was streamed over the Internet and should be available to view shortly.

    I presented "Once you've configured your infrastructure using salt, monitor it by re-using that definition". (mirrored on slideshare. The main part was a demo, the code being published on bitbucket.

    The presentation was streamed live (I came across someone who watched it on the Internet to "sleep in"), and should be available to watch once it has been encoded.

    FOSDEM video box

    We then saw the following talks:

    • Unified Framework for Big Data Foreign Data Wrappers (FDW) by Shivram Mani in the Postgresql Track
    • Mainflux Open Source IoT Cloud
    • EzBench, a tool to help you benchmark and bisect the Graphics Stack's performance
    • The RTC components in the debian infrastructure
    • CoreOS: A Linux distribution designed for application containers that scale
    • Using PostgreSQL for Bibliographic Data (since we've worked with bibliographic data and PostgreSQL)
    • The FOSDEM infrastructure review

    Congratulations to all the FOSDEM organisers, volunteers and speakers. We will hopefully be back for more.

    We then took the train to Gent where we spent two days learning and sharing about Configuration Management Systems and all the ecosystem around it (orchestration, containers, clouds, testing, etc.).

    More on our cfgmgmtcamp experience in another blog post.

    Photos under creative commons CC-BY, by Ludovic Hirlimann and Deborah Bryant here and here
