
Blog entries

  • SciviJS

    2016/10/10 by Martin Renou

    Introduction

    The goal of my work at Logilab is to create tools to visualize scientific 3D volumetric-mesh-based data (mechanical data, electromagnetic...) in a standard web browser. It is part of the European OpenDreamKit project. Franck Wang worked on this subject last year; I based my work on his results and tried to improve them.

    Our goal is to create widgets to be used in Jupyter Notebook (formerly IPython) for easy 3D visualization and analysis. We also want to create a graphical user interface in order to enable users to intuitively compute multiple effects on their meshes.

    As Franck Wang worked with X3DOM, which is an open source JavaScript framework that makes it possible to display 3D scenes using HTML nodes, we first thought it was a good idea to keep on working with this framework. But X3DOM is not very well maintained these days, as can be seen on their GitHub repository.

    As a consequence, we decided to take a look at another 3D framework. Our best candidates were:

    • ThreeJS
    • BabylonJS

    ThreeJS and BabylonJS are two well-known open source frameworks for 3D web visualization. Both have been well maintained by hundreds of contributors for several years. Even though BabylonJS was initially designed for video games, both engines were interesting for our project, and ThreeJS had several advantages for our purpose.

    Finally, the choice of using ThreeJS was quite obvious because of its Nodes feature, contributed by Sunag Entertainment. It allows users to compose multiple effects like isocolor, threshold, clip plane, etc. As ThreeJS is an Open Source framework, it is quite easy to propose new features and contributors are very helpful.

    ThreeJS

    As we want to compose multiple effects like isocolor and threshold (the pixel color corresponds to a pressure, but if this pressure is under a certain threshold we don't want to display it), it seemed like a good idea to compose shaders instead of creating one big shader with all the features we want to implement. The problem is that WebGL is still limited (as of version 1.x) and it is not possible for shaders to exchange data with other shaders. Only the vertex shader can send data to the fragment shader, through varyings.

    So it's not really possible to compose shaders, but the good news is that we can use the new node system of ThreeJS to easily compute and compose a complex material for a mesh.

    (figure: graphical view of the node-based material composition)

    It is the graphical counterpart of what you can do in code, and it shows how simple it is to implement effects in order to visualize your data.

    SciviJS

    With these great tools as a solid basis, I designed a first version of a JavaScript library, SciviJS, that aims at loading, displaying and analyzing mesh data in a standard web browser (i.e. without any plugin).

    You can define your visualization in a .yml file containing URLs to your mesh and data and a hierarchy of effects (called block structures).

    See https://demo.logilab.fr/SciviJS/ for an online demo.

    The block structure looks like the following:

    https://www.logilab.org/file/8719790/raw

    Data blocks are instantiated to load the mesh and define basic parameters like color, position, etc. Blocks are connected together to form a tree that helps build a visual analysis of your mesh data. Each block receives data (like mesh variables, color and position) from its parent and can modify them independently.

    The following parameters must be set on dataBlocks:

    • coordURL: URL to the binary file containing coordinate values of vertices.
    • facesURL: URL to the binary file containing indices of faces defining the skin of the mesh.
    • tetrasURL: URL to the binary file containing indices of tetrahedrons. Default is ''.
    • dataURL: URL to the binary file containing the data that you want to visualize for each vertex.

    The following parameters can be set on dataBlocks or plugInBlocks:

    • type: type of the block, which is dataBlock or the name of the plugInBlock that you want.
    • colored: defines whether or not the 3D object is colored. Default is false, in which case the object is rendered gray.
    • colorMap: color map used for coloration, available values are rainbow and gray. Default is rainbow.
    • colorMapMin and colorMapMax: bounds for coloration scaled in [0, 1]. Default is (0, 1).
    • visualizedData: data used as input for coloration. If the data are 3D vectors, available values are magnitude, X, Y and Z, and the default is magnitude. If the data are scalar values you don't need to set this parameter.
    • position, rotation, scale: 3D vectors representing the position, rotation and scale of the object. Defaults are [0., 0., 0.], [0., 0., 0.] and [1., 1., 1.].
    • visible: defines whether or not the object is visible. Default is true if the block has no children blocks, false otherwise.
    • childrenBlocks: array of children blocks. Default is empty.

    As of today, there are 6 types of plug-in blocks:

    • Threshold: hide areas of your mesh based on a variable's value and bound parameters

      • lowerBound: lower bound used for threshold. Default is 0 (representing dataMin). If inputData is under lowerBound, then it's not displayed.
      • upperBound: upper bound used for threshold. Default is 1 (representing dataMax). If inputData is above upperBound, then it's not displayed.
      • inputData: data used for threshold effect. Default is visualizedData, but you can set it to magnitude, X, Y or Z.
    • ClipPlane: hide a part of the mesh by cutting it with a plane

      • planeNormal: 3D array representing the normal of the plane used for section. Default is [1., 0., 0.].
      • planePosition: position of the plane for the section. It's a scalar scaled between -1 and 1. Default is 0.
    • Slice: make a slice of your mesh

      • sliceNormal
      • slicePosition
    • Warp: deform the mesh along the direction of an input vector data

      • warpFactor: deformation factor. Default is 1, can be negative.
      • inputData: vector data used for warp effect. Default is data, but you can set it to X, Y or Z to use only one vector component.
    • VectorField: represent the input vector data with arrow glyphs

      • lengthFactor: factor of length of vectors. Default is 1, can be negative.
      • inputData
      • nbVectors: max number of vectors. Default is the number of vertices of the mesh (which is the maximum value).
      • mode: mode of distribution. Default is volume, you can set it to surface.
      • distribution: type of distribution. Default is regular, you can set it to random.
    • Points: represent the data with points

      • pointsSize: size of points in pixels. Default is 3.
      • nbPoints
      • mode
      • distribution

    Using those blocks you can easily render interesting 3D scenes like this:

    https://www.logilab.org/file/8571787/raw https://www.logilab.org/file/8572007/raw

    Future work

    • Integration into the Jupyter Notebook
    • As of today you can only define the tree of blocks in a .yml file; we plan to develop a graphical user interface to let users define this tree interactively with drag and drop
    • Support for more file formats (for now only binary files are supported)

  • ngReact: getting angular and react to work together

    2016/08/03 by Nicolas Chauvat

    ngReact is an Angular module that allows React components to be used in AngularJS applications.

    I had to work on enhancing an Angular-based application and wanted to provide the additional functionality as an isolated component that I could develop and test without messing with a large Angular controller that several other people were working on.

    Here is my Angular+React "Hello World", with a couple of gotchas that are not highlighted in the documentation and took me some time to figure out.

    To set things up, just run:

    $ mkdir angulareacthello && cd angulareacthello
    $ npm init && npm install --save angular ngreact react react-dom
    

    Then write into index.html:

    <!doctype html>
    <html>
         <head>
                 <title>my angular react demo</title>
         </head>
         <body ng-app="app" ng-controller="helloController">
                 <div>
                         <label>Name:</label>
                         <input type="text" ng-model="person.name" placeholder="Enter a name here">
                         <hr>
                         <h1><react-component name="HelloComponent" props="person" /></h1>
                 </div>
         </body>
         <script src="node_modules/angular/angular.js"></script>
         <script src="node_modules/react/dist/react.js"></script>
         <script src="node_modules/react-dom/dist/react-dom.js"></script>
         <script src="node_modules/ngreact/ngReact.js"></script>
         <script>
         // include the ngReact module as a dependency for this Angular app
         var app = angular.module('app', ['react']);
    
         // define a controller that has the name attribute
         app.controller('helloController', function($scope) {
                 $scope.person = { name: 'you' };
         });
    
         // define a React component that displays "Hello {name}"
         var HelloComponent = React.createClass({
                 render: function() {
                         return React.DOM.span(null, "Hello "+this.props.name);
                 }
         });
    
         // tell Angular about this React component
         app.value('HelloComponent', HelloComponent);
    
         </script>
    </html>
    

    It took me some time to get a couple of things clear in my mind.

    <react-component> is not a React component, but an Angular directive that delegates to a React component. Therefore, you should not expect the interface of this tag to be the same as that of a React component. More precisely, you can only use the props attribute and cannot set your React properties by adding more attributes to this tag. If you want to be able to write something like <react-component firstname="person.firstname" lastname="person.lastname"> you will have to use reactDirective to create a specific Angular directive.

    You have to set an object as the props attribute of the react-component tag, because it will be used as the value of this.props in the code of your React class. For example, if you set the props attribute to a string (person.name instead of person in the above example), you will have trouble using it on the React side because you will get an object built from the enumeration of the string. Therefore, the above example cannot be made simpler. If we had written $scope.name = 'you' we could not have passed it correctly to the React component.

    The above was tested with angular@1.5.8, ngreact@0.3.0, react@15.3.0 and react-dom@15.3.0.

    All in all, it worked well. Thank you to all the developers and contributors of these projects.


  • Testing salt formulas with testinfra

    2016/07/21 by Philippe Pepiot

    In a previous post we talked about an environment to develop salt formulas. To add some spicy requirements, the formula must now handle multiple target OS (Debian and Centos), have tests and a continuous integration (CI) server setup.

    http://testinfra.readthedocs.io/en/latest/_static/logo.png

    A year ago, I started writing a framework for this purpose. It is called testinfra and is used to execute commands on remote systems and make assertions about the state and the behavior of the system. The modules API provides a pythonic way to inspect the system. It integrates smoothly with pytest, which adds some useful features out of the box, like parametrization to run tests against multiple systems.
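
    For example, here is a minimal sketch (not taken from the formula) of what a test using these modules can look like; the Package and Service fixtures are provided by testinfra's pytest integration:

    def test_salt_minion(Package, Service):
        # Package and Service are testinfra modules injected as pytest fixtures
        assert Package("salt-minion").is_installed
        assert Service("salt-minion").is_running
    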

    Writing useful tests is not an easy task. My advice is to test code that triggers implicit actions, code that has caused issues in the past, or simply to check that the application is working correctly, as you would do in a shell.

    For instance, here is one of the tests I wrote for the saemref formula:

    def test_saemref_running(Process, Service, Socket, Command):
        assert Service("supervisord").is_enabled
    
        supervisord = Process.get(comm="supervisord")
        # Supervisor run as root
        assert supervisord.user == "root"
        assert supervisord.group == "root"
    
        cubicweb = Process.get(ppid=supervisord.pid)
        # Cubicweb should run as saemref user
        assert cubicweb.user == "saemref"
        assert cubicweb.group == "saemref"
        assert cubicweb.comm == "uwsgi"
        # Should have 2 worker processes with 8 threads each and 1 http process with one thread
        child_threads = sorted([c.nlwp for c in Process.filter(ppid=cubicweb.pid)])
        assert child_threads == [1, 8, 8]
    
        # uwsgi should bind on all IPv4 addresses
        assert Socket("tcp://0.0.0.0:8080").is_listening
    
        html = Command.check_output("curl http://localhost:8080")
        assert "<title>accueil (Référentiel SAEM)</title>" in html
    

    Now we can run tests against a running container by giving its name or docker id to testinfra:

    % testinfra --hosts=docker://1a8ddedf8164 test_saemref.py
    [...]
    test/test_saemref.py::test_saemref_running[docker:/1a8ddedf8164] PASSED
    

    The immediate advantage of writing such a test is that you can reuse it for monitoring purposes; testinfra can behave like a Nagios plugin:

    % testinfra -qq --nagios --hosts=ssh://prod test_saemref.py
    TESTINFRA OK - 1 passed, 0 failed, 0 skipped in 2.31 seconds
    .
    

    We can now integrate the test suite in our run-tests.py by adding some code to build and run a provisioned docker image, plus a test command that runs the testinfra tests against it.

    provision_option = click.option('--provision', is_flag=True, help="Provision the container")
    
    @cli.command(help="Build an image")
    @image_choice
    @provision_option
    def build(image, provision=False):
        dockerfile = "test/{0}.Dockerfile".format(image)
        tag = "{0}-formula:{1}".format(formula, image)
        if provision:
            dockerfile_content = open(dockerfile).read()
            dockerfile_content += "\n" + "\n".join([
                "ADD test/minion.conf /etc/salt/minion.d/minion.conf",
                "ADD {0} /srv/formula/{0}".format(formula),
                "RUN salt-call --retcode-passthrough state.sls {0}".format(formula),
            ]) + "\n"
            dockerfile = "test/{0}_provisioned.Dockerfile".format(image)
            with open(dockerfile, "w") as f:
                f.write(dockerfile_content)
            tag += "-provisioned"
        subprocess.check_call(["docker", "build", "-t", tag, "-f", dockerfile, "."])
        return tag
    
    
    @cli.command(help="Spawn an interactive shell in a new container")
    @image_choice
    @provision_option
    @click.pass_context
    def dev(ctx, image, provision=False):
        tag = ctx.invoke(build, image=image, provision=provision)
        subprocess.call([
            "docker", "run", "-i", "-t", "--rm", "--hostname", image,
            "-v", "{0}/test/minion.conf:/etc/salt/minion.d/minion.conf".format(BASEDIR),
            "-v", "{0}/{1}:/srv/formula/{1}".format(BASEDIR, formula),
            tag, "/bin/bash",
        ])
    
    
    @cli.command(help="Run tests against a provisioned container",
                 context_settings={"allow_extra_args": True})
    @click.pass_context
    @image_choice
    def test(ctx, image):
        import pytest
        tag = ctx.invoke(build, image=image, provision=True)
        docker_id = subprocess.check_output([
            "docker", "run", "-d", "--hostname", image,
            "-v", "{0}/test/minion.conf:/etc/salt/minion.d/minion.conf".format(BASEDIR),
            "-v", "{0}/{1}:/srv/formula/{1}".format(BASEDIR, formula),
            tag, "tail", "-f", "/dev/null",
        ]).strip()
        try:
            ctx.exit(pytest.main(["--hosts=docker://" + docker_id] + ctx.args))
        finally:
            subprocess.check_call(["docker", "rm", "-f", docker_id])
    

    Tests can be run on a local CI server or on Travis; they "just" require a docker server. Here is an example .travis.yml:

    sudo: required
    services:
      - docker
    language: python
    python:
      - "2.7"
    env:
      matrix:
        - IMAGE=centos7
        - IMAGE=jessie
    install:
      - pip install testinfra
    script:
      - python run-tests.py test $IMAGE -- -v
    

    I wrote a dummy formula with the above code, feel free to use it as a template for your own formula or open pull requests and break some tests.

    There is a highly enhanced version of this code in the saemref formula repository, including:

    • Building a provisioned docker image with custom pillars, we use it to run an online demo
    • Destructive tests where each test is run in a dedicated "fresh" container (see the sketch after this list)
    • Run Systemd in the containers to get a system close to the production one (this enables the use of the Salt service module)
    • Run a postgresql container linked to the tested container for specific tests like upgrading a Cubicweb instance.
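
    Here is a minimal sketch of what such a per-test "fresh container" fixture could look like. It is not the actual saemref code: the image tag is hypothetical, and it assumes the docker CLI is available and a testinfra version that provides testinfra.get_host():

    import subprocess

    import pytest
    import testinfra

    IMAGE = "saemref-formula:centos7-provisioned"  # hypothetical image tag

    @pytest.fixture
    def fresh_host():
        """Start a throw-away container and return a testinfra host for it."""
        docker_id = subprocess.check_output(
            ["docker", "run", "-d", IMAGE, "tail", "-f", "/dev/null"]
        ).strip().decode()
        try:
            yield testinfra.get_host("docker://" + docker_id)
        finally:
            # the container is destroyed after each test, so tests may break it freely
            subprocess.check_call(["docker", "rm", "-f", docker_id])

    def test_remove_supervisord_conf(fresh_host):
        # destructive: this only affects the container dedicated to this test
        fresh_host.run("rm -f /etc/supervisord.conf")
        assert not fresh_host.file("/etc/supervisord.conf").exists
    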

    Destructive tests rely on advanced pytest features that may produce weird bugs when mixed together; there is a lot of magic involved. Also, handling Systemd in docker is really painful and adds a lot of complexity: for instance, some systemctl commands require a running systemd as PID 1, which is not the case during the docker build phase. So the trade-off between these features and the complexity they bring may not be worth it.

    There are also quite a few new tools for developing and testing infrastructure code that you could include in your stack, like test-kitchen, serverspec, and goss. Choose your weapon and go test your infrastructure code.


  • Developing salt formulas with docker

    2016/07/21 by Philippe Pepiot
    https://www.logilab.org/file/248336/raw/Salt-Logo.png

    While developing salt formulas I was looking for a simple and reproducible environment to allow faster development, less bugs and more fun. The formula must handle multiple target OS (Debian and Centos).

    The first barrier is the master/minion installation of Salt, but fortunately Salt has a masterless mode. The idea is quite simple: bring up a virtual machine, install a Salt minion on it, expose the code inside the VM and call the Salt states.

    https://www.logilab.org/file/7159870/raw/docker.png

    At Logilab we like to work with docker, a lightweight OS-level virtualization solution. One of the key features is docker volumes to share local files inside the container. So I started to write a simple Python script to build a container with a Salt minion installed and run it with formula states and a few config files shared inside the VM.

    The formula I was working on is used to deploy the saemref project, which is a Cubicweb based application:

    % cat test/centos7.Dockerfile
    FROM centos:7
    RUN yum -y install epel-release && \
        yum -y install https://repo.saltstack.com/yum/redhat/salt-repo-latest-1.el7.noarch.rpm && \
        yum clean expire-cache && \
        yum -y install salt-minion
    
    % cat test/jessie.Dockerfile
    FROM debian:jessie
    RUN apt-get update && apt-get -y install wget
    RUN wget -O - https://repo.saltstack.com/apt/debian/8/amd64/latest/SALTSTACK-GPG-KEY.pub | apt-key add -
    RUN echo "deb http://repo.saltstack.com/apt/debian/8/amd64/latest jessie main" > /etc/apt/sources.list.d/saltstack.list
    RUN apt-get update && apt-get -y install salt-minion
    
    % cat test/minion.conf
    file_client: local
    file_roots:
      base:
        - /srv/salt
        - /srv/formula
    

    And finally the run-tests.py file, using the beautiful click module:

    #!/usr/bin/env python
    import os
    import subprocess
    
    import click
    
    @click.group()
    def cli():
        pass
    
    formula = "saemref"
    BASEDIR = os.path.abspath(os.path.dirname(__file__))
    
    image_choice = click.argument("image", type=click.Choice(["centos7", "jessie"]))
    
    
    @cli.command(help="Build an image")
    @image_choice
    def build(image):
        dockerfile = "test/{0}.Dockerfile".format(image)
        tag = "{0}-formula:{1}".format(formula, image)
        subprocess.check_call(["docker", "build", "-t", tag, "-f", dockerfile, "."])
        return tag
    
    
    @cli.command(help="Spawn an interactive shell in a new container")
    @image_choice
    @click.pass_context
    def dev(ctx, image):
        tag = ctx.invoke(build, image=image)
        subprocess.call([
            "docker", "run", "-i", "-t", "--rm", "--hostname", image,
            "-v", "{0}/test/minion.conf:/etc/salt/minion.d/minion.conf".format(BASEDIR),
            "-v", "{0}/{1}:/srv/formula/{1}".format(BASEDIR, formula),
            tag, "/bin/bash",
        ])
    
    
    if __name__ == "__main__":
        cli()
    

    Now I can quickly run multiple containers and test my Salt states inside them while editing the code locally:

    % ./run-tests.py dev centos7
    [root@centos7 /]# salt-call state.sls saemref
    
    [ ... ]
    
    [root@centos7 /]# ^D
    % # The container is destroyed when it exits
    

    Notice that we could add some custom pillars and state files simply by adding specific docker shared volumes.
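
    For instance, a variant of the dev command above could mount extra volumes like this (a sketch only; the test/pillar and test/salt paths are hypothetical, and the minion configuration would also need matching pillar_roots/file_roots entries):

    subprocess.call([
        "docker", "run", "-i", "-t", "--rm", "--hostname", image,
        "-v", "{0}/test/minion.conf:/etc/salt/minion.d/minion.conf".format(BASEDIR),
        "-v", "{0}/test/pillar:/srv/pillar".format(BASEDIR),        # custom pillars
        "-v", "{0}/test/salt:/srv/salt".format(BASEDIR),            # extra state files
        "-v", "{0}/{1}:/srv/formula/{1}".format(BASEDIR, formula),
        tag, "/bin/bash",
    ])
    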

    With a few lines of code we created a lightweight Vagrant-like tool, but faster, using docker instead of VirtualBox, and it remains fully customizable for future needs.


  • Introduction to thesauri and SKOS

    2016/06/27 by Yann Voté

    Recently, I faced the problem of importing the European Union thesaurus, Eurovoc, into CubicWeb using the SKOS cube. Eurovoc doesn't follow the SKOS data model and I'll show here how I managed to adapt Eurovoc to fit in SKOS.

    This article is in two parts:

    • this is the first part where I introduce what a thesaurus is and what SKOS is,
    • the second part will show how to convert Eurovoc to plain SKOS.

    The whole text assumes familiarity with RDF, as describing RDF would require more than a blog entry and is out of scope.

    What is a thesaurus?

    A common need in our digital lives is to attach keywords to documents, web pages, pictures, and so on, so that search is easier. For example, you may want to add two keywords:

    • lily,
    • lilium

    in a picture's metadata about this flower. If you have a large collection of flower pictures, this will make your life easier when you want to search for a particular species later on.

    free-text keywords on a picture

    In this example, keywords are free: you can choose whatever keyword you want, very general or very specific. For example you may just use the keyword:

    • flower

    if you don't care about species. You are also free to use lowercase or uppercase letters, and to make typos...

    free-text keyword on a picture

    On the other hand, sometimes you have to select keywords from a list. Such a constrained list is called a controlled vocabulary. For instance, a very simple controlled vocabulary with only two keywords is the one about a person's gender:

    • male (or man),
    • female (or woman).
    a simple controlled vocabulary

    But there are more complex examples: think about how a library organizes books by themes: there are very general themes (eg. Science), then more and more specific ones (eg. Computer science -> Software -> Operating systems). There may also be synonyms (eg. Computing for Computer science) or referrals (eg. there may be a "see also" link between keywords Algebra and Geometry). Such a controlled vocabulary where keywords are organized in a tree structure, and with relations like synonym and referral, is called a thesaurus.

    an example thesaurus with a tree of keywords

    For the sake of simplicity, in the following we will call thesaurus any controlled vocabulary, even a simple one with two keywords like male/female.

    SKOS

    SKOS, from the World Wide Web Consortium (W3C), is an ontology for the semantic web describing thesauri. To make it simple, it is a common data model for thesauri that can be used on the web. If you have a thesaurus and publish it on the web using SKOS, then anyone can understand how your thesaurus is organized.

    SKOS is very versatile. You can use it to produce very simple thesauri (like male/female) and very complex ones, with a tree of keywords, even in multiple languages.

    To cope with this complexity, the SKOS data model splits each keyword into two entities: a concept and its labels. For example, the concept of a male person has multiple labels: male and man in English, homme and masculin in French. The concept of a lily flower also has multiple labels: lily in English, lilium in Latin, lys in French.

    Among all labels for a given concept, some can be preferred, while others are alternative. There may be only one preferred label per language. In the person's gender example, man may be the preferred label in English and male an alternative one, while in French homme would be the preferred label and masculin an alternative one. In the flower example, lily (resp. lys) is the preferred label in English (resp. French), and lilium is an alternative label in Latin (no preferred label in Latin).

    SKOS concepts and labels

    And of course, in SKOS, it is possible to say that a concept is broader than another one (just like topic Science is broader than topic Computer science).

    So to summarize, in SKOS, a thesaurus is a tree of concepts, and each concept has one or more labels, preferred or alternative. A thesaurus is also called a concept scheme in SKOS.

    Also, please note that the SKOS data model is slightly more complicated than what we've shown here, but this will be sufficient for our purpose.

    RDF URIs defined by SKOS

    In order to publish a thesaurus in RDF using SKOS ontology, SKOS introduces the "skos:" namespace associated to the following URI: http://www.w3.org/2004/02/skos/core#.

    Within that namespace, SKOS defines some classes and predicates corresponding to what has been described above. For example:

    • the triple (<uri>, rdf:type, skos:ConceptScheme) says that <uri> belongs to class skos:ConceptScheme (that is, is a concept scheme),
    • the triple (<uri>, rdf:type, skos:Concept) says that <uri> belongs to class skos:Concept (that is, is a concept),
    • the triple (<uri>, skos:prefLabel, <literal>) says that <literal> is a preferred label for concept <uri>,
    • the triple (<uri>, skos:altLabel, <literal>) says that <literal> is an alternative label for concept <uri>,
    • the triple (<uri1>, skos:broader, <uri2>) says that concept <uri2> is a broader concept of <uri1>.
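
    To make this concrete, here is a small RDFLib sketch (not from the article; the http://example.org/ URIs are made up) that states a few of these triples for the library-themes example:

    from rdflib import Graph, Literal, Namespace, RDF
    from rdflib.namespace import SKOS

    EX = Namespace('http://example.org/thesaurus/')   # made-up namespace

    g = Graph()
    g.add((EX.themes, RDF.type, SKOS.ConceptScheme))                    # a concept scheme
    g.add((EX.science, RDF.type, SKOS.Concept))                         # a concept
    g.add((EX.computer_science, RDF.type, SKOS.Concept))                # another concept
    g.add((EX.science, SKOS.prefLabel, Literal('Science', lang='en')))  # preferred label
    g.add((EX.computer_science, SKOS.prefLabel, Literal('Computer science', lang='en')))
    g.add((EX.computer_science, SKOS.altLabel, Literal('Computing', lang='en')))  # alternative label
    g.add((EX.computer_science, SKOS.broader, EX.science))              # Science is the broader concept

    print(g.serialize(format='turtle'))
    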

  • One way to convert Eurovoc into plain SKOS

    2016/06/27 by Yann Voté

    This is the second part of an article where I show how to import the Eurovoc thesaurus from the European Union into an application using a plain SKOS data model. I've recently faced the problem of importing Eurovoc into CubicWeb using the SKOS cube, and the solution I chose is discussed here.

    The first part was an introduction to thesauri and SKOS.

    The whole article assumes familiarity with RDF, as describing RDF would require more than a blog entry and is out of scope.

    Difficulties with Eurovoc and SKOS

    Eurovoc

    Eurovoc is the main thesaurus covering European Union business domains. It is published and maintained by the EU commission. It is quite complex and big, structured as a tree of keywords.

    You can see Eurovoc keywords and browse the tree from the Eurovoc homepage using the link Browse the subject-oriented version.

    For example, when publishing statistics about education in the EU, you can tag the published data with the broadest keyword Education and communications. Or you can be more precise and use the following narrower keywords, in increasing order of preference: Education, Education policy, Education statistics.

    Problem: hierarchy of thesauri

    The EU commission uses SKOS to publish its Eurovoc thesaurus, so it should be straightforward to import Eurovoc into our own application. But things are not that simple...

    For some reasons, Eurovoc uses a hierarchy of concept schemes. For example, Education and communications is a sub-concept scheme of Eurovoc (it is called a domain), and Education is a sub-concept scheme of Education and communications (it is called a micro-thesaurus). Education policy is (a label of) the first concept in this hierarchy.

    But with SKOS this is not possible: a concept scheme cannot be contained in another concept scheme.

    Possible solutions

    So to import Eurovoc into our SKOS application without losing data, one solution is to turn sub-concept schemes into concepts. We have two strategies:

    • keep only one concept scheme (Eurovoc) and turn domains and micro-thesauri into concepts,
    • keep domains as concept schemes, drop Eurovoc concept scheme, and only turn micro-thesauri into concepts.

    Here we will discuss the latter solution.

    Let's get to work

    Eurovoc thesaurus can be downloaded at the following URL: http://publications.europa.eu/mdr/resource/thesaurus/eurovoc/skos/eurovoc_skos.zip

    The ZIP archive contains only one XML file named eurovoc_skos.rdf. Put it somewhere where you can find it easily.

    To read this file easily, we will use the RDFLib Python library. This library makes it really convenient to work with RDF data. It has only one drawback: it is very slow. Reading the whole Eurovoc thesaurus with it takes a very long time. Making the process faster is the first thing to consider for later improvements.

    Reading the Eurovoc thesaurus is as simple as creating an empty RDF Graph and parsing the file. As said above, this takes a long long time (from half an hour to two hours).

    import rdflib
    
    eurovoc_graph = rdflib.Graph()
    eurovoc_graph.parse('<path/to/eurovoc_skos.rdf>', format='xml')
    
    <Graph identifier=N52834ca3766d4e71b5e08d50788c5a13 (<class 'rdflib.graph.Graph'>)>
    

    We can see that Eurovoc contains more than 2 million triples.

    len(eurovoc_graph)
    
    2828910
    

    Now, before actually converting Eurovoc to plain SKOS, let's introduce some helper functions:

    • the first one, uriref(), will allow us to build RDFLib URIRef objects from simple prefixed URIs like skos:prefLabel or dcterms:title,
    • the second one, capitalized_eurovoc_domain(), is used to convert Eurovoc domain names, which are all uppercase (eg. 32 EDUCATION ET COMMUNICATION), into strings where only the first letter is uppercase (eg. 32 Education et communication).

    import re
    
    from rdflib import Literal, Namespace, RDF, URIRef
    from rdflib.namespace import DCTERMS, SKOS
    
    eu_ns = Namespace('http://eurovoc.europa.eu/schema#')
    thes_ns = Namespace('http://purl.org/iso25964/skos-thes#')
    
    prefixes = {
        'dcterms': DCTERMS,
        'skos': SKOS,
        'eu': eu_ns,
        'thes': thes_ns,
    }
    
    def uriref(prefixed_uri):
        prefix, value = prefixed_uri.split(':', 1)
        ns = prefixes[prefix]
        return ns[value]
    
    def capitalized_eurovoc_domain(domain):
        """Return the given Eurovoc domain name with only the first letter uppercase."""
        return re.sub(r'^(\d+\s)(.)(.+)$',
                      lambda m: u'{0}{1}{2}'.format(m.group(1), m.group(2).upper(), m.group(3).lower()),
                      domain, flags=re.UNICODE)
    
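
    As a quick sanity check (illustrative), the helper turns an uppercase domain name into its capitalized form:

    capitalized_eurovoc_domain(u'32 EDUCATION ET COMMUNICATION')
    
    u'32 Education et communication'
    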

    Now the actual work. After using variables to reference URIs, the loop will parse each triple in the original graph and:

    • discard it if it contains deprecated data,
    • if triple is like (<uri>, rdf:type, eu:Domain), replace it with (<uri>, rdf:type, skos:ConceptScheme),
    • if triple is like (<uri>, rdf:type, eu:MicroThesaurus), replace it with (<uri>, rdf:type, skos:Concept) and add triple (<uri>, skos:inScheme, <domain_uri>),
    • if triple is like (<uri>, rdf:type, eu:ThesaurusConcept), replace it with (<uri>, rdf:type, skos:Concept),
    • if triple is like (<uri>, skos:topConceptOf, <microthes_uri>), replace it with (<uri>, skos:broader, <microthes_uri>),
    • if triple is like (<uri>, skos:inScheme, <microthes_uri>), replace it with (<uri>, skos:inScheme, <domain_uri>),
    • keep triples like (<uri>, skos:prefLabel, <label_uri>), (<uri>, skos:altLabel, <label_uri>), and (<uri>, skos:broader, <concept_uri>),
    • discard all other non-deprecated triples.

    Note that, to replace a micro thesaurus with a domain, we have to build a mapping between each micro thesaurus and its containing domain (microthes2domain dict).

    This loop is also quite long.

    eurovoc_ref = URIRef(u'http://eurovoc.europa.eu/100141')
    deprecated_ref = URIRef(u'http://publications.europa.eu/resource/authority/status/deprecated')
    title_ref = uriref('dcterms:title')
    status_ref = uriref('thes:status')
    class_domain_ref = uriref('eu:Domain')
    rel_domain_ref = uriref('eu:domain')
    microthes_ref = uriref('eu:MicroThesaurus')
    thesconcept_ref = uriref('eu:ThesaurusConcept')
    concept_scheme_ref = uriref('skos:ConceptScheme')
    concept_ref = uriref('skos:Concept')
    pref_label_ref = uriref('skos:prefLabel')
    alt_label_ref = uriref('skos:altLabel')
    in_scheme_ref = uriref('skos:inScheme')
    broader_ref = uriref('skos:broader')
    top_concept_ref = uriref('skos:topConceptOf')
    
    microthes2domain = dict((mt, next(eurovoc_graph.objects(mt, uriref('eu:domain'))))
                            for mt in eurovoc_graph.subjects(RDF.type, uriref('eu:MicroThesaurus')))
    
    new_graph = rdflib.ConjunctiveGraph()
    for subj_ref, pred_ref, obj_ref in eurovoc_graph:
        if deprecated_ref in list(eurovoc_graph.objects(subj_ref, status_ref)):
            continue
        # Convert eu:Domain into a skos:ConceptScheme
        if obj_ref == class_domain_ref:
            new_graph.add((subj_ref, RDF.type, concept_scheme_ref))
            for title in eurovoc_graph.objects(subj_ref, pref_label_ref):
                if title.language == u'en':
                    new_graph.add((subj_ref, title_ref,
                                   Literal(capitalized_eurovoc_domain(title))))
                    break
        # Convert eu:MicroThesaurus into a skos:Concept
        elif obj_ref == microthes_ref:
            new_graph.add((subj_ref, RDF.type, concept_ref))
            scheme_ref = next(eurovoc_graph.objects(subj_ref, rel_domain_ref))
            new_graph.add((subj_ref, in_scheme_ref, scheme_ref))
        # Convert eu:ThesaurusConcept into a skos:Concept
        elif obj_ref == thesconcept_ref:
            new_graph.add((subj_ref, RDF.type, concept_ref))
        # Replace <concept> topConceptOf <MicroThesaurus> by <concept> broader <MicroThesaurus>
        elif pred_ref == top_concept_ref:
            new_graph.add((subj_ref, broader_ref, obj_ref))
        # Replace <concept> skos:inScheme <MicroThes> by <concept> skos:inScheme <Domain>
        elif pred_ref == in_scheme_ref and obj_ref in microthes2domain:
            new_graph.add((subj_ref, in_scheme_ref, microthes2domain[obj_ref]))
        # Keep label triples
        elif (subj_ref != eurovoc_ref and obj_ref != eurovoc_ref
              and pred_ref in (pref_label_ref, alt_label_ref)):
            new_graph.add((subj_ref, pred_ref, obj_ref))
        # Keep existing skos:broader relations and existing concepts
        elif pred_ref == broader_ref or obj_ref == concept_ref:
            new_graph.add((subj_ref, pred_ref, obj_ref))
    

    We can check that we now have far fewer triples than before.

    len(new_graph)
    
    388582
    

    Now we dump this new graph to disk. We choose the Turtle format as it is far more readable than RDF/XML for humans, and slightly faster to parse for machines. This file will contain plain SKOS data that can be directly imported into any application able to read SKOS.

    with open('eurovoc.n3', 'w') as f:
        new_graph.serialize(f, format='n3')
    

    With CubicWeb using the SKOS cube, it is a one-command step:

    cubicweb-ctl skos-import --cw-store=massive <instance_name> eurovoc.n3
    

  • Installing Debian Jessie on a "pure UEFI" system

    2016/06/13 by David Douard

    At the core of the Logilab infrastructure is a highly-available pair of small machines dedicated to our main directory and authentication services: LDAP, DNS, DHCP, Kerberos and Radius.

    The machines are small fanless boxes powered by a 1GHz Via Eden processor, 512MB of RAM and 2GB of storage on a CompactFlash module.

    They have served us well for many years, but now is the time for an improvement. We've bought a pair of Lanner FW-7543B that have the same form-factor. They are not fanless, but are much more powerful. They are pretty nice, but have one major drawback: their firmware does not boot on a legacy BIOS-mode device when set up in UEFI. Another hard point is that they do not have a video connector (there is a VGA output on the motherboard, but the connector is optional), so everything must be done via the serial console.

    https://www.logilab.org/file/6679313/raw/FW-7543_front.jpg

    I knew the Debian Jessie installer would provide everything that is required to handle a UEFI-based system, but it took me a few tries to get it to boot.

    First, I tried the standard netboot image, but the firmware did not want to boot from a USB stick, probably because the image requires an MBR-based bootloader.

    Then I tried to boot from the Refind bootable image and it worked! At least I had the proof that this little beast could boot in UEFI. But, although it is probably possible, I could not figure out how to tweak the Refind config file to make it properly boot the Debian installer kernel and initrd.

    https://www.logilab.org/file/6679257/raw/uefi_lanner_nope.png

    Finally I gave a try to something I know much better: Grub. Here is what I did to have a working UEFI Debian installer on a USB key.

    Partitioning

    First, in the UEFI world, you need a GPT partition table with a FAT partition typed "EFI System":

    david@laptop:~$ sudo fdisk /dev/sdb
    Welcome to fdisk (util-linux 2.25.2).
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.
    
    Command (m for help): g
    Created a new GPT disklabel (GUID: 52FFD2F9-45D6-40A5-8E00-B35B28D6C33D).
    
    Command (m for help): n
    Partition number (1-128, default 1): 1
    First sector (2048-3915742, default 2048): 2048
    Last sector, +sectors or +size{K,M,G,T,P} (2048-3915742, default 3915742):  +100M
    
    Created a new partition 1 of type 'Linux filesystem' and of size 100 MiB.
    
    Command (m for help): t
    Selected partition 1
    Partition type (type L to list all types): 1
    Changed type of partition 'Linux filesystem' to 'EFI System'.
    
    Command (m for help): p
    Disk /dev/sdb: 1.9 GiB, 2004877312 bytes, 3915776 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 52FFD2F9-45D6-40A5-8E00-B35B28D6C33D
    
    Device     Start    End Sectors  Size Type
    /dev/sdb1   2048 206847  204800  100M EFI System
    
    Command (m for help): w
    

    Install Grub

    Now we need to install a grub-efi bootloader in this partition:

    david@laptop:~$ pmount sdb1
    david@laptop:~$ sudo grub-install --target x86_64-efi --efi-directory /media/sdb1/ --removable --boot-directory=/media/sdb1/boot
    Installing for x86_64-efi platform.
    Installation finished. No error reported.
    

    Copy the Debian Installer

    Our next step is to copy the Debian netboot kernel and initrd onto the USB key:

    david@laptop:~$ mkdir /media/sdb1/EFI/debian
    david@laptop:~$ wget -O /media/sdb1/EFI/debian/linux http://ftp.fr.debian.org/debian/dists/jessie/main/installer-amd64/current/images/netboot/debian-installer/amd64/linux
    --2016-06-13 18:40:02--  http://ftp.fr.debian.org/debian/dists/jessie/main/installer-amd64/current/images/netboot/debian-installer/amd64/linux
    Resolving ftp.fr.debian.org (ftp.fr.debian.org)... 212.27.32.66, 2a01:e0c:1:1598::2
    Connecting to ftp.fr.debian.org (ftp.fr.debian.org)|212.27.32.66|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 3120416 (3.0M) [text/plain]
    Saving to: ‘/media/sdb1/EFI/debian/linux’
    
    /media/sdb1/EFI/debian/linux      100%[========================================================>]   2.98M      464KB/s   in 6.6s
    
    2016-06-13 18:40:09 (459 KB/s) - ‘/media/sdb1/EFI/debian/linux’ saved [3120416/3120416]
    
    david@laptop:~$ wget -O /media/sdb1/EFI/debian/initrd.gz http://ftp.fr.debian.org/debian/dists/jessie/main/installer-amd64/current/images/netboot/debian-installer/amd64/initrd.gz
    --2016-06-13 18:41:30--  http://ftp.fr.debian.org/debian/dists/jessie/main/installer-amd64/current/images/netboot/debian-installer/amd64/initrd.gz
    Resolving ftp.fr.debian.org (ftp.fr.debian.org)... 212.27.32.66, 2a01:e0c:1:1598::2
    Connecting to ftp.fr.debian.org (ftp.fr.debian.org)|212.27.32.66|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 15119287 (14M) [application/x-gzip]
    Saving to: ‘/media/sdb1/EFI/debian/initrd.gz’
    
    /media/sdb1/EFI/debian/initrd.g    100%[========================================================>]  14.42M    484KB/s   in 31s
    
    2016-06-13 18:42:02 (471 KB/s) - ‘/media/sdb1/EFI/debian/initrd.gz’ saved [15119287/15119287]
    

    Configure Grub

    Then, we must write a decent grub.cfg file to load these:

    david@laptop:~$ cat >/media/sdb1/boot/grub/grub.cfg <<EOF
    menuentry "Jessie Installer" {
      insmod part_msdos
      insmod ext2
      insmod part_gpt
      insmod fat
      insmod gzio
      echo  'Loading Linux kernel'
      linux /EFI/debian/linux --- console=ttyS0,115200
      echo 'Loading InitRD'
      initrd /EFI/debian/initrd.gz
    }
    EOF
    

    Et voilà, piece of cake!


  • Our work for the OpenDreamKit project during the 77th Sage days

    2016/04/18 by Florent Cayré

    Logilab is part of OpenDreamKit, a Horizon 2020 European Research Infrastructure project that will run until 2019 and provides substantial funding to the open source computational mathematics ecosystem.

    https://www.logilab.org/file/5545539/raw

    One of the goals of this project is to improve the packaging and documentation of SageMath, the open source alternative to Maple and Mathematica.

    The core developers of SageMath organised the 77th Sage days last week, and Logilab took part, with David Douard, Julien Cristau and me, Florent Cayre.

    David and Julien have been working on packaging SageMath for Debian. This is a huge task (several man-months of work), split into two sub-tasks for now:

    • building SageMath with Debian-packaged versions of its dependencies, if available;
    • packaging some of the missing dependencies, starting with the most expected ones, like the latest releases of Jupyter and IPython.
    http://ipython.org/_static/IPy_header.png http://jupyter.org/assets/nav_logo.svg https://www.debian.org/Pics/hotlink/swirl-debian.png

    As a first result, several packages have already been pushed into Debian experimental.

    There is still a lot of work to be done, and packaging the notebook is the next task on the list.

    One hiccup along the way was a Python crash involving multiple inheritance from Cython extension classes. Having people nearby who knew the SageMath codebase well (or even wrote the relevant parts) was invaluable for debugging, and allowed us to blame a recent CPython change.

    Julien also gave a hand to Florent Hivert and Robert Lehmann who were trying to understand why building SageMath's documentation needed this much memory.

    As far as I am concerned, I made a prototype of structured HTML documentation produced with Sphinx and containing executable Python code run on https://tmpnb.org/ thanks to the Thebe JavaScript library, which interfaces statically delivered HTML pages with a Jupyter notebook server.

    The Sage days have been an excellent opportunity to work efficiently on the technical tasks with skillful and enthusiastic people. We would like to thank the OpenDreamKit core team for the organization and their hard work. We look forward to the next workshop.


  • 3D Visualization of simulation data with x3dom

    2016/02/16 by Yuanxiang Wang

    X3DOM Plugins

    As part of the Open Dream Kit project, we are working at Logilab on the creation of tools for mesh data visualization and analysis in a web application. Our goal was to create widgets to use in Jupyter notebook (formerly IPython) for 3D visualization and analysis.

    We found two interesting technologies for 3D rendering: ThreeJS and X3DOM. ThreeJS is a large JavaScript 3D library and X3DOM an HTML5 framework for 3D. After working with both, we chose to use X3DOM because of its high-level architecture. With X3DOM the 3D model is defined in the DOM in HTML5, so the parameters of the nodes can be changed easily with the setAttribute DOM function. This makes the creation of user interfaces and widgets much easier.

    We worked to create new DOM nodes that integrate nicely in a standard X3DOM tree, namely IsoColor, Threshold and ClipPlane.

    We had two goals in mind:

    1. create an X3DOM plugin API that allows one to create new DOM nodes which extend X3DOM functionality;
    2. keep a simple X3DOM-like interface for the final users.

    Example of the plugins Threshold and IsoColor:

    (figure: the Threshold and IsoColor plugins in action)

    The Threshold and IsoColor nodes work like any X3DOM node and react to attribute changes performed with the setAttribute method. This makes it easy to use HTML widgets like sliders / buttons to drive the plugin's parameters.

    X3DOM API

    The goal is to create custom nodes that affect the rendering based on data (positions, pressure, temperature...). The idea is to manipulate the shaders, since this gives low-level control over the 3D rendering. Shaders give more freedom and efficiency compared to reusing other X3DOM nodes. (Reminder: shaders are written in GLSL and run on the GPU.)

    X3DOM has a native node that allows users to write shaders: the ComposedShader node. The problem with this node is that it overwrites the shaders generated by X3DOM. For example, nodes like ClipPlane are disabled when a ComposedShader node is in the DOM. Another example is image texturing: the computation of the color from the texture coordinates has to be written within the ComposedShader.

    In order to add parts of shader code to the generated shader without overwriting it, I created a new node: CustomAttributeNode. This node is a generic node to add uniforms, varyings and shader parts into X3DOM. The data of the geometry (attributes) are set using the X3DOM node named FloatVertexAttribute.

    Example of CustomAttributeNode to create a threshold node:

    (figure: CustomAttributeNode markup for a threshold node)

    The CustomAttributeNode is the entry point in X3DOM for the JavaScript API.

    JavaScript API

    The idea of the API is to create a new node inherited from CustomAttributeNode. We wrote some functions to make the implementation of the node easier.

    Ideas for future improvement

    There are still some points that need improvement:

    • Create a tree widget using the grouping nodes in X3DOM
    • Add high-level functions to X3DGeometricPropertyNode to set the values. For instance the IsoColor node is only a node that sets the values of the TextureCoordinate node from the FloatVertexAttribute node.
    • Add a high-level function to return the variable name needed to overwrite basic attributes like positions in a Geometry. With my API, the IsoColor node uses a varying defined in X3DOM to overwrite the values of the texture coordinate. Because there is no documentation, it is hard for users to find the varying names. On the other hand, there is no specification of the varying names either, so this might need to be maintained.
    • Maybe the CustomAttributeNode should be an X3DChildNode instead of an X3DGeometricPropertyNode.

    (figure: proposed node structure)

    This structure might allow the "use" attribute in X3DOM. That way, X3DOM avoids data duplication and writing too much HTML. The following code illustrates what I expect.

    (figure: example of the expected markup)


  • We went to cfgmgmtcamp 2016 (after FOSDEM)

    2016/02/09 by Arthur Lutz

    Following a day at FOSDEM (another post about it), we spent two days at cfgmgmtcamp in Gent. At cfgmgmtcamp, we obviously spent some time in the Salt track since it's our tool of choice as you might have noticed. But checking out how some of the other tools and communities are finding solutions to similar problems is also great.

    cfgmgmtcamp logo

    I presented Roll out active Supervision with Salt, Graphite and Grafana (mirrored on slideshare), you can find the code on bitbucket.

    http://image.slidesharecdn.com/cfgmgmtcamp2016activesupervisionwithsalt-160203131954/95/cfgmgmtcamp-2016-roll-out-active-supervision-with-salt-graphite-and-grafana-1-638.jpg?cb=1454505737

    We saw :

    Day 1

    • Mark Shuttleworth from Canonical presented Juju and its ecosystem and software modelling. MAAS (Metal As A Service) was demoed on the nice "OrangeBox". It promises to spin up an OpenStack infrastructure in 15 minutes. One of the interesting things with charms and bundles of charms is the interfaces that need to be established between different service bricks. In the salt community we have salt-formulas but they lack maturity in the sense that there's no possibility to plug in multiple formulas that interact with each other... yet.
    juju deploy of openstack
    • Mitch Michell from Hashicorp presented vault. Vault stores your secrets (certificates, passwords, etc.) and we will probably be trying it out in the near future. A lot of concepts in vault are really well thought out and resonate with some of the things we want to do and automate in our infrastructure. The use of Shamir Secret Sharing technique (also used in the debian infrastructure team) for the N-man challenge to unvault the secrets is quite nice. David is already looking into automating it with Salt and having GSSAPI (kerberos) authentication.
    https://www.vaultproject.io/assets/images/hero-95b4a434.png bikes!

    Day 2

    • Gareth Rushgrove from PuppetLabs talked about the importance of metadata in docker images and docker containers by explaining how these greatly benefit tools like dpkg and rpm and that the container community should be inspired by the amazing skills and experience that has been built by these package management communities (think of all the language-specific package managers that each reinvent the wheel one after the other).
    • Testing Immutable Infrastructure: we found some inspiration from test-kitchen and running the tests inside a docker container instead of a vagrant virtual machine. We'll have to take a look at the SaltStack provisioner for test-kitchen. We already do some of that stuff in docker and OpenStack using salt-cloud. But maybe we can take it further with such tools (or testinfra, whose author will be joining Logilab next month).
    coreos, rkt, kubernetes
    • How CoreOS is built, modified, and updated: From repo sync to Omaha by Brian "RedBeard" Harrington. Interesting presentation of the CoreOS system. Brian also revealed that CoreOS is now capable of using the TPM to enforce a signed OS, but also signed containers. Official CoreOS images shipped through Omaha are now signed with a root key that can be installed in the TPM of the host (i.e. they didn't use a pre-installed Microsoft key), along with a modified TPM-aware version of GRUB. For now, the Omaha platform is not open source, so it may not be that easy to build one's own CoreOS images signed with a personal root key, but it is theoretically possible. Brian also said that he expects their Omaha server implementation to become open source some day.
    • The use of Salt in Foreman was presented and demoed by Stephen Benjamin. We'll have to retry using that tool with the newest features of the smart proxy.
    • Jonathan Boulle from CoreOS presented "rkt and Kubernetes: What's new with Container Runtimes and Orchestration". In this last talk, Jonathan gave a tour of the rkt project and how it is used to build, coupled with kubernetes, a comprehensive, secure container running infrastructure (which uses saltstack!). He named the result "rktnetes". The idea is to use rkt as the container runtime of the kubelet (the primary node agent) in a kubernetes cluster powered by CoreOS. Along with the new CoreOS support for a TPM-based trust chain, it makes it possible to ensure completely secured executions, from the bootloader to the container! The possibility to run fully secured containers is one of the reasons why CoreOS developed the rkt project.
    coffee!

    We would like to thank the cfgmgmtcamp organisation team: it was a great conference and we highly recommend it. Thanks for the speaker event the night before the conference, and for the social event on Monday evening (and thanks for the chocolate!).


  • We went to FOSDEM 2016 (and cfgmgmtcamp)

    2016/02/09 by Arthur Lutz

    David & I went to FOSDEM and cfgmgmtcamp this year to attend some talks, give two presentations, and discuss with the members of the open source communities we contribute to.

    https://www.logilab.org/file/4253021/raw/16312670359_565eec1e3d_k.jpg

    At FOSDEM, we started early by doing a presentation at 9.00 am in the "Configuration Management devroom", which, to our surprise, was a large room that was almost full. The presentation was streamed over the Internet and should be available to view shortly.

    I presented "Once you've configured your infrastructure using salt, monitor it by re-using that definition". (mirrored on slideshare. The main part was a demo, the code being published on bitbucket.

    http://image.slidesharecdn.com/fosdem2016describeitmonitorit-160203131836/95/fosdem-2016-after-describing-your-infrastructure-as-code-reuse-that-to-monitor-it-1-638.jpg?cb=1454505792

    The presentation was streamed live (I came across someone that watched it on the Internet to "sleep in"), and should be available to watch when it gets encoded on http://video.fosdem.org/.

    FOSDEM video box

    We then saw the following talks :

    • Unified Framework for Big Data Foreign Data Wrappers (FDW) by Shivram Mani in the Postgresql Track
    • Mainflux Open Source IoT Cloud
    • EzBench, a tool to help you benchmark and bisect the Graphics Stack's performance
    • The RTC components in the debian infrastructure
    • CoreOS: A Linux distribution designed for application containers that scale
    • Using PostgreSQL for Bibliographic Data (since we've worked on http://data.bnf.fr/ with http://cubicweb.org/ and PostgreSQL)
    • The FOSDEM infrastructure review

    Congratulations to all the FOSDEM organisers, volunteers and speakers. We will hopefully be back for more.

    We then took the train to Gent where we spent two days learning and sharing about Configuration Management Systems and all the ecosystem around it (orchestration, containers, clouds, testing, etc.).

    More on our cfgmgmtcamp experience in another blog post.

    Photos under creative commons CC-BY, by Ludovic Hirlimann and Deborah Bryant here and here


  • DebConf15 wrap-up

    2015/08/25 by Julien Cristau
    //www.logilab.org/file/856155/raw/heidelberg-panorama-2.jpg

    I just came back from two weeks in Heidelberg for DebCamp15 and DebConf15.

    In the first week, besides helping out DebConf's infrastructure team with network setup, I tried to make some progress on the library transitions triggered by libstdc++6's C++11 changes. At first, I spent many hours going through header files for a bunch of libraries trying to figure out if the public API involved std::string or std::list. It turns out that is time-consuming, error-prone, and pretty efficient at making me lose the will to live. So I ended up stealing a script from Steve Langasek to automatically rename library packages for this transition. This ended in 29 non-maintainer uploads to the NEW queue, quickly processed by the FTP team. Sadly the transition is not quite there yet, as making progress with the initial set of packages reveals more libraries that need renaming.

    Building on some earlier work from Laurent Bigonville, I've also moved the setuid root Xorg wrapper from the xserver-xorg package to xserver-xorg-legacy, which is now in experimental. Hopefully that will make its way to sid and stretch soon (need to figure out what to do with non-KMS drivers first).

    Finally, with the help of the security team, the security tracker was moved to a new VM that will hopefully not eat its root filesystem every week as the old one was doing the last few months. Of course, the evening we chose to do this was the night DebConf15's network was being overhauled, which made things more interesting.

    DebConf itself was the opportunity to meet a lot of people. I was particularly happy to meet Andreas Boll, who has been a member of pkg-xorg for two years now, working on our mesa package, among other things. I didn't get to see a lot of talks (too many other things going on), but did enjoy Enrico's stand-up comedy, the CitizenFour screening, and Jacob Appelbaum's keynote. Thankfully, for the rest, the video team has done a great job as usual.

    Note

    Above picture is by Aigars Mahinovs, licensed under CC-BY 2.0


  • Going to DebConf15

    2015/08/11 by Julien Cristau

    On Sunday I travelled to Heidelberg, Germany, to attend the 16th annual Debian developer's conference, DebConf15.

    The conference itself is not until next week, but this week is DebCamp, a hacking session. I've already met a few of my DSA colleagues, who've been working on setting up the network infrastructure. My other plans for this week involve helping the Big Transition of 2015 along, and trying to remove the setuid bit from /usr/bin/X in the default Debian install (bug #748203 in particular).

    As for next week, there's a rich schedule in which I'll need to pick a few things to go see.

    //www.logilab.org/file/524206/raw/Dc15going1.png

  • Experiments on building a Jenkins CI service with Salt

    2015/06/17 by Denis Laxalde

    In this blog post, I'll talk about my recent experiments on building a continuous integration service with Jenkins that is, as much as possible, managed through Salt. We've been relying on a Jenkins platform for quite some time at Logilab (Tolosa team). The service was mostly managed by me, with sporadic help from other team-mates, but I've never been entirely satisfied with the way it was managed: it involved a lot of boilerplate configuration through the Jenkins user interface, which does not scale very well nor make long-term maintenance easy.

    So recently I decided to move to a Salt-based configuration and management of our Jenkins CI platform. There are actually two aspects here. The first concerns the setup of Jenkins itself (this includes installation, security configuration and plugin management, amongst other things). The second concerns the management of client projects (or jobs in Jenkins jargon). For this second aspect, one of the design goals was to enable easy configuration of jobs by users not necessarily familiar with Jenkins setup and to make collaborative maintenance easy. To tackle these two aspects I've essentially been using (or developing) two distinct Salt formulas, which I'll detail hereafter.

    Jenkins jobs salt

    Core setup: the jenkins formula

    The core setup of Jenkins is based on an existing Salt formula, the jenkins-formula which I extended a bit to support map.jinja and which was further improved to support installation of plugins by Yann and Laura (see 3b524d4).

    With that, deploying a Jenkins server is as simple as adding the following to your states and pillars top.sls files:

    base:
      "jenkins":
        - jenkins
        - jenkins.plugins
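
    Here "jenkins" is the target of the state assignment, presumably the minion id of the machine hosting Jenkins. On the pillar side, a minimal top.sls sketch could look like the following, assuming the jenkins pillar data shown below is kept in a pillar file named jenkins.sls (a name picked here for illustration):

    base:
      "jenkins":
        - jenkins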
    

    Base pillar configuration is used to declare anything that differs from the default Jenkins settings in a jenkins section, e.g.:

    jenkins:
      lookup:
        home: /opt/jenkins
    

    Plugins configuration is declared in plugins subsection as follows:

    jenkins:
      lookup:
        plugins:
          scm-api:
            url: 'http://updates.jenkins-ci.org/download/plugins/scm-api/0.2/scm-api.hpi'
            hash: 'md5=9574c07bf6bfd02a57b451145c870f0e'
          mercurial:
            url: 'http://updates.jenkins-ci.org/download/plugins/mercurial/1.54/mercurial.hpi'
            hash: 'md5=1b46e2732be31b078001bcc548149fe5'
    

    (Note that plugin dependencies are not handled by Jenkins when installing from the command line, nor by this formula. So in the preceding example, just having an entry for the Mercurial plugin would not have been enough, because this plugin depends on scm-api.)

    Other aspects (such as security setup) are not handled yet (neither by the original formula, nor by our extension), but I tend to believe that it is acceptable to manage these "by hand" for now.

    Jobs management: the jenkins_jobs formula

    For this task, I leveraged the excellent jenkins-job-builder tool, which makes it possible to configure jobs using a declarative YAML syntax. The tool takes care of installing the job and also handles any housekeeping tasks such as checking configuration validity or deleting old configurations. With this tool, my goal was to let end-users of the Jenkins service add their own project by providing, at a minimum, a YAML job description file. So for instance, a simple job description for a CubicWeb job could be:

    - scm:
        name: cubicweb
        scm:
          - hg:
             url: http://hg.logilab.org/review/cubicweb
             clean: true
    
    - job:
        name: cubicweb
        display-name: CubicWeb
        scm:
          - cubicweb
        builders:
          - shell: "find . -name 'tmpdb*' -delete"
          - shell: "tox --hashseed noset"
        publishers:
          - email:
              recipients: cubicweb@lists.cubicweb.org
    

    It consists of two parts:

    • the scm section declares, well, SCM information, here the location of the review Mercurial repository, and,

    • a job section which consists of some metadata (project name), a reference of the SCM section declared above, some builders (here simple shell builders) and a publisher part to send results by email.

    Pretty simple. (Note that most of the test-running configuration is declared within the source repository, via tox (another story), so that the CI bot holds minimal knowledge and fetches information from the sources repository directly.)

    To automate the deployment of this kind of configurations, I made a jenkins_jobs-formula which takes care of:

    1. installing jenkins-job-builder,
    2. deploying YAML configurations,
    3. running jenkins-jobs update to push jobs into the Jenkins instance.

    In addition to installing the YAML file and triggering a jenkins-jobs update run upon changes of job files, the formula allows each job to list the distribution packages it requires for building.
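
    For completeness, here is a sketch of what the states top.sls might look like once this jobs formula is hooked onto the same minion (the jenkins_jobs state name is assumed from the formula name; adjust it to whatever the formula actually exposes):

    base:
      "jenkins":
        - jenkins
        - jenkins.plugins
        - jenkins_jobs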

    Wrapping things up, a pillar declaration of a Jenkins job looks like:

    jenkins_jobs:
      lookup:
        jobs:
          cubicweb:
            file: <path to local cubicweb.yaml>
            pkgs:
              - mercurial
              - python-dev
              - libgecode-dev
    

    where the file section indicates the source of the YAML file to install and pkgs lists build dependencies that are not managed by the job itself (typically non-Python packages in our case).

    So all an end user needs to provide is the YAML file and a pillar snippet similar to the above.
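
    In practice this could mean, for instance, dropping the cubicweb.yaml file alongside your pillar data and referencing the snippet above from the pillar top.sls. A sketch, assuming the snippet is stored in a pillar file named jobs/cubicweb.sls (a layout picked here for illustration):

    base:
      "jenkins":
        - jenkins
        - jobs.cubicweb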

    Outlook

    This initial setup appears to be enough to greatly reduce the burden of managing a Jenkins server and to allow individual users to contribute jobs for their project based on simple contribution to a Salt configuration.

    Later on, there are a few things I'd like to extend on the jenkins_jobs-formula side, most notably the handling of remote sources for the YAML configuration files (and maybe the package lists as well). I'd also like to experiment with configuring slaves for the Jenkins server, possibly relying on Docker (taking advantage of another of my experiments...).


  • Running a local salt-master to orchestrate docker containers

    2015/05/20 by David Douard

    In a recent blog post, Denis explained how to build Docker containers using Salt.

    What's missing there is how to have a running salt-master dedicated to Docker containers.

    There is no need for the salt-master to run as root for this. A test config of mine looks like:

    david@perseus:~$ mkdir -p salt/etc/salt
    david@perseus:~$ cd salt
    david@perseus:~salt/$ cat << EOF >etc/salt/master
    interface: 192.168.127.1
    user: david
    
    root_dir: /home/david/salt/
    pidfile: var/run/salt-master.pid
    pki_dir: etc/salt/pki/master
    cachedir: var/cache/salt/master
    sock_dir: var/run/salt/master
    
    file_roots:
      base:
        - /home/david/salt/states
        - /home/david/salt/formulas/cubicweb
    
    pillar_roots:
      base:
        - /home/david/salt/pillar
    EOF
    

    Here, 192.168.127.1 is the IP of my docker0 bridge. Also note that the paths in the file_roots and pillar_roots configs must be absolute (they are not relative to root_dir, see the salt-master configuration documentation).

    Now we can start a salt-master that will be accessible to Docker containers:

    david@perseus:~salt/$ /usr/bin/salt-master -c etc/salt
    

    Warning

    With salt 2015.5.0, salt-master really wants to execute dmidecode, so add /usr/sbin to the $PATH variable before running salt-master as a non-root user.

    From there, you can talk to your test salt master by adding the -c ~/salt/etc/salt option to all salt commands. Fortunately, you can also set the SALT_CONFIG_DIR environment variable:

    david@perseus:~salt/$ export SALT_CONFIG_DIR=~/salt/etc/salt
    david@perseus:~salt/$ salt-key
    Accepted Keys:
    Denied Keys:
    Unaccepted Keys:
    Rejected Keys:
    

    Now, you need a Docker image with salt-minion already installed, as explained in Denis' blog post. (I prefer using supervisord as PID 1 in my containers, but that's not important here.)

    david@perseus:~salt/$ docker run -d --add-host salt:192.168.127.1  logilab/salted_debian:wheezy
    53bf7d8db53001557e9ae25f5141cd9f2caf7ad6bcb7c2e3442fcdbb1caf5144
    david@perseus:~salt/$ docker run -d --name jessie1 --hostname jessie1 --add-host salt:192.168.127.1  logilab/salted_debian:jessie
    3da874e58028ff6dcaf3999b29e2563e1bc4d6b1b7f2f0b166f9a8faffc8aa47
    david@perseus:~salt/$ salt-key
    Accepted Keys:
    Denied Keys:
    Unaccepted Keys:
    53bf7d8db530
    jessie1
    Rejected Keys:
    david@perseus:~/salt$ salt-key -y -a 53bf7d8db530
    The following keys are going to be accepted:
    Unaccepted Keys:
    53bf7d8db530
    Key for minion 53bf7d8db530 accepted.
    david@perseus:~/salt$ salt-key -y -a jessie1
    The following keys are going to be accepted:
    Unaccepted Keys:
    jessie1
    Key for minion jessie1 accepted.
    david@perseus:~/salt$ salt '*' test.ping
    jessie1:
        True
    53bf7d8db530:
        True
    

    You can now build Docker images as explained by Denis, or test your sls config files in containers.


  • Mini-Debconf Lyon 2015

    2015/04/29 by Julien Cristau
    //www.logilab.org/file/291628/raw/debian-france.png

    A couple of weeks ago I attended the mini-DebConf organized by Debian France in Lyon.

    It was a really nice week-end, and the first time a French mini-DebConf wasn't in Paris :)

    Among the highlights, Juliette Belin reported on her experience as a new contributor to Debian: she authored the awesome "Lines" theme which was selected as the default theme for Debian 8.

    //www.logilab.org/file/291626/raw/juliette.jpg

    As a non-developer and newcomer to the free software community, she had quite interesting insights and ideas about areas where development processes need to improve.

    And Raphael Geissert reported on the new httpredir.debian.org service (previously http.debian.net), an http redirector to automagically pick the closest Debian archive mirror. So long, manual sources.list updates on laptops whenever travelling!

    //www.logilab.org/file/291627/raw/raphael.jpg

    Finally the mini-DebConf was a nice opportunity to celebrate the release of Debian 8, two weeks in advance.

    Now it's time to go and upgrade all our infrastructure to jessie.


  • Building Docker containers using Salt

    2015/04/07 by Denis Laxalde

    In this blog post, I'll talk about a way to use Salt to automate the build and configuration of Docker containers. I will not consider the deployment of Docker containers with Salt as this subject is already covered elsewhere (here for instance). The emphasis here is really on building (or configuring) a container for future deployment.

    Motivation

    Salt is a remote execution framework that can be used for configuration management. It's already widely used at Logilab to manage our infrastructure as well as on a semi-daily basis during our application development activities.

    Docker is a tool that helps automate the deployment of applications within Linux containers. It essentially provides a convenient abstraction and a set of utilities for system level virtualization on Linux. Amongst other things, Docker provides container build helpers around the concept of dockerfile.

    So, the first question is why you would use Salt to build Docker containers when you already have this Dockerfile building tool. My first motivation is to overcome the limitations of the declarations one can put in a Dockerfile. First limitation: you can only execute instructions in a sequential manner using a Dockerfile; there is no possibility of declaring dependencies between instructions or even of making an instruction conditional (apart from using the underlying shell conditional machinery of course). Then, you have only limited possibilities of specializing a Dockerfile. Finally, it's not so easy to apply a configuration step by step, for instance during the development of said configuration.

    That's enough for an introduction to lay down the underlying motivation of this post. Let's move on to more practical things!

    A Dockerfile for the base image

    Before jumping into the usage of Salt for the configuration of a Docker image, the first thing you need to do is to turn a Docker container into a proper Salt minion.

    Assuming we're building on top of some base image of Debian flavour, subsequently referred to as <debian> (I won't tell you where it comes from, since you ought to build your own base image -- or find some friend you trust to provide you with one!), the following Dockerfile can be used to initialize a working image which will serve as the starting point for further configuration with Salt:

    FROM <debian>
    RUN apt-get update
    RUN apt-get install -y salt-minion
    

    Then, run docker build -t docker_salt/debian_salt_minion . and you're done.

    Plugging the minion container into the Salt master

    The next thing to do with our fresh Debian+salt-minion image is to turn it into a container running salt-minion, waiting for the Salt master to instruct it.

    docker run --add-host=salt:10.1.1.1 --hostname docker_minion \
        --name minion_container \
        docker_salt/debian_salt_minion salt-minion
    

    Here:

    • --hostname is used to specify the network name of the container, for easier query by the Salt master,
    • 10.1.1.1 is usually the IP address of the host, which in our example will serve as the Salt master,
    • --name is just used for easier book-keeping.

    Finally,

    salt-key -a docker_minion
    

    will register the new minion's key into the master's keyring.

    If all went well, the following command should succeed:

    salt 'docker_minion' test.ping
    

    Configuring the container with a Salt formula

    Once the minion's key has been accepted, you can apply a Salt formula (or the full highstate) to the container just as you would for any other minion:

    salt 'docker_minion' state.sls some_formula
    salt 'docker_minion' state.highstate
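
    For state.highstate to actually do something, the master's top.sls must assign states to the container. A minimal sketch, assuming the hypothetical some_formula mentioned above is available on the master's file_roots:

    base:
      'docker_minion':
        - some_formula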
    

    Final steps: save the configured image and build a runnable image

    (Optional step, cleanup salt-minion installation.)

    Make a snapshot image of your configured container.

    docker stop minion_container
    docker commit -m 'Install something with Salt' \
        minion_container me/something
    

    Try out your new image:

    docker run -p 8080:80 me/something <entry point>
    

    where <entry point> will be the main program driving the service provided by the container (typically defined through the Salt formula).

    Make a fully configured image for your service:

    FROM me/something
    [...anything else you need, such as EXPOSE, etc...]
    CMD <entry point>
    

  • Monitoring our websites before we deploy them using Salt

    2015/03/11 by Arthur Lutz

    As you might have noticed, we're quite big fans of Salt. One of the things that Salt enables us to do is to apply what we're used to doing with code to our infrastructure. Let's look at TDD (Test Driven Development).

    Write the test first, make it fail, implement the code, test goes green, you're done.

    Apply the same thing to infrastructure and you get TDI (Test Driven Infrastructure).

    So before you deploy a service, you make sure that your supervision (shinken, nagios, icinga, salt-based monitoring, etc.) is doing the correct test, you deploy, and then your supervision goes green.

    Let's take a look at website supervision. At Logilab we weren't too satisfied with how our shinken http_check probes were working, so we started using uptime (nodejs + mongodb). Uptime has a simple REST API to get and add checks, so we wrote a salt execution module and a states module for it.

    https://www.logilab.org/file/288174/raw/68747470733a2f2f7261772e6769746875622e636f6d2f667a616e696e6f74746f2f757074696d652f646f776e6c6f6164732f636865636b5f64657461696c732e706e67.png

    For the sites that use the apache-formula, we simply loop on the domains declared in the pillars to add checks:

    {% for domain in salt['pillar.get']('apache:sites').keys() %}
    uptime {{ domain }} (http):
      uptime.monitored:
        - name : http://{{ domain }}
    {% endfor %}
    

    For other URLs (specific URLs such as sitemaps) we can list them in pillars and do:

    {% for url in salt['pillar.get']('uptime:urls') %}
    uptime {{ url }}:
      uptime.monitored:
        - name : {{ url }}
    {% endfor %}
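
    For reference, these two loops assume pillar data along the following lines (the domain and URL are made up for illustration; the apache:sites layout is the one expected by the apache-formula):

    apache:
      sites:
        www.example.org:
          # ... site configuration ...

    uptime:
      urls:
        - http://www.example.org/sitemap.xml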
    

    That's it. Monitoring comes before deployment.

    We've also contributed a formula for deploying uptime.

    Follow us if you are interested in Test Driven Infrastructure, for we intend to write regular reports as we make progress exploring this new domain.


  • A report on the Salt Sprint 2015 in Paris

    2015/03/05 by Arthur Lutz

    On Wednesday the 4th of march 2015, Logilab hosted a sprint on salt on the same day as the sprint at SaltConf15. 7 people joined in and hacked on salt for a few hours. We collaboratively chose some subjects on a pad which is still available.

    //www.logilab.org/file/248336/raw/Salt-Logo.png

    We started off by familiarising those who had never used them with the tests in salt. Some of us tried to run the tests via tox, which didn't work any more; a fix was found and will be submitted to the project.

    We organised in teams.

    Boris & Julien looked at the authorisation code and wrote a few issues (minion enumeration, acl documentation). On saltpad (client side) they modified the targeting to adapt to the permissions that the salt-api sends back.

    We discussed the salt permission model (external_auth): where should the filter happen? On the master? Should the minion receive information about authorisation and not execute what is being asked for? Boris will summarise some of the discussion about authorisations in a new issue.

    //www.logilab.org/file/288010/raw/IMG_3034.JPG

    Sofian worked on some unification of execution modules (refresh_db, which will be ignored by the modules that don't understand it). He will submit a pull request in the next few days.

    Georges & Paul added some tests to hg_pillar, the test creates a mercurial repository, adds a top.sls and a file and checks that they are visible. Here is the diff. They had some problems while debugging the tests.

    David & Arthur implemented the execution module for managing postgresql clusters (create, list, exists, remove) in debian. A pull request was submitted by the end of the day. A state module should follow shortly. On the way we removed some dead code in the postgres module.

    All in all, we had some interesting discussions about salt and its architecture, shared tips about developing and using it, and managed to get some code done. Thanks to all for participating and hopefully we'll sprint again soon...


  • Generate stats from your SaltStack infrastructure

    2014/12/15 by Arthur Lutz

    As presented at the November french meetup of saltstack users, we've published code to generate some statistics about a saltstack infrastructure. We're using it, for the moment, to identify which parts of our infrastructure need attention. One of the tools we're using to monitor this distance is munin.

    You can grab the code at bitbucket salt-highstate-stats, fork it, post issues, discuss it on the mailing lists.

    If you're french speaking, you can also read the slides of the above presentation (mirrored on slideshare).

    Hope you find it useful.


  • Using Saltstack to limit impact of Poodle SSLv3 vulnerability

    2014/10/15 by Arthur Lutz

    Here at Logilab, we're big fans of SaltStack automation. As seen with Heartbleed, controlling your infrastructure means being able to fix your servers in a matter of a few commands, as documented in this blog post. The same applied to Shellshock more recently, with this blog post.

    Yesterday we got the news that a big vulnerability on SSL was going to be released. Code name: Poodle. This morning we got the details and started working on a fix through salt.

    So far, we've handled configuration changes and service restarts for apache, nginx and postfix, as well as user configuration for iceweasel (debian's firefox) and chromium (adapting to firefox and chrome should be a breeze). Some credit goes to mtpettyp for his answer on askubuntu.

    http://www.logilab.org/file/267853/raw/saltstack_poodlebleed.jpg
    {% if salt['pkg.version']('apache2') %}
    poodle apache server restart:
        service.running:
            - name: apache2
      {% for foundfile in salt['cmd.run']('rgrep -m 1 SSLProtocol /etc/apache*').split('\n') %}
        {% if 'No such file' not in foundfile and 'bak' not in foundfile and foundfile.strip() != ''%}
    poodle {{ foundfile.split(':')[0] }}:
        file.replace:
            - name : {{ foundfile.split(':')[0] }}
            - pattern: "SSLProtocol all -SSLv2[ ]*$"
            - repl: "SSLProtocol all -SSLv2 -SSLv3"
            - backup: False
            - show_changes: True
            - watch_in:
                service: apache2
        {% endif %}
      {% endfor %}
    {% endif %}
    
    {% if salt['pkg.version']('nginx') %}
    poodle nginx server restart:
        service.running:
            - name: nginx
      {% for foundfile in salt['cmd.run']('rgrep -m 1 ssl_protocols /etc/nginx/*').split('\n') %}
        {% if 'No such file' not in foundfile and 'bak' not in foundfile and foundfile.strip() != ''%}
    poodle {{ foundfile.split(':')[0] }}:
        file.replace:
            - name : {{ foundfile.split(':')[0] }}
            - pattern: "ssl_protocols .*$"
            - repl: "ssl_protocols TLSv1 TLSv1.1 TLSv1.2;"
            - show_changes: True
            - watch_in:
                service: nginx
        {% endif %}
      {% endfor %}
    {% endif %}
    
    {% if salt['pkg.version']('postfix') %}
    poodle postfix server restart:
        service.running:
            - name: postfix
    poodle /etc/postfix/main.cf:
    {% if 'main.cf' in salt['cmd.run']('grep smtpd_tls_mandatory_protocols /etc/postfix/main.cf') %}
        file.replace:
            - pattern: "smtpd_tls_mandatory_protocols=.*"
            - repl: "smtpd_tls_mandatory_protocols=!SSLv2,!SSLv3"
    {% else %}
        file.append:
            - text: |
                # poodle fix
                smtpd_tls_mandatory_protocols=!SSLv2,!SSLv3
    {% endif %}
            - name: /etc/postfix/main.cf
            - watch_in:
                service: postfix
    {% endif %}
    
    {% if salt['pkg.version']('chromium') %}
    /usr/share/applications/chromium.desktop:
        file.replace:
            - pattern: Exec=/usr/bin/chromium %U
            - repl: Exec=/usr/bin/chromium --ssl-version-min=tls1 %U
    {% endif %}
    
    {% if salt['pkg.version']('iceweasel') %}
    /etc/iceweasel/pref/poodle.js:
        file.managed:
            - text : pref("security.tls.version.min", "1")
    {% endif %}
    

    The code is also published as a gist on github. Feel free to comment and fork the gist. There is room for improvement, and don't forget that by disabling SSLv3 you might prevent some users with "legacy" browsers from accessing your services.


  • Report from DebConf14

    2014/09/05 by Julien Cristau

    Last week I attended DebConf14 in Portland, Oregon. As usual the conference was a blur, with lots of talks, lots of new people, and lots of old friends. The organizers tried to do something different this year, with a longer conference (9 days instead of a week) and some dedicated hack time, instead of a pre-DebConf "DebCamp" week. That worked quite well for me, as it meant the schedule was not quite so full with talks, and even though I didn't really get any hacking done, it felt a bit more relaxed and allowed some more hallway track discussions.

    http://www.logilab.org/file/264666/raw/Screenshot%20from%202014-09-05%2015%3A09%3A38.png

    On the talks side, the keynotes from Zack and Biella provided some interesting thoughts. Some nice progress was made on making package builds reproducible.

    I gave two talks: an introduction to salt (odp),

    http://www.logilab.org/file/264663/raw/slide2.jpg

    and a report on the Debian jessie release progress (pdf).

    http://www.logilab.org/file/264665/raw/slide3.jpg

    And as usual all talks were streamed live and recorded, and many are already available thanks to the awesome DebConf video team. Also for a change, and because I'm a sucker for punishment, I came back with more stuff to do.


  • Logilab at Debconf 2014 - Debian annual conference

    2014/08/21 by Arthur Lutz

    Logilab is proud to contribute to the annual debian conference which will take place in Portland (USA) from the 23rd to the 31st of august.

    Julien Cristau (debian page) will be giving two talks at the conference:

    http://www.logilab.org/file/263602/raw/debconf2014.png

    Logilab is also contributing to the conference as a sponsor for the event.

    Here is what we previously blogged about salt and the previous debconf. Stay tuned for a blog post about what we saw and heard at the conference.

    https://www.debian.org/logos/openlogo-100.png

  • Pylint 1.3 / Astroid 1.2 released

    2014/07/28 by Sylvain Thenault

    The EP14 Pylint sprint team (more on this here and there) is proud to announce they just released Pylint 1.3 together with its companion Astroid 1.2. As usual, this includes several new features as well as bug fixes. You'll find below a structured list of the changes.

    Packages are uploaded to pypi; debian/ubuntu packages should soon be provided by Logilab, until they get into the standard packaging system of your favorite distribution.

    Please notice Pylint 1.3 will be the last release branch to support Python 2.5 and 2.6. Starting from 1.4, we will only support Python 2.7 and greater. This will be the occasion to do some great cleanup in the code base. Notice this only concerns Pylint's runtime: you will still be able to run Pylint on your Python 2.5 code, provided you run it with Python 2.7 at least.

    New checks

    • Add multiple checks for PEP 3101 advanced string formatting: 'bad-format-string', 'missing-format-argument-key', 'unused-format-string-argument', 'format-combined-specification', 'missing-format-attribute' and 'invalid-format-index'
    • New 'invalid-slice-index' and 'invalid-sequence-index' for invalid sequence and slice indices
    • New 'assigning-non-slot' warning, which detects assignments to attributes not defined in slots

    Improved checkers

    • Fixed 'fixme' false positive (#149)
    • Fixed 'unbalanced-iterable-unpacking' false positive when encountering starred nodes (#273)
    • Fixed 'bad-format-character' false positive when encountering the 'a' format on Python 3
    • Fixed 'unused-variable' false positive when the variable is assigned through an import (#196)
    • Fixed 'unused-variable' false positive when assigning to a nonlocal (#275)
    • Fixed 'pointless-string-statement' false positive for attribute docstrings (#193)
    • Emit 'undefined-variable' when using the Python 3 metaclass= argument. Also fix an 'unused-import' false positive for that construction (#143)
    • Emit 'broad-except' and 'bare-except' even if the number of except handlers is different from 1 (#113)
    • Emit 'attribute-defined-outside-init' for all statements in the same module as the offended class, not just for the last assignment (#262, as well as a long standing output mangling problem in some edge cases)
    • Emit 'not-callable' when calling properties (#268)
    • Don't let ImportError propagate from the imports checker, leading to crash in some namespace package related cases (#203)
    • Don't emit 'no-name-in-module' for ignored modules (#223)
    • Don't emit 'unnecessary-lambda' if the body of the lambda call contains call chaining (#243)
    • Definition order is considered for classes, function arguments and annotations (#257)
    • Only emit 'attribute-defined-outside-init' for definition within the same module as the offended class, avoiding to mangle the output in some cases
    • Don't emit the 'hidden-method' message when the attribute has been monkey-patched; you're on your own when you do that.

    Others changes

    • Checkers are now properly ordered to respect priority (#229)
    • Use the proper mode for pickle when opening and writing the stats file (#148)

    Astroid changes

    • Function nodes can detect decorator call chain and see if they are decorated with builtin descriptors (classmethod and staticmethod).
    • infer_call_result called on a subtype of the builtin type will now return a new Class rather than an Instance.
    • Class.metaclass() now handles module-level __metaclass__ declaration on python 2, and no longer looks at the __metaclass__ class attribute on python 3.
    • Add slots method to Class nodes, for retrieving the list of valid slots it defines.
    • Expose function annotation to astroid: Arguments node exposes 'varargannotation', 'kwargannotation' and 'annotations' attributes, while Function node has the 'returns' attribute.
    • Backported most of the logilab.common.modutils module there, as most things there are for pylint/astroid only and we want to be able to fix them without requiring a new logilab.common release
    • Fix names grabbed using wildcard import in "absolute import mode" (i.e. with absolute_import activated from the __future__ or with python 3) (pylint issue #58)
    • Add support in brain for understanding enum classes.

  • EP14 Pylint sprint Day 2 and 3 reports

    2014/07/28 by Sylvain Thenault
    https://ep2014.europython.eu/static_media/assets/images/logo.png

    Here is the list of things we managed to achieve during those last two days at EuroPython.

    After several attempts, Michal managed to have pylint running analysis on several files in parallel. This is still in a pull request (https://bitbucket.org/logilab/pylint/pull-request/82/added-support-for-checking-files-in) because of some limitations, so we decided it won't be part of the 1.3 release.

    Claudiu killed maybe 10 bugs or so and did some heavy issue cleanup in the trackers. He also demonstrated some experimental support of python 3 style annotations to drive better inference. Pretty exciting! Torsten also killed several bugs, restored python 2.5 compatibility (though that will need a logilab-common release as well), and introduced a new functional test framework that will replace the old one once all the existing tests have been backported. On wednesday, he showed us a near-future feature they already have at Google: some kind of confidence level associated with messages, so that you can filter based on it. Sylvain fixed a couple of bugs (including https://bitbucket.org/logilab/pylint/issue/58/ which was annoying the whole numpy community), started some refactoring of the PyLinter class so it does a little bit fewer things (still way too many though) and attempted to improve the pylint note on both pylint and astroid, which went down recently "thanks" to the new checks like 'bad-continuation'.

    Also, we merged the pylint-brain project into astroid to simplify things, so you should now submit your brain plugins directly to the astroid project. Hopefully you'll be redirected there when attempting to use the old (removed) pylint-brain project on bitbucket.

    And, the good news is that now both Torsten and Claudiu have new powers: they should be able to do some releases of pylint and astroid. To celebrate that and the end of the sprint, we published Pylint 1.3 together with Astroid 1.2. More on this here.


  • EP14 Pylint sprint Day 1 report

    2014/07/24 by Sylvain Thenault
    https://ep2014.europython.eu/static_media/assets/images/logo.png

    We've had a fairly enjoyable and productive first day in our little hidden room at EuroPython in Berlin! Below are some noticeable things we've worked on and discussed.

    First, we discussed and agreed that while we should at some point cut the cord to the logilab.common package, it will take some time, notably because of the usage of logilab.common.configuration, which would be somewhat costly to replace (and is working pretty well). There are some small steps we should take, but basically we should mostly move some pylint/astroid specific things from logilab.common back to astroid or pylint. This should be partly done during the sprint, and remaining work will go to tickets in the tracker.

    We also discussed release management. The point is that we should release more often, so every pylint maintainer should be able to do that easily. Sylvain will write some document about the release procedure and ensure access is granted to the pylint and astroid projects on pypi. We shall release pylint 1.3 / astroid 1.2 soon, and those release branches will be the last ones supporting python < 2.7.

    During this first day, we also had the opportunity to meet Carl Crowder, the guy behind http://landscape.io, as well as David Halter, who is building the Jedi completion library (https://github.com/davidhalter/jedi). Landscape.io runs pylint on thousands of projects, and it would be nice if we could test beta releases on some part of this panel. On the other hand, there is probably a lot of code to share with the Jedi library, like the parser and ast generation, as well as a static inference engine. That deserves a sprint of its own though, so we agreed that a nice first step would be to build a common library for import resolution without relying on the python interpreter for that, while handling most of the python dark import features like zip/egg imports, .pth files and so on. Indeed, those may be two nice future collaborations!

    Last but not least, we got some actual work done:

    • Michal Nowikowski from Intel in Poland joined us to work on the ability to run pylint in different processes, which may drastically improve performance on multi-core boxes.
    • Torsten continued work on various improvements of the functional test framework.
    • Sylvain merged the logilab.common.modutils module into astroid, as it's mostly driven by astroid and pylint needs, and also fixed the annoying namespace package crash.
    • Claudiu kept up the good work he does daily at improving and fixing pylint :)

  • Nazca notebooks

    2014/07/04 by Vincent Michel

    We have just published the following ipython notebooks explaining how to perform record linkage and entity matching with Nazca:


  • Open Legislative Data Conference 2014

    2014/06/10 by Nicolas Chauvat

    I was at the Open Legislative Data Conference on may 28 2014 in Paris, to present a simple demo I have been working on since the same event two years ago.

    The demo was called "Law is Code Rebooted with CubicWeb". It featured the use of the cubicweb-vcreview component to display the amendments of the hospital law ("loi hospitalière") gathered into a version control system (namely Mercurial).

    The basic idea is to compare writing code and writing law, for both are collaborative and distributed writing processes. Could we reuse for the second one the tools developed for the first?

    Here are the slides and a few screenshots.

    http://www.logilab.org/file/253394/raw/lawiscode1.png

    Statistics with queries embedded in report page.

    http://www.logilab.org/file/253400/raw/lawiscode2.png

    List of amendments.

    http://www.logilab.org/file/253396/raw/lawiscode3.png

    User comment on an amendment.

    While attending the conference, I enjoyed several interesting talks and chats with other participants, including:

    1. the study of co-sponsorship of proposals in the french parliament
    2. data.senat.fr announcing their use of PostgreSQL and JSON.
    3. and last but not least, the great work done by RegardsCitoyens and SciencesPo MediaLab on visualizing the law making process.

    Thanks to the organisation team and the other speakers. Hope to see you again!


  • SaltStack Meetup with Thomas Hatch in Paris France

    2014/05/22 by Arthur Lutz

    This monday (19th of may 2014), Thomas Hatch was in Paris for dotScale 2014. After presenting SaltStack there (videos will be published at some point), he spent the evening with members of the French SaltStack community during a meetup set up by Logilab at IRILL.

    http://www.logilab.org/file/248338/raw/thomas-hatch.png

    Here is a list of what we talked about:

    • Since Salt seems to have pushed ZMQ to its limits, SaltStack has been working on RAET (Reliable Asynchronous Event Transport Protocol), a transport layer based on UDP and elliptic curve cryptography (Dan Bernstein's Curve25519) that works more like a stack than a socket and has reliability built in. RAET will be released as an optional beta feature in the next Salt release.
    • Folks from Dailymotion bumped into a bug that seems related to high latency networks and the auth_timeout. Updating to the very latest release should fix the issue.
    • Thomas told us about how a dedicated team at SaltStack handles pull requests and another team works on triaging github issues to input them into their internal SCRUM process. There are a lot of duplicate issues and old inactive issues that need attention and clutter the issue tracker. Help will be welcome.
    http://www.logilab.org/file/248336/raw/Salt-Logo.png
    • Continuous integration is based on Jenkins and spins up VMs to test pull requests. There is work in progress to test multiple clouds, various latencies and loads.
    • For the Docker integration, salt now keeps track of forwarded ports and relevant information about the containers.
    • salt-virt bumped into problems with chroots and timeouts due to ZMQ.
    • Multi-master: the problem lies with synchronisation of the data which is sent to minions, but also of the data that is sent to the masters. Possible solutions to be explored are: the use of gitfs; there is no built-in solution for keys (salt-key has to be run on all masters); mine.send should send the data to both masters; for the jobs cache, one could use an external returner.
    • Thomas talked briefly about ioflo which should bring queuing, data hierarchy and data pub-sub to Salt.
    http://www.logilab.org/file/248335/raw/ioflo.png
    • About the rolling release question: versions in Salt are definitely not git snapshots, things get backported into previous versions. No clear definition yet of length of LTS versions.
    • salt-cloud and libcloud : in the next release, libcloud will not be a hard dependency. Some clouds didn't work in libcloud (for example AWS), so these providers got implemented directly in salt-cloud or by using third-party libraries (eg. python-boto).
    • Documentation: a sprint is planned next week. Reference documentation will not be completely revamped, but tutorial content will be added.

    Boris Feld showed a demo of vagrant images orchestrated by salt and a web UI to monitor a salt install.

    http://www.vagrantup.com/images/logo_vagrant-81478652.png

    Thanks again to Thomas Hatch for coming and meeting up with (part of) the community here in France.


  • Salt April Meetup in Paris (France)

    2014/05/14 by Arthur Lutz

    On the 15th of april, in Paris (France), we took part in yet another Salt meetup. The community is now meeting up once every two months.

    We had two presentations:

    • Arthur Lutz made an introduction to returners and the scheduler using the SalMon monitoring system as an example. Salt is not only about configuration management, indeed!
    • The folks from Is Cool Entertainment did a presentation about how they are using salt-cloud to deploy and orchestrate clusters of EC2 machines (islands in their jargon) to reproduce parts of their production environment for testing and development.

    More discussions about various salty subjects followed and were pursued in an Italian restaurant (photos here).

    In case it is not already in your diary: Thomas Hatch is coming to Paris next week, on Monday the 19th of May, and will be speaking at dotscale during the day and at a Salt meetup in the evening. The Salt Meetup will take place at IRILL (like the previous meetups, thanks again to them) and should start at 19h. The meetup is free and open to the public, but registering on this framadate would be appreciated.


  • Pylint 1.2 released!

    2014/04/22 by Sylvain Thenault

    Once again, a lot of work has been achieved since the latest 1.1 release. Claudiu, who joined the maintainer team (Torsten and me), did great work in the past few months. Also lately Torsten has backported a lot of things from their internal G[oogle]Pylint. Last but not least, various people contributed by reporting issues and proposing pull requests. So thanks to everybody!

    Notice Pylint 1.2 depends on astroid 1.1 which has been released at the same time. Currently, code is available on Pypi, and Debian/Ubuntu packages should be ready shortly on Logilab's acceptance repositories.

    Below is the changes summary, check the changelog for more info.

    New and improved checks:

    • New message 'eval-used' checking that the builtin function eval was used.
    • New message 'bad-reversed-sequence' checking that the reversed builtin receives a sequence (i.e. something that implements __getitem__ and __len__, without being a dict or a dict subclass) or an instance which implements __reversed__.
    • New message 'bad-exception-context' checking that raise ... from ... uses a proper exception context (None or an exception).
    • New message 'abstract-class-instantiated' warning when abstract classes created with the abc module and with abstract methods are instantiated.
    • New messages checking for proper class __slots__: 'invalid-slots-object' and 'invalid-slots'.
    • New message 'undefined-all-variable' if a package's __all__ variable contains a missing submodule (#126).
    • New option logging-modules giving the list of module names that can be checked for 'logging-not-lazy'.
    • New option include-naming-hint to show a naming hint for invalid name (#138).
    • Mark file as a bad function when using python2 (#8).
    • Add support for enforcing multiple, but consistent name styles for different name types inside a single module.
    • Warn about empty docstrings on overridden methods.
    • Inspect arguments given to constructor calls, and emit relevant warnings.
    • Extend the number of cases in which logging calls are detected (#182).
    • Enhance the check for 'used-before-assignment' to look for nonlocal uses.
    • Improve cyclic import detection in the case of packages.

    Bug fixes:

    • Do not warn about 'return-arg-in-generator' in Python 3.3+.
    • Do not warn about 'abstract-method' when the abstract method is implemented through assignment (#155).
    • Do not register most of the 'newstyle' checker warnings with python >= 3.
    • Fix 'unused-import' false positive with augmented assignment (#78).
    • Fix 'access-member-before-definition' false negative with augmented assignment (#164).
    • Do not crash when looking for 'used-before-assignment' in context manager assignments (#128).
    • Do not attempt to analyze non-Python files, e.g. '.so' files (#122).
    • Pass the current python path to pylint process when invoked via epylint (#133).

    Command line:

    • Add -i / --include-ids and -s / --symbols back as completely ignored options (#180).
    • Ensure init-hooks is evaluated before other options, notably load-plugins (#166).

    Other:

    • Improve pragma handling to not detect 'pylint:*' strings in non-comments (#79).
    • Do not crash with UnknownMessage if an unknown message identifier/name appears in disable or enable in the configuration (#170).
    • Search for the rc file in ~/.config/pylintrc if ~/.pylintrc doesn't exist (#121).
    • Python 2.5 support restored (#50 and #62).

    Astroid:

    • Python 3.4 support
    • Enhanced support for metaclass
    • Enhanced namedtuple support

    Nice easter egg, no?


  • Code_Aster back in Debian unstable

    2014/03/31 by Denis Laxalde

    Last week, a new release of Code_Aster entered Debian unstable. Code_Aster is a finite element solver for partial differential equations in mechanics, mainly developed by EDF R&D (Électricité de France). It is arguably one of the most feature-complete pieces of free software available in this domain.

    Aster has been in Debian since 2012 thanks to the work of the debian-science team. Yet it has always been a somewhat problematic package, with a couple of persistent Release Critical (RC) bugs (FTBFS, installability issues), and it actually never entered a stable release of Debian.

    Logilab has committed to improving Code_Aster for a long time in various areas, notably through the LibAster friendly fork, which aims at turning the monolithic Aster into a library, usable from Python.

    Recently, the EDF R&D team in charge of the development of Code_Aster took several major decisions, including:

    • the move to Bitbucket forge as a sign of community opening (following the path opened by LibAster that imported the code of Code_Aster into a Mercurial repository) and,
    • the change of build system from a custom makefile-style architecture to a fine-grained Waf system (taken from that of LibAster).

    The latter obviously led to significant changes on the Debian packaging side, most of which go in a sane direction: the debian/rules file slimmed down from 239 lines to 51 and a bunch of tricky install-step manipulations were dropped, leading to something much simpler and closer to upstream (see #731211 for details). From an upstream perspective, this re-packaging effort based on the new build system may be the opportunity to update the installation scheme (in particular by declaring the Python library as private).

    Clearly, there's still room for improvements on both sides (like building with the new metis library, or shipping several versions of Aster: stable/testing, MPI/serial). All in all, this is good for both Debian users and upstream developers. At Logilab, we hope that this effort will consolidate our collaboration with EDF R&D.


  • Second Salt Meetup builds the french community

    2014/03/04 by Arthur Lutz

    On the 6th of February, the Salt community in France met in Paris to discuss Salt and choose the tools to federate itself. The meetup was kindly hosted by IRILL.

    There were two formal presentations:

    • Logilab did a short introduction of Salt,
    • Majerti presented a feedback of their experience with Salt in various professional contexts.

    The presentation space was then opened to other participants and Boris Feld did a short presentation of how Salt was used at NovaPost.

    http://www.logilab.org/file/226420/raw/saltstack_meetup.jpeg

    We then had a short break to share some pizza (sponsored by Logilab).

    After the break, we had some open discussion about various subjects, including "best practices" in Salt and some specific use cases. Regis Leroy talked about the states that Makina Corpus has been publishing on github. The idea of reconciling the documentation and the monitoring of an infrastructure was brought up by Logilab, which calls it "Test Driven Infrastructure".

    The tools we collectively chose to form the community were the following:

    • a mailing-list kindly hosted by the AFPY (a pythonic french organization)
    • a dedicated #salt-fr IRC channel on freenode

    We decided that the meetup would take place every two months, hence the third one will be in April. There is already some discussion about organizing events to tell as many people as possible about Salt. It will probably start with an event at NUMA in March.

    After the meetup was officially over, a few people went on to have some drinks nearby. Thank you all for coming and your participation.



  • FOSDEM PGDay 2014

    2014/02/11 by Rémi Cardona

    I attended PGDay on January 31st, in Brussels. This event was held just before FOSDEM, which I also attended (expect another blog post). Here are some of the notes I took during the conference.

    https://fosdem.org/2014/support/promote/wide.png

    Statistics in PostgreSQL, Heikki Linnakangas

    Due to transit delays, I only caught the last half of that talk.

    The main goal of this talk was to explain some of Postgres' per-column statistics. In a nutshell, Postgres needs to have some idea about tables' content in order to choose an appropriate query plan.

    Heikki explained which sorts of statistics Postgres gathers, such as most common values and histograms. Another interesting stat is the correlation between physical pages and data ordering (see CLUSTER).

    Column statistics are gathered when running ANALYZE and stored in the pg_statistic system catalog. The pg_stats view provides a human-readable version of these stats.

    Heikki also explained how to determine whether performance issues are due to out-of-date statistics or not. As it turns out, EXPLAIN ANALYZE shows for each step of the query plan how many rows it expected to process and how many it actually processed. The rule of thumb is that similar values (no more than an order of magnitude apart) mean that column statistics are doing their job. A wider margin between expected and actual rows means that statistics are possibly preventing the query planner from picking a more optimized plan.

    It was noted though that statistics-related performance issues often happen on tables with very frequent modifications. Running ANALYZE manually or increasing the frequency of the automatic ANALYZE may help in those situations.

    Advanced Extension Use Cases, Dimitri Fontaine

    Dimitri explained with very simple cases the use of some of Postgres' lesser-known extensions and the overall extension mechanism.

    Here's a grocery-list of the extensions and types he introduced:

    • intarray extension, which adds operators and functions to the standard ARRAY type, specifically tailored for arrays of integers,
    • the standard POINT type which provides basic 2D flat-earth geometry,
    • the cube extension that can represent N-dimensional points and volumes,
    • the earthdistance extension that builds on cube to provide distance functions on a sphere-shaped Earth (a close-enough approximation for many uses),
    • the pg_trgm extension which provides text similarity functions based on trigram matching (a much simpler thus faster alternative to Levenshtein distances), especially useful for "typo-resistant" auto-completion suggestions,
    • the hstore extension which provides a simple-but-efficient key value store that has everyone talking in the Postgres world (it's touted as the NoSQL killer),
    • the hll extension which implements the HyperLogLog algorithm, which seems very well suited to storing and counting unique visitors on a web site, for example.

    An all-around great talk with simple but meaningful examples.

    http://tapoueh.org/images/fosdem_2014.jpg

    Integrated cache invalidation for better hit ratios, Magnus Hagander

    What Magnus presented almost amounted to a tutorial on caching strategies for busy web sites. He went through simple examples, using the ubiquitous Django framework for the web view part and Varnish for the HTTP caching part.

    The whole talk revolved around adding private (X-prefixed) HTTP headers in replies containing one or more "entity IDs" so that Varnish's cache can be purged whenever said entities change. The hard problem lies in how and when to call PURGE on Varnish.

    The obvious solution is to override Django's save() method on Model-derived objects. One can then use httplib (or better yet requests) to purge the cache. This solution can be slightly improved by using Django's signal mechanism instead, which sounds an awful lot like CubicWeb's hooks.

    The problem with the above solution is that any DB modification not going through Django (and they will happen) will not invalidate the cached pages. So Magnus then presented how to write the same cache-invalidating code in PL/Python in triggers.

    While this does solve that last issue, it introduces synchronous HTTP calls in the DB, hurting write performance (or breaking writes completely if the HTTP calls fail). The way to fix those problems, while introducing only limited latency, is to use SkyTools' PgQ, a simple message queue based on Postgres. Moving the HTTP calls outside of the main database and into a Consumer (a class provided by PgQ's python bindings) makes the cache-invalidating trigger asynchronous, reducing write overhead.

    http://www.logilab.org/file/210615/raw/varnish_django_postgresql.png

    A clear, concise and useful talk for any developer in charge of high-traffic web sites or applications.

    The Worst Day of Your Life, Christophe Pettus

    Christophe humorously went back to that dreadful day in the collective Postgres memory: the release of 9.3.1 and the streaming replication chaos.

    My overall impression of the talk: Thank $DEITY I'm not a DBA!

    But Christophe also gave some valuable advice, even for non-DBAs:

    • Provision 3 times the necessary disk space, in case you need to pg_dump or otherwise do a snapshot of your currently running database,
    • Do backups and test them:
      • give them to developers,
      • use them for analytics,
      • test the restore, make it foolproof, try to automate it,
    • basic Postgres hygiene:
      • fsync = on (on by default, DON'T TURN IT OFF, there are better ways)
      • full_page_writes = on (on by default, don't turn it off)
      • deploy minor versions as soon as possible,
      • plan upgrade strategies before EOL,
      • 9.3+ checksums (createdb option, performance cost is minimal),
      • application-level consistency checks (don't wait for auto vacuum to "discover" consistency errors).

    Materialised views now and in the future, Thom Brown

    Thom presented one of the new features of Postgres 9.3, materialized views.

    In a nutshell, materialized views (MV) are read-only snapshots of queried data that's stored on disk, mostly for performance reasons. An interesting feature of materialized views is that they can have indexes, just like regular tables.

    The REFRESH MATERIALIZED VIEW command can be used to update an MV: it will simply run the original query again and store the new results.

    There are a number of caveats with the current implementation of MVs:

    • pg_dump never saves the data, only the query used to build it,
    • REFRESH requires an exclusive lock,
    • due to implementation details (frozen rows or pages IIRC), MVs may exhibit non-concurrent behavior with other running transactions.

    Looking towards 9.4 and beyond, here are some of the upcoming MV features:

    • 9.4 adds the CONCURRENTLY keyword:
      • + no longer needs an exclusive lock, doesn't block reads
      • - requires a unique index
      • - may require VACUUM
    • roadmap (no guarantees):
      • unlogged (disables the WAL),
      • incremental refresh,
      • lazy automatic refresh,
      • planner awareness of MVs (would use MVs as cache/index).

    Indexes: The neglected performance all-rounder, Markus Winand

    http://use-the-index-luke.com/img/alchemie.png

    Markus' goal with this talk was to show that very few people in the SQL world actually know - let alone really care - about indexes. According to his own experience and that of others (even with competing RDBMS), poorly written SQL is still a leading cause of production downtime (he puts the number at around 50% of downtime, though others he quoted put it higher). SQL queries can indeed put such stress on DB systems that they fail.

    One major issue, he argues, is poorly designed indexes. He went back in time to explain possible reasons for this lack of knowledge about indexes among both SQL developers and DBAs. One such reason may be that indexes are not part of the SQL standard and are left as implementation-specific details. Thus many books about SQL barely cover indexes, if at all.

    He then took us through a simple quiz he wrote on the topic, with only 5 questions. The questions and explanations were very insightful and I must admit my knowledge of indexes was not up to par. I think everyone in the room got his message loud and clear: indexes are part of the schema, devs should care about them too.

    Try out the test: http://use-the-index-luke.com/3-minute-test

    PostgreSQL - Community meets Business, Michael Meskes

    For the last talk of the day, Michael went back over the history of the Postgres project and its community. Unlike other IT domains such as email, HTTP servers or even operating systems, RDBMS are still largely dominated by proprietary vendors such as Oracle, IBM and Microsoft. He argues that the reasons are not technical: from a developer standpoint, Postgres has all the features of the leading RDBMS (and many more), and the few missing administrative features related to scalability are being addressed.

    Instead, he argues decision makers inside companies don't yet fully trust Postgres due to its (perceived) lack of corporate backers.

    He went on to suggest ways to overcome those perceptions, for example with an "official" Postgres certification program.

    A motivational talk for the Postgres community.

    http://fosdem2014.pgconf.eu/files/img/frontrotate/slonik.jpg

  • A Salt Configuration for C++ Development

    2014/01/24 by Damien Garaud
    http://www.logilab.org/file/204916/raw/SaltStack-Logo.png

    At Logilab, we've been using Salt for a year to manage our own infrastructure. I wanted to use it to manage a specific configuration: C++ development. When I instantiate a Virtual Machine with a Debian image, I don't want to spend time installing and configuring a system to fit my needs as a C++ developer.

    This article is a very simple recipe to get a C++ development environment, ready to use, ready to hack.

    Give Me an Editor and a DVCS

    Quite simple: I use the YAML file format used by Salt to describe what I want. To install my two editors of choice (Vim and Emacs), I just need to write:

    vim-nox:
      pkg.installed
    
    emacs23-nox:
      pkg.installed
    

    For Mercurial, as you will have guessed:

    mercurial:
        pkg.installed
    

    You can write these lines in the same init.sls file, but you can also decide to split your configuration into different subdirectories: one place for each thing. I decided to create dev and edit directories at the root of my Salt configuration, each with its own init.sls.

    That's all for the editors. Next step: specific C++ development packages.

    Install Several "C++" Packages

    In a cpp folder, I write a file init.sls with this content:

    gcc:
        pkg.installed
    
    g++:
        pkg.installed
    
    gdb:
        pkg.installed
    
    cmake:
        pkg.installed
    
    automake:
        pkg.installed
    
    libtool:
        pkg.installed
    
    pkg-config:
        pkg.installed
    
    colorgcc:
        pkg.installed
    

    The choice of these packages is arbitrary: add or remove some as you need, there is no single right solution. But I want more. I want some LLVM packages. In cpp/llvm.sls, I write:

    llvm:
        pkg.installed
    
    clang:
        pkg.installed
    
    libclang-dev:
        pkg.installed
    
    {% if not grains['oscodename'] == 'wheezy' %}
    lldb-3.3:
        pkg.installed
    {% endif %}
    

    The Jinja condition at the end specifies that the lldb package is only installed if your Debian release is not wheezy (the stable one), i.e. jessie/testing or sid in my case. Now, just include this file in the init.sls one:

    # ...
    # at the end of 'cpp/init.sls'
    include:
      - .llvm
    

    Organize your sls files according to your needs. That's all for package installation. Your Salt configuration now looks like this:

    .
    |-- cpp
    |   |-- init.sls
    |   `-- llvm.sls
    |-- dev
    |   `-- init.sls
    |-- edit
    |   `-- init.sls
    `-- top.sls
    

    Launching Salt

    Start your VM and install a masterless Salt on it (e.g. apt-get install salt-minion). To launch Salt locally on your naked VM, copy your configuration (through scp or a DVCS) into the /srv/salt/ directory and write the top.sls file:

    base:
      '*':
        - dev
        - edit
        - cpp
    

    Then just launch:

    > salt-call --local state.highstate
    

    as root.

    And What About Configuration Files?

    You're right. At the beginning of the post, I talked about a "ready to use" Mercurial with some HG extensions. So I copy the default /etc/mercurial/hgrc.d/hgext.rc file into the dev directory of my Salt configuration and edit it to enable some extensions such as color, rebase and pager. As I also need Evolve, I have to clone the source code from https://bitbucket.org/marmoute/mutable-history. With Salt, I can say "clone this repo here and copy this file there".

    So, I add some lines to dev/init.sls.

    https://bitbucket.org/marmoute/mutable-history:
        hg.latest:
          - rev: tip
          - target: /opt/local/mutable-history
          - require:
             - pkg: mercurial
    
    /etc/mercurial/hgrc.d/hgext.rc:
        file.managed:
          - source: salt://dev/hgext.rc
          - user: root
          - group: root
          - mode: 644
    

    The require keyword means "install (if necessary) this target before cloning". The other lines are quite self-explanatory.

    In the end, you have just six files with a few lines. Your configuration now looks like:

    .
    |-- cpp
    |   |-- init.sls
    |   `-- llvm.sls
    |-- dev
    |   |-- hgext.rc
    |   `-- init.sls
    |-- edit
    |   `-- init.sls
    `-- top.sls
    

    You can customize it and share it with your teammates. A step further would be to add some configuration files for your favorite editor. You can also imagine installing extra packages that your library depends on: simply add a subdirectory amazing_lib and write your own init.sls. I know I often need the Boost libraries, for example. When your Salt configuration has changed, just type salt-call --local state.highstate again.

    As you can see, setting up your environment on a fresh system takes only a couple of commands at the shell before you are ready to compile your C++ library, debug it, fix it and commit your modifications to your repository.


  • What's New in Pandas 0.13?

    2014/01/19 by Damien Garaud
    http://www.logilab.org/file/203841/raw/pandas_logo.png

    Do you know pandas, a Python library for data analysis? Version 0.13 came out on January 16th and this post describes a few new features and improvements that I think are important.

    Each release has its list of bug fixes and API changes. You may read the full release note if you want all the details, but I will just focus on a few things.

    You may be interested in one of my previous blog posts, which showed a few useful pandas features with datasets from the Quandl website and came with an IPython Notebook for reproducing the results.

    Let's talk about some new and improved pandas features. I assume that you have some knowledge of pandas features and main objects such as Series and DataFrame. If not, I suggest you watch the tutorial video by Wes McKinney on the main page of the project or read 10 Minutes to Pandas in the documentation.

    Refactoring

    I welcome the refactoring effort: the Series type, previously subclassed from ndarray, now has the same base class as DataFrame and Panel, i.e. NDFrame. This work unifies methods and behaviors across these classes. Be aware that you can hit a couple of incompatibilities with versions earlier than 0.13. See internal refactoring for more details.

    Timeseries

    to_timedelta()

    The new function pd.to_timedelta converts a string, scalar or array of strings to a NumPy timedelta type (np.timedelta64, in nanoseconds). It requires NumPy >= 1.7. You can take an array of timedeltas and divide it by another timedelta to carry out a frequency conversion.
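
    As a quick illustration of the function itself (a sketch; the accepted string formats are detailed in the release notes):

    import pandas as pd

    # Parse strings into np.timedelta64 values (nanosecond resolution).
    pd.to_timedelta('1 days 06:05:01.00003')
    pd.to_timedelta(['1 days', '2 days', '00:00:05'])

    The example below builds a Series of timedeltas differently, by subtracting two date ranges: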

    from datetime import timedelta
    import numpy as np
    import pandas as pd
    
    # Create a Series of timedelta from two DatetimeIndex.
    dr1 = pd.date_range('2013/06/23', periods=5)
    dr2 = pd.date_range('2013/07/17', periods=5)
    td = pd.Series(dr2) - pd.Series(dr1)
    
    # Set some Na{N,T} values.
    td[2] -= np.timedelta64(timedelta(minutes=10, seconds=7))
    td[3] = np.nan
    td[4] += np.timedelta64(timedelta(hours=14, minutes=33))
    td
    
    0   24 days, 00:00:00
    1   24 days, 00:00:00
    2   23 days, 23:49:53
    3                 NaT
    4   24 days, 14:33:00
    dtype: timedelta64[ns]
    

    Note the NaT type (instead of the well-known NaN). For day conversion:

    td / np.timedelta64(1, 'D')
    
    0    24.000000
    1    24.000000
    2    23.992975
    3          NaN
    4    24.606250
    dtype: float64
    

    You can also use a DateOffset, as in:

    td + pd.offsets.Minute(10) - pd.offsets.Second(7) + pd.offsets.Milli(102)
    

    Nanosecond Time

    Support for nanosecond times has been added as an offset; see pd.offsets.Nano. You can use the alias N of this offset as the value of the freq argument in the pd.date_range function.
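
    For example (a minimal sketch):

    import pandas as pd

    # Five timestamps spaced one nanosecond apart.
    pd.date_range('2014-01-01', periods=5, freq='N')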

    Daylight Savings

    The tz_localize method can now infer a fall daylight savings transition based on the structure of the unlocalized data. This method, like tz_convert, is available for any DatetimeIndex, Series or DataFrame with a DatetimeIndex. You can use it to localize your datasets thanks to the pytz module, or to convert your timeseries to a different time zone. See the related documentation about time zone handling. To use daylight savings inference in tz_localize, set the infer_dst argument to True.
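
    Here is a small sketch, adapted from the time zone handling documentation, around a fall transition in the US/Eastern zone:

    import pandas as pd

    # Wall-clock times around a fall DST transition: 01:00 occurs twice.
    rng = pd.DatetimeIndex(['11/06/2011 00:00', '11/06/2011 01:00',
                            '11/06/2011 01:00', '11/06/2011 02:00',
                            '11/06/2011 03:00'])
    rng.tz_localize('US/Eastern', infer_dst=True)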

    DataFrame Features

    New Method isin()

    The new DataFrame method isin is used for boolean indexing. The argument to this method can be another DataFrame, a Series, or a dictionary of lists of values. Comparing two DataFrames with isin is equivalent to df1 == df2. But you can also check whether values from a list occur in any column, or whether some values occur in a few specific columns of the DataFrame (i.e. using a dict instead of a list as argument):

    df = pd.DataFrame({'A': [3, 4, 2, 5],
                       'Q': ['f', 'e', 'd', 'c'],
                       'X': [1.2, 3.4, -5.4, 3.0]})
    
       A  Q    X
    0  3  f  1.2
    1  4  e  3.4
    2  2  d -5.4
    3  5  c  3.0
    

    and then:

    df.isin(['f', 1.2, 3.0, 5, 2, 'd'])
    
           A      Q      X
    0   True   True   True
    1  False  False  False
    2   True   True  False
    3   True  False   True
    

    Of course, you can use the previous result as a mask for the current DataFrame.

    mask = _
    df[mask.any(1)]
    
       A  Q    X
    0  3  f  1.2
    2  2  d -5.4
    3  5  c  3.0

    When you pass a dictionary to the isin method, you can specify the column labels for each value:

    mask = df.isin({'A': [2, 3, 5], 'Q': ['d', 'c', 'e'], 'X': [1.2, -5.4]})
    df[mask]
    
        A    Q    X
    0   3  NaN  1.2
    1 NaN    e  NaN
    2   2    d -5.4
    3   5    c  NaN
    

    See the related documentation for more details or different examples.

    New Method str.extract

    Pandas 0.13 adds the vectorized extract method to the StringMethods object, available through the str attribute on a Series. It makes it possible to extract data using regular expressions, as follows:

    s = pd.Series(['doe@umail.com', 'nobody@post.org', 'wrong.mail', 'pandas@pydata.org', ''])
    # Extract usernames.
    s.str.extract(r'(\w+)@\w+\.\w+')
    

    returns:

    0       doe
    1    nobody
    2       NaN
    3    pandas
    4       NaN
    dtype: object
    

    Note that the result is a Series holding the matched group, with NaN where the regular expression does not match. You can also add more groups:

    # Extract usernames and domain.
    s.str.extract(r'(\w+)@(\w+\.\w+)')
    
            0           1
    0     doe   umail.com
    1  nobody    post.org
    2     NaN         NaN
    3  pandas  pydata.org
    4     NaN         NaN
    

    Elements that do not match return NaN. You can also use named groups, which is useful if you want more explicit column names (the NaN rows are dropped in the following example):

    # Extract usernames and domain with named groups.
    s.str.extract(r'(?P<user>\w+)@(?P<at>\w+\.\w+)').dropna()
    
         user          at
    0     doe   umail.com
    1  nobody    post.org
    3  pandas  pydata.org
    

    Thanks to this part of the documentation, I also found out about other useful string methods such as split, strip, replace, etc., handy when you handle a Series of str for instance. Note that most of them have been available since 0.8.1. Take a look at the string handling API doc (recently added) and some basics about vectorized string methods.

    Interpolation Methods

    DataFrame has a new interpolate method, similar to Series. It was already possible to interpolate missing data in a DataFrame, but it did not take dates into account when you had a timeseries index. Now, it is possible to pass a specific interpolation method to the method argument of the function. You can use scipy interpolation functions such as slinear, quadratic, polynomial, and others. The time method is used to take your timeseries index into account.

    from datetime import date
    # Arbitrary timeseries
    ts = pd.DatetimeIndex([date(2006,5,2), date(2006,12,23), date(2007,4,13),
                           date(2007,6,14), date(2008,8,31)])
    df = pd.DataFrame(np.random.randn(5, 2), index=ts, columns=['X', 'Z'])
    # Fill the DataFrame with missing values.
    df['X'].iloc[[1, -1]] = np.nan
    df['Z'].iloc[3] = np.nan
    df
    
                       X         Z
    2006-05-02  0.104836 -0.078031
    2006-12-23       NaN -0.589680
    2007-04-13 -1.751863  0.543744
    2007-06-14  1.210980       NaN
    2008-08-31       NaN  0.566205
    

    Without any optional argument, you have:

    df.interpolate()
    
                       X         Z
    2006-05-02  0.104836 -0.078031
    2006-12-23 -0.823514 -0.589680
    2007-04-13 -1.751863  0.543744
    2007-06-14  1.210980  0.554975
    2008-08-31  1.210980  0.566205
    

    With the time method, you obtain:

    df.interpolate(method='time')
    
                       X         Z
    2006-05-02  0.104836 -0.078031
    2006-12-23 -1.156217 -0.589680
    2007-04-13 -1.751863  0.543744
    2007-06-14  1.210980  0.546496
    2008-08-31  1.210980  0.566205
    

    I suggest you read more examples in the missing data part of the documentation and in the scipy documentation about the interpolate module.

    Misc

    You can convert a Series to a single-column DataFrame with its to_frame method.
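
    For instance:

    s = pd.Series([1, 2, 3], name='value')
    s.to_frame()   # a DataFrame with a single column named 'value'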

    Misc & Experimental Features

    Retrieve R Datasets

    Not a killer feature but a very pleasant one: the possibility to load into a DataFrame all the R datasets listed at http://stat.ethz.ch/R-manual/R-devel/library/datasets/html/00Index.html

    import pandas.rpy.common as com
    titanic = com.load_data('Titanic')
    titanic.head()
    
      Survived    Age     Sex Class value
    0       No  Child    Male   1st   0.0
    1       No  Child    Male   2nd   0.0
    2       No  Child    Male   3rd  35.0
    3       No  Child    Male  Crew   0.0
    4       No  Child  Female   1st   0.0
    

    for the dataset about the survival of passengers on the Titanic. You can find many other datasets about New York air quality measurements, body temperature series of two beavers, plant growth results or violent crime rates by US state, for instance. Very useful if you would like to show pandas to a friend, a colleague or your grandma and you do not have a dataset with you.

    And then three great experimental features.

    Eval and Query Experimental Features

    The eval and query methods rely on numexpr, which can quickly evaluate array expressions such as x - 0.5 * y. For numexpr, x and y are NumPy arrays. You can use this powerful feature in pandas to evaluate expressions involving different DataFrame columns. By the way, we already talked about numexpr a few years ago in EuroScipy 09: Need for Speed.

    df = pd.DataFrame(np.random.randn(10, 3), columns=['x', 'y', 'z'])
    df.head()
    
              x         y         z
    0 -0.617131  0.460250 -0.202790
    1 -1.943937  0.682401 -0.335515
    2  1.139353  0.461892  1.055904
    3 -1.441968  0.477755  0.076249
    4 -0.375609 -1.338211 -0.852466
    
    df.eval('x + 0.5 * y - z').head()
    
    0   -0.184217
    1   -1.267222
    2    0.314395
    3   -1.279340
    4   -0.192248
    dtype: float64
    

    With the query method, you can select elements using a very simple query syntax.

    df.query('x >= y > z')
    
              x         y         z
    9  2.560888 -0.827737 -1.326839
    

    msgpack Serialization

    New reading and writing functions serialize your data with the great and well-known msgpack library. Note that this experimental feature does not have a stable storage format. You could imagine using zmq to transfer msgpack-serialized pandas objects over TCP, IPC or SSH, for instance.
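
    A minimal sketch of the round trip (the file name is arbitrary):

    df = pd.DataFrame({'a': [1, 2, 3]})
    df.to_msgpack('frame.msg')      # write to disk
    pd.read_msgpack('frame.msg')    # read back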

    Google BigQuery

    The recent pandas.io.gbq module provides a way to load datasets into and extract them from the Google BigQuery web service. I have not installed the requirements for this feature yet. The example in the release note shows how you can select the average monthly temperature in the year 2000 across the USA. You can also read the related pandas documentation. Note that you will need a BigQuery account, as with other Google products.

    Take Your Keyboard

    Give it a try, play with some data, mangle and plot it, compute some stats, look for patterns or whatever. I'm convinced that pandas will be used more and more, and not only by data scientists or quantitative analysts. Open an IPython Notebook, pick up some data and let yourself be tempted by pandas.

    I think I will make more use of the vectorized string methods that I found out about while writing this post. I'm glad to have learned more about timeseries, because I know I'll use these features. I'm also looking forward to the experimental features, eval/query and msgpack serialization.

    You can follow me on Twitter (@jazzydag). See also Logilab (@logilab_org).


  • Pylint 1.1 christmas release

    2013/12/24 by Sylvain Thenault

    Pylint 1.1 eventually got released on pypi!

    A lot of work has been achieved since the 1.0 release. Various people have contributed several new checks as well as bug fixes and other enhancements.

    Here is a summary of the changes; check the changelog for more info.

    New checks:

    • 'deprecated-pragma', for use of the deprecated pragma directives "pylint:disable-msg" or "pylint:enable-msg" (previously emitted as a regular warning); see the small example after this list.
    • 'superfluous-parens' for unnecessary parentheses after certain keywords.
    • 'bad-context-manager' checking that '__exit__' special method accepts the right number of arguments.
    • 'raising-non-exception' / 'catching-non-exception' when raising/catching a class not inheriting from BaseException.
    • 'non-iterator-returned' for non-iterators returned by '__iter__'.
    • 'unpacking-non-sequence' for unpacking non-sequences in assignments and 'unbalanced-tuple-unpacking' when left-hand-side size doesn't match right-hand-side.
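
    For instance, the deprecated-pragma check flags the old directive form; here is a minimal illustration (the imported modules are arbitrary):

    # Old, deprecated form: now reported as 'deprecated-pragma'
    # pylint: disable-msg=unused-import
    import os

    # Preferred form
    # pylint: disable=unused-import
    import sys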

    Command line:

    • New option for the multi-statement warning to allow single-line if statements.
    • Allow running pylint as a python module: 'python -m pylint' (anatoly techtonik).
    • Various fixes to epylint

    Bug fixes:

    • Avoid false used-before-assignment for an identifier defined by an except handler and used on the same line (#111).
    • 'useless-else-on-loop' is no longer emitted if there is a break in the else clause of an inner loop (#117).
    • Drop 'badly-implemented-container' which caused several problems in its current implementation.
    • Don't mark input as a bad function when using python3 (#110).
    • Use attribute regexp for properties in python3, as in python2
    • Fix false-positive 'trailing-whitespace' on Windows (#55)

    Other:

    • Replaced regexp based format checker by a more powerful (and nit-picky) parser, combining 'no-space-after-operator', 'no-space-after-comma' and 'no-space-before-operator' into a new warning 'bad-whitespace'.
    • Create the PYLINTHOME directory when needed; failing to do so might lead to spurious warnings on import of pylint.config.
    • Fix setup.py so that pylint installs properly on Windows when using python3.
    • Various documentation fixes and enhancements

    Packages will be available in Logilab's Debian and Ubuntu repositories in the next few weeks.

    Happy Christmas!


  • SaltStack Paris Meetup on Feb 6th, 2014 - (S01E02)

    2013/12/20 by Nicolas Chauvat

    Logilab has set up the second meetup for salt users in Paris on Feb 6th, 2014 at IRILL, near Place d'Italie, starting at 18:00. The address is 23 avenue d'Italie, 75013 Paris.

    Here is the announcement in French: http://www.logilab.fr/blogentry/1981

    Please forward it to whoever may be interested, underlining that pizzas will be offered to refuel the chatters ;)

    Conveniently placed a week after the Salt Conference, topics will include anything related to Salt and its uses: demos, new ideas, exchange of Salt formulas, comments on the SaltConf talks/videos, etc.

    If you are interested in Salt, Python and Devops and will be in Paris at that time, we hope to see you there!


  • A quick take on continuous integration services for Bitbucket

    2013/12/19 by Sylvain Thenault

    Some time ago, we moved Pylint from this forge to Bitbucket (more on this here).

    https://bitbucket-assetroot.s3.amazonaws.com/c/photos/2012/Oct/11/master-logo-2562750429-5_avatar.png

    Since then, I have somewhat continued to use the continuous integration (CI) service we provide on logilab.org to run tests on new commits, and to do the release job (publish a tarball on pypi and on our web site, build Debian and Ubuntu packages, etc.). This is fine, but not really handy, since logilab.org's CI service is not designed to be used for projects hosted elsewhere. I also wanted to see what others have to offer, so I decided to find a public CI service to host at least the Pylint and Astroid automatic tests.

    Here are the results of my first swing at it. If you have other suggestions, configuration proposals or whatever, please comment.

    First, here are the ones I didn't test along with why:

    The first one I actually tested, and also the first one to show up when looking for "bitbucket continuous integration" on Google, is https://drone.io. The UI is really simple and I was able to set up tests for Pylint in a matter of minutes: https://drone.io/bitbucket.org/logilab/pylint. Tests are automatically launched when a new commit is pushed to Pylint's Bitbucket repository, and that setup was done automatically.

    Trying to push Drone.io further, one missing feature is the ability to have different settings for my project, e.g. to launch tests on all the Python flavors officially supported by Pylint (2.5, 2.6, 2.7, 3.2, 3.3, pypy, jython, etc.). Last but not least, the missing killer feature for me is the ability to launch tests on pull requests, which travis-ci supports.

    Then I gave http://wercker.com a shot, but got stuck at the Bitbucket repository selection screen: none were displayed. Maybe because I don't own Pylint's repository and am only part of the admin/dev team? Anyway, wercker seems appealing too, though the configuration using YAML looks a bit more complicated than drone.io's; as I was not able to test it further, there's not much else to say.

    https://www.logilab.org/file/4758432/raw/wercker.png

    So for now the winner is https://drone.io, but the first one allowing me to test on several Python versions and to launch tests on pull requests will be the definitive winner! Bonus points for automating the release process and checking test coverage on pull requests as well.

    https://drone.io/drone3000/images/alien-zap-header.png

  • A retrospective of 10 years animating the pylint free software project

    2013/11/25 by Sylvain Thenault

    was the topic of the talk I gave last Saturday at Capitole du Libre in Toulouse.

    Here are the slides (pdf) for those interested (in French). A video of the talk should be available soon on the Capitole du Libre web site. The slides are mirrored on slideshare (see below):

