
News from Logilab and our Free Software projects, as well as on topics dear to our hearts (Python, Debian, Linux, the semantic web, scientific computing...)

  • We're happy to host the mercurial Sprint

    2010/02/02 by Arthur Lutz

    We're very happy to be hosting the next mercurial sprint in our brand new offices in central Paris. It is quite an honor to be chosen when the other contender was Google.

    So a bunch of mercurial developers are heading to our offices this coming Friday to sprint for three days on mercurial. We use mercurial a lot here at Logilab, and we also contribute a tool to visualize and manipulate mercurial repositories: hgview.

    To see what we will be working on with the mercurial crew, check out the program of the sprint on their wiki.

    What is a sprint? "A sprint (sometimes called a Code Jam or hack-a-thon) is a short time period (three to five days) during which software developers work on a particular chunk of functionality. 'The whole idea is to have a focused group of people make progress by the end of the week,' explains Jeff Whatcott" [source]. For geographically distributed open source communities, it is also a way of physically meeting and working in the same room for a period of time.

    Sprinting is a practice that we encourage at Logilab: with CubicWeb we organize open sprints as often as possible, giving users and developers an opportunity to come and code with us. We even use the sprint format for some internal work.

    photo by Sebastian Mary under creative commons licence.

  • hgview 1.2.0 released

    2010/01/21 by David Douard

    At last, here is the release of version 1.2.0 of hgview.

    In a nutshell, this release includes:

    • basic support for the mq extension,
    • basic support for the hg-bfiles extension,
    • the working directory is now displayed as a node of the graph (if there are local modifications, of course),
    • it's now possible to display only the subtree from a given revision (a bit like hg log -f),
    • it's also possible to activate an annotate view (this makes navigation slower, however),
    • several improvements in the graph filling and rendering mechanisms,
    • toolbar icons for the search and goto "quickbars", so they are no longer hidden from those reluctant to read user manuals,
    • it's now possible to go directly to the common ancestor of 2 revisions,
    • when on a merge node, it's now possible to choose the parent the diff is computed against,
    • search now also looks in commit messages (it used to search only in diff contents),
    • and several bugfixes, of course.
    There are packages for Debian lenny, squeeze and sid, and for Ubuntu hardy, intrepid, jaunty and karmic. However, the lenny and hardy packages won't work on stock distributions, since hgview 1.2 depends on mercurial 1.1; for these two distributions, the packages will only work if you have installed backported mercurial packages.

  • New supported repositories for Debian and Ubuntu

    2010/01/21 by Arthur Lutz

    With the release of hgview 1.2.0 in our Karmic Ubuntu repository, we would like to announce that we are now going to generate packages for the following distributions:

    • Debian Lenny (because it's stable)
    • Debian Sid (because it's the dev branch)
    • Ubuntu Hardy (because it has Long Term Support)
    • Ubuntu Karmic (because it's the current stable)
    • Ubuntu Lucid (because it's the next stable) - no repo yet, but soon...

    The old packages for the previously supported distributions (etch, jaunty, intrepid) are still accessible, but new versions will not be generated for these repositories. Packages will be published as versions get released; if you need a package before then, give us a shout and we'll see what we can do.

    For instructions on how to use the repositories for Ubuntu or Debian, go to the following page:

  • Open Source/Design Hardware

    2009/12/13 by Nicolas Chauvat

    I have been doing free software since I discovered it existed. I bought an OpenMoko some time ago, since I am interested in anything that is open, including artwork like books, music, movies and... hardware.

    I just learned about two lists, one at Wikipedia and another one at MakeOnline, but Google has more. Explore and enjoy!

  • Solution to a common Mercurial task

    2009/12/10 by David Douard

    An interesting question has just been sent by Greg Ward on the Mercurial devel mailing-list (as a funny coincidence, it happened that I had to solve this problem a few days ago).

    Let me quote his message:

    here's my problem: imagine a customer is running software built from
    changeset A, and we want to upgrade them to a new version, built from
    changeset B.  So I need to know what bugs are fixed in B that were not
    fixed in A.  I have already implemented a changeset/bug mapping, so I
    can trivially lookup the bugs fixed by any changeset.  (It even handles
    "ongoing" and "reverted" bugs in addition to "fixed".)

    And he gives an example of a situation where a tricky case arises:

                    +--- 75 -- 78 -- 79 ------------+
                   /                                 \
                  /     +-- 77 -- 80 ---------- 84 -- 85
                 /     /                        /
    0 -- ... -- 74 -- 76                       /
                       \                      /
                        +-- 81 -- 82 -- 83 --+

    So what is the problem?

    Imagine the latest distributed stable release was built on rev 81. Now I need to publish a new bugfix release based on this latest stable version, including every changeset that is a bugfix but has not yet been applied at revision 81.

    So the first problem to solve is: which revisions are ancestors of revision 85 but not ancestors of revision 81?

    Command line solution

    Using hg commands, a solution was proposed by Steve Losh:

    hg log --template '{rev}\n' --rev 85:0 --follow --prune 81

    or better, as suggested by Matt:

    hg log -q --template '{rev}\n' --rev 85:0 --follow --prune 81

    The second is better since it only reads the index, and is thus much faster. But on big repositories this command remains quite slow (in Greg's situation, a repository of more than 100,000 revisions, the command takes more than 2 minutes).

    Python solution

    Using Python, one may think of using revlog.nodesbetween(), but it won't do what we want here: it fails to list revisions 75, 78 and 79.

    On the mailing list, Matt gave the simplest and most efficient solution:

    cl = repo.changelog
    a = set(cl.ancestors(81))
    b = set(cl.ancestors(85))
    revs = b - a
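
    To see why the set difference gives the right answer, here is a self-contained sketch (plain Python, no Mercurial needed) replaying it on the example DAG above. The parents table transcribes the graph, using 74 as the root for brevity; note that unlike Mercurial's changelog API, this toy ancestors() includes the starting revision itself:

```python
# Toy parent table for the DAG above (74 taken as the root for brevity).
parents = {
    75: [74], 76: [74], 77: [76], 78: [75], 79: [78],
    80: [77], 81: [76], 82: [81], 83: [82], 84: [80, 83], 85: [79, 84],
}

def ancestors(rev):
    """Return rev plus all its ancestors (inclusive, unlike hg's API)."""
    seen, todo = set(), [rev]
    while todo:
        r = todo.pop()
        if r not in seen:
            seen.add(r)
            todo.extend(parents.get(r, []))
    return seen

missing = ancestors(85) - ancestors(81)
print(sorted(missing))  # [75, 77, 78, 79, 80, 82, 83, 84, 85]
```

    Revisions 75, 78 and 79 are correctly included, which is exactly what nodesbetween() missed.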

    Idea for a new extension

    Building on this simple Python code, it should be easy to write a nice Mercurial extension (which could be named missingrevisions) to do this job.

    It would also be interesting to implement some filtering. For example, if simple conventions are used in commit messages, e.g. something like "[fix #1245]" or "[close #1245]" when the changeset fixes a bug listed in the bug tracker, then we may type commands like:

    hg missingrevs REV -f bugfix


    hg missingrevs REV -h HEADREV -f bugfix

    to find bugfix revisions that are ancestors of HEADREV but not ancestors of REV.

    Filters ('bugfix' here) could be made configurable in hgrc using regexps.
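
    As a sketch of how such filtering might look, the snippet below applies the "[fix #...]" / "[close #...]" convention to a set of candidate revisions. The commit messages here are made up purely for illustration:

```python
import re

# Hypothetical commit messages for the candidate revisions; the
# "[fix #NNN]" / "[close #NNN]" convention is the one suggested above.
messages = {
    82: "[fix #1245] guard against empty changelog",
    83: "refactor internals",
    84: "merge with stable",
    85: "[close #1247] handle merge parents",
}

BUGFIX = re.compile(r"\[(?:fix|close) #(\d+)\]")

def bugfix_revs(revs):
    """Keep only revisions whose commit message matches the convention."""
    return sorted(r for r in revs if r in messages and BUGFIX.search(messages[r]))

print(bugfix_revs({82, 83, 84, 85}))  # [82, 85]
```

    In a real extension, the regexp itself would be what gets read from hgrc.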

  • pylint bug day report

    2009/12/04 by Pierre-Yves David

    The first pylint bug day took place on Wednesday, November 25th. Four members of the Logilab crew and two other people spent the day working on pylint.

    Several patches submitted before the bug day were processed and some tickets were closed.

    Charles Hébert added James Lingard's patches for string formatting and is working on several improvements. Vincent Férotin submitted a patch for simple message listings. Sylvain Thenault fixed significant inference bugs in astng (the underlying module of pylint that manages the syntax tree). Émile Anclin began a major astng refactoring to take advantage of new Python 2.6 functionality. For my part, I made several improvements to the test suite, applied James Lingard's patches for the ++ operator and generalised them to -- as well, and added a new checker for function call arguments, once again submitted by James Lingard. Finally, I improved the message filtering of the --errors-only option.

    We thank Maarten ter Huurne and Vincent Férotin for their participation, and of course James Lingard for submitting numerous patches.

    Another pylint bug day will be held in a few months.

    image under creative commons by smccann

  • Recap of the first Coccinelle users day

    2009/11/30 by Andre Espaze

    A matching and transformation tool for systems code

    Coccinelle's goal is to ease code maintenance, first by revealing code smells based on design patterns, and second by easing API (Application Programming Interface) changes for a heavily used library. Coccinelle can thus be seen as two tools in one: the first matches patterns, the second applies transformations. Facing such a big problem, however, the project needed to define boundaries in order to increase its chances of success. The driving motivation was thus to target the Linux kernel. This choice implied a tool working on the C programming language before the preprocessor step. Moreover, the Linux code base adds interesting constraints: it is huge, contains many possible configurations depending on C macros, may contain many bugs, and evolves a lot. What was Coccinelle's solution for easing kernel maintenance?

    Generating diff files from the semantic patch language

    The Linux community reads lots of diff files to follow the kernel's evolution. As a consequence, the diff file syntax is widely spread and commonly understood. However, this syntax describes a particular change between two files; it does not allow matching a generic pattern.

    Coccinelle's solution is to build its own language for declaring rules that describe a code pattern and a possible transformation. This language is the Semantic Patch Language (SmPL), based on the declarative approach of the diff file syntax. It allows propagating a change rule to many files by generating diff files. Those results can then be applied directly using the patch command, but most of the time they will be reviewed and may be slightly adapted to the programmer's needs.

    A Coccinelle rule is made of two parts: a metavariable declaration, and a code pattern match followed by a possible transformation. A metavariable is a control-flow variable; its possible names inside the program do not matter. The code pattern then describes a particular control flow in the program, using the C and SmPL syntaxes to manipulate the metavariables. Coccinelle succeeds in generating diff files because it works on the C program's control flow.

    A complete SmPL description will not be given here, as it can be found in the Coccinelle documentation. However, here is a brief introduction to rule declarations. The metavariable part will look like this:

    expression E;
    constant C;

    'expression' stands for a variable or the result of a function, while 'constant' stands for a C constant. Then, to negate the result of an AND operation between an expression and a constant, instead of negating the expression first, the transformation part will be:

    - !E & C
    + !(E & C)
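
    To see why this rewrite matters: in C, ! binds tighter than &, so !E & C negates E alone rather than the AND. The difference, translated into Python for illustration (with int() standing in for C's 0/1 booleans):

```python
e, c = 2, 4  # e & c == 0, so the intended !(e & c) is true

buggy = int(not e) & c    # (!E) & C : negate e first, then AND -> 0 & 4 == 0
fixed = int(not (e & c))  # !(E & C) : AND first, then negate  -> not 0 == 1

print(buggy, fixed)  # 0 1
```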

    A file containing several rules like this is called a semantic patch. It is applied using the Coccinelle 'spatch' command, which generates a change in the diff file syntax each time the above pattern is matched. The next section illustrates this workflow.

    A working example on the Linux kernel 2.6.30

    If you want to run the following example, you can download and install the Coccinelle 'spatch' command from its website. Let's first consider the following structure with accessors in the header 'device.h':

    struct device {
        void *driver_data;
    };
    static inline void *dev_get_drvdata(const struct device *dev)
    {
        return dev->driver_data;
    }
    static inline void dev_set_drvdata(struct device *dev, void *data)
    {
        dev->driver_data = data;
    }

    It imitates the 2.6.30 kernel header 'include/linux/device.h'. Now let's consider the following client code that does not use the accessors:

    #include <stdlib.h>
    #include <assert.h>
    #include "device.h"

    int main(void)
    {
        struct device devs[2], *dev_ptr;
        int data[2] = {3, 7};
        void *a = NULL, *b = NULL;

        devs[0].driver_data = (void*)(&data[0]);
        a = devs[0].driver_data;
        dev_ptr = &devs[1];
        dev_ptr->driver_data = (void*)(&data[1]);
        b = dev_ptr->driver_data;
        assert(*((int*)a) == 3);
        assert(*((int*)b) == 7);
        return 0;
    }

    Once this code is saved in the file 'fake_device.c', we can check that it compiles and runs:

    $ gcc fake_device.c && ./a.out

    We will now create a semantic patch 'device_data.cocci' trying to add the getter accessor with this first rule:

    @@
    struct device dev;
    @@
    - dev.driver_data
    + dev_get_drvdata(&dev)

    The 'spatch' command is then run by:

    $ spatch -sp_file device_data.cocci fake_device.c

    producing the following change in a diff file:

    -    devs[0].driver_data = (void*)(&data[0]);
    -    a = devs[0].driver_data;
    +    dev_get_drvdata(&devs[0]) = (void*)(&data[0]);
    +    a = dev_get_drvdata(&devs[0]);

    which illustrates Coccinelle's way of working on the program's control flow. However, the transformation has also matched code where the setter accessor should be used. We will thus add a rule above the previous one; the semantic patch becomes:

    @@
    struct device dev;
    expression data;
    @@
    - dev.driver_data = data
    + dev_set_drvdata(&dev, data)

    @@
    struct device dev;
    @@
    - dev.driver_data
    + dev_get_drvdata(&dev)

    Running the command again will produce the wanted output:

    $ spatch -sp_file device_data.cocci fake_device.c
    -    devs[0].driver_data = (void*)(&data[0]);
    -    a = devs[0].driver_data;
    +    dev_set_drvdata(&devs[0], (void *)(&data[0]));
    +    a = dev_get_drvdata(&devs[0]);

    It is important to write the setter rule before the getter rule; otherwise the getter rule would be applied first to the whole file.
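
    Coccinelle matches control flow rather than raw text, but the ordering pitfall can be mimicked with naive regex rewrites. This is only a sketch of the ordering effect, not of Coccinelle's real semantics:

```python
import re

code = "devs[0].driver_data = (void*)(&data[0]); a = devs[0].driver_data;"

def apply_getter(s):
    # Naive getter rule: rewrites every access, even assignment targets.
    return re.sub(r"(\w+\[\d+\])\.driver_data", r"dev_get_drvdata(&\1)", s)

def apply_setter(s):
    # Setter rule: only matches accesses that are assigned to.
    return re.sub(r"(\w+\[\d+\])\.driver_data\s*=\s*([^;]+)",
                  r"dev_set_drvdata(&\1, \2)", s)

wrong = apply_setter(apply_getter(code))  # getter first: nothing left to set
right = apply_getter(apply_setter(code))  # setter first: both rules apply

print(wrong)  # contains "dev_get_drvdata(&devs[0]) = ...", an invalid assignment
print(right)  # dev_set_drvdata(...) for the store, dev_get_drvdata(...) for the load
```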

    At this point our semantic patch is still incomplete, because it does not work on 'device' structure pointers. Using the same logic, let's add pointer rules to the 'device_data.cocci' semantic patch:

    @@
    struct device dev;
    expression data;
    @@
    - dev.driver_data = data
    + dev_set_drvdata(&dev, data)

    @@
    struct device *dev;
    expression data;
    @@
    - dev->driver_data = data
    + dev_set_drvdata(dev, data)

    @@
    struct device dev;
    @@
    - dev.driver_data
    + dev_get_drvdata(&dev)

    @@
    struct device *dev;
    @@
    - dev->driver_data
    + dev_get_drvdata(dev)

    Running Coccinelle again:

    $ spatch -sp_file device_data.cocci fake_device.c

    will add the remaining transformations for the 'fake_device.c' file:

    -    dev_ptr->driver_data = (void*)(&data[1]);
    -    b = dev_ptr->driver_data;
    +    dev_set_drvdata(dev_ptr, (void *)(&data[1]));
    +    b = dev_get_drvdata(dev_ptr);

    but a new problem appears: the 'device.h' header is also modified. Here we meet an important point of the Coccinelle philosophy described in the first section: 'spatch' is a tool to ease code maintenance by propagating a code pattern change to many files, but the resulting diff files are supposed to be reviewed, and in our case the unwanted modification should be removed. Note that it would be possible to avoid modifying the 'device.h' header by using SmPL syntax, but the explanation would be too long for an introductory tutorial. Instead, we will simply cut out the unwanted part:

    $ spatch -sp_file device_data.cocci fake_device.c | cut -d $'\n' -f 16-34

    We now save this result in a diff file, additionally asking 'spatch' to produce the patch relative to the current working directory:

    $ spatch -sp_file device_data.cocci -patch "" fake_device.c | \
    cut -d $'\n' -f 16-34 > device_data.patch

    It is now time to apply the change to get working C code that uses the accessors:

    $ patch -p1 < device_data.patch

    The final result for 'fake_device.c' should be:

    #include <stdlib.h>
    #include <assert.h>
    #include "device.h"

    int main(void)
    {
        struct device devs[2], *dev_ptr;
        int data[2] = {3, 7};
        void *a = NULL, *b = NULL;

        dev_set_drvdata(&devs[0], (void *)(&data[0]));
        a = dev_get_drvdata(&devs[0]);
        dev_ptr = &devs[1];
        dev_set_drvdata(dev_ptr, (void *)(&data[1]));
        b = dev_get_drvdata(dev_ptr);
        assert(*((int*)a) == 3);
        assert(*((int*)b) == 7);
        return 0;
    }

    Finally, we can test that the code compiles and runs:

    $ gcc fake_device.c && ./a.out

    The semantic patch is now ready to be used on the Linux 2.6.30 kernel:

    $ wget
    $ tar xjf linux-2.6.30.tar.bz2
    $ spatch -sp_file device_data.cocci -dir linux-2.6.30/drivers/net/ \
      > device_drivers_net.patch
    $ wc -l device_drivers_net.patch

    You may also try the 'drivers/ieee1394' directory.


    Coccinelle is made of around 60 thousand lines of Objective Caml. As illustrated by the above example on the Linux kernel, the 'spatch' command succeeds in easing code maintenance. For the Coccinelle team working on the kernel code base, a semantic patch is usually around 100 lines and will generate diffs touching sometimes hundreds of files. Moreover, the processing is rather fast: the average time per file is said to be 0.7s.

    Two tools using the 'spatch' engine have already been built: 'spdiff' and 'herodotos'. With the first one, you could almost avoid learning the SmPL language, because the idea is to generate a semantic patch by looking at the transformations between pairs of files. The second allows correlating defects over software versions once the corresponding code smells have been described in SmPL.

    One of Coccinelle's problems is that it is not easily extensible to other languages, as the engine was designed for analyzing control flow in C programs. The C++ language could be added, but would obviously require a lot of work. It would be great to also have such a tool for dynamic languages like Python.

    image under creative commons by Rémi Vannier

  • pylint bug day next wednesday!

    2009/11/23 by Sylvain Thenault

    Remember that the first pylint bug day will be held on Wednesday, November 25, from around 8am to 8pm Paris (France) time.

    We'll be a few people at Logilab and hopefully a lot of others all around the world, trying to make pylint better.

    Join us in the #public conference room, or if you prefer using an IRC client, join #public via the gateway to the jabber forum. And if you're in Paris, come work with us in our office.

    People willing to help but without knowledge of pylint internals are welcome: it's the perfect occasion to learn a lot about it, and to be able to hack on pylint in the future!

  • First contact with pupynere

    2009/11/06 by Pierre-Yves David

    I spent some time this week evaluating Pupynere, the PUre PYthon NEtcdf REader written by Roberto De Almeida. I see several advantages in pupynere.

    First it's a pure Python module with no external dependency. It doesn't even depend on the NetCDF lib and it is therefore very easy to deploy.

    Second, it offers the same interface as Scientific Python's NetCDF bindings which makes transitioning from one module to another very easy.

    Third, pupynere is being integrated into SciPy. Once integrated, this could ensure wide adoption by the Python community.

    Finally, it's easy to dig into this clear and small code base of about 600 lines. I have just sent several fixes and bug reports to the author.

    However, pupynere isn't mature yet. It seems pupynere has only been used for simple cases so far; many common cases are broken. Moreover, there is no support for newer NetCDF formats such as long-NetCDF and NetCDF4, and important features such as file update are still missing. In addition, the lack of a test suite is a serious issue: in my opinion, various bugs could already have been detected and fixed with simple unit tests, and contributing would be much more comfortable with the safety net a test suite offers. I am not certain that the fixes and improvements I made this week did not introduce regressions.

    To conclude, pupynere seems too young for production use. But I invite people to try it and provide feedback and fixes to the author. I'm looking forward to using this project in production in the future.

  • First Pylint Bug Day on Nov 25th, 2009 !

    2009/10/21 by Sylvain Thenault

    Since we never stop being overloaded here at Logilab, and we've got some encouraging feedback after the "Pylint needs you" post, we decided to take some time to bring more "community" into pylint.

    The easiest thing to do, sooner rather than later, is an IRC/Jabber-synchronized bug day, which will be held on Wednesday, November 25. We're based in France, so the main developers will be around between roughly 8am and 7pm UTC+1. If a few of you are around Paris at that time and wish to come to Logilab to sprint with us, contact us and we'll try to make it possible.

    The focus for this bug killing day could be:

    • using the tracker: getting an account, submitting tickets, triaging existing tickets...
    • using mercurial to develop pylint / astng
    • guiding people through the code so they're able to fix simple bugs

    We will of course also try to kill a hella-lotta bugs, but the main idea is to help whoever wants to contribute to pylint... and to plan the next bug-killing day!

    As we are in the process of moving to another place, we can't organize a sprint yet, but we should have some room available for the next time, so stay tuned :)
