
Blog entries

  • PyLint 0.25.2 and related projects released

    2012/07/18 by Sylvain Thenault

    I'm pleased to announce the new release of Pylint and related projects (i.e. logilab-astng and logilab-common)!

    By installing PyLint 0.25.2, ASTNG 0.24 and logilab-common 0.58.1, you'll get a bunch of bug fixes and a few new features. Among the hot stuff:

    • PyLint should now work with alternative python implementations such as Jython, and at least go further with PyPy and IronPython (but those have not really been tested, please try it and provide feedback so we can improve their support)
    • the new ASTNG includes a description of dynamic code it is not able to understand. This is handled by a bitbucket hosted project described in another post.

    Many thanks to everyone who contributed to these releases, Torsten Marek / Boris Feld in particular (both sponsored by Google by the way, Torsten as an employee and Boris as a GSoC student).


  • Introducing the pylint-brain project

    2012/07/18 by Sylvain Thenault

    Huum, along with the new PyLint release, it's time to introduce the PyLint-Brain project I've recently started.

    Despite its name, PyLint-Brain is actually a collection of extensions for ASTNG, with the goal of making ASTNG smarter (and this directly benefits PyLint) by describing stuff that is too dynamic to be understood automatically (such as functions in the hashlib module, defaultdict, etc.).

    The PyLint-Brain collection of extensions is developed outside of ASTNG itself and hosted on a bitbucket project, to ease community involvement and to allow distinct development cycles. Basically, ASTNG will include the PyLint-Brain extensions, but you may use earlier/custom versions by tweaking your PYTHONPATH.

    Take a look at the code: it's fairly easy to contribute new descriptions, and you will help us make pylint smarter!

  • Debian science sprint and workshop at ESRF

    2012/06/22 by Julien Cristau


    From June 24th to June 26th, the European Synchrotron Radiation Facility (ESRF) is organising a workshop centered around Debian. On Monday, a number of talks about the use of Debian in scientific facilities will be given. On Sunday and Tuesday, members of the Debian Science group will meet for a sprint focusing on the upcoming Debian 7.0 release.

    Among the speakers will be Stefano Zacchiroli, the current Debian project leader. Logilab will be present with Nicolas Chauvat at Monday's conference, and Julien Cristau at both the sprint and the conference.

    At the sprint we'll be discussing packaging of scientific libraries such as blas or MPI implementations, and working on polishing other scientific packages, such as python-related ones (including Salome on which we are currently working).

  • A Python dev day at La Cantine. Would you like to have more PyCon?

    2012/06/01 by Damien Garaud

    We were at La Cantine on May 21st, 2012 in Paris for the "Replay" session.

    La Cantine is a coworking space where hackers, artists, students and others can meet and work. It also organises meetings and conferences about digital culture, computer science and more.

    On May 21st 2012, it was a dev day about Python. "Would you like to have more PyCon?" is a French wordplay: PyCon sounds like Picon, a French apéritif which traditionally accompanies beer. A good thing, because the meeting began at 6:30 PM! Presentations and demonstrations covered some Python projects presented at PyCon 2012 in Santa Clara (California) last March. The original PyCon presentations are available online.

    PDB Introduction

    By Gael Pasgrimaud (@gawel_).

    pdb is the well-known Python debugger. Gael showed us how to easily use this almost-mandatory tool when developing in Python. As with the gdb debugger, you can stop execution at a breakpoint, walk up the stack, print the values of local variables, or temporarily modify them.

    The best way to define a breakpoint in your source code is to write:

    import pdb; pdb.set_trace()

    Insert that line where you would like pdb to stop. Then you can step through the code with the s, c or n commands. See help for more information. Here is the output of the help command in the pdb command-line interpreter:

    (Pdb) help
    Documented commands (type help <topic>):
    EOF    bt         cont      enable  jump  pp       run      unt
    a      c          continue  exit    l     q        s        until
    alias  cl         d         h       list  quit     step     up
    args   clear      debug     help    n     r        tbreak   w
    b      commands   disable   ignore  next  restart  u        whatis
    break  condition  down      j       p     return   unalias  where
    Miscellaneous help topics:
    exec  pdb

    It is also possible to invoke the pdb module when running a Python script, such as:

    $> python -m pdb myscript.py
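
    To make this concrete, here is a small, hypothetical script showing where one would put the set_trace() line (the function and its name are made up; uncomment the line to actually enter the debugger):

```python
# Hypothetical example: a function we may want to inspect with pdb.
def total(values):
    s = 0
    for v in values:
        # Uncomment the next line to stop here and inspect s and v:
        # import pdb; pdb.set_trace()
        s += v
    return s

print(total([1, 2, 3]))  # prints 6
```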


    Pyramid

    By Alexis Metereau (@ametaireau).

    Pyramid is an open source Python web framework from Pylons Project. It concentrates on providing fast, high-quality solutions to the fundamental problems of creating a web application:

    • the mapping of URLs to code;
    • templating;
    • security and serving static assets.

    The framework lets the programmer choose between different approaches, depending on the simplicity/features tradeoff they need. Alexis, from the French team of Mozilla Services, works with it on a daily basis and seemed happy to use it. He told us that he uses Pyramid more as a Python web library than as a web framework.


    Circus

    By Benoit Chesneau (@benoitc).

    Circus is a process watcher and runner. Multiple processes can be managed and monitored from Python scripts, through an API, or from a command-line interface.

    A very useful web application, called circushttpd, provides a way to monitor and manage Circus through the web. Circus uses zeromq, a well-known tool used at Logilab.
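
    As a sketch of how such a setup is typically declared (an illustrative INI fragment: the watcher name and command below are made up, check the Circus documentation for the exact options):

       [circus]
       check_delay = 5

       [watcher:myprogram]
       cmd = python myscript.py
       numprocesses = 3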

    matplotlib demo

    This session was a well-prepared and funny live demonstration of matplotlib, the Python 2D plotting library, by Julien Tayon. He showed us some quick and easy stuff.

    For instance, how to plot a sine wave in a few lines of code with matplotlib and NumPy:

    import numpy as np
    import matplotlib.pyplot as plt

    fig = plt.figure()
    ax = fig.add_subplot(111)
    # A simple sine wave.
    ax.plot(np.sin(np.arange(-10., 10., 0.05)))
    plt.show()

    which gives:

    You can make some fancier plots such as:

    # A sine wave and a fancy cardioid.
    fig = plt.figure()
    a = np.arange(-5., 5., 0.1)
    ax_sin = fig.add_subplot(211)
    ax_sin.plot(np.sin(a), '^-r', lw=1.5)
    ax_sin.set_title("A sine wave")
    # Cardioid.
    ax_cardio = fig.add_subplot(212)
    x = 0.5 * (2. * np.cos(a) - np.cos(2 * a))
    y = 0.5 * (2. * np.sin(a) - np.sin(2 * a))
    ax_cardio.plot(x, y, '-og')
    ax_cardio.set_xlabel(r"$\frac{1}{2} (2 \cos{t} - \cos{2t})$", fontsize=16)
    plt.show()

    Note that you can use LaTeX markup in labels, as in the x-axis label above.

    The strength of this plotting library is its gallery of many examples, each with its piece of code. See the matplotlib gallery.

    Using Python for robotics

    Dimitri Merejkowsky reviewed how Python can be used to control and program Aldebaran's humanoid robot NAO.

    Wrap up

    Unfortunately, Olivier Grisel, who was supposed to give three interesting presentations, was not there. He was supposed to present:

    • a demo about injecting arbitrary code into, and monitoring, Python processes with Pyrasite;
    • another demo about interactive data analysis with Pandas and the new IPython Notebook;
    • a wrap-up on cluster-related distributed computation projects: IPython.parallel, picloud and Storm + Umbrella.

    Thanks to La Cantine and the different organisers for this friendly dev day.

  • Mercurial 2.3 sprint, Day 1-2-3

    2012/05/15 by Pierre-Yves David

    I'm now back from Copenhagen, where I attended the mercurial 2.3 sprint with twenty other people. A huge amount of work was done in a very friendly atmosphere.

    Regarding mercurial's core:

    • Bookmark behaviour was improved to get closer to named branches' behaviour.
    • Several performance improvements were made regarding branch and head caches. The heads cache refactoring improves rebase performance on huge repositories (thanks to Facebook and Atlassian).
    • The concept I'm working on, Obsolete markers, was a highly discussed subject and is expected to get partly into the core in the near future. Thanks to my employer Logilab for paying me to work on this topic.
    • General code cleanup and lock validation.

    Regarding the bundled extensions:

    • Some fixes were made to progress, which is now closer to getting into mercurial's core.
    • Histedit and keyring extensions are scheduled to be shipped with mercurial.
    • Some old and unmaintained extensions (children, hgtk) are now deprecated.
    • The LargeFile extension got some new features (thanks to the folks from Unity3D).
    • Rebase will use the --detach flag by default in the next release.

    There was also work and discussion regarding the project itself and other extensions.

    And I'm probably forgetting some stuff. Special thanks to Unity3D for hosting the sprint and providing power, network and food during these 3 days.

  • Mercurial 2.3 day 0

    2012/05/10 by Pierre-Yves David

    I'm now in Copenhagen to attend the mercurial "2.3" sprint.

    About twenty people are attending, including staff from Atlassian, Facebook, Google and Mozilla.

    I expect code and discussion about various topics, among which:

    • the development process of mercurial itself,
    • performance improvements on huge repositories,
    • integration of Obsolete Markers into mercurial core,
    • improvements on various aspects (merge diffs, moving some extensions into core, ...).

    I'm of course very interested in the Obsolete Markers topic. I've been working on an experimental implementation for several months. A handful of people have been using them at Logilab for two months, and feedback is very promising.

  • Debian bug squashing party in Paris

    2012/02/16 by Julien Cristau

    Logilab will be present at the upcoming Debian BSP in Paris this weekend. This event will focus on fixing as many "release critical" bugs as possible, to help with the preparation of the upcoming Debian 7.0 "wheezy" release. It will also be an opportunity to introduce newcomers to the processes of Debian development and bug fixing, and for contributors in various areas of the project to interact "in real life".

    The current stable release, Debian 6.0 "squeeze", came out in February 2011. The development of "wheezy" is scheduled to freeze in June 2012, for an eventual release later this year.

    Among the things we hope to work on during this BSP: the latest HDF5 release (1.8.8) includes API and packaging changes that require updates in dependent packages. Given the number of scientific packages relying on HDF5, this is a pretty big change, as tracked in this Debian bug.

  • Introduction To Mercurial Phases (Part III)

    2012/02/03 by Pierre-Yves David

    This is the final part of a series of posts about the new phases feature we implemented for mercurial 2.1. The first part talks about how phases will help mercurial users, the second part explains how to control them. This one explains what people should take care of when upgrading.

    Important upgrade note and backward compatibility

    Phases do not require any conversion of your repos. Phase information is not stored in changesets. Everybody using a new client will take advantage of phases on any repository they touch.

    However, there are some points you need to be aware of regarding interaction between the old world without phases and the new world with phases:

    Talking over the wire to a phaseless server using a phased client

    As ever, the Mercurial wire protocol (used to communicate through http and ssh) is fully backward compatible [1]. But as old Mercurial versions are not aware of phases, old servers will always be treated as publishing.

    Direct file system access to a phaseless repository using a phased client

    A new client has no way to determine which parts of the history should be immutable and which parts should not. In order to fail safely, a new client will mark everything as public when no phase data is available. For example, in the scenario described in part I, if an old version of mercurial were used to clone and commit, a new version of mercurial will see those changesets as public and refuse to rebase them.


    Some extensions (like mq) may provide smarter logic to set some changesets to the draft or even secret phases.

    The phased client will write phase data to the old repo on its first write operation.

    Direct file system access to a phased repository using a phaseless client

    Everything works fine except that the old client is unable to see or manipulate phases:

    • Changesets added to the repo inherit the phase of their parents, whatever the parents' phase. This could result in new commits being seen as public or pulled content seen as draft or even secret when a newer client uses the repo again!
    • Changesets pushed to a publishing server won't be set public.
    • Secret changesets are exchanged.
    • Old clients will rewrite immutable changesets (as they don't know that they shouldn't).

    So, if you actively rewrite your history or use secret changesets, you should ensure that only new clients touch those repositories where the phase matters.

    Fixing phases error

    Several situations can result in bad phases in a repository:

    • When upgrading from phaseless to phased Mercurial, the default phases picked may be too restrictive.
    • When you let an old client touch your repository.
    • When you push to a publishing server that should not actually be publishing.

    The easiest way to restore a consistent state is to use the phase command. In most cases, changesets marked as public but absent from your real public server should be moved to draft:

    hg phase --force --draft 'public() and outgoing()'

    If you have multiple public servers, you can pull from the others to retrieve their phase data too.


    Mercurial's phases are a simple concept that adds always-on, transparent safety for most users, while not preventing advanced users from doing whatever they want.

    Behind this safety-enabling and useful feature, phases introduce in Mercurial code the concept of sharing mutable parts of history. The introduction of this feature paves the way for advanced history rewriting solutions while allowing safe and easy sharing of mutable parts of history. I'll post about those future features shortly.

    [1]You can expect the 0.9.0 version of Mercurial to interoperate cleanly with one released 5 years later.

    [Images by Crystian Cruz (cc-nd) and C.J. Peters (cc-by-sa)]

  • Introduction To Mercurial Phases (Part II)

    2012/02/02 by Pierre-Yves David

    This is the second part of a series of posts about the new phases feature we implemented for mercurial 2.1. The first part talks about how phases will help mercurial users, this second part explains how to control them.

    Controlling automatic phase movement

    Sometimes it may be desirable to push and pull changesets in the draft phase to share unfinished work. Below are some cases:

    • pushing to continuous integration,
    • pushing changesets for review,
    • user has multiple machines,
    • branch clone.

    You can disable the publishing behavior in a repository configuration file [1]:

       [phases]
       publish = False
    When a repository is set to non-publishing, people push changesets without altering their phase. draft changesets are pushed as draft and public changesets are pushed as public:

    celeste@Chessy ~/palace $ hg showconfig phases
       phases.publish=false
    babar@Chessy ~/palace $ hg log --graph
       @  [draft] add a carpet (2afbcfd2af83)
       o  [public] Add a table in the kichen (139ead8a540f)
       o  [public] Add wall color (0d1feb1bca54)
       babar@Chessy ~/palace $ hg outgoing ~celeste/palace/
       [public] Add wall color (0d1feb1bca54)
       [public] Add a table in the kichen (139ead8a540f)
       [draft] add a carpet (3c1b19d5d3f5)
       babar@Chessy ~/palace $ hg push ~celeste/palace/
       pushing to ~celeste/palace/
       searching for changes
       adding changesets
       adding manifests
       adding file changes
       added 3 changesets with 3 changes to 2 files
       babar@Chessy ~/palace $ hg log --graph
       @  [draft] add a carpet (2afbcfd2af83)
       o  [public] Add a table in the kichen (139ead8a540f)
       o  [public] Add wall color (0d1feb1bca54)
    celeste@Chessy ~/palace $ hg log --graph
       o  [draft] add a carpet (2afbcfd2af83)
       o  [public] Add a table in the kichen (139ead8a540f)
       o  [public] Add wall color (0d1feb1bca54)

    And pulling gives the phase as in the remote repository:

    celeste@Chessy ~/palace $ hg up 139ead8a540f
       celeste@Chessy ~/palace $ echo The wall will be decorated with portraits >> bedroom
       celeste@Chessy ~/palace $ hg ci -m 'Decorate the wall.'
       created new head
       celeste@Chessy ~/palace $ hg log --graph
       @  [draft] Decorate the wall. (3389164e92a1)
       | o  [draft] add a carpet (3c1b19d5d3f5)
       o  [public] Add a table in the kichen (139ead8a540f)
       o  [public] Add wall color (0d1feb1bca54)
       babar@Chessy ~/palace $ hg pull ~celeste/palace/
       pulling from ~celeste/palace/
       searching for changes
       adding changesets
       adding manifests
       adding file changes
       added 1 changesets with 1 changes to 1 files (+1 heads)
       babar@Chessy ~/palace $ hg log --graph
       @  [draft] Decorate the wall. (3389164e92a1)
       | o  [draft] add a carpet (3c1b19d5d3f5)
       o  [public] Add a table in the kichen (139ead8a540f)
       o  [public] Add wall color (0d1feb1bca54)

    Phase information is exchanged during pull and push operations. When a changeset exists on both sides but within different phases, its phase is unified to the lowest [2] phase. For instance, if a changeset is draft locally but public remotely, it is set public:

    celeste@Chessy ~/palace $ hg push -r 3389164e92a1
       pushing to
       searching for changes
       adding changesets
       adding manifests
       adding file changes
       added 1 changesets with 1 changes to 1 files
       celeste@Chessy ~/palace $ hg log --graph
       @  [public] Decorate the wall. (3389164e92a1)
       | o  [draft] add a carpet (3c1b19d5d3f5)
       o  [public] Add a table in the kichen (139ead8a540f)
       o  [public] Add wall color (0d1feb1bca54)
       babar@Chessy ~/palace $ hg pull ~celeste/palace/
       pulling from ~celeste/palace/
       searching for changes
       no changes found
       babar@Chessy ~/palace $ hg log --graph
       @  [public] Decorate the wall. (3389164e92a1)
       | o  [draft] add a carpet (3c1b19d5d3f5)
       o  [public] Add a table in the kichen (139ead8a540f)
       o  [public] Add wall color (0d1feb1bca54)
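
    The unification rule above can be sketched in a few lines (a toy illustration, not Mercurial's actual code), using the ordering public < draft < secret given in footnote [2]:

```python
# Toy illustration of phase unification on exchange (not Mercurial code).
# Phases are ordered: public (0) < draft (1) < secret (2).
PUBLIC, DRAFT, SECRET = 0, 1, 2

def unify(local_phase, remote_phase):
    # A changeset known on both sides takes the lowest of the two phases.
    return min(local_phase, remote_phase)

print(unify(DRAFT, PUBLIC))  # draft locally, public remotely -> 0 (public)
```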


    pull is a read-only operation and does not alter phases in remote repositories.

    You can also control the phase in which a new changeset is committed. If you don't want new changesets to be pushed without explicit consent, update your configuration with:

       [phases]
       new-commit = secret
    You will need to use manual phase movement before you can push them. See the next section for details.


    With what has been done so far for 2.1, the most practical way to make a new commit secret is to use:

       hg commit --config phases.new-commit=secret
    [1]You can use this setting in your user hgrc too.
    [2]Phases are ordered as follows: public < draft < secret

    Manual phase movement

    Most phase movements should be automatic and transparent. However, it is still possible to move phases manually using the hg phase command:

    babar@Chessy ~/palace $ hg log --graph
       @    [draft] merge with Celeste works (f728ef4eba9f)
       o |  [draft] add a carpet (3c1b19d5d3f5)
       | |
       | o  [public] Decorate the wall. (3389164e92a1)
       o  [public] Add a table in the kichen (139ead8a540f)
       babar@Chessy ~/palace $ hg phase --public 3c1b19d5d3f5
       babar@Chessy ~/palace $ hg log --graph
       @    [draft] merge with Celeste works (f728ef4eba9f)
       o |  [public] add a carpet (3c1b19d5d3f5)
       | |
       | o  [public] Decorate the wall. (3389164e92a1)
       o  [public] Add a table in the kichen (139ead8a540f)

    Changesets only move to lower phases during normal operation. By default, the phase command enforces this rule:

    babar@Chessy ~/palace $ hg phase --draft 3c1b19d5d3f5
       no phases changed
       babar@Chessy ~/palace $ hg log --graph
       @    [draft] merge with Celeste works (f728ef4eba9f)
       o |  [public] add a carpet (3c1b19d5d3f5)
       | |
       | o  [public] Decorate the wall. (3389164e92a1)
       o  [public] Add a table in the kichen (139ead8a540f)

    If you are confident in what you are doing, you can still use the --force switch to override this behavior:


    Phases are designed to avoid forcing people to use hg phase --force. If you need to use --force on a regular basis, you are probably doing something wrong. Read the previous section again to see how to configure your environment for automatic phase movement suitable to your needs.

    babar@Chessy ~/palace $ hg phase --verbose --force --draft 3c1b19d5d3f5
       phase change for 1 changesets
       babar@Chessy ~/palace $ hg log --graph
       @    [draft] merge with Celeste works (f728ef4eba9f)
       o |  [draft] add a carpet (3c1b19d5d3f5)
       | |
       | o  [public] Decorate the wall. (3389164e92a1)
       o  [public] Add a table in the kichen (139ead8a540f)

    Note that a phase defines a consistent set of revisions in your history graph. This means that for a changeset to be public (immutable), all its ancestors need to be immutable too. Conversely, once you have a secret (not exchanged) changeset, all its descendants will be secret too.
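
    This consistency rule can be sketched as a small graph computation (a toy model with made-up changeset names, not Mercurial's implementation):

```python
# Toy model: making a changeset public also publishes all its ancestors.
# Phases: public (0) < draft (1) < secret (2).
PUBLIC, DRAFT, SECRET = 0, 1, 2

def make_public(node, parents, phases):
    """Set node and all of its ancestors to the public phase."""
    stack = [node]
    while stack:
        n = stack.pop()
        if phases.get(n, DRAFT) != PUBLIC:
            phases[n] = PUBLIC
            stack.extend(parents.get(n, ()))

# A merge with a draft ancestor: publishing the merge changes 2 phases.
parents = {'merge': ('carpet', 'decorate'), 'carpet': ('table',),
           'decorate': ('table',), 'table': ()}
phases = {'merge': DRAFT, 'carpet': DRAFT,
          'decorate': PUBLIC, 'table': PUBLIC}
make_public('merge', parents, phases)
```

    This mirrors the transcript that follows, where publishing the merge reports a phase change for 2 changesets.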

    This means that changing the phase of a changeset may result in phase movement for other changesets:

    babar@Chessy ~/palace $ hg phase -v --public f728ef4eba9f # merge with Celeste works
       phase change for 2 changesets
       babar@Chessy ~/palace $ hg log --graph
       @    [public] merge with Celeste works (f728ef4eba9f)
       o |  [public] add a carpet (3c1b19d5d3f5)
       | |
       | o  [public] Decorate the wall. (3389164e92a1)
       o  [public] Add a table in the kichen (139ead8a540f)
       babar@Chessy ~/palace $ hg phase -vf --draft 3c1b19d5d3f5 # add a carpet
       phase change for 2 changesets
       babar@Chessy ~/palace $ hg log --graph
       @    [draft] merge with Celeste works (f728ef4eba9f)
       o |  [draft] add a carpet (3c1b19d5d3f5)
       | |
       | o  [public] Decorate the wall. (3389164e92a1)
       o  [public] Add a table in the kichen (139ead8a540f)

    The next and final post will explain how older mercurial versions interact with newer versions that support phases.

    [Images by Jimmy Smith (cc-by-nd) and Cory Doctorow (cc-by-sa)]

  • Introduction To Mercurial Phases (Part I)

    2012/02/02 by Pierre-Yves David

    On behalf of Logilab, I put a lot of effort into including a new core feature named phases in Mercurial 2.1. Phases are a system for tracking which changesets have been or should be shared. This helps prevent common mistakes when modifying history (for instance, with the mq or rebase extensions). It will transparently benefit all users. This concept is the first step towards simple, safe and powerful history rewriting mechanisms in mercurial.

    This series of three blog entries will explain:

    1. how phases will help mercurial users,
    2. how one can control them,
    3. how older mercurial versions interact with newer versions that support phases.

    Preventing erroneous history rewriting


    History rewriting is a common practice in DVCS. However, when done the wrong way, the most common error results in duplicated history. The phase concept aims to make history rewriting safer. For this purpose, Mercurial 2.1 introduces a distinction between the "past" part of your history (that is expected to stay there forever) and the "present" part of the history (that you are currently evolving). The old and immutable part is called public and the mutable part of your history is called draft.

    Let's see how this happens using a simple scenario.

    A new Mercurial user clones a repository:

    babar@Chessy ~ $ hg clone
    requesting all changes
    adding changesets
    adding manifests
    adding file changes
    added 2 changesets with 2 changes to 2 files
    updating to branch default
    2 files updated, 0 files merged, 0 files removed, 0 files unresolved
    babar@Chessy ~/palace $ cd palace
    babar@Chessy ~/palace $ hg log --graph
    @  changeset:   1:2afbcfd2af83
    |  tag:         tip
    |  user:        Celeste the Queen <>
    |  date:        Wed Jan 25 16:41:56 2012 +0100
    |  summary:     We need a kitchen too.
    o  changeset:   0:898889b143fb
       user:        Celeste the Queen <>
       date:        Wed Jan 25 16:39:07 2012 +0100
       summary:     First description of the throne room

    The repository already contains some changesets. Our user makes some improvements and commits them:

    babar@Chessy ~/palace $ echo The wall shall be Blue >> throne-room
    babar@Chessy ~/palace $ hg ci -m 'Add wall color'
    babar@Chessy ~/palace $ echo In the middle stands a three meters round table >> kitchen
    babar@Chessy ~/palace $ hg ci -m 'Add a table in the kichen'

    But when he tries to push new changesets, he discovers that someone else already pushed one:

    babar@Chessy ~/palace $ hg push
    pushing to
    searching for changes
    abort: push creates new remote head bcd4d53319ec!
    (you should pull and merge or use push -f to force)
    babar@Chessy ~/palace $ hg pull
    pulling from
    searching for changes
    adding changesets
    adding manifests
    adding file changes
    added 1 changesets with 1 changes to 1 files (+1 heads)
    (run 'hg heads' to see heads, 'hg merge' to merge)
    babar@Chessy ~/palace $ hg log --graph
    o  changeset:   4:0a5b3d7e4e5f
    |  tag:         tip
    |  parent:      1:2afbcfd2af83
    |  user:        Celeste the Queen <>
    |  date:        Wed Jan 25 16:58:23 2012 +0100
    |  summary:     Some bedroom description.
    | @  changeset:   3:bcd4d53319ec
    | |  user:        Babar the King <>
    | |  date:        Wed Jan 25 16:52:02 2012 +0100
    | |  summary:     Add a table in the kichen
    | |
    | o  changeset:   2:f9f14815935d
    |/   user:        Babar the King <>
    |    date:        Wed Jan 25 16:51:51 2012 +0100
    |    summary:     Add wall color
    o  changeset:   1:2afbcfd2af83
    |  user:        Celeste the Queen <>
    |  date:        Wed Jan 25 16:41:56 2012 +0100
    |  summary:     We need a kitchen too.
    o  changeset:   0:898889b143fb
       user:        Celeste the Queen <>
       date:        Wed Jan 25 16:39:07 2012 +0100
       summary:     First description of the throne room


    From here on, this scenario becomes very unlikely. Mercurial is simple enough that a new user would not be confused by such a trivial situation. But let's keep the example simple to focus on phases.

    Recently, our new user read some hyped blog posts about "rebase" and the benefits of linear history. So, he decides to rewrite his history instead of merging.

    Despite reading the wonderful rebase help, our new user makes the wrong decision when it comes to using it. He decides to rebase the remote changeset 0a5b3d7e4e5f:"Some bedroom description." on top of his local changeset.

    With previous versions of mercurial, this mistake was allowed and would result in a duplication of the changeset 0a5b3d7e4e5f:"Some bedroom description.":

    babar@Chessy ~/palace $ hg rebase -s 4 -d 3
    babar@Chessy ~/palace $ hg push
    pushing to
    searching for changes
    abort: push creates new remote head bcd4d53319ec!
    (you should pull and merge or use push -f to force)
    babar@Chessy ~/palace $ hg pull
    pulling from
    searching for changes
    adding changesets
    adding manifests
    adding file changes
    added 1 changesets with 1 changes to 1 files (+1 heads)
    (run 'hg heads' to see heads, 'hg merge' to merge)
    babar@Chessy ~/palace $ hg log --graph
    @  changeset:   5:55d9bae1e1cb
    |  tag:         tip
    |  parent:      3:bcd4d53319ec
    |  user:        Celeste the Queen <>
    |  date:        Wed Jan 25 16:58:23 2012 +0100
    |  summary:     Some bedroom description.
    | o  changeset:   4:0a5b3d7e4e5f
    | |  parent:      1:2afbcfd2af83
    | |  user:        Celeste the Queen <>
    | |  date:        Wed Jan 25 16:58:23 2012 +0100
    | |  summary:     Some bedroom description.
    | |
    o |  changeset:   3:bcd4d53319ec
    | |  user:        Babar the King <>
    | |  date:        Wed Jan 25 16:52:02 2012 +0100
    | |  summary:     Add a table in the kichen
    | |
    o |  changeset:   2:f9f14815935d
    |/   user:        Babar the King <>
    |    date:        Wed Jan 25 16:51:51 2012 +0100
    |    summary:     Add wall color
    o  changeset:   1:2afbcfd2af83
    |  user:        Celeste the Queen <>
    |  date:        Wed Jan 25 16:41:56 2012 +0100
    |  summary:     We need a kitchen too.
    o  changeset:   0:898889b143fb
       user:        Celeste the Queen <>
       date:        Wed Jan 25 16:39:07 2012 +0100
       summary:     First description of the throne room

    In more complicated setups this is a fairly common mistake, even in big and successful projects and with other DVCSs.

    In the new Mercurial version the user won't be able to make this mistake anymore. Trying to rebase the wrong way will result in:

    babar@Chessy ~/palace $ hg rebase -s 4 -d 3
    abort: can't rebase immutable changeset 0a5b3d7e4e5f
    (see hg help phases for details)

    The correct rebase still works as expected:

    babar@Chessy ~/palace $ hg rebase -s 2 -d 4
    babar@Chessy ~/palace $ hg log --graph
    @  changeset:   4:139ead8a540f
    |  tag:         tip
    |  user:        Babar the King <>
    |  date:        Wed Jan 25 16:52:02 2012 +0100
    |  summary:     Add a table in the kichen
    o  changeset:   3:0d1feb1bca54
    |  user:        Babar the King <>
    |  date:        Wed Jan 25 16:51:51 2012 +0100
    |  summary:     Add wall color
    o  changeset:   2:0a5b3d7e4e5f
    |  user:        Celeste the Queen <>
    |  date:        Wed Jan 25 16:58:23 2012 +0100
    |  summary:     Some bedroom description.
    o  changeset:   1:2afbcfd2af83
    |  user:        Celeste the Queen <>
    |  date:        Wed Jan 25 16:41:56 2012 +0100
    |  summary:     We need a kitchen too.
    o  changeset:   0:898889b143fb
       user:        Celeste the Queen <>
       date:        Wed Jan 25 16:39:07 2012 +0100
       summary:     First description of the throne room

    What is happening here:

    • Changeset 0a5b3d7e4e5f from Celeste was set to the public phase because it was pulled from the outside. The public phase is immutable.
    • Changesets f9f14815935d and bcd4d53319ec (rebased as 0d1feb1bca54 and 139ead8a540f) have been committed locally and haven't been transmitted from this repository to another. As such, they are still in the draft phase. Unlike the public phase, the draft phase is mutable.

    Let's watch the whole action in slow motion, paying attention to phases:

    babar@Chessy ~ $ cat >> ~/.hgrc << EOF
    [ui]
    username=Babar the King <>
    logtemplate='[{phase}] {desc} ({node|short})\\n'
    EOF

    First, changesets cloned from a public server are public:

    babar@Chessy ~ $ hg clone --quiet
    babar@Chessy ~/palace $ cd palace
    babar@Chessy ~/palace $ hg log --graph
    @  [public] We need a kitchen too. (2afbcfd2af83)
    o  [public] First description of the throne room (898889b143fb)

    Second, new changesets committed locally are in the draft phase:

    babar@Chessy ~/palace $ echo The wall shall be Blue >> throne-room
    babar@Chessy ~/palace $ hg ci -m 'Add wall color'
    babar@Chessy ~/palace $ echo In the middle stand a three meters round table >> kitchen
    babar@Chessy ~/palace $ hg ci -m 'Add a table in the kichen'
    babar@Chessy ~/palace $ hg log --graph
    @  [draft] Add a table in the kichen (bcd4d53319ec)
    o  [draft] Add wall color (f9f14815935d)
    o  [public] We need a kitchen too. (2afbcfd2af83)
    o  [public] First description of the throne room (898889b143fb)

    Third, changesets pulled from a public server are public:

    babar@Chessy ~/palace $ hg pull --quiet
    babar@Chessy ~/palace $ hg log --graph
    o  [public] Some bedroom description. (0a5b3d7e4e5f)
    | @  [draft] Add a table in the kichen (bcd4d53319ec)
    | |
    | o  [draft] Add wall color (f9f14815935d)
    o  [public] We need a kitchen too. (2afbcfd2af83)
    o  [public] First description of the throne room (898889b143fb)


    Fourth, rebase preserves the phase of rebased changesets:

    babar@Chessy ~/palace $ hg rebase -s 2 -d 4
    babar@Chessy ~/palace $ hg log --graph
    @  [draft] Add a table in the kichen (139ead8a540f)
    o  [draft] Add wall color (0d1feb1bca54)
    o  [public] Some bedroom description. (0a5b3d7e4e5f)
    o  [public] We need a kitchen too. (2afbcfd2af83)
    o  [public] First description of the throne room (898889b143fb)

    Finally, once pushed to the public server, changesets are set to the public (immutable) phase:

    babar@Chessy ~/palace $ hg push
    pushing to
    searching for changes
    adding changesets
    adding manifests
    adding file changes
    added 2 changesets with 2 changes to 2 files
    babar@Chessy ~/palace $ hg log --graph
    @  [public] Add a table in the kichen (139ead8a540f)
    o  [public] Add wall color (0d1feb1bca54)
    o  [public] Some bedroom description. (0a5b3d7e4e5f)
    o  [public] We need a kitchen too. (2afbcfd2af83)
    o  [public] First description of the throne room (898889b143fb)

    To summarize:

    • Changesets exchanged with the outside are public and immutable.
    • Changesets committed locally are draft until exchanged with the outside.
    • As a user, you should not worry about phases. Phases move transparently.
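    For readers who prefer to see the rules as code, the summary above can be sketched as a toy model (this is an illustration of the behaviour only, not Mercurial's actual implementation; the class and function names are made up):

```python
# Toy model of Mercurial phase movement -- illustration only,
# not Mercurial's actual implementation.

class Changeset(object):
    def __init__(self, desc, phase='draft'):
        # changesets committed locally start in the mutable draft phase
        self.desc = desc
        self.phase = phase

def exchange(changesets):
    """Exchanging changesets with the outside (push or pull) makes
    them public, hence immutable."""
    for cs in changesets:
        cs.phase = 'public'
    return changesets

repo = [Changeset('Add wall color'), Changeset('Add a table')]
exchange(repo)
print([cs.phase for cs in repo])  # ['public', 'public']
```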

    Preventing premature exchange of history

    credit: Richard Elzey

    The public phase prevents users from accidentally rewriting public history. That's a good step forward, but phases can go further: they can also prevent you from accidentally making history public in the first place.

    For this purpose, a third phase is available: the secret phase. To explain it, I'll use the mq extension, which is nicely integrated with the secret phase.

    Our fellow user enables the mq extension:

    babar@Chessy ~/palace $ vim ~/.hgrc
    babar@Chessy ~/palace $ cat ~/.hgrc
    [ui]
    username=Babar the King <>
    [extensions]
    # enable the mq extension included with Mercurial
    mq =
    [mq]
    # Enable secret phase integration.
    # This integration is off by default for backward compatibility.
    secret = true

    New patches (not regular commits) are now created as secret:

    babar@Chessy ~/palace $ echo A red carpet on the floor. >> throne-room
    babar@Chessy ~/palace $ hg qnew -m 'add a carpet' carpet.diff
    babar@Chessy ~/palace $ hg log --graph
    @  [secret] add a carpet (3c1b19d5d3f5)
    o  [public] Add a table in the kichen (139ead8a540f)
    o  [public] Add wall color (0d1feb1bca54)

    This secret changeset is excluded from outgoing and push:

    babar@Chessy ~/palace $ hg outgoing
    comparing with
    searching for changes
    no changes found (ignored 1 secret changesets)
    babar@Chessy ~/palace $ hg push
    pushing to
    searching for changes
    no changes found (ignored 1 secret changesets)

    And other users do not see it:

    celeste@Chessy ~/palace $ hg incoming ~babar/palace/
    comparing with ~babar/palace
    searching for changes
    [public] Add wall color (0d1feb1bca54)
    [public] Add a table in the kichen (139ead8a540f)

    The mq integration takes care of phase movement for the user. Changesets are made draft by qfinish:

    babar@Chessy ~/palace $ hg qfinish .
    babar@Chessy ~/palace $ hg log --graph
    @  [draft] add a carpet (2afbcfd2af83)
    o  [public] Add a table in the kichen (139ead8a540f)
    o  [public] Add wall color (0d1feb1bca54)

    And changesets are made secret again by qimport:

    babar@Chessy ~/palace $ hg qimport -r 2afbcfd2af83
    babar@Chessy ~/palace $ hg log --graph
    @  [secret] add a carpet (2afbcfd2af83)
    o  [public] Add a table in the kichen (139ead8a540f)
    o  [public] Add wall color (0d1feb1bca54)

    As expected, mq refuses to qimport public changesets:

    babar@Chessy ~/palace $ hg qimport -r 139ead8a540f
    abort: revision 4 is not mutable

    In the next part, I'll detail how to control phase movement.

  • Generating a user interface from a Yams model

    2012/01/09 by Nicolas Chauvat

    Yams is a pythonic way to describe an entity-relationship model. It is used at the core of the CubicWeb semantic web framework in order to automate lots of things, including the generation and validation of forms. Although we have been using the MVC design pattern to write user interfaces with Qt and Gtk before we started CubicWeb, we never got to reuse Yams. I am on my way to fix this.

    Here is the simplest possible example that generates a user interface (using dialog and python-dialog) to input data described by a Yams data model.

    First, let's write a function that builds the data model:

    def mk_datamodel():
        from yams.buildobjs import EntityType, RelationDefinition, Int, String
        from yams.reader import build_schema_from_namespace
        class Question(EntityType):
            number = Int()
            text = String()
        class Form(EntityType):
            title = String()
        class in_form(RelationDefinition):
            subject = 'Question'
            object = 'Form'
            cardinality = '*1'
        return build_schema_from_namespace(vars().items())

    Here is what you get when you display the schema of that data model using graphviz or xdot:

    import os
    from yams import schema2dot
    datamodel = mk_datamodel()
    schema2dot.schema2dot(datamodel, '/tmp/')
    os.system('xdot /tmp/')

    To make a step in the direction of genericity, let's add a class that abstracts the dialog API:

    import sys

    class InterfaceDialog:
        """Dialog-based Interface"""
        def __init__(self, dlg):
            self.dlg = dlg
        def input_list(self, invite, options):
            assert len(options) != 0, str(invite)
            choice = self.dlg.radiolist(invite, list=options, selected=1)
            if choice is not None:
                return choice.lower()
            raise Exception('operation cancelled')
        def input_string(self, invite, default):
            return self.dlg.inputbox(invite, init=default).decode(sys.stdin.encoding)

    And now let's put everything together:

    datamodel = mk_datamodel()
    import dialog
    ui = InterfaceDialog(dialog.Dialog())
    ui.dlg.setBackgroundTitle('Dialog Interface with Yams')
    objs = []
    for entitydef in datamodel.entities():
        obj = {}
        for attr in entitydef.attribute_definitions():
            if attr[1].type in ('String', 'Int'):
                obj[str(attr[0])] = ui.input_string('%s.%s' % (entitydef, attr[0]), '')
        try:
            entitydef.check(obj)
        except Exception, exc:
            print exc
        objs.append(obj)
    print objs

    The result is a program that will prompt the user for the title of a form and the text/number of a question, then enforce the type constraints and display the inconsistencies.
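    The driving idea — walking a declarative model and validating each attribute against its declared type — can be sketched without yams or dialog. In the stand-in below, everything (the MODEL dict, the collect function, the canned answers) is hypothetical illustration, not part of the Yams API:

```python
# Toy model-driven input loop: a hypothetical stand-in for the
# Yams/dialog version, using a plain dict as the "data model".
MODEL = {
    'Question': {'number': int, 'text': str},
    'Form': {'title': str},
}

def collect(model, answers):
    """Walk the model, fetch an answer for each attribute (here from a
    canned dict instead of prompting), and coerce it to the declared
    type -- raising ValueError on inconsistent input."""
    objs = {}
    for etype, attrs in sorted(model.items()):
        obj = {}
        for name, typ in sorted(attrs.items()):
            raw = answers['%s.%s' % (etype, name)]
            obj[name] = typ(raw)
        objs[etype] = obj
    return objs

canned = {'Question.number': '1', 'Question.text': 'Color?', 'Form.title': 'Palace'}
print(collect(MODEL, canned))
```

Swapping the canned dict for real prompts (dialog, Gtk, Qt) is exactly the role InterfaceDialog plays above.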

    The above is very simple and does very little, but if you read the documentation of Yams and if you think about generating the UI with Gtk or Qt instead of dialog, or if you have used the form mechanism of CubicWeb, you'll understand that this proof of concept opens a door to a lot of possibilities.

    I will come back to this topic in a later article and give an example of integrating the above with pigg, a simple MVC library for Gtk, to make the programming of user-interfaces even more declarative and bug-free.

  • Interesting things seen at the Afpy Computer Camp

    2011/11/28 by Pierre-Yves David

    This summer I spent three days in Burgundy at the Afpy Computer Camps. This yearly meeting gathered French speaking python developers for talking and coding. The main points of this 2011 edition were:

    The new IPython 0.11 was shown by Olivier Grisel. This new version contains lots of impressive features like inline figures, asynchronous execution, exportable sessions, and a web-browser based client. IPython was also presented by its main author Fernando Perez during his keynote talk at EuroSciPy. Since then, Logilab got involved with IPython. We contributed to the Debian packaging of IPython dependencies and we joined the discussion about ReStructured Text formatting for the notebook.

    Tarek Ziade bootstrapped his new Red Barrel project, a small framework to build modern webservices with multiple back-ends, including a new protocol.

    Alexis Métaireau and Feth Arezki discovered their common interest in account tracking applications. The discussion resulted in a first release of I hate money a few months later.

    For my part, I spent most of my time working with Boris Feld on the Python Testing Infrastructure, a continuous integration tool to test python distributions available on PyPI.

    The yearly Afpy Computer Camps is an event intended for python developers, but the Afpy also organizes events for non-python developers. The next one is tonight in Paris at La cantine: Vous reprendrez bien un peu de python ? See you tonight?

  • Python in Finance (and Derivative Analytics)

    2011/10/25 by Damien Garaud

    The Logilab team attended (and co-organized) EuroScipy 2011, at the end of August in Paris.

    We saw some interesting posters and a presentation dealing with Python in finance and derivative analytics [1].

    In order to debunk the idea that "all computation libraries dedicated to financial applications must be written in C/C++ or some other compiled programming language", I would like to introduce a more Pythonic way.

    You may know that financial applications such as risk management have in most cases high computational needs. For instance, it can be necessary to quickly perform a large number of Monte Carlo simulations to evaluate an American option in a few seconds.

    The Python community provides several reliable and efficient libraries and packages dedicated to numerical computations:
    • the well-known SciPy and NumPy libraries. They provide a complete set of tools to work with matrices, linear algebra operations, singular value decompositions, multivariate regression models, ...
    • scikits is a set of add-on toolkits for SciPy. For instance, there are statistical models in the statsmodels package, a toolkit dedicated to timeseries manipulation and another one dedicated to numerical optimization;
    • pandas is a recent Python package which provides "fast, flexible, and expressive data structures designed to make working with relational or labeled data both easy and intuitive.". pandas uses Cython to improve its performance. Moreover, pandas has been used extensively in production in financial applications;
    • Cython is a way to write C extensions for the Python language. Since you write Cython code in much the same way as you write Python code, it's easy to use. Is it fast? Yes! I compared a simple example from Cython's official documentation with the equivalent pure-Python code -- a piece of code which computes the first k prime numbers. The Cython code is almost thirty times faster than the pure-Python code (non-official benchmark). Furthermore, you can use NumPy in Cython code!
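    For reference, here is a pure-Python version of that prime-computing example (my own sketch, not the official benchmark code); the Cython variant essentially only adds static type declarations:

```python
def primes(kmax):
    """Return the first kmax prime numbers by trial division."""
    found = []
    candidate = 2
    while len(found) < kmax:
        # a candidate is prime if no smaller prime divides it
        if all(candidate % p != 0 for p in found):
            found.append(candidate)
        candidate += 1
    return found

print(primes(10))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```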

    I believe that thanks to several useful tools and libraries, Python can be used for numerical computation, even in finance (both research and production). It is easy to maintain without sacrificing performance.

    Note that you can find some other references on the Visixion webpages.

  • Rss feeds aggregator based on Scikits.learn and CubicWeb

    2011/10/17 by Vincent Michel

    During Euroscipy, the Logilab Team presented an original approach for querying news using semantic information: "Rss feeds aggregator based on Scikits.learn and CubicWeb" by Vincent Michel. This work is based on two major pieces of software:
    • CubicWeb, the pythonic semantic web framework, is used to store and query Dbpedia information. CubicWeb is able to reconstruct links from rdf/nt files, and can easily execute complex queries in a database with more than 8 million entities and 75 million links when using a PostgreSQL backend.
    • Scikit.learn is a cutting-edge python toolbox for machine learning. It provides algorithms that are simple and easy to use.

    Based on these tools, we built a pure Python application to query the news:

    • Named Entities are extracted from RSS articles of a few mainstream English newspapers (New York Times, Reuters, BBC News, etc.). For each group of words in an article, we check if a Dbpedia entry has the same label. If so, we create a semantic link between the article and the Dbpedia entry.
    • An occurrence matrix of "RSS Articles" times "Named Entities" is constructed and may be fed to several machine learning algorithms (MeanShift algorithm, Hierarchical Clustering) in order to provide original and informative views of recent events.
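    As a toy illustration of such an occurrence matrix (the articles and entity labels below are made up for the example; the real system matches Dbpedia labels and feeds the matrix to scikits.learn clustering):

```python
# Hypothetical article texts and named-entity labels -- illustration only.
articles = [
    "Barack Obama met scientists at the White House",
    "Barack Obama signed a bill",
    "A new drug was approved",
]
entities = ["Barack Obama", "drug", "scientist"]

# Occurrence matrix: one row per article, one column per named entity.
matrix = [[article.lower().count(entity.lower()) for entity in entities]
          for article in articles]
print(matrix)  # [[1, 0, 1], [1, 0, 0], [0, 1, 0]]
```

Rows with similar entity profiles (here the two Obama articles) end up close together, which is exactly what the clustering algorithms exploit.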

    Moreover, queries may be used jointly with semantic information from Dbpedia:

    • All musical artists in the news:

      DISTINCT Any E, R WHERE E appears_in_rss R, E has_type T, T label "musical artist"
    • All living office holder persons in the news:

      DISTINCT Any E WHERE E appears_in_rss R, E has_type T, T label "office holder", E has_subject C, C label "Living people"
    • All news that talk about Barack Obama and any scientist:

      DISTINCT Any R WHERE E1 label "Barack Obama", E1 appears_in_rss R, E2 appears_in_rss R, E2 has_type T, T label "scientist"
    • All news that talk about a drug:

      Any X, R WHERE X appears_in_rss R, X has_type T, T label "drug"

    Such a tool may be used for informetrics and news analysis. Feel free to download the complete slides of the presentation.

  • Helping pylint to understand things it doesn't

    2011/10/10 by Sylvain Thenault

    The latest release of logilab-astng (0.23), the underlying source code representation library used by PyLint, provides a new API that may change pylint users' life in the near future...

    It aims to allow registration of functions that will be called after a module has been parsed. While this sounds dumb, it gives a chance to fix/enhance the understanding PyLint has about your code.

    I see this as a major step towards greatly enhanced code analysis, improving the situation where PyLint users know that when running it against code using their favorite framework (who said CubicWeb? :p ), they should expect a bunch of false positives because of black magic in the ORM or in decorators or whatever else. There are also places in the Python standard library where dynamic code can cause false positives in PyLint.

    The problem

    Let's take a simple example, and see how we can improve things using the new API. The following code:

    import hashlib
    def hexmd5(value):
        """"return md5 checksum hexadecimal digest of the given value"""
        return hashlib.md5(value).hexdigest()
    def hexsha1(value):
        """"return sha1 checksum hexadecimal digest of the given value"""
        return hashlib.sha1(value).hexdigest()

    gives the following output when analyzed through pylint:

    [syt@somewhere ~]$ pylint -E
    No config file found, using default configuration
    ************* Module smarter_astng
    E:  5,11:hexmd5: Module 'hashlib' has no 'md5' member
    E:  9,11:hexsha1: Module 'hashlib' has no 'sha1' member


    [syt@somewhere ~]$ python
    Python 2.7.1+ (r271:86832, Apr 11 2011, 18:13:53)
    [GCC 4.5.2] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import smarter_astng
    >>> smarter_astng.hexmd5('hop')
    >>> smarter_astng.hexsha1('hop')

    The code runs fine... Why does pylint bother me then? If we take a look at the hashlib module, we see that there are no sha1 or md5 functions defined in there. They are defined dynamically according to OpenSSL library availability, in order to use the fastest available implementation, using code like:

    for __func_name in __always_supported:
        # try them all, some may not work due to the OpenSSL
        # version not supporting that algorithm.
        try:
            globals()[__func_name] = __get_hash(__func_name)
        except ValueError:
            import logging
            logging.exception('code for hash %s was not found.', __func_name)

    Honestly, I don't blame PyLint for not understanding this kind of magic. The situation in this particular case could be improved, but that's tedious work, and there will always be "similar but different" cases that won't be understood.

    The solution

    The good news is that thanks to the new astng callback, I can help it be smarter! See the code below:

    from logilab.astng import MANAGER, scoped_nodes
    def hashlib_transform(module):
        if module.name == 'hashlib':
            for hashfunc in ('sha1', 'md5'):
                module.locals[hashfunc] = [scoped_nodes.Class(hashfunc, None)]
    def register(linter):
        """called when loaded by pylint --load-plugins, register our transformation
        function here
        """
        MANAGER.register_transformer(hashlib_transform)

    What's in there?

    • A function that will be called with each astng module built during a pylint execution, i.e. not only the ones that you analyze, but also those accessed for type inference.
    • This transformation function is fairly simple: if the module is the 'hashlib' module, it will insert into its locals dictionary a fake class node for each desired name.
    • It is registered using the register_transformer method of astng's MANAGER (the central access point to built syntax trees). This is done in the pylint plugin API's register callback function (called when the plugin module is loaded by 'pylint --load-plugins').

    Now let's try it! Supposing I stored the above code in an 'astng_hashlib' module on my PYTHONPATH, I can now run pylint with the plugin activated:

    [syt@somewhere ~]$ pylint -E --load-plugins astng_hashlib
    No config file found, using default configuration
    ************* Module smarter_astng
    E:  5,11:hexmd5: Instance of 'md5' has no 'hexdigest' member
    E:  9,11:hexsha1: Instance of 'sha1' has no 'hexdigest' member

    Hum, we now have a different error :( Pylint grasps that there are md5 and sha1 classes, but it complains that they don't have a hexdigest method. Indeed, we didn't give it a clue about that.

    We could continue on and on to give it a full representation of hashlib public API using the astng nodes API. But that would be painful, trust me. Or we could do something clever using some higher level astng API:

    from logilab.astng import MANAGER
    from logilab.astng.builder import ASTNGBuilder
    def hashlib_transform(module):
        if module.name == 'hashlib':
            fake = ASTNGBuilder(MANAGER).string_build('''
    class md5(object):
      def __init__(self, value): pass
      def hexdigest(self):
        return u''
    class sha1(object):
      def __init__(self, value): pass
      def hexdigest(self):
        return u''
    ''')
            for hashfunc in ('sha1', 'md5'):
                module.locals[hashfunc] = fake.locals[hashfunc]
    def register(linter):
        """called when loaded by pylint --load-plugins, register our transformation
        function here
        """
        MANAGER.register_transformer(hashlib_transform)

    The idea is to write a fake python implementation only documenting the prototype of the desired class, and to get an astng from it, using the string_build method of the astng builder. This method will return a Module node containing the astng for the given string. It's then easy to replace or insert additional information into the original module, as you can see in the above example.
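    The string-to-tree idea is easy to demonstrate with the standard library's ast module (astng builds a richer, inference-capable tree, but the principle is the same):

```python
import ast

# Build a syntax tree from a fake implementation given as a string,
# in the same spirit as astng's string_build.
fake = ast.parse('''
class md5(object):
    def __init__(self, value): pass
    def hexdigest(self):
        return u''
''')
classes = [node.name for node in fake.body if isinstance(node, ast.ClassDef)]
methods = [node.name for node in fake.body[0].body]
print(classes, methods)  # ['md5'] ['__init__', 'hexdigest']
```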

    Now if I run pylint using the updated plugin:

    [syt@somewhere ~]$ pylint -E --load-plugins astng_hashlib
    No config file found, using default configuration

    No error anymore, great!

    What's next?

    This fairly simple change could quickly provide great enhancements. We should probably improve the astng manipulation API now that it's exposed like that. But we can also easily imagine a code base of such pylint plugins maintained by each community around a python library or framework. One could then use a stack of plugins matching the libraries used by their software, and have a greatly enhanced experience of using pylint.

    For a start, it would be great if pylint shipped with a plugin that explains all the magic found in the standard library, wouldn't it? Left as an exercise for the reader!

  • Text mode makes it into hgview 1.4.0

    2011/10/06 by Alain Leufroy

    Here at last is the release of version 1.4.0 of hgview.

    Small description

    Besides the classic bugfixes this release introduces a new text based user interface thanks to the urwid library.

    Running hgview in a shell, in a terminal, or over an ssh session is now possible! If you are trying not to use X (or to use it less), or have a geeky mouse-killer window manager such as wmii/dwm/ion/awesome/..., this is for you!

    This TUI (Text User Interface!) adopts the principal features of the Qt4 based GUI, although only the main view has been implemented for now.

    In a nutshell, this interface includes the following features:

    • display the revision graph (with working directory as a node, and basic support for the mq extension),
    • display the files affected by a selected changeset (with basic support for the bfiles extension)
    • display diffs (with syntax highlighting thanks to pygments),
    • automatically refresh the displayed revision graph when the repository is being modified (requires pyinotify),
    • easy key-based navigation in revisions' history of a repo (same as the GUI),
    • a command system for special actions (see help)


    There are packages for debian and ubuntu in Logilab's debian repository.

    Note: you have to install the hgview-curses package to get the text based interface.

    Or you can simply clone our Mercurial repository:

    hg clone

    (more on the hgview home page)

    Running the text based interface

    A new --interface option is now available to choose the interface:

    hgview --interface curses

    Or you can set it in the [hgview] section of your ~/.hgrc:

    interface = curses # or qt or raw

    Then run:


    What's next

    We'll be working on including other features from the Qt4 interface and making it fully configurable.

    We'll also work on bugfixes and new features, so stay tuned! And feel free to file bugs and feature requests.

  • Drawing UML diagrams with Python

    2011/09/26 by Nicolas Chauvat

    It started with a desire to draw diagrams of hierarchical systems with Python. Since this is similar to what we do in CubicWeb with schemas of the data model, I read the code and realized we had that graph submodule in the logilab.common library. This module uses dot from graphviz as a backend to draw the diagrams.

    Reading about UML diagrams drawn with GraphViz, I learned about UMLGraph, that uses GNU Pic to draw sequence diagrams. Pic is a language based on groff and the pic2plot tool is part of plotutils (apt-get install plotutils). Here is a tutorial. I have found some Python code wrapping pic2plot available as plugin to wikipad. It is worth noticing that TeX seems to have a nice package for UML sequence diagrams called pgf-umlsd.

    Since nowadays everything is moving into the web browser, I looked for a javascript library that does what graphviz does and I found canviz which looks nice.

    If (only) I had time, I would extend pyreverse to draw sequence diagrams and not only class diagrams...

  • EuroSciPy'11 - Annual European Conference for Scientists using Python.

    2011/08/24 by Alain Leufroy

    The EuroScipy2011 conference will be held in Paris at the Ecole Normale Supérieure from August 25th to 28th and is co-organized and sponsored by INRIA, Logilab and other companies.

    The conference is a cross-disciplinary gathering focused on the use and development of the Python language in scientific research.

    August 25th and 26th are dedicated to tutorial tracks -- basic and advanced tutorials. August 27th and 28th are dedicated to talks, posters and demos sessions.

    Damien Garaud, Vincent Michel and Alain Leufroy (and others) from Logilab will be there. We will talk about an RSS feeds aggregator based on Scikits.learn and CubicWeb, and we have a poster about LibAster (a python library for thermomechanical simulation based on Code_Aster).

  • Pylint 0.24 / logilab-astng 0.22

    2011/07/21 by Sylvain Thenault

    Hi there!

    I'm pleased to announce new releases of pylint and its underlying library logilab-astng. See and for more info.

    Those releases mostly include fixes and a few enhancements. Python 2.6 relative / absolute imports should now work fine and Python 3 support has been enhanced. There are still two remaining failures in the astng test suite when using python 3, but we're unfortunately missing the resources to fix them yet.

    Many thanks to everyone who contributed to this release by submitting patches or by participating in the latest bug day.

  • pylint bug day #3 on july 8, 2011

    2011/07/04 by Sylvain Thenault

    Hey guys,

    we'll hold the next pylint bug day on July 8th, 2011 (Friday). If some of you want to come and work with us in our Paris office, you'll be welcome.

    You can also join us on jabber / irc:

    I know the announcement is a bit late, but I hope some of you will be able to come or be online anyway!

    Regarding the program, the goal is to decrease the number of tickets in the tracker. I'll try to do some triage earlier this week so you'll get a chance to talk about your super-important ticket that has not been selected. Of course, if you intend to work on it, there is a bigger chance of it being fixed next week-end ;)

  • Setting up my Microsoft Natural Keyboard under Debian Squeeze

    2011/06/08 by Nicolas Chauvat

    I upgraded to Debian Squeeze over the week-end and it broke my custom Xmodmap. While I was fixing it, I realized that the special keys of my Microsoft Natural keyboard that were not working under Lenny were now functional. The only piece missing was the "zoom" key. Here is how I got it to work.

    I found on the askubuntu forum a solution to the same problem, but it is missing the following details.

    To find which keysym to map, I listed input devices:

    $ ls /dev/input/by-id/
    usb-Logitech_USB-PS.2_Optical_Mouse-mouse        usb-Logitech_USB-PS_2_Optical_Mouse-mouse
    usb-Logitech_USB-PS_2_Optical_Mouse-event-mouse  usb-Microsoft_Natural®_Ergonomic_Keyboard_4000-event-kbd

    then used evtest to find the keysym:

    $ evtest /dev/input/by-id/usb-Microsoft*

    then used udevadm to find the identifiers:

    $ udevadm info --export-db | less

    then edited /lib/udev/rules.d/95-keymap.rules to add:

    ENV{ID_VENDOR}=="Microsoft", ENV{ID_MODEL_ID}=="00db", RUN+="keymap $name microsoft-natural-keyboard-4000"

    in the section keyboard_usbcheck

    and created the keymap file:

    $ cat /lib/udev/keymaps/microsoft-natural-keyboard-4000
    0xc022d pageup
    0xc022e pagedown

    then loaded the keymap:

    $ /lib/udev/keymap /dev/input/by-id/usb-Microsoft_Natural®_Ergonomic_Keyboard_4000-event-kbd /lib/udev/keymaps/microsoft-natural-keyboard-4000

    then used evtest again to check it was working.

    Of course, you do not have to map the events to pageup and pagedown, but I found it convenient to use that key to scroll up and down pages.

    Hope this helps :)

  • Coding sprint scikits.learn

    2011/03/22 by Vincent Michel

    We are planning a one-day coding sprint on scikits.learn on April 1st.
    Attending in person or participating remotely on IRC is more than welcome!

    More information can be found on the wiki:

  • Distutils2 Sprint at Logilab (first day)

    2011/01/28 by Alain Leufroy

    We're very happy to host the Distutils2 sprint this week in Paris.

    The sprint started yesterday with some of Logilab's developers and other contributors. We'll sprint for 4 days, trying to pull up the new python package manager.

    Let's summarize this first day:

    • Boris Feld and Pierre-Yves David worked on the new system for detecting and dispatching data-files.
    • Julien Miotte worked on
      • moving qGitFilterBranch from setuptools to distutils2
      • testing distutils2 installation and register (see the tutorial)
      • the backward compatibility to distutils in setup.py, using setup.cfg to fill the arguments of setup() and help users switch to distutils2.
    • André Espaze and Alain Leufroy worked on the python script that helps developers build a setup.cfg by recycling their existing setup.py (track).

    Join us on IRC at #distutils on !

  • The Python Package Index is not a "Software Distribution"

    2011/01/26 by Pierre-Yves David

    Recent discussions on the #distutils irc channel and with my Logilab co-workers led me to the following conclusions:

    • The Python Package Index is not a software distribution
    • There is more than one way to distribute python software
    • Distribution packagers are power users and need super cow-powers
    • Users want it to "just work"
    • The Python Package Index is used by many as a software distribution
    • Pypi has a lot of contributions because requirements are low.

    The Python Package Index is not a software distribution

    I would define a software distribution as:

    • Organised group of people
    • Who apply a Unified Quality process
    • To a finite set of software
    • Which includes all its dependencies
    • With a consistent set of versions that work together
    • For a finite set of platforms
    • Managed and installed by dedicated tools.

    Pypi is a public index where:

    • Any python developer
    • Can upload any tarball containing something related
    • To any python package
    • Which might have external dependencies (outside Pypi)
    • The latest version of something is always available regardless of its compatibility with other packages.
    • Binary packages can be provided for any platform but are usually not.
    • There are several tools to install and manage python packages from pypi.

    Pypi is not a software distribution, it is a software index.

    Card File by Mr. Ducke / Matt

    There is more than one way to distribute python software

    There is a long way from the pure source used by the developer to the software installed on the system of the end user.

    First, the source must be extracted from a (D)VCS to make a version tarball, while executing several release-specific actions (eg: changelog generation from a tracker). Second, the version tarball is used to generate a platform independent build, while executing several build steps (eg, Cython compilation into C files or documentation generation). Third, the platform independent build is used to generate a platform dependent build, while executing several platform dependent build steps (eg, compilation of C extensions). Finally, the platform dependent build is installed and each file gets dispatched to its proper location during the installation process.

    Pieces of software can be distributed as development snapshots taken from the (D)VCS, version tarballs, source packages, platform independent packages or platform dependent packages.

    package! by Beck Gusler

    Distribution packagers are power users and need super cow-powers

    Distribution packagers usually have the necessary infrastructure and skills to build packages from version tarballs. Moreover they might have specific needs that require as much control as possible over the various build steps. For example:

    • Specific help system requiring a custom version of sphinx.
    • Specific security or platform constraints that require a specific version of Cython
    Cheese Factory by James Yu

    Users want it to "just work"

    Standard users want it to "just work". They prefer simple and quick ways to install stuff. Build steps done on their machine increase the duration of the installation, add potential new dependencies and may trigger errors. Standard users are very disappointed when an installation fails because an error occurred while building the documentation. Users give up when they have to download extra dependencies and set up a complicated compilation environment.

    Users want as many build steps as possible to be done by someone else. That's why many users choose a distribution that does the job for them (eg, Ubuntu, Red Hat, Python(x,y)).

    The Python Package Index is used by many as a software distribution

    But there are several situations where users can't rely on their distribution to install python software:

    • There is no distribution available for the platform (Windows, Mac OS X)
    • They want to install a python package outside of their distribution system (to test or because they do not have the credentials to install it system-wide)
    • The software or version they need is not included in the finite set of software included in their distribution.

    When this happens, the user will use Pypi to fetch python packages. To help them, Pypi accepts binary packages of python modules and people have developed dedicated tools that ease installation of packages and their dependencies: pip, easy_install.

    Pip + Pypi provides the tools of a distribution without its consistency. This is better than nothing.

    Pypi has a lot of contributions because requirements are low

    Pypi should contain version tarballs of all known python modules; that is the primary purpose of an index. Version tarballs should let distributions and power users perform as many build steps as possible. Pypi will continue to be used as a distribution by people without a better option. Packages provided to these users should require as little work as possible to be installed, meaning they either have no build steps to perform or only platform-dependent build steps (that could not be executed by the developer).

    Thomas Fisher Rare Book Library by bookchen

    If the upcoming distutils2 provides a way to differentiate platform-dependent build steps from platform-independent ones, python developers will be able to upload three different kinds of packages to Pypi.

    sdist: Pure source version released by upstream, targeted at packagers and power users.
    idist: Platform-independent package with the platform-independent build steps done (Cython, docs). If there are no such build steps, this package is the same as the sdist.
    bdist: Platform-dependent package with all build steps performed. For packages with no platform-dependent build steps, this package is the same as the idist.

    (Image under creative commons Card File by-nc-nd by Mr. Ducke / Matt, Thomas Fisher Rare Book Library by bookchen, package! by Beck Gusler, Cheese Factory by James Yu)

  • Fresh release of lutin77, Logilab Unit Test IN fortran 77

    2011/01/11 by Andre Espaze

    I am pleased to announce the 0.2 release of lutin77, for running Fortran 77 tests by using a C compiler as the only dependency. Moreover, this very light framework of 97 lines of C code makes a very good demo of Fortran and C interfacing. The next level could be to write it in GAS (GNU Assembler).

    For the overexcited maintainers of legacy code, here comes a screenshot:

    $ cat test_error.f
       subroutine success
       end

       subroutine error
       integer fid
       open(fid, status="old", file="nofile.txt")
       write(fid, *) "Ola"
       end

       subroutine checke
       call check(.true.)
       call check(.false.)
       call abort
       end

       program run
       call runtest("error")
       call runtest("success")
       call runtest("absent")
       call runtest("checke")
       call resume
       end

    Then you can build the framework by:

    $ gcc -Wall -pedantic -c lutin77.c

    And now run your tests:

    $ gfortran -o test_error test_error.f lutin77.o -ldl -rdynamic
    $ ./test_error
      At line 6 of file test_error.f
      Fortran runtime error: File 'nofile.txt' does not exist
      Error with status 512 for the test "error".
      "absent" test not found.
      Failure at check statement number 2.
      Error for the test "checke".
      4 tests run (1 PASSED, 0 FAILED, 3 ERRORS)

    See also the list of test frameworks for Fortran.

  • Distutils2 January Sprint in Paris

    2011/01/07 by Pierre-Yves David

    At Logilab, we have the pleasure to host a distutils2 sprint in January. Sprinters are welcome in our Paris office from 9h on the 27th of January to 19h the 30th of January. This sprint will focus on polishing distutils2 for the next alpha release and on the install/remove scripts.

    Distutils2 is an important project for Python. Every contribution will help to improve the current state of packaging in Python. See the wiki page for details about participation. If you can't attend or join us in Paris, you can participate on the #distutils channel of the Freenode IRC network.

    For additional details, see Tarek Ziadé's original announcement, read the wiki page, or contact us.

  • Accessing data on a virtual machine without network

    2010/12/02 by Andre Espaze

    At Logilab, we work a lot with virtual machines for testing and developing code on customers' architectures. We access virtual machines through the network and copy data with the scp command. However, in case of a network failure, there is still a way to access your data: mounting a rescue disk on the virtual machine. The following commands use qemu, but the idea could certainly be adapted to other emulators.

    Creating and mounting the rescue disk

    To be able to mount the rescue disk on your system later, it is necessary to use the raw image format (the default with qemu):

    $ qemu-img create data-rescue.img 10M

    Then run your virtual machine with 'data-rescue.img' attached (you need to add a disk storage in virt-manager). Once in your virtual system, you will have to partition and format your new hard disk. As an example with Linux (win32 users will prefer right clicks):

    $ fdisk /dev/sdb
    $ mke2fs -j /dev/sdb1

    Then the new disk can be mounted and used:

    $ mount /dev/sdb1 /media/usb
    $ cp /home/dede/important-customer-code.tar.bz2 /media/usb
    $ umount /media/usb

    You can then stop your virtual machine.

    Getting back data from the rescue disk

    You will then have to carry your 'data-rescue.img' to a system where you can mount a file with the 'loop' option. But first we need to find where our partition starts:

    $ fdisk -ul data.img
    You must set cylinders.
    You can do this from the extra functions menu.
    Disk data.img: 0 MB, 0 bytes
    255 heads, 63 sectors/track, 0 cylinders, total 0 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Disk identifier: 0x499b18da
    Device Boot      Start         End      Blocks   Id  System
    data.img1           63       16064        8001   83  Linux

    Now we can mount the partition and get back our code:

    $ mkdir /media/rescue
    $ mount -o loop,offset=$((63 * 512)) data-rescue.img /media/rescue/
    $ ls /media/rescue/

  • Thoughts on the python3 conversion workflow

    2010/11/30 by Emile Anclin


    The 2to3 script is a very useful tool. We can just run it over our whole code base and end up with python3 compatible code while keeping a python2 code base. To make our code python3 compatible, we do (or did) two things:

    • small python2 compatible modifications of our source code
    • run 2to3 over our code base to generate a python3 compatible version

    However, we not only want to have one python3 compatible version, we also want to keep developing our software. Hence, we want to be able to easily test it with both python2 and python3. Furthermore, if we use patches to get nice commits, this starts to get quite messy. Let's consider this in the case of Pylint. Indeed, the workflow described before proved to be unsatisfying.

    • I have two repositories, one for python2, one for python3. On the python3 side, I run 2to3 and store the modifications in a patch or a commit.

    • Whenever I implement a fix or a functionality on either side, I have to test if it still works on the other side; but as the 2to3 modifications are often quite heavy, directly creating patches on one side and applying them on the other side won't work most of the time.

    • Now say, I implement something in my python2 base and hold it in a patch or commit it. I can then pull it to my python3 repo:

      • running 2to3 on all Pylint is quite slow: around 30 sec for Pylint without the tests, and around 2 min with the tests. (I'd rather not imagine how long it would take for say CubicWeb).

    • even if I have all my 2to3 modifications in a patch, it takes 5-6 sec to "qpush" or "qpop" them all. Committing the 2to3 changes instead and using:

        hg pull -u --rebase

        is not much faster. If I don't use --rebase, I will have merges on each pull up. Furthermore, we often have either a patch application failure, merge conflict or end up with something which is not python3 compatible (like a newly introduced "except Error, exc").

    • So quite often, I will have to fix it with:

      hg revert -r REV <broken_files>
      2to3 -nw <broken_files>
      hg qref # or hg resolve -m; hg rebase -c
    • Suppose the 2to3 transition worked fine, or that we fixed it. I run my tests with python3 and see that something does not work; so I modify the patch: it all starts again, and the new patch or the patch modification will create a new head in my python3 repo...

    2to3 Fixers

    Considering all that, let's investigate 2to3: it comes with a lot of fixers that can be activated or deactivated. Now, a lot of them fix only very rare use cases or stuff that has been deprecated for years. On the other hand, the 2to3 fixers work with regular expressions, so the more we remove, the faster 2to3 should be. Depending on the project, most cases will just not appear, and for the others, we should be able to find other means of disabling them. The lists proposed hereafter are just suggestions; which fixers could actually be disabled, and how, will depend on the source base and other overall considerations.

    python2 compatible

    The following fixers are 2.x compatible and should be run once and for all (and can then be disabled in daily conversion usage):

    • apply
    • execfile (?)
    • exitfunc
    • getcwdu
    • has_key
    • idioms
    • ne
    • nonzero
    • paren
    • repr
    • standarderror
    • sys_exec
    • tuple_params
    • ws_comma


    This can be fixed using imports from a "compat" module like the logilab.common.compat module which holds convenient compatible objects.

    • callable
    • exec
    • filter (Wraps filter() usage in a list call)
    • input
    • intern
    • itertools_imports
    • itertools
    • map (Wraps map() in a list call)
    • raw_input
    • reduce
    • zip (Wraps zip() usage in a list call)
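    The compat approach for this group can be sketched as a small module with version-dependent definitions. The names below are illustrative only, not the actual logilab.common.compat API:

    ```python
    # Hypothetical sketch of a "compat" module: callers import these names
    # instead of relying on builtins whose return type changed in Python 3.
    import sys

    if sys.version_info[0] >= 3:
        def list_filter(func, seq):
            # Python 3: filter() returns an iterator, so wrap it in a list
            return list(filter(func, seq))

        def list_map(func, seq):
            # Python 3: map() returns an iterator as well
            return list(map(func, seq))
    else:
        # Python 2: filter() and map() already return lists
        list_filter = filter
        list_map = map
    ```

    Code that imports these names from the compat module then behaves identically under both interpreters, and the corresponding fixers can be left disabled.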

    strings and bytes

    Maybe they could also be handled by compat:

    • basestring
    • unicode
    • print

    For print for example, we could think of a once-and-for-all custom fixer, that would replace it by a convenient echo function (or whatever name you like) defined in compat.
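    A minimal sketch of such an echo function (a hypothetical name, as suggested above) could look like:

    ```python
    # Hypothetical "echo" replacement for the print statement: it behaves
    # the same under Python 2 and Python 3, so no print fixer is needed.
    import sys

    def echo(*args, **kwargs):
        stream = kwargs.pop('stream', sys.stdout)  # where to write
        end = kwargs.pop('end', '\n')              # trailing separator
        stream.write(' '.join(str(arg) for arg in args) + end)
    ```

    A once-and-for-all fixer would then rewrite every print statement into a call to this function.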


    The following issues could probably be fixed manually:

    • dict (it fixes dict iterator methods; it should be possible to have code where we can disable this fixer)
    • import (Detects sibling imports; we could convert them to absolute import)
    • imports, imports2 (renamed modules)


    These changes seem to be necessary:

    • except
    • long
    • funcattrs
    • future
    • isinstance (Fixes duplicate types in the second argument of isinstance(). For example, isinstance(x, (int, int)) is converted to isinstance(x, int))
    • metaclass
    • methodattrs
    • numliterals
    • next
    • raise

    Consider however that a lot of them might never be used in some projects, like long, funcattrs, methodattrs and numliterals or even metaclass. Also, isinstance is probably motivated by long to int and unicode to str conversions and hence might also be somehow avoided.

    don't know

    Can we also fix these with compat?

    • renames
    • throw
    • types
    • urllib
    • xrange
    • xreadlines

    2to3 and Pylint

    Pylint is a special case since its test suite has a lot of bad and deprecated code which should stay there. However, in order to have a reasonable workflow, it seems that something must be done to reduce the 1 min 30 s that 2to3 spends parsing the tests. Probably nothing can be gained from the above considerations, since most cases just should be in the tests, and actually are. Realise that we can expect to be supporting python2 and python3 in parallel for several years.

    After a quick look, we see that 90 % of the refactorings of test/input files concern only the print statements; moreover, most of them have nothing to do with the tested functionality. Hence a solution might be to avoid running 2to3 on the test/input directory, since we already have a mechanism to select, depending on the python version, whether a test file should be tested or not. To some extent, astng is a similar case, but its test suite and the whole project are much smaller.

  • Notes on making "logilab-common" Py3k-compatible

    2010/09/28 by Emile Anclin

    The version 3 of Python is incompatible with the 2.x series. In order to make pylint usable with Python3, I did some work on making the logilab-common library Python3 compatible, since pylint depends on it.

    The strategy is to have one source code version, and to use the 2to3 tool for publishing a Python3 compatible version.

    Pytest vs. Unittest

    The first problem was that we use the pytest runner, that depends on logilab.common.testlib which extends the unittest module.

    Without major modification we could use unittest2 instead of unittest in Python2.6. I thought that the unittest2 module was equivalent to the unittest in Python3, but then realized I was wrong:

    • Python3.1/unittest is some strange "forward port" of unittest. Both are a single file, but they must be quite different, since the 3.1 version has 1623 lines compared to 875 for 2.6...
    • Python2.x/unittest2 is a python package, backported from the alpha-release of Python3.2/unittest.

    I did not investigate if there are other unittest and unittest2 versions corresponding.

    What we can see is that the 3.1 version of unittest is different from everything else; whereas the 2.6-unittest2 is equivalent to 3.2-unittest. So, after trying to run pytest on Python3.1 and since there is a backport of unittest2 for Python3.1, it became clear that the best is to ignore py3.1-unittest and work on Python3.2 and unittest2 directly.

    Meanwhile, some work was being done on logilab-common to switch from unittest to unittest2. This was included in logilab.common-0.52.
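    The switch implied above can be done with a conditional import. A sketch, assuming the unittest2 backport is installed where it is needed:

    ```python
    # Prefer the unittest2 backport on interpreters whose bundled unittest
    # is too old; fall back to the standard module everywhere else.
    import sys

    if sys.version_info < (2, 7):
        try:
            import unittest2 as unittest  # backport of the 3.2 unittest
        except ImportError:
            import unittest
    else:
        import unittest

    # The rest of the test code uses the "unittest" name uniformly.
    ```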

    'python2.6 -3' and 2to3

    The -3 option of python2.6 warns about Python3 incompatible stuff.

    Since I already knew that pytest would work with unittest2, I wanted to know as fast as possible whether pytest would run on Python3.x. So I ran all logilab.common tests with "python2.6 -3 bin/pytest" and found a couple of problems that I quick-fixed or discarded, waiting to know the real solution.

    The 2to3 script (from the lib2to3 library) does its best to transform Python2.x code into Python3 compatible code, but manual work is often needed to handle some cases. For example, file is not considered a deprecated base class, and calls to raw_input(...) are handled but not uses of raw_input as an instance attribute, etc. At times, 2to3 can be overzealous, and for example make modifications such as:

    -                for name, local_node in node.items():
    +                for name, local_node in list(node.items()):


    After a while, I found that the best solution was to adopt the following working procedure:

    • run the tests with python2.6 -3 and solve the appearing issues.
    • run 2to3 on all that has to be transformed:
    2to3-2.6 -n -w *py test/*py ureports/*py

    Since we are in a mercurial repository we don't need backups (-n) and we can write the modifications to the files directly (-w).

    • create a 223.diff patch that will be applied and removed repeatedly.

      Now, we will push and pop this patch (which is much faster than running 2to3), and only regenerate it from time to time to make sure it still works:

    • run "python3.2 bin/pytest -x", to find problems and solutions for crashes and tests that do not work. Note that after some quick fixes on logilab.common.testlib, pytest works quite well, and that we can use the "-x" option. Using Python's Whatsnew_3.0 documentation for hints is quite useful.

    • hg qpop 223.diff

    • write the solution into the 2.x code, convert it into a patch or a commit, and run the tests: some trivial things might not work or not be 2.4 compatible.

    • hg qpush 223.diff

    • repeat the procedure

    I used two repositories when working on logilab.common, one for Python2 and one for Python3, because other tools, like astng and pylint, depend on that library. Setting the PYTHONPATH was enough to get astng and pylint to use the right version.

    Concrete examples

    • We had to remove "os.path.walk" by replacing it with "os.walk".

    • The renaming of raw_input to input, __builtin__ to builtins and IOString to io could easily be resolved by using the improved logilab.common.compat technique: write a python version dependent definition of a variable, function, or class in logilab.common.compat and import it from there.

      For builtins, it is even easier: 2to3 recognizes direct imports, so we can write:

    import __builtin__ as builtins # 2to3 will transform '__builtin__' to 'builtins'

    The most difficult point is the replacement of str/unicode by bytes/str.

    In Python3.x, we only use unicode strings called just str (the u'' syntax and unicode disappear), but everything written on disk will have to be converted to bytes, with some explicit encoding. In Python3.x, file descriptors have a defined encoding, and will automatically transform the strings to bytes.

    I wrote two functions in logilab.common.compat. One converts str to bytes and the other simply ignores the encoding in case of 3.x where it was expected in 2.x. But there might be a need to write additional tests to make sure the modifications work as expected.
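    Those two helpers might look like the following sketch (illustrative names, not the actual logilab.common.compat API):

    ```python
    import sys

    if sys.version_info[0] >= 3:
        def str_to_bytes(string, encoding='utf-8'):
            # Data written to binary streams must be bytes in 3.x
            return string.encode(encoding)

        def str_encode(string, encoding):
            # Strings are already text in 3.x: the encoding is ignored
            return string
    else:
        def str_to_bytes(string, encoding='utf-8'):
            # A 2.x str is already a byte string
            return string

        def str_encode(string, encoding):
            return string.encode(encoding)
    ```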


    • After less than a week of work, most of the logilab.common tests pass. The biggest remaining problems are a few tests that still fail, but we can already start working on the Python3 compatibility for astng and finally pylint.
    • Looking at the lib2to3 library, one can see that 2to3 works with regular expressions which reproduce the Python grammar. Hence, it cannot do much code investigation or static inference like astng does. I think that using astng, we could improve 2to3 without too much effort.
    • for astng the difficulties are quite different: syntax changes become semantic changes, we will have to add new types of astng nodes.
    • For testing astng and pylint we will probably have to check the different test examples, a lot of them being code snippets which 2to3 will not parse; they will have to be corrected by hand.

    As a general conclusion, I found no need for using sa2to3, although it might be a very good tool. I would instead suggest having a small compat module and keeping only one version of the code as far as possible, with the code base being either on 2.x or on 3.x, and using the (possibly customized) 2to3 or 3to2 scripts to publish two different versions.

  • SemWeb.Pro - first french Semantic Web conference, Jan 17/18 2011

    2010/09/20 by Nicolas Chauvat

    SemWeb.Pro, the first french conference dedicated to the Semantic Web will take place in Paris on January 17/18 2011.

    One day of talks, one day of tutorials.

    Want to grok the Web 3.0? Be there.

    Something you want to share? Call for papers ends on October 15, 2010.

  • Discovering logilab-common Part 1 - deprecation module

    2010/09/02 by Stéphanie Marcu

    The logilab-common library contains a lot of utilities which are often unknown. I will write a series of blog entries to explore the nice features of this library.

    We will begin with the logilab.common.deprecation module which contains utilities to warn users when:

    • a function or a method is deprecated
    • a class has been moved into another module
    • a class has been renamed
    • a callable has been moved to a new module


    When a function or a method is deprecated, you can use the deprecated decorator. It will print a message to warn the user that the function is deprecated.

    The decorator takes two optional arguments:

    • reason: the deprecation message. A good practice is to specify at the beginning of the message, between brackets, the version number from which the function is deprecated. The default message is 'The function "[function name]" is deprecated'.
    • stacklevel: This is the option of the warnings.warn function which is used by the decorator. The default value is 2.

    We have a class Person defined in a file. The get_surname method is deprecated; we must use the get_lastname method instead. For that, we use the deprecated decorator on the get_surname method.

    from logilab.common.deprecation import deprecated
    class Person(object):
        def __init__(self, firstname, lastname):
            self._firstname = firstname
            self._lastname = lastname
        def get_firstname(self):
            return self._firstname
        def get_lastname(self):
            return self._lastname
        @deprecated('[1.2] use get_lastname instead')
        def get_surname(self):
            return self.get_lastname()
    def create_user(firstname, lastname):
        return Person(firstname, lastname)
    if __name__ == '__main__':
        person = create_user('Paul', 'Smith')
        surname = person.get_surname()

    When running it, we get the message below:

    DeprecationWarning: [1.2] use get_lastname instead
      surname = person.get_surname()


    Now we have moved the class Person into the new_person module. In the old module, we indicate that the class has been moved:

    from logilab.common.deprecation import class_moved
    import new_person
    Person = class_moved(new_person.Person)
    if __name__ == '__main__':
        person = Person('Paul', 'Smith')

    When we run the file, we get the following message:

    DeprecationWarning: class Person is now available as new_person.Person
      person = Person('Paul', 'Smith')

    The class_moved function takes one mandatory argument and two optional ones:

    • new_class: this mandatory argument is the new class
    • old_name: this optional argument specifies the old class name. By default it is the same name as the new class. This argument is used in the default printed message.
    • message: with this optional argument, you can specify a custom message


    The class_renamed function automatically creates a class which fires a DeprecationWarning when instantiated.

    The function takes two mandatory arguments and one optional:

    • old_name: a string which contains the old class name
    • new_class: the new class
    • message: an optional message. The default one is '[old class name] is deprecated, use [new class name]'

    We now rename the Person class to User in the new_person module. Here is the new file:

    from logilab.common.deprecation import class_renamed
    from new_person import User
    Person = class_renamed('Person', User)
    if __name__ == '__main__':
        person = Person('Paul', 'Smith')

    When running it, we get the following message:

    DeprecationWarning: Person is deprecated, use User
      person = Person('Paul', 'Smith')


    The moved function is used to tell that a callable has been moved to a new module. It returns a callable wrapper, so that when the wrapper is called, a warning is printed telling where the object can be found. Then the import is done (and not before) and the actual object is called.


    The usage is somewhat limited on classes since it will fail if the wrapper is used in a class ancestors list: use the class_moved function instead (which has no lazy import feature though).

    The moved function takes two mandatory parameters:

    • modpath: a string representing the path to the new module
    • objname: the name of the new callable

    We now use the create_user function, which has been moved to the new_person module:

    from logilab.common.deprecation import moved
    create_user = moved('new_person', 'create_user')
    if __name__ == '__main__':
        person = create_user('Paul', 'Smith')

    When running it, we get the following message:

    DeprecationWarning: object create_user has been moved to module new_person
      person = create_user('Paul', 'Smith')

  • pdb.set_trace no longer working: problem solved


    I had a bad case of bug hunting today which took me > 5 hours to track down (with the help of Adrien in the end).

    I was trying to start a CubicWeb instance on my computer, and was encountering some strange pyro error at startup. So I edited some source file to add a pdb.set_trace() statement and restarted the instance, waiting for Python's debugger to kick in. But that did not happen. I was baffled. I first checked for standard problems:

    • no pdb.py or pdb.pyc was lying around in my Python sys.path
    • the pdb.set_trace function had not been silently redefined
    • no other thread was bugging me
    • the standard input and output were what they were supposed to be
    • I was not able to reproduce the issue on other machines

    After triple checking everything and grepping everywhere, I asked a question on StackOverflow before taking a lunch break (if you go there, you'll see the answer). After lunch, no useful answer had come in, so I asked Adrien for help, because two pairs of eyes are better than one in some cases. We dutifully traced down the pdb module's code to the underlying bdb and cmd modules and learned some interesting things on the way. Finally, we found out that the Python code frames which should have been identical were not. This discovery caused further bafflement. We looked at the frames, and saw that the class of one of those frames was a psyco generated wrapper.

    It turned out that CubicWeb can use two implementations of the RQL module: one which uses gecode (a C++ library for constraint-based programming) and one which uses logilab.constraint (a pure python library for constraint solving). The former is the default, but it would not load on my computer, because the gecode library had been replaced by a more recent version during an upgrade. The pure python implementation tries to use psyco to speed things up. Installing the correct version of libgecode solved the issue. End of story.

    When I checked out StackOverflow, Ned Batchelder had provided an answer. I didn't get the satisfaction of answering the question myself...

    Once this was figured out, solving the initial pyro issue took 2 minutes...

  • EuroSciPy'10

    2010/07/13 by Adrien Chauve

    The EuroSciPy2010 conference was held in Paris at the Ecole Normale Supérieure from July 8th to 11th and was organized and sponsored by Logilab and other companies.

    July, 8-9: Tutorials

    The first two days were dedicated to tutorials and I had the chance to talk about SciPy with André Espaze, Gaël Varoquaux and Emanuelle Gouillart in the introductory track. This was nice but it was a bit tricky to present SciPy in such a short time while trying to illustrate the material with real and interesting examples. One very nice thing for the introductory track is that all the material was contributed by different speakers and is freely available in a github repository (licensed under CC BY).

    July, 10-11: Scientific track

    The next two days were dedicated to scientific presentations and why python is such a great tool to develop scientific software and carry out research.


    I had a great time listening to the presentations, starting with the two very nice keynotes given by Hans Petter Langtangen and Konrad Hinsen. The latter gave us a very nice summary of what happened in the scientific python world during the past 15 years, what is happening now and of course what could happen during the next 15 years. Using a crystal ball and a very humorous tone, he made it very clear that the challenge of the next years will be how to use our hundreds, thousands or even more cores in a bug-free and efficient way. Functional programming may be a very good solution to this challenge, as it provides a deterministic way of parallelizing our programs. Konrad also provided some hints about future versions of python that could provide deeper and more efficient support of functional programming, and maybe the addition of a keyword 'async' to handle the computation of a function on another core.

    In fact, the PEP 3148 entitled "Futures - execute computations asynchronously" was just accepted two days ago. This PEP describes the new package called "futures" designed to facilitate the evaluation of callables using threads and processes in future versions of python. A full implementation is already available.
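    As a small illustration of the API described by PEP 3148, here is a sketch using concurrent.futures, the name under which the package eventually landed in the standard library:

    ```python
    # Evaluate a callable asynchronously in a pool of worker threads,
    # as described by PEP 3148 "Futures - execute computations asynchronously".
    from concurrent.futures import ThreadPoolExecutor

    def square(x):
        return x * x

    with ThreadPoolExecutor(max_workers=2) as pool:
        # submit() returns a Future; result() blocks until it is computed
        futures = [pool.submit(square, n) for n in range(4)]
        results = [f.result() for f in futures]

    print(results)  # -> [0, 1, 4, 9]
    ```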


    Parallelization was indeed a very popular issue across presentations, and as for resolving embarrassingly parallel problems, several solutions were presented.

    • Playdoh: Distributes computations over computers connected to a secure network (see playdoh presentation).

      Distributing the computation of a function over two machines is as simple as:

      import playdoh
      result1, result2 =, [arg1, arg2], _machines = ['', ''])
    • Theano: Allows to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. In particular it can use GPU transparently and generate optimized C code (see theano presentation).

    • joblib: Provides among other things helpers for embarrassingly parallel problems. It's built over the multiprocessing package introduced in python 2.6 and brings more readable code and easier debugging.


    Concerning speed, Francesc Alted showed us interesting tools for memory optimization currently used successfully in PyTables 2.2. You can read more details on this kind of optimization in EuroSciPy'09 (part 1/2): The Need For Speed.


    Last but not least, I talked with Christophe Pradal, who is one of the core developers of OpenAlea. He convinced me that SCons is worth using once you have built a nice extension for it: SConsX. I'm looking forward to testing it.

  • HOWTO install lodgeit pastebin under Debian/Ubuntu

    2010/06/24 by Arthur Lutz

    Lodge it is a simple open source pastebin... and it's written in Python!

    The installation under debian/ubuntu goes as follows:

    sudo apt-get update
    sudo apt-get -uVf install python-imaging python-sqlalchemy python-jinja2 python-pybabel python-werkzeug python-simplejson
    cd local
    hg clone
    cd lodgeit-main

    For debian squeeze you have to downgrade python-werkzeug, so get the old version of python-werkzeug from at


    Modify the dburi and the SECRET_KEY, then launch the application:

    python runserver

    Then off you go to configure your apache or lighttpd.

    An easy (and dirty) way of running it at startup is to add the following command to the www-data crontab:

    @reboot cd /tmp/; nohup /usr/bin/python /usr/local/lodgeit-main/ runserver &

    This should of course be done in an init script.

    Hopefully we'll find some time to package this nice webapp for debian/ubuntu.

  • EuroSciPy 2010 schedule is out !

    2010/06/06 by Nicolas Chauvat

    The EuroSciPy 2010 conference will be held in Paris from July 8th to 11th at the Ecole Normale Supérieure. Two days of tutorials, two days of conference, two interesting keynotes, a lightning talk session, an open space for collaboration and sprinting, thirty quality talks in the schedule and already 100 delegates registered.

    If you are doing science and using Python, you want to be there!

  • Salomé accepted into Debian unstable

    2010/06/03 by Andre Espaze

    Salomé is a platform for pre- and post-processing of numerical simulations. It is now available as a Debian package and should soon appear in Ubuntu as well.

    A difficult packaging work

    A first package of Salomé 3 was made by the courageous Debian developer Adam C. Powell, IV in January 2008. Such packaging is very resource-intensive because of the many modules to build. But the most difficult part was bringing Salomé to an environment it had never been ported to. Even today, Salomé 5 binaries are only provided by upstream as a stand-alone piece of software ready to unpack on a Debian Sarge/Etch or a Mandriva 2006/2008. This is the first reason why several patches were required to adapt the code to new versions of the dependencies. Version 3 of Salomé was so difficult and time-consuming to package that Adam put the work on hold for two years.

    The packaging of Salomé resumed with version 5.1.3 in January 2010. Thanks to Logilab and the OpenHPC project, I could join him for 14 weeks of work adapting every module to Debian unstable. Porting to the new versions of the dependencies was a first step, but we also had to adapt the code to the Debian packaging philosophy, with binaries, libraries and data shipped to dedicated directories.

    A promising future

    Salomé being accepted into Debian unstable means that porting it to Ubuntu should follow in the near future. Moreover, the work done to adapt Salomé to a GNU/Linux distribution may help developers on other platforms as well.

    That is excellent news for all people involved in numerical simulation, because they are going to have access to Salomé services through their package management tools. It will help spread the Salomé code on any fresh install and, moreover, keep it up to date.

    Join the fun

    For mechanical engineers, a derived product called Salomé-Méca has recently been published. The goal is to bring the functionality of the Code Aster finite element solver to Salomé in order to ease simulation workflows. If you too are interested in Debian packages for those tools, you are invited to come and join the fun.

    I have submitted a proposal to talk about Salomé at EuroSciPy 2010. I look forward to meeting other interested parties during this conference, which will take place in Paris on July 8th-11th.

  • Enable and disable encrypted swap - Ubuntu

    2010/05/18 by Arthur Lutz

    With the release of Ubuntu Lucid Lynx, the use of an encrypted /home has become pretty common and simple to set up. This is obviously good news for privacy reasons. The next step, which a lot of users are reluctant to take, is the use of an encrypted swap. One of the most obvious reasons is that in most cases it breaks the suspend and hibernate functions.

    Here is a little HOWTO on how to switch from normal swap to encrypted swap and back. That way, when you need a secure laptop (a trip to a conference, or a situation with a risk of theft) you can activate it, and then deactivate it when you're back home, for example.

    Turn it on

    That is pretty simple

    sudo ecryptfs-setup-swap

    Turn it off

    The idea is to turn off swap, remove the ecryptfs layer, reformat your partition as normal swap and enable it. We use sda5 as an example for the swap partition; please use your own (fdisk -l will tell you which swap partition you are using, as will /etc/crypttab).

    sudo swapoff -a
    sudo cryptsetup remove /dev/mapper/cryptswap1
    sudo vim /etc/crypttab
    *remove the /dev/sda5 line*
    sudo /sbin/mkswap /dev/sda5
    sudo swapon /dev/sda5
    sudo vim /etc/fstab
    *replace /dev/mapper/cryptswap1 with /dev/sda5*

    If this is useful, you can probably stick it in a script to turn it on and off... maybe we could get an ecryptfs-unsetup-swap into ecryptfs.

  • The DEBSIGN_KEYID trick

    2010/05/12 by Nicolas Chauvat

    I have been wondering for some time why debsign would not use the DEBSIGN_KEYID environment variable that I exported from my bashrc. Debian bug 444641 explains the trick: debsign ignores environment variables and sources ~/.devscripts instead. A simple export DEBSIGN_KEYID=ABCDEFG in ~/.devscripts is enough to get rid of the -k argument once and for all.
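    In shell terms the fix is a one-liner (ABCDEFG is the placeholder key id from the text, to be replaced with your own):

```shell
# debsign ignores the environment and sources ~/.devscripts instead,
# so persist the key id there (ABCDEFG is a placeholder)
echo 'DEBSIGN_KEYID=ABCDEFG' >> ~/.devscripts
grep DEBSIGN_KEYID ~/.devscripts
```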

  • pylint bug days #2 report

    2010/04/19 by Sylvain Thenault

    First of all, I have to say that the pylint bugs day wasn't that successful in terms of a 'community event': I've been sprinting almost alone. My Logilab fellows were tied to customer projects, and no outside people showed up on jabber. Fortunately Tarek Ziade came to visit us, and that was a nice opportunity to talk about pylint, distribute, etc... Thank you Tarek, you saved my day ;)

    As I felt a bit alone, I decided to work on something more fun than bug fixing: refactoring!

    First, I've greatly simplified the command line: enable-msg/enable-msg-cat/enable-checker/enable-report and their disable-* counterparts were all merged into single --enable/--disable options.

    I've also simplified the "pylint --help" output, providing a --long-help option to get what we had before. Generic support in logilab.common.configuration, of course.

    And last but not least, I refactored pylint so we can have multiple checkers with the same name. The idea behind this is that we can split checkers into smaller chunks, each responsible for only one or a few related messages. When pylint runs, it only uses the checkers necessary for the activated messages and reports. Once all checkers have been split up, this should improve the performance of "pylint --error-only".

    So, I can say I'm finally happy with the results of that pylint bugs day! And hopefully we will be more people for the next edition...

  • Virtualenv - Play safely with a Python

    2010/03/26 by Alain Leufroy

    virtualenv, pip and Distribute are three tools that help developers and packagers. In this short presentation we will see some of virtualenv's capabilities.

    Please keep in mind that everything below was done using Debian Lenny, python 2.5 and virtualenv 1.4.5.


    virtualenv builds python sandboxes where it is possible to do whatever you want as a simple user without putting your global environment in jeopardy.

    virtualenv allows you to safely:

    • install any python packages
    • add debug lines everywhere (not only in your scripts)
    • switch between python versions
    • try your code as if you were the final user
    • and so on ...

    Install and usage


    Prefered way

    Just download the virtualenv python script and call it using python.

    For convenience, we will refer to this script as virtualenv.

    Other ways

    For Debian (and Ubuntu) addicts, just do:

    $ sudo aptitude install python-virtualenv

    Fedora users would do:

    $ sudo yum install python-virtualenv

    And others can install from PyPI (as superuser):

    $ pip install virtualenv


    or:

    $ easy_install pip && pip install virtualenv

    You could also get the source here.

    Quick Guide

    To work in a python sandbox, do as follows:

    $ virtualenv my_py_env
    $ source my_py_env/bin/activate
    (my_py_env)$ python

    "That's all Folks!"

    Once you have finished just do:

    (my_py_env)$ deactivate

    or quit the tty.

    What does virtualenv actually do?

    At creation time

    Let's start again ... more slowly. Consider the following environment:

    $ pwd
    $ ls

    Now create a sandbox called my-sandbox:

    $ virtualenv my-sandbox
    New python executable in "my-sandbox/bin/python"
    Installing setuptools............done.

    The output says that you have a new python executable and specific install tools. Your current directory now looks like:

    $ ls -Cl
    my-sandbox/ README
    $ tree -L 3 my-sandbox
    |-- bin
    |   |-- activate
    |   |--
    |   |-- easy_install
    |   |-- easy_install-2.5
    |   |-- pip
    |   `-- python
    |-- include
    |   `-- python2.5 -> /usr/include/python2.5
    `-- lib
        `-- python2.5
            |-- ...
            |-- orig-prefix.txt
            |-- -> /usr/lib/python2.5/
            |-- -> /usr/lib/python2.5/
            |-- ...
            |-- site-packages
            |   |-- easy-install.pth
            |   |-- pip-0.6.3-py2.5.egg
            |   |-- setuptools-0.6c11-py2.5.egg
            |   `-- setuptools.pth
            |-- ...

    In addition to the new python executable and the install tools, you have a whole new python environment containing libraries, a site-packages/ directory (where your packages will be installed), a bin directory, ...

    virtualenv does not create every file needed for a whole new python environment. It uses links to the global environment files instead, in order to save disk space and speed up sandbox creation. Therefore, there must already be a working python environment installed on your system.

    At activation time

    At this point you have to activate the sandbox in order to use your custom python. Once it is activated, python still has access to the global environment but will look in your sandbox first for python modules:

    $ source my-sandbox/bin/activate
    (my-sandbox)$ which python
    $ echo $PATH
    (my-sandbox)$ python -c 'import sys;print sys.prefix;'
    (my-sandbox)$ python -c 'import sys;print "\n".join(sys.path)'

    First of all, a (my-sandbox) message is automatically added to your prompt in order to make it clear that you're using a python sandbox environment.

    Secondly, my-sandbox/bin/ is added to your PATH. So, running python calls the specific python executable placed in my-sandbox/bin.

    It is possible to improve the sandbox isolation by ignoring the global paths and your PYTHONPATH (see Improve isolation section).
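    The prefix check shown above can also be scripted. A small sketch (sys.real_prefix is the attribute set by virtualenv itself; sys.base_prefix is the modern stdlib-venv equivalent, Python 3.3+):

```python
import sys

def in_sandbox():
    # virtualenv records the original interpreter's prefix in sys.real_prefix;
    # the stdlib venv module uses sys.base_prefix instead
    if hasattr(sys, 'real_prefix'):
        return True
    return getattr(sys, 'base_prefix', sys.prefix) != sys.prefix

print(in_sandbox())
```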

    Installing packages

    It is possible to install any package in the sandbox without any superuser privileges. For instance, we will install the pylint development revision in the sandbox.

    Suppose that you have the pylint stable version already installed in your global environment:

    (my-sandbox)$ deactivate
    $ python -c 'from pylint.__pkginfo__ import version;print version'

    Once your sandbox is activated, install the development revision of pylint as an update:

    $ source /home/you/some/where/my-sandbox/bin/activate
    (my-sandbox)$ pip install -U hg+

    The new package and its dependencies are only installed in the sandbox:

    (my-sandbox)$ python -c 'import pylint.__pkginfo__ as p;print p.version, p.__file__'
    0.19.0 /home/you/some/where/my-sandbox/lib/python2.6/site-packages/pylint/__pkginfo__.pyc
    (my-sandbox)$ deactivate
    $ python -c 'import pylint.__pkginfo__ as p;print p.version, p.__file__'
    0.18.0 /usr/lib/pymodules/python2.6/pylint/__pkginfo__.pyc

    You can safely make any change to the new pylint code or to other sandboxed packages, because your global environment remains unchanged.

    Useful options

    Improve isolation

    As said before, your sandboxed python's sys.path still references the global system paths. You can however hide them by:

    • either using the --no-site-packages option, which does not give the sandbox access to the global site-packages directory
    • or changing your PYTHONPATH in my-sandbox/bin/activate in the same way as for PATH (see tips):
    $ virtualenv --no-site-packages closedPy
    $ sed -i '9i PYTHONPATH="$_OLD_PYTHON_PATH"
          9i export PYTHONPATH
          9i unset _OLD_PYTHON_PATH
         40i _OLD_PYTHON_PATH="$PYTHONPATH"
         40i PYTHONPATH="."
         40i export PYTHONPATH' closedPy/bin/activate
    $ source closedPy/bin/activate
    (closedPy)$ python -c 'import sys; print "\n".join(sys.path)'
    $ deactivate

    This way, you'll get an even more isolated sandbox, just as with a brand new python environment.

    Work with different versions of Python

    It is possible to dedicate a sandbox to a particular version of python by using the --python=PYTHON_EXE option, which specifies the python interpreter to use (the default is the interpreter virtualenv was installed with, e.g. /usr/bin/python):

    $ virtualenv --python=python2.4 pyver24
    $ source pyver24/bin/activate
    (pyver24)$ python -V
    Python 2.4.6
    $ deactivate
    $ virtualenv --python=python2.5 pyver25
    $ source pyver25/bin/activate
    (pyver25)$ python -V
    Python 2.5.2
    $ deactivate

    Distribute a sandbox

    To distribute your sandbox, use the --relocatable option, which makes an existing sandbox relocatable. It fixes up scripts and makes all .pth files relative. This option should be run just before you distribute the sandbox (and again each time you have changed something in it).

    An important point is that the host system should be similar to your own.


    Speed up sandbox manipulation

    Add these scripts to your .bashrc to help you use virtualenv and automate the creation and activation processes.

    rel2abs() {
      [ "$#" -eq 1 ] || return 1
      ls -Ld -- "$1" > /dev/null || return
      dir=$(dirname -- "$1" && echo .) || return
      dir=$(cd -P -- "${dir%??}" && pwd -P && echo .) || return
      file=$(basename -- "$1" && echo .) || return
      case $dir in
        /) printf '%s\n' "/$file";;
        /*) printf '%s\n' "$dir/$file";;
        *) return 1;;
      esac
      return 0
    }

    function activate(){
        if [[ "$1" == "--help" ]]; then
            echo -e "usage: activate PATH\n"
            echo -e "Activate the sandbox where PATH points inside of.\n"
            return
        fi
        if [[ "$1" == '' ]]; then
            local target=$(pwd)
        else
            local target=$(rel2abs "$1")
        fi
        until [[ "$target" == '/' ]]; do
            if test -e "$target/bin/activate"; then
                source "$target/bin/activate"
                echo "$target sandbox activated"
                return
            fi
            target=$(dirname "$target")
        done
        echo 'no sandbox found'
    }

    function mksandbox(){
        if [[ "$1" == "--help" ]]; then
            echo -e "usage: mksandbox NAME\n"
            echo -e "Create and activate a highly isolated sandbox named NAME.\n"
            return
        fi
        local name='sandbox'
        if [[ "$1" != "" ]]; then
            name="$1"
        fi
        if [[ -e "$name/bin/activate" ]]; then
            echo "$name is already a sandbox"
            return
        fi
        virtualenv --no-site-packages --clear --distribute "$name"
        sed -i '9i PYTHONPATH="$_OLD_PYTHON_PATH"
                9i export PYTHONPATH
                9i unset _OLD_PYTHON_PATH
               40i _OLD_PYTHON_PATH="$PYTHONPATH"
               40i PYTHONPATH="."
               40i export PYTHONPATH' "$name/bin/activate"
        activate "$name"
    }

    The virtualenv-commands and virtualenvwrapper projects add some very interesting features to virtualenv. So keep an eye on them for more advanced features than the ones above.


    I have found it irreplaceable for testing new configurations or working on projects with different dependencies. Moreover, I use it to learn about other python projects, to see how exactly my project interacts with its dependencies (while debugging), or to test the final user experience.

    All of this can be done without virtualenv, but not in such an easy and secure way.

    I will continue this series by introducing other useful projects to enhance your productivity: pip and Distribute. See you soon.
