
News from Logilab and our Free Software projects, as well as on topics dear to our hearts (Python, Debian, Linux, the semantic web, scientific computing...)

  • Launching Python scripts via Condor


    As part of an ongoing customer project, I've been learning about the Condor queue management system (it is actually more than a batch queue management system, as it tackles the whole high-throughput computing problem, but in my current project we are not using the full possibilities of Condor, and the choice was dictated by considerations outside the scope of this note). The documentation is excellent, and the features of the product are really amazing (pity the project runs on Windows, so we cannot use 90% of them...).

    To launch a job on a computer participating in the Condor farm, you just have to write a job file which looks like this:
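    The original job file is not reproduced in this archive; a minimal submit description using standard Condor keywords might look like this (the file names are made up for illustration):

    ```
    universe   = vanilla
    executable = my_script.bat
    arguments  = input.txt output.txt
    output     = my_job.out
    error      = my_job.err
    log        = my_job.log
    queue
    ```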


    and then run condor_submit my_job_file and use condor_q to monitor the status of your job (queued, running...).

    My program is generating Condor job files and submitting them, and I spent hours yesterday trying to understand why they were all failing: the stderr file contained a message from Python complaining that it could not import site and exiting.

    A point which was not clear in the documentation I read (but I probably overlooked it) is that the executable mentioned in the job file is supposed to be a local file on the submission host, which is copied to the computer running the job. In the jobs generated by my code, I was using sys.executable for the Executable field, and a path to the Python script I wanted to run in the Arguments field. This resulted in the Python interpreter being copied to the execution host, where it failed to run because it could not find the standard files it needs at startup.

    Once I figured this out, the fix was easy: I made my program write a batch script that launched the Python script, and changed the job to run that script.
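    Such a wrapper can be as small as a two-line Windows batch file (the paths here are assumptions, not those of the actual project):

    ```
    @echo off
    rem Launch the Python script with the interpreter installed on the execution host
    C:\Python26\python.exe C:\jobs\my_script.py %*
    ```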

    UPDATE: I'm told there is a Transfer_executable = False line I could have put in the job file to achieve the same thing.

    (photo by gudi&cris licensed under CC-BY-ND)

  • Adding Mercurial build identification to Python

    2010/02/15 by Andre Espaze

    This work is part of the build identification task described in PEP 385, Migrating from svn to Mercurial. It was done during the Mercurial sprint hosted at Logilab. If you would like to see the result, just follow the steps:

    hg clone
    cd pymigr/build-identification

    Setting up the environment

    The current Python development branch is first checked out:

    svn co

    A patch will be applied for adding the 'sys.mercurial' attribute and modifying the build information:

    cp add-hg-build-id.diff trunk/
    cd trunk
    svn up -r 78019
    patch -p0 < add-hg-build-id.diff

    The changes made to '' then need to be propagated to the configure script:


    The configuration is then done by:

    ./configure --enable-shared --prefix=/dev/null

    You should now see changes propagated to the Makefile for finding the revision, the tag and the branch:

    grep MERCURIAL Makefile

    Finally, Python can be built:


    The sys.mercurial attribute should already be present:

    LD_LIBRARY_PATH=. ./python
    >>> import sys
    >>> sys.mercurial
    ('CPython', '', '')

    Neither a tag nor a revision has been found, as there was no Mercurial repository. A test of Py_GetBuildInfo() from the C API will also be built:

    gcc -o show-build-info -I. -IInclude -L. -lpython2.7 ../show-build-info.c

    You can test its result by:

    LD_LIBRARY_PATH=. ./show-build-info
    -> default, Feb  7 2010, 15:07:46

    Manual test

    First a fake mercurial tree is built:

    hg init
    hg add README
    hg ci -m "Initial repo"
    hg id
    -> 84a6de74e48f tip

    Now Python needs to be built with the given mercurial information:

    rm Modules/getbuildinfo.o

    You should then see the current revision number:

    LD_LIBRARY_PATH=. ./python
    >>> import sys
    >>> sys.mercurial
    ('CPython', 'default', '84a6de74e48f')

    and the C API can be tested by:

    LD_LIBRARY_PATH=. ./show-build-info
    -> default:84a6de74e48f, Feb  7 2010, 15:10:13

    The fake mercurial repository can now be cleaned:

    rm -rf .hg

    Automatic tests

    The automatic tests check the behavior for every case; they build Python and clean up afterwards. Those tests only work when run from the trunk svn directory of Python:

    python ../

    Further work

    The current work is only a first attempt at adding Mercurial build identification to Python; it still needs to be checked on production cases. Moreover, build identification on Windows has not been started yet; it will need to be integrated into the Microsoft Visual Studio build process.

  • Why you should get rid of os.system, os.popen, etc. in your code


    I regularly come across code such as:

    output = os.popen('diff -u %s %s' % (appl_file, ref_file), 'r')

    Code like this might well work on your machine, but it is buggy and will fail (preferably during the demo, or once shipped).

    Where is the bug?

    It is in the use of %s, which can inject into your command any string you want, including strings you don't want. The problem is that you probably did not check appl_file and ref_file for weird things (spaces, quotes, semicolons...). Putting quotes around the %s in the string will not solve the issue.

    So what should you do? The answer is "use the subprocess module": subprocess.Popen takes a list of arguments as its first parameter, which is passed as-is to the new process creation system call of your platform and not interpreted by the shell:

    import subprocess

    pipe = subprocess.Popen(['diff', '-u', appl_file, ref_file], stdout=subprocess.PIPE)
    output = pipe.stdout

    By now, you should have guessed that the shell=True parameter of subprocess.Popen should not be used unless you really really need it (and even then, I encourage you to question that need).
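    To see the difference in action, here is a small self-contained demonstration (the filename is contrived on purpose, and 'echo' stands in for 'diff' so the example runs anywhere):

    ```python
    import subprocess

    # A filename containing shell metacharacters: interpolated into an
    # os.popen() command string, the semicolon would start a second command.
    tricky_name = "report; echo INJECTED"

    # With the list form, each element is passed as one argument to the new
    # process; the shell never parses the string, so no injection is possible.
    pipe = subprocess.Popen(['echo', tricky_name], stdout=subprocess.PIPE)
    output = pipe.communicate()[0].decode()
    print(output.strip())  # the whole string comes back as a single argument
    ```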

  • Apycot for Mercurial

    2010/02/11 by Pierre-Yves David

    What is apycot

    apycot is a highly extensible test automation tool used for Continuous Integration. It can:

    • download the project from a version controlled repository (like SVN or Hg);
    • install it from scratch with all dependencies;
    • run various checkers;
    • store the results in a CubicWeb database;
    • post-process the results;
    • display the results in various formats (html, xml, pdf, mail, RSS...);
    • repeat the whole procedure with various configurations;
    • get triggered by new changesets or run periodically.

    For an example, take a look at the "test reports" tab of the logilab-common project.

    Setting up apycot for Mercurial

    During the mercurial sprint, we set up a proof-of-concept environment running six different checkers:

    • Check syntax of all python files.
    • Check syntax of all documentation files.
    • Run pylint on the mercurial source code with the mercurial pylintrc.
    • Run the script included with Mercurial that checks style and Python errors.
    • Run the Mercurial's test suite.
    • Run Mercurial's benchmark on a reference repository.

    The first three checkers, shipped with apycot, were set up quickly. The last three are Mercurial-specific and required a few additional tweaks to be integrated into apycot.

    The bot was set up to run with all public Mercurial repositories. Five checkers immediately proved useful, as they pointed out some errors or warnings (on some rarely used contrib files they even found a syntax error).


    A public instance is being set up. It will provide features that the community is looking forward to:

    • testing all python versions;
    • running pure python or the C variant;
    • code coverage of the test suite;
    • performance history.


    apycot proved to be highly flexible and could quickly be adapted to Mercurial's test suite, even by people new to apycot. The advantages of continuously running different long-running tests are obvious. So apycot seems to be a very valuable tool for improving the software development process.

  • SCons presentation in 5 minutes

    2010/02/09 by Andre Espaze

    Building software with SCons requires Python and SCons to be installed.

    As SCons is only made of Python modules, its sources may be shipped with your project if your clients cannot install dependencies. All the following examples can be downloaded at the end of this post.

    A building tool for every file extension

    First, a Fortran 77 program made of two files will be built:

    $ cd fortran-project
    $ scons -Q
    gfortran -o cfib.o -c cfib.f
    gfortran -o fib.o -c fib.f
    gfortran -o compute-fib cfib.o fib.o
    $ ./compute-fib
     First 10 Fibonacci numbers:
      0.  1.  1.  2.  3.  5.  8. 13. 21. 34.

    The '-Q' option tells SCons to be less verbose. For cleaning the project, add the '-c' option:

    $ scons -Qc
    Removed cfib.o
    Removed fib.o
    Removed compute-fib

    From this first example, it can be seen that SCons finds the 'gfortran' tool from the file extension. Have a look at the user's manual if you want to set a particular tool.

    Describing the construction with Python objects

    A second program, in C, will run a test directly from the SCons file thanks to an added test command:

    $ cd c-project
    $ scons -Q run-test
    gcc -o test.o -c test.c
    gcc -o fact.o -c fact.c
    ar rc libfact.a fact.o
    ranlib libfact.a
    gcc -o test-fact test.o libfact.a
    run_test(["run-test"], ["test-fact"])

    However, running scons alone builds only the main program:

    $ scons -Q
    gcc -o main.o -c main.c
    gcc -o compute-fact main.o libfact.a
    $ ./compute-fact
    Computing factorial for: 5
    Result: 120

    This second example shows that construction dependencies are described by passing Python objects. An interesting point is the possibility of adding your own Python functions to the build process.
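    The SConstruct files themselves are not reproduced in this post; a sketch of what the C example's build description could look like follows (target and file names are assumptions, and the real downloadable example may differ):

    ```python
    # SConstruct -- hypothetical sketch of the c-project build description
    env = Environment()

    # Build a static library from fact.c, then link the main program against it.
    fact_lib = env.StaticLibrary('fact', ['fact.c'])
    env.Program('compute-fact', ['main.c'], LIBS=[fact_lib])

    # A plain Python function can serve as a build action for the test target,
    # which matches the run_test(...) line in the transcript above.
    def run_test(target, source, env):
        import subprocess
        return subprocess.call([str(source[0])])

    test_prog = env.Program('test-fact', ['test.c'], LIBS=[fact_lib])
    env.Command('run-test', test_prog, run_test)
    ```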

    Hierarchical build with environment

    A third program, in C++, will create a shared library used by two different programs: the main application and a test suite. The main application can be built by:

    $ cd cxx-project
    $ scons -Q
    g++ -o main.o -c -Imbdyn-src main.cxx
    g++ -o mbdyn-src/nodes.os -c -fPIC -Imbdyn-src mbdyn-src/nodes.cxx
    g++ -o mbdyn-src/solver.os -c -fPIC -Imbdyn-src mbdyn-src/solver.cxx
    g++ -o mbdyn-src/ -shared mbdyn-src/nodes.os mbdyn-src/solver.os
    g++ -o mbdyn main.o -Lmbdyn-src -lmbdyn

    It shows that SCons handles for us the compilation flags needed for creating a shared library with the chosen tool (-fPIC). Moreover, extra environment variables have been given (CPPPATH, LIBPATH, LIBS), which are all translated for the chosen tool. All those variables can be found in the user's manual or in the man page. The test suite is built and run by giving an extra variable:

    $ TEST_CMD="LD_LIBRARY_PATH=mbdyn-src ./%s" scons -Q run-tests
    g++ -o tests/run_all_tests.o -c -Imbdyn-src tests/run_all_tests.cxx
    g++ -o tests/test_solver.o -c -Imbdyn-src tests/test_solver.cxx
    g++ -o tests/all-tests tests/run_all_tests.o tests/test_solver.o -Lmbdyn-src -lmbdyn
    run_test(["tests/run-tests"], ["tests/all-tests"])


    It is rather convenient to build software by manipulating Python objects, and custom actions can be added in the process. SCons also has a configuration mechanism working like autotools macros, which can be discovered in the user's manual.

  • Extended 256 colors in bash prompt

    2010/02/07 by Nicolas Chauvat

    The Mercurial 1.5 sprint is taking place in our offices this weekend, and pair-programming with Steve made me want a better looking terminal. Have you seen his extravagant zsh prompt? I used to have only 8 colors to decorate my shell prompt, but thanks to some time spent playing around, I now have 256.

    Here is what I used to have in my bashrc for 8 colors:

    # set a fancy prompt (RED and NO_COLOUR are ANSI escape sequences)
    RED="\[\033[0;31m\]"
    NO_COLOUR="\[\033[0m\]"
    export PS1="${RED}[\u@\h \W]\$${NO_COLOUR} "

    Just put the following lines in your bashrc to get the 256 colors:

    function EXT_COLOR () { echo -ne "\[\033[38;5;$1m\]"; }
    # set a fancy prompt
    export PS1="`EXT_COLOR 172`[\u@\h \W]\$${NO_COLOUR} "

    Yay, I now have an orange prompt! I now need to write a script that will display useful information depending on the context. Displaying the status of the mercurial repository I am in might be my next step.
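    To choose among the 256 codes, it helps to see them all at once; a throwaway loop (not from the original post) prints each code in its own color:

    ```shell
    # print the 256 extended color codes, each rendered in its own color
    print_colors() {
        for code in $(seq 0 255); do
            printf '\033[38;5;%dm%4d\033[0m' "$code" "$code"
            # break the line every 16 entries to keep the table readable
            [ $(( (code + 1) % 16 )) -eq 0 ] && printf '\n'
        done
    }
    print_colors
    ```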

  • We're happy to host the mercurial Sprint

    2010/02/02 by Arthur Lutz

    We're very happy to be hosting the next mercurial sprint in our brand new offices in central Paris. It is quite an honor to be chosen when the other contender was Google.

    So a bunch of mercurial developers are heading out to our offices this coming Friday to sprint for three days on mercurial. We use mercurial a lot at Logilab, and we also contribute a tool to visualize and manipulate a mercurial repository: hgview.

    To check out the things that we will be working on with the mercurial crew, check out the program of the sprint on their wiki.

    What is a sprint? "A sprint (sometimes called a Code Jam or hack-a-thon) is a short time period (three to five days) during which software developers work on a particular chunk of functionality. 'The whole idea is to have a focused group of people make progress by the end of the week,' explains Jeff Whatcott" [source]. For geographically distributed open source communities, it is also a way of physically meeting and working in the same room for a period of time.

    Sprinting is a practice that we encourage at Logilab: with CubicWeb, we organize open sprints as often as possible, which are an opportunity for users and developers to come and code with us. We even use the sprint format for some internal work.

    photo by Sebastian Mary under creative commons licence.

  • hgview 1.2.0 released

    2010/01/21 by David Douard

    Here is at last the release of version 1.2.0 of hgview.

    In a nutshell, this release includes:

    • basic support for the mq extension,
    • basic support for the hg-bfiles extension,
    • the working directory is now displayed as a node of the graph (if there are local modifications, of course),
    • it's now possible to display only the subtree from a given revision (a bit like hg log -f),
    • it's also possible to activate an annotate view (it makes navigation slower, however),
    • several improvements in the graph filling and rendering mechanisms,
    • toolbar icons for the search and goto "quickbars", so they are no longer hidden from those reluctant to read user manuals,
    • it's now possible to go directly to the common ancestor of 2 revisions,
    • when on a merge node, it's now possible to choose the parent the diff is computed against,
    • search now also looks in commit messages (it used to search only in diff contents),
    • and several bugfixes, of course.

    There are packages for Debian lenny, squeeze and sid, and for Ubuntu hardy, intrepid, jaunty and karmic. However, for lenny and hardy, the provided packages won't work on pure distributions, since hgview 1.2 depends on mercurial 1.1. For these two distributions, the packages will only work if you have installed backported mercurial packages.

  • New supported repositories for Debian and Ubuntu

    2010/01/21 by Arthur Lutz

    For the release of hgview 1.2.0 in our Karmic Ubuntu repository, we would like to announce that we are now going to generate packages for the following distributions:

    • Debian Lenny (because it's stable)
    • Debian Sid (because it's the dev branch)
    • Ubuntu Hardy (because it has Long Term Support)
    • Ubuntu Karmic (because it's the current stable)
    • Ubuntu Lucid (because it's the next stable) - no repo yet, but soon...

    The old packages for the previously supported distributions (etch, jaunty, intrepid) are still accessible, but new versions will not be generated for these repositories. Packages will be coming in as versions get released; if you need a package before that, give us a shout and we'll see what we can do.

    For instructions on how to use the repositories for Ubuntu or Debian, go to the following page:

  • Open Source/Design Hardware

    2009/12/13 by Nicolas Chauvat

    I have been doing free software since I discovered it existed. I bought an OpenMoko some time ago, since I am interested in anything that is open, including artwork like books, music, movies and... hardware.

    I just learned about two lists, one at Wikipedia and another one at MakeOnline, but Google has more. Explore and enjoy!
