
Blog entries

  • CMFProjman is being ported to Plone2

    2006/10/18 by Arthur Lutz

    CMFProjman has been asleep for quite a while, and is now being reanimated to work with Plone2. We will release it as soon as we consider it stable.


  • Distutils2 Sprint at Logilab (first day)

    2011/01/28 by Alain Leufroy

    We're very happy to host the Distutils2 sprint this week in Paris.

    The sprint started yesterday with some of Logilab's developers and other contributors. We'll be sprinting for four days, trying to push the new python package manager forward.

    Let's summarize this first day:

    • Boris Feld and Pierre-Yves David worked on the new system for detecting and dispatching data-files.
    • Julien Miotte worked on
      • moving qGitFilterBranch from setuptools to distutils2
      • testing distutils2 installation and register (see the tutorial)
      • backward compatibility with distutils in setup.py, using setup.cfg to fill the arguments of setup, to help users switch to distutils2.
    • André Espaze and Alain Leufroy worked on the python script that helps developers build a setup.cfg by recycling their existing setup.py (track).

    Join us on IRC at #distutils on irc.freenode.net !


  • PostgreSQL on Windows: plpythonu and "specified module could not be found" error

    2010/03/22

    I recently had to (remotely) debug an issue on Windows involving PostgreSQL and PL/Python. Basically, there were two very similar computers, with Python 2.5 installed via python(x,y) and PostgreSQL 8.3.8 installed via the binary installer. On the first machine, create language plpythonu; worked like a charm; on the other one, it failed with C:\Program Files\Postgresql\8.3\plpython.dll: specified module could not be found. This is caused by the dynamic linker not finding some DLL. Using Depends.exe showed that plpython.dll looks for python25.dll (the one it was built against in the 8.3.8 installer), yet the DLL seemed to be there.

    I'll spare you the various things we tried and jump directly to the solution. After much head scratching, it turned out that the first computer had TortoiseHg installed. This caused C:\Program Files\TortoiseHg to be included in the system PATH environment variable, and that directory contains python25.dll. On the other hand, C:\Python25 was in the user's PATH environment variable on both computers. As the database Windows service runs under a dedicated local account (typically with login postgres), it would not have C:\Python25 in its PATH, but if TortoiseHg was there, it would find the DLL in that directory. So the solution was to add C:\Python25 to the system PATH.


  • New version of LAX - Logilab Appengine eXtension

    2008/06/09 by Arthur Lutz
    http://code.google.com/appengine/images/appengine_lowres.jpg

    Version 0.3.0 of LAX was released today, see: http://lax.logilab.org/

    It only takes ten short minutes to get a running application, just follow the guide:

    Update: LAX is now included in CubicWeb.


  • hgview 1.1.0 released

    2009/09/25 by David Douard

    I am pleased to announce the latest release of hgview 1.1.0.

    What is it?

    For the ones at the back of the classroom near the radiator, let me remind you that hgview is a very helpful tool for daily work with the excellent DVCS Mercurial (which we use heavily at Logilab). It lets you easily and visually navigate your hg repository revision graph. It is written in Python and PyQt.

    http://www.logilab.org/image/18210?vid=download

    What's new

    • users can now configure the colors used in the diff area (which now default to white on black)
    • indicate current working directory position by a square node
    • add many other configuration options (listed when typing hg help hgview)
    • removed 'hg hgview-options' command in favor of 'hg help hgview'
    • add ability to choose which parent to diff with for merge nodes
    • dramatically improved UI behaviour (shortcuts)
    • improved help and make it accessible from the GUI
    • make it possible to hide the diffstat column of the file list (which can dramatically improve performance on big repositories)
    • standalone application: improved command line options
    • indicate working directory position in the graph
    • add auto-reload feature (when the repo is modified due to a pull, a commit, etc., hgview detects it, reloads the repo and updates the graph)
    • fix many bugs, especially the file log navigator should now display the whole graph

    Download and installation

    The source code is available as a tarball, or using our public hg repository of course.

    To use it from the sources, you just have to add a line in your .hgrc file, in the [extensions] section:

    hgext.hgview=/path/to/hgview/hgext/hgview.py

    Debian and Ubuntu users can also easily install hgview (and Logilab's other free software tools) using our deb package repositories.


  • Windows, open files and unit tests

    2008/07/22

    A problem we ran into yesterday: a unit test crashes on Windows, after creating an object that keeps files open. The test's tearDown is called but fails, because Windows refuses to delete open files, and the test framework keeps a reference to the test function so that the call stack can be examined. On Linux, no problem (an open file can be removed from disk, hence no trouble in the teardown).

    A few leads to work around the problem:

    1. put the test in a try...finally, with a del on the object keeping the files open in the finally clause. Drawback: when the test fails, pdb no longer lets you see much
    2. instead of cleaning up in tearDown, clean up later, for instance in an atexit handler. We need to check what happens when several tests want to write to the same files (I think a temporary directory per test would be needed if we want several failing tests whose data can still be examined, but this should be tested to be sure)
    3. put a try...except in tearDown around the removal of each file, and store the files that cause problems in a list processed at program exit (with atexit, for instance); see the sketch below.
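
    A minimal sketch of the third approach (the _created_files bookkeeping is hypothetical and depends on your test fixture):

    import atexit
    import os
    import unittest

    # files Windows refused to delete during tearDown
    _pending_removals = []

    def _purge_pending():
        for path in _pending_removals:
            try:
                os.remove(path)
            except OSError:
                pass  # still locked, nothing more we can do

    atexit.register(_purge_pending)

    class SomeTest(unittest.TestCase):
        def tearDown(self):
            for path in self._created_files:  # hypothetical bookkeeping
                try:
                    os.remove(path)
                except OSError:
                    # Windows keeps open files locked: retry at exit
                    _pending_removals.append(path)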

    This looks like tinkering, but we are dealing with Windows behaviour we have no control over (to my knowledge, even with Administrator or System privileges, there is no way around this impossibility of deleting an open file).

    Another, much heavier, approach would be to virtualize file creation so as to work in memory (at least overriding os.mkdir and the open builtin, and in our case the modules working with zip files). Similar things may already exist. Asking the question on the TIP mailing list may bring answers (a quick search in the archives turned up nothing).

    See also these threads from March 2004 and November 2004 on comp.lang.python.


  • Reinteract: an interesting tool for numpy/scipy work

    2008/05/27 by Arthur Lutz

    There is a tool called Reinteract that provides a kind of Python editor/shell in which a line of code can easily be modified and reinterpreted.

    Since it can also display plots and more, it can be put to good use for Matlab-like sessions.

    I therefore think it is a tool worth presenting to our dear trainees who are interested in the python/numpy combination as a substitute for Matlab ©®.

    Example:

    http://fishsoup.net/software/reinteract/reinteract-demo.png

    written by David Douard


  • A new way of distributing Python code?

    2008/09/28
    http://jonathan.demoutiez.net/images/logos/python.png

    On distutils-sig, the question of replacing distutils/setuptools is frequently raised, and a lot of effort goes into finding the best way to build and distribute python code.

    I don't understand why we have such massive coupling between build and distribution (setuptools and pypi, to be more precise), and I'm not convinced by this "global" approach. I hope the python community will examine the possibility of changing that and splitting the problem into two distinct projects.

    One of Python's most successful ideas is its power in extending other languages, and that is in fact the major problem to solve in the build area. I'm pretty sure it will take a long time before a valuable (and widely adopted) solution emerges, and this is so complicated that the choice of the build chain should remain the responsibility of upstream maintainers for now (distutils, setuptools, makefile, SCons, ...).

    Concerning the distribution, here are the mandatory features I expect:

    • installing source code and managing dependencies with foreign contributions
    • having binary builds without interaction with the primary host system
    • being platform agnostic (Linux, BSD, Windows, Mac, ...)
    • clean upgrade/uninstall
    • some kind of sandbox for testing and development mode
    • no administrator privileges required
    http://0install.net/tango/package-x-generic.png

    I found the http://0install.net project homepage and was really impressed by the tons of functionality already available and the numerous other advantages, like:

    • multiple version installation
    • reuse external distribution effort (integrate deb, rpm, ...)
    • digital signatures
    • basic mirroring solution
    • notification about software updates
    • command line oriented, but various GUIs exist
    • tries to follow standards (XDG specifications on freedesktop.org)

    I seriously wonder why this project could not be considered as a clean, build-independent index system for python packages. Moreover, 0install already has some build capabilities (see 0compile), but the ultimate argument is that it would greatly facilitate migrations when a new python build standard emerges.

    Conclusion

    0install looks like a mature project driven by smart people and already included in modern distributions. I'll definitely give it a try soon.


  • LAX - Logilab Appengine eXtension is a full-featured web application framework running on AppEngine

    2008/06/09 by Arthur Lutz
    http://code.google.com/appengine/images/appengine_lowres.jpg

    LAX version 0.3.0 was released today, see http://lax.logilab.org/

    Get a new application running in ten minutes with the install guide and the tutorial:

    Enjoy!

    Update: LAX is now included in the CubicWeb semantic web framework.


  • EuroSciPy 2010 schedule is out!

    2010/06/06 by Nicolas Chauvat
    https://www.euroscipy.org/data/logo.png

    The EuroSciPy 2010 conference will be held in Paris from July 8th to 11th at the Ecole Normale Supérieure: two days of tutorials, two days of conference, two interesting keynotes, a lightning talk session, an open space for collaboration and sprinting, thirty quality talks in the schedule and already 100 registered delegates.

    If you are doing science and using Python, you want to be there!


  • Pylint needs you

    2009/09/17

    After several months at a near standstill, Sylvain was finally able to publish, last night, releases fixing a number of bugs in pylint and astng ([1] and [2]).

    Nevertheless, at Logilab we lack the time to shrink the pile of open tickets in pylint's tracker. If you take a look at the Tickets tab, you will find a large number of pending bugs and must-have features (some perhaps a little less so than others...). It is already possible to contribute by using mercurial to provide patches, or by reporting bugs (aaaaaaaaaarg! more tickets!), and some of you already do; may they be thanked for it.

    Now, we were wondering what we could do to move Pylint forward, and our first ideas are:

    • organize a small sprint of about 3 days
    • organize 'ticket killing' days, as done on various OSS projects

    But for this to be useful, we need your help. So here are a few questions:

    • would you take part in a sprint at Logilab (in Paris, France)? This would let us meet, teach you a lot about Pylint's internals, and work together on tickets that would help you in your own work
    • if France is too far away, what location would suit you?
    • would you be willing to join us on Logilab's jabber server or on IRC to take part in a ticket hunt (on a date to be determined)? If so, how well do you know the internals of Pylint and astng?

    You can answer by commenting on this blog (remember to register first using the link at the top right of this page) or by writing to sylvain.thenault@logilab.fr. If we get enough positive answers, we will organize something.


  • SCons presentation in 5 minutes

    2010/02/09 by Andre Espaze
    http://www.scons.org/scons-logo-transparent.png

    Building software with SCons requires Python and SCons to be installed.

    As SCons is made only of Python modules, the sources may be shipped with your project if your clients cannot install dependencies. All the following examples can be downloaded at the end of this post.

    A building tool for every file extension

    First, a Fortran 77 program made of two files will be built:

    $ cd fortran-project
    $ scons -Q
    gfortran -o cfib.o -c cfib.f
    gfortran -o fib.o -c fib.f
    gfortran -o compute-fib cfib.o fib.o
    $ ./compute-fib
     First 10 Fibonacci numbers:
      0.  1.  1.  2.  3.  5.  8. 13. 21. 34.
    

    The '-Q' option tells SCons to be less verbose. To clean the project, add the '-c' option:

    $ scons -Qc
    Removed cfib.o
    Removed fib.o
    Removed compute-fib
    

    This first example shows that SCons finds the 'gfortran' tool from the file extension. Have a look at the user's manual if you want to set a particular tool.
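
    For the record, the SConstruct file driving such a build can be a one-liner; a sketch (the actual file ships with the downloadable examples):

    Program('compute-fib', ['cfib.f', 'fib.f'])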

    Describing the construction with Python objects

    A second example, a C program, will run a test directly from the SCons file thanks to an added test command:

    $ cd c-project
    $ scons -Q run-test
    gcc -o test.o -c test.c
    gcc -o fact.o -c fact.c
    ar rc libfact.a fact.o
    ranlib libfact.a
    gcc -o test-fact test.o libfact.a
    run_test(["run-test"], ["test-fact"])
    OK
    

    However, running scons alone builds only the main program:

    $ scons -Q
    gcc -o main.o -c main.c
    gcc -o compute-fact main.o libfact.a
    $ ./compute-fact
    Computing factorial for: 5
    Result: 120
    

    This second example shows that construction dependencies are described by passing Python objects. An interesting point is the possibility of adding your own Python functions to the build process.
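
    As a rough idea of what such a SConstruct may look like, here is a sketch (the run_test name matches the output above, everything else is an assumption):

    import subprocess

    def run_test(target, source, env):
        # a plain Python function used as a builder action; a non-zero
        # return value would mark the target as failed
        return subprocess.call([str(source[0])])

    env = Environment()
    env.StaticLibrary('fact', ['fact.c'])
    prog = env.Program('compute-fact', ['main.c'], LIBS=['fact'], LIBPATH=['.'])
    env.Program('test-fact', ['test.c'], LIBS=['fact'], LIBPATH=['.'])
    env.Command('run-test', 'test-fact', run_test)
    Default(prog)  # plain 'scons' builds only the main program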

    Hierarchical build with environment

    A third program, in C++, will create a shared library used by two different programs: the main application and a test suite. The main application can be built by:

    $ cd cxx-project
    $ scons -Q
    g++ -o main.o -c -Imbdyn-src main.cxx
    g++ -o mbdyn-src/nodes.os -c -fPIC -Imbdyn-src mbdyn-src/nodes.cxx
    g++ -o mbdyn-src/solver.os -c -fPIC -Imbdyn-src mbdyn-src/solver.cxx
    g++ -o mbdyn-src/libmbdyn.so -shared mbdyn-src/nodes.os mbdyn-src/solver.os
    g++ -o mbdyn main.o -Lmbdyn-src -lmbdyn
    

    It shows that SCons handles for us the compilation flags needed for creating a shared library with the chosen tool (-fPIC). Moreover, extra environment variables have been given (CPPPATH, LIBPATH, LIBS), which are all translated for the chosen tool. All those variables can be found in the user's manual or in the man page. Building and running the test suite is done by giving an extra variable:

    $ TEST_CMD="LD_LIBRARY_PATH=mbdyn-src ./%s" scons -Q run-tests
    g++ -o tests/run_all_tests.o -c -Imbdyn-src tests/run_all_tests.cxx
    g++ -o tests/test_solver.o -c -Imbdyn-src tests/test_solver.cxx
    g++ -o tests/all-tests tests/run_all_tests.o tests/test_solver.o -Lmbdyn-src -lmbdyn
    run_test(["tests/run-tests"], ["tests/all-tests"])
    OK
    
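
    Such a hierarchical build is typically split between a top-level SConstruct and a SConscript file in mbdyn-src; a sketch (file layout assumed from the output above):

    # SConstruct (top level)
    env = Environment(CPPPATH=['mbdyn-src'])
    Export('env')
    SConscript('mbdyn-src/SConscript')
    env.Program('mbdyn', ['main.cxx'], LIBPATH=['mbdyn-src'], LIBS=['mbdyn'])

    # mbdyn-src/SConscript
    Import('env')
    env.SharedLibrary('mbdyn', ['nodes.cxx', 'solver.cxx'])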

    Conclusion

    Building software by manipulating Python objects is rather convenient, and custom actions can be added to the process. SCons also has a configuration mechanism working like autotools macros, which can be discovered in the user's manual.


  • gajim, dbus and wmii

    2008/09/02 by Adrien Di Mascio
    http://upload.wikimedia.org/wikipedia/commons/d/de/Gajim.png

    I've been using a custom version of gajim for a long time in order to make it interact with wmii. More precisely, I have, in my wmii status bar, a dedicated log zone where I print notification messages such as new incoming emails or text received from gajim (with different colors if special words were mentioned, etc.).

    I recently decided to throw away my custom gajim and use python and dbus to achieve the same goal in a cleaner way. A very basic version can be found in the simpled project. As of now, the only way to get the code is through mercurial:

    hg clone http://www.logilab.org/hg/simpled
    

    The source file is named gajimnotifier.py. In this file, you'll also find a version sending messages to Ion's status bar.
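
    To give the flavour of it, here is a minimal sketch of such a D-Bus listener (the gajim interface and signal names are quoted from memory and may be wrong; the authoritative version is in gajimnotifier.py):

    import dbus
    import gobject
    from dbus.mainloop.glib import DBusGMainLoop

    def on_new_message(*args):
        # in real life, write to the wmii status bar instead
        print "gajim event:", args

    DBusGMainLoop(set_as_default=True)
    bus = dbus.SessionBus()
    bus.add_signal_receiver(on_new_message,
                            dbus_interface='org.gajim.dbus.RemoteInterface',
                            signal_name='NewMessage')
    gobject.MainLoop().run()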


  • Belier - ssh through hops

    2009/02/17 by Arthur Lutz

    We just discovered belier, which makes it easy to connect to machines that can only be reached through intermediate ssh hosts. It can prove useful. Plus, it's written in python. Plus, the author made debian packages... and he even mentions pylint. So it deserves a mention here.

    http://www.ohmytux.com/belier/images/schema_belier.png

  • Python for applied Mathematics

    2008/07/29 by Nicolas Chauvat
    http://www.ams.org/images/siam2008-brain.jpg

    The presentation of Python as a tool for applied mathematics was highlighted at the 2008 annual meeting of the American Society for Industrial and Applied Mathematics (SIAM). For more information, read this blogpost and the slides.


  • Launching Python scripts via Condor

    2010/02/17
    http://farm2.static.flickr.com/1362/1402963775_0185d2e62f.jpg

    As part of an ongoing customer project, I've been learning about the Condor queue management system (actually it is more than just a batch queue management system, as it tackles the high-throughput computing problem, but in my current project we are not using the full possibilities of Condor, and the choice was dictated by other considerations outside the scope of this note). The documentation is excellent, and the features of the product are really amazing (pity the project runs on Windows, so we cannot use 90% of these...).

    To launch a job on a computer participating in the Condor farm, you just have to write a job file which looks like this:

    Universe=vanilla
    Executable=$path_to_executable
    Arguments=$arguments_to_the_executable
    InitialDir=$working_directory
    Log=$local_logfile_name
    Output=$local_file_for_job_stdout
    Error=$local_file_for_job_stderr
    Queue
    

    and then run condor_submit my_job_file and use condor_q to monitor the status of your job (queued, running...).

    My program generates Condor job files and submits them, and I spent hours yesterday trying to understand why they were all failing: the stderr file contained a message from Python complaining that it could not import site, before exiting.

    A point which was not clear in the documentation I read (but I probably overlooked it) is that the executable mentioned in the job file is supposed to be a local file on the submission host, which is copied to the computer running the job. In the jobs generated by my code, I was using sys.executable for the Executable field and a path to the python script I wanted to run in the Arguments field. This resulted in the Python interpreter being copied to the execution host, where it could not run because it was unable to find the standard files it needs at startup.

    Once I figured this out, the fix was easy: I made my program write a batch script which launched the Python script and changed the job to run that script.

    UPDATE: I'm told there is a Transfer_executable=False line I could have put in the job file to achieve the same thing.
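
    With that option, the generated jobs could presumably have kept using the system Python directly; a hypothetical job file (paths made up) would look like:

    Universe=vanilla
    Executable=C:\Python25\python.exe
    Transfer_executable=False
    Arguments=my_script.py
    InitialDir=c:\jobs\job1
    Log=job.log
    Output=job.out
    Error=job.err
    Queue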

    (photo by gudi&cris licenced under CC-BY-ND)


  • Logilab team report on PyConFR 2015

    2015/10/27 by Arthur Lutz

    We were at PyConFR 2015 with a few people from Logilab's Toulouse office.


    We presented 3 talks (announced here); they were recorded and should be available soon (update: the videos have been published):

    https://pbs.twimg.com/media/CRgfK2-UkAEht8z.jpg

    We attended many talks and discussed python during the breaks; here are a few concepts and pointers that caught our attention.

    On the tools and system side

    The work Fedora is doing on its infrastructure message bus (fedmsg) is quite interesting; we do similar things with the Salt event bus on our own infrastructure.

    Still at Fedora, we are going to have a look at faitout, which hands out temporary PostgreSQL databases to be used in unit tests or continuous integration.

    We already use tox for a number of projects, but this talk motivated us to explore a few leads, such as detox for running the test environments in parallel. tox runs the unit tests (but can also build the documentation) in virtualenv environments, making it possible to test a grid of configurations (different versions of python, or of dependencies).
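
    As a reminder of what a tox setup looks like, a minimal tox.ini testing two interpreter versions might read (a sketch, with pytest as an assumed test runner):

    [tox]
    envlist = py27,py34

    [testenv]
    deps = pytest
    commands = py.test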

    Guix is a package manager and a distribution; it is a promising project, even though we remain very attached to Debian. Guix draws on the work done by NixOS (which was presented at a salt meetup). A bit avant-garde, but probably useful in the long run.

    Bandit is a static analysis tool focused on security. It can analyse Python code to detect the most common security flaws: SQL injection, XSS, symlink attacks. Bandit does not do everything (it does not detect code that should be there but is not), but it is a precious tool for automating security tests and saving time. Of course, the best training remains reading the patches that fix security flaws and getting audited by experts. Code analysis is a subject we care about, since pylint was born at Logilab and we still work on these topics with astroid (formerly logilab-astng) and safe-python.
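
    Trying Bandit on a code base boils down to (the project path is hypothetical):

    $ pip install bandit
    $ bandit -r path/to/project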

    Scapy can receive, send and manipulate network packets. It supports many protocols and can notably be used for audit purposes.

    On the database side

    Two consecutive talks about SQLAlchemy and GeoAlchemy, although staying at a fairly general level, were very instructive. The takeaway is that, despite the API overhaul in version 1.0, the most useful features of SQLAlchemy remain hidden (declarative, back_populate), and best practices are poorly known because they are poorly documented. The talk gave a few of them, such as "never run a query inside a loop" or "in a join, it is better to name all the tables explicitly". On the GeoAlchemy side, the talk meant to show that manipulating geometric data with this tool, developed by the Franco-Swiss company camp2camp, is very simple.

    https://pbs.twimg.com/media/CRnaEoFWsAQUL2e.jpg

    On the pure python side

    How can Mercurial's Python code be optimized to deliver sufficient performance, and which pitfalls should be avoided? This was the subject of the talk by Pierre-Yves David, who started developing evolve in 2011, when he was working at Logilab and we were trying to improve our code review process. Among the tricks, we noted the use of slots, a lazy import mechanism (available in standard Python 3), selectively disabling the garbage collector, and pre-loading attributes or functions outside of loops.
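
    Two of those tricks fit in a few lines; here is a toy illustration (not actual Mercurial code):

    class Node(object):
        __slots__ = ('rev', 'parents')  # no per-instance __dict__: less memory

        def __init__(self, rev, parents):
            self.rev = rev
            self.parents = parents

    def revisions(nodes):
        result = []
        append = result.append  # hoist the attribute lookup out of the loop
        for node in nodes:
            append(node.rev)
        return result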

    We attended a very interesting talk about tabulation with Python. Tabulation is memoization pushed to the extreme: record the results of a function for all possible values of its parameters. Crazy, isn't it? The talk showed that, provided some technical constraints are taken into account (restricting the domain of the parameters, optimizing the result table by splitting it), this is entirely feasible and brings a real speedup. But the point is actually not caching results to save computation time; rather, it is hiding the code of a function. Instead of giving users a compiled version that could be reverse-engineered, you give them the table of possible inputs and the table of corresponding outputs. The probability of recovering the algorithm is then lower, especially for hash-like functions that some companies insist on keeping secret.
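
    The idea fits in a toy example (the "secret" function below is obviously made up):

    def hash8(x):
        # stand-in for the function whose code is to be hidden
        return (x * 31 + 7) % 256

    # tabulation: precompute the result over the whole (restricted) domain
    TABLE = [hash8(x) for x in range(256)]

    def hash8_tabulated(x):
        return TABLE[x]  # the function has become pure data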

    On the community side

    The talk about local communities interested us, given our involvement in the salt meetups, the python meetups in Nantes, and our organization of communities around CubicWeb, some free scientific computation codes and the semantic web. Reading The Art of Community Online was recommended to us by Alexandre Fayolle, a Logilab veteran, who benefited from it for his participation in the Odoo Community Association.

    On the CubicWeb side

    Hospital, with its health checks in production, could be a good candidate for our CubicWeb applications, which are already hooked up to statsd for business metrics and to sentry for error collection. The project is still in its early days but contains some interesting ideas. The goal is to be able to test the deployments of an application. Existing tools only cover part of that goal:

    • automated tests, even functional ones, test the application outside its final environment, and running them once the application is deployed can take a long time;
    • monitoring detects problems, but the application is seen as a black box: knowing that there is a 500 error does not tell you whether the database is down or the disk is full;
    • logs show what the real problem is, but too late.

    Hospital is a framework that makes it possible to:

    • write tests with assertions, as in automated tests;
    • make assertions on the application seen from the inside, as a white box: it is thus possible to test the connectivity to the database;
    • and collaborate with existing tools, such as monitoring tools. For instance, there is a command-line executable that a monitoring tool can run.

    AnyBlok caught our attention because its concepts resemble those of CubicWeb, and the two projects could enrich each other.

    https://www.logilab.org/file/2100959/raw/pyramid%2Bcubicweb.jpg

    CubicWeb and Pyramid (the video) was presented by Christophe de Vienne, from Unlish, who has worked a lot on bringing the two together. This is now what is used at Logilab.

    On the scientific computing side

    Pythran is a Python-to-C++ translator that builds an optimized extension module from pure Python code enriched with a few annotations. It is aimed at scientific computing and notably offers partial numpy support. It caught our attention and could be an interesting alternative to cython for some of our developments. Its author is a proponent of declarative programming: write what you want to obtain, not how to obtain it. It is then up to the compiler, or to a code translator like pythran, to find the best way to obtain the described result. Part of Pythran's improvements are now funded through Logilab thanks to the OpenDreamKit project.
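
    To give an idea, annotating a function for Pythran amounts to a single export comment; this sketch follows the dot-product example from Pythran's documentation:

    # pythran export dprod(float list, float list)
    def dprod(a, b):
        return sum(x * y for x, y in zip(a, b))

    Running the pythran command on the file then produces a native extension module that can be imported in place of the pure Python version.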

    Conclusion

    https://pbs.twimg.com/media/CRg1lPfXAAAxaWE.jpg:large

    Thanks to the organizers and to EISTI in Pau for hosting us. See you next year for a new edition of pyconfr.

    Article co-written by Yann Voté, Laura Médioni and Arthur Lutz


  • Google Custom Search Engine, for Python

    2008/07/31

    A Google custom search engine for Python has been made available by Gerard Flanagan, indexing:

    http://www.logilab.fr/images/python-logo.png

    Using refinements

    To refine the search to any of the individual sites, you can specify a refinement using the following labels: stdlib, wiki, pypi, thehazeltree

    So, to just search the python wiki, you would enter:

    somesearchterm more:wiki

    and similarly:

    somesearchterm more:stdlib
    somesearchterm more:pypi
    somesearchterm more:thehazeltree

    About http://thehazeltree.org

    The Hazel Tree is a collection of popular Python texts that I have converted to reStructuredText and put together using Sphinx. It's in a publishable state, but not as polished as I'd like, and since I'll be mostly offline for the next month it will have to remain as it is for the present. However, the search engine is ready now and the clock is ticking on its subscription (one year, renewal depending on success of site), so if it's useful to anyone, it's all yours (and if you use it on your own site a link back to http://thehazeltree.org would be appreciated).


  • Apycot for Mercurial

    2010/02/11 by Pierre-Yves David
    http://www.logilab.org/image/20439?vid=download

    What is apycot

    apycot is a highly extensible test automation tool used for Continuous Integration. It can:

    • download the project from a version controlled repository (like SVN or Hg);
    • install it from scratch with all dependencies;
    • run various checkers;
    • store the results in a CubicWeb database;
    • post-process the results;
    • display the results in various formats (html, xml, pdf, mail, RSS...);
    • repeat the whole procedure with various configurations;
    • get triggered by new changesets or run periodically.

    For an example, take a look at the "test reports" tab of the logilab-common project.

    Setting up an apycot for Mercurial

    During the mercurial sprint, we set up a proof-of-concept environment running six different checkers:

    • Check syntax of all python files.
    • Check syntax of all documentation files.
    • Run pylint on the mercurial source code with the mercurial pylintrc.
    • Run the check-code.py script included in mercurial, checking style and python errors.
    • Run the Mercurial's test suite.
    • Run Mercurial's benchmark on a reference repository.

    The first three checkers, shipped with apycot, were set up quickly. The last three are mercurial-specific and required a few additional tweaks to be integrated into apycot.

    The bot was set up to run on all public mercurial repositories. Five checkers immediately proved useful, as they pointed out some errors or warnings (on some rarely used contrib files it even found a syntax error).

    Prospects

    A public instance is being set up. It will provide features that the community is looking forward to:

    • testing all python versions;
    • running pure python or the C variant;
    • code coverage of the test suite;
    • performance history.

    Conclusion

    apycot proved to be highly flexible and could quickly be adapted to Mercurial's test suite, even by people new to apycot. The advantages of continuously running various long-running tests are obvious. apycot thus seems to be a very valuable tool for improving the software development process.


  • New apycot release

    2008/06/02 by Arthur Lutz
    http://www.logilab.org/image/4878?vid=download&small=true

    After almost 2 years of inactivity, here is a new release of apycot, the "Automated Pythonic Code Tester". We use it every day to maintain our software quality, and we hope this tool can help you as well.

    Admittedly it's not trivial to set up, but once it's running you'll be able to count on it. We're working on getting it to work "out-of-the-box"...

    Here's what's in the ChangeLog:

    2008-05-19 -- 0.11.0
    • updated documentation
    • new pylintrc option for the python_lint checker
    • added code to disable checkers missing a required option, with the proper ERROR status
    • removed the catalog option of the xml_valid checker; this feature can now be handled with the XML_CATALOG_FILE environment variable (see the libxml2 docs for details)
    • moved xml tools from python-xml to lxml
    • new 'hourly' mode for running tests
    • new 'test_activity_report' report
    • pylint checker supports the new disable_msg and show_categories options (show_categories defaults to the Error and Fatal categories to avoid report pollution)
    • the activity option "days" has been renamed to "time"; it corresponds to a number of days in daily mode but to a number of hours in hourly mode
    • fixed debian_lint and debian_piuparts to actually do something...
    • fixed docutils checker for recent docutils versions
    • dropped python 2.2/2.3 compat (to run apycot itself)
    • added output redirectors to the debian preprocessor to avoid parsing errors
    • can use regular expressions in <pp>_match_* options

  • qgpibplotter is (hopefully) working

    2008/09/04 by David Douard

    My latest personal project, pygpibtoolkit, holds a simple HPGL plotter that emulates the HP7470A GPIB plotter, using the very nice and cheap Prologix USB-GPIB dongle. This tool is (for now) called qgpibplotter (since it is built with the Qt4 toolkit).

    Tonight, I finally took the time to make it work nicely. Well, nicely with the only device I own that is capable of plotting on the GPIB bus: my HP3562A DSA.

    Now, you just have to press the "Plot" button of your test equipment, and bingo! you can see the plot on your computer.

    http://www.logilab.org/image/5837?vid=download

  • Testing for NaN without depending on Numpy

    2008/05/27

    How can I test whether a python float is "not a number" without depending on numpy? Simple: a nan value is different from any other value, including itself:

    def isnan(x):
        return isinstance(x, float) and x!=x
    
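
    For instance:

    >>> isnan(float('nan'))
    True
    >>> isnan(1.0)
    False
    >>> isnan('not even a float')
    False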

  • Using branches in mercurial

    2008/10/14 by Arthur Lutz
    http://www.logilab.org/image/4873?vid=download&small=true

    The more we use mercurial to manage our code repositories, the more we enjoy its extended functionalities. Lately we've been playing with branches, which turn out to be very useful. We also use hgview instead of the built-in "hg view" command, and its latest release supports branches: you can filter out the branch you want to look at. Update your installation (apt-get upgrade?) to enjoy this new functionality... or download it.

    http://www.selenic.com/hg-logo/logo-droplets-50.png

  • SciPy and TimeSeries

    2008/08/04 by Nicolas Chauvat
    http://www.enthought.com/img/scipy-sm.png

    We have been using many different tools for statistical analysis with Python, including R, SciPy, specific C++ code, etc. It looks like the growing audience of SciPy is now moving towards dedicated modules in SciPy (let's call them SciKits). See this thread in the SciPy-user mailing-list.


  • Converting excel files to CSV using OpenOffice.org and pyuno

    2008/09/19
    http://wiki.services.openoffice.org/w/images/6/69/Py-uno_128.png

    The Task

    I recently received from a customer a fairly large amount of data, organized in dozens of xls documents, each having dozens of sheets. I need to process this, and in order to ease the manipulation of the documents, I'd rather use standard text files in CSV (Comma Separated Values) format. Of course I didn't want to spend hours manually converting each sheet of each file to CSV, so I thought this would be a good time to get my hands in pyUno.

    So I gazed over the documentation, found the Calc page on the OpenOffice.org wiki, read some sample code and got started.

    The easy bit

    The first few lines I wrote were (all imports are here, though some were actually added later).

    import logging
    import sys
    import os.path as osp
    import os
    import time
    
    import uno
    
    def convert_spreadsheet(filename):
        pass
    
    def run():
        for filename in sys.argv[1:]:
            convert_spreadsheet(filename)
    
    def configure_log():
        logger = logging.getLogger('')
        logger.setLevel(logging.DEBUG)
        handler = logging.StreamHandler(sys.stdout)
        logger.addHandler(handler)
        format = "%(asctime)s %(levelname)-7s [%(name)s] %(message)s"
        handler.setFormatter(logging.Formatter(format))
    
    if __name__ == '__main__':
        configure_log()
        run()
    

    That was the easy part. In order to write the convert_spreadsheet function, I needed to open the document. And to do that, I need to start OpenOffice.org.

    Starting OOo

    http://www.squaregoldfish.co.uk/software/e17icons/oocalc.png

    I started by copy-pasting some code I found in another project, which expected OpenOffice.org to be already started with the -accept option. I changed that code a bit, so that the function would launch soffice with the correct options if it could not contact an existing instance:

    def _uno_init(_try_start=True):
        """init python-uno bridge infrastructure"""
        try:
            # Get the uno component context from the PyUNO runtime
            local_context = uno.getComponentContext()
            # Get the local Service Manager
            local_service_manager = local_context.ServiceManager
            # Create the UnoUrlResolver on the Python side.
            local_resolver = local_service_manager.createInstanceWithContext(
                "com.sun.star.bridge.UnoUrlResolver", local_context)
            # Connect to the running OpenOffice.org and get its context.
            # XXX make host/port configurable
            context = local_resolver.resolve("uno:socket,host=localhost,port=2002;urp;StarOffice.ComponentContext")
            # Get the ServiceManager object
            service_manager = context.ServiceManager
            # Create the Desktop instance
            desktop = service_manager.createInstance("com.sun.star.frame.Desktop")
            return service_manager, desktop
        except Exception, exc:
            if exc.__class__.__name__.endswith('NoConnectException') and _try_start:
                logging.info('Trying to start UNO server')
                status = os.system('soffice -invisible -accept="socket,host=localhost,port=2002;urp;"')
                time.sleep(2)
                logging.info('status = %d', status)
                return _uno_init(False)
            else:
                logging.exception("UNO server not started, you should fix that now. "
                                  "`soffice \"-accept=socket,host=localhost,port=2002;urp;\"` "
                                  "or maybe `unoconv -l` might suffice")
                raise
    

    Spreadsheet conversion

    Now the easy part (sort of, once you start understanding the OOo API): to load a document, use desktop.loadComponentFromURL(). To get the sheets of a Calc document, use document.getSheets() (that one was easy...). To iterate over the sheets, I used a sample from the SpreadsheetCommon page on the OpenOffice.org wiki.

    Exporting the CSV was a bit more tricky. The function to use is document.storeToURL(). There are two gotchas, however. The first one is that we need to specify a filter and to parameterize it correctly. The second one is that the CSV export filter can only export the active sheet, so we need to change the active sheet as we iterate over the sheets.

    Parametrizing the export filter

    The parameters are passed in a tuple of PropertyValue uno structures as the second argument to the storeToURL method. I wrote a helper function which accepts any named arguments and converts them to such a tuple:

    def make_property_array(**kwargs):
        """convert the keyword arguments to a tuple of PropertyValue uno
        structures"""
        array = []
        for name, value in kwargs.iteritems():
            prop = uno.createUnoStruct("com.sun.star.beans.PropertyValue")
            prop.Name = name
            prop.Value = value
            array.append(prop)
        return tuple(array)
    

    Now, what do we put in that array? The answer is on the FilterOptions page of the wiki: the FilterName property is "Text - txt - csv (StarCalc)". We also need to configure the filter using the FilterOptions property. This is a string of comma separated values:

    • ASCII code of field separator
    • ASCII code of text delimiter
    • character set, use 0 for "system character set", 76 seems to be UTF-8
    • number of first line (1-based)
    • Cell format codes for the different columns (optional)

    I used the value "59,34,76,1", meaning I wanted semicolons for separators, and double quotes for text delimiters.

    Here's the code:

    def convert_spreadsheet(filename):
        """load a spreadsheet document, and convert all sheets to
        individual CSV files"""
        logging.info('processing %s', filename)
        url = "file://%s" % osp.abspath(filename)
        export_mask = make_export_mask(url)
        # initialize Uno, get a Desktop object
        service_manager, desktop = _uno_init()
        try:
            # load the Document
            document = desktop.loadComponentFromURL(url, "_blank", 0, ())
            controller = document.getCurrentController()
            sheets = document.getSheets()
            logging.info('found %d sheets', sheets.getCount())
    
            # iterate on all the spreadsheets in the document
            enumeration = sheets.createEnumeration()
            while enumeration.hasMoreElements():
                sheet = enumeration.nextElement()
                name = sheet.getName()
                logging.info('current sheet name is %s', name)
                controller.setActiveSheet(sheet)
                outfilename = export_mask % name.replace(' ', '_')
                document.storeToURL(outfilename,
                                    make_property_array(FilterName="Text - txt - csv (StarCalc)",
                                                        FilterOptions="59,34,76,1" ))
        finally:
            document.close(True)
    
    def make_export_mask(url):
        """convert the url of the input document to a mask for the written
        CSV file, with a substitution for the sheet name
    
        >>> make_export_mask('file:///home/foobar/somedoc.xls')
        'file:///home/foobar/somedoc$%s.csv'
        """
    
        components = url.split('.')
        components[-2] += '$%s'
        components[-1] = 'csv'
        return '.'.join(components)
    

  • Is the Openmoko Freerunner a computer or a phone?

    2008/08/27 by Nicolas Chauvat
    http://wiki.openmoko.org/images/thumb/b/b9/Freerunner02.gif/150px-Freerunner02.gif

    The Openmoko Freerunner is a computer with embedded GSM, accelerometer and GPS. I got mine last week, after waiting a month for the batch to get from Taiwan to the french company I bought it from. The first thing I had to admit is that some time will pass before it gets comfortable to use as a phone. The current version of the system has many weird things in its user interface, and the phone works, but the other end of the call suffers from a very unpleasant echo.

    I will try to install Debian, Qtopia and Om2008.8 to compare them. I also want to quickly get Python scripts to run on it and get back to Narval hacking. I had an agent running on a bulky Palm+GPS+radionetwork back in 1999, and I look forward to running on this device the same kind of fun things I was doing in AI research ten years ago.


  • New pylint/astng release, but... pylint needs you!

    2009/08/27 by Sylvain Thenault

    After several months with no time to fix or enhance pylint besides answering email and filing tickets, I finally tackled some tasks yesterday night and published bug fix releases ([1] and [2]).

    The problem is that at Logilab we don't have enough free time to lower the number of tickets on pylint's tracker page. If you take a look at the ticket tab, you'll see a lot of pending bugs and must-have features (well, and some other less necessary ones...). You can already easily contribute thanks to the great mercurial dvcs, and some of you do, either by providing patches or by reporting bugs (more tickets, iiirk! ;). Thank you all, btw!

    Now I was wondering what could be done to push pylint further, and the first ideas that came to my mind were:

    • do a ~3 day sprint
    • do some 'ticket killing' days, as done in some popular oss projects

    But for this to be useful, we need your support, so here are some questions for you:

    • would you come to a sprint at Logilab (in Paris, France), so you can meet us, learn a lot about pylint, and work on tickets you wish to have in pylint?
    • if France is too far away for most people, would you have another location to propose?
    • would you be on jabber for a ticket killing day, provided it fits your agenda? If so, what's your knowledge of pylint/astng internals?

    You may answer by adding a comment to this blog (please register first using the link at the top right of this page) or by mail to sylvain.thenault@logilab.fr. If we get enough positive answers, we'll take the time to organize such a thing.


  • EuroSciPy'10

    2010/07/13 by Adrien Chauve
    http://www.logilab.org/image/9852?vid=download

    The EuroSciPy2010 conference was held in Paris at the Ecole Normale Supérieure from July 8th to 11th and was organized and sponsored by Logilab and other companies.

    July, 8-9: Tutorials

    The first two days were dedicated to tutorials, and I had the chance to talk about SciPy with André Espaze, Gaël Varoquaux and Emmanuelle Gouillart in the introductory track. This was nice, but it was a bit tricky to present SciPy in such a short time while illustrating the material with real and interesting examples. One very nice thing about the introductory track is that all the material was contributed by different speakers and is freely available in a github repository (licensed under CC BY).

    July, 10-11: Scientific track

    The next two days were dedicated to scientific presentations and to why python is such a great tool for developing scientific software and carrying out research.

    Keynotes

    I had a great time listening to the presentations, starting with the two very nice keynotes given by Hans Petter Langtangen and Konrad Hinsen. The latter gave us a very nice summary of what happened in the scientific python world during the past 15 years, what is happening now, and of course what could happen during the next 15 years. Using a crystal ball and a very humorous tone, he made it clear that the challenge of the coming years will be how to use our hundreds, thousands or even more cores in a bug-free and efficient way. Functional programming may be a very good answer to this challenge, as it provides a deterministic way of parallelizing our programs. Konrad also provided some hints about future versions of python that could offer deeper and more efficient support of functional programming, maybe with the addition of an 'async' keyword to handle the computation of a function on another core.

    In fact, PEP 3148, entitled "Futures - execute computations asynchronously", was accepted just two days ago. This PEP describes a new package called "futures" designed to facilitate the evaluation of callables using threads and processes in future versions of python. A full implementation is already available.
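
    The proposed API looks as follows (a minimal sketch; the package was later integrated into the standard library as concurrent.futures):

    from concurrent.futures import ProcessPoolExecutor

    def square(x):
        return x * x

    if __name__ == '__main__':
        with ProcessPoolExecutor() as executor:
            # submit() returns a Future; result() blocks until the value is ready
            future = executor.submit(square, 21)
            print(future.result())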

    Parallelization

    Parallelization was indeed a very popular issue across presentations, and as for resolving embarrassingly parallel problems, several solutions were presented.

    • Playdoh: Distributes computations over computers connected to a secure network (see playdoh presentation).

      Distributing the computation of a function over two machines is as simple as:

      import playdoh
      result1, result2 = playdoh.map(fun, [arg1, arg2], _machines = ['machine1.network.com', 'machine2.network.com'])
      
    • Theano: allows you to define, optimize, and efficiently evaluate mathematical expressions involving multi-dimensional arrays. In particular, it can use GPUs transparently and generate optimized C code (see the theano presentation).

    • joblib: provides, among other things, helpers for embarrassingly parallel problems. It is built on the multiprocessing package introduced in python 2.6 and brings more readable code and easier debugging.

    Speed

    Concerning speed, Francesc Alted showed us interesting tools for memory optimization currently used successfully in PyTables 2.2. You can read more details about this kind of optimization in EuroSciPy'09 (part 1/2): The Need For Speed.

    SCons

    Last but not least, I talked with Christophe Pradal, who is one of the core developers of OpenAlea. He convinced me that SCons is worth using once you have a nice extension built for it: SConsX. I'm looking forward to testing it.


  • Reading SPE files

    2009/05/11 by Andre Espaze
    http://upload.wikimedia.org/wikipedia/commons/thumb/a/a1/CCD.jpg/300px-CCD.jpg

    If you would like to read SPE files from charge-coupled device (CCD) cameras, I have contributed a recipe to the SciPy cookbook, see Reading SPE files.


  • iclassmethod decorator to define both a class and an instance method in one go

    2009/04/28 by Sylvain Thenault

    You'll find in the logilab.common.decorators module the iclassmethod decorator, which may be pretty handy in some cases as it allows methods to be called either as class methods or as instance methods. In the first case the first argument will be the class, in the second case it will be the instance.

    Example extracted (and adapted for simplicity) from CubicWeb:

    from logilab.common.decorators import iclassmethod
    
    class Form(object):
      _fields_ = []
    
      def __init__(self):
          self.fields = list(self._fields_)
    
      @iclassmethod
      def field_by_name(cls_or_self, name):
          """return field with the given name and role"""
          if isinstance(cls_or_self, type):
              fields = cls_or_self._fields_
          else:
              fields = cls_or_self.fields
          for field in fields:
              if field.name == name:
                  return field
          raise Exception('FieldNotFound: %s' % name)
    

    Example session:

    >>> from logilab.common import attrdict
    >>> f = Form()
    >>> f.fields.append(attrdict({'name': 'something', 'value': 1}))
    >>> f.field_by_name('something')
    {'name': 'something', 'value': 1}
    >>> Form.field_by_name('something')
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "<stdin>", line 15, in field_by_name
    Exception: FieldNotFound: something
    

    So we get a field_by_name method which will act differently (actually use different input data) when called as instance method or as class method.

    Also notice the attrdict trick, which can also be achieved with the Python 2.6 named tuple.


  • Distutils2 January Sprint in Paris

    2011/01/07 by Pierre-Yves David

    At Logilab, we have the pleasure of hosting a distutils2 sprint in January. Sprinters are welcome in our Paris office from 9am on the 27th of January to 7pm on the 30th of January. This sprint will focus on polishing distutils2 for the next alpha release and on the install/remove scripts.

    Distutils2 is an important project for Python. Every contribution will help improve the current state of packaging in Python. See the wiki page on python.org for details about participation. If you can't attend or join us in Paris, you can participate on the #distutils channel of the freenode irc network.

    http://guide.python-distribute.org/_images/state_of_packaging.jpg

    For additional details, see Tarek Ziadé's original announcement, read the wiki page on python.org, or contact us.


  • new pylint / astng / common releases

    2009/03/25 by Sylvain Thenault
    http://janckos.net/blog/wp-content/uploads/2008/08/python.png

    I'm pleased to announce the releases of pylint 0.18, logilab-astng 0.19 and logilab-common 0.39. All these packages should now be cleanly available through easy_install.

    Also, happy pylint users will get:

    • fixed python 2.6 support (pylint/astng tested from 2.4 to 2.6)
    • get source code (and so astng) for zip/egg imports
    • some understanding of the property decorator and of unbound methods
    • some false positives fixed and other minor improvements

    See the projects' home pages and ChangeLogs for more information:

    http://www.logilab.org/project/pylint
    http://www.logilab.org/project/logilab-astng
    http://www.logilab.org/project/logilab-common

    Please report any problem / question to the python-projects@lists.logilab.org mailing-list.

    Enjoy!


  • Virtualenv - Play safely with a Python

    2010/03/26 by Alain Leufroy
    http://farm5.static.flickr.com/4031/4255910934_80090f65d7.jpg

    virtualenv, pip and Distribute are three tools that help developers and packagers. In this short presentation we will look at some of virtualenv's capabilities.

    Please keep in mind that everything below was done using Debian Lenny, python 2.5 and virtualenv 1.4.5.

    Abstract

    virtualenv builds python sandboxes where it is possible to do whatever you want as a simple user without putting your global environment in jeopardy.

    virtualenv allows you to safely:

    • install any python packages
    • add debug lines everywhere (not only in your scripts)
    • switch between python versions
    • try your code as if you were the final user
    • and so on ...

    Install and usage

    Install

    Preferred way

    Just download the virtualenv python script at http://bitbucket.org/ianb/virtualenv/raw/tip/virtualenv.py and call it using python (e.g. python virtualenv.py).

    For convenience, we will refer to this script as virtualenv.

    Other ways

    For Debian (Ubuntu as well) addicts, just do:

    $ sudo aptitude install python-virtualenv
    

    Fedora users would do:

    $ sudo yum install python-virtualenv
    

    And others can install from PyPI (as superuser):

    $ pip install virtualenv
    

    or

    $ easy_install pip && pip install virtualenv
    

    You could also get the source here.

    Quick Guide

    To work in a python sandbox, proceed as follows:

    $ virtualenv my_py_env
    $ source my_py_env/bin/activate
    (my_py_env)$ python
    

    "That's all Folks !"

    Once you have finished just do:

    (my_py_env)$ deactivate
    

    or quit the tty.

    What does virtualenv actually do?

    At creation time

    Let's start again ... more slowly. Consider the following environment:

    $ pwd
    /home/you/some/where
    $ ls
    

    Now create a sandbox called my-sandbox:

    $ virtualenv my-sandbox
    New python executable in "my-sandbox/bin/python"
    Installing setuptools............done.
    

    The output said that you have a new python executable and specific install tools. Your current directory now looks like:

    $ ls -Cl
    my-sandbox/ README
    $ tree -L 3 my-sandbox
    my-sandbox/
    |-- bin
    |   |-- activate
    |   |-- activate_this.py
    |   |-- easy_install
    |   |-- easy_install-2.5
    |   |-- pip
    |   `-- python
    |-- include
    |   `-- python2.5 -> /usr/include/python2.5
    `-- lib
        `-- python2.5
            |-- ...
            |-- orig-prefix.txt
            |-- os.py -> /usr/lib/python2.5/os.py
            |-- re.py -> /usr/lib/python2.5/re.py
            |-- ...
            |-- site-packages
            |   |-- easy-install.pth
            |   |-- pip-0.6.3-py2.5.egg
            |   |-- setuptools-0.6c11-py2.5.egg
            |   `-- setuptools.pth
            |-- ...
    

    In addition to the new python executable and the install tools, you get a whole new python environment containing libraries, a site-packages/ directory (where your packages will be installed), a bin directory, ...

    Note:
    virtualenv does not create every file needed for a whole new python environment. It links to the global environment files instead, in order to save disk space and speed up sandbox creation. Therefore, a working python environment must already be installed on your system.

    At activation time

    At this point you have to activate the sandbox in order to use your custom python. Once activated, python still has access to the global environment, but will look in your sandbox first for python modules:

    $ source my-sandbox/bin/activate
    (my-sandbox)$ which python
    /home/you/some/where/my-sandbox/bin/python
    (my-sandbox)$ echo $PATH
    /home/you/some/where/my-sandbox/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
    (my-sandbox)$ python -c 'import sys;print sys.prefix;'
    /home/you/some/where/my-sandbox
    (my-sandbox)$ python -c 'import sys;print "\n".join(sys.path)'
    /home/you/some/where/my-sandbox/lib/python2.5/site-packages/setuptools-0.6c8-py2.5.egg
    [...]
    /home/you/some/where/my-sandbox
    /home/you/personal/PYTHONPATH
    /home/you/some/where/my-sandbox/lib/python2.5/
    [...]
    /usr/lib/python2.5
    [...]
    /home/you/some/where/my-sandbox/lib/python2.5/site-packages
    [...]
    /usr/local/lib/python2.5/site-packages
    /usr/lib/python2.5/site-packages
    [...]
    

    First of all, a (my-sandbox) prefix is automatically added to your prompt, to make it clear that you are using a python sandbox environment.

    Secondly, my-sandbox/bin/ is added to your PATH, so running python calls the specific python executable located in my-sandbox/bin.
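
    If one of your scripts needs to know whether it runs inside such a sandbox, note that classic virtualenv records the original interpreter prefix in sys.real_prefix when it patches site.py. Here is a minimal sketch, assuming your virtualenv version sets this attribute:

    import sys

    def in_sandbox():
        # virtualenv's patched site.py stores the global interpreter
        # prefix in sys.real_prefix; a plain interpreter has no such
        # attribute
        return hasattr(sys, 'real_prefix')

    if in_sandbox():
        print 'sandbox: %s (global python: %s)' % (sys.prefix, sys.real_prefix)
    else:
        print 'global python: %s' % sys.prefix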

    Note
    It is possible to improve the sandbox isolation by ignoring the global paths and your PYTHONPATH (see the Improve isolation section below).

    Installing packages

    It is possible to install any package in the sandbox without any superuser privilege. For instance, we will install the pylint development revision in the sandbox.

    Suppose that you have the pylint stable version already installed in your global environment:

    (my-sandbox)$ deactivate
    $ python -c 'from pylint.__pkginfo__ import version;print version'
    0.18.0
    

    Once your sandbox is activated, install the development revision of pylint as an update:

    $ source /home/you/some/where/my-sandbox/bin/activate
    (my-sandbox)$ pip install -U hg+http://www.logilab.org/hg/pylint#egg=pylint-0.19
    

    The new package and its dependencies are only installed in the sandbox:

    (my-sandbox)$ python -c 'import pylint.__pkginfo__ as p;print p.version, p.__file__'
    0.19.0 /home/you/some/where/my-sandbox/lib/python2.6/site-packages/pylint/__pkginfo__.pyc
    (my-sandbox)$ deactivate
    $ python -c 'import pylint.__pkginfo__ as p;print p.version, p.__file__'
    0.18.0 /usr/lib/pymodules/python2.6/pylint/__pkginfo__.pyc
    

    You can safely make any change to the new pylint code or to other sandboxed packages, because your global environment is left unchanged.

    Useful options

    Improve isolation

    As said before, your sandboxed python's sys.path still references the global system paths. You can however hide them by:

    • either using the --no-site-packages option, which denies the sandbox access to the global site-packages directory
    • or changing your PYTHONPATH in my-sandbox/bin/activate, in the same way as for PATH (see tips):
    $ virtualenv --no-site-packages closedPy
    $ sed -i '9i PYTHONPATH="$_OLD_PYTHON_PATH"
          9i export PYTHONPATH
          9i unset _OLD_PYTHON_PATH
          40i _OLD_PYTHON_PATH="$PYTHONPATH"
          40i PYTHONPATH="."
          40i export PYTHONPATH' closedPy/bin/activate
    $ source closedPy/bin/activate
    (closedPy)$ python -c 'import sys; print "\n".join(sys.path)'
    /home/you/some/where/closedPy/lib/python2.5/site-packages/setuptools-0.6c8-py2.5.egg
    /home/you/some/where/closedPy
    /home/you/some/where/closedPy/lib/python2.5
    /home/you/some/where/closedPy/lib/python2.5/plat-linux2
    /home/you/some/where/closedPy/lib/python2.5/lib-tk
    /home/you/some/where/closedPy/lib/python2.5/lib-dynload
    /usr/lib/python2.5
    /usr/lib64/python2.5
    /usr/lib/python2.5/lib-tk
    /home/you/some/where/closedPy/lib/python2.5/site-packages
    $ deactivate
    

    This way, you'll get an even more isolated sandbox, just as with a brand new python environment.

    Work with different versions of Python

    It is possible to dedicate a sandbox to a particular version of python by using the --python=PYTHON_EXE option, which specifies the interpreter to use (the default is the interpreter virtualenv was installed with, typically /usr/bin/python):

    $ virtualenv --python=python2.4 pyver24
    $ source pyver24/bin/activate
    (pyver24)$ python -V
    Python 2.4.6
    $ deactivate
    $ virtualenv --python=python2.5 pyver25
    $ source pyver25/bin/activate
    (pyver25)$ python -V
    Python 2.5.2
    $ deactivate
    

    Distribute a sandbox

    To distribute your sandbox, use the --relocatable option, which makes an existing sandbox relocatable: it fixes up the scripts and makes all .pth files relative. This option should be applied just before you distribute the sandbox (and again each time you have changed something in it).

    An important point is that the target system should be similar to your own.
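
    For instance, a small release script could apply the option right before archiving the sandbox. A minimal sketch (the sandbox and archive names are hypothetical):

    import subprocess

    # fix up scripts and make .pth files relative so the sandbox
    # survives being moved, then archive it for distribution
    subprocess.check_call(['virtualenv', '--relocatable', 'my-sandbox'])
    subprocess.check_call(['tar', 'czf', 'my-sandbox.tar.gz', 'my-sandbox'])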

    Tips

    Speed up sandbox manipulation

    Add these scripts to your .bashrc to help you use virtualenv and to automate the creation and activation processes.

    rel2abs() {
    #from http://unix.derkeiler.com/Newsgroups/comp.unix.programmer/2005-01/0206.html
      [ "$#" -eq 1 ] || return 1
      ls -Ld -- "$1" > /dev/null || return
      dir=$(dirname -- "$1" && echo .) || return
      dir=$(cd -P -- "${dir%??}" && pwd -P && echo .) || return
      dir=${dir%??}
      file=$(basename -- "$1" && echo .) || return
      file=${file%??}
      case $dir in
        /) printf '%s\n' "/$file";;
        /*) printf '%s\n' "$dir/$file";;
        *) return 1;;
      esac
      return 0
    }
    function activate(){
        if [[ "$1" == "--help" ]]; then
            echo -e "usage: activate PATH\n"
            echo -e "Activate the sandbox where PATH points inside of.\n"
            return
        fi
        if [[ "$1" == '' ]]; then
            local target=$(pwd)
        else
            local target=$(rel2abs "$1")
        fi
        until  [[ "$target" == '/' ]]; do
            if test -e "$target/bin/activate"; then
                source "$target/bin/activate"
                echo "$target sandbox activated"
                return
            fi
            target=$(dirname "$target")
        done
        echo 'no sandbox found'
    }
    function mksandbox(){
        if [[ "$1" == "--help" ]]; then
            echo -e "usage: mksandbox NAME\n"
            echo -e "Create and activate a highly isaolated sandbox named NAME.\n"
            return
        fi
        local name='sandbox'
        if [[ "$1" != "" ]]; then
            name="$1"
        fi
        # check the resolved name, not the raw argument (which may be empty)
        if [[ -e "$name/bin/activate" ]]; then
            echo "$name is already a sandbox"
            return
        fi
        virtualenv --no-site-packages --clear --distribute "$name"
        sed -i '9i PYTHONPATH="$_OLD_PYTHON_PATH"
                9i export PYTHONPATH
                9i unset _OLD_PYTHON_PATH
               40i _OLD_PYTHON_PATH="$PYTHONPATH"
               40i PYTHONPATH="."
               40i export PYTHONPATH' "$name/bin/activate"
        activate "$name"
    }
    
    Note:
    The virtualenv-commands and virtualenvwrapper projects add some very interesting features to virtualenv. So, keep an eye on them for more advanced features than the ones above.

    Conclusion

    I find virtualenv irreplaceable for testing new configurations or working on projects with different dependencies. Moreover, I use it to learn about other python projects, to see how exactly my project interacts with its dependencies (during debugging), or to test the final user experience.

    All of this stuff can be done without virtualenv but not in such an easy and secure way.

    I will continue the series by introducing other useful projects that enhance your productivity: pip and Distribute. See you soon.


  • Salomé accepted into Debian unstable

    2010/06/03 by Andre Espaze

    Salomé is a platform for pre- and post-processing of numerical simulations, available at http://salome-platform.org/. It is now available as a Debian package (http://packages.debian.org/source/sid/salome) and should soon appear in Ubuntu (https://launchpad.net/ubuntu/+source/salome) as well.

    http://salome-platform.org/salome_screens.png/image_preview

    A difficult packaging work

    A first package of Salomé 3 was made by the courageous Debian developer Adam C. Powell, IV in January 2008. Such packaging is very resource intensive, because many modules have to be built. But the most difficult part was to bring Salomé to an environment it had never been ported to. Even today, Salomé 5 binaries are only provided by upstream as a stand-alone piece of software, ready to unpack on a Debian Sarge/Etch or a Mandriva 2006/2008. This is the first reason why several patches were required to adapt the code to new versions of the dependencies. Version 3 of Salomé was so difficult and time consuming to package that Adam decided to stop for two years.

    The packaging of Salomé resumed with version 5.1.3 in January 2010. Thanks to Logilab and the OpenHPC project, I could join him for 14 weeks of work adapting every module to Debian unstable. Porting to the new versions of the dependencies was a first step, but we also had to adapt the code to the Debian packaging philosophy, with binaries, libraries and data shipped to dedicated directories.

    A promising future

    Salomé being accepted into Debian unstable means that porting it to Ubuntu should follow in the near future. Moreover, the work done to adapt Salomé to a GNU/Linux distribution may help developers on other platforms as well.

    That is excellent news for all people involved in numerical simulation, because they are going to have access to Salomé through their package management tools. It will help spread Salomé onto any fresh install and, moreover, keep it up to date.

    Join the fun

    For mechanical engineers, a derived product called Salomé-Méca has recently been published. The goal is to bring the functionality of the Code Aster finite element solver into Salomé, in order to ease simulation workflows. If you too are interested in Debian packages for those tools, you are invited to come and join the fun.

    I have submitted a proposal to talk about Salomé at EuroSciPy 2010. I look forward to meeting other interested parties during this conference, which will take place in Paris on July 8th-11th.


  • PyCON FR 2008 talk - Quality assurance

    2008/05/27

    A presentation on quality assurance was given on May 17th 2008 at the Python days organized by the Association Francophone Python (AFPy).

    The aim is to describe a few simple notions and practices that improve the readability and maintainability of your python code.

    The first part describes some of python's standard tools, and ends with a review of more ambitious projects that are nevertheless indispensable for producing quality code.

    Photo under Creative Commons By-Nc-Nd license, by yota

    To access the slides:

    http://fr.pycon.org/presentations_2008/julien-jehannet-assurance-qualite/slides.html


  • The Configuration Management Problem

    2009/07/31 by Nicolas Chauvat
    http://www.logilab.org/image/9863?vid=download

    Today I felt like summing up my opinion on a topic that was discussed this year on the Python mailing lists, at PyCon-FR, at EuroPython and EuroSciPy... packaging software! Let us discuss the two main use cases.

    The first use case is maintaining computer systems in production. A trait of production systems is that they cannot afford failures and are often deployed on a large scale. This leaves little room for manually fixing problems: either the installation process works or the system fails. Reaching that level of quality takes a lot of work.

    The second use case is to make life easier for software developers and computer users, by letting them try out new pieces of software without much work.

    The first use case has to be addressed as a configuration management problem. There is no way around it. The best way I know of managing the configuration of a computer system is called Debian. Its package format and its tool chain provide a very extensive and efficient set of features for system development and maintenance. Of course it is not perfect, and there are missing bits and open issues that could be tackled, like the dependencies between hardware and software. For example, nothing will prevent you from installing on your Debian system a version of a driver that conflicts with the version of the chip found in your hardware. That problem could be solved, but I do not think the Debian project is there yet, and I do not count it as a reason to reject Debian since I have not seen any other competitor at the same level as Debian.

    The second use case is kind of a trap, for it concerns most computer users and most of those users are either convinced the first use case has nothing in common with their problem or convinced that the solution is easy and requires little work.

    The situation is made more complicated by the fact that most of those users never had the chance to use a system with proper package management tools. They simply do not know the difference and do not feel they are missing anything when using their system-that-comes-with-a-windowing-system-included.

    Since many software developers have never had to maintain computer systems in production (often considered a lowly sysadmin job) and have never developed packages for computer systems that are maintained in production, they tend to think that the operating system and their software are perfectly decoupled. They have no problem trying to create a new layer on top of existing operating systems, transforming an operating system issue (managing software installation) into a programming language issue (see CPAN, Python eggs and so many others).

    Creating a sub-system specific to a language and hosting it on an operating system works well as long as the language boundary is not crossed and there is no competition between the sub-system and the system itself. In the Python world, distutils, setuptools, eggs and the like more or less work with pure Python code. They create a square wheel that was made round years ago by dpkg+apt-get and others, but they help a lot of their users do something they would not know how to do another way.

    A wall is quickly hit though, as the approach becomes overly complex as soon as they try to depend on things that do not belong to their Python sub-system. What if your application needs a database? What if your application needs to link to libraries? What if your application needs to reuse data from or provide data to other applications? What if your application needs to work on different architectures?

    The software developers who never had to maintain computer systems in production wish these tasks were easy. Unfortunately they are not and cannot be. As I said, there is no way around configuration management for anyone who wants a stable system. Configuration management requires both project management work and software development work. One can have a system where packaging software is less work, but that comes at the price of stability, functionality and ease of maintenance.

    Since neither of the two use cases will disappear any time soon, the only solution to the problem is to share as much data as possible between the different tools and let everyone decide how to install software on their own computer system.

    Some links to continue your reading on the same topic:


  • HOWTO install lodgeit pastebin under Debian/Ubuntu

    2010/06/24 by Arthur Lutz

    LodgeIt is a simple open source pastebin... and it's written in Python!

    The installation under debian/ubuntu goes as follows:

    sudo apt-get update
    sudo apt-get -uVf install python-imaging python-sqlalchemy python-jinja2 python-pybabel python-werkzeug python-simplejson
    cd local
    hg clone http://dev.pocoo.org/hg/lodgeit-main
    cd lodgeit-main
    vim manage.py
    

    For debian squeeze you have to downgrade python-werkzeug, so get the old version of python-werkzeug from snapshot.debian.org at http://snapshot.debian.org/package/python-werkzeug/0.5.1-1/

    wget http://snapshot.debian.org/archive/debian/20090808T041155Z/pool/main/p/python-werkzeug/python-werkzeug_0.5.1-1_all.deb
    

    Modify the dburi and the SECRET_KEY, then launch the application:

    python manage.py runserver
    

    Then off you go to configure your apache or lighttpd.

    An easy (and dirty) way of running it at startup is to add the following command to the www-data crontab:

    @reboot cd /tmp/; nohup /usr/bin/python /usr/local/lodgeit-main/manage.py runserver &
    

    This should of course be done in an init script.

    http://rn0.ru/static/help/advanced_features.png

    Hopefully we'll find some time to package this nice webapp for debian/ubuntu.


  • You can now register on our sites

    2009/09/03 by Arthur Lutz

    With the new version of CubicWeb deployed on our "public" sites, we would like to welcome a new (much awaited) functionality: you can now register directly on our websites. Getting an account will give you access to a bunch of functionalities:

    http://farm1.static.flickr.com/53/148921611_eadce4f5f5_m.jpg
    • registering to a project's activity will get you automated email reports of what is happening on that project
    • you can directly add tickets on projects instead of talking about them on the mailing lists
    • you can bookmark content
    • tag stuff
    • and much more...

    This is also a way of testing out the CubicWeb framework (in this case the forge cube), which you can take home and host yourself (Debian recommended). Just click on the "register" link at the top right, or here.

    Photo by wa7son under creative commons.


  • apycot 0.12.1 released

    2008/06/24 by Arthur Lutz

    After one month of internship at Logilab, I'm pleased to announce the 0.12.1 release of apycot.

    For more information, read the apycot 0.12.1 release note.

    You can also check the new sample configuration.

    Pierre-Yves David


  • Discovering logilab-common Part 1 - deprecation module

    2010/09/02 by Stéphanie Marcu

    The logilab-common library contains a lot of utilities that are often unknown. I will write a series of blog entries to explore the nice features of this library.

    We will begin with the logilab.common.deprecation module which contains utilities to warn users when:

    • a function or a method is deprecated
    • a class has been moved into another module
    • a class has been renamed
    • a callable has been moved to a new module

    deprecated

    When a function or a method is deprecated, you can use the deprecated decorator. It will print a message to warn the user that the function is deprecated.

    The decorator takes two optional arguments:

    • reason: the deprecation message. A good practice is to specify at the beginning of the message, between brackets, the version number from which the function is deprecated. The default message is 'The function "[function name]" is deprecated'.
    • stacklevel: This is the option of the warnings.warn function which is used by the decorator. The default value is 2.
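
    For instance, the two arguments can be tuned independently. A minimal sketch, assuming the decorator may be called with no arguments since both have defaults, as described above (the helper names are hypothetical):

    from logilab.common.deprecation import deprecated

    @deprecated()  # falls back to the default message
    def old_helper():
        return 42

    # stacklevel=3 makes the warning point at the caller's caller
    @deprecated('[1.2] use new_helper instead', stacklevel=3)
    def older_helper():
        return 42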

    We have a class Person defined in the file person.py. The get_surname method is deprecated; we must use the get_lastname method instead. For that, we use the deprecated decorator on the get_surname method.

    from logilab.common.deprecation import deprecated
    
    class Person(object):
    
        def __init__(self, firstname, lastname):
            self._firstname = firstname
            self._lastname = lastname
    
        def get_firstname(self):
            return self._firstname
    
        def get_lastname(self):
            return self._lastname
    
        @deprecated('[1.2] use get_lastname instead')
        def get_surname(self):
            return self.get_lastname()
    
    def create_user(firstname, lastname):
        return Person(firstname, lastname)
    
    if __name__ == '__main__':
        person = create_user('Paul', 'Smith')
        surname = person.get_surname()
    

    When running person.py we have the message below:

    person.py:22: DeprecationWarning: [1.2] use get_lastname instead
    surname = person.get_surname()

    class_moved

    Now we have moved the class Person into a new_person.py file. We indicate in the person.py file that the class has been moved:

    from logilab.common.deprecation import class_moved
    import new_person
    Person = class_moved(new_person.Person)
    
    if __name__ == '__main__':
        person = Person('Paul', 'Smith')
    

    When we run the person.py file, we have the following message:

    person.py:6: DeprecationWarning: class Person is now available as new_person.Person
    person = Person('Paul', 'Smith')

    The class_moved function takes one mandatory argument and two optional:

    • new_class: this mandatory argument is the new class
    • old_name: this optional argument specifies the old class name. By default it is the same name as the new class. This argument is used in the default printed message.
    • message: with this optional argument, you can specify a custom message
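
    For instance, both optional arguments could be combined. A minimal sketch, assuming keyword usage as in the argument list above:

    from logilab.common.deprecation import class_moved
    import new_person

    # keep the old public name but emit a custom warning message
    Person = class_moved(new_person.Person, old_name='Person',
                         message='[1.2] Person now lives in new_person')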

    class_renamed

    The class_renamed function automatically creates a class which fires a DeprecationWarning when instantiated.

    The function takes two mandatory arguments and one optional:

    • old_name: a string which contains the old class name
    • new_class: the new class
    • message: an optional message. The default one is '[old class name] is deprecated, use [new class name]'

    We now rename the Person class to User in the new_person.py file. Here is the new person.py file:

    from logilab.common.deprecation import class_renamed
    from new_person import User
    
    Person = class_renamed('Person', User)
    
    if __name__ == '__main__':
        person = Person('Paul', 'Smith')
    

    When running person.py, we have the following message:

    person.py:5: DeprecationWarning: Person is deprecated, use User
    person = Person('Paul', 'Smith')
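
    If the default message does not fit, the optional message argument overrides it. A minimal sketch under the same assumptions:

    from logilab.common.deprecation import class_renamed
    from new_person import User

    Person = class_renamed('Person', User,
                           message='[1.2] Person was renamed, use new_person.User')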

    moved

    The moved function is used to indicate that a callable has been moved to a new module. It returns a callable wrapper, so that when the wrapper is called, a warning is printed telling where the object can now be found; only then is the import done (and not before) and the actual object called.

    Note

    Usage is somewhat limited on classes, since the wrapper will fail if used in a class's ancestors list: use the class_moved function instead (which has no lazy import feature, though).

    The moved function takes two mandatory parameters:

    • modpath: a string representing the path to the new module
    • objname: the name of the new callable

    We will use, in person.py, the create_user function which is now defined in the new_person.py file:

    from logilab.common.deprecation import moved
    
    create_user = moved('new_person', 'create_user')
    
    if __name__ == '__main__':
        person = create_user('Paul', 'Smith')
    

    When running person.py, we have the following message:

    person.py:4: DeprecationWarning: object create_user has been moved to module new_person
    person = create_user('Paul', 'Smith')
