
Blog entries

  • Writing C++ bindings for Python

    2014/03/13 by Laura Médioni

    As part of the work to interface the Code_TYMPAN application with Python code, we carried out the study below of the various ways to generate Python bindings for C++ code. This study does not aim to be exhaustive; it focused on the aspects that were of direct interest for the aforementioned work.

    Existing solutions

    A survey of existing solutions was carried out, which yielded the following list for hand-written binding code:

    • Cython, a programming language inspired by Python and based on Pyrex
    • Boost.Python, a C++ library from the Boost collection for writing Python bindings
    • PyBindGen, a tool implemented in Python that lets you describe C++ bindings directly in that language
    • Swig, a tool that can generate C++ bindings for several programming languages
    • Shiboken, a wrapper-code generator for C/C++ libraries based on CPython

    Solutions also exist to automate this writing. These are tools that rely on compilers (gcc, clang) to parse the C++ code and generate the corresponding binding code. For example:

    • XDress, which can generate Cython files (.pyx, .pxd) from gcc-xml or libclang
    • PyBindGen, which has features to generate Python bindings from gcc
    • This post explains how to use libclang to walk the AST of C++ code and generate Boost.Python bindings

    Aspects taken into account

    This article is interesting because it gives a very thorough treatment of the issues that arise when writing C++ bindings for high-level languages. It was written during the development of Shiboken.

    In our case, the criteria for choosing a final solution covered several aspects:

    • Development cost: learning curve of the tool, amount of code needed to wrap a given C++ class, cost of integration into the build system, degree of automation of the solution, readability of the generated code, etc.
    • Memory management: reference counting, handling of object ownership
    • Quality and completeness of the C++ support: STL compatibility, handling of references and pointers, templates, operator overloading, etc.
    • Long-term viability of the solution: technologies used by the tool, quality of the documentation, support, size and level of activity of the developer community

    Solutions considered

    Swig was not retained, based on the preconception that it is a rather heavyweight solution more oriented towards C than C++, an observation drawn from work done at Logilab a few months earlier. The Boost.Python solution was not explored because we wanted to stay closer to Python than to C++. Shiboken looks promising, although it is poorly documented and poorly indexed (the first search results land on old project pages, giving the impression that the latest release dates back several years, which is in fact not the case). It was set aside for lack of time.

    PyBindGen and Cython were both tested.

    The target of the tests was the wrapping of smart pointers, since this matches one of our needs on the Code_TYMPAN project.

    The experiments were run on simplified classes:

    • MyElement, a class representing an element to be wrapped in a smart pointer; it inherits from IRefCount, which implements reference counting
    • SmartPtr, the application's home-grown smart pointer class
    • A few test functions manipulating SmartPtr smart pointers

    Here is an excerpt of the C++ headers:

    #ifndef MY_ELEMENT_H
    #define MY_ELEMENT_H
    #include <iostream>
    using namespace std;
    #include "SmartPtr.h"
    
    class MyElement : public IRefCount
    {
        public:
            MyElement ();
            MyElement (string);
            string Name(){ return _name; }
            virtual ~MyElement ();
    
        protected:
            string _name;
    };
    typedef SmartPtr<MyElement> SPMyElement;
    #endif
    
    #ifndef SMART_PTR_H
    #define SMART_PTR_H
    template <class T> class SmartPtr
    {
        public:
            SmartPtr();
            SmartPtr(T*);
            const T* getRealPointer() const;
    
        protected:
            T* _pObj;
    };
    #endif
    
    SPMyElement BuildElement();
    void UseElement(SPMyElement elt);
    

    Cython

    This tool now offers good C++ support (broadly since version 0.17). Its advantage is that it allows manipulating both C++ and Python objects in Cython files.

    Usage
    • (Optional) write a .pxd file containing a copy of the headers to wrap (with a link to the source files): declarations of types, classes, functions, etc.
    • Write a .pyx file containing function calls and constructions of C++ or Python objects. The functions and classes of this module can then be used from a Python script
    • Compile the Cython code describing the C++ bindings, generate and compile the corresponding C++ code, and produce a Python library.

    Cython offers support for STL containers, templates, overloading of most operators ("->" is not supported), passing arguments by reference and by pointer, etc.

    Cython is currently at version 0.20.1; the latest release dates from February 11th, 2014. The Cython tools are fairly well documented and the developer community is active.

    Example

    Here is the Cython binding code corresponding to the example described above:

    setup.py:

    from distutils.core import setup
    from Cython.Build import cythonize
    
    setup(name='smartptr',
        ext_modules=cythonize('*.pyx',
            ),
    )
    

    smartptr.pxd:

    from libcpp.string cimport string
    
    cdef extern from "src/SmartPtr.h":
        cdef cppclass SmartPtr[T]:
            SmartPtr()
            SmartPtr(T *)
            T *getRealPointer() # No overloading of ->: access to the wrapped object has to be explicit
    
    cdef extern from "src/MyElement.h":
        cdef cppclass MyElement:
            MyElement()
            MyElement(string)
            string Name()
    
    cdef extern from "src/Test.h":
        SmartPtr[MyElement] BuildSPElement()
        void UseSPElement(SmartPtr[MyElement])
    

    smartptr.pyx:

    # distutils: language = c++
    # distutils: libraries = element
    
    cimport smartptr
    cimport cython
    
    cdef class PySPMyElement:
        cdef SmartPtr[MyElement] thisptr
    
        def __cinit__(self, name=""):
            """ PySPMyElement constructor """
            if name == "":
                self.thisptr = SmartPtr[MyElement](new MyElement())
            else:
                self.thisptr = SmartPtr[MyElement](new MyElement(name))
    
        def get_name(self):
            """ Returns the name of the element """
            return self.thisptr.getRealPointer().Name()
    
    @cython.locals(elt=PySPMyElement)
    def build_sp_elt():
        """ Calls the C++ API to build an element """
        elt = PySPMyElement.__new__(PySPMyElement)
        elt.thisptr = BuildSPElement()
        return elt
    
    @cython.locals(elt=PySPMyElement)
    def use_sp_elt(elt):
        """ Lends elt to the C++ API """
        UseSPElement(elt.thisptr)
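
    For completeness, here is a minimal sketch of how the resulting module could be built and used from Python. The in-place build command and the element name "door" are assumptions for the example, not taken from the original article:

    # Build the extension in place (assumed command line):
    #   $ python setup.py build_ext --inplace
    import smartptr

    # The Python proxy wraps a SmartPtr<MyElement>; "door" is a hypothetical name.
    elt = smartptr.PySPMyElement("door")
    print(elt.get_name())

    # Lend the element to the C++ API, then get one built on the C++ side.
    smartptr.use_sp_elt(elt)
    other = smartptr.build_sp_elt()
    print(other.get_name())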
    

    XDress

    XDress is an automatic C/C++ binding-code generator written in Python and based on Cython.

    Usage
    • In an xdressrc.py file, list the classes and functions to wrap (the signature is not needed, the name is enough; you can also choose to wrap only some of the classes of a .h file).
    • Run xdress, which generates the corresponding .pyx and .pxd files

    XDress can wrap STL containers via its stlwrap generator (the containers to wrap must be listed in xdressrc.py). As an example, vectors are converted into numpy arrays of the contained type.

    This project is young and not extensively documented, but it looks promising.

    PyBindGen

    Usage
    • Write a Python script that describes the C++ classes/functions to wrap, relying on the PyBindGen module (1) → this generates a .cpp file
    • Compile the generated C++ code together with the library of the program to wrap, producing a Python library.

    This process can be automated:

    • Write a Python script that uses the PyBindGen tools to list the modules (headers) to wrap, read them and trigger the automatic generation of the C++ bindings

    or:

    • Write a Python script that uses the PyBindGen tools to list the modules (headers) to wrap and generate the Python script described in (1) (which would allow an intermediate step for customizing the bindings)

    PyBindGen offers support for the STL, (multiple) inheritance, handling of C++ exceptions on the Python side, operator overloading, reference counting, and object ownership management. However, its template support is poor.

    PyBindGen is currently at version 0.17; the latest release dates from February 15th, 2014 (it adds, among other things, Python 3.3 compatibility).

    Example

    PyBindGen, as it stands, does not make it easy to wrap templates, and therefore does not easily wrap home-grown smart pointers either.

    One class of this package can wrap Boost shared pointers (boost::shared_ptr). It would in principle be possible to modify it slightly to wrap the smart pointers of the Code_TYMPAN application (not tested).

    Here is nevertheless, as an example, the code wrapping the MyElement class and functions that manipulate 'MyElement *' instead of smart pointers:

    Test.h:

    MyElement *BuildElement();
    void UseElement(MyElement *elt);
    

    smartptr.py:

    import pybindgen
    import sys
    from pybindgen import retval
    from pybindgen import param
    
    mod = pybindgen.Module('smartptr')
    
    # File includes
    mod.add_include('"src/MyElement.h"')
    mod.add_include('"src/Test.h"')
    
    # Class MyElement
    MyElement = mod.add_class('MyElement')
    MyElement.add_constructor([])
    MyElement.add_method('Name', retval('std::string'), [])
    
    
    # Test functions
    # transfer_ownership=False : here Python program keeps the ownership of the element it passes to the C++ API
    mod.add_function('UseElement', None, [param('MyElement *', 'elt', transfer_ownership=False)])
    # caller_owns_return=True : here Python program will be responsible for destructing the element returned by BuildElement
    mod.add_function('BuildElement', retval('MyElement *',  caller_owns_return=True), [])
    
    if __name__ == '__main__':
        mod.generate(sys.stdout)
    

    Boost.Python

    With Boost.Python, the Python bindings are written directly in C++.

    It is a very reliable and long-lived tool, with, by its very nature, excellent C++ support: memory management, templates, operator overloading, reference counting, smart pointers, inheritance, etc.

    Drawback: the syntax (heavy C++ template style) is not very intuitive.

    Conclusion

    The Cython and PyBindGen solutions were explored around the problem of wrapping smart pointers. The outcome was that:

    • Code_TYMPAN smart pointers can easily be wrapped in Cython. The chosen approach is to manipulate the C++ objects from Python directly through smart pointers (the Python objects defined in the .pyx wrap SmartPtr[T] objects and thus act as proxies to the underlying objects). This way, using a C++ object from Python increments the reference counter on the C++ side, which guarantees that the reference to an object will not be lost while it is being used from Python. A call to getRealPointer() to wrap functions that manipulate T * directly will remain possible in the Cython code if needed.
    • PyBindGen has the advantage of offering ways to manage the assignment of object ownership between C++ and Python (transfer_ownership, caller_owns_return). Unfortunately, it cannot wrap smart pointers without modifying PyBindGen classes, nor can it wrap templates.

    Moreover, after using PyBindGen, it seemed to us that, although it has some interesting ideas, its documentation, tutorials and support are too succinct. The project is developed by a single person and its long-term viability is hard to assess. Cython, on the other hand, offers better support and more reliability.

    The final choice therefore fell on Cython. It was motivated by the wish to use a reliable tool that keeps development costs down (which ruled out PyBindGen) and that is as close to Python as possible (which ruled out Boost.Python). This tool seems to provide sufficient C++ support for our needs as perceived at this stage of the project.

    Furthermore, if we look for a way to generate the Python bindings automatically, XDress has the advantage of allowing libclang as an alternative to gcc-xml (PyBindGen relies on gcc-xml only). Another option would be to use XDress to generate only the .pxd and to write the .pyx by hand.

    A question that was not addressed during this study, because it did not match an internal need, but which is nevertheless interesting, is the following: is it possible to derive, from Python, subclasses of base classes defined in C++ and wrapped with Cython, and to use the resulting objects in the C++ application?


  • Short report on the Debian Meetup in Nantes

    2014/03/13 by Arthur Lutz

    Last night I went to the first Debian meetup in Nantes. It was quite friendly: about twenty people answered the call from Damien Raude-Morvan and Thomas Vincent. Thanks to them for launching the initiative (see the organization pad).

    //www.logilab.org/file/228927/raw/debian-france.jpg

    After a round of introductions and some discussion about Debian in general (including an explanation by Damien of the state of Java in Debian), Damien presented the Debian France association as well as the new Debian contributor contest. The list of ideas is long and appealing; don't hesitate to go take a look and make a contribution.

    Then Thomas presented the French Debian translation team and its working principles (quality before quantity, mailing lists, IRC, translation process, etc.).

    //www.logilab.org/file/228931/raw/saltstack_logo.jpg

    Finally, I briefly presented Salt and its place in Debian. For the public record: the slides of the presentation.

    See you next time!



  • Looking back at MiniDebConf Paris 2014

    2014/03/05 by Arthur Lutz
    http://www.logilab.org/file/226609/raw/200px-Mini-debconf-paris.png

    We are happy to have taken part in MiniDebConf Paris.

    We sponsored the event and also gave two presentations:

    • Julien Cristau presented the team he is part of for the next Debian release, Jessie. He notably gave the community advice on preparing for jessie. Here are his slides: Release team: Jessie.
    • David Douard presented Salt to administrate Debian systems, introducing Salt with a particular focus on what Salt can bring to the administration of a fleet of Debian machines.

    With about fifty participants over the two days, it is always a pleasure to meet the French community around Debian. Thanks to the Debian France association for organizing this conference.


  • Second Salt Meetup builds the french community

    2014/03/04 by Arthur Lutz

    On the 6th of February, the Salt community in France met in Paris to discuss Salt and choose the tools to federate itself. The meetup was kindly hosted by IRILL.

    There were two formal presentations:

    • Logilab did a short introduction of Salt,
    • Majerti presented feedback on their experience with Salt in various professional contexts.

    The presentation space was then opened to other participants and Boris Feld did a short presentation of how Salt was used at NovaPost.

    http://www.logilab.org/file/226420/raw/saltstack_meetup.jpeg

    We then had a short break to share some pizza (sponsored by Logilab).

    After the break, we had some open discussion about various subjects, including "best practices" in Salt and some specific use cases. Regis Leroy talked about the states that Makina Corpus has been publishing on github. The idea of reconciling the documentation and the monitoring of an infrastructure was brought up by Logilab, which calls it "Test Driven Infrastructure".

    The tools we collectively chose to form the community were the following:

    • a mailing list kindly hosted by the AFPY (a French Python organization)
    • a dedicated #salt-fr IRC channel on freenode

    We decided that the meetup would take place every two months, hence the third one will be in April. There is already some discussion about organizing events to tell as many people as possible about Salt. It will probably start with an event at NUMA in March.

    After the meetup was officially over, a few people went on to have some drinks nearby. Thank you all for coming and your participation.



  • FOSDEM PGDay 2014

    2014/02/11 by Rémi Cardona

    I attended PGDay on January 31st, in Brussels. This event was held just before FOSDEM, which I also attended (expect another blog post). Here are some of the notes I took during the conference.

    https://fosdem.org/2014/support/promote/wide.png

    Statistics in PostgreSQL, Heikki Linnakangas

    Due to transit delays, I only caught the last half of that talk.

    The main goal of this talk was to explain some of Postgres' per-column statistics. In a nutshell, Postgres needs to have some idea about tables' content in order to choose an appropriate query plan.

    Heikki explained which sorts of statistics Postgres gathers, such as most common values and histograms. Another interesting stat is the correlation between physical pages and data ordering (see CLUSTER).

    Column statistics are gathered when running ANALYZE and stored in the pg_statistic system catalog. The pg_stats view provides a human-readable version of these stats.

    Heikki also explained how to determine whether performance issues are due to out-of-date statistics or not. As it turns out, EXPLAIN ANALYZE shows, for each step of the query plan, how many rows the planner expected to process and how many it actually processed. The rule of thumb is that similar values (no more than an order of magnitude apart) mean that column statistics are doing their job. A wider margin between expected and actual rows means that statistics are possibly preventing the query planner from picking a more optimized plan.
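
    As a minimal illustration of that rule of thumb, here is a sketch in Python using psycopg2 (the connection string, table and column names are assumptions) that prints a plan so the estimated and actual row counts can be compared:

    import psycopg2

    # Hypothetical database and table; adapt to your own setup.
    conn = psycopg2.connect("dbname=mydb")
    cur = conn.cursor()
    # Each plan node reports an estimated row count ("rows=...") and an actual
    # one ("actual time=... rows=..."); compare them line by line.
    cur.execute("EXPLAIN ANALYZE SELECT * FROM my_table WHERE my_column = 42")
    for (line,) in cur.fetchall():
        print(line)
    # If the two counts differ by more than an order of magnitude, running
    # ANALYZE on the table may give the planner fresher statistics:
    # cur.execute("ANALYZE my_table")
    conn.close()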

    It was noted though that statistics-related performance issues often happen on tables with very frequent modifications. Running ANALYZE manually or increasing the frequency of the automatic ANALYZE may help in those situations.

    Advanced Extension Use Cases, Dimitri Fontaine

    Dimitri explained with very simple cases the use of some of Postgres' lesser-known extensions and the overall extension mechanism.

    Here's a grocery-list of the extensions and types he introduced:

    • intarray extension, which adds operators and functions to the standard ARRAY type, specifically tailored for arrays of integers,
    • the standard POINT type which provides basic 2D flat-earth geometry,
    • the cube extension that can represent N-dimensional points and volumes,
    • the earthdistance extension that builds on cube to provide distance functions on a sphere-shaped Earth (a close-enough approximation for many uses),
    • the pg_trgm extension which provides text similarity functions based on trigram matching (a much simpler thus faster alternative to Levenshtein distances), especially useful for "typo-resistant" auto-completion suggestions,
    • the hstore extension which provides a simple-but-efficient key value store that has everyone talking in the Postgres world (it's touted as the NoSQL killer),
    • the hll extension, which implements the HyperLogLog algorithm, which seems very well suited to storing and counting unique visitors on a web site, for example.

    An all-around great talk with simple but meaningful examples.

    http://tapoueh.org/images/fosdem_2014.jpg

    Integrated cache invalidation for better hit ratios, Magnus Hagander

    What Magnus presented almost amounted to a tutorial on caching strategies for busy web sites. He went through simple examples, using the ubiquitous Django framework for the web view part and Varnish for the HTTP caching part.

    The whole talk revolved around adding private (X-prefixed) HTTP headers in replies containing one or more "entity IDs" so that Varnish's cache can be purged whenever said entities change. The hard problem lies in how and when to call PURGE on Varnish.

    The obvious solution is to override Django's save() method on Model-derived objects. One can then use httplib (or better yet requests) to purge the cache. This solution can be slightly improved by using Django's signal mechanism instead, which sounds an awful lot like CubicWeb's hooks.
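
    Here is a minimal sketch of that signal-based variant; the model, the Varnish address and the X-Entity-Id header (plus the VCL logic that would honour it) are assumptions, not details given in the talk:

    import requests
    from django.db.models.signals import post_save
    from django.dispatch import receiver

    from myapp.models import Article  # hypothetical model

    VARNISH_URL = "http://127.0.0.1:6081/"  # hypothetical Varnish address

    @receiver(post_save, sender=Article)
    def purge_article_cache(sender, instance, **kwargs):
        # Ask Varnish to drop every cached page tagged with this entity ID.
        requests.request("PURGE", VARNISH_URL,
                         headers={"X-Entity-Id": str(instance.pk)})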

    The problem with the above solution is that any DB modification not going through Django (and they will happen) will not invalidate the cached pages. So Magnus then presented how to write the same cache-invalidating code in PL/Python in triggers.

    While this does solve that last issue, it introduces synchronous HTTP calls into the DB, hurting write performance (or killing it completely if the HTTP calls fail). The fix for those problems, at the cost of a little latency, is to use SkyTools' PgQ, a simple message queue based on Postgres. Moving the HTTP calls out of the main database and into a Consumer (a class provided by PgQ's python bindings) makes the cache-invalidating trigger asynchronous, reducing write overhead.

    http://www.logilab.org/file/210615/raw/varnish_django_postgresql.png

    A clear, concise and useful talk for any developer in charge of high-traffic web sites or applications.

    The Worst Day of Your Life, Christophe Pettus

    Christophe humorously went back to that dreadful day in the collective Postgres memory: the release of 9.3.1 and the streaming replication chaos.

    My overall impression of the talk: Thank $DEITY I'm not a DBA!

    But Christophe also gave some valuable advice, even for non-DBAs:

    • Provision 3 times the necessary disk space, in case you need to pg_dump or otherwise do a snapshot of your currently running database,
    • Do backups and test them:
      • give them to developers,
      • use them for analytics,
      • test the restore, make it foolproof, try to automate it,
    • basic Postgres hygiene:
      • fsync = on (on by default, DON'T TURN IT OFF, there are better ways)
      • full_page_writes = on (on by default, don't turn it off)
      • deploy minor versions as soon as possible,
      • plan upgrade strategies before EOL,
      • 9.3+ checksums (createdb option, performance cost is minimal),
      • application-level consistency checks (don't wait for auto vacuum to "discover" consistency errors).

    Materialised views now and in the future, Thom Brown

    Thom presented one of the new features of Postgres 9.3: materialized views.

    In a nutshell, materialized views (MV) are read-only snapshots of queried data that's stored on disk, mostly for performance reasons. An interesting feature of materialized views is that they can have indexes, just like regular tables.

    The REFRESH MATERIALIZED VIEW command can be used to update an MV: it will simply run the original query again and store the new results.

    There are a number of caveats with the current implementation of MVs:

    • pg_dump never saves the data, only the query used to build it,
    • REFRESH requires an exclusive lock,
    • due to implementation details (frozen rows or pages IIRC), MVs may exhibit non-concurrent behavior with other running transactions.

    Looking towards 9.4 and beyond, here are some of the upcoming MV features:

    • 9.4 adds the CONCURRENTLY keyword:
      • + no longer needs an exclusive lock, doesn't block reads
      • - requires a unique index
      • - may require VACUUM
    • roadmap (no guarantees):
      • unlogged (disables the WAL),
      • incremental refresh,
      • lazy automatic refresh,
      • planner awareness of MVs (would use MVs as cache/index).

    Indexes: The neglected performance all-rounder, Markus Winand

    http://use-the-index-luke.com/img/alchemie.png

    Markus' goal with this talk was to show that very few people in the SQL world actually know - let alone really care - about indexes. According to his own experience and that of others (even with competing RDBMS), poorly written SQL is still a leading cause of production downtime (he puts the number at around 50% of downtime, though others he quoted put that number higher). SQL queries can indeed put such stress on DB systems as to cause them to fail.

    One major issue, he argues, is poorly designed indexes. He went back in time to explain possible reasons for the lack of knowledge about indexes with both SQL developers and DBAs. One such reason may be that indexes are not part of the SQL standard and are left as implementation-specific details. Thus many books about SQL barely cover indexes, if at all.

    He then took us through a simple quiz he wrote on the topic, with only 5 questions. The questions and explanations were very insightful and I must admit my knowledge of indexes was not up to par. I think everyone in the room got his message loud and clear: indexes are part of the schema, devs should care about them too.

    Try out the test: http://use-the-index-luke.com/3-minute-test

    PostgreSQL - Community meets Business, Michael Meskes

    For the last talk of the day, Michael went back to the history of the Postgres project and its community. Unlike other IT domains such as email, HTTP servers or even operating systems, RDBMS are still largely dominated by proprietary vendors such as Oracle, IBM and Microsoft. He argues that the reasons are not technical: from a developer standpoint, Postgres has all the features of the leading RDBMS (and many more), and the few missing administrative features related to scalability are being addressed.

    Instead, he argues decision makers inside companies don't yet fully trust Postgres due to its (perceived) lack of corporate backers.

    He went on to suggest ways to overcome those perceptions, for example with an "official" Postgres certification program.

    A motivational talk for the Postgres community.

    http://fosdem2014.pgconf.eu/files/img/frontrotate/slonik.jpg

  • A Salt Configuration for C++ Development

    2014/01/24 by Damien Garaud
    http://www.logilab.org/file/204916/raw/SaltStack-Logo.png

    At Logilab, we've been using Salt for one year to manage our own infrastructure. I wanted to use it to manage a specific configuration: C++ development. When I instantiate a Virtual Machine with a Debian image, I don't want to spend time installing and configuring a system to fit my needs as a C++ developer.

    This article is a very simple recipe to get a C++ development environment, ready to use, ready to hack.

    Give Me an Editor and a DVCS

    Quite simple: I use the YAML file format used by Salt to describe what I want. To install these two editors, I just need to write:

    vim-nox:
      pkg.installed
    
    emacs23-nox:
      pkg.installed
    

    For Mercurial, you'll guess:

    mercurial:
     pkg.installed
    

    You can write these lines in the same init.sls file, but you can also decide to split your configuration into different subdirectories: one place for each thing. I decided to create dev and edit directories at the root of my Salt config, each with an init.sls inside.

    That's all for the editors. Next step: specific C++ development packages.

    Install Several "C++" Packages

    In a cpp folder, I write a file init.sls with this content:

    gcc:
        pkg.installed
    
    g++:
        pkg.installed
    
    gdb:
        pkg.installed
    
    cmake:
        pkg.installed
    
    automake:
        pkg.installed
    
    libtool:
        pkg.installed
    
    pkg-config:
        pkg.installed
    
    colorgcc:
        pkg.installed
    

    The choice of these packages is arbitrary; add or remove some as you need, there is no single right solution. But I want more: I want some LLVM packages. In a cpp/llvm.sls file, I write:

    llvm:
     pkg.installed
    
    clang:
        pkg.installed
    
    libclang-dev:
        pkg.installed
    
    {% if not grains['oscodename'] == 'wheezy' %}
    lldb-3.3:
        pkg.installed
    {% endif %}
    

    The {% if %} block specifies that the lldb package is installed only if your Debian release is not the stable one, i.e. jessie/testing or sid in my case. Now, just include this file in the init.sls one:

    # ...
    # at the end of 'cpp/init.sls'
    include:
      - .llvm
    

    Organize your sls files according to your needs. That's all for package installation. Your Salt configuration now looks like this:

    .
    |-- cpp
    |   |-- init.sls
    |   `-- llvm.sls
    |-- dev
    |   `-- init.sls
    |-- edit
    |   `-- init.sls
    `-- top.sls
    

    Launching Salt

    Start your VM and install a masterless Salt on it (e.g. apt-get install salt-minion). To launch Salt locally on your naked VM, you need to copy your configuration (through scp or a DVCS) into the /srv/salt/ directory and to write the file top.sls:

    base:
      '*':
        - dev
        - edit
        - cpp
    

    Then just launch:

    > salt-call --local state.highstate
    

    as root.

    And What About Configuration Files?

    You're right. At the beginning of the post, I talked about a "ready to use" Mercurial with some HG extensions. So I copy the default /etc/mercurial/hgrc.d/hgext.rc file into the dev directory of my Salt configuration and edit it to enable some extensions such as color, rebase and pager. As I also need Evolve, I have to clone the source code from https://bitbucket.org/marmoute/mutable-history. With Salt, I can say "clone this repo and copy this file to that place".

    So, I add some lines to dev/init.sls.

    https://bitbucket.org/marmoute/mutable-history:
        hg.latest:
          - rev: tip
          - target: /opt/local/mutable-history
          - require:
             - pkg: mercurial
    
    /etc/mercurial/hgrc.d/hgext.rc:
        file.managed:
          - source: salt://dev/hgext.rc
          - user: root
          - group: root
          - mode: 644
    

    The require keyword means "install (if necessary) this target before cloning". The other lines are quite self-explanatory.

    In the end, you have just six files with a few lines. Your configuration now looks like:

    .
    |-- cpp
    |   |-- init.sls
    |   `-- llvm.sls
    |-- dev
    |   |-- hgext.rc
    |   `-- init.sls
    |-- edit
    |   `-- init.sls
    `-- top.sls
    

    You can customize it and share it with your teammates. A step further would be to add some configuration files for your favorite editor. You could also install extra packages that your library depends on: simply add an amazing_lib subdirectory and write your own init.sls. I know I often need the Boost libraries, for example. When your Salt configuration has changed, just type salt-call --local state.highstate again.

    As you can see, setting up your environment on a fresh system will take you only a couple of shell commands before you are ready to compile your C++ library, debug it, fix it and commit your modifications to your repository.


  • What's New in Pandas 0.13?

    2014/01/19 by Damien Garaud
    http://www.logilab.org/file/203841/raw/pandas_logo.png

    Do you know pandas, a Python library for data analysis? Version 0.13 came out on January the 16th and this post describes a few new features and improvements that I think are important.

    Each release has its list of bug fixes and API changes. You may read the full release note if you want all the details, but I will just focus on a few things.

    You may be interested in one of my previous blog posts, which showed a few useful Pandas features with datasets from the Quandl website and came with an IPython Notebook for reproducing the results.

    Let's talk about some new and improved Pandas features. I assume that you have some knowledge of Pandas features and main objects such as Series and DataFrame. If not, I suggest you watch the tutorial video by Wes McKinney on the main page of the project or read 10 Minutes to Pandas in the documentation.

    Refactoring

    I welcome the refactoring effort: the Series type, which used to subclass ndarray, now has the same base class as DataFrame and Panel, i.e. NDFrame. This work unifies methods and behaviors for these classes. Be aware that you can hit two potential incompatibilities with versions earlier than 0.13. See internal refactoring for more details.

    Timeseries

    to_timedelta()

    The pd.to_timedelta function converts a string, scalar or array of strings to a Numpy timedelta type (np.timedelta64, in nanoseconds). It requires Numpy >= 1.7. You can handle an array of timedeltas and divide it by another timedelta to carry out a frequency conversion.

    from datetime import timedelta
    import numpy as np
    import pandas as pd
    
    # Create a Series of timedelta from two DatetimeIndex.
    dr1 = pd.date_range('2013/06/23', periods=5)
    dr2 = pd.date_range('2013/07/17', periods=5)
    td = pd.Series(dr2) - pd.Series(dr1)
    
    # Set some Na{N,T} values.
    td[2] -= np.timedelta64(timedelta(minutes=10, seconds=7))
    td[3] = np.nan
    td[4] += np.timedelta64(timedelta(hours=14, minutes=33))
    td
    
    0   24 days, 00:00:00
    1   24 days, 00:00:00
    2   23 days, 23:49:53
    3                 NaT
    4   24 days, 14:33:00
    dtype: timedelta64[ns]
    

    Note the NaT type (instead of the well-known NaN). For day conversion:

    td / np.timedelta64(1, 'D')
    
    0    24.000000
    1    24.000000
    2    23.992975
    3          NaN
    4    24.606250
    dtype: float64
    

    You can also use the DateOffSet as:

    td + pd.offsets.Minute(10) - pd.offsets.Second(7) + pd.offsets.Milli(102)
    

    Nanosecond Time

    Nanosecond times are now supported as an offset; see pd.offsets.Nano. You can use the alias N for this offset as the value of the freq argument of the pd.date_range function.
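
    A quick sketch of what this looks like, assuming the N frequency alias (output not shown):

    import pandas as pd

    pd.offsets.Nano(5)                                 # a 5-nanosecond offset
    pd.date_range('2014-01-01', periods=5, freq='5N')  # nanosecond-spaced index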

    Daylight Savings

    The tz_localize method can now infer a fall daylight savings transition based on the structure of the unlocalized data. This method, like tz_convert, is available for any DatetimeIndex, Series or DataFrame with a DatetimeIndex. You can use it to localize your datasets thanks to the pytz module, or to convert your timeseries to a different time zone. See the related documentation about time zone handling. To use daylight savings inference in tz_localize, set the infer_dst argument to True.
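
    A minimal sketch of the inference (the dates are assumed for the example; in Paris the clocks went back on October 27th, 2013, so 02:30 local time occurred twice that night):

    import pandas as pd

    # Naive (unlocalized) timestamps around the fall transition, in order.
    idx = pd.DatetimeIndex(['2013-10-27 01:30', '2013-10-27 02:30',
                            '2013-10-27 02:30', '2013-10-27 03:30'])
    # infer_dst=True lets tz_localize resolve the repeated hour from the ordering.
    localized = idx.tz_localize('Europe/Paris', infer_dst=True)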

    DataFrame Features

    New Method isin()

    The new DataFrame method isin is used for boolean indexing. The argument to this method can be another DataFrame, a Series, a dictionary or a list of values. Comparing two DataFrames with isin is equivalent to doing df1 == df2. But you can also check whether values from a list occur in any column, or whether some values occur in a few specific columns of the DataFrame (i.e. using a dict instead of a list as argument):

    df = pd.DataFrame({'A': [3, 4, 2, 5],
                       'Q': ['f', 'e', 'd', 'c'],
                       'X': [1.2, 3.4, -5.4, 3.0]})
    
       A  Q    X
    0  3  f  1.2
    1  4  e  3.4
    2  2  d -5.4
    3  5  c  3.0
    

    and then:

    df.isin(['f', 1.2, 3.0, 5, 2, 'd'])
    
           A      Q      X
    0   True   True   True
    1  False  False  False
    2   True   True  False
    3   True  False   True
    

    Of course, you can use the previous result as a mask for the current DataFrame.

    mask = _
    df[mask.any(1)]
    
          A  Q    X
       0  3  f  1.2
       2  2  d -5.4
       3  5  c  3.0
    

    When you pass a dictionary to the isin method, you can specify the column labels for each value.
    
    mask = df.isin({'A': [2, 3, 5], 'Q': ['d', 'c', 'e'], 'X': [1.2, -5.4]})
    df[mask]
    
        A    Q    X
    0   3  NaN  1.2
    1 NaN    e  NaN
    2   2    d -5.4
    3   5    c  NaN
    

    See the related documentation for more details or different examples.

    New Method str.extract

    The new vectorized extract method comes from the StringMethods object, available through the str accessor on Series. It makes it possible to extract data using regular expressions, as follows:

    s = pd.Series(['doe@umail.com', 'nobody@post.org', 'wrong.mail', 'pandas@pydata.org', ''])
    # Extract usernames.
    s.str.extract(r'(\w+)@\w+\.\w+')
    

    returns:

    0       doe
    1    nobody
    2       NaN
    3    pandas
    4       NaN
    dtype: object
    

    Note that the result is a Series holding the matched groups (NaN when nothing matches). You can also add more groups:

    # Extract usernames and domain.
    s.str.extract(r'(\w+)@(\w+\.\w+)')
    
            0           1
    0     doe   umail.com
    1  nobody    post.org
    2     NaN         NaN
    3  pandas  pydata.org
    4     NaN         NaN
    

    Elements that do not match return NaN. You can also use named groups, which is useful if you want more explicit column names (the NaN rows are also dropped in the following example):

    # Extract usernames and domain with named groups.
    s.str.extract(r'(?P<user>\w+)@(?P<at>\w+\.\w+)').dropna()
    
         user          at
    0     doe   umail.com
    1  nobody    post.org
    3  pandas  pydata.org
    

    Thanks to this part of the documentation, I also found out about other useful string methods such as split, strip, replace, etc. for handling a Series of str, for instance. Note that most of them have been available since 0.8.1. Take a look at the string handling API doc (recently added) and some basics about vectorized string methods.

    Interpolation Methods

    DataFrame has a new interpolate method, similar to that of Series. It was possible to interpolate missing data in a DataFrame before, but it did not take the dates into account if you had a time series index. Now, it is possible to pass a specific interpolation method to the method argument. You can use scipy interpolation functions such as slinear, quadratic, polynomial, and others. The time method takes your time series index into account.

    from datetime import date
    # Arbitrary timeseries
    ts = pd.DatetimeIndex([date(2006,5,2), date(2006,12,23), date(2007,4,13),
                           date(2007,6,14), date(2008,8,31)])
    df = pd.DataFrame(np.random.randn(5, 2), index=ts, columns=['X', 'Z'])
    # Fill the DataFrame with missing values.
    df['X'].iloc[[1, -1]] = np.nan
    df['Z'].iloc[3] = np.nan
    df
    
                       X         Z
    2006-05-02  0.104836 -0.078031
    2006-12-23       NaN -0.589680
    2007-04-13 -1.751863  0.543744
    2007-06-14  1.210980       NaN
    2008-08-31       NaN  0.566205
    

    Without any optional argument, you have:

    df.interpolate()
    
                       X         Z
    2006-05-02  0.104836 -0.078031
    2006-12-23 -0.823514 -0.589680
    2007-04-13 -1.751863  0.543744
    2007-06-14  1.210980  0.554975
    2008-08-31  1.210980  0.566205
    

    With the time method, you obtain:

    df.interpolate(method='time')
    
                       X         Z
    2006-05-02  0.104836 -0.078031
    2006-12-23 -1.156217 -0.589680
    2007-04-13 -1.751863  0.543744
    2007-06-14  1.210980  0.546496
    2008-08-31  1.210980  0.566205
    

    I suggest you read more examples in the missing data section of the documentation and in the scipy documentation about the interpolate module.

    Misc

    A Series can be converted to a single-column DataFrame with its to_frame method.
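
    A one-liner sketch:

    import pandas as pd

    s = pd.Series([1, 2, 3], name='values')
    df = s.to_frame()   # single-column DataFrame, with 'values' as column label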

    Misc & Experimental Features

    Retrieve R Datasets

    Not a killer feature, but very pleasant: the ability to load into a DataFrame all the R datasets listed at http://stat.ethz.ch/R-manual/R-devel/library/datasets/html/00Index.html

    import pandas.rpy.common as com
    titanic = com.load_data('Titanic')
    titanic.head()
    
      Survived    Age     Sex Class value
    0       No  Child    Male   1st   0.0
    1       No  Child    Male   2nd   0.0
    2       No  Child    Male   3rd  35.0
    3       No  Child    Male  Crew   0.0
    4       No  Child  Female   1st   0.0
    

    for the dataset about the survival of passengers on the Titanic. You can find various other datasets about New York air quality measurements, body temperature series of two beavers, plant growth results or violent crime rates by US state, for instance. Very useful if you would like to show pandas to a friend, a colleague or your Grandma and you do not have a dataset at hand.

    And then three great experimental features.

    Eval and Query Experimental Features

    The eval and query methods use numexpr, which can quickly evaluate array expressions such as x - 0.5 * y (for numexpr, x and y are Numpy arrays). You can use this powerful feature in pandas to evaluate expressions involving different DataFrame columns. By the way, we already talked about numexpr a few years ago in EuroScipy 09: Need for Speed.

    df = pd.DataFrame(np.random.randn(10, 3), columns=['x', 'y', 'z'])
    df.head()
    
              x         y         z
    0 -0.617131  0.460250 -0.202790
    1 -1.943937  0.682401 -0.335515
    2  1.139353  0.461892  1.055904
    3 -1.441968  0.477755  0.076249
    4 -0.375609 -1.338211 -0.852466
    
    df.eval('x + 0.5 * y - z').head()
    
    0   -0.184217
    1   -1.267222
    2    0.314395
    3   -1.279340
    4   -0.192248
    dtype: float64
    

    About the query method, you can select elements using a very simple query syntax.

    df.query('x >= y > z')
    
              x         y         z
    9  2.560888 -0.827737 -1.326839
    

    msgpack Serialization

    There are new reading and writing functions to serialize your data with the great and well-known msgpack library. Note that this experimental feature does not have a stable storage format yet. You could imagine using zmq to transfer msgpack-serialized pandas objects over TCP, IPC or SSH, for instance.
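
    A minimal round-trip sketch (the file name is arbitrary):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.randn(4, 2), columns=['a', 'b'])
    df.to_msgpack('frame.msg')           # write (experimental storage format)
    df2 = pd.read_msgpack('frame.msg')   # read it back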

    Google BigQuery

    A recent module, pandas.io.gbq, provides a way to load datasets into and extract them from the Google BigQuery Web service. I have not installed the requirements for this feature yet. The example in the release notes shows how you can select the average monthly temperature in the year 2000 across the USA. You can also read the related pandas documentation. Nevertheless, you will need a BigQuery account, as with the other Google products.

    Take Your Keyboard

    Give it a try, play with some data, mangle and plot them, compute some stats, retrieve some patterns or whatever. I'm convinced that pandas will be used more and more, and not only by data scientists or quantitative analysts. Open an IPython Notebook, pick up some data and let yourself be tempted by pandas.

    I think I will make more use of the vectorized string methods that I found out about while writing this post. I'm glad to have learned more about timeseries, because I know that I'll use these features. And I'm looking forward to using the experimental features such as eval/query and msgpack serialization.

    You can follow me on Twitter (@jazzydag). See also Logilab (@logilab_org).


  • Pylint 1.1 christmas release

    2013/12/24 by Sylvain Thenault

    Pylint 1.1 eventually got released on pypi!

    A lot of work has been achieved since the 1.0 release. Various people have contributed several new checks as well as various bug fixes and other enhancements.

    Here is a summary of the changes; check the changelog for more info.

    New checks:

    • 'deprecated-pragma', for use of the deprecated pragma directives "pylint:disable-msg" or "pylint:enable-msg" (previously emitted as a regular warning).
    • 'superfluous-parens' for unnecessary parentheses after certain keywords.
    • 'bad-context-manager' checking that '__exit__' special method accepts the right number of arguments.
    • 'raising-non-exception' / 'catching-non-exception' when raising/catching a class that does not inherit from BaseException
    • 'non-iterator-returned' for non-iterators returned by '__iter__'.
    • 'unpacking-non-sequence' for unpacking non-sequences in assignments and 'unbalanced-tuple-unpacking' when left-hand-side size doesn't match right-hand-side.
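
    As an illustration, here is a small made-up snippet showing the kind of code some of these new checks would flag:

    class NotAnError(object):
        """Deliberately does not derive from BaseException."""

    def check(flag):
        if (flag):                # superfluous-parens: needless parentheses after 'if'
            raise NotAnError()    # raising-non-exception
        left, right = (1, 2, 3)   # unbalanced-tuple-unpacking: 2 names for 3 values
        return left, right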

    Command line:

    • New option for the multi-statement warning to allow single-line if statements.
    • Allow running pylint as a python module: 'python -m pylint' (anatoly techtonik).
    • Various fixes to epylint

    Bug fixes:

    • Avoid false used-before-assignment for except handler defined identifier used on the same line (#111).
    • 'useless-else-on-loop' is not emitted if there is a break in the else clause of the inner loop (#117).
    • Drop 'badly-implemented-container' which caused several problems in its current implementation.
    • Don't mark input as a bad function when using python3 (#110).
    • Use attribute regexp for properties in python3, as in python2
    • Fix false-positive 'trailing-whitespace' on Windows (#55)

    Other:

    • Replaced regexp based format checker by a more powerful (and nit-picky) parser, combining 'no-space-after-operator', 'no-space-after-comma' and 'no-space-before-operator' into a new warning 'bad-whitespace'.
    • Create the PYLINTHOME directory when needed; otherwise it might fail and lead to spurious warnings on import of pylint.config.
    • Fix setup.py so that pylint installs properly on Windows when using python3.
    • Various documentation fixes and enhancements

    Packages will be available in Logilab's Debian and Ubuntu repository in the next few weeks.

    Happy Christmas!


  • SaltStack Paris Meetup on Feb 6th, 2014 - (S01E02)

    2013/12/20 by Nicolas Chauvat

    Logilab has set up the second meetup for salt users in Paris on Feb 6th, 2014 at IRILL, near Place d'Italie, starting at 18:00. The address is 23 avenue d'Italie, 75013 Paris.

    Here is the announcement in French: http://www.logilab.fr/blogentry/1981

    Please forward it to whoever may be interested, underlining that pizzas will be offered to refuel the chatters ;)

    Conveniently placed a week after the Salt Conference, topics will include anything related to salt and its uses, demos, new ideas, exchange of salt formulas, commenting the talks/videos of the saltconf, etc.

    If you are interested in Salt, Python and Devops and will be in Paris at that time, we hope to see you there!


  • A quick take on continuous integration services for Bitbucket

    2013/12/19 by Sylvain Thenault

    Some time ago, we moved Pylint from this forge to Bitbucket (more on this here).

    https://bitbucket-assetroot.s3.amazonaws.com/c/photos/2012/Oct/11/master-logo-2562750429-5_avatar.png

    Since then, I have somewhat continued to use the continuous integration (CI) service we provide on logilab.org to run tests on new commits, and to do the release job (publish a tarball on pypi and on our web site, build Debian and Ubuntu packages, etc.). This is fine, but not really handy since logilab.org's CI service is not designed to be used for projects hosted elsewhere. Also I wanted to see what others have to offer, so I decided to find a public CI service to host Pylint and Astroid automatic tests at least.

    Here are the results of my first swing at it. If you have others suggestions, some configuration proposal or whatever, please comment.

    First, here are the ones I didn't test along with why:

    The first one I actually tested, also the first one to show up when looking for "bitbucket continuous integration" on Google is https://drone.io. The UI is really simple, I was able to set up tests for Pylint in a matter of minutes: https://drone.io/bitbucket.org/logilab/pylint. Tests are automatically launched when a new commit is pushed to Pylint's Bitbucket repository and that setup was done automatically.

    Trying to push Drone.io further, one missing feature is the ability to have different settings for my project, e.g. to launch tests on all the Python flavors officially supported by Pylint (2.5, 2.6, 2.7, 3.2, 3.3, pypy, jython, etc.). Last but not least, the missing killer feature I want is the ability to launch tests on top of pull requests, which travis-ci supports.

    Then I gave http://wercker.com a shot, but got stuck at the Bitbucket repository selection screen: none were displayed. Maybe because I don't own Pylint's repository, I'm only part of the admin/dev team? Anyway, wercker seems appealing too, though the configuration using yaml looks a bit more complicated than drone.io's, but as I was not able to test it further, there's not much else to say.

    https://www.logilab.org/file/4758432/raw/wercker.png

    So for now the winner is https://drone.io, but the first one allowing me to test on several Python versions and to launch tests on pull requests will be the definitive winner! Bonus points for automating the release process and checking test coverage on pull requests as well.

    https://drone.io/drone3000/images/alien-zap-header.png
