Logilab.org - en

News from Logilab and our Free Software projects, as well as on topics dear to our hearts (Python, Debian, Linux, the semantic web, scientific computing...)

  • Retrieve Quandl's Data and Play with Pandas

    2013/10/31 by Damien Garaud

    This post deals with the pandas Python library, the open and free access to timeseries datasets offered by the Quandl website, and how to handle these datasets efficiently with pandas.

    http://www.logilab.org/file/186707/raw/scrabble_data.jpg http://www.logilab.org/file/186708/raw/pandas_peluche.jpg

    Why this post?

    For a long time I have wanted to play a little with pandas. Not the adorable black and white teddy bear, but the well-known Python data library based on NumPy. I would like to show how easily you can retrieve numerical datasets from the Quandl website and its API, and handle them efficiently with pandas through its main object: the DataFrame.

    Note that this blog post comes with an IPython Notebook which can be found at http://nbviewer.ipython.org/url/www.logilab.org/file/187482/raw/quandl-data-with-pandas.ipynb

    You can also get it from http://hg.logilab.org/users/dag/blog/2013/quandl-data-pandas/ with Mercurial.

    Just do:

    hg clone http://hg.logilab.org/users/dag/blog/2013/quandl-data-pandas/
    

    to get the IPython Notebook, its HTML conversion and some related CSV files.

    First Step: Get the Code

    At work or at home, I use Debian. A quick and dumb apt-get install python-pandas is enough. Nevertheless, (1) I'm keen on having fresh, bleeding-edge upstream sources to get the latest features and (2) I'm trying to contribute a little to the project --- tiny bugs, writing some docs. So I prefer to install it from source. Thus, I pull, I run sudo python setup.py develop and, a few seconds of Cython compilation later, I can do:

    import pandas as pd
    

    For other ways to get the library, see the download page on the official website or the dedicated PyPI page.

    Let's build 10 Brownian motions and plot them with matplotlib.

    import numpy as np
    pd.DataFrame(np.random.randn(120, 10).cumsum(axis=0)).plot()
    

    I don't much like the default fonts and colors of the matplotlib figures and curves. I know that pandas defines an "mpl style". Just after the import, you can write:

    pd.options.display.mpl_style = 'default'
    
    http://www.logilab.org/file/186714/raw/Ten-Brownian-Motions.png

    Second Step: Have You Got Some Data, Please?

    Maybe I'm wrong, but I find it sometimes quite difficult to retrieve workable numerical datasets in the huge amount of data available on the Web. Free Data, Open Data and so on. OK folks, where are they? I don't want to spend my time trawling through an Open Data website, finding interesting issues, parsing an Excel file, extracting specific data and mangling it into a 2D array of floats with labels. Note that pandas deals with these kinds of problems very well; see the IO part of the pandas documentation: CSV, Excel, JSON and HDF5 reading/writing functions. I just want workable numerical data without effort.

    A few days ago, a colleague of mine told me about Quandl, a website dedicated to finding and using numerical timeseries datasets on the Internet. A perfect source to retrieve some data and play with pandas. You can access data about economics, health, population, education etc. thanks to a clever API: get datasets in CSV/XML/JSON formats between two dates, aggregate them, compute differences, etc.
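
    For instance, fetching a dataset between two dates boils down to a single HTTP request; the dataset code and dates below only illustrate the v1 API:

    http://www.quandl.com/api/v1/datasets/GOOG/NYSE_IBM.csv?trim_start=2013-01-01&trim_end=2013-10-01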

    Moreover, you can access Quandl's datasets through several programming languages, like R, Julia, Clojure or Python (plugins or modules are also available for software such as Excel, Stata, etc.). The Quandl Python package depends on NumPy and pandas. Perfect! I can use the Quandl.py module available on GitHub and query some datasets directly into a DataFrame.

    Here we are, huge amounts of data are teasing me. Next question: which data to play with?

    Third Step: Give Some Food to Pandas

    I've already imported the pandas library. Let's query some datasets thanks to the Quandl Python module, with an example inspired by the README from Quandl's GitHub project page.

    import Quandl
    data = Quandl.get('GOOG/NYSE_IBM')
    data.tail()
    

    and you get:

                  Open    High     Low   Close    Volume
    Date
    2013-10-11  185.25  186.23  184.12  186.16   3232828
    2013-10-14  185.41  186.99  184.42  186.97   2663207
    2013-10-15  185.74  185.94  184.22  184.66   3367275
    2013-10-16  185.42  186.73  184.99  186.73   6717979
    2013-10-17  173.84  177.00  172.57  174.83  22368939
    

    OK, I'm not very familiar with this kind of data. Let's take a look at the Quandl website. After a dozen minutes there, I found these OECD murder rates. This page shows current and historical murder rates (assault deaths per 100,000 people) for 33 countries from the OECD. Take a country and type:

    uk_df = Quandl.get('OECD/HEALTH_STAT_CICDHOCD_TXCMILTX_GBR')
    

    You get a DataFrame with a single column 'Value', indexed by a timeseries. You can easily plot the data with:

    uk_df.plot()
    
    http://www.logilab.org/file/186711/raw/GBR-oecd-murder-rates.png

    See the other pieces of code and usage examples in the dedicated IPython Notebook. I also retrieved OECD unemployment data for roughly the same countries, over more dates. Then, as I wanted to compare these datasets, I had to select similar countries, resample my data to the same frequency and so on. Take a look. Any comments are welcome.

    So, the remaining content of this blog post is just a summary of a few interesting and useful pandas features used in the IPython Notebook, with a small sketch after the list.

    • Using the timeseries as Index of my DataFrames
    • pd.concat to concatenate several DataFrames along a given axis. This function can deal with missing values if the Indexes of the DataFrames are not identical (which is my case)
    • DataFrame.to_csv and pd.read_csv to dump/load your data to/from CSV files. read_csv takes various arguments to deal with dates, missing values, headers & footers, etc.
    • The DateOffset pandas object to deal with different time frequencies. Quite useful if you handle data with calendar or business days, month end or begin, quarter end or begin, etc.
    • Resampling data with the resample method. I use it for frequency conversion of timeseries data.
    • Merging/joining DataFrames, quite similar to the SQL feature. See the pd.merge function or the DataFrame.join method. I used this feature to align my two DataFrames along their Index.
    • Some matplotlib plotting functions such as DataFrame.plot() and plot(kind='bar').
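
    As a hedged illustration of a few of these features (the column names and values are invented):

    import pandas as pd
    import numpy as np

    # two timeseries at different frequencies
    monthly = pd.DataFrame({'unemployment': np.random.rand(120)},
                           index=pd.date_range('2000-01-31', periods=120, freq='M'))
    yearly = pd.DataFrame({'murder_rate': np.random.rand(10)},
                          index=pd.date_range('2000-12-31', periods=10, freq='A'))

    # frequency conversion: average the monthly values over each year
    # (with pandas >= 0.18 you would write monthly.resample('A').mean())
    yearly_unemployment = monthly.resample('A', how='mean')

    # align the two DataFrames along their Index; missing dates become NaN
    both = pd.concat([yearly, yearly_unemployment], axis=1)

    # or, SQL-like, keep only the common dates
    common = yearly.join(yearly_unemployment, how='inner')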

    Conclusion

    I showed a few useful pandas features in the IPython Notebook: concatenation, plotting, data computation, data alignment. I think I can show more, but that will have to wait for a future blog post. Any comments, suggestions or questions are welcome.

    The next 0.13 pandas release should be coming soon. I'll write a short blog post about it in a few days.


  • SaltStack Paris Meetup - some of what was said

    2013/10/09 by Arthur Lutz

    Last week, on the first day of OpenWorldForum 2013, we met up with Thomas Hatch of SaltStack to have a chat about salt. He was in Paris to give two talks the following day (1 & 2), and it was a great opportunity to meet him and part of the French Salt community in person. Since Logilab hosted the Great Salt Sprint in Paris, we offered to co-organise the meetup at OpenWorldForum.

    http://saltstack.com/images/SaltStack-Logo.png http://openworldforum.org/static/pictures/Calque1.png

    Introduction

    About 15 people gathered in Montrouge (near Paris) and we all took turns presenting ourselves and how or why we use salt. Some people wanted to migrate from BCFG2 to salt. Some told the story of working a month with CFEngine, achieving the same functionality in two days with salt, and deciding to go for salt instead. Some like salt because they can hack its Python code. Some use salt to provision pre-defined AMI images for the clouds (salt-ami-cloud-builder). Some chose salt over Ansible. Some want to use salt to pilot temporary computation clusters in the cloud (sort of like what StarCluster does with boto and ssh).

    When Paul from Logilab introduced salt-ami-cloud-builder, Thomas Hatch said that some work is being done to go all the way: build an image from scratch from a state definition. On the question of Debian packaging, some effort could be made to get salt into wheezy-backports. Julien Cristau from Logilab, who is a Debian developer, might help with that.

    Some previously untold stories were shared: companies that replaced puppet with salt, companies that use salt to control an HPC cluster, companies that use salt to pilot their existing puppet system.

    We had some discussions around salt-cloud, which will probably be merged into salt at some point. One idea for salt-cloud was raised: have a way of defining a "minimum" type of configuration which translates into the profiles according to which provider is used (an issue should be added shortly). The expression "pushing states" was often used; it is probably a good way of looking at the combination of salt-cloud and the masterless mode available with salt-ssh. salt-cloud controls an existing cloud, but Thomas Hatch pointed out that with salt-virt, salt is becoming a cloud controller itself; more on that soon.

    Mixing pillar definition between 'public' and 'private' definitions can be tricky. Some solutions exist with multiple gitfs (or mercurial) external pillar definitions, but more use cases will drive more flexible functionalities in the future.

    Presentation and live demo

    For those in the audience who were not (yet) users of salt, Thomas went back to explaining a few basics about it. Salt should be seen as a "toolkit to solve problems in an infrastructure", says Thomas Hatch. Why is it fast? Because it is completely asynchronous and event-driven.

    He gave a quick presentation of the new salt-ssh introduced in 0.17, which allows the application of salt recipes to machines that don't have a minion connected to the master.
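
    As a quick hedged sketch (the host name, address and user are invented), salt-ssh takes its targets from a roster file and then applies states over plain SSH:

    # /etc/salt/roster -- targets for salt-ssh, no minion required
    web1:
      host: 192.168.1.10
      user: admin
      sudo: True

    # apply the highstate to that machine over ssh
    salt-ssh 'web1' state.highstate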

    Peer communication can be used to condition a state on the presence of a service on a different minion.

    While doing demos or even hacking on salt, one can use salt/test/minionswarm.py, which spawns fake minions; not everyone has hundreds of servers at their fingertips.

    Modules are loaded dynamically and smartly: for example, the git module gets loaded if a state installs git and the same highstate then uses the git module.

    Thomas explained the difference between grains and pillars: grains are data about a minion that live on the minion, pillar is data about the minion that lives on the master. When handling grains, grains.setval can be useful (it writes to /etc/salt/grains as YAML, so you can edit it separately). If a minion is not reachable, one can obtain its cached grains information by replacing test=True with cache=True.
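
    A few hedged command-line examples (the minion id and grain name are invented):

    # a grain lives on the minion; grains.setval writes it to /etc/salt/grains
    salt 'web01' grains.setval role webserver

    # read it back from the minion
    salt 'web01' grains.item role

    # pillar data lives on the master and is rendered per minion
    salt 'web01' pillar.item role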

    Thomas briefly presented saltstack-formulas: people want to "program" their states, and formulas answer this need, though some of the jinja2 becomes overly complicated in making them flexible and programmable.

    While talking about the unified package commands (a salt command often has various backends according to which system runs the minion), for example salt-call --local pkg.install vim, Thomas told a funny story: ironically, salt was nominated for "best package manager" in some Linux magazine competition (so you don't have to learn how to use FreeBSD packaging tools).

    While hacking salt, one can take a look at the Event Bus (see test/eventlisten.py); many applications are possible using the data on this bus. Thomas talked about a future IOflow Python module where complex logic could be implemented in the reactor with rules and a state machine. One example use would be: if the load is high on X servers and the number of connections on those servers is Y, then launch extra machines.
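
    The existing reactor already lets you bind events on the bus to actions. A minimal hypothetical wiring in the master config (the event tag and sls file are invented):

    # /etc/salt/master
    reactor:
      - 'mycompany/load/high':
        - /srv/reactor/spawn_extra_machines.sls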

    To finish on a buzzword, someone asked: "what is the overlap of salt and docker?" The answer is not simple, but Thomas thinks that in the long run there will be a lot of overlap; one can check out the existing lxc modules and states.

    Wrap up

    To wrap up, Thomas announced a salt conference planned for January 2014 in Salt Lake City.

    Logilab proposes to bootstrap the French community around salt. As the group suggested, this could take the form of a mailing list, an IRC channel, a meetup group, some sprints, or a combination of all the above. On that note, the next international sprint will probably take place in January 2014 around the salt conference.


  • Setup your project with cloudenvy and OpenStack

    2013/10/03 by Arthur Lutz

    One nice way of having a reproducible development or test environment is to "program" a virtual machine to do the job. If you have a powerful machine at hand you might use Vagrant in combination with VirtualBox. But if you have an OpenStack setup at hand (which is our case), you might want to set up and destroy your virtual machines on such a private cloud (or public cloud if you want or can). Sure, Vagrant has some plugins that should add OpenStack as a provider, but here at Logilab we have a clear preference for Python over Ruby. This is where cloudenvy comes into play.

    http://www.openstack.org/themes/openstack/images/open-stack-cloud-computing-logo-2.png

    Cloudenvy is written in Python and, with some simple YAML configuration, can help you set up and provision virtual machines that contain your tests or your development environment.

    http://www.python.org/images/python-logo.gif

    Set up your authentication in ~/.cloudenvy.yml:

    cloudenvy:
      clouds:
        cloud01:
          os_username: username
          os_password: password
          os_tenant_name: tenant_name
          os_auth_url: http://keystone.example.com:5000/v2.0/
    

    Then create an Envyfile.yml at the root of your project:

    project_config:
      name: foo
      image: debian-wheezy-x64
    
      # Optional
      #remote_user: ec2-user
      #flavor_name: m1.small
      #auto_provision: False
      #provision_scripts:
        #- provision_script.sh
      #files:
        # files copied from your host to the VM
        #local_file : destination
    

    Now simply type envy up. Cloudenvy does the rest. It "simply" creates your machine, copies the files, runs your provision script and gives you its IP address. You can then run envy ssh if you don't want to be bothered with IP addresses and such nonsense (forget about copy and paste from the OpenStack web interface, or your nova show commands).
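
    The day-to-day loop thus boils down to a handful of commands:

    envy up        # create the VM, copy the files, run the provision scripts
    envy ssh       # log into the machine, no IP address juggling
    envy destroy   # scratch the environment to start afresh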

    Little added bonus: you know your machine will run a web server on port 8080 at some point, so set it up in your environment by defining your access rules in the same Envyfile.yml:

    sec_groups: [
        'tcp, 22, 22, 0.0.0.0/0',
        'tcp, 80, 80, 0.0.0.0/0',
        'tcp, 8080, 8080, 0.0.0.0/0',
      ]
    

    As you might know (or I'll just recommend it), you should be able to scratch and restart your environment without losing anything, so once in a while you'll just run envy destroy to do so. If you want multiple VMs with the same specs, go for envy up -n second-machine.

    The only downside right now: cloudenvy isn't packaged for Debian (which is usually a prerequisite for the tools we use), but let's hope it gets packaged soon (or maybe we'll end up doing it).

    Don't forget to include this configuration in your project's version control so that a colleague starting on the project can just type envy up and have a working setup.

    Along the same lines, we've been trying out salt-cloud <https://github.com/saltstack/salt-cloud>, because provisioning machines with SaltStack is the way forward. A blog post about this is coming next.


  • DebConf13 report

    2013/09/25 by Julien Cristau

    As announced before, I spent a week last month in Vaumarcus, Switzerland, attending the 14th Debian conference (DebConf13).

    It was great to be at DebConf again, with lots of people I hadn't seen since New York City three years ago, and lots of new faces. Kudos to the organizers for pulling this off. These events are always a great boost for motivation, even if the amount of free time after coming back home is not quite as copious as I might like.

    One thing that struck me this year was the number of upstream people, not directly involved in Debian, who showed up. From systemd's Lennart and Kay, to MariaDB's Monty, and people from upstart, dracut, phpmyadmin or munin. That was a rather pleasant surprise for me.

    Here's a report on the talks and BoF sessions I attended. It's a bit long, but hey, the conference lasted a week. In addition to those I had quite a few chats with various people, including fellow members of the Debian release team.

    http://debconf13.debconf.org/images/logo.png

    Day 1 (Aug 11)

    Linux kernel: Ben Hutchings made a summary of the features added between 3.2 in wheezy and the current 3.10, and their status in Debian (some still need userspace work).

    SPI status: Bdale Garbee and Jimmy Kaplowitz explained what steps SPI is taking to deal with its growth, including recently getting help from a bookkeeper to relieve the pressure on the (volunteer) treasurer.

    Hardware support in Debian stable: If you buy new hardware today, it's almost certainly not supported by the Debian stable release. Ideas to improve this:

    • backport whole subsystems: probably not feasible, risk of regressions would be too high
    • ship compat-drivers, and have the installer automatically install newer drivers based on PCI ids, seems possible.
    • mesa: have the GL loader pick a different driver based on the hardware, and ship newer DRI drivers for the new hardware, without touching the old ones. Issue: need to update libGL and libglapi too when adding new drivers.
    • X drivers, drm: ? (it's complicated)

    Meeting between release team and DPL to figure out next steps for jessie. Decided to schedule a BoF later in the week.

    Day 2 (Aug 12)

    The Munin project lead presented new features in 2.0 (shipped in wheezy) and the roadmap for 2.2: improvements on the scalability front (both in terms of number of nodes and number of plugins on a node). Future work includes improving the UI to make it less 1990 and moving some metadata to SQL.

    jeb on AWS and Debian: Amazon Web Services (AWS) includes compute (ec2), storage (s3), network (virtual private cloud, load balancing, ...) and other services. It is used by Debian for package rebuilds. http://cloudfront.debian.net is a CDN frontend for archive mirrors. Official Debian images are on ec2, including on the AWS marketplace front page. The build-debian-cloud tool from Anders Ingeman et al. was presented.

    openstack in Debian: Packaging work is focused on making things easy for newcomers, with basic config through debconf. Advanced users are going to use puppet or similar anyway. Essex is in wheezy, but end-of-life upstream. Grizzly is available in sid and in a separate archive for wheezy. This work is sponsored by eNovance.

    Patents: http://patents.stackexchange.com. It looks like the USPTO has used comments made there when rejecting patent applications based on prior art. Patent applications are public, and it's a lot easier to get a patent application rejected than to invalidate a patent later on. Should we use that site? Help build momentum around it? Would other patent offices use that kind of research? Issues: looking at patent applications (and publicly commenting) might mean you're liable for treble damages if the patent is eventually granted. Can you comment anonymously?

    Why systemd?: Lennart and Kay. Popcorn, upstart trolling, nothing really new.

    Day 3 (Aug 13)

    dracut: presented by Harald Hoyer, its main developer. Seems worth investigating as a replacement for initramfs-tools, sharing the maintenance load. Different hooks though, so we'll need to coordinate this with various packages.

    upstart: More Debian-focused than the systemd talk. Not helped by Canonical's CLA...

    dh_busfactor: debhelper has essentially been a one-man show from the beginning, though various packages/people maintain different dh_* tools, either in the debhelper package itself or elsewhere. Joey is thinking about creating a debhelper team including those people. Concerns over increased breakage while people get up to speed (joeyh has 10 years of experience and still occasionally breaks stuff).

    dri3000: Keith is trying to fix dri2 issues. While dri2 fixed a number of things that were wrong with dri1, it still has some problems. One of the goals is to improve presentation: we need a way to sync between app and compositor (to avoid displaying incompletely drawn frames), avoid tearing, and let the app choose an immediate page flip instead of waiting for the next vblank if it missed its target (stutter in games is painful). He described this work on his blog.

    security team BoF: explained the workflow, and tried to improve documentation of the process and what people can do to help. http://security.debian.org/

    Day 4 (Aug 14)

    Day trip, and conference dinner on a boat from Neuchâtel to Vaumarcus.

    Day 5 (Aug 15)

    git-dpm: The speaker spent half an hour explaining git, then was rushed through git-dpm itself. Still, it needs looking at: it lets you work with git and export changes as a quilt series to build a source package.

    Ubuntu daily QA: The goal was to make it possible for Canonical devs (not necessarily people working on the distro) to use ubuntu+1 (the dev release). They tried syncing from testing for a while, but noticed bug fixes being delayed: not good. In the previous workflow the dev release was unusable/uninstallable for the first few months. Multiarch made things even more problematic because it requires amd64/i386 being in sync.

    • 12.04: a bunch of manpower thrown at ubuntu+1 to keep backlog of technical debt under control.
    • 12.10: prepare infrastructure (mostly launchpad), add APIs, to make non-canonical people able to do stuff that previously required shell access on central machines.
    • 13.04: proposed migration. britney is used to migrate packages from devel-proposed to devel. A few teething problems at first, but good reaction.
    • 13.10 and beyond: autopkgtest runs triggered after upload/build, also for rdeps. Phased updates for stable releases (rolled out to a subset of users and then gradually generalized). Hook into errors.ubuntu.com to match new crashes with package uploads. Generally more continuous integration. Better dashboard. (Some of that is still to be done.)

    Lessons learned from Debian:

    • unstable's backlog can get bad → proposed is only used for builds and automated tests, no delay
    • transitions can take weeks at best
    • to avoid dividing human attention, devs are focused on devel, not devel-proposed

    Lessons Debian could learn:

    • keeping testing current is a collective duty/win
    • splitting users between testing and unstable has important costs
    • hooking automated testing into britney is really powerful; there's a small but growing number of automated tests

    Ideas:

    • cut migration delay in half
    • encourage writing autopkgtests
    • end goal: make sid to testing migration entirely based on automated tests

    Debian tests using Jenkins http://jenkins.debian.net

    • https://github.com/h01ger/jenkins-job-builder
    • Only running amd64 right now.
    • Uses jenkins plugins: git, svn, log parser, html publisher, ...
    • Has existing jobs for installer, chroot installs, others
    • Tries to make it easy to reproduce jobs, to allow debugging
    • {c,sh}ould add autopkgtests

    Day 6 (Aug 16)

    X Strike Force BoF: Too many bugs we can't do anything about: {mass,auto}-close them, asking people to report upstream. Reduce distraction by moving the non-X stuff to separate teams (compiz removed instead, wayland to be discussed...). We should keep drivers as close to upstream as possible. A couple of people in the room volunteered to handle the intel, ati and input drivers.

    reclass BoF

    I had missed the talk about reclass, and Martin kindly offered to give a followup BoF to show what reclass can do.

    Reclass provides adaptors for puppet(?), salt, ansible. A yaml file describes each host:

    • can declare applications and parameters
    • each host is a leaf in a DAG/tree of classes

    This lets you put the data in reclass instead of the config management tool, keeping generic templates in ansible/salt.
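
    A minimal sketch of what such a node definition might look like (the class, application and parameter names are invented):

    # nodes/web01.example.org.yml
    classes:
      - common
      - webserver
    applications:
      - nginx
    parameters:
      nginx:
        worker_processes: 4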

    I'm definitely going to try this and see if it makes it easier to organize data we're currently putting directly in salt states.

    release BoF: Notes are on http://gobby.debian.org. Basic summary: "Releasing in general is hard. Releasing something as big/diverse/distributed as Debian is even harder." Who knew?

    freedombox: status update from Bdale

    Keith Packard showed off the free software he uses in his and Bdale's rocket adventures.

    This was followed by a birthday party in the evening, as Debian turned 20 years old.

    Day 7 (Aug 17)

    x2go: Notes are on http://gobby.debian.org. To be solved: issues with the nx libs (a GPL fork of old X). Seems like a good thing to try as an alternative to LTSP, which we use at Logilab.

    lightning talks

    • coquelicot (lunar) - one-click secure(ish) file upload web app
    • notmuch (bremner) - need to try that again now that I have slightly more disk space
    • fedmsg (laarmen) - GSoC, message passing inside the debian infrastructure

    Debconf15 bids:

    • Mechelen/Belgium - Wouter
    • Germany (no city yet) - Marga

    Debconf14 presentation: It will be in Portland (Portland State University) next August. Presentation by vorlon, harmoney and keithp. Looking forward to it!

    Closing ceremony

    The videos of most of the talks can be downloaded, thanks to the awesome work of the video team. And if you want to check what I didn't see or talk about, check the complete schedule.


  • JDEV2013 - Software development conference of CNRS

    2013/09/13 by Nicolas Chauvat

    I had the pleasure to be invited to lead a tutorial at JDEV2013 titled Learning TDD and Python in Dojo mode.

    http://www.logilab.org/file/177427/raw/logo_JDEV2013.png

    I quickly introduced the keywords with a single slide to keep it simple:

    http://Python.org
    + Test Driven Development (Test, Code, Refactor)
    + Dojo (house of training: Kata / Randori)
    = Calculators
      - Reverse Polish Notation
      - Formulas with Roman Numbers
      - Formulas with Numbers in letters
    

    As you can see, I had three types of calculators, hence at least three Kata to practice, but as usual with beginners, it took us the whole tutorial to get done with the first one.

    The room was a classroom that we set up as our coding dojo, with the coder and his copilot working on a laptop, facing the rest of the participants, with the large screen at their back. The pair-programmers could freely discuss with the people facing them, who were following the typing on the large screen.

    We switched every ten minutes: the copilot became the coder, the coder went back to his seat in the class, and someone else stood up to become the copilot.

    The session was allocated 3 hours split over two slots of 1h30. It took me less than 10 minutes to open the session with the above slide, 10 minutes as first coder and 10 minutes to close it. Over a time span of 3 hours, that left 150 minutes for coding, hence 15 people. Luckily, the whole group was about that size and almost everyone got a chance to type.

    I completely skipped explaining Python, its syntax and the unittest framework, and we jumped right into writing our first tests with if and print statements. Since they knew other programming languages, they picked up the Python language along the way.

    After more than an hour of slowly discovering Python and TDD, someone in the room realized they had been focusing more on handling exception cases and failures than on implementing the parsing and computation of the formulas, because the specifications were not clearly understood. He then asked me the right question by trying to define Reverse Polish Notation in one sentence and checking that he had got it right.

    Different algorithms to parse and compute RPN formulas were devised at the blackboard during the pause, while part of the group went for a coffee break.

    The implementation took about another hour to get right, with me making sure they would not wander too far from the actual goal. Once the stack-based solution was found and implemented, I asked them to delete the files, switch coder and start again. They had forgotten about the Kata definition and were surprised, but quickly enjoyed it when they realized that progress was much faster on the second attempt.

    Since it is always better to show that you can walk the talk, I closed the session by practicing the RPN calculator kata myself in a bit less than 10 minutes. The order in which to write the tests is the tricky part, because it can easily appear far-fetched for such a small problem when you already know an algorithm that solves it.

    Here it is:

    import operator

    # the binary operators the calculator understands
    OPERATORS = {'+': operator.add,
                 '*': operator.mul,
                 '/': operator.div,  # Python 2; use operator.truediv on Python 3
                 '-': operator.sub,
                 }
    
    def compute(args):
        # evaluate an RPN expression with a stack: operands are pushed,
        # operators pop their two operands and push the result
        items = args.split()
        stack = []
        for item in items:
            if item in OPERATORS:
                b, a = stack.pop(), stack.pop()
                stack.append(OPERATORS[item](a, b))
            else:
                stack.append(int(item))
        return stack[0]
    

    with the accompanying tests:

    import unittest
    from npi import compute
    
    class TestTC(unittest.TestCase):
    
        def test_unit(self):
            self.assertEqual(compute('1'), 1)
    
        def test_dual(self):
            self.assertEqual(compute('1 2 +'), 3)
    
        def test_tri(self):
            self.assertEqual(compute('1 2 3 + +'), 6)
            self.assertEqual(compute('1 2 + 3 +'), 6)
    
        def test_precedence(self):
            self.assertEqual(compute('1 2 + 3 *'), 9)
            self.assertEqual(compute('1 2 * 3 +'), 5)
    
        def test_zerodiv(self):
            self.assertRaises(ZeroDivisionError, compute, '10 0 /')
    
    unittest.main()
    

    Apparently, it did not go too badly, for I had positive comments at the end from people who enjoyed discovering Python, Test Driven Development and the Dojo mode of learning in a single session.

    I had fun doing this tutorial and thank the organizers for this conference!


  • Going to EuroScipy2013

    2013/09/04 by Alain Leufroy

    The EuroScipy2013 conference was held in Brussels at the Université libre de Bruxelles.

    http://www.logilab.org/file/175984/raw/logo-807286783.png

    As usual the first two days were dedicated to tutorials while the last two were dedicated to scientific presentations and general Python-related talks. The meeting was extended by one more day of sprint sessions, during which enthusiasts were able to help free software projects, namely sage, vispy and scipy.

    Jérôme and I had the great opportunity to represent Logilab during the scientific tracks and the sprint day. We enjoyed many talks about scientific applications using Python. We're not going to describe the whole conference; visit the conference website if you want the complete list of talks. In this article we will focus on the ones we found most interesting.

    First of all, the keynote by Cameron Neylon about Network ready research was very interesting. He presented some graphs about the impact of group work on solving complex problems. They revealed that there is a critical network size at which the effectiveness of solving a problem drastically increases. He pointed out that source code accessibility "friction" limits the "getting help" variable. Open sourcing software could be the best way to reduce this "friction", while unit testing and continuous integration are facilitators. And, in general, process reproducibility is very important, not only in computing research. Retrieving experimental settings, metadata and process environment is vital. We agree with this, as we experience it every day in our work. That is why we encourage open source licenses and develop Simulagora (in French), a collaborative platform for distributed simulation traceability and reproducibility.

    Ian Ozsvald's talk dealt with key points and tips from his own experience of growing a business based on open source and Python, as well as mistakes to avoid (e.g. not checking beforehand that there are paying customers interested in what you want to develop). His talk was comprehensive and covered a wide panel of situations.

    http://vispy.org/_static/img/logo.png

    We got a very nice presentation of a young but interesting visualization tool: Vispy. It is 6 months old and the first public release was in early August. It is the result of the merge of 4 separate libraries, oriented toward interactive visualisation (vs. static figure generation for Matplotlib) and using OpenGL on GPUs to avoid CPU overload. A demonstration with large datasets showed vispy displaying millions of points in real time at 40 frames per second. During the talk we got interesting information about OpenGL features like anti-grain, compared to Matplotlib's Agg backend which uses the CPU.

    We also got to learn about cartopy, an open source Python library originally written for weather and climate science. It provides a useful and simple API to manipulate cartographic mappings.

    Distributed computing systems were a hot topic and many talks were related to this theme.

    https://www.openstack.org/themes/openstack/images/openstack-logo-preview-full-color.png

    Gael Varoquaux reminded us of the key problems with "biggish data" and the key points to process it successfully. I think some of his recommendations are generally useful, like "choose simple solutions", "fail gracefully", "make it easy to debug". For big data processing, when I/O is the limiting constraint, first try to split the problem into random fractions of the data, then run algorithms on each and aggregate the results to circumvent this limit. He also presented mini-batch, which takes a bunch of observations (a memory usage/vectorization trade-off), and joblib.parallel, which makes I/O faster using compression (CPUs are faster than disk access).
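
    A hedged sketch of that split-and-aggregate pattern (the computation itself is invented):

    import numpy as np
    from joblib import Parallel, delayed

    def estimate(seed, data):
        # work on a random 10% fraction of the data
        rng = np.random.RandomState(seed)
        sample = data[rng.randint(0, len(data), size=len(data) // 10)]
        return sample.mean()

    data = np.random.randn(10 ** 6)
    # run the fractions in parallel, then aggregate the partial results
    results = Parallel(n_jobs=4)(delayed(estimate)(s, data) for s in range(8))
    print(np.mean(results))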

    Benoit Da Mota talked about shared memory in parallel computing, and Antonio Messina gave us a quick overview of how to build a computing cluster with Elasticluster, using OpenStack/Slurm/ansible. He demonstrated starting and stopping a cluster on OpenStack: once all the VMs are started, ansible configures them as hosts of the cluster, and new VMs can be created and added to the cluster on the fly thanks to a command line interface.

    We also got a keynote by Peter Wang (from Continuum Analytics) about the future of data analysis with Python. As a PhD in physics, I loved his metaphor of giving mass to data. He tried to explain the pain that scientists feel when using databases.

    https://scikits.appspot.com/static/images/scipyshiny_small.png

    After the conference we took part in the numpy/scipy sprint, organized by Ralph Gommers and Pauli Virtanen. There were 18 people trying to close issues of various difficulty levels, and we got a quick tutorial on how easy it is to contribute. The easiest way is to fork the project from its GitHub page onto your own GitHub account (you can create one for free), so that your patch submission later becomes a simple "Pull Request" (PR). Clone your scipy fork locally, and make a new branch (git checkout -b <newbranch>) to tackle one specific issue. Once your patch is ready, commit it locally, push it to your GitHub repository, and from the GitHub interface open a Pull Request. You will be able to add something to your commit message before your PR is sent and looked at by the project's lead developers. For example, using "gh-XXXX" in your commit message will automatically add a link to issue no. XXXX. Here is the list of open issues for scipy; you can filter them, e.g. displaying only the ones considered easy to fix :D
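
    In practice the workflow boils down to a few commands (the account name and issue number are placeholders):

    git clone https://github.com/<your-account>/scipy.git
    cd scipy
    git checkout -b fix-gh-XXXX
    # hack, run the tests, then:
    git commit -am "BUG: short description (gh-XXXX)"
    git push origin fix-gh-XXXX
    # finally, open the Pull Request from the github web interface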

    For more information: Contributing to SciPy.


  • Emacs turned into an IDE with CEDET

    2013/08/29 by Anthony Truchet

    Abstract

    In this post you will find one way, namely thanks to CEDET, of turning your Emacs into an IDE offering features for semantic browsing and refactoring assistance similar to what you can find in major IDEs like Visual Studio or Eclipse.

    Introduction

    Emacs is a tool of choice for the developer: it is very powerful, highly configurable and has a wealth of so-called modes to improve many aspects of daily work, especially when editing code.

    The point, as you might have realised if you have already worked with an IDE like Eclipse or Visual Studio, is that Emacs's (code) browsing abilities are quite rudimentary... at least out of the box!

    In this post I will walk through one way of configuring Emacs + CEDET which works for me. This is by far not the only way to get there, but finding this path required several days of wandering between inconsistent resources, distribution pitfalls and the like.

    I will try to convey the relevant parts of what I have learnt on the way, to warn about some pitfalls, and also to indicate some interesting directions I haven't followed (be it by choice or necessity) and encourage you to try them. Should you push this adventure further, your experience will be very much appreciated... and in any case your feedback on this post is also very welcome.

    The first part gives some background deemed useful to understand what's going on. If you want to go straight to the how-to, please jump directly to the second part.

    Sketch map of the jungle

    This all started because I needed a development environment to work remotely on a big, legacy C++ code base, from quite a lightweight machine over a weak network connection.

    My former habit of using Eclipse CDT and compiling locally was not an option any longer, but I couldn't stick to a bare text editor plus remote compilation either, because of the complexity of the code base. So I googled emacs IDE code browser and started this journey to set up CEDET + ECB...

    I quickly got lost in a jungle of seemingly inconsistent options, and I reckon some background facts are welcome at this point to explain why.

    As of this writing (September 2013), most of the world is in between two major releases of Emacs. Whereas Emacs 23.x is still packaged in many stable Linux distributions, the latest release is Emacs 24.3. In this post we will use Emacs 24.x, which brings lots of improvements, two of which are really relevant to us:

    • the introduction of a package manager, which is great and (but) changes initialisation
    • the partial integration of some version of CEDET into Emacs since version 23.2

    Emacs 24 initialisation

    Very basically, Emacs used to read the user's Emacs config (~/.emacs or ~/.emacs.d/init.el) which was responsible for adapting the load-path and issuing the right (require 'stuff) commands and configuring each library in some appropriate sequence.

    Emacs 24 introduces ELPA, a new package system and official package repository. It can be extended with other package repositories such as Marmalade or MELPA.

    By default in Emacs 24, the initialisation order is a bit more complex due to package loading: the user's config is still read but should NOT require the libraries installed through the package system; those are automatically loaded (the former load-path adjustment and (require 'stuff) steps) after ~/.emacs or ~/.emacs.d/init.el has finished. This makes configuring the loaded libraries much more error-prone, especially for libraries designed to be configured the old way (as of today most libraries, notably CEDET).

    Here is a good analysis of the situation and possible options. For those interested in the details of the new initialisation process, see the relevant sections of the manual.

    I first tried to stick to the new way, setting up hooks in ~/.emacs.d/init.el to be called after loading the various libraries, each library having its own configuration hook, and praying for the interaction between the package manager's load order and my hooks to be OK... in vain. So I ended up forcing the initialisation the old way (see Emacs 24 below).

    What is CEDET?

    CEDET is a Collection of Emacs Development Environment Tools. The major word here is collection, do not expect it to be an integrated environment. The main components of (or coupled with) CEDET are:

    Semantic
    Extracts a common semantic representation from source code in different languages
    (e)ctags / GNU global
    Traditional (exuberant) ctags or GNU global can be used as a source of information for Semantic
    SemanticDB
    SemanticDB provides for caching the outcome of semantic analysis in some database to reduce analysis overhead across several editing sessions
    Emacs Code Browser
    This component uses information provided by Semantic to offer a browsing GUI with windows for traversing files, classes, dependencies and the like
    EDE
    This provides a notion of project analogous to most IDEs. Even if the features related to building projects are very Emacs/Linux/Autotools-centric (and thus not necessarily very helpful depending on your project setup), the main point of EDE is providing scoping of source code for Semantic to analyse, and include path customisation at the project level (see the sketch after this list).
    AutoComplete
    This is not part of CEDET but Semantic can be configured as a source of completions for auto-complete to propose to the user.
    and more...
    Senator, SRecode, Cogre, Speedbar, EIEIO, EAssist are other components of CEDET I've not looked at yet.
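
    To give an idea of the scoping provided by EDE, here is a minimal project definition sketch (the project name and paths are hypothetical) telling Semantic where the include directories of a C++ project live:

    ;; a hedged example of an EDE project used for Semantic scoping
    (ede-cpp-root-project "myproject"
      :file "~/src/myproject/Makefile"        ; anchor file at the project root
      :include-path '("/include" "/src")      ; project-relative include dirs
      :system-include-path '("/usr/include")) ; system-wide include dirs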

    To add some more complexity, CEDET itself is also undergoing heavy changes and is in between major versions. The last standalone release is 1.1, but it has the old source layout and activation method. The current head of development calls itself version 2.0, has the new layout and activation method, plus some more features, but is not released yet.

    Integration of CEDET into Emacs

    Since Emacs 23.2, CEDET is built into Emacs. More exactly, parts of some version of the new CEDET are built into Emacs, but of course this built-in version is older than the current head of new CEDET... As for the notable parts not built into Emacs, ECB is the most prominent! But it is packaged in Marmalade in a recent version following the head of development closely, which mitigates the inconvenience.

    My first choice was using the built-in CEDET with ECB installed from the package repository: the installation was perfectly smooth, but I was not able to configure the whole thing cleanly enough to get proper operation. Although I tried hard, I could not get Semantic to take into account the include paths I had configured using my EDE project, for example.

    I would strongly encourage you to try this way, as it is supposed to require much less effort to set up and less maintenance. Should you succeed, I would greatly appreciate some feedback on your experience!

    As for me, I got down to installing the latest version from the source repositories, following Alex Ott's advice as closely as possible and using his own fork of ECB to make it compliant with the most recent CEDET:

    How to set up CEDET + ECB in Emacs 24

    Emacs 24

    Install Emacs 24 as you wish; I will not cover the various options here but simply summarise the local install from sources I chose.

    1. Get the source archive from http://ftpmirror.gnu.org/emacs/
    2. Extract it somewhere and run the usual configure --prefix=~/local, make, make install (or see the INSTALL file)

    Create your personal Emacs directory and configuration file, ~/.emacs.d/site-lisp/ and ~/.emacs.d/init.el, and put this inside the latter:

    ;; this is intended for manually installed libraries
    (add-to-list 'load-path "~/.emacs.d/site-lisp/")
    
    ;; load the package system and add some repositories
    (require 'package)
    (add-to-list 'package-archives
                 '("marmalade" . "http://marmalade-repo.org/packages/"))
    (add-to-list 'package-archives
                 '("melpa" . "http://melpa.milkbox.net/packages/") t)
    
    ;; Install a hook running post-init.el *after* initialization took place
    (add-hook 'after-init-hook (lambda () (load "post-init.el")))
    
    ;; Do here basic initialization, (require) non-ELPA packages, etc.
    
    ;; disable automatic loading of packages after init.el is done
    (setq package-enable-at-startup nil)
    ;; and force it to happen now
    (package-initialize)
    ;; NOW you can (require) your ELPA packages and configure them as normal
    

    Useful Emacs packages

    Using the Emacs commands M-x package-list-packages interactively or M-x package-install <package name>, you can easily install many packages; I installed several this way.

    Choose your own! I just recommend against installing ECB or other CEDET packages, since we are going to install those from source.

    You can also insert or load your usual Emacs configuration here; simply beware of configuring ELPA, Marmalade et al. packages after (package-initialize).

    CEDET

    • Get the source and put it under ~/.emacs.d/site-lisp/cedet-bzr. You can either download a snapshot from http://www.randomsample.de/cedet-snapshots/ or check it out of the bazaar repository with:

      ~/.emacs.d/site-lisp$ bzr checkout --lightweight \
      bzr://cedet.bzr.sourceforge.net/bzrroot/cedet/code/trunk cedet-bzr
      
    • Run make (and optionally make install-info) in cedet-bzr, or see the INSTALL file for more details.

    • Get Alex Ott's minimal CEDET configuration file and save it to ~/.emacs.d/config/cedet.el, for example

    • Adapt it to your system by editing the first lines as follows

      (setq cedet-root-path
          (file-name-as-directory (expand-file-name
              "~/.emacs.d/site-lisp/cedet-bzr/")))
      (add-to-list 'Info-directory-list
              "~/projects/cedet-bzr/doc/info")
      
    • Don't forget to load it from your ~/.emacs.d/init.el:

      ;; this is intended for configuration snippets
      (add-to-list 'load-path "~/.emacs.d/")
      ...
      (load "config/cedet.el")
      
    • restart your emacs to check everything is OK; the --debug-init option is of great help for that purpose.

    ECB

    • Get Alex Ott's ECB fork into ~/.emacs.d/site-lisp/ecb-alexott:

      ~/.emacs.d/site-lisp$ git clone --depth 1  https://github.com/alexott/ecb/
      
    • Run make in ecb-alexott and see the README file for more details.

    • Don't forget to load it from your ~/.emacs.d/init.el:

      (add-to-list 'load-path (expand-file-name
            "~/.emacs.d/site-lisp/ecb-alexott/"))
      (require 'ecb)
      ;(require 'ecb-autoloads)
      

      Note

      You can theoretically use (require 'ecb-autoloads) instead of (require 'ecb) in order to load ECB on demand. I encountered various misbehaviours trying this option and finally dropped it, but I encourage you to try it and comment on your experience.

    • restart your emacs to check everything is OK (you probably want to use the --debug-init option).

    • Create a hello.cpp with your CEDET-enabled Emacs and type M-x ecb-activate to check that ECB is actually installed.

    Tune your configuration

    Now it is time to tune your configuration. There is no good recipe from here onward... but I'll try to propose some snippets below. Some of them are adapted from Alex Ott's personal configuration.

    More Semantic options

    You can use the following lines just before (semantic-mode 1) to add to the activated features list:

    (add-to-list 'semantic-default-submodes 'global-semantic-decoration-mode)
    (add-to-list 'semantic-default-submodes 'global-semantic-idle-local-symbol-highlight-mode)
    (add-to-list 'semantic-default-submodes 'global-semantic-idle-scheduler-mode)
    (add-to-list 'semantic-default-submodes 'global-semantic-idle-completions-mode)
    

    You can also load additional capabilities with those lines after (semantic-mode 1):

    (require 'semantic/ia)
    (require 'semantic/bovine/gcc) ; or, depending on your compiler:
    ; (require 'semantic/bovine/clang)
    
    Auto-completion

    If you want to use auto-complete, you can tell it to interface with Semantic by configuring it as follows (where AAAAMMDD.rrrr is the date.revision suffix of the version of auto-complete installed by your package manager):

    ;; Autocomplete
    (require 'auto-complete-config)
    (add-to-list 'ac-dictionary-directories (expand-file-name
                 "~/.emacs.d/elpa/auto-complete-AAAAMMDD.rrrr/dict"))
    (setq ac-comphist-file (expand-file-name
                 "~/.emacs.d/ac-comphist.dat"))
    (ac-config-default)
    

    and activating it in your cedet hook, for example:

    ...
    ;; customisation of modes
    (defun alexott/cedet-hook ()
    ...
        (add-to-list 'ac-sources 'ac-source-semantic)
    ) ; defun alexott/cedet-hook ()
    
    Support for GNU global and/or (e)ctags
    ;; if you want to enable support for gnu global
    (when (cedet-gnu-global-version-check t)
      (semanticdb-enable-gnu-global-databases 'c-mode)
      (semanticdb-enable-gnu-global-databases 'c++-mode))
    
    ;; enable ctags for some languages:
    ;;  Unix Shell, Perl, Pascal, Tcl, Fortran, Asm
    (when (cedet-ectag-version-check)
      (semantic-load-enable-primary-exuberent-ctags-support))
    

    Using CEDET for development

    Once CEDET + ECB + EDE is up you can start using it for actual development. How to actually use it is beyond the scope of this already too long post. I can only invite you to have a look at:

    Conclusion

    CEDET provides an impressive set of features, both to allow your Emacs environment to "understand" your code and to provide powerful interfaces to this "understanding". It is probably one of the very few solutions for working with a complex C++ code base when you can't or don't want to use a heavyweight IDE like Eclipse CDT.

    But being highly configurable also means, at least for now, some lack of integration, or at least a pretty complex configuration. I hope this post will help you take your first steps with CEDET and find your way to set it up and configure it to your own taste.


  • Pylint 1.0 released!

    2013/08/06 by Sylvain Thenault

    Hi there,

    I'm very pleased to announce, after 10 years of existence, the 1.0 release of Pylint.

    This release has a hell of a long ChangeLog, thanks to many contributions and to the 10th anniversary sprint we hosted during June. More details about the changes below.

    Chances are high that your Pylint score will go down with this new release, which includes a lot of new checks :) Also, there are a lot of improvements on the Python 3 side (notably 3.3 support, which was somewhat broken).

    You may download and install it from PyPI or from Logilab's Debian repositories. Notice that Pylint has been updated to use the new Astroid library (formerly known as logilab-astng) and that the logilab-common 0.60 library includes some fixes necessary for using Pylint with Python 3, as well as long-awaited support for namespace packages.

    For those interested, below is a comprehensive list of what changed:

    Command line and output formatting

    • A new --msg-template option to control output, deprecating "msvc" and "parseable" output formats as well as killing --include-ids and --symbols options.
    • Fix spelling of max-branchs option, now max-branches.
    • Start promoting usage of symbolic names instead of numerical ids (see the example after this list).
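
    For instance, the new template option combines nicely with the symbolic names (the module name below is a placeholder):

    pylint --msg-template='{path}:{line}: [{msg_id}({symbol})] {msg}' mymodule.py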

    New checks

    • "missing-final-newline" (C0304) for files missing the final newline.
    • "invalid-encoded-data" (W0512) for files that contain data that cannot be decoded with the specified or default encoding.
    • "bad-open-mode" (W1501) for calls to open (or file) that specify invalid open modes (Original implementation by Sasha Issayev).
    • "old-style-class" (C1001) for classes that do not have any base class.
    • "trailing-whitespace" (C0303) that warns about trailing whitespace.
    • "unpacking-in-except" (W0712) about unpacking exceptions in handlers, which is unsupported in Python 3.
    • "old-raise-syntax" (W0121) for the deprecated syntax raise Exception, args.
    • "unbalanced-tuple-unpacking" (W0632) for unbalanced unpacking in assignments (bitbucket #37).

    Enhanced behaviours

    • Do not emit [fixme] for every line if the config value 'notes' is empty
    • Emit warnings about lines exceeding the column limit when those lines are inside multiline docstrings.
    • Name check enhancement:
      • simplified message,
      • don't double-check parameter names with the regex for parameters and inline variables,
      • don't check names of derived instance class members,
      • methods that are decorated as properties are now treated as attributes,
      • names in global statements are now checked against the regular expression for constants,
      • for toplevel name assignments, the class name regex will be used if pylint can detect that the value on the right-hand side is a class (like collections.namedtuple()),
      • add new name type 'class_attribute' for attributes defined in class scope. By default, allow both const and variable names.
    • Add a configuration option for missing-docstring to optionally exempt short functions/methods/classes from the check.
    • Add the type of the offending node to missing-docstring and empty-docstring.
    • Do not warn about redefinitions of variables that match the dummy regex.
    • Do not treat all variables starting with "_" as dummy variables, only "_" itself.
    • Make the line-too-long warning configurable by adding a regex for lines for which the length limit should not be enforced.
    • Do not warn about a long line if a pylint disable option brings it above the length limit.
    • Do not flag names in nested with statements as undefined.
    • Remove string module from the default list of deprecated modules (bitbucket #3).
    • Fix incomplete-protocol false positive for read-only containers like tuple (bitbucket #25).

    Other changes

    • Support for pkgutil.extend_path and setuptools pkg_resources (logilab-common #8796).
    • New utility classes for per-checker unittests in testutils.py
    • Added a new base class and interface for checkers that work on the tokens rather than the syntax, and only tokenize the input file once.
    • epylint shouldn't hang anymore when there is large output on pylint's stderr (bitbucket #15).
    • Put back documentation in source distribution (bitbucket #6).

    Astroid

    • New API to make it smarter by allowing transformation functions on any node, providing a register_transform function on the manager instead of register_transformer, to make it more flexible with regard to node selection (see the sketch at the end of this list)
    • Use this new transformation API to provide support for namedtuple (actually in pylint-brain, logilab-astng #8766)
    • Better description of hashlib
    • Properly recognize methods annotated with abc.abstract{property,method} as abstract.
    • Added the test_utils module for building ASTs and extracting deeply nested nodes for easier testing.
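
    As a hedged illustration of the transform API (the transformation itself is invented; note that in astroid 1.0 the class node is still named Class, not ClassDef):

    from astroid import MANAGER, nodes

    def tag_class(node):
        # invented transformation: attach extra data for later analysis
        node.extra_info = {'seen': True}

    # run tag_class on every class node built by astroid
    MANAGER.register_transform(nodes.Class, tag_class)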

  • Astroid 1.0 released!

    2013/08/02 by Sylvain Thenault

    Astroid is the new name of the former logilab-astng library. It is an AST library, used as the basis of Pylint, featuring a Python 2.5 to 3.3 compatible tree representation, static type inference and other features useful for advanced Python code analysis, such as an API to provide extra information when static inference can't overcome Python's dynamic nature (see the pylint-brain project for instance).

    It has been renamed and moved to Bitbucket to make clear that it is not a Logilab-only project but a community project that can benefit anyone manipulating Python code (static analysis tools, IDEs, code browsers, etc).

    The documentation is a bit rough but should quickly improve. A dedicated website is also now online: visit www.astroid.org (or https://bitbucket.org/logilab/astroid for development).

    You may download and install it from PyPI or from Logilab's Debian repositories.


  • Going to DebConf13

    2013/08/01 by Julien Cristau

    The 14th Debian developers conference (DebConf13) will take place between August 11th and August 18th in Vaumarcus, Switzerland.

    Logilab is a DebConf13 sponsor, and I'll attend the conference. There are quite a lot of cloud-related events on the schedule this year, plus the usual impromptu discussions and hallway track. Looking forward to meeting the usual suspects there!

    https://www.logilab.org/file/158611/raw/dc13-btn0-going-bg.png
