Logilab.org - en

News from Logilab and our Free Software projects, as well as on topics dear to our hearts (Python, Debian, Linux, the semantic web, scientific computing...)

  • A Salt Configuration for C++ Development

    2014/01/24 by Damien Garaud
    http://www.logilab.org/file/204916/raw/SaltStack-Logo.png

    At Logilab, we have been using Salt for a year to manage our own infrastructure. I wanted to use it to manage a specific configuration: C++ development. When I instantiate a virtual machine from a Debian image, I don't want to spend time installing and configuring a system that fits my needs as a C++ developer.

    This article is a very simple recipe to get a C++ development environment, ready to use, ready to hack.

    Give Me an Editor and a DVCS

    It's quite simple: I use the YAML file format that Salt relies on to describe what I want. To install these two editors, I just need to write:

    vim-nox:
      pkg.installed
    
    emacs23-nox:
      pkg.installed
    

    For Mercurial, you'll guess:

    mercurial:
      pkg.installed
    

    You can write these lines in a single init.sls file, but you can also split your configuration into different subdirectories: one place for each thing. I decided to create dev and edit directories at the root of my Salt config, each with its own init.sls.

    That's all for the editors. Next step: specific C++ development packages.

    Install Several "C++" Packages

    In a cpp folder, I write a file init.sls with this content:

    gcc:
        pkg.installed
    
    g++:
        pkg.installed
    
    gdb:
        pkg.installed
    
    cmake:
        pkg.installed
    
    automake:
        pkg.installed
    
    libtool:
        pkg.installed
    
    pkg-config:
        pkg.installed
    
    colorgcc:
        pkg.installed
    

    The choice of these packages is arbitrary: add or remove some as you need; there is no single right selection. But I want more. I want some LLVM packages. In cpp/llvm.sls, I write:

    llvm:
        pkg.installed
    
    clang:
        pkg.installed
    
    libclang-dev:
        pkg.installed
    
    {% if not grains['oscodename'] == 'wheezy' %}
    lldb-3.3:
        pkg.installed
    {% endif %}
    

    The Jinja block at the end installs the lldb package only if your Debian release is not the stable one (wheezy), i.e. jessie/testing or sid in my case. Now, just include this file in init.sls:

    # ...
    # at the end of 'cpp/init.sls'
    include:
      - .llvm
    

    Organize your sls files according to your needs. That's all for package installation. Your Salt configuration now looks like this:

    .
    |-- cpp
    |   |-- init.sls
    |   `-- llvm.sls
    |-- dev
    |   `-- init.sls
    |-- edit
    |   `-- init.sls
    `-- top.sls
    

    Launching Salt

    Start your VM and install a masterless Salt on it (e.g. apt-get install salt-minion). To launch Salt locally on your fresh VM, copy your configuration (through scp or a DVCS) into the /srv/salt/ directory and write the top.sls file:

    base:
      '*':
        - dev
        - edit
        - cpp
    

    Then, as root, just launch:

    > salt-call --local state.highstate

    And What About Configuration Files?

    You're right. At the beginning of the post, I talked about a "ready to use" Mercurial with some hg extensions. So I copy the default /etc/mercurial/hgrc.d/hgext.rc file into the dev directory of my Salt configuration and edit it to enable some extensions such as color, rebase and pager. As I also need Evolve, I have to clone the source code from https://bitbucket.org/marmoute/mutable-history. With Salt, I can say "clone this repo and copy this file" to specific places.

    So, I add some lines to dev/init.sls.

    https://bitbucket.org/marmoute/mutable-history:
        hg.latest:
          - rev: tip
          - target: /opt/local/mutable-history
          - require:
             - pkg: mercurial
    
    /etc/mercurial/hgrc.d/hgext.rc:
        file.managed:
          - source: salt://dev/hgext.rc
          - user: root
          - group: root
          - mode: 644
    

    The require keyword means "install the mercurial package (if necessary) before cloning". The other lines are quite self-explanatory.

    In the end, you have just six files of a few lines each. Your configuration now looks like:

    .
    |-- cpp
    |   |-- init.sls
    |   `-- llvm.sls
    |-- dev
    |   |-- hgext.rc
    |   `-- init.sls
    |-- edit
    |   `-- init.sls
    `-- top.sls
    

    You can customize it and share it with your teammates. A step further would be to add some configuration files for your favorite editor. You could also install extra packages that your library depends on: simply add a subdirectory amazing_lib and write your own init.sls. I know I often need the Boost libraries, for example. When your Salt configuration has changed, just type salt-call --local state.highstate again.

    As you can see, setting up your environment on a fresh system takes only a couple of commands at the shell before you are ready to compile your C++ library, debug it, fix it and commit your modifications to your repository.


  • What's New in Pandas 0.13?

    2014/01/19 by Damien Garaud
    http://www.logilab.org/file/203841/raw/pandas_logo.png

    Do you know pandas, a Python library for data analysis? Version 0.13 came out on January 16th and this post describes a few new features and improvements that I think are important.

    Each release has its list of bug fixes and API changes. You may read the full release note if you want all the details, but I will just focus on a few things.

    You may also be interested in one of my previous blog posts, which showed a few useful pandas features with datasets from the Quandl website and came with an IPython Notebook for reproducing the results.

    Let's talk about some new and improved pandas features. I assume that you have some knowledge of pandas and its main objects such as Series and DataFrame. If not, I suggest you watch the tutorial video by Wes McKinney on the main page of the project or read 10 Minutes to Pandas in the documentation.

    Refactoring

    I welcome the refactoring effort: the Series type, formerly subclassed from ndarray, now has the same base class as DataFrame and Panel, namely NDFrame. This work unifies methods and behaviors across these classes. Be aware that you can hit a couple of incompatibilities with versions earlier than 0.13. See internal refactoring for more details.

    Timeseries

    to_timedelta()

    The new function pd.to_timedelta converts a string, scalar or array of strings to a NumPy timedelta type (np.timedelta64, in nanoseconds). It requires NumPy >= 1.7. You can take an array of timedeltas and divide it by another timedelta to carry out a frequency conversion.

    from datetime import timedelta
    import numpy as np
    import pandas as pd
    
    # Create a Series of timedelta from two DatetimeIndex.
    dr1 = pd.date_range('2013/06/23', periods=5)
    dr2 = pd.date_range('2013/07/17', periods=5)
    td = pd.Series(dr2) - pd.Series(dr1)
    
    # Set some Na{N,T} values.
    td[2] -= np.timedelta64(timedelta(minutes=10, seconds=7))
    td[3] = np.nan
    td[4] += np.timedelta64(timedelta(hours=14, minutes=33))
    td
    
    0   24 days, 00:00:00
    1   24 days, 00:00:00
    2   23 days, 23:49:53
    3                 NaT
    4   24 days, 14:33:00
    dtype: timedelta64[ns]
    

    Note the NaT type (instead of the well-known NaN). For a conversion to days:

    td / np.timedelta64(1, 'D')
    
    0    24.000000
    1    24.000000
    2    23.992975
    3          NaN
    4    24.606250
    dtype: float64
    

    You can also use DateOffset objects, as in:

    td + pd.offsets.Minute(10) - pd.offsets.Second(7) + pd.offsets.Milli(102)
    

    Nanosecond Time

    Nanosecond times are now supported as offsets; see pd.offsets.Nano. You can also use 'N' as the value of the freq argument of pd.date_range.
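
As a tiny illustration (the timestamp value here is arbitrary), a Nano offset can be added to a Timestamp:

```python
import pandas as pd

# Add a nanosecond-resolution offset to a timestamp
# (the date is arbitrary, just for illustration).
ts = pd.Timestamp('2014-01-01')
shifted = ts + pd.offsets.Nano(500)
# shifted is 2014-01-01 00:00:00.000000500
```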

    Daylight Savings

    The tz_localize method can now infer a fall daylight savings transition based on the structure of the unlocalized data. This method, like tz_convert, is available for any DatetimeIndex, Series or DataFrame with a DatetimeIndex. You can use it to localize your datasets thanks to the pytz module, or convert your timeseries to a different time zone. See the related documentation about time zone handling. To use daylight savings inference, set the infer_dst argument of tz_localize to True.
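
A minimal sketch of the localize/convert workflow (dates and zones chosen arbitrarily):

```python
import pandas as pd

# Localize a naive DatetimeIndex, then convert it to another time zone.
naive = pd.date_range('2013-07-01', periods=3, freq='h')
localized = naive.tz_localize('UTC')            # attach a time zone
converted = localized.tz_convert('Europe/Paris')  # same instants, new zone
```

Both indexes represent the same instants, only displayed in different zones.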

    DataFrame Features

    New Method isin()

    The new DataFrame method isin is used for boolean indexing. Its argument can be another DataFrame, a Series, a list of values, or a dictionary mapping column names to lists of values. Comparing two DataFrames with isin is equivalent to df1 == df2, but you can also check whether values from a list occur in any column, or whether some values occur in a few specific columns (i.e. using a dict instead of a list as the argument):

    df = pd.DataFrame({'A': [3, 4, 2, 5],
                       'Q': ['f', 'e', 'd', 'c'],
                       'X': [1.2, 3.4, -5.4, 3.0]})
    
       A  Q    X
    0  3  f  1.2
    1  4  e  3.4
    2  2  d -5.4
    3  5  c  3.0
    

    and then:

    df.isin(['f', 1.2, 3.0, 5, 2, 'd'])
    
           A      Q      X
    0   True   True   True
    1  False  False  False
    2   True   True  False
    3   True  False   True
    

    Of course, you can use the previous result as a mask for the current DataFrame.

    mask = _
    df[mask.any(axis=1)]

       A  Q    X
    0  3  f  1.2
    2  2  d -5.4
    3  5  c  3.0
    
    When you pass a dictionary to the isin method, you can specify the column labels for each value:
    
    mask = df.isin({'A': [2, 3, 5], 'Q': ['d', 'c', 'e'], 'X': [1.2, -5.4]})
    df[mask]
    
        A    Q    X
    0   3  NaN  1.2
    1 NaN    e  NaN
    2   2    d -5.4
    3   5    c  NaN
    

    See the related documentation for more details or different examples.

    New Method str.extract

    The new vectorized extract method is available from the StringMethods object, accessed via the str attribute on a Series. It makes it possible to extract some data with regular expressions, as follows:

    s = pd.Series(['doe@umail.com', 'nobody@post.org', 'wrong.mail', 'pandas@pydata.org', ''])
    # Extract usernames.
    s.str.extract(r'(\w+)@\w+\.\w+')
    

    returns:

    0       doe
    1    nobody
    2       NaN
    3    pandas
    4       NaN
    dtype: object
    

    Note that the result is a Series holding the matched groups. You can also add more groups:

    # Extract usernames and domain.
    s.str.extract(r'(\w+)@(\w+\.\w+)')
    
            0           1
    0     doe   umail.com
    1  nobody    post.org
    2     NaN         NaN
    3  pandas  pydata.org
    4     NaN         NaN
    

    Elements that do not match return NaN. You can also use named groups, which is useful if you want more explicit column names (and lets me drop the NaN rows in the following example):

    # Extract usernames and domain with named groups.
    s.str.extract(r'(?P<user>\w+)@(?P<at>\w+\.\w+)').dropna()
    
         user          at
    0     doe   umail.com
    1  nobody    post.org
    3  pandas  pydata.org
    

    Thanks to this part of the documentation, I also found out about other useful string methods such as split, strip, replace, etc. for handling a Series of str. Note that most of them have been available since 0.8.1. Take a look at the string handling API doc (recently added) and some basics about vectorized string methods.
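
A quick sketch of a few of these vectorized methods on made-up data:

```python
import pandas as pd

# A small Series of strings to exercise the str accessor.
s = pd.Series(['  alpha  ', 'beta,gamma', 'delta'])
stripped = s.str.strip()   # '  alpha  ' -> 'alpha'
parts = s.str.split(',')   # 'beta,gamma' -> ['beta', 'gamma']
upper = s.str.upper()      # 'delta' -> 'DELTA'
```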

    Interpolation Methods

    DataFrame has a new interpolate method, similar to the Series one. It was possible to interpolate missing data in a DataFrame before, but it did not take the dates into account if your index was a timeseries. Now, you can pass a specific interpolation method via the method argument, including scipy interpolation functions such as slinear, quadratic and polynomial. The time method takes your timeseries index into account.

    from datetime import date
    # Arbitrary timeseries
    ts = pd.DatetimeIndex([date(2006,5,2), date(2006,12,23), date(2007,4,13),
                           date(2007,6,14), date(2008,8,31)])
    df = pd.DataFrame(np.random.randn(5, 2), index=ts, columns=['X', 'Z'])
    # Fill the DataFrame with missing values.
    df['X'].iloc[[1, -1]] = np.nan
    df['Z'].iloc[3] = np.nan
    df
    
                       X         Z
    2006-05-02  0.104836 -0.078031
    2006-12-23       NaN -0.589680
    2007-04-13 -1.751863  0.543744
    2007-06-14  1.210980       NaN
    2008-08-31       NaN  0.566205
    

    Without any optional argument, you have:

    df.interpolate()
    
                       X         Z
    2006-05-02  0.104836 -0.078031
    2006-12-23 -0.823514 -0.589680
    2007-04-13 -1.751863  0.543744
    2007-06-14  1.210980  0.554975
    2008-08-31  1.210980  0.566205
    

    With the time method, you obtain:

    df.interpolate(method='time')
    
                       X         Z
    2006-05-02  0.104836 -0.078031
    2006-12-23 -1.156217 -0.589680
    2007-04-13 -1.751863  0.543744
    2007-06-14  1.210980  0.546496
    2008-08-31  1.210980  0.566205
    

    I suggest you read more examples in the missing data part of the docs and the scipy documentation about the interpolate module.

    Misc

    You can convert a Series to a single-column DataFrame with its new to_frame method.
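
A minimal example (the Series name 'value' is arbitrary; it becomes the column label):

```python
import pandas as pd

# to_frame turns a Series into a single-column DataFrame;
# the Series name becomes the column label.
s = pd.Series([1, 2, 3], name='value')
df = s.to_frame()
```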

    Misc & Experimental Features

    Retrieve R Datasets

    Not a killer feature, but very pleasant: the ability to load into a DataFrame any of the R datasets listed at http://stat.ethz.ch/R-manual/R-devel/library/datasets/html/00Index.html

    import pandas.rpy.common as com
    titanic = com.load_data('Titanic')
    titanic.head()
    
      Survived    Age     Sex Class value
    0       No  Child    Male   1st   0.0
    1       No  Child    Male   2nd   0.0
    2       No  Child    Male   3rd  35.0
    3       No  Child    Male  Crew   0.0
    4       No  Child  Female   1st   0.0
    

    for the dataset about survival of passengers on the Titanic. You can find many other datasets: New York air quality measurements, body temperature series of two beavers, plant growth results or violent crime rates by US state, for instance. Very useful if you would like to show pandas to a friend, a colleague or your grandma and you do not have a dataset at hand.

    And then three great experimental features.

    Eval and Query Experimental Features

    The eval and query methods use numexpr, which can quickly evaluate array expressions such as x - 0.5 * y (for numexpr, x and y are NumPy arrays). You can use this powerful feature in pandas to evaluate expressions over DataFrame columns. By the way, we already talked about numexpr a few years ago in EuroScipy 09: Need for Speed.

    df = pd.DataFrame(np.random.randn(10, 3), columns=['x', 'y', 'z'])
    df.head()
    
              x         y         z
    0 -0.617131  0.460250 -0.202790
    1 -1.943937  0.682401 -0.335515
    2  1.139353  0.461892  1.055904
    3 -1.441968  0.477755  0.076249
    4 -0.375609 -1.338211 -0.852466
    
    df.eval('x + 0.5 * y - z').head()
    
    0   -0.184217
    1   -1.267222
    2    0.314395
    3   -1.279340
    4   -0.192248
    dtype: float64
    

    With the query method, you can select rows using a very simple query syntax:

    df.query('x >= y > z')
    
              x         y         z
    9  2.560888 -0.827737 -1.326839
    

    msgpack Serialization

    New reading and writing functions serialize your data with the great and well-known msgpack library. Note that this experimental feature does not have a stable storage format. You could, for instance, use zmq to transfer msgpack-serialized pandas objects over TCP, IPC or SSH.

    Google BigQuery

    The recent pandas.io.gbq module provides a way to load datasets into, and extract them from, the Google BigQuery web service. I have not installed the requirements for this feature yet. The example in the release notes shows how to select the average monthly temperature in the year 2000 across the USA. You can also read the related pandas documentation. Note that you will need a BigQuery account, as with other Google products.

    Take Your Keyboard

    Give it a try: play with some data, mangle and plot it, compute some stats, look for patterns, whatever. I'm convinced that pandas will be used more and more, and not only by data scientists and quantitative analysts. Open an IPython Notebook, pick up some data and let yourself be tempted by pandas.

    I think I will make more use of the vectorized string methods that I found out about while writing this post. I'm glad to have learned more about timeseries, because I know I'll use these features. And I'm looking forward to using the experimental features: eval/query and msgpack serialization.

    You can follow me on Twitter (@jazzydag). See also Logilab (@logilab_org).


  • Pylint 1.1 christmas release

    2013/12/24 by Sylvain Thenault

    Pylint 1.1 eventually got released on PyPI!

    A lot of work has been achieved since the 1.0 release. Various people have contributed several new checks as well as bug fixes and other enhancements.

    Here is a summary of the changes; check the changelog for more info.

    New checks:

    • 'deprecated-pragma', for use of the deprecated pragma directives "pylint:disable-msg" or "pylint:enable-msg" (previously emitted as a regular warning).
    • 'superfluous-parens' for unnecessary parentheses after certain keywords.
    • 'bad-context-manager' checking that '__exit__' special method accepts the right number of arguments.
    • 'raising-non-exception' / 'catching-non-exception' when raising/catching a class not inheriting from BaseException.
    • 'non-iterator-returned' for non-iterators returned by '__iter__'.
    • 'unpacking-non-sequence' for unpacking non-sequences in assignments and 'unbalanced-tuple-unpacking' when left-hand-side size doesn't match right-hand-side.
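
As a hypothetical sketch (not from pylint's code) of what the last check targets: unpacking three values into two names fails at runtime, and pylint 1.1 can now flag it statically.

```python
def get_triple():
    # Returns three values.
    return 1, 2, 3

# Unpacking into two names raises ValueError at runtime;
# pylint's 'unbalanced-tuple-unpacking' reports it without running the code.
try:
    a, b = get_triple()
    unpack_ok = True
except ValueError:
    unpack_ok = False
```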

    Command line:

    • New option for the multi-statement warning to allow single-line if statements.
    • Allow running pylint as a Python module: python -m pylint (anatoly techtonik).
    • Various fixes to epylint

    Bug fixes:

    • Avoid false used-before-assignment for except handler defined identifier used on the same line (#111).
    • 'useless-else-on-loop' not emitted if there is a break in the else clause of the inner loop (#117).
    • Drop 'badly-implemented-container' which caused several problems in its current implementation.
    • Don't mark input as a bad function when using python3 (#110).
    • Use the attribute regexp for properties in python3, as in python2.
    • Fix false-positive 'trailing-whitespace' on Windows (#55).

    Other:

    • Replaced regexp based format checker by a more powerful (and nit-picky) parser, combining 'no-space-after-operator', 'no-space-after-comma' and 'no-space-before-operator' into a new warning 'bad-whitespace'.
    • Create the PYLINTHOME directory when needed; otherwise this might fail and lead to spurious warnings on import of pylint.config.
    • Fix setup.py so that pylint installs properly on Windows when using python3.
    • Various documentation fixes and enhancements

    Packages will be available in Logilab's Debian and Ubuntu repository in the next few weeks.

    Happy Christmas!


  • SaltStack Paris Meetup on Feb 6th, 2014 - (S01E02)

    2013/12/20 by Nicolas Chauvat

    Logilab has set up the second meetup for salt users in Paris on Feb 6th, 2014 at IRILL, near Place d'Italie, starting at 18:00. The address is 23 avenue d'Italie, 75013 Paris.

    Here is the announcement in French: http://www.logilab.fr/blogentry/1981

    Please forward it to anyone who may be interested, underlining that pizzas will be offered to refuel the chatters ;)

    Conveniently scheduled a week after the Salt Conference, the meetup's topics will include anything related to Salt and its uses: demos, new ideas, exchanges of Salt formulas, comments on the SaltConf talks/videos, etc.

    If you are interested in Salt, Python and DevOps and will be in Paris at that time, we hope to see you there!


  • A quick take on continuous integration services for Bitbucket

    2013/12/19 by Sylvain Thenault

    Some time ago, we moved Pylint from this forge to Bitbucket (more on this here).

    https://bitbucket-assetroot.s3.amazonaws.com/c/photos/2012/Oct/11/master-logo-2562750429-5_avatar.png

    Since then, I have continued to use the continuous integration (CI) service we provide on logilab.org to run tests on new commits and to do the release job (publish a tarball on PyPI and on our web site, build Debian and Ubuntu packages, etc.). This works, but it is not really handy, since logilab.org's CI service is not designed for projects hosted elsewhere. I also wanted to see what others have to offer, so I decided to find a public CI service to host at least the automatic tests for Pylint and Astroid.

    Here are the results of my first swing at it. If you have other suggestions, configuration proposals or whatever, please comment.

    First, here are the ones I didn't test, along with why:

    The first one I actually tested, and also the first to show up when searching for "bitbucket continuous integration" on Google, is https://drone.io. The UI is really simple and I was able to set up tests for Pylint in a matter of minutes: https://drone.io/bitbucket.org/logilab/pylint. Tests are automatically launched when a new commit is pushed to Pylint's Bitbucket repository, and that hook was set up automatically.

    Trying to push drone.io further, one missing feature is the ability to have different settings for my project, e.g. to launch tests on all the Python flavors officially supported by Pylint (2.5, 2.6, 2.7, 3.2, 3.3, pypy, jython, etc.). Last but not least, the missing killer feature is the ability to launch tests on pull requests, which travis-ci supports.

    Then I gave http://wercker.com a shot, but got stuck at the Bitbucket repository selection screen: none were displayed. Maybe because I don't own Pylint's repository and am only part of the admin/dev team? Anyway, wercker seems appealing too, though its YAML-based configuration looks a bit more complicated than drone.io's; as I was not able to test it further, there's not much else to say.

    https://www.logilab.org/file/4758432/raw/wercker.png

    So for now the winner is https://drone.io, but the first service that lets me test on several Python versions and launch tests on pull requests will be the definitive winner! Bonus points for automating the release process and checking test coverage on pull requests as well.

    https://drone.io/drone3000/images/alien-zap-header.png

  • A retrospective of 10 years animating the pylint free software project

    2013/11/25 by Sylvain Thenault

    was the topic of the talk I gave last Saturday at Capitole du Libre in Toulouse.

    Here are the slides (PDF) for those interested (in French). A video of the talk should be available soon on the Capitole du Libre web site. The slides are also mirrored on Slideshare (see below):


  • Retrieve Quandl's Data and Play with a Pandas

    2013/10/31 by Damien Garaud

    This post is about the pandas Python library, the open and free access to timeseries datasets thanks to the Quandl website, and how to handle such datasets efficiently with pandas.

    http://www.logilab.org/file/186707/raw/scrabble_data.jpg http://www.logilab.org/file/186708/raw/pandas_peluche.jpg

    Why this post?

    For a long time I have wanted to play a little with pandas; not the adorable black and white teddy bear, but the well-known Python data library based on NumPy. I would like to show how you can easily retrieve numerical datasets from the Quandl website and its API, and handle them efficiently through pandas' main object: the DataFrame.

    Note that this blog post comes with an IPython Notebook, which can be found at http://nbviewer.ipython.org/url/www.logilab.org/file/187482/raw/quandl-data-with-pandas.ipynb

    You can also get it with Mercurial from http://hg.logilab.org/users/dag/blog/2013/quandl-data-pandas/

    Just do:

    hg clone http://hg.logilab.org/users/dag/blog/2013/quandl-data-pandas/
    

    and get the IPython Notebook, the HTML conversion of the Notebook, and some related CSV files.

    First Step: Get the Code

    At work and at home, I use Debian. A quick and dumb apt-get install python-pandas is enough. Nevertheless, (1) I'm keen on having fresh, bleeding-edge upstream sources to get the latest features, and (2) I'm trying to contribute a little to the project: tiny bugs, writing some docs. So I prefer to install it from source. Thus, I pull, run sudo python setup.py develop, and a few Cython-compiling seconds later I can do:

    import pandas as pd
    

    For other ways to get the library, see the download page on the official website or the dedicated PyPI page.

    Let's build 10 Brownian motions and plot them with matplotlib:

    import numpy as np
    pd.DataFrame(np.random.randn(120, 10).cumsum(axis=0)).plot()
    

    I don't much like the default fonts and colors of matplotlib figures and curves, and I know that pandas defines an "mpl style". Just after the import, you can write:

    pd.options.display.mpl_style = 'default'
    
    http://www.logilab.org/file/186714/raw/Ten-Brownian-Motions.png

    Second Step: Have You Got Some Data Please ?

    Maybe I'm wrong, but I think it is sometimes quite difficult to find workable numerical datasets in the huge amount of data available on the Web: Free Data, Open Data and so on. OK folks, where are they? I don't want to spend my time digging through an Open Data website, finding some interesting issue, parsing an Excel file and mangling the data to get a 2D array of floats with labels. Note that pandas copes with these kinds of problems very well; see the IO part of the pandas documentation (CSV, Excel, JSON and HDF5 reading/writing functions). I just want workable numerical data without effort.
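
As a tiny sketch of the CSV round trip mentioned above (made-up data, written to an in-memory buffer instead of a real file):

```python
import io
import pandas as pd

# Write a small DataFrame to CSV and read it back.
df = pd.DataFrame({'x': [1.0, 2.5, -3.1], 'label': ['a', 'b', 'c']})
buf = io.StringIO()
df.to_csv(buf, index=False)
buf.seek(0)
df2 = pd.read_csv(buf)
# df2 holds the same data as df
```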

    A few days ago, a colleague of mine told me about Quandl, a website dedicated to finding and using numerical timeseries datasets on the Internet: a perfect source to retrieve some data and play with pandas. You can access data about economics, health, population, education, etc. through a clever API: get datasets in CSV/XML/JSON formats between two dates, aggregate them, compute differences, and so on.

    Moreover, you can access Quandl's datasets from many programming languages, such as R, Julia, Clojure or Python (plugins or modules are also available for software such as Excel, Stata, etc.). Quandl's Python package depends on NumPy and pandas. Perfect! I can use the Quandl.py module available on GitHub and query datasets directly into a DataFrame.

    Here we are, huge amounts of data are teasing me. Next question: which data to play with?

    Third Step: Give some Food to Pandas

    I've already imported the pandas library. Let's query some datasets with the Quandl Python module, in an example inspired by the README on Quandl's GitHub project page.

    import Quandl
    data = Quandl.get('GOOG/NYSE_IBM')
    data.tail()
    

    and you get:

                  Open    High     Low   Close    Volume
    Date
    2013-10-11  185.25  186.23  184.12  186.16   3232828
    2013-10-14  185.41  186.99  184.42  186.97   2663207
    2013-10-15  185.74  185.94  184.22  184.66   3367275
    2013-10-16  185.42  186.73  184.99  186.73   6717979
    2013-10-17  173.84  177.00  172.57  174.83  22368939
    

    OK, I'm not very familiar with this kind of data, so let's take another look at the Quandl website. After a dozen minutes there, I found these OECD murder rates. The page shows current and historical murder rates (assault deaths per 100,000 people) for 33 OECD countries. Take a country and type:

    uk_df = Quandl.get('OECD/HEALTH_STAT_CICDHOCD_TXCMILTX_GBR')
    

    You get a DataFrame with a single column, 'Value', whose index is a timeseries. You can easily plot these data with:

    uk_df.plot()
    
    http://www.logilab.org/file/186711/raw/GBR-oecd-murder-rates.png

    See other pieces of code and usage examples in the dedicated IPython Notebook. I also retrieved OECD unemployment data for roughly the same countries, covering more dates. Then, as I wanted to compare these datasets, I had to select similar countries, resample my data to the same frequency, and so on. Take a look. Any comment is welcome.

    So, the rest of this blog post is just a summary of a few interesting and useful pandas features used in the IPython Notebook:

    • Using timeseries as the Index of my DataFrames.
    • pd.concat to concatenate several DataFrames along a given axis. This function can deal with missing values if the Indexes of the DataFrames are not identical (my case here).
    • DataFrame.to_csv and pd.read_csv to dump/load your data to/from CSV files. read_csv has many arguments to deal with dates, missing values, headers & footers, etc.
    • The DateOffset pandas object to deal with different time frequencies. Quite useful if you handle data with calendar or business days, month end or begin, quarter end or begin, etc.
    • Resampling data with the resample method. I use it for frequency conversion of timeseries data.
    • Merging/joining DataFrames, quite similar to the SQL feature. See the pd.merge function or the DataFrame.join method. I used this feature to align my two DataFrames along their Indexes.
    • Some matplotlib plotting functions such as DataFrame.plot() and plot(kind='bar').
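
Two of these features can be sketched in a few lines (dates and values are made up; note this uses the current resample API, where the aggregation is chained as .mean()):

```python
import numpy as np
import pandas as pd

# pd.concat aligns on the index and fills the holes with NaN.
a = pd.Series([1.0, 2.0], index=pd.to_datetime(['2013-01-01', '2013-01-02']))
b = pd.Series([10.0], index=pd.to_datetime(['2013-01-02']))
joined = pd.concat([a, b], axis=1)   # 2013-01-01 gets NaN in b's column

# resample converts a daily series to monthly (month-start) means.
idx = pd.date_range('2013-01-01', periods=90, freq='D')
daily = pd.Series(np.arange(90.0), index=idx)
monthly = daily.resample('MS').mean()
```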

    Conclusion

    I showed a few useful pandas features in the IPython Notebook: concatenation, plotting, data computation, data alignment. I think I could show more, but that will be for a further blog post. Any comments, suggestions or questions are welcome.

    The next pandas release, 0.13, should be coming soon. I'll write a short blog post about it in a few days.

    The pictures come from:


  • SaltStack Paris Meetup - some of what was said

    2013/10/09 by Arthur Lutz

    Last week, on the first day of OpenWorldForum 2013, we met up with Thomas Hatch of SaltStack to have a talk about salt. He was in Paris to give two talks the following day (1 & 2), and it was a great opportunity to meet him and physically meet part of the French Salt community. Since Logilab hosted the Great Salt Sprint in Paris, we offered to co-organise the meetup at OpenWorldForum.

    http://saltstack.com/images/SaltStack-Logo.png http://openworldforum.org/static/pictures/Calque1.png

    Introduction

    About 15 people gathered in Montrouge (near Paris) and we all took turns presenting ourselves and how or why we use Salt. Some people wanted to migrate from BCFG2 to Salt. Some told the story of working for a month with CFEngine and achieving the same functionality in two days with Salt, and so decided to go with it instead. Some like Salt because they can hack its Python code. Some use Salt to provision pre-defined AMI images for the clouds (salt-ami-cloud-builder). Some chose Salt over Ansible. Some want to use Salt to pilot temporary computation clusters in the cloud (sort of like what StarCluster does with boto and ssh).

    When Paul from Logilab introduced salt-ami-cloud-builder, Thomas Hatch said that some work is being done to go all the way: build an image from scratch from a state definition. On the question of Debian packaging, some effort could be made to get Salt into wheezy-backports. Julien Cristau from Logilab, who is a Debian developer, might help with that.

    Some untold stories were shared: some companies have replaced puppet with salt, some use salt to control an HPC cluster, and some use salt to pilot their existing puppet system.

    We had some discussions around salt-cloud, which will probably be merged into salt at some point. One idea for salt-cloud was raised: have a way of defining a "minimum" type of configuration which translates into profiles according to which provider is used (an issue should be added shortly). The expression "pushing states" came up often; it is probably a good way of looking at the combination of salt-cloud and the masterless mode available with salt-ssh. salt-cloud controls an existing cloud, but Thomas Hatch pointed out that with salt-virt, salt is becoming a cloud controller itself. More on that soon.

    Mixing pillar definitions between 'public' and 'private' definitions can be tricky. Some solutions exist with multiple gitfs (or mercurial) external pillar definitions, but more use cases will drive more flexible functionality in the future.

    Presentation and live demo

    For those in the audience who were not (yet) users of salt, Thomas went back over a few basics. Salt should be seen as a "toolkit to solve problems in an infrastructure", says Thomas Hatch. Why is it fast? Because it is completely asynchronous and event-driven.

    He gave a quick presentation about the new salt-ssh which was introduced in 0.17, which allows the application of salt recipes to machines that don't have a minion connected to the master.
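    salt-ssh reads its targets from a roster file instead of from connected minions. A minimal roster could look like this (host names and addresses are of course made up):

    ```yaml
    # /etc/salt/roster -- targets for salt-ssh (hypothetical hosts)
    web1:
      host: 192.168.42.10
      user: admin
      sudo: True
    web2:
      host: 192.168.42.11
      user: admin
    ```

    One can then run something like salt-ssh 'web*' state.highstate to apply states over plain SSH, with no minion installed on the targets.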

    The peer communication system can be used to make a state conditional on the presence of a service on a different minion.

    While doing demos or even hacking on salt, one can use salt/test/minionswarm.py, which spawns fake minions; not everyone has hundreds of servers at their fingertips.

    Modules are loaded dynamically and smartly: for example, the git module gets loaded if a state installs git and the same highstate then uses the git module.

    Thomas explained the difference between grains and pillars: grains are data about a minion that live on the minion, pillars are data about the minion that live on the master. When handling grains, grains.setval can be useful (it writes to /etc/salt/grains as yaml, so you can also edit that file directly). If a minion is not reachable, one can obtain its grains information by replacing test=True with cache=True.
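    To make the distinction concrete, here is a sketch of the grains side (the grain names and values are invented for the example):

    ```yaml
    # /etc/salt/grains on the minion -- grain data, as written by
    # grains.setval, and editable by hand since it is plain yaml
    role: cpp-dev
    datacenter: paris
    ```

    Pillar data, by contrast, lives in .sls files on the master and is assigned to minions through the pillar top file, which makes it the right place for credentials and other data the minion should not own.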

    Thomas briefly presented saltstack-formulas: people want to "program" their states, and formulas answer this need, though some of the jinja2 required to make them flexible and programmable is overly complicated.

    While talking about the unified package commands (a salt command often has various backends depending on the system the minion runs on, so you don't have to learn FreeBSD's packaging tools), for example salt-call --local pkg.install vim, Thomas told this funny story: ironically, salt was nominated for "best package manager" in some Linux magazine competition.

    While hacking salt, one can take a look at the Event Bus (see test/eventlisten.py); many applications are possible using the data on this bus. Thomas talked about a future IOflow python module where complex logic could be implemented in the reactor with rules and a state machine. One example use: if the load is high on X servers and the number of connections on them reaches Y, then launch extra machines.

    To finish on a buzzword, someone asked: "what is the overlap between salt and docker?" The answer is not simple, but Thomas thinks that in the long run there will be a lot of overlap; one can check out the existing lxc modules and states.

    Wrap up

    To wrap up, Thomas announced a salt conference planned for January 2014 in Salt Lake City.

    Logilab proposes to bootstrap the French community around salt. As the group suggested, this could take the form of a mailing list, an IRC channel, a meetup group, some sprints, or a combination of all of the above. On that note, the next international sprint will probably take place in January 2014 around the salt conference.


  • Setup your project with cloudenvy and OpenStack

    2013/10/03 by Arthur Lutz

    One nice way of having a reproducible development or test environment is to "program" a virtual machine to do the job. If you have a powerful machine at hand you might use Vagrant in combination with VirtualBox. But if you have an OpenStack setup at hand (which is our case), you might want to set up and destroy your virtual machines on such a private cloud (or on a public cloud if you want or can). Sure, Vagrant has some plugins that should add OpenStack as a provider, but here at Logilab we have a clear preference for python over ruby. This is where cloudenvy comes into play.

    http://www.openstack.org/themes/openstack/images/open-stack-cloud-computing-logo-2.png

    Cloudenvy is written in python and, with some simple YAML configuration, can help you set up and provision virtual machines that contain your tests or your development environment.

    http://www.python.org/images/python-logo.gif

    Set up your authentication in ~/.cloudenvy.yml:

    cloudenvy:
      clouds:
        cloud01:
          os_username: username
          os_password: password
          os_tenant_name: tenant_name
          os_auth_url: http://keystone.example.com:5000/v2.0/
    

    Then create an Envyfile.yml at the root of your project:

    project_config:
      name: foo
      image: debian-wheezy-x64
    
      # Optional
      #remote_user: ec2-user
      #flavor_name: m1.small
      #auto_provision: False
      #provision_scripts:
        #- provision_script.sh
      #files:
        # files copied from your host to the VM
        #local_file : destination
    

    Now simply type envy up. Cloudenvy does the rest. It "simply" creates your machine, copies the files, runs your provision script and gives you its IP address. You can then run envy ssh if you don't want to be bothered with IP addresses and such nonsense (forget about copy and paste from the OpenStack web interface, or your nova show commands).

    A little added bonus: if you know your machine will run a web server on port 8080 at some point, set it up in your environment by defining your access rules in the same Envyfile.yml:

    sec_groups: [
        'tcp, 22, 22, 0.0.0.0/0',
        'tcp, 80, 80, 0.0.0.0/0',
        'tcp, 8080, 8080, 0.0.0.0/0',
      ]
    

    As you might know (or I'll just recommend it), you should be able to scratch and restart your environment without losing anything, so once in a while you'll just run envy destroy to do so. If you want multiple VMs with the same specs, go for envy up -n second-machine.

    The only downside right now: cloudenvy isn't packaged for Debian (which is usually a prerequisite for the tools we use), but let's hope it gets packaged soon (or maybe we'll end up doing it).

    Don't forget to include this configuration in your project's version control so that a colleague starting on the project can just type envy up and have a working setup.

    In the same vein, we've been trying out salt-cloud <https://github.com/saltstack/salt-cloud>, because provisioning machines with SaltStack is the way forward. A blog post about this is coming next.


  • DebConf13 report

    2013/09/25 by Julien Cristau

    As announced before, I spent a week last month in Vaumarcus, Switzerland, attending the 14th Debian conference (DebConf13).

    It was great to be at DebConf again, with lots of people I hadn't seen since New York City three years ago, and lots of new faces. Kudos to the organizers for pulling this off. These events are always a great boost for motivation, even if the amount of free time after coming back home is not quite as copious as I might like.

    One thing that struck me this year was the number of upstream people, not directly involved in Debian, who showed up. From systemd's Lennart and Kay, to MariaDB's Monty, and people from upstart, dracut, phpmyadmin or munin. That was a rather pleasant surprise for me.

    Here's a report on the talks and BoF sessions I attended. It's a bit long, but hey, the conference lasted a week. In addition to those I had quite a few chats with various people, including fellow members of the Debian release team.

    http://debconf13.debconf.org/images/logo.png

    Day 1 (Aug 11)

    Linux kernel: Ben Hutchings made a summary of the features added between 3.2 in wheezy and the current 3.10, and their status in Debian (some still need userspace work).

    SPI status: Bdale Garbee and Jimmy Kaplowitz explained what steps SPI is taking to deal with its growth, including recently getting help from a bookkeeper to relieve the pressure on the (volunteer) treasurer.

    Hardware support in Debian stable: If you buy new hardware today, it's almost certainly not supported by the Debian stable release. Ideas to improve this:

    • backport whole subsystems: probably not feasible, risk of regressions would be too high
    • ship compat-drivers, and have the installer automatically install newer drivers based on PCI ids; seems possible.
    • mesa: have the GL loader pick a different driver based on the hardware, and ship newer DRI drivers for the new hardware, without touching the old ones. Issue: need to update libGL and libglapi too when adding new drivers.
    • X drivers, drm: ? (it's complicated)

    Meeting between release team and DPL to figure out next steps for jessie. Decided to schedule a BoF later in the week.

    Day 2 (Aug 12)

    Munin project lead on new features in 2.0 (shipped in wheezy) and the roadmap for 2.2. Improvements on the scalability front (both in the number of nodes and the number of plugins on a node). Future work includes improving the UI to make it look less 1990 and moving some metadata to SQL.

    jeb on AWS and Debian: Amazon Web Services (AWS) includes compute (ec2), storage (s3), network (virtual private cloud, load balancing, ...) and other services. It is used by Debian for package rebuilds. http://cloudfront.debian.net is a CDN frontend for archive mirrors. Official Debian images are on ec2, including on the AWS marketplace front page. The build-debian-cloud tool from Anders Ingeman et al. was presented.

    openstack in Debian: Packaging work is focused on making things easy for newcomers, with basic config via debconf. Advanced users are going to use puppet or similar anyway. Essex is in wheezy, but end-of-life upstream. Grizzly is available in sid and in a separate archive for wheezy. This work is sponsored by enovance.

    Patents: http://patents.stackexchange.com; it looks like the USPTO has used comments made there when rejecting patent applications based on prior art. Patent applications are public, and it's a lot easier to get a patent application rejected than to invalidate a patent later on. Should we use that site? Help build momentum around it? Would other patent offices use that kind of research? Issues: looking at patent applications (and publicly commenting) might mean you're liable for treble damages if the patent is eventually granted? Can you comment anonymously?

    Why systemd?: Lennart and Kay. Popcorn, upstart trolling, nothing really new.

    Day 3 (Aug 13)

    dracut: dracut was presented by Harald Hoyer, its main developer. Seems worth investigating as a replacement for initramfs-tools, sharing the maintenance load. Different hooks though, so we'll need to coordinate this with various packages.

    upstart: More Debian-focused than the systemd talk. Not helped by Canonical's CLA...

    dh_busfactor: debhelper has essentially been a one-man show from the beginning, though various packages/people maintain different dh_* tools, either in the debhelper package itself or elsewhere. Joey is thinking about creating a debhelper team including those people. Concerns over increased breakage while people get up to speed (joeyh has 10 years of experience and still occasionally breaks stuff).

    dri3000: Keith is trying to fix dri2 issues. While dri2 fixed a number of things that were wrong with dri1, it still has some problems. One of the goals is to improve presentation: we need a way to sync between app and compositor (to avoid displaying incompletely drawn frames), avoid tearing, and let the app choose an immediate page flip instead of waiting for the next vblank if it missed its target (stutter in games is painful). He described this work on his blog.

    security team BoF: The aim was to explain the workflow, and to try to improve documentation of the process and of what people can do to help. http://security.debian.org/

    Day 4 (Aug 14)

    day trip, and conference dinner on a boat from Neuchatel to Vaumarcus

    Day 5 (Aug 15)

    git-dpm: Spent half an hour explaining git, then was rushed showing git-dpm itself. Still, it needs looking at. It lets you work with git and export changes as a quilt series to build a source package.

    Ubuntu daily QA: The goal was to make it possible for Canonical devs (not necessarily people working on the distro) to use ubuntu+1 (the dev release). They tried syncing from testing for a while, but noticed bug fixes being delayed: not good. In the previous workflow the dev release was unusable/uninstallable for the first few months. Multiarch made things even more problematic because it requires amd64/i386 to be in sync.

    • 12.04: a bunch of manpower thrown at ubuntu+1 to keep backlog of technical debt under control.
    • 12.10: prepare infrastructure (mostly launchpad), add APIs, to make non-canonical people able to do stuff that previously required shell access on central machines.
    • 13.04: proposed migration. britney is used to migrate packages from devel-proposed to devel. A few teething problems at first, but good reaction.
    • 13.10 and beyond: autopkgtest runs triggered after upload/build, also for rdeps. Phased updates for stable releases (rolled out to a subset of users and then gradually generalized). Hook into errors.ubuntu.com to match new crashes with package uploads. Generally more continuous integration. Better dashboard. (Some of that is still to be done.)

    Lessons learned from Debian:

    • unstable's backlog can get bad → proposed is only used for builds and automated tests, no delay
    • transitions can take weeks at best
    • to avoid dividing human attention, devs are focused on devel, not devel-proposed

    Lessons Debian could learn:

    • keeping testing current is a collective duty/win
    • splitting users between testing and unstable has important costs
    • hooking automated testing into britney is really powerful; there's a small but growing number of automated tests

    Ideas:

    • cut migration delay in half
    • encourage writing autopkgtests
    • end goal: make sid to testing migration entirely based on automated tests

    Debian tests using Jenkins http://jenkins.debian.net

    • https://github.com/h01ger/jenkins-job-builder
    • Only running amd64 right now.
    • Uses jenkins plugins: git, svn, log parser, html publisher, ...
    • Has existing jobs for installer, chroot installs, others
    • Tries to make it easy to reproduce jobs, to allow debugging
    • {c,sh}ould add autopkgtests

    Day 6 (Aug 16)

    X Strike Force BoF: Too many bugs we can't do anything about: {mass,auto}-close them, asking people to report upstream. Reduce distraction by moving the non-X stuff to separate teams (compiz removed instead, wayland to be discussed...). We should keep drivers as close to upstream as possible. A couple of people in the room volunteered to handle the intel, ati and input drivers.

    reclass BoF

    I had missed the talk about reclass, and Martin kindly offered to give a followup BoF to show what reclass can do.

    Reclass provides adaptors for puppet(?), salt and ansible. A yaml file describes each host:

    • can declare applications and parameters
    • host is leaf in a dag/tree of classes

    Lets you put the data in reclass instead of the config management tool, keeping generic templates in ansible/salt.
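    As a sketch of what such a node definition could look like (class, application and parameter names invented for the example):

    ```yaml
    # nodes/web1.example.com.yml -- a leaf in the reclass class tree
    classes:
      - hosting.webserver
    applications:
      - nginx
    parameters:
      nginx:
        worker_processes: 4
    ```

    The classes pull in shared defaults, and the parameters override them per host, so the salt or ansible side only keeps generic templates.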

    I'm definitely going to try this and see if it makes it easier to organize data we're currently putting directly in salt states.

    release BoF: Notes are on http://gobby.debian.org. Basic summary: "Releasing in general is hard. Releasing something as big/diverse/distributed as Debian is even harder." Who knew?

    freedombox: status update from Bdale

    Keith Packard showed off the free software he uses in his and Bdale's rockets adventures.

    This was followed by a birthday party in the evening, as Debian turned 20 years old.

    Day 7 (Aug 17)

    x2go: Notes are on http://gobby.debian.org. To be solved: issues with the nx libs (a gpl fork of old x). Seems like a good thing to try as an alternative to LTSP, which we use at Logilab.

    lightning talks

    • coquelicot (lunar) - one-click secure(ish) file upload web app
    • notmuch (bremner) - need to try that again now that I have slightly more disk space
    • fedmsg (laarmen) - GSoC, message passing inside the debian infrastructure

    Debconf15 bids:

    • Mechelen/Belgium - Wouter
    • Germany (no city yet) - Marga

    Debconf14 presentation: Will be in Portland (Portland State University) next August. Presentation by vorlon, harmoney, keithp. Looking forward to it!

    Closing ceremony

    The videos of most of the talks can be downloaded, thanks to the awesome work of the video team. And if you want to check what I didn't see or talk about, check the complete schedule.

