Blog entries by Arthur Lutz 
On the 6th of February, the Salt community in France met in Paris to
discuss Salt and choose the tools to federate itself. The
meetup was kindly hosted by IRILL.
There were two formal presentations:
- Logilab did a short introduction of Salt,
- Majerti presented feedback on their experience with Salt in various professional contexts.
The presentation space was then opened to other participants and
Boris Feld did a short
presentation of how Salt was used at NovaPost.
We then had a short break to share some pizza (sponsored by Logilab).
After the break, we had some open discussion about various subjects,
including "best practices" in Salt and some specific use
cases. Regis Leroy talked about the states that Makina Corpus has been
publishing on github. The idea of reconciling the documentation and the
monitoring of an infrastructure was brought up by Logilab, which calls it
"Test Driven Infrastructure".
The tools we collectively chose to form the community were the following:
- a mailing-list kindly hosted by the AFPY (the French Python association)
- a dedicated #salt-fr IRC channel on freenode
We decided that the meetup would take place every two months, hence the third
one will be in April. There is already some discussion about organizing
events to tell as many people as possible about Salt. It will probably start
with an event at NUMA in March.
After the meetup was officially over, a few people went on to have
some drinks nearby. Thank you all for coming and for participating.
Last week, on the first day of OpenWorldForum 2013, we met up with
Thomas Hatch of SaltStack to have a talk about salt. He was in Paris
to give two talks the following day (1 & 2), and it was a
great opportunity to meet him and physically meet part of the French
Salt community. Since Logilab hosted the Great Salt Sprint in Paris, we offered to
co-organise the meetup at OpenWorldForum.
About 15 people gathered in Montrouge (near Paris) and we all took
turns to present ourselves and how or why we used salt. Some people
wanted to migrate from BCFG2 to salt. Some people told the story of
working for a month with CFEngine, achieving the same functionality in
two days with salt, and so deciding to go with salt instead. Some like
salt because they can hack its python code. Some use salt to provision
pre-defined AMI images for the cloud
(salt-ami-cloud-builder). Some chose
salt over Ansible. Some want to
use salt to pilot temporary computation clusters in the cloud (sort of
like what StarCluster does
with boto and ssh).
When Paul from Logilab introduced salt-ami-cloud-builder, Thomas
Hatch said that some work is being done to go all the way: build an
image from scratch from a state definition. On the question of Debian
packaging, some effort could be put into getting salt into
wheezy-backports. Julien Cristau from Logilab, who is a Debian
developer, might help with that.
Some untold stories were shared: some companies have replaced puppet
with salt, some use salt to control an HPC cluster, and some use salt
to pilot their existing puppet system.
We had some discussions around salt-cloud, which will probably
be merged into salt at some point. One idea for salt-cloud was raised:
have a way of defining a "minimum" type of configuration which translates
into profiles according to which provider is used (an issue should
be added shortly). The expression "pushing states" was often used; it
is probably a good way of looking at the combination of salt-cloud
and the masterless mode available with salt-ssh. salt-cloud controls
an existing cloud, but Thomas Hatch pointed out that with salt-virt,
salt is becoming a cloud controller itself;
more on that soon.
Mixing 'public' and 'private' pillar definitions can be
tricky. Some solutions exist with multiple gitfs (or
mercurial) external pillar definitions, but more use cases will drive
more flexible functionalities in the future.
For those in the audience who were not (yet) users of salt, Thomas
went back to explaining a few basics about it. Salt should be seen as
a "toolkit to solve problems in an infrastructure", says Thomas
Hatch. Why is it fast? Because it is completely asynchronous and event
driven.
He gave a quick presentation about the new salt-ssh, introduced in
0.17, which allows the application of salt recipes to machines that
don't have a minion connected to the master.
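For example, with a minimal roster file (a sketch; the name and address are made up):

# /etc/salt/roster
web1:
  host: 203.0.113.10
  user: root

salt-ssh 'web1' test.ping
salt-ssh 'web1' state.highstate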
The peer communication system can be used to condition a state on the
presence of a service on a different minion.
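As a sketch of the idea (minion ids, service and state names are made up), the master must first allow the peer call:

# /etc/salt/master
peer:
  .*:
    - service.status

A state template can then condition on another minion:

{% if salt['publish.publish']('db1', 'service.status', 'postgresql').get('db1') %}
haproxy:
  service.running
{% endif %}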
While doing demos or even hacking on salt, one can use
salt/test/minionswarm.py, which spawns fake minions; not everyone has
hundreds of servers at their fingertips.
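For example (option names from memory, check --help):

python salt/test/minionswarm.py -m 20 --master localhost

spawns 20 fake minions pointed at the given master.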
Modules are loaded dynamically, in a smart way: for example, the git
module gets loaded if a state installs git and then uses the git
module in the same highstate.
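For instance, a state along these lines (repository URL and target path are examples) works within a single highstate:

git:
  pkg.installed

https://example.com/project.git:
  git.latest:
    - target: /srv/project
    - require:
      - pkg: git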
Thomas explained the difference between grains and pillars: grains are
data about a minion that lives on the minion, pillar is data about the
minion that lives on the master. When handling grains, grains.setval
can be useful (it writes to /etc/salt/grains as yaml, so you can also
edit it separately). If a minion is not reachable, one can obtain its
grains information from the cache by replacing test=True with cache=True.
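For example (the grain name and value are made up):

salt '*' grains.setval role webserver

which writes role: webserver to /etc/salt/grains on each targeted minion.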
Thomas briefly presented saltstack-formulas: people want to "program"
their states, and formulas answer this need; some of the jinja2 in them
is fairly complicated, in order to make them flexible and programmable.
While talking about the unified package commands (a salt command often
has various backends according to what system runs the minion, so you
don't have to learn how to use FreeBSD's packaging tools), for
example salt-call --local pkg.install vim, Thomas told this funny
story: ironically, salt was nominated for "best package manager" in
some linux magazine competition.
While hacking salt, one can take a look at the Event Bus (see
test/eventlisten.py); many applications are possible using the
data on this bus. Thomas talked about a future IOflow python module
where complex logic could be implemented in the reactor with rules and
a state machine. One example use would be: if the load is high
on X servers and the number of connections on those servers reaches Y,
then launch extra machines.
To finish on a buzzword, someone asked: "what is the overlap of salt
and docker?" The answer is not simple, but Thomas thinks that in the
long run there will be a lot of overlap; one can check out the
existing lxc modules and states.
To wrap up, Thomas announced a salt conference
planned for January 2014 in Salt Lake City.
Logilab proposes to bootstrap the French community around salt. As the
group suggested, this could take the form of a mailing list, an irc
channel, a meetup group, some sprints, or a combination of all of the
above. On that note, the next international sprint will probably take
place in January 2014 around the salt conference.
One nice way of having a reproducible development or test environment is to "program" a virtual machine to do the job. If you have a powerful machine at hand you might use Vagrant in combination with VirtualBox. But if you have an OpenStack setup at hand (which is our case), you might want to set up and destroy your virtual machines on such a private cloud (or public cloud if you want or can). Sure, Vagrant has some plugins that should add OpenStack as a provider, but here at Logilab we have a clear preference for python over ruby. So this is where cloudenvy comes into play.
Cloudenvy is written in python and with some simple YAML configuration can help you setup and provision some virtual machines that contain your tests or your development environment.
Set up your authentication in ~/.cloudenvy.yml:
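It should look something like this (a sketch; credentials and endpoint are placeholders):

cloudenvy:
  clouds:
    cloud01:
      os_username: username
      os_password: password
      os_tenant_name: tenant_name
      os_auth_url: http://keystone.example.com:5000/v2.0/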
Then create an Envyfile.yml at the root of your project, describing the
image to boot, the files copied from your host to the VM
(local_file : destination) and the provision script to run.
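A minimal sketch (project name, image and file names are examples; check the cloudenvy README for the exact keys):

project_config:
  name: myproject
  image: Ubuntu 12.04 cloudimg amd64
  remote_user: ubuntu
  flavor_name: m1.small
  provision_scripts:
    - provision.sh
  files:
    # files copied from your host to the VM
    # local_file : destination
    settings.py: '~/settings.py'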
Now simply type envy up. Cloudenvy does the rest. It "simply" creates your machine, copies the files, runs your provision script and gives you its IP address. You can then run envy ssh if you don't want to be bothered with IP addresses and such nonsense (forget about copy and paste from the OpenStack web interface, or your nova show commands).
Little added bonus: you know your machine will run a web server on port 8080 at some point, so set it up in your environment by defining your access rules in the same Envyfile.yml:
'tcp, 22, 22, 0.0.0.0/0',
'tcp, 80, 80, 0.0.0.0/0',
'tcp, 8080, 8080, 0.0.0.0/0',
As you might know (or I'll just recommend it), you should be able to scratch and restart your environment without losing anything, so once in a while you'll just run envy destroy to do so. You might want to have multiple VMs with the same specs; then go for envy up -n second-machine.
Only downside right now : cloudenvy isn't packaged for debian (which is usually a prerequisite for the tools we use), but let's hope it gets some packaging soon (or maybe we'll end up doing it).
Don't forget to include this configuration in your project's version control so that a colleague starting on the project can just type envy up and have a working setup.
In the same vein, we've been trying out salt-cloud <https://github.com/saltstack/salt-cloud>, because provisioning machines with SaltStack is the way forward. A blog post about this is coming next.
Last Friday, we hosted the French event for the international Great Salt Sprint. Here is a report on what was done and discussed on this occasion.
We started off by discussing various points that were of interest to the participants:
- automatically write documentation from salt sls files (for Sphinx)
- salt-mine: add a security layer with restricted access (bugs #5467 and #6437)
- test compatibility of salt-cloud with openstack
- module bridge bug correction: traceback on KeyError
- setting up the network in debian (equivalent of rh_ip)
- configure existing monitoring solution through salt (add machines, add checks, etc) on various backends with a common syntax
We then split up into pairs to tackle issues in small groups, with some general discussions from time to time.
6 people participated, 5 from Logilab and 1 from nbs-system. We were expecting more participants, but some couldn't make it at the last minute, or thought the sprint was taking place at some other time.
Unfortunately we had a major electricity blackout all afternoon; some of us switched to battery and 3G tethering to carry on, but that couldn't last all afternoon. We ended up talking about design and use cases. ERDF (the French electricity distribution company) ended up bringing generator trucks for the neighborhood!
Some unfinished draft code for supervision backends was written and pushed on github. We explored how a common "interface" could be done in salt (using a combination of states and __virtual__). The official documentation was often very useful, and reading code was also always a good resource (the code is really readable).
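To illustrate the mechanism, here is a minimal sketch of an execution module gated by __virtual__ (module, backend and function names are hypothetical; this is not the code from the sprint):

# monitoring.py - a minimal salt execution module
try:
    import supervision_backend      # hypothetical backend library
    HAS_BACKEND = True
except ImportError:
    HAS_BACKEND = False

def __virtual__():
    # only load this module when its backend is available on the minion
    return 'monitoring' if HAS_BACKEND else False

def add_check(name, command):
    '''Add a check to whichever monitoring backend this minion runs.'''
    return supervision_backend.add_check(name, command)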
While we were working around the power blackout, Benoit submitted a bug fix.
The idea is to couple the SLS description and the current state of the salt master to generate documentation about one's infrastructure using Sphinx. This was transmitted to the mailing-list.
Design was done around which information should be extracted and displayed, and how to configure access control to the salt-master; taking a further look at external_auth and salt-api will probably be the way forward.
We had general discussions around concepts of access control to a salt master, and how to define this access. One of the things we believe to be missing (but haven't checked thoroughly) is the ability to separate the "read-only" operations from the "read-write" operations in states and modules; if this was done (through decorators?) we could easily tell salt-api to only give access to data collection. Complex access scenarios were discussed. Having a configuration or external_auth based on ssh public keys (similar to mercurial-server, which provides a "limited" shell to a mercurial server) would be nice.
The power blackout didn't help us get things done, but nevertheless some sharing was done around our use cases with SaltStack and the features that we'd want to get out of it (or from third party applications). We hope to convert all the discussions into bug reports or further discussion on the mailing-lists and (obviously) into code and pull-requests. Check out the scoreboard for an overview of how the other cities contributed.
Pylint - the world renowned Python code static checker - now has a
landing page: http://www.pylint.org
We've tried to summarize all the things a newcomer should know about
pylint. We hope it reflects the diversity of its uses and support channels.
Note that pylint is not hosted on github or another well-known forge, since we firmly believe in a decentralized architecture for the web.
This applies especially to open source software development. Pylint's development is self-hosted on a forge and its code is version-controlled with mercurial, a distributed version control system (DVCS). Both tools are free software written in python.
We know centralized (and closed source) platforms for managing
software projects can make things easier for contributors. We have
enabled a mirror on bitbucket (and one for pylint-brain) so as to ease
forks and pull requests. Pull requests can be made there, and even from a
self-hosted mercurial (with a quick email to the mailing-list).
Feel free to add your comments or feedback below.
LodgeIt is a simple open source pastebin... and it's written in Python!
The installation under debian/ubuntu goes as follows:
sudo apt-get update
sudo apt-get -uVf install python-imaging python-sqlalchemy python-jinja2 python-pybabel python-werkzeug python-simplejson
hg clone http://dev.pocoo.org/hg/lodgeit-main
For debian squeeze you have to downgrade python-werkzeug, so get the
old version of python-werkzeug from snapshot.debian.org.
Modify the dburi and the SECRET_KEY (they live in manage.py in the version we installed); for example, with placeholder values:
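dburi = 'sqlite:////var/lib/lodgeit/lodgeit.db'
SECRET_KEY = 'pick-a-long-random-string'

Then launch the application: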
python manage.py runserver
Then off you go to configure your apache or lighttpd.
An easy (and dirty) way of running it at startup is to add the
following command to the www-data crontab:
@reboot cd /tmp/; nohup /usr/bin/python /usr/local/lodgeit-main/manage.py runserver &
This should of course be done in an init script.
Hopefully we'll find some time to package this nice webapp for debian.
With the release of Ubuntu Lucid Lynx, an encrypted /home has become a pretty common and simple thing to set up. This is good news for privacy reasons, obviously. The next step, which a lot of users are reluctant to take, is using an encrypted swap. One of the most obvious reasons is that in most cases it breaks the suspend and hibernate functions.
Here is a little HOWTO on how to switch from normal swap to encrypted swap and back. That way, when you need a secure laptop (a trip to a conference, or a situation with a risk of theft) you can activate it, and then deactivate it when you're at home, for example.
Encrypting the swap is pretty simple: the ecryptfs tools do all the work (assuming ecryptfs-utils is installed, it should be a one-liner):
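sudo ecryptfs-setup-swap

Going back to normal swap takes a few more steps.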
The idea is to turn off swap, remove the ecryptfs layer, reformat your partition with normal swap and enable it. We use sda5 as an example for the swap partition; please use your own (fdisk -l will tell you which swap partition you are using - or look in /etc/crypttab).
sudo swapoff -a
sudo cryptsetup remove /dev/mapper/cryptswap1
sudo vim /etc/crypttab
*remove the /dev/sda5 line*
sudo /sbin/mkswap /dev/sda5
sudo swapon /dev/sda5
sudo vim /etc/fstab
*replace /dev/mapper/cryptswap1 with /dev/sda5*
If this is useful, you can probably stick it in a script to turn encryption on and off... maybe we could get an ecryptfs-unsetup-swap into ecryptfs.
Logilab is proud to announce that the blog entries published on the blogs of http://www.logilab.org and http://www.cubicweb.org are now licensed under a Creative Commons Attribution-Share Alike 2.0 License (check out the footer).
We often use creative commons licensed photographs to illustrate this blog, and felt that being developers of open source software it was quite logical that some of our content should be published under a similar license. Some of the documentation that we release also uses this license, for example the "Building Salome" documentation. This license footer has been integrated to the cubicweb-blog package that is used to publish our sites (as part of cubicweb-forge).
We're very happy to be hosting the next mercurial sprint in our brand new offices in central Paris. It is quite an honor to be chosen when the other contender was Google.
So a bunch of mercurial developers are heading to our offices this coming Friday to sprint for three days on mercurial. We use mercurial a lot here at Logilab, and we also contribute a tool to visualize and manipulate mercurial repositories: hgview.
To check out the things that we will be working on with the mercurial crew, check out the program of the sprint on their wiki.
What is a sprint? "A sprint (sometimes called a Code Jam or hack-a-thon) is a short time period (three to five days) during which software developers work on a particular chunk of functionality. 'The whole idea is to have a focused group of people make progress by the end of the week,' explains Jeff Whatcott" [source]. For geographically distributed open source communities, it is also a way of physically meeting and working in the same room for a period of time.
Sprinting is a practice that we encourage at Logilab. With CubicWeb we organize open sprints as often as possible, which is an opportunity for users and developers to come and code with us. We even use the sprint format for some internal work.
photo by Sebastian Mary under creative commons licence.
For the release of hgview 1.2.0 in our Karmic Ubuntu repository, we would like to announce that we are now going to generate packages for the following distributions:
- Debian Lenny (because it's stable)
- Debian Sid (because it's the dev branch)
- Ubuntu Hardy (because it has Long Term Support)
- Ubuntu Karmic (because it's the current stable)
- Ubuntu Lucid (because it's the next stable) - no repo yet, but soon...
The old packages in the previously supported distributions are still accessible (etch, jaunty, intrepid), but new versions will not be generated for these repositories. Packages will be coming in as versions get released; if you need a package before that, give us a shout and we'll see what we can do.
For instructions on how to use the repositories for Ubuntu or Debian, go to the following page: http://www.logilab.org/card/LogilabDebianRepository
With the new version of CubicWeb deployed on our "public" sites, we would like to welcome a new (much awaited) functionality: you can now register directly on our websites. Getting an account will give you access to a bunch of functionalities:
- registering to a project's activity will get you automated email reports of what is happening on that project
- you can directly add tickets on projects instead of talking about them on the mailing lists
- you can bookmark content
- tag stuff
- and much more...
This is also a way of testing out the CubicWeb framework (in this case the forge cube) which you can take home and host yourself (debian recommended). Just click on the "register" link on the top right, or here.
Photo by wa7son under creative commons.
As you might have noticed, we quite like munin. We use it quite a bit to monitor how our servers and services are doing. One of the things we like about munin is obviously that the plugins can be written in python (as well as perl, bash and ruby).
On a few recent servers we started playing with IPMI to read sensors for temperature, watts, fan RPMs, etc. So we went out looking for a munin plugin for that, and found Peter Palfrader's ruby plugins. There was one small glitch though, we came across a simple bug: "ipmitool -I open sensor" can take a really long time to execute on certain machines, so configuring the plugin was a bit painful, and so was running it. Changing the ruby code was a bit tricky since we don't really know ruby... so we did a quick rewrite of the plugin in python, with a few optimizations.
It's not really complete but works for us, and might be useful to you, so we're publishing the hg repo. You can get the tgz or browse the source.
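For the curious, the skeleton of a munin plugin in python is tiny. Here is a minimal sketch (the field name and value are made up; this is not the published plugin):

#!/usr/bin/env python
import sys

def read_sensor():
    # the real plugin parses the output of "ipmitool -I open sensor"
    # (and caches it, since that command can be slow)
    return 42.0

if len(sys.argv) > 1 and sys.argv[1] == 'config':
    # munin calls the plugin with "config" to get graph and field definitions
    print 'graph_title IPMI temperature'
    print 'graph_vlabel degrees Celsius'
    print 'temp.label CPU temperature'
else:
    # normal run: print current values
    print 'temp.value %.1f' % read_sensor()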
Logilab.org has almost reached a thousand tickets on Logilab's open source projects. To be exact, there are 940 tickets right now. What kind of tickets are they?
Here is a quick graph of the state of the tickets in our tracker:
Graphing is neat. Maybe soon we'll get this kind of feature automatically in the CubicWeb forge, see this ticket.
Being big fans of debian, we are impatiently awaiting the new stable release of the distribution: lenny. Finding it pretty difficult to find information about when it was expected to be released, I asked a colleague if he knew. He's a debian developer, so I thought he might have the info. And he did: according to the debian.devel mailing list, the release should happen on the 14th of February 2009. In other words: in 5 days!
There are a few geeky emails about the release date if you have time to read the threads.
The version convention that we use is pretty straightforward and standard: it's composed of 3 numbers separated by dots. What are the rules for incrementing each one of these numbers?
- The last number is incremented when bugs are corrected
- The middle number is incremented when stories (functionalities) are added to the software
- The first number is incremented when we have a major change of technology
Well... if you've been paying attention, apycot just turned 1.0.0; the major change of technology is that it is now integrated with CubicWeb (instead of just generating html files). So for a project in your forge, you describe its apycot configuration, and the quality assurance tests are launched on a regular basis. We're still in the process of stabilizing it (the latest right now is 1.0.5), but it already runs on the CubicWeb projects, see the screenshot below:
You should also know that apycot now has two components: apycotbot, which runs the tests, and cubicweb-apycot, which displays the results in cubicweb (download cubicweb-apycot-1.0.5.tar.gz and apycotbot-1.0.5.tar.gz).
We've always been big fans of debian here at Logilab. So publishing debian packages for our open source software has always been a priority.
We're now a bit involved with Ubuntu, work with it on some client projects, have a few Ubuntu machines lying around, and we like it too. So we've decided to publish our packages for Ubuntu as well as for debian.
In the 0.12.1 version of logilab-devtools we introduced publishing of Ubuntu packages with lgp (Logilab Packaging) - see the ticket. Since then, you can add the following source to your Ubuntu system:
deb http://ftp.logilab.org/dists hardy/
For now, only hardy is up and running, give us a shout if you want something else!
We have a public forum that is accessible both using XMPP (jabber) or IRC.
The more we use mercurial to manage our code repositories, the more we enjoy its extended functionalities. Lately we've been playing with branches, which turn out to be very useful. We also use hgview instead of the built-in "hg view" command, and its latest release supports branches: you can filter the view by the branch you want to look at. Update your installation (apt-get upgrade?) to enjoy this new functionality... or download it.
We've decided to go to EuroPython this year. We're obviously going to give a talk about the exciting things we're doing with LAX and Google AppEngine. We're on Wednesday at midday in the alfa room; check out the schedule here. Since we think it's important that these events take place, we're also chipping in and sponsoring the event.
We hope to see you there. Drop us a note if you want to meet up.
Here at Logilab we find Munin pretty useful. We monitor a lot of machines and a lot of services with it, and it usually gives us useful indicators over time that guide us towards optimizations.
One of the reasons we adopted this technology is its modular approach with the plugin architecture. And when we realized we could write plugins in python, we knew we'd like it. After years of using it, we're now actually writing plugins for it. Optimizing zope and zeo servers is not an easy task, so we're developing plugins to be able to see the difference between before and after changing things.
You can check out the project here, and download it from the ftp.
Previous documentation was merged into a LAX Book now featuring step-by-step screenshots to get up and running faster.
Don't we all like screenshots...
Update: LAX is now included in the CubicWeb semantic web framework.
LAX version 0.3.0 was released today, see http://lax.logilab.org/
Get a new application running in ten minutes with the install guide
and the tutorial:
Update: LAX is now included in the CubicWeb semantic web framework.
After almost 2 years of inactivity, here is a new release of apycot, the "Automated Pythonic Code Tester". We use it every day to maintain our software quality, and we hope this tool can help you as well.
Admittedly it's not trivial to setup, but once it's running you'll be able to count on it. We're working on getting it to work "out-of-the-box"...
Here's what's in the ChangeLog :
- 2008-05-19 -- 0.11.0
- updated documentation
- new pylintrc option for the python_lint checker
- added code to disable checkers with a missing required option, with the
  proper ERROR status
- removed the catalog option of the xml_valid checker; this feature can now
  be handled with the XML_CATALOG_FILE environment variable (see libxml2
  doc for details)
- moved xml tool from python-xml to lxml
- new 'hourly' mode for running tests
- new 'test_activity_report' report
- pylint checker supports the new disable_msg and show_categories options
  (show_categories defaults to the Error and Fatal categories to avoid ...)
- the activity option "days" has been renamed to "time" and corresponds
  to a number of days in daily mode but to a number of hours in hourly
  mode
- fixed debian_lint and debian_piuparts to actually do something...
- fixed docutils checker for recent docutils versions
- dropped python 2.2/2.3 compat (to run apycot itself)
- added output redirectors to the debian preprocessor to avoid ...
- can use regular expressions in <pp>_match_* options
Three of us from Logilab are going to San Francisco to listen, share and discuss at Google I/O.
It's a two-day developer gathering in San Francisco, with various talks about google technologies: http://code.google.com/events/io/
We're hoping to show and talk about LAX (http://lax.logilab.org), which uses Google AppEngine.
Here are a few pictures from the sprint we organized at PyCon-FR.
We got a few people to install Google AppEngine and LAX on their machines, and explained the concepts at hand to a bunch of other people.
Update: LAX is now included in the CubicWeb semantic web framework.
This is how easy it is to get lax running on your linux machine:
hg clone http://www.logilab.org/hg/lax/
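then launch the development server that comes with the Google AppEngine SDK (assuming it is installed; the exact invocation may differ):

python dev_appserver.py lax/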
Point your favorite browser to http://localhost:8080/
UPDATE: LAX is now included in the CubicWeb semantic web framework.
CMFProjman has been asleep for quite a while, and is now being reanimated to work with Plone2. We will release it as soon as we see it's stable.