Blog entries

  • "Gestion d'entrepôts et de paquets Debian non officiels" aux rencontres Debian Nantes

    2017/02/09 by Arthur Lutz

    This article summarizes the experience report by Arthur Lutz (Logilab) on managing unofficial Debian repositories and packages, presented at the Debian Nantes meetup in February 2017, with live additions from Cyril Brulebois.

    https://www.logilab.org/file/2269692/raw/debian_nantes.png

    Objectives

    • distribute software that does not need to get into Debian
    • deliver to clients (via password-protected https)
    • prepare backports
    • change compilation options
    • enable modules/plugins
    • build for a specific Debian release (e.g. wheezy-backports built on jessie)
    • reduce manual operations
    • keep the automation flexible (be able to switch to manual at any time, replay a step, etc.)
    • progressively fix the errors reported by lintian

    Fetching the sources and the packaging

    • dget
    • debcheckout (uses the Vcs-* fields: bzr, git, etc.)
    • apt-get source
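
    As a quick sketch (the package name and .dsc URL are only illustrative), each of these tools is used roughly as follows:

    # from a .dsc URL
    dget http://deb.debian.org/debian/pool/main/h/hello/hello_2.10-2.dsc
    # from the Vcs-* fields declared in the packaging
    debcheckout hello
    # from the deb-src entries of your apt configuration
    apt-get source hello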

    Building locally

    • dpkg-buildpackage
    • pdebuild (wrapper for the following tools)
    • pbuilder (in a chroot)
    • sbuild (the official one, used on the buildd network)
    • cowbuilder
    • logilab-packaging (lgp)
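
    As a sketch, a clean-chroot build with pbuilder/pdebuild could look like this (the target distribution and source tree are just examples):

    # create the build chroot once
    sudo pbuilder create --distribution jessie
    # then build from the unpacked source tree, inside the chroot
    cd hello-2.10 && pdebuild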

    Repository management

    Repositories for other technologies

    Future


  • Debian Lenny release date - almost there?

    2009/02/09 by Arthur Lutz
    http://www.debian.org/logos/openlogo-nd-50.png

    Being big fans of Debian, we are impatiently awaiting the new stable release of the distribution: lenny. Since it was pretty difficult to find information about the expected release date, I asked a colleague if he knew. He's a Debian developer, so I thought he might have the info. And he did: according to the debian.devel mailing list, the release should happen on the 14th of February 2009. In other words: in 5 days!

    http://thread.gmane.org/gmane.linux.debian.devel.announce/1318

    There are a few geeky emails about the release date, if you have time to read the threads.

    http://www.sinologic.net/wp-content/uploads/2008/08/lenny_debian.jpg

  • The DEBSIGN_KEYID trick

    2010/05/12 by Nicolas Chauvat

    I have been wondering for some time why debsign would not use the DEBSIGN_KEYID environment variable that I exported from my bashrc. Debian bug 444641 explains the trick: debsign ignores environment variables and sources ~/.devscripts instead. A simple export DEBSIGN_KEYID=ABCDEFG in ~/.devscripts is enough to get rid of the -k argument once and for all.
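
    In practice (the key id and the .changes file name are placeholders):

    echo 'export DEBSIGN_KEYID=ABCDEFG' >> ~/.devscripts
    debsign mypackage_1.0-1_amd64.changes   # no more -k needed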


  • Belier - multi-hop ssh

    2009/02/17 by Arthur Lutz

    We just discovered belier, which makes it easy to connect to machines that have to be reached through intermediate ssh hosts. It can come in handy. What's more, it's written in Python. What's more, the author made Debian packages for it... and he even mentions pylint. So it deserves a mention here.

    http://www.ohmytux.com/belier/images/schema_belier.png

  • We're now publishing for Ubuntu as well

    2009/01/26 by Arthur Lutz
    http://www.ubuntu.com/themes/ubuntu07/images/ubuntulogo.png

    We've always been big fans of Debian here at Logilab, so publishing Debian packages for our open source software has always been a priority.

    We're now a bit involved with Ubuntu: we work with it on some client projects, have a few Ubuntu machines lying around, and we like it too. So we've decided to publish our packages for Ubuntu as well as for Debian.

    In version 0.12.1 of logilab-devtools we introduced publishing of Ubuntu packages with lgp (Logilab Packaging) - see ticket. Since then, you can add the following Ubuntu source to your Ubuntu system:

    deb http://ftp.logilab.org/dists hardy/
    

    For now, only hardy is up and running; give us a shout if you want something else!
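
    For example (hgview being one of the packages we publish):

    echo 'deb http://ftp.logilab.org/dists hardy/' | sudo tee -a /etc/apt/sources.list
    sudo apt-get update
    sudo apt-get install hgview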


  • New supported repositories for Debian and Ubuntu

    2010/01/21 by Arthur Lutz

    With the release of hgview 1.2.0 in our Karmic Ubuntu repository, we would like to announce that we are now going to generate packages for the following distributions:

    • Debian Lenny (because it's stable)
    • Debian Sid (because it's the dev branch)
    • Ubuntu Hardy (because it has Long Term Support)
    • Ubuntu Karmic (because it's the current stable)
    • Ubuntu Lucid (because it's the next stable) - no repo yet, but soon...
    http://img.generation-nt.com/ubuntulogo_0080000000420571.png

    The old packages for the previously supported distributions (etch, jaunty, intrepid) are still accessible, but new versions will not be generated for these repositories. Packages will be coming in as new versions get released; if you need a package before that, give us a shout and we'll see what we can do.

    For instructions on how to use the repositories for Ubuntu or Debian, go to the following page: http://www.logilab.org/card/LogilabDebianRepository


  • Making the most of your CPUs with Zope/Zeo/Debian

    2008/05/27 by Arthur Lutz

    Here is a quick note on how to take advantage of a multi-core dual-processor machine with zope/zeo/pound... all with Debian commands.

    Inspired by: http://plone.org/documentation/how-to/simple-zope-clustering-with-squid-and-pound

    http://plone.org/documentation/tutorial/introduction-to-the-zodb/zeo%20diagram.png
    • apt-get -uVf install plone-site pound

    • dzhandle -z 2.10 make-zeoinstance sgel_zeo

    • dzhandle -z 2.10 make-instance sgel2 --zeo-server=localhost:8100 -m all

    • dzhandle -z 2.10 make-instance sgel3 --zeo-server=localhost:8100 -m all

    • dzhandle -z 2.10 make-instance sgel1 --zeo-server=localhost:8100 -m all

    • dzhandle -z 2.10 make-instance sgel4 --zeo-server=localhost:8100 -m all

    • edit the ports of each instance (for example 9673, 9674, 9675, 9676)

    • vim ~/zope/instances/sgel*/etc/zope.conf

    • dzhandle add-product sgel1 CMFPlone

    • dzhandle add-product sgel2 CMFPlone

    • dzhandle add-product sgel3 CMFPlone

    • dzhandle add-product sgel4 CMFPlone

    • dzhandle zeoctl sgel_zeo start

    • dzhandle zopectl sgel1 start

    • dzhandle zopectl sgel2 start

    • dzhandle zopectl sgel3 start

    • dzhandle zopectl sgel4 start

    • vim /etc/pound/pound.cfg to replace

      BackEnd
              Address 127.0.0.1
              Port    8080
      End
      

      with

      Service
              BackEnd
                      Address 127.0.0.1
                      Port    9673
              End
              BackEnd
                      Address 127.0.0.1
                      Port    9674
              End
              BackEnd
                      Address 127.0.0.1
                      Port    9675
              End
              BackEnd
                      Address 127.0.0.1
                      Port    9676
              End
      End
      
    • /etc/init.d/pound restart

    • browse to http://localhost:8080

    • add a Plone site

    To test it, run htop to watch the activity, and compare the difference between:

    • apt-get -uVf install apache2-utils
    • /usr/sbin/ab -n 100 -c 100 localhost:8080/plone

    and

    • /usr/sbin/ab -n 100 -c 100 localhost:9673/plone

    nice!


  • Openstack, Wheezy and ZFS on Linux

    2012/12/19 by David Douard

    A while ago, I started installing an OpenStack cluster at Logilab, so that our developers can easily play with any kind of environment. We are planning to improve our Apycot automatic testing platform so it can use "elastic power". And so on.

    http://www.openstack.org/themes/openstack/images/open-stack-cloud-computing-logo-2.png

    I first tried an Ubuntu Precise based setup since, at that time, Debian packages were not really usable. The setup never reached a point where it could be released as production-ready, because I tried a configuration that was too complex and bleeding-edge (involving Quantum, openvswitch, sheepdog)...

    Meanwhile, we ran really short of storage capacity. For now, it mainly consists of hard drives distributed in our 19" Dell racks (generally with hardware RAID controllers). So I recently purchased a low-cost storage bay (SuperMicro SC937 with a 6Gb/s JBOD-only HBA) with 18 spinning hard drives and 4 SSDs, driven by ZFS on Linux (tip: the SSD-stored ZIL is a requirement to get decent performance). This storage setup is still under test for now.

    http://zfsonlinux.org/images/zfs-linux.png

    I also went to the last Mini-DebConf in Paris, where Loic Dachary presented the status of the OpenStack packaging effort in Debian. This motivated me to give OpenStack a new try, using Wheezy and a somewhat simpler setup. But I could not consider not using my new ZFS-based storage as a nova volume provider. It is not available in OpenStack for now (there is a backend for Solaris, but not for ZFS on Linux). However, this is Python, and in fact the current ISCSIDriver backend needs very little to make it work with zfs instead of lvm as the "elastic" block-volume provider and manager.

    So, I wrote a custom nova volume driver to handle this. As I don't want the nova-volume daemon to run on my ZFS SAN, I wrote this backend mixing the SanISCSIDriver (which manages the storage system via SSH) and the standard ISCSIDriver (which uses the standard Linux iSCSI target tools). I'm not very fond of the API of the VolumeDriver (especially the fact that the ISCSIDriver is responsible for 2 roles: managing block-level volumes and exporting block-level volumes). This small design flaw (IMHO) is the reason I had to duplicate some code (not much, but still) to implement my ZFSonLinuxISCSIDriver...

    So here is the setup I made:

    Infrastructure

    My OpenStack Essex "cluster" consists for now of:

    • one control node, running in a "normal" libvirt-controlled virtual machine; it is a Wheezy that runs:
      • nova-api
      • nova-cert
      • nova-network
      • nova-scheduler
      • nova-volume
      • glance
      • postgresql
      • OpenStack dashboard
    • one computing node (Dell R310, Xeon X3480, 32G, Wheezy), which runs:
      • nova-api
      • nova-network
      • nova-compute
    • a ZFS-on-Linux SAN (3x raidz1 pools made of 6 1T drives, 2x (mirrored) 32G SLC SSDs, 2x 120G MLC SSDs for cache); for now, the storage is exported to the SAN via one 1G ethernet link.

    OpenStack Essex setup

    I mainly followed the Debian HOWTO to set up my private cloud, tuning the network settings to match my environment (and the fact that my control node lives in a VM, with VLAN stuff handled by the host).

    I easily got a working setup (I must admit that my previous experiment with OpenStack helped a lot when dealing with custom configurations... and vocabulary; I'm not sure I would have succeeded "easily" following the HOWTO alone, but hey, it is a functional HOWTO, meaning that if you do not follow the instructions because you want special tunings, don't blame the HOWTO).

    Compared to the HOWTO, my nova.conf looks like (as of today):

    [DEFAULT]
    logdir=/var/log/nova
    state_path=/var/lib/nova
    lock_path=/var/lock/nova
    root_helper=sudo nova-rootwrap
    auth_strategy=keystone
    dhcpbridge_flagfile=/etc/nova/nova.conf
    dhcpbridge=/usr/bin/nova-dhcpbridge
    sql_connection=postgresql://novacommon:XXX@control.openstack.logilab.fr/nova
    
    ##  Network config
    # A nova-network on each compute node
    multi_host=true
    # VLAN manager
    network_manager=nova.network.manager.VlanManager
    vlan_interface=eth1
    # My ip
    my-ip=172.17.10.2
    public_interface=eth0
    # Dmz & metadata things
    dmz_cidr=169.254.169.254/32
    ec2_dmz_host=169.254.169.254
    metadata_host=169.254.169.254
    
    ## More general things
    # The RabbitMQ host
    rabbit_host=control.openstack.logilab.fr
    
    ## Glance
    image_service=nova.image.glance.GlanceImageService
    glance_api_servers=control.openstack.logilab.fr:9292
    use-syslog=true
    ec2_host=control.openstack.logilab.fr
    
    novncproxy_base_url=http://control.openstack.logilab.fr:6080/vnc_auto.html
    vncserver_listen=0.0.0.0
    vncserver_proxyclient_address=127.0.0.1
    

    Volume

    I had a bit more work to do to make nova-volume work. First, I got hit by this nasty bug #695791, which is trivial to fix... when you know how to fix it (I noticed the bug report after I had fixed it myself).

    Then, as I wanted the volumes to be stored and exported by my shiny new ZFS-on-Linux setup, I had to write my own volume driver, which was quite easy, since it is Python, and the logic to implement was already provided by the ISCSIDriver class on the one hand, and by the SanISCSIDriver on the other hand. So I ended up with this first implementation. This file should be copied to the nova volume package directory (nova/volume/zol.py):

    # vim: tabstop=4 shiftwidth=4 softtabstop=4
    
    # Copyright 2010 United States Government as represented by the
    # Administrator of the National Aeronautics and Space Administration.
    # Copyright 2011 Justin Santa Barbara
    # Copyright 2012 David DOUARD, LOGILAB S.A.
    # All Rights Reserved.
    #
    #    Licensed under the Apache License, Version 2.0 (the "License"); you may
    #    not use this file except in compliance with the License. You may obtain
    #    a copy of the License at
    #
    #         http://www.apache.org/licenses/LICENSE-2.0
    #
    #    Unless required by applicable law or agreed to in writing, software
    #    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    #    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    #    License for the specific language governing permissions and limitations
    #    under the License.
    """
    Driver for ZFS-on-Linux-stored volumes.
    
    This is mainly a custom version of the ISCSIDriver that uses ZFS as
    volume provider, generally accessed over SSH.
    """
    
    import os
    
    from nova import exception
    from nova import flags
    from nova import utils
    from nova import log as logging
    from nova.openstack.common import cfg
    from nova.volume.driver import _iscsi_location
    from nova.volume import iscsi
    from nova.volume.san import SanISCSIDriver
    
    
    LOG = logging.getLogger(__name__)
    
    san_opts = [
        cfg.StrOpt('san_zfs_command',
                   default='/sbin/zfs',
                   help='The ZFS command.'),
        ]
    
    FLAGS = flags.FLAGS
    FLAGS.register_opts(san_opts)
    
    
    class ZFSonLinuxISCSIDriver(SanISCSIDriver):
        """Executes commands relating to ZFS-on-Linux-hosted ISCSI volumes.
    
        Basic setup for a ZoL iSCSI server:
    
        XXX
    
        Note that current implementation of ZFS on Linux does not handle:
    
          zfs allow/unallow
    
        For now, needs to have root access to the ZFS host. The best is to
        use a ssh key with ssh authorized_keys restriction mechanisms to
        limit root access.
    
        Make sure you can login using san_login & san_password/san_private_key
        """
        ZFSCMD = FLAGS.san_zfs_command
    
        _local_execute = utils.execute
    
        def _getrl(self):
            return self._runlocal
        def _setrl(self, v):
            if isinstance(v, basestring):
                v = v.lower() in ('true', 't', '1', 'y', 'yes')
            self._runlocal = v
        run_local = property(_getrl, _setrl)
    
        def __init__(self):
            super(ZFSonLinuxISCSIDriver, self).__init__()
            self.tgtadm.set_execute(self._execute)
            LOG.info("run local = %s (%s)" % (self.run_local, FLAGS.san_is_local))
    
        def set_execute(self, execute):
            LOG.debug("override local execute cmd with %s (%s)" %
                      (repr(execute), execute.__module__))
            self._local_execute = execute
    
        def _execute(self, *cmd, **kwargs):
            if self.run_local:
                LOG.debug("LOCAL execute cmd %s (%s)" % (cmd, kwargs))
                return self._local_execute(*cmd, **kwargs)
            else:
                LOG.debug("SSH execute cmd %s (%s)" % (cmd, kwargs))
                check_exit_code = kwargs.pop('check_exit_code', None)
                command = ' '.join(cmd)
                return self._run_ssh(command, check_exit_code)
    
        def _create_volume(self, volume_name, sizestr):
            zfs_poolname = self._build_zfs_poolname(volume_name)
    
            # Create a zfs volume
            cmd = [self.ZFSCMD, 'create']
            if FLAGS.san_thin_provision:
                cmd.append('-s')
            cmd.extend(['-V', sizestr])
            cmd.append(zfs_poolname)
            self._execute(*cmd)
    
        def _volume_not_present(self, volume_name):
            zfs_poolname = self._build_zfs_poolname(volume_name)
            try:
                out, err = self._execute(self.ZFSCMD, 'list', '-H', zfs_poolname)
                if out.startswith(zfs_poolname):
                    return False
            except Exception as e:
                # If the volume isn't present
                return True
            return False
    
        def create_volume_from_snapshot(self, volume, snapshot):
            """Creates a volume from a snapshot."""
            zfs_snap = self._build_zfs_poolname(snapshot['name'])
            # clone target is the new volume's name, not the snapshot's
            zfs_vol = self._build_zfs_poolname(volume['name'])
            self._execute(self.ZFSCMD, 'clone', zfs_snap, zfs_vol)
            self._execute(self.ZFSCMD, 'promote', zfs_vol)
    
        def delete_volume(self, volume):
            """Deletes a volume."""
            if self._volume_not_present(volume['name']):
                # If the volume isn't present, then don't attempt to delete
                return True
            zfs_poolname = self._build_zfs_poolname(volume['name'])
            self._execute(self.ZFSCMD, 'destroy', zfs_poolname)
    
        def create_export(self, context, volume):
            """Creates an export for a logical volume."""
            self._ensure_iscsi_targets(context, volume['host'])
            iscsi_target = self.db.volume_allocate_iscsi_target(context,
                                                                volume['id'],
                                                                volume['host'])
            iscsi_name = "%s%s" % (FLAGS.iscsi_target_prefix, volume['name'])
            volume_path = self.local_path(volume)
    
            # XXX (ddouard) this code is not robust: does not check for
            # existing iscsi targets on the host (ie. not created by
            # nova), but fixing it require a deep refactoring of the iscsi
            # handling code (which is what have been done in cinder)
            self.tgtadm.new_target(iscsi_name, iscsi_target)
            self.tgtadm.new_logicalunit(iscsi_target, 0, volume_path)
    
            if FLAGS.iscsi_helper == 'tgtadm':
                lun = 1
            else:
                lun = 0
            if self.run_local:
                iscsi_ip_address = FLAGS.iscsi_ip_address
            else:
                iscsi_ip_address = FLAGS.san_ip
            return {'provider_location': _iscsi_location(
                    iscsi_ip_address, iscsi_target, iscsi_name, lun)}
    
        def remove_export(self, context, volume):
            """Removes an export for a logical volume."""
            try:
                iscsi_target = self.db.volume_get_iscsi_target_num(context,
                                                               volume['id'])
            except exception.NotFound:
                LOG.info(_("Skipping remove_export. No iscsi_target " +
                           "provisioned for volume: %d"), volume['id'])
                return
    
            try:
                # ietadm show will exit with an error
                # this export has already been removed
                self.tgtadm.show_target(iscsi_target)
            except Exception as e:
                LOG.info(_("Skipping remove_export. No iscsi_target " +
                           "is presently exported for volume: %d"), volume['id'])
                return
    
            self.tgtadm.delete_logicalunit(iscsi_target, 0)
            self.tgtadm.delete_target(iscsi_target)
    
        def check_for_export(self, context, volume_id):
            """Make sure volume is exported."""
            tid = self.db.volume_get_iscsi_target_num(context, volume_id)
            try:
                self.tgtadm.show_target(tid)
            except exception.ProcessExecutionError, e:
                # Instances remount read-only in this case.
                # /etc/init.d/iscsitarget restart and rebooting nova-volume
                # is better since ensure_export() works at boot time.
                LOG.error(_("Cannot confirm exported volume "
                            "id:%(volume_id)s.") % locals())
                raise
    
        def local_path(self, volume):
            zfs_poolname = self._build_zfs_poolname(volume['name'])
            zvoldev = '/dev/zvol/%s' % zfs_poolname
            return zvoldev
    
        def _build_zfs_poolname(self, volume_name):
            zfs_poolname = '%s%s' % (FLAGS.san_zfs_volume_base, volume_name)
            return zfs_poolname
    

    To configure my nova-volume instance (which runs on the control node, since it's only a manager), I added these to my nova.conf file:

    # nova-volume config
    volume_driver=nova.volume.zol.ZFSonLinuxISCSIDriver
    iscsi_ip_address=172.17.1.7
    iscsi_helper=tgtadm
    san_thin_provision=false
    san_ip=172.17.1.7
    san_private_key=/etc/nova/sankey
    san_login=root
    san_zfs_volume_base=data/openstack/volume/
    san_is_local=false
    verbose=true
    

    Note that the private key (/etc/nova/sankey here) is stored in the clear, and that it must be readable by the nova user.

    Since this key is stored in the clear and gives root access to my ZFS host, I limited this root access a bit by using a custom command wrapper in the .ssh/authorized_keys file.

    Something like (naive implementation):

    [root@zfshost ~]$ cat /root/zfswrapper
    #!/bin/sh
    CMD=`echo $SSH_ORIGINAL_COMMAND | awk '{print $1}'`
    if [ "$CMD" != "/sbin/zfs" && "$CMD" != "tgtadm" ]; then
      echo "Can do only zfs/tgtadm stuff here"
      exit 1
    fi
    
    echo "[`date`] $SSH_ORIGINAL_COMMAND" >> .zfsopenstack.log
    exec $SSH_ORIGINAL_COMMAND
    

    Using this in root's .ssh/authorized_keys file:

    [root@zfshost ~]$ cat /root/.ssh/authorized_keys | grep control
    from="control.openstack.logilab.fr",no-pty,no-port-forwarding,no-X11-forwarding, \
          no-agent-forwarding,command="/root/zfswrapper" ssh-rsa AAAA[...] root@control
    

    I had to set the iscsi_ip_address (the IP address of the ZFS host), but I think this is the result of something mistakenly implemented in my ZFSonLinux driver.

    Using this config, I can boot an image, create a volume on my ZFS storage, and attach it to the running image.
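
    For example, with the Essex-era nova client (names and IDs are placeholders):

    # create a 10G volume, backed by a zvol on the ZFS SAN
    nova volume-create --display_name testvol 10
    # attach it to a running instance as /dev/vdb
    nova volume-attach <instance-id> <volume-id> /dev/vdb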

    I still have to test things like snapshots, (live?) migration and so on. This is a very first draft implementation which needs to be refined, improved and tested.

    What's next

    Besides the fact that it needs more tests, I plan to use salt for my OpenStack deployment (first to add more compute nodes to my cluster), and on the other side, I'd like to try salt-cloud so I have a bunch of Debian images that "just work" (without needing to port the cloud-init Ubuntu package).

    As for my zol driver, I need to port it to Cinder, but I do not have a Folsom install to test it on...


  • Simile-Widgets

    2008/08/07 by Nicolas Chauvat
    http://simile.mit.edu/images/logo.png

    While working on knowledge management and semantic web technologies, I came across the Simile project at MIT a few years back. In 2006, I even demoed the Exhibit widget fetching and then displaying data from our semantic web application framework, at the Web2 track of Solutions Linux in Paris.

    Now that we are using these widgets when implementing web apps for clients, I was happy to see that the projects took on a life of their own outside of MIT and became full-fledged free-software projects hosted on Google Code. See Simile-Widgets for more details, and expect us to provide a Debian package soon, unless someone does it first.

    Speaking of Debian, here is a nice demo of the Timeline widget presenting the Debian history.

    http://beta.thumbalizr.com/app/thumbs/?src=/thumbs/onl/source/d2/d280583f143793f040bdacf44a39b0d5.png&w=320&q=0&enc=

  • Salomé accepted into Debian unstable

    2010/06/03 by Andre Espaze

    Salomé is a platform for pre- and post-processing of numerical simulations, available at http://salome-platform.org/. It is now available as a Debian package http://packages.debian.org/source/sid/salome and should soon appear in Ubuntu https://launchpad.net/ubuntu/+source/salome as well.

    http://salome-platform.org/salome_screens.png/image_preview

    A difficult packaging work

    A first package of Salomé 3 was made by the courageous Debian developer Adam C. Powell, IV in January 2008. Such packaging is very resource-intensive because of the many modules to build. But the most difficult part was bringing Salomé to an environment it had not been ported to. Even today, Salomé 5 binaries are only provided by upstream as a stand-alone piece of software, ready to unpack on a Debian Sarge/Etch or a Mandriva 2006/2008. This is the first reason why several patches were required to adapt the code to new versions of the dependencies. Version 3 of Salomé was so difficult and time-consuming to package that Adam decided to stop for two years.

    The packaging of Salomé resumed with version 5.1.3 in January 2010. Thanks to Logilab and the OpenHPC project, I could join him for 14 weeks of work adapting every module to Debian unstable. Porting to the new versions of the dependencies was a first step, but we also had to adapt the code to the Debian packaging philosophy, with binaries, libraries and data shipped to dedicated directories.

    A promising future

    Salomé being accepted into Debian unstable means that porting it to Ubuntu should follow in the near future. Moreover, the work done to adapt Salomé to a GNU/Linux distribution may help developers on other platforms as well.

    That is excellent news for all people involved in numerical simulation, because they are going to have access to Salomé services by using their package management tools. It will help the spreading of the Salomé code on any fresh install, and moreover keep it up to date.

    Join the fun

    For mechanical engineers, a derived product called Salomé-Méca has recently been published. The goal is to bring the functionalities of the Code_Aster finite element solver to Salomé, in order to ease simulation workflows. If you are interested in Debian packages for those tools as well, you are invited to come with us and join the fun.

    I have submitted a proposal to talk about Salomé at EuroSciPy 2010. I look forward to meeting other interested parties during this conference, which will take place in Paris on July 8th-11th.


  • HOWTO install lodgeit pastebin under Debian/Ubuntu

    2010/06/24 by Arthur Lutz

    Lodgeit is a simple open source pastebin... and it's written in Python!

    The installation under Debian/Ubuntu goes as follows:

    sudo apt-get update
    sudo apt-get -uVf install python-imaging python-sqlalchemy python-jinja2 python-pybabel python-werkzeug python-simplejson
    cd local
    hg clone http://dev.pocoo.org/hg/lodgeit-main
    cd lodgeit-main
    vim manage.py
    

    For Debian squeeze you have to downgrade python-werkzeug, so get the old version of python-werkzeug from snapshot.debian.org at http://snapshot.debian.org/package/python-werkzeug/0.5.1-1/

    wget http://snapshot.debian.org/archive/debian/20090808T041155Z/pool/main/p/python-werkzeug/python-werkzeug_0.5.1-1_all.deb
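
    Installing the downgraded package then presumably goes like this:

    sudo dpkg -i python-werkzeug_0.5.1-1_all.deb
    # optionally put it on hold so apt does not upgrade it back
    echo python-werkzeug hold | sudo dpkg --set-selections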
    

    Modify the dburi and the SECRET_KEY in manage.py, then launch the application:

    python manage.py runserver
    

    Then off you go to configure your apache or lighttpd.

    An easy (and dirty) way of running it at startup is to add the following command to the www-data crontab:

    @reboot cd /tmp/; nohup /usr/bin/python /usr/local/lodgeit-main/manage.py runserver &
    

    This should of course be done in an init script.

    http://rn0.ru/static/help/advanced_features.png

    Hopefully we'll find some time to package this nice webapp for Debian/Ubuntu.


  • Building Debian images for an OpenStack (private) cloud

    2012/12/23 by David Douard

    Now that I have a working OpenStack cloud at Logilab, I want to provide my fellow colleagues with a bunch of ready-made images to create instances from.

    Strangely, there are no really usable ready-made UEC Debian images available out there. There have been recent efforts to provide Debian images on the Amazon Market Place, and the tool suite used to build these is available as a collection of bash shell scripts in a github repository. There are also some images for Eucalyptus, but I have not been able to make them boot properly on my kvm-based OpenStack install.

    So I have tried to build my own set of Debian images to upload in my glance shop.

    Vocabulary

    A bit of vocabulary may be useful for those not very familiar with OpenStack or AWS jargon.

    When you want to create an instance of an image, ie. boot a virtual machine in a cloud, you generally choose from a set of ready-made system images, then you choose a virtual machine flavor (ie. a combination of a number of virtual CPUs, an amount of RAM, and a hard drive size used as root device). Generally, you have to choose between tiny (1 CPU, 512MB, no disk), small (1 CPU, 2G of RAM, 20G of disk), etc.

    In the cloud world, an instance is not meant to be persistent. What is persistent is a volume, which can be attached to a running instance.

    If you want your instance to be persistent, there are 2 choices:

    • you can snapshot a running instance and upload it as a new image; so it is not really a persistent instance; rather, it's the ability to configure an instance that then becomes the base for booting other instances,
    • or you can boot an instance from a volume (which is the persistent part of a virtual machine in a cloud).

    In the Amazon world, a "standard" image (the one that is instantiated when creating a new instance) is called an instance store-backed AMI image, also called a UEC image, and a volume image is called an EBS-backed AMI image (EBS stands for Elastic Block Storage). So an image stored in a volume cannot be instantiated; it can be booted, but only once at a time. On the other hand, it is persistent. Different usage.

    A UEC or AMI image consists of a triplet: a kernel, an init ramdisk and a root file system image. An EBS-backed image is just the raw disk image to be booted on a virtualization host (a kvm raw or qcow2 image, etc.).

    Images in OpenStack

    In OpenStack, when you create an instance from a given image, what happens depends on the kind of image.

    In fact, in OpenStack, one can upload traditional UEC AMI images (you need to upload the 3 files: the kernel, the initial ramdisk and the root filesystem as a raw image). But one can also upload bare images. This kind of image is booted directly by the virtualization host. So it is some kind of hybrid between a boot from volume (an EBS-backed boot in the Amazon world) and the traditional instantiation of a UEC image.

    Instantiating an AMI image

    When one creates an instance from an AMI image in an OpenStack cloud:

    • the kernel is copied to the virtualization host,
    • the initial ramdisk is copied to the virtualization host,
    • the root FS image is copied to the virtualization host,
    • then, the root FS image is:
      • duplicated (instantiated),
      • resized (the file is grown if needed) to the size of the requested instance flavor,
      • the file system is resized to the new size of the file,
      • the contained filesystem is mounted (using qemu-nbd) and the configured SSH access key is added to /root/.ssh/authorized_keys,
      • the nbd volume is then unmounted,
    • a libvirt domain is created, configured to boot from the given kernel and init ramdisk, using the resized and modified image disk as root filesystem,
    • the libvirt domain is then booted.

    Instantiating a BARE image

    When one creates an instance from a BARE image in an OpenStack cloud:

    • the VM image file is copied on the virtualization host,
    • the VM image file is duplicated (instantiated),
    • a libvirt domain is created, configured to boot from this copied image disk as root filesystem,
    • the libvirt domain is then booted.

    Differences between the 2 instantiation methods

    Instantiating a BARE image:
    • Involves a much simpler process.
    • Allows booting a non-Linux system (depends on the virtualization system; especially true when using kvm virtualization).
    • Is slower to boot and consumes more resources, since the virtual machine image must be the size of the required/wanted virtual machine (but can remain minimal if using a qcow2 image format). If you use a 10G raw image, then 10G of data will be copied from the image provider to the virtualization host, and this big file will be duplicated each time you instantiate this image.
    • The root filesystem size corresponding to the flavor of the instance is not honoured; the filesystem size is that of the BARE image.
    Instantiating an AMI image:
    • Honours the flavor.
    • Generally allows a quicker instance creation process.
    • Less resource consumption.
    • Can only boot Linux guests.

    If one wants to boot a Windows guest in OpenStack, the only solution (as far as I know) is to use a BARE image of an installed Windows system. It works (I have succeeded in doing so), but a minimal Windows 7 install is several GB, so instantiating such a BARE image is very slow, because the image needs to be uploaded to the virtualization host.

    Building a Debian AMI image

    So I wanted to provide a minimal Debian image in my cloud, and to provide it as an AMI image, so that the flavor is honoured and the standard cloud injection mechanisms (like setting up the ssh key used to access the VM) work without having to tweak the rc.local script or use cloud-init in my guest.

    Here is what I did.

    1. Install a Debian system in a standard libvirt/kvm guest.

    david@host:~$ virt-install  --connect qemu+tcp://virthost/system   \
                     -n openstack-squeeze-amd64 -r 512 \
                     -l http://ftp2.fr.debian.org/pub/debian/dists/stable/main/installer-amd64/ \
                     --disk pool=default,bus=virtio,type=qcow2,size=5 \
                     --network bridge=vm7,model=virtio  --nographics  \
                     --extra-args='console=tty0 console=ttyS0,115200'
    

    This creates a new virtual machine, launches the Debian installer (downloaded directly from a Debian mirror), and starts the usual Debian installation in a virtual serial console (I don't like VNC very much).

    I then followed the installation procedure. When asked about partitioning, I chose to create only one primary partition (ie. with no swap partition; it won't be necessary here). I also chose only "Default system" and "SSH server" to be installed.

    2. Configure the system

    After the installation process the VM is rebooted, and I log into it (by SSH or via the console) to configure the system a bit.

    david@host:~$ ssh root@openstack-squeeze-amd64.vm.logilab.fr
    Linux openstack-squeeze-amd64 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64
    
    The programs included with the Debian GNU/Linux system are free software;
    the exact distribution terms for each program are described in the
    individual files in /usr/share/doc/*/copyright.
    
    Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
    permitted by applicable law.
    Last login: Sun Dec 23 20:14:24 2012 from 192.168.1.34
    root@openstack-squeeze-amd64:~# apt-get update
    root@openstack-squeeze-amd64:~# apt-get install vim curl parted # install some must have packages
    [...]
    root@openstack-squeeze-amd64:~# dpkg-reconfigure locales # I like to have fr_FR and en_US in my locales
    [...]
    root@openstack-squeeze-amd64:~# echo virtio_balloon >> /etc/modules
    root@openstack-squeeze-amd64:~# echo acpiphp >> /etc/modules
    root@openstack-squeeze-amd64:~# update-initramfs -u
    root@openstack-squeeze-amd64:~# apt-get clean
    root@openstack-squeeze-amd64:~# rm /etc/udev/rules.d/70-persistent-net.rules
    root@openstack-squeeze-amd64:~# rm .bash_history
    root@openstack-squeeze-amd64:~# poweroff
    

    Here we install a few packages and do some configuration. The important part is adding the acpiphp module, so that volume attachment will work in our instances. We also clean things up a bit before shutting the VM down.

    3. Convert the image into an AMI image

    Since I created the VM image as a qcow2 image, I needed to convert it back to a raw image:

    david@host:~$ scp root@virthost:/var/lib/libvirt/images/openstack-squeeze-amd64.img .
    david@host:~$ qemu-img convert -O raw openstack-squeeze-amd64.img openstack-squeeze-amd64.raw
    

    Then, as I want a minimal-sized disk image, the filesystem must be shrunk to its minimum. I did this as described below, but I think there are simpler ways to do so.

    david@host:~$ fdisk -l openstack-squeeze-amd64.raw  # display the partition location in the disk
    
    Disk openstack-squeeze-amd64.raw: 5368 MB, 5368709120 bytes
    149 heads, 8 sectors/track, 8796 cylinders, total 10485760 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0001fab7
    
                       Device Boot      Start         End      Blocks   Id  System
    openstack-squeeze-amd64.raw1         2048    10483711     5240832   83  Linux
    david@host:~$ # extract the filesystem from the image
    david@host:~$ dd if=openstack-squeeze-amd64.raw of=openstack-squeeze-amd64.ami bs=1024 skip=1024 count=5240832
    david@host:~$ losetup /dev/loop1 openstack-squeeze-amd64.ami
    david@host:~$ mkdir /tmp/img
    david@host:~$ mount /dev/loop1 /tmp/img
    david@host:~$ cp /tmp/img/boot/vmlinuz-2.6.32-5-amd64 .
    david@host:~$ cp /tmp/img/boot/initrd.img-2.6.32-5-amd64 .
    david@host:~$ umount /tmp/img
    david@host:~$ e2fsck -f /dev/loop1 # required before a resize
    
    e2fsck 1.42.5 (29-Jul-2012)
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    /dev/loop1: 26218/327680 files (0.2% non-contiguous), 201812/1310208 blocks
    david@host:~$ resize2fs -M /dev/loop1 # minimize the filesystem
    
    resize2fs 1.42.5 (29-Jul-2012)
    Resizing the filesystem on /dev/loop1 to 191461 (4k) blocks.
    The filesystem on /dev/loop1 is now 191461 blocks long.
    david@host:~$ # note the new size ^^^^ and the block size above (4k)
    david@host:~$ losetup -d /dev/loop1 # detach the lo device
    david@host:~$ dd if=openstack-squeeze-amd64.ami of=openstack-squeeze-amd64-reduced.ami bs=4096 count=191461
    

    4. Upload in OpenStack

    After all this, you have a kernel image, an init ramdisk file and a minimized root filesystem image file. So you just have to upload them to your OpenStack image provider (glance):

    david@host:~$ glance add disk_format=aki container_format=aki name="debian-squeeze-uec-x86_64-kernel" \
                     < vmlinuz-2.6.32-5-amd64
    Uploading image 'debian-squeeze-uec-x86_64-kernel'
    ==================================================================================[100%] 24.1M/s, ETA  0h  0m  0s
    Added new image with ID: 644e59b8-1503-403f-a4fe-746d4dac2ff8
    david@host:~$ glance add disk_format=ari container_format=ari name="debian-squeeze-uec-x86_64-initrd" \
                     < initrd.img-2.6.32-5-amd64
    Uploading image 'debian-squeeze-uec-x86_64-initrd'
    ==================================================================================[100%] 26.7M/s, ETA  0h  0m  0s
    Added new image with ID: 6f75f1c9-1e27-4cb0-bbe0-d30defa8285c
    david@host:~$ glance add disk_format=ami container_format=ami name="debian-squeeze-uec-x86_64" \
                     kernel_id=644e59b8-1503-403f-a4fe-746d4dac2ff8 ramdisk_id=6f75f1c9-1e27-4cb0-bbe0-d30defa8285c \
                     < openstack-squeeze-amd64-reduced.ami
    Uploading image 'debian-squeeze-uec-x86_64'
    ==================================================================================[100%] 42.1M/s, ETA  0h  0m  0s
    Added new image with ID: 4abc09ae-ea34-44c5-8d54-504948e8d1f7
    
    http://www.logilab.org/file/115220?vid=download

    And that's it (!). I now have a Debian squeeze image in my cloud that works fine:

    http://www.logilab.org/file/115221?vid=download
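
    Booting an instance from it can then presumably be done with something like this (flavor and key name are illustrative):

    nova boot --image debian-squeeze-uec-x86_64 --flavor m1.small --key_name mykey squeeze-test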

  • MiniDebConf Paris 2010

    2010/09/09 by Arthur Lutz
    http://france.debian.net/debian-france.png

    On October 30th and 31st, Debian France is organizing a minidebconf in Paris. The conference wiki is being fleshed out, and for now that's where you should sign up. At Logilab we are Debian users and contributors, so naturally we will try to take part in this conference. Alexandre Fayolle, Debian developer, will attend (among others) Carl Chenet's presentation on the state of Python in Debian.


  • Setting up my Microsoft Natural Keyboard under Debian Squeeze

    2011/06/08 by Nicolas Chauvat

    I upgraded to Debian Squeeze over the weekend, and it broke my custom Xmodmap. While I was fixing it, I realized that the special keys of my Microsoft Natural keyboard that were not working under Lenny were now functional. The only piece missing was the "zoom" key. Here is how I got it to work.

    I found on the askubuntu forum a solution to the same problem, but it was missing the following details.

    To find which keysym to map, I listed input devices:

    $ ls /dev/input/by-id/
    usb-Logitech_USB-PS.2_Optical_Mouse-mouse        usb-Logitech_USB-PS_2_Optical_Mouse-mouse
    usb-Logitech_USB-PS_2_Optical_Mouse-event-mouse  usb-Microsoft_Natural®_Ergonomic_Keyboard_4000-event-kbd
    

    then used evtest to find the keysym:

    $ evtest /dev/input/by-id/usb-Microsoft*
    

    then used udevadm to find the identifiers:

    $ udevadm info --export-db | less
    

    then edited /lib/udev/rules.d/95-keymap.rules to add:

    ENV{ID_VENDOR}=="Microsoft", ENV{ID_MODEL_ID}=="00db", RUN+="keymap $name microsoft-natural-keyboard-4000"
    

    in the section keyboard_usbcheck

    and created the keymap file:

    $ cat /lib/udev/keymaps/microsoft-natural-keyboard-4000
    0xc022d pageup
    0xc022e pagedown
    

    then loaded the keymap:

    $ /lib/udev/keymap /dev/input/by-id/usb-Microsoft_Natural®_Ergonomic_Keyboard_4000-event-kbd /lib/udev/keymaps/microsoft-natural-keyboard-4000
    

    then used evtest again to check it was working.

    Of course, you do not have to map the events to pageup and pagedown, but I found it convenient to use that key to scroll up and down pages.

    Hope this helps :)


  • Going to DebConf13

    2013/08/01 by Julien Cristau

    The 14th Debian developers conference (DebConf13) will take place between August 11th and August 18th in Vaumarcus, Switzerland.

    Logilab is a DebConf13 sponsor, and I'll attend the conference. There are quite a lot of cloud-related events on the schedule this year, plus the usual impromptu discussions and hallway track. Looking forward to meeting the usual suspects there!

    https://www.logilab.org/file/158611/raw/dc13-btn0-going-bg.png

  • A short report on the Debian meetup in Nantes

    2014/03/13 by Arthur Lutz

    Last night I went to the first Debian meetup in Nantes. It was really nice: about twenty people answered the call from Damien Raude-Morvan and Thomas Vincent. Thanks to them for launching the initiative (the organization pad).

    //www.logilab.org/file/228927/raw/debian-france.jpg

    After a round of introductions and a few discussions about Debian in general (including an explanation by Damien of the state of Java in Debian), Damien presented the Debian France association as well as the new Debian contributor contest. The list of ideas is long and appealing; don't hesitate to have a look and make a contribution.

    Then Thomas presented the French Debian translation team and its working principles (quality before quantity, mailing lists, IRC, translation process, etc.).

    //www.logilab.org/file/228931/raw/saltstack_logo.jpg

    Finally, I briefly presented Salt and its place in Debian. For the public record: the slides of the talk.

    See you next time!


  • DebConf13 report

    2013/09/25 by Julien Cristau

    As announced before, I spent a week last month in Vaumarcus, Switzerland, attending the 14th Debian conference (DebConf13).

    It was great to be at DebConf again, with lots of people I hadn't seen since New York City three years ago, and lots of new faces. Kudos to the organizers for pulling this off. These events are always a great boost for motivation, even if the amount of free time after coming back home is not quite as copious as I might like.

    One thing that struck me this year was the number of upstream people, not directly involved in Debian, who showed up. From systemd's Lennart and Kay, to MariaDB's Monty, and people from upstart, dracut, phpmyadmin or munin. That was a rather pleasant surprise for me.

    Here's a report on the talks and BoF sessions I attended. It's a bit long, but hey, the conference lasted a week. In addition to those I had quite a few chats with various people, including fellow members of the Debian release team.

    http://debconf13.debconf.org/images/logo.png

    Day 1 (Aug 11)

    Linux kernel : Ben Hutchings made a summary of the features added between 3.2 in wheezy and the current 3.10, and their status in Debian (some still need userspace work).

    SPI status : Bdale Garbee and Jimmy Kaplowitz explained what steps SPI is making to deal with its growth, including getting help from a bookkeeper recently to relieve the pressure on the (volunteer) treasurer.

    Hardware support in Debian stable : If you buy new hardware today, it's almost certainly not supported by the Debian stable release. Ideas to improve this:

    • backport whole subsystems: probably not feasible, risk of regressions would be too high
    • ship compat-drivers, and have the installer automatically install newer drivers based on PCI ids, seems possible.
    • mesa: have the GL loader pick a different driver based on the hardware, and ship newer DRI drivers for the new hardware, without touching the old ones. Issue: need to update libGL and libglapi too when adding new drivers.
    • X drivers, drm: ? (it's complicated)

    Meeting between release team and DPL to figure out next steps for jessie. Decided to schedule a BoF later in the week.

    Day 2 (Aug 12)

    Munin project lead on new features in 2.0 (shipped in wheezy) and roadmap for 2.2. Improvements on the scalability front (both in terms of number of nodes and number of plugins on a node). Future work includes improving the UI to make it less 1990 and moving some metadata to sql.

    jeb on AWS and Debian : Amazon Web Services (AWS) includes compute (ec2), storage (s3), network (virtual private cloud, load balancing, ..) and other services. Used by Debian for package rebuilds. http://cloudfront.debian.net is a CDN frontend for archive mirrors. Official Debian images are on ec2, including on the AWS marketplace front page. build-debian-cloud tool from Anders Ingeman et al. was presented.

    openstack in Debian : Packaging work is focused on making things easy for newcomers, basic config with debconf. Advanced users are going to use puppet or similar anyway. Essex is in wheezy, but end-of-life upstream. Grizzly available in sid and in a separate archive for wheezy. This work is sponsored by enovance.

    Patents : http://patents.stackexchange.com, looks like the USPTO has used comments made there when rejecting patent applications based on prior art. Patent applications are public, and it's a lot easier to get a patent application rejected than invalidate a patent later on. Should we use that site? Help build momentum around it? Would other patent offices use that kind of research? Issues: looking at patent applications (and publicly commenting) might mean you're liable for treble damages if the patent is eventually granted? Can you comment anonymously?

    Why systemd? : Lennart and Kay. Pop corn, upstart trolling, nothing really new.

    Day 3 (Aug 13)

    dracut : dracut presented by Harald Hoyer, its main developer. Seems worth investigating replacing initramfs-tools and sharing the maintenance load. Different hooks though, so we'll need to coordinate this with various packages.

    upstart : More Debian-focused than the systemd talk. Not helped by Canonical's CLA...

    dh_busfactor : debhelper is essentially a one-man show from the beginning. Though various packages/people maintain different dh_* tools either in the debhelper package itself or elsewhere. Joey is thinking about creating a debhelper team including those people. Concerns over increased breakage while people get up to speed (joeyh has 10 years of experience and still occasionally breaks stuff).

    dri3000 : Keith is trying to fix dri2 issues. While dri2 fixed a number of things that were wrong with dri1, it still has some problems. One of the goals is to improve presentation: we need a way to sync between app and compositor (to avoid displaying incompletely drawn frames), avoid tearing, and let the app choose immediate page flip instead of waiting for next vblank if it missed its target (stutter in games is painful). He described this work on his blog.

    security team BoF : explain the workflow, try to improve documentation of the process and what people can do to help. http://security.debian.org/

    Day 4 (Aug 14)

    Day trip, and conference dinner on a boat from Neuchatel to Vaumarcus.

    Day 5 (Aug 15)

    git-dpm : Spent half an hour explaining git, then was rushed to show git-dpm itself. Still, needs looking at. Lets you work with git and export changes as quilt series to build a source package.

    Ubuntu daily QA : The goal was to make it possible for canonical devs (not necessarily people working on the distro) to use ubuntu+1 (dev release). They tried syncing from testing for a while, but noticed bug fixes being delayed: not good. In the previous workflow the dev release was unusable/uninstallable for the first few months. Multiarch made things even more problematic because it requires amd64/i386 being in sync.

    • 12.04: a bunch of manpower thrown at ubuntu+1 to keep backlog of technical debt under control.
    • 12.10: prepare infrastructure (mostly launchpad), add APIs, to make non-canonical people able to do stuff that previously required shell access on central machines.
    • 13.04: proposed migration. britney is used to migrate packages from devel-proposed to devel. A few teething problems at first, but good reaction.
    • 13.10 and beyond: autopkgtest runs triggered after upload/build, also for rdeps. Phased updates for stable releases (rolled out to a subset of users and then gradually generalized). Hook into errors.ubuntu.com to match new crashes with package uploads. Generally more continuous integration. Better dashboard. (Some of that is still to be done.)

    Lessons learned from debian:

    • unstable's backlog can get bad → proposed is only used for builds and automated tests, no delay
    • transitions can take weeks at best
    • to avoid dividing human attention, devs are focused on devel, not devel-proposed

    Lessons debian could learn:

    • keeping testing current is a collective duty/win
    • splitting users between testing and unstable has important costs
    • hooking automated testing into britney is really powerful; there's a small but growing number of automated tests

    Ideas:

    • cut migration delay in half
    • encourage writing autopkgtests
    • end goal: make sid to testing migration entirely based on automated tests

    Debian tests using Jenkins http://jenkins.debian.net

    • https://github.com/h01ger/jenkins-job-builder
    • Only running amd64 right now.
    • Uses jenkins plugins: git, svn, log parser, html publisher, ...
    • Has existing jobs for installer, chroot installs, others
    • Tries to make it easy to reproduce jobs, to allow debugging
    • {c,sh}ould add autopkgtests

    Day 6 (Aug 16)

    X Strike Force BoF : Too many bugs we can't do anything about: {mass,auto}-close them, asking people to report upstream. Reduce distraction by moving the non-X stuff to separate teams (compiz removed instead, wayland to discuss...). We should keep drivers as close to upstream as possible. A couple of people in the room volunteered to handle the intel, ati and input drivers.

    reclass BoF

    I had missed the talk about reclass, and Martin kindly offered to give a followup BoF to show what reclass can do.

    Reclass provides adaptors for puppet(?), salt, ansible. A yaml file describes each host:

    • can declare applications and parameters
    • host is leaf in a dag/tree of classes

    Lets you put the data in reclass instead of the config management tool, keeping generic templates in ansible/salt.
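
    For illustration, a node definition might look roughly like this (the file location, class and parameter names are all hypothetical):

    # write a hypothetical node file into the reclass inventory
    cat > /etc/reclass/nodes/node1.example.org.yml <<EOF
    classes:
      - debian-server
    applications:
      - salt.minion
    parameters:
      ntp_server: ntp.example.org
    EOF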

    I'm definitely going to try this and see if it makes it easier to organize data we're currently putting directly in salt states.

    release BoF : Notes are on http://gobby.debian.org. Basic summary: "Releasing in general is hard. Releasing something as big/diverse/distributed as Debian is even harder." Who knew?

    freedombox : status update from Bdale

    Keith Packard showed off the free software he uses in his and Bdale's rockets adventures.

    This was followed by a birthday party in the evening, as Debian turned 20 years old.

    Day 7 (Aug 17)

    x2go : Notes are on http://gobby.debian.org. To be solved: issues with nx libs (gpl fork of old x). Seems like a good thing to try as an alternative to LTSP, which we use at Logilab.

    lightning talks

    • coquelicot (lunar) - one-click secure(ish) file upload web app
    • notmuch (bremner) - need to try that again now that I have slightly more disk space
    • fedmsg (laarmen) - GSoC, message passing inside the debian infrastructure

    Debconf15 bids :

    • Mechelen/Belgium - Wouter
    • Germany (no city yet) - Marga

    Debconf14 presentation : Will be in Portland (Portland State University) next August. Presentation by vorlon, harmoney, keithp. Looking forward to it!

    • Closing ceremony

    The videos of most of the talks can be downloaded, thanks to the awesome work of the video team. And if you want to check what I didn't see or talk about, check the complete schedule.


  • LMGC90 Sprint at Logilab in March 2013

    2013/03/28 by Vladimir Popescu

    At the end of March 2013, Logilab hosted a sprint on the LMGC90 simulation code in Paris.

    LMGC90 is open-source software developed at the LMGC ("Laboratoire de Mécanique et Génie Civil" -- "Mechanics and Civil Engineering Laboratory") of the CNRS, in Montpellier, France. LMGC90 is devoted to contact mechanics and is thus able to model large collections of deformable or undeformable physical objects of various shapes, with numerous interaction laws. LMGC90 also allows for multiphysics coupling.

    Sprint Participants

    https://www.logilab.org/file/143585/raw/logo_LMGC.jpg https://www.logilab.org/file/143749/raw/logo_SNCF.jpg https://www.logilab.org/file/143750/raw/logo_LaMSID.jpg https://www.logilab.org/file/143751/raw/logo_LOGILAB.jpg

    More than ten hackers joined in from:

    • the LMGC, which leads LMGC90 development and aims at constantly improving its architecture and usability;
    • the Innovation and Research Department of the SNCF (the French state-owned railway company), which uses LMGC90 to study railway mechanics, and more specifically, the ballast;
    • the LaMSID ("Laboratoire de Mécanique des Structures Industrielles Durables", "Laboratory for the Mechanics of Ageing Industrial Structures") laboratory of EDF / CNRS / CEA, which has strong expertise in Code_Aster and LMGC90;
    • Logilab, as the developer, for the SNCF, of a CubicWeb-based platform dedicated to simulation data and knowledge management.

    After a great introduction to LMGC90 by Frédéric Dubois and some preliminary discussions, teams were quickly constituted around the common areas of interest.

    Enhancing LMGC90's Python API to build core objects

    As of the sprint date, LMGC90 is mainly developed in Fortran, but also contains Python code for two purposes:

    • Exposing the Fortran functions and subroutines in the LMGC90 core to Python; this is achieved using Fortran 2003's ISO_C_BINDING module and Swig. These Python bindings are grouped in a module called ChiPy.
    • Making it easy to generate input data (so-called "DATBOX" files) using Python. This is done through a module called Pre_LMGC.

    The main drawback of this approach is that the data ends up modelled twice: once in the core and once in Pre_LMGC.

    It was decided to build a single user-level Python layer on top of ChiPy, able to build the computational problem description and write the DATBOX input files (currently achieved using Pre_LMGC), as well as to drive the simulation and read the OUTBOX result files (currently done through direct ChiPy calls).

    This task was met with success: in the short time available (basically half a day), the team managed to build some object types using ChiPy calls and save them into a DATBOX.
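
    As a purely illustrative sketch of the intended usage (the class and method names below are hypothetical, not the actual ChiPy API):

    # Illustrative only: a user-level facade gathering what Pre_LMGC and
    # direct ChiPy calls used to do separately; names are hypothetical.
    class Simulation:
        def __init__(self):
            self.bodies = []

        def add_body(self, shape, material, **params):
            # collect the problem description in one place
            self.bodies.append((shape, material, params))

        def write_datbox(self, directory):
            # would serialize the description into DATBOX files via ChiPy
            print('writing DATBOX to %s' % directory)

        def run(self, nb_steps):
            # would drive the Fortran core through the ChiPy bindings
            print('running %d steps' % nb_steps)

    sim = Simulation()
    sim.add_body('sphere', 'steel', radius=0.1)
    sim.write_datbox('./DATBOX')
    sim.run(100)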

    Using the Python API to feed a computation data store

    This topic involved importing LMGC90 DATBOX data into the numerical platform developed by Logilab for the SNCF.

    This was achieved using ChiPy as a Python API to the Fortran core to get:

    • the bodies involved in the computation, along with their materials, behaviour laws (with their associated parameters), geometries (expressed in terms of zones);
    • the interactions between these bodies, along with their interaction laws (and associated parameters, e.g. friction coefficient) and body pair (each interaction is defined between two bodies);
    • the interaction groups, which contain interactions that have the same interaction law.

    There is still a lot of work to be done (notably regarding the loads applied to the bodies), but this is already a great achievement. This could only have occurred in a sprint, where all the needed expertise is available:

    • the SNCF experts were there to clarify the import needs and check the overall direction;

    • Logilab implemented a data model based on CubicWeb, and imported the data using the ChiPy bindings developed on demand by the LMGC core developer team, using their usual ISO_C_BINDING / Swig Fortran wrapping dance.

      https://www.logilab.org/file/143753/raw/logo_CubicWeb.jpg
    • Logilab undertook the data import; to this end, it asked the LMGC how the relevant information from LMGC90 could be exposed to Python via the ChiPy API.

    Using HDF5 as a data storage backend for LMGC90

    The main point of this topic was to replace the in-house DATBOX/OUTBOX textual format used by LMGC90 to store input and output data, with an open, standard and efficient format.

    Several formats were considered, such as HDF5, MED and NetCDF4.

    MED has been ruled out for the moment, because it lacks support for storing body contact information. HDF5 was ultimately chosen because of the quality of its Python libraries, h5py and pytables, and because of the ease of use that tools like h5fs provide.

    https://www.logilab.org/file/143754/raw/logo_HDF.jpg

    Alain Leufroy from Logilab quickly presented h5py and h5fs usage, and the team started its work, measuring the performance impact of the storage pattern of LMGC90 data. This was quickly achieved, as the LMGC experts made it easy to set up tests of various sizes, and as the Logilab developers managed to understand the concepts and implement the required code in a fast and agile way.
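
    For the record, the kind of h5py usage involved is quite simple; here is a minimal sketch of such a storage test (the actual data layout used during the sprint is an assumption on our part):

    import numpy as np
    import h5py

    coords = np.random.rand(1000, 3)  # fake body coordinates

    # write one group per time step, with a compressed dataset inside
    with h5py.File('lmgc90_test.h5', 'w') as f:
        step = f.create_group('step_0')
        step.create_dataset('coordinates', data=coords, compression='gzip')

    # read the data back and check it survived the round trip
    with h5py.File('lmgc90_test.h5', 'r') as f:
        back = f['step_0/coordinates'][...]

    assert np.allclose(coords, back)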

    Debian / Ubuntu Packaging of LMGC90

    This topic turned out to be more difficult than initially assessed, mainly because LMGC90 depends on non-packaged external libraries, which thus had to be packaged first:

    • the Matlib linear algebra library, written in C,
    • the Lapack95 library, which is a Fortran95 interface to the Lapack library.

    Logilab kept working on this after the sprint and produced packages that are currently being tested by the LMGC team. Some changes are expected (for instance, Python modules should be prefixed with a proper namespace) before the packages can be submitted for inclusion into Debian. The expertise of Logilab regarding Debian packaging was of great help for this task. This will hopefully help to spread the use of LMGC90.

    https://www.logilab.org/file/143755/raw/logo_Debian.jpg

    Distributed Version Control System for LMGC90

    As you may know, Logilab is really fond of Mercurial as a DVCS. Our company has invested a lot in the development of the great evolve extension, which makes Mercurial a very powerful tool for efficiently managing team-based software development in a clean fashion.

    This is why Logilab presented Mercurial's features and advantages over the current VCS used to manage the LMGC90 sources, namely svn, to the other participants of the sprint. This was appreciated and will hopefully benefit LMGC90's ease of development and its spread in the open source community.

    https://www.logilab.org/file/143756/raw/logo_HG.jpg

    Conclusions

    All in all, this two-day sprint on LMGC90, involving participants from several industrial and academic institutions, was a great success. A lot of code was written but, more importantly, several stepping stones were laid, such as:

    • the general LMGC90 data access architecture, with the Python layer on top of the LMGC90 core;
    • the data storage format, namely HDF5.

    Collaterally, several other results have also been achieved:

    • partial LMGC90 data import into the SNCF CubicWeb-based numerical platform,
    • Debian / Ubuntu packaging of LMGC90 and dependencies.

    On a final note, we greatly appreciated the cooperation between the participants, which we found pleasant and efficient. We look forward to finding more occasions to work together.


  • About salt-ami-cloud-builder

    2013/06/07 by Paul Tonelli

    What

    At Logilab we are big fans of SaltStack; we use it quite extensively to centralize, configure and automate deployments.

    http://www.logilab.org/file/145398/raw/SaltStack-Logo.png

    We've talked on this blog about how to build a Debian AMI "by hand", and we wanted to automate this fully. Salt seemed the obvious way to go.

    So we wrote salt-ami-cloud-builder. It is mainly glue between existing pieces of software that we use and like. If you already have some definition of a type of host that you provision using SaltStack, salt-ami-cloud-builder should be able to generate the corresponding AMI.
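
    For instance, a host type described by a simple salt state such as the following (a minimal sketch; the state ID and package list are made up) is the kind of definition that could be baked into a preconfigured AMI:

    # postgres-host.sls -- hypothetical host type definition
    postgres-host:
      pkg.installed:
        - pkgs:
          - postgresql-9.1
          - python-psycopg2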

    http://www.logilab.org/file/145397/raw/open-stack-cloud-computing-logo-2.png

    Why

    Building a Debian based OpenStack private cloud using salt made us realize that we needed a way to generate various flavours of AMIs for the following reasons:

    • Some of our OpenStack users need "preconfigured" AMIs (for example a Debian system with Postgres 9.1 and the appropriate Python bindings) without doing the modifications by hand or waiting for an automated script to do the job at AMI boot time.
    • Some cloud use cases require booting many (hundreds, for instance) machines with the same configuration. While tools like salt automate the job, waiting while the same download and install takes place hundreds of times is a waste of resources. If the modifications have already been integrated into a specialized AMI, you save a lot of computing time. And especially on Amazon (or other pay-per-use cloud infrastructures), these resources are not free.
    • Sometimes one needs to repeat a computation on an instance with the very same packages and input files, possibly years after the first run. Freezing packages and files in one preconfigured AMI helps a lot here. When relying only on a salt configuration, the installed packages may not be (exactly) the same from one run to the next.

    Relation to other projects

    While multiple tools like build-debian-cloud exist, their objective is to build a vanilla AMI from scratch. The salt-ami-cloud-builder starts from such vanilla AMIs to create variations. Other tools like salt-cloud focus instead on the boot phase of the deployment of (multiple) machines.

    Chef and Puppet do the same job as Salt; however, since Salt is already extensively deployed at Logilab, we continue to build on it.

    Get it now!

    Grab the code here: http://hg.logilab.org/master/salt-ami-cloud-builder

    The project page is http://www.logilab.org/project/salt-ami-cloud-builder

    The docs can be read here: http://docs.logilab.org/salt-ami-cloud-builder

    We hope you find it useful. Bug reports and contributions are welcome.

    The logilab-salt-ami-cloud-builder team :)


  • Second hackathon on free mechanics simulation codes

    2014/04/07 by Nicolas Chauvat

    Organisation

    On March 27th, 2014, Logilab hosted a hackathon dedicated to free simulation codes for mechanical phenomena. The participants were:

    • Patrick Pizette, Sébastien Rémond (Ecole des Mines de Douai / DemGCE)
    • Frédéric Dubois, Rémy Mozul (LMGC Montpellier / LMGC90)
    • Mickaël Abbas, Mathieu Courtois (EDF R&D / Code_Aster)
    • Alexandre Martin (LAMSID / Code_Aster)
    • Luca Dall'Olio, Maximilien Siavelis (Alneos)
    • Florent Cayré, Nicolas Chauvat, Denis Laxalde, Alain Leufroy (Logilab)

    DemGCE and LMGC90

    Patrick Pizette and Sébastien Rémond from the Mines de Douai came to talk about DemGCE, their "soft spheres" modelling code (also known as smooth DEM), about the potential for integrating their algorithms into LMGC90 with Frédéric Dubois from the LMGC, and about the Simulagora interface developed by Logilab. DemGCE is a 3D DEM code developed in C by the Mines de Douai laboratory. It will soon perform shared-memory parallel computations thanks to OpenMP. After a general presentation of LMGC90, its ecosystem and its applications, they were able to run their first contact dynamics computations, loading their own granular packing configurations through the Python interface.

    They greatly appreciated the software architecture of LMGC90, in particular its use as a computation library via Python, its support for polyhedral particles and its visualization features with Paraview. Reusing the post-processing/visualization part, either through a standard file format or through a dedicated DEM visualization library, was discussed.

    Frédéric Dubois seemed interested in widening the community and the range of use cases, as well as in some of the algorithms developed by the Mines de Douai for the geometric generation of packings. Adding the "smooth DEM" interaction laws in 3D to LMGC90 could be considered, as they are currently only implemented in LMGC90 for the 2D case. This would make it possible to test LMGC90 in "user" mode and to compare it with the Mines de Douai code (parallelization efficiency, etc.).

    Florent Cayré gave a demonstration of the potential of Simulagora.

    LMGC90 and Code_Aster in Debian

    Denis Laxalde from Logilab worked with Rémy Mozul from the LMGC on the Debian packaging of LMGC90 (to get the required changes integrated upstream), and with Mathieu Courtois from EDF R&D to finalize the packaging of Code_Aster, notably discussing the issue of linking with the Metis library: the version currently used in Code_Aster (Metis 4) is not published under a license compatible with Debian's main section. For this reason, Code_Aster is currently not built with MED support in Debian. Version 5 of Metis, however, has a compatible license and is already in Debian. Using that version would make it possible to ship Code_Aster with Metis support in Debian, but moving from version 4 to version 5 of Metis does not seem trivial.

    See the tickets:

    Folding LibAster back into Code_Aster

    Alain Leufroy and Nicolas Chauvat from Logilab worked on turning LibAster into a series of pull requests on the Code_Aster Bitbucket forge. They presented their changes to Mathieu Courtois from EDF R&D, which will ease their integration.

    See the tickets:

    Removing the supervisor from Code_Aster

    At the end of the day, Alain Leufroy, Nicolas Chauvat and Mathieu Courtois exchanged ideas on simplifying or removing Code_Aster's current command supervisor. It would be desirable to decouple the syntax check (the choice of keywords) from the execution step.

    The check could rely on a tool such as pylint; describing the syntax of the Code_Aster commands for pylint could also make it possible to produce a catalog understandable by Eficas.

    The advantage of using pylint would be to check the command file before execution, even when it contains other Python instructions.

    Memory allocation in Code_Aster

    Mickaël Abbas from EDF R&D looked into modernizing memory allocation in Code_Aster and listed the technical difficulties to overcome; the goal is easier access to the Fortran numerical data from the Python interface. One of the difficulties is sharing Fortran derived types with Python. Rémy Mozul from the LMGC and Denis Laxalde from Logilab explored a technical solution based on Cython and ISO_C_BINDING. Meanwhile, Mickaël Abbas contributed to the progress of this task directly in Code_Aster.

    Doxygen for documenting the Code_Aster sources

    Luca Dall'Olio from Alneos and Mathieu Courtois tested setting up Doxygen to document Code_Aster. The doxygen configuration file was modified to extract comments from the Fortran code (the comments must be located above the function declaration, for instance). The doxygen configuration was pushed to the Bitbucket repository. It remains to be evaluated whether several configurations will be needed (for the C, Python and Fortran parts) or whether a single one will suffice. A particular configuration makes it possible to extract, for each function, the places where it is called and the other functions it uses. An example was produced to show how to write equations in LaTeX syntax. Generating the documentation takes more than an hour (only the graphical part can be parallelized). The resulting documentation should be published on the Code_Aster website.

    The envisioned next step is to couple Doxygen with Breathe and Sphinx, in order to complement the documentation extracted from the source code with more detailed texts.

    Generating this documentation should become a waf target, for instance waf doc. A quick preview of the rendered documentation of one module would be possible with waf doc file1.F90 [file2.c [...]].

    See Code Aster #18 configure doxygen to comment the source files

    Finite element catalog

    Maximilien Siavelis from Alneos and Alexandre Martin from the LAMSID, joined at the end of the day by Frédéric Dubois from the LMGC as well as Nicolas Chauvat and Florent Cayré from Logilab, worked on making it easier to describe finite element catalogs in Code_Aster. The definition of what characterizes a finite element was the subject of passionate debate. The points discussed will feed Alexandre Martin's work on this topic in Code_Aster. Alexandre Martin has already sent the participants an article he wrote to summarize the debates.

    Propagating errors from Fortran to Python

    Mathieu Courtois from EDF R&D showed Rémy Mozul from the LMGC a mechanism for propagating exceptions from Fortran to Python, which will improve error handling in LMGC90; this had caused problems in a project carried out by Denis Laxalde from Logilab for the SNCF.

    See aster_exceptions.c

    Conclusion

    All participants seemed happy with this second hackathon, which followed the first edition of March 2013. The next edition will take place in autumn 2014 or spring 2015, so don't miss it!


  • Debian science sprint and workshop at ESRF

    2012/06/22 by Julien Cristau


    From June 24th to June 26th, the European Synchrotron organises a workshop centered around Debian. On Monday, a number of talks about the use of Debian in scientific facilities will be featured. On Sunday and Tuesday, members of the Debian Science group will meet for a sprint focusing on the upcoming Debian 7.0 release.

    Among the speakers will be Stefano Zacchiroli, the current Debian project leader. Logilab will be present with Nicolas Chauvat at Monday's conference, and Julien Cristau at both the sprint and the conference.

    At the sprint we'll be discussing packaging of scientific libraries such as blas or MPI implementations, and working on polishing other scientific packages, such as Python-related ones (including Salome, on which we are currently working).


  • Looking back at MiniDebConf Paris 2014

    2014/03/05 by Arthur Lutz
    http://www.logilab.org/file/226609/raw/200px-Mini-debconf-paris.png

    We are happy to have taken part in MiniDebConf Paris.

    We sponsored the event and also gave two talks:

    • Julien Cristau presented the team he is part of for the next Debian release, Jessie. He notably gave the community advice on the preparation of jessie. Here are his slides: Release team: Jessie.
    • David Douard presented Salt to administrate Debian systems, introducing Salt with a particular focus on what Salt can bring to the administration of a fleet of Debian machines.

    With about fifty attendees over the two days, it is always a pleasure to meet the French Debian community. Thanks to the Debian France association for organizing this conference.


  • Code_Aster back in Debian unstable

    2014/03/31 by Denis Laxalde

    Last week, a new release of Code_Aster entered Debian unstable. Code_Aster is a finite element solver for partial differential equations in mechanics, mainly developed by EDF R&D (Électricité de France). It is arguably one of the most feature-complete pieces of free software available in this domain.

    Aster has been in Debian since 2012, thanks to the work of the Debian Science team. Yet it has always been a somewhat problematic package, with a couple of persistent Release Critical (RC) bugs (FTBFS, installability issues), and it never actually entered a stable release of Debian.

    Logilab has been committed to improving Code_Aster for a long time, in various areas, notably through the LibAster friendly fork, which aims at turning the monolithic Aster into a library usable from Python.

    Recently, the EDF R&D team in charge of the development of Code_Aster took several major decisions, including:

    • the move to the Bitbucket forge as a sign of opening up to the community (following the path opened by LibAster, which imported the code of Code_Aster into a Mercurial repository), and
    • the change of build system from a custom makefile-style architecture to a fine-grained Waf system (taken from that of LibAster).

    The latter obviously led to significant changes on the Debian packaging side, most of which going in a sane direction: the debian/rules file slimmed down from 239 lines to 51, and a bunch of tricky install-step manipulations were dropped, leading to something much simpler and closer to upstream (see #731211 for details). From the upstream perspective, this re-packaging effort based on the new build system may be the opportunity to update the installation scheme (in particular by declaring the Python library as private).

    Clearly, there's still room for improvements on both sides (like building with the new Metis library, or shipping several versions of Aster: stable/testing, MPI/serial). All in all, this is good for both Debian users and upstream developers. At Logilab, we hope that this effort will consolidate our collaboration with EDF R&D.


  • Debian bug squashing party in Paris

    2012/02/16 by Julien Cristau

    Logilab will be present at the upcoming Debian BSP in Paris this week-end. This event will focus on fixing as many "release critical" bugs as possible, to help with the preparation of the upcoming Debian 7.0 "wheezy" release. It will also be an opportunity to introduce newcomers to the processes of Debian development and bug fixing, and for contributors in various areas of the project to interact "in real life".

    http://www.logilab.org/file/88881?vid=download

    The current stable release, Debian 6.0 "squeeze", came out in February 2011. The development of "wheezy" is scheduled to freeze in June 2012, for an eventual release later this year.

    Among the things we hope to work on during this BSP: the latest HDF5 release (1.8.8) includes API and packaging changes that require updates in dependent packages. With the number of scientific packages relying on HDF5, this is a pretty big change, as tracked in this Debian bug.


  • Mini-DebConf Paris 2012

    2012/11/29 by Julien Cristau

    Last week-end, I attended the mini-DebConf organized at EPITA (near Paris) by the French Debian association and sponsored by Logilab.

    http://www.logilab.org/file/112649?vid=download

    The event was a great success, with a rather large number of attendees, including people coming from abroad such as Debian kernel maintainers Ben Hutchings and Maximilian Attems, who talked about their work with Linux.

    Among the other speakers were Loïc Dachary about OpenStack and its packaging in Debian, and Josselin Mouette about his work deploying Debian/GNOME desktops in a large enterprise environment at EDF R&D.

    On my part I gave a talk on Saturday about Debian's release team, and the current state of the wheezy (to-be Debian 7.0) release.

    On Sunday I presented together with Vladimir Daric the work we did to migrate a computation cluster from Red Hat to Debian. Attendees had quite a few questions about our use of ZFS on Linux for storage, and salt for configuration management and deployment.

    Slides for the talks are available on the mini-DebConf web page (wheezy state, migration to debian cluster also viewable on slideshare), and videos will soon be on http://video.debian.net/.

    Now looking forward to next summer's DebConf13 in Switzerland, and hopefully next year's edition of the Paris event.


  • Debian Nantes meetup, October 2015

    2015/10/23 by Arthur Lutz

    Last night, Debian users and aficionados gathered at the cantine numérique in Nantes. About thirty people answered the call. Damien Raude-Morvan introduced the evening, followed by Thomas Vincent, who presented the non-uploading Debian developer status and invited attendees to participate in Debian without necessarily getting their hands into packaging. Lunar then presented the ongoing work on reproducible builds.

    //www.logilab.org/file/2269692/raw/debian_nantes.png

    I gave a quick presentation on using Salt to manage many Debian systems (slides html, slideshare), focusing in particular on the use of the event bus provided by salt (scheduler, orchestration, reactor).

    The Debian meetup dynamic in Nantes is thus (re)launched, with the goal of meeting every two months. Stay tuned (notably on the organization pad).


  • Logilab at Debconf 2014 - Debian annual conference

    2014/08/21 by Arthur Lutz

    Logilab is proud to contribute to the annual Debian conference, which will take place in Portland (USA) from the 23rd to the 31st of August.

    Julien Cristau (debian page) will be giving two talks at the conference:

    http://www.logilab.org/file/263602/raw/debconf2014.png

    Logilab is also contributing to the conference as a sponsor.

    Here is what we previously blogged about salt and the previous DebConf. Stay tuned for a blog post about what we saw and heard at the conference.

    https://www.debian.org/logos/openlogo-100.png

  • Report from DebConf14

    2014/09/05 by Julien Cristau

    Last week I attended DebConf14 in Portland, Oregon. As usual the conference was a blur, with lots of talks, lots of new people, and lots of old friends. The organizers tried to do something different this year, with a longer conference (9 days instead of a week) and some dedicated hack time, instead of a pre-DebConf "DebCamp" week. That worked quite well for me, as it meant the schedule was not quite so full with talks, and even though I didn't really get any hacking done, it felt a bit more relaxed and allowed some more hallway track discussions.

    http://www.logilab.org/file/264666/raw/Screenshot%20from%202014-09-05%2015%3A09%3A38.png

    On the talks side, the keynotes from Zack and Biella provided some interesting thoughts. Some nice progress was made on making package builds reproducible.

    I gave two talks: an introduction to salt (odp),

    http://www.logilab.org/file/264663/raw/slide2.jpg

    and a report on the Debian jessie release progress (pdf).

    http://www.logilab.org/file/264665/raw/slide3.jpg

    And as usual all talks were streamed live and recorded, and many are already available thanks to the awesome DebConf video team. Also for a change, and because I'm a sucker for punishment, I came back with more stuff to do.


  • Using Saltstack to limit impact of Poodle SSLv3 vulnerability

    2014/10/15 by Arthur Lutz

    Here at Logilab, we're big fans of SaltStack automation. As seen with Heartbleed, controlling your infrastructure means being able to fix your servers in a matter of a few commands, as documented in this blog post. The same applies to Shellshock, more recently, with this blog post.

    Yesterday we got the news that a big SSL vulnerability was going to be disclosed. Code name: Poodle. This morning we got the details and started working on a fix through salt.

    So far, we've handled configuration changes and service restarts for apache, nginx and postfix, and user configuration for iceweasel (Debian's Firefox) and chromium (adapting this to Firefox and Chrome should be a breeze). Some credit goes to mtpettyp for his answer on askubuntu.

    http://www.logilab.org/file/267853/raw/saltstack_poodlebleed.jpg
    {% if salt['pkg.version']('apache2') %}
    poodle apache server restart:
        service.running:
            - name: apache2
      {% for foundfile in salt['cmd.run']('rgrep -m 1 SSLProtocol /etc/apache*').split('\n') %}
        {% if 'No such file' not in foundfile and 'bak' not in foundfile and foundfile.strip() != ''%}
    poodle {{ foundfile.split(':')[0] }}:
        file.replace:
            - name: {{ foundfile.split(':')[0] }}
            - pattern: "SSLProtocol all -SSLv2[ ]*$"
            - repl: "SSLProtocol all -SSLv2 -SSLv3"
            - backup: False
            - show_changes: True
            - watch_in:
                - service: apache2
        {% endif %}
      {% endfor %}
    {% endif %}
    
    {% if salt['pkg.version']('nginx') %}
    poodle nginx server restart:
        service.running:
            - name: nginx
      {% for foundfile in salt['cmd.run']('rgrep -m 1 ssl_protocols /etc/nginx/*').split('\n') %}
        {% if 'No such file' not in foundfile and 'bak' not in foundfile and foundfile.strip() != ''%}
    poodle {{ foundfile.split(':')[0] }}:
        file.replace:
            - name: {{ foundfile.split(':')[0] }}
            - pattern: "ssl_protocols .*$"
            - repl: "ssl_protocols TLSv1 TLSv1.1 TLSv1.2;"
            - show_changes: True
            - watch_in:
                - service: nginx
        {% endif %}
      {% endfor %}
    {% endif %}
    
    {% if salt['pkg.version']('postfix') %}
    poodle postfix server restart:
        service.running:
            - name: postfix
    poodle /etc/postfix/main.cf:
    {% if 'main.cf' in salt['cmd.run']('grep -l smtpd_tls_mandatory_protocols /etc/postfix/main.cf') %}
        file.replace:
            - pattern: "smtpd_tls_mandatory_protocols=.*"
            - repl: "smtpd_tls_mandatory_protocols=!SSLv2,!SSLv3"
    {% else %}
        file.append:
            - text: |
                # poodle fix
                smtpd_tls_mandatory_protocols=!SSLv2,!SSLv3
    {% endif %}
            - name: /etc/postfix/main.cf
            - watch_in:
                - service: postfix
    {% endif %}
    
    {% if salt['pkg.version']('chromium') %}
    /usr/share/applications/chromium.desktop:
        file.replace:
            - pattern: Exec=/usr/bin/chromium %U
            - repl: Exec=/usr/bin/chromium --ssl-version-min=tls1 %U
    {% endif %}
    
    {% if salt['pkg.version']('iceweasel') %}
    /etc/iceweasel/pref/poodle.js:
        file.managed:
            - contents: pref("security.tls.version.min", 1)
    {% endif %}
    

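    Assuming the state above is saved as poodle.sls at the root of your salt file tree, it can be applied to all minions with:

    salt '*' state.sls poodle
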
    The code is also published as a gist on GitHub. Feel free to comment and fork it. There is room for improvement, and don't forget that by disabling SSLv3 you might prevent some users with "legacy" browsers from accessing your services.


  • Mini-Debconf Lyon 2015

    2015/04/29 by Julien Cristau
    //www.logilab.org/file/291628/raw/debian-france.png

    A couple of weeks ago I attended the mini-DebConf organized by Debian France in Lyon.

    It was a really nice week-end, and the first time a French mini-DebConf wasn't in Paris :)

    Among the highlights, Juliette Belin reported on her experience as a new contributor to Debian: she authored the awesome "Lines" theme which was selected as the default theme for Debian 8.

    //www.logilab.org/file/291626/raw/juliette.jpg

    As a non-developer and newcomer to the free software community, she had quite interesting insights and ideas about areas where development processes need to improve.

    And Raphael Geissert reported on the new httpredir.debian.org service (previously http.debian.net), an http redirector to automagically pick the closest Debian archive mirror. So long, manual sources.list updates on laptops whenever travelling!

    //www.logilab.org/file/291627/raw/raphael.jpg
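
    With the redirector, a single sources.list line (sketched here for jessie) is enough wherever you are:

    deb http://httpredir.debian.org/debian jessie main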

    Finally the mini-DebConf was a nice opportunity to celebrate the release of Debian 8, two weeks in advance.

    Now it's time to go and upgrade all our infrastructure to jessie.


  • Going to DebConf15

    2015/08/11 by Julien Cristau

    On Sunday I travelled to Heidelberg, Germany, to attend the 16th annual Debian developers conference, DebConf15.

    The conference itself is not until next week, but this week is DebCamp, a hacking session. I've already met a few of my DSA colleagues, who've been working on setting up the network infrastructure. My other plans for this week involve helping the Big Transition of 2015 along, and trying to remove the setuid bit from /usr/bin/X in the default Debian install (bug #748203 in particular).

    As for next week, there's a rich schedule in which I'll need to pick a few things to go see.

    //www.logilab.org/file/524206/raw/Dc15going1.png

  • DebConf15 wrap-up

    2015/08/25 by Julien Cristau
    //www.logilab.org/file/856155/raw/heidelberg-panorama-2.jpg

    I just came back from two weeks in Heidelberg for DebCamp15 and DebConf15.

    In the first week, besides helping out DebConf's infrastructure team with network setup, I tried to make some progress on the library transitions triggered by libstdc++6's C++11 changes. At first, I spent many hours going through header files for a bunch of libraries trying to figure out if the public API involved std::string or std::list. It turns out that is time-consuming, error-prone, and pretty efficient at making me lose the will to live. So I ended up stealing a script from Steve Langasek to automatically rename library packages for this transition. This ended in 29 non-maintainer uploads to the NEW queue, quickly processed by the FTP team. Sadly the transition is not quite there yet, as making progress with the initial set of packages reveals more libraries that need renaming.

    Building on some earlier work from Laurent Bigonville, I've also moved the setuid root Xorg wrapper from the xserver-xorg package to xserver-xorg-legacy, which is now in experimental. Hopefully that will make its way to sid and stretch soon (need to figure out what to do with non-KMS drivers first).

    Finally, with the help of the security team, the security tracker was moved to a new VM that will hopefully not eat its root filesystem every week as the old one was doing the last few months. Of course, the evening we chose to do this was the night DebConf15's network was being overhauled, which made things more interesting.

    DebConf itself was the opportunity to meet a lot of people. I was particularly happy to meet Andreas Boll, who has been a member of pkg-xorg for two years now, working on our mesa package, among other things. I didn't get to see a lot of talks (too many other things going on), but did enjoy Enrico's stand-up comedy, the CitizenFour screening, and Jacob Appelbaum's keynote. Thankfully, for the rest, the video team has done a great job as usual.

    Note

    The above picture is by Aigars Mahinovs, licensed under CC-BY 2.0.


  • Installing Debian Jessie on a "pure UEFI" system

    2016/06/13 by David Douard

    At the core of the Logilab infrastructure is a highly-available pair of small machines dedicated to our main directory and authentication services: LDAP, DNS, DHCP, Kerberos and Radius.

    The machines are small fanless boxes powered by a 1GHz Via Eden processor, 512MB of RAM and 2GB of storage on a CompactFlash module.

    They have served us well for many years, but now is the time for an upgrade. We've bought a pair of Lanner FW-7543B boxes that have the same form factor. They are not fanless, but are much more powerful. They are pretty nice, but have one major drawback: when set up in UEFI mode, their firmware does not boot from a legacy BIOS-mode device. Another difficulty is that they do not have a video connector (there is a VGA output on the motherboard, but the connector is optional), so everything must be done via the serial console.

    https://www.logilab.org/file/6679313/raw/FW-7543_front.jpg

    I knew the Debian Jessie installer would provide everything required to handle a UEFI-based system, but it took me a few tries to get it to boot.

    First, I tried the standard netboot image, but the firmware did not want to boot from a USB stick, probably because the image requires an MBR-based bootloader.

    Then I tried to boot from the Refind bootable image, and it worked! At least I had proof that this little beast could boot in UEFI. But, although it is probably possible, I could not figure out how to tweak the Refind config file to make it properly boot the Debian installer kernel and initrd.

    https://www.logilab.org/file/6679257/raw/uefi_lanner_nope.png

    Finally, I tried something I know much better: Grub. Here is what I did to get a working UEFI Debian installer on a USB key.

    Partitioning

    First, in the UEFI world, you need a GPT partition table with a FAT partition typed "EFI System":

    david@laptop:~$ sudo fdisk /dev/sdb
    Welcome to fdisk (util-linux 2.25.2).
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.
    
    Command (m for help): g
    Created a new GPT disklabel (GUID: 52FFD2F9-45D6-40A5-8E00-B35B28D6C33D).
    
    Command (m for help): n
    Partition number (1-128, default 1): 1
    First sector (2048-3915742, default 2048): 2048
    Last sector, +sectors or +size{K,M,G,T,P} (2048-3915742, default 3915742):  +100M
    
    Created a new partition 1 of type 'Linux filesystem' and of size 100 MiB.
    
    Command (m for help): t
    Selected partition 1
    Partition type (type L to list all types): 1
    Changed type of partition 'Linux filesystem' to 'EFI System'.
    
    Command (m for help): p
    Disk /dev/sdb: 1.9 GiB, 2004877312 bytes, 3915776 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 52FFD2F9-45D6-40A5-8E00-B35B28D6C33D
    
    Device     Start    End Sectors  Size Type
    /dev/sdb1   2048 206847  204800  100M EFI System
    
    Command (m for help): w
    
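    One step the transcript above does not show: the new partition still needs a FAT filesystem before it can be mounted and used, with something like:

    david@laptop:~$ sudo mkfs.vfat /dev/sdb1
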

    Install Grub

    Now we need to install a grub-efi bootloader in this partition:

    david@laptop:~$ pmount sdb1
    david@laptop:~$ sudo grub-install --target x86_64-efi --efi-directory /media/sdb1/ --removable --boot-directory=/media/sdb1/boot
    Installing for x86_64-efi platform.
    Installation finished. No error reported.
    
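    Since --removable installs grub at the default removable-media path, a quick check shows the bootloader is where the firmware will look for it (expected output, assuming an amd64 target):

    david@laptop:~$ ls /media/sdb1/EFI/BOOT
    BOOTX64.EFI
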

    Copy the Debian Installer

    Our next step is to copy the Debian's netboot kernel and initrd on the USB key:

    david@laptop:~$ mkdir /media/sdb1/EFI/debian
    david@laptop:~$ wget -O /media/sdb1/EFI/debian/linux http://ftp.fr.debian.org/debian/dists/jessie/main/installer-amd64/current/images/netboot/debian-installer/amd64/linux
    --2016-06-13 18:40:02--  http://ftp.fr.debian.org/debian/dists/jessie/main/installer-amd64/current/images/netboot/debian-installer/amd64/linux
    Resolving ftp.fr.debian.org (ftp.fr.debian.org)... 212.27.32.66, 2a01:e0c:1:1598::2
    Connecting to ftp.fr.debian.org (ftp.fr.debian.org)|212.27.32.66|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 3120416 (3.0M) [text/plain]
    Saving to: ‘/media/sdb1/EFI/debian/linux’
    
    /media/sdb1/EFI/debian/linux      100%[========================================================>]   2.98M      464KB/s   in 6.6s
    
    2016-06-13 18:40:09 (459 KB/s) - ‘/media/sdb1/EFI/debian/linux’ saved [3120416/3120416]
    
    david@laptop:~$ wget -O /media/sdb1/EFI/debian/initrd.gz http://ftp.fr.debian.org/debian/dists/jessie/main/installer-amd64/current/images/netboot/debian-installer/amd64/initrd.gz
    --2016-06-13 18:41:30--  http://ftp.fr.debian.org/debian/dists/jessie/main/installer-amd64/current/images/netboot/debian-installer/amd64/initrd.gz
    Resolving ftp.fr.debian.org (ftp.fr.debian.org)... 212.27.32.66, 2a01:e0c:1:1598::2
    Connecting to ftp.fr.debian.org (ftp.fr.debian.org)|212.27.32.66|:80... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 15119287 (14M) [application/x-gzip]
    Saving to: ‘/media/sdb1/EFI/debian/initrd.gz’
    
    /media/sdb1/EFI/debian/initrd.g    100%[========================================================>]  14.42M    484KB/s   in 31s
    
    2016-06-13 18:42:02 (471 KB/s) - ‘/media/sdb1/EFI/debian/initrd.gz’ saved [15119287/15119287]
    

    Configure Grub

    Then, we must write a decent grub.cfg file to load these:

    david@laptop:~$ cat >/media/sdb1/boot/grub/grub.cfg <<EOF
    menuentry "Jessie Installer" {
      insmod part_msdos
      insmod ext2
      insmod part_gpt
      insmod fat
      insmod gzio
      echo  'Loading Linux kernel'
      linux /EFI/debian/linux --- console=ttyS0,115200
      echo 'Loading InitRD'
      initrd /EFI/debian/initrd.gz
    }
    EOF
    

    Et voilà, piece of cake!