Blog entries

  • A quick take on continuous integration services for Bitbucket

    2013/12/19 by Sylvain Thenault

    Some time ago, we moved Pylint from this forge to Bitbucket (more on this here).


    Since then, I have continued to use the continuous integration (CI) service we provide on logilab.org to run tests on new commits and to do the release job (publish a tarball on PyPI and on our web site, build Debian and Ubuntu packages, etc.). This works, but it is not really convenient, since logilab.org's CI service is not designed to be used for projects hosted elsewhere. I also wanted to see what others have to offer, so I decided to find a public CI service to host at least Pylint's and Astroid's automated tests.

    Here are the results of my first swing at it. If you have other suggestions, configuration proposals or whatever, please comment.

    First, here are the ones I didn't test along with why:

    The first one I actually tested, also the first one to show up when looking for "bitbucket continuous integration" on Google, is https://drone.io. The UI is really simple and I was able to set up tests for Pylint in a matter of minutes: https://drone.io/bitbucket.org/logilab/pylint. Tests are automatically launched when a new commit is pushed to Pylint's Bitbucket repository, and that hook was set up automatically.

    Pushing Drone.io a bit further, one missing feature is the ability to have different settings for my project, e.g. to launch tests on all the Python flavors officially supported by Pylint (2.5, 2.6, 2.7, 3.2, 3.3, pypy, jython, etc.). Last but not least, the killer feature I'm missing is the ability to launch tests on pull requests, which travis-ci supports.

    Then I gave http://wercker.com a shot, but got stuck at the Bitbucket repository selection screen: none were displayed. Maybe because I don't own Pylint's repository and am only part of the admin/dev team? Anyway, wercker seems appealing too, though its YAML-based configuration looks a bit more complicated than drone.io's; since I was not able to test it further, there's not much else to say.


    So for now the winner is https://drone.io, but the first one that lets me test on several Python versions and launch tests on pull requests will be the definitive winner! Bonus points for automating the release process and checking test coverage on pull requests as well.


  • Experiments on building a Jenkins CI service with Salt

    2015/06/17 by Denis Laxalde

    In this blog post, I'll talk about my recent experiments in building a continuous integration service with Jenkins that is, as much as possible, managed through Salt. We've been relying on a Jenkins platform for quite some time at Logilab (Tolosa team). The service was mostly managed by me, with sporadic help from other team-mates, but I've never been entirely satisfied with the way it was managed, because it involved a lot of boilerplate configuration through the Jenkins user interface, which does not scale very well nor make long-term maintenance easy.

    So recently I took the plunge and decided to move to a Salt-based configuration and management of our Jenkins CI platform. There are actually two aspects here. The first concerns the setup of Jenkins itself (this includes installation, security configuration and plugins management, among other things). The second concerns the management of client projects (or jobs, in Jenkins jargon). For this second aspect, one of the design goals was to enable easy configuration of jobs by users not necessarily familiar with Jenkins, and to make collaborative maintenance easy. To tackle these two aspects, I've essentially been using (or developing) two distinct Salt formulas, which I'll detail hereafter.


    Core setup: the jenkins formula

    The core setup of Jenkins is based on an existing Salt formula, the jenkins-formula, which I extended a bit to support map.jinja and which was further improved by Yann and Laura to support installation of plugins (see 3b524d4).

    With that, deploying a Jenkins server is as simple as adding the following to your states and pillars top.sls files:

    base:
      "jenkins":
        - jenkins
        - jenkins.plugins
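
    For the pillar top.sls, a matching entry pointing the same minion at the pillar file that holds the configuration described below is enough (assuming a single jenkins pillar file, which is an assumption about your layout):

    base:
      "jenkins":
        - jenkins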
    

    Base pillar configuration is used to declare anything that differs from the default Jenkins settings in a jenkins section, e.g.:

    jenkins:
      lookup:
        home: /opt/jenkins
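
    Following the usual SaltStack map.jinja convention, the formula's states then merge such pillar overrides with built-in defaults. Roughly speaking (a generic sketch of the pattern, not the formula's actual code), a state can do:

    {% from "jenkins/map.jinja" import jenkins with context %}

    jenkins_home:
      file.directory:
        - name: {{ jenkins.home }}
        - user: jenkins
        - group: jenkins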
    

    Plugins configuration is declared in a plugins subsection, as follows:

    jenkins:
      lookup:
        plugins:
          scm-api:
            url: 'http://updates.jenkins-ci.org/download/plugins/scm-api/0.2/scm-api.hpi'
            hash: 'md5=9574c07bf6bfd02a57b451145c870f0e'
          mercurial:
            url: 'http://updates.jenkins-ci.org/download/plugins/mercurial/1.54/mercurial.hpi'
            hash: 'md5=1b46e2732be31b078001bcc548149fe5'
    

    (Note that plugin dependencies are not handled by Jenkins when installing from the command line, nor by this formula. So in the preceding example, an entry for the Mercurial plugin alone would not have been enough, because that plugin depends on scm-api.)

    Other aspects (such as security setup) are not handled yet (neither by the original formula, nor by our extension), but I tend to believe it is acceptable to manage these "by hand" for now.

    Jobs management: the jenkins_jobs formula

    For this task, I leveraged the excellent jenkins-job-builder tool, which makes it possible to configure jobs using a declarative YAML syntax. The tool takes care of installing the jobs and also handles housekeeping tasks such as checking configuration validity or deleting old configurations. With this tool, my goal was to let end users of the Jenkins service add their own project by providing, at a minimum, a YAML job description file. So for instance, a simple job description for a CubicWeb job could be:

    - scm:
        name: cubicweb
        scm:
          - hg:
             url: http://hg.logilab.org/review/cubicweb
             clean: true
    
    - job:
        name: cubicweb
        display-name: CubicWeb
        scm:
          - cubicweb
        builders:
          - shell: "find . -name 'tmpdb*' -delete"
          - shell: "tox --hashseed noset"
        publishers:
          - email:
              recipients: cubicweb@lists.cubicweb.org
    

    It consists of two parts:

    • the scm section declares, well, SCM information, here the location of the review Mercurial repository, and,

    • a job section, which consists of some metadata (the project name), a reference to the SCM section declared above, some builders (here simple shell builders) and a publishers part to send results by email.

    Pretty simple. (Note that most of the test-running configuration is actually declared within the source repository, via tox (another story), so that the CI bot holds minimal knowledge and fetches information directly from the source repository.)
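
    To illustrate that split, the source repository would carry a tox.ini along these lines (a hypothetical sketch, not CubicWeb's actual configuration), while the Jenkins job above only knows it has to run tox:

    [tox]
    envlist = py27

    [testenv]
    deps =
        pytest
    commands =
        pytest {posargs:test}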

    To automate the deployment of this kind of configuration, I made a jenkins_jobs-formula which takes care of:

    1. installing jenkins-job-builder,
    2. deploying YAML configurations,
    3. running jenkins-jobs update to push jobs into the Jenkins instance.

    In addition to installing the YAML file and triggering a jenkins-jobs update run upon changes to the job files, the formula allows a job to list the distribution packages it requires for building.
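
    In effect, the last step boils down to invoking jenkins-job-builder's command line against the deployed job descriptions, along these lines (the configuration file and target directory shown here are illustrative, not the formula's exact values):

    jenkins-jobs --conf /etc/jenkins_jobs/jenkins_jobs.ini update /srv/jenkins_jobs/

    where the .ini file holds the Jenkins URL and the credentials used to push the job definitions.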

    Wrapping things up, a pillar declaration of a Jenkins job looks like:

    jenkins_jobs:
      lookup:
        jobs:
          cubicweb:
            file: <path to local cubicweb.yaml>
            pkgs:
              - mercurial
              - python-dev
              - libgecode-dev
    

    where the file entry indicates the source of the YAML file to install and pkgs lists build dependencies that are not managed by the job itself (typically non-Python packages in our case).

    So, as an end user, all you need to provide is the YAML file and a pillar snippet similar to the above.

    Outlook

    This initial setup appears to be enough to greatly reduce the burden of managing a Jenkins server and to allow individual users to contribute jobs for their projects through simple contributions to the Salt configuration.

    Later on, there are a few things I'd like to extend on the jenkins_jobs-formula side, most notably the handling of remote sources for the YAML configuration files (and maybe the package lists as well). I'd also like to experiment with configuring slaves for the Jenkins server, possibly relying on Docker (taking advantage of another of my experiments...).


  • Testing salt formulas with testinfra

    2016/07/21 by Philippe Pepiot

    In a previous post we talked about an environment to develop Salt formulas. To add some spicy requirements, the formula must now handle multiple target OSes (Debian and CentOS), have tests, and have a continuous integration (CI) server set up.


    I started writing a framework for this purpose a year ago; it's called testinfra and it executes commands on remote systems and makes assertions about the state and behavior of the system. The modules API provides a pythonic way to inspect the system. It integrates smoothly with pytest, which adds some useful features out of the box, like parametrization to run tests against multiple systems.

    Writing useful tests is not an easy task; my advice is to test code that triggers implicit actions, code that has caused issues in the past, or simply to check that the application works correctly, as you would do in a shell.

    For instance, here is one of the tests I wrote for the saemref formula:

    def test_saemref_running(Process, Service, Socket, Command):
        assert Service("supervisord").is_enabled
    
        supervisord = Process.get(comm="supervisord")
        # Supervisord runs as root
        assert supervisord.user == "root"
        assert supervisord.group == "root"
    
        cubicweb = Process.get(ppid=supervisord.pid)
        # CubicWeb should run as the saemref user
        assert cubicweb.user == "saemref"
        assert cubicweb.group == "saemref"
        assert cubicweb.comm == "uwsgi"
        # Should have 2 worker processes with 8 threads each and 1 http process with one thread
        child_threads = sorted([c.nlwp for c in Process.filter(ppid=cubicweb.pid)])
        assert child_threads == [1, 8, 8]
    
        # uwsgi should bind on all ipv4 addresses
        assert Socket("tcp://0.0.0.0:8080").is_listening
    
        html = Command.check_output("curl http://localhost:8080")
        assert "<title>accueil (Référentiel SAEM)</title>" in html
    

    Now we can run tests against a running container by giving its name or docker id to testinfra:

    % testinfra --hosts=docker://1a8ddedf8164 test_saemref.py
    [...]
    test/test_saemref.py::test_saemref_running[docker:/1a8ddedf8164] PASSED
    

    The immediate advantage of writing such a test is that you can reuse it for monitoring purposes; testinfra can behave like a Nagios plugin:

    % testinfra -qq --nagios --hosts=ssh://prod test_saemref.py
    TESTINFRA OK - 1 passed, 0 failed, 0 skipped in 2.31 seconds
    .
    

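    To hook this into an existing monitoring setup, such a check can then be declared as a regular Nagios command, e.g. (a sketch; the paths and host name are illustrative):

    define command {
        command_name    check_saemref
        command_line    testinfra -qq --nagios --hosts=ssh://prod /etc/testinfra/test_saemref.py
    }
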
    We can now integrate the test suite into our run-tests.py by adding some code to build and run a provisioned docker image, plus a test command that runs the testinfra tests against it.

    provision_option = click.option('--provision', is_flag=True, help="Provision the container")
    
    @cli.command(help="Build an image")
    @image_choice
    @provision_option
    def build(image, provision=False):
        dockerfile = "test/{0}.Dockerfile".format(image)
        tag = "{0}-formula:{1}".format(formula, image)
        if provision:
            dockerfile_content = open(dockerfile).read()
            dockerfile_content += "\n" + "\n".join([
                "ADD test/minion.conf /etc/salt/minion.d/minion.conf",
                "ADD {0} /srv/formula/{0}".format(formula),
                "RUN salt-call --retcode-passthrough state.sls {0}".format(formula),
            ]) + "\n"
            dockerfile = "test/{0}_provisioned.Dockerfile".format(image)
            with open(dockerfile, "w") as f:
                f.write(dockerfile_content)
            tag += "-provisioned"
        subprocess.check_call(["docker", "build", "-t", tag, "-f", dockerfile, "."])
        return tag
    
    
    @cli.command(help="Spawn an interactive shell in a new container")
    @image_choice
    @provision_option
    @click.pass_context
    def dev(ctx, image, provision=False):
        tag = ctx.invoke(build, image=image, provision=provision)
        subprocess.call([
            "docker", "run", "-i", "-t", "--rm", "--hostname", image,
            "-v", "{0}/test/minion.conf:/etc/salt/minion.d/minion.conf".format(BASEDIR),
            "-v", "{0}/{1}:/srv/formula/{1}".format(BASEDIR, formula),
            tag, "/bin/bash",
        ])
    
    
    @cli.command(help="Run tests against a provisioned container",
                 context_settings={"allow_extra_args": True})
    @click.pass_context
    @image_choice
    def test(ctx, image):
        import pytest
        tag = ctx.invoke(build, image=image, provision=True)
        docker_id = subprocess.check_output([
            "docker", "run", "-d", "--hostname", image,
            "-v", "{0}/test/minion.conf:/etc/salt/minion.d/minion.conf".format(BASEDIR),
            "-v", "{0}/{1}:/srv/formula/{1}".format(BASEDIR, formula),
            tag, "tail", "-f", "/dev/null",
        ]).strip()
        try:
            ctx.exit(pytest.main(["--hosts=docker://" + docker_id] + ctx.args))
        finally:
            subprocess.check_call(["docker", "rm", "-f", docker_id])
    

    Tests can be run on a local CI server or on Travis; they "just" require a Docker server. Here is an example .travis.yml:

    sudo: required
    services:
      - docker
    language: python
    python:
      - "2.7"
    env:
      matrix:
        - IMAGE=centos7
        - IMAGE=jessie
    install:
      - pip install testinfra
    script:
      - python run-tests.py test $IMAGE -- -v
    

    I wrote a dummy formula with the above code; feel free to use it as a template for your own formula, or open pull requests and break some tests.

    There is a highly enhanced version of this code in the saemref formula repository, including:

    • Building a provisioned docker image with custom pillars, which we use to run an online demo
    • Destructive tests, where each test is run in a dedicated "fresh" container
    • Running systemd in the containers to get a system close to the production one (this enables the use of the Salt service module)
    • Running a postgresql container linked to the tested container for specific tests, like upgrading a CubicWeb instance.

    Destructive tests rely on advanced pytest features that may produce weird bugs when mixed together; there is too much magic involved here. Also, handling systemd in docker is really painful and adds a lot of complexity: for instance, some systemctl commands require a running systemd as PID 1, which is not the case during the docker build phase. So the trade-off between complexity and these features may not be worth it.
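
    For reference, getting systemd to run as PID 1 inside a container typically requires an invocation along these lines (a sketch; the exact flags depend on the base image and Docker version, and the image tag is illustrative):

    docker run -d --privileged \
        -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
        --tmpfs /run --tmpfs /run/lock \
        saemref-formula:centos7-provisioned /sbin/init

    which is precisely what cannot happen during the docker build phase, hence the pain mentioned above.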

    There are also a lot of fairly new tools for developing and testing infrastructure code that you could include in your stack, such as test-kitchen, serverspec, and goss. Choose your weapon and go test your infrastructure code.