
Logilab.org - en

News from Logilab and our Free Software projects, as well as on topics dear to our hearts (Python, Debian, Linux, the semantic web, scientific computing...)

  • Our work for the OpenDreamKit project during the 77th Sage days

    2016/04/18 by Florent Cayré

    Logilab is part of OpenDreamKit, a Horizon 2020 European Research Infrastructure project that will run until 2019 and provides substantial funding to the open source computational mathematics ecosystem.


    One of the goals of this project is to improve the packaging and documentation of SageMath, the open source alternative to Maple and Mathematica.

    The core developers of SageMath organised the 77th Sage days last week, and Logilab took part, with David Douard, Julien Cristau and me, Florent Cayré.

    David and Julien have been working on packaging SageMath for Debian. This is a huge task (several man-months of work), split into two sub-tasks for now:

    • building SageMath with Debian-packaged versions of its dependencies, if available;
    • packaging some of the missing dependencies, starting with the most expected ones, like the latest releases of Jupyter and IPython.

    As a first result, several packages have already been pushed into Debian experimental.

    There is still a lot of work to be done, and packaging the notebook is the next task on the list.

    One hiccup along the way was a Python crash involving multiple inheritance from Cython extension classes. Having people nearby who knew the SageMath codebase well (or even wrote the relevant parts) was invaluable for debugging, and allowed us to blame a recent CPython change.

    Julien also gave a hand to Florent Hivert and Robert Lehmann, who were trying to understand why building SageMath's documentation needed so much memory.

    As far as I am concerned, I made a prototype of structured HTML documentation produced with Sphinx and containing executable Python code run on https://tmpnb.org/, thanks to the Thebe JavaScript library, which interfaces statically delivered HTML pages with a Jupyter notebook server.
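
    For the record, the HTML-side wiring for this kind of setup is small. The sketch below is hypothetical (the script path and the Thebe option names are assumptions; check the Thebe documentation): code blocks are marked as executable and Thebe is pointed at a notebook server.

    <!-- hypothetical sketch: script path and option names are assumptions -->
    <script src="thebe/main-built.js"></script>
    <pre data-executable="true">print("hello from a static page")</pre>
    <script>
      // wire the marked code blocks to a Jupyter server spawned via tmpnb
      new Thebe({selector: "pre[data-executable]", url: "https://tmpnb.org/"});
    </script>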

    The Sage days were an excellent opportunity to work efficiently on the technical tasks with skilled and enthusiastic people. We would like to thank the OpenDreamKit core team for the organization and their hard work. We look forward to the next workshop.


  • 3D Visualization of simulation data with x3dom

    2016/02/16 by Yuanxiang Wang

    X3DOM Plugins

    As part of the Open Dream Kit project, we are working at Logilab on the creation of tools for mesh data visualization and analysis in a web application. Our goal was to create widgets to use in the Jupyter notebook (formerly IPython) for 3D visualization and analysis.

    We found two interesting technologies for 3D rendering: ThreeJS and X3DOM. ThreeJS is a large JavaScript 3D library and X3DOM an HTML5 framework for 3D. After working with both, we chose to use X3DOM because of its high-level architecture. With X3DOM, the 3D model is defined in the DOM in HTML5, so the parameters of the nodes can be changed easily with the setAttribute DOM function. This makes the creation of user interfaces and widgets much easier.
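
    As a minimal sketch of this DOM-centric approach (not our actual widget code, and assuming x3dom.js and x3dom.css are loaded in the page), a scene is plain HTML and its parameters are changed with standard DOM calls:

    <x3d width="400px" height="300px">
      <scene>
        <shape>
          <appearance>
            <material id="mat" diffuseColor="0 0 1"></material>
          </appearance>
          <sphere></sphere>
        </shape>
      </scene>
    </x3d>
    <script>
      // turning the sphere red is a plain DOM operation
      document.getElementById("mat").setAttribute("diffuseColor", "1 0 0");
    </script>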

    We worked to create new DOM nodes that integrate nicely in a standard X3DOM tree, namely IsoColor, Threshold and ClipPlane.

    We had two goals in mind:

    1. create an X3DOM plugin API that allows one to create new DOM nodes which extend X3DOM functionality;
    2. keep a simple X3DOM-like interface for the final users.

    Example of the plugins Threshold and IsoColor:

    [screenshot: the Threshold and IsoColor plugins applied to a mesh]

    The Threshold and IsoColor nodes work like any X3DOM node and react to attribute changes performed with the setAttribute method. This makes it easy to use HTML widgets like sliders and buttons to drive the plugin's parameters, as in the hypothetical snippet below.
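
    For instance, a slider could drive a threshold node like this (the element id and the upperBound attribute name are assumptions, for illustration only):

    <!-- assumes a threshold-like plugin node with id="th" elsewhere in the scene -->
    <input type="range" min="0" max="1" step="0.01" value="0.5"
           oninput="document.getElementById('th').setAttribute('upperBound', this.value)">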

    X3DOM API

    The goal is to create custom nodes that affect the rendering based on data (positions, pressure, temperature...). The idea is to manipulate the shaders, since this gives low-level control over the 3D rendering. Shaders give more freedom and efficiency than reusing other X3DOM nodes. (Reminder: shaders are GLSL programs used to work with the GPU.)

    X3DOM has a native node that allows users to write shaders: the ComposedShader node. The problem with this node is that it overwrites the shaders generated by X3DOM. For example, nodes like ClipPlane are disabled when a ComposedShader node is in the DOM. Another example is image texturing: the computation of the color from the texture coordinates must then be written within the ComposedShader.
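
    For reference, using ComposedShader looks roughly like the sketch below; since both shader stages are supplied verbatim, everything X3DOM would normally generate (clipping, texturing...) is lost:

    <appearance>
      <composedShader>
        <shaderPart type="VERTEX">
          attribute vec3 position;
          uniform mat4 modelViewProjectionMatrix;
          void main() {
            gl_Position = modelViewProjectionMatrix * vec4(position, 1.0);
          }
        </shaderPart>
        <shaderPart type="FRAGMENT">
          #ifdef GL_ES
          precision highp float;
          #endif
          void main() {
            // fixed color: no texturing, no clip planes
            gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
          }
        </shaderPart>
      </composedShader>
    </appearance>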

    In order to add shader parts to the generated shader without overwriting it, I created a new node: CustomAttributeNode. This is a generic node to add uniforms, varyings and shader parts into X3DOM. The geometry data (attributes) are set using the X3DOM node named FloatVertexAttribute.

    Example of CustomAttributeNode to create a threshold node:

    [code screenshot: a CustomAttributeNode declaring a threshold]
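
    Since the example above was published as a screenshot, here is a hypothetical reconstruction (the field and attribute names are assumptions, not the actual API); the per-vertex data is carried by the native FloatVertexAttribute node:

    <shape>
      <appearance>
        <!-- hypothetical plugin node: injects a uniform and a fragment
             shader snippet into the shaders generated by X3DOM -->
        <customAttributeNode id="th">
          <uniform name="threshold" type="SFFloat" value="0.5"></uniform>
          <shaderPart type="FRAGMENT">
            if (v_pressure > threshold) discard;
          </shaderPart>
        </customAttributeNode>
      </appearance>
      <indexedFaceSet coordIndex="0 1 2 -1">
        <coordinate point="0 0 0, 1 0 0, 0 1 0"></coordinate>
        <!-- per-vertex data (here: pressure) set with FloatVertexAttribute -->
        <floatVertexAttribute name="pressure" numComponents="1"
                              value="0.1 0.5 0.9"></floatVertexAttribute>
      </indexedFaceSet>
    </shape>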

    The CustomAttributeNode is the entry point in X3DOM for the JavaScript API.

    JavaScript API

    The idea of the API is to create a new node inherited from CustomAttributeNode. We wrote some functions to make the implementation of such nodes easier.

    Ideas for future improvement

    There are still some points that need improvement:

    • Create a tree widget using the grouping nodes in X3DOM
    • Add high-level functions to X3DGeometricPropertyNode to set the values. For instance, the IsoColor node is only a node that sets the values of the TextureCoordinate node from the FloatVertexAttribute node.
    • Add a high-level function to return the variable names needed to overwrite basic attributes like positions in a Geometry. With my API, the IsoColor node uses a varying defined in X3DOM to overwrite the values of the texture coordinates. Because there is no documentation, it is hard for users to find the varying names. Moreover, since there is no specification of these varying names, the mapping might need to be maintained over time.
    • Maybe the CustomAttributeNode should be an X3DChildNode instead of an X3DGeometricPropertyNode.

    [diagram: proposed node hierarchy with CustomAttributeNode as an X3DChildNode]

    This structure might allow the "use" attribute in X3DOM. That way, X3DOM would avoid data duplication and writing too much HTML. The following code illustrates what I expect.

    [code screenshot: expected DEF/USE reuse of a CustomAttributeNode]
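
    Reconstructed as a hypothetical sketch, the expectation is the standard X3D DEF/USE mechanism: declare the node (and its data) once, then reference it from several places:

    <!-- declare once... -->
    <customAttributeNode DEF="pressureData">
      <uniform name="threshold" type="SFFloat" value="0.5"></uniform>
    </customAttributeNode>
    <!-- ...reuse elsewhere without duplicating the data -->
    <customAttributeNode USE="pressureData"></customAttributeNode>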


  • We went to cfgmgmtcamp 2016 (after FOSDEM)

    2016/02/09 by Arthur Lutz

    Following a day at FOSDEM (see our other post about it), we spent two days at cfgmgmtcamp in Gent. At cfgmgmtcamp, we obviously spent some time in the Salt track, since it's our tool of choice, as you might have noticed. But checking out how some of the other tools and communities are finding solutions to similar problems is also great.


    I presented Roll out active Supervision with Salt, Graphite and Grafana (mirrored on slideshare), you can find the code on bitbucket.


    We saw:

    Day 1

    • Mark Shuttleworth from Canonical presenting Juju, its ecosystem, and software modelling. MAAS (Metal As A Service) was demoed on the nice "OrangeBox". It promises to spin up an OpenStack infrastructure in 15 minutes. One of the interesting things with charms and bundles of charms is the interfaces that need to be established between different service bricks. In the Salt community we have salt-formulas, but they lack maturity in the sense that there's no possibility to plug in multiple formulas that interact with each other... yet.
    [screenshot: Juju deploying OpenStack]
    • Mitch Michell from HashiCorp presented Vault. Vault stores your secrets (certificates, passwords, etc.) and we will probably be trying it out in the near future. A lot of concepts in Vault are really well thought out and resonate with some of the things we want to do and automate in our infrastructure. The use of the Shamir Secret Sharing technique (also used by the Debian infrastructure team) for the N-man challenge to unvault the secrets is quite nice. David is already looking into automating it with Salt and adding GSSAPI (Kerberos) authentication.

    Day 2

    • Gareth Rushgrove from PuppetLabs talked about the importance of metadata in Docker images and containers, explaining how much metadata benefits tools like dpkg and rpm, and arguing that the container community should take inspiration from the skills and experience built up by these package management communities (think of all the language-specific package managers that each reinvent the wheel one after the other).
    • Testing Immutable Infrastructure: we found some inspiration in test-kitchen and in running the tests inside a Docker container instead of a Vagrant virtual machine. We'll have to take a look at the SaltStack provisioner for test-kitchen. We already do some of that in Docker and OpenStack using salt-cloud. But maybe we can take it further with such tools (or with testinfra, whose author will be joining Logilab next month).
    [photo: CoreOS, rkt, Kubernetes]
    • How CoreOS is built, modified, and updated: From repo sync to Omaha, by Brian "RedBeard" Harrington. An interesting presentation of the CoreOS system. Brian also revealed that CoreOS is now capable of using the TPM to enforce a signed OS, and signed containers as well. Official CoreOS images shipped through Omaha are now signed with a root key that can be installed in the TPM of the host (i.e. they didn't use a pre-installed Microsoft key), along with a modified TPM-aware version of GRUB. For now, the Omaha platform is not open source, so it may not be that easy to build one's own CoreOS images signed with a personal root key, but it is theoretically possible. Brian also said that he expects their Omaha server implementation to become open source some day.
    • The use of Salt in Foreman was presented and demoed by Stephen Benjamin. We'll have to retry using that tool with the newest features of the smart proxy.
    • Jonathan Boulle from CoreOS presented "rkt and Kubernetes: What's new with Container Runtimes and Orchestration". In this last talk, Jonathan gave a tour of the rkt project and how it is used, coupled with Kubernetes, to build a comprehensive, secure container-running infrastructure (which uses SaltStack!). He named the result "rktnetes". The idea is to use rkt as the container runtime of the kubelet (the primary node agent) in a Kubernetes cluster powered by CoreOS. Along with the new CoreOS support for a TPM-based trust chain, it makes it possible to ensure completely secured execution, from the bootloader to the container! The possibility to run fully secured containers is one of the reasons why CoreOS developed the rkt project.

    We would like to thank the cfgmgmtcamp organisation team: it was a great conference and we highly recommend it. Thanks for the speakers' event the night before the conference and for the social event on Monday evening (and thanks for the chocolate!).


  • We went to FOSDEM 2016 (and cfgmgmtcamp)

    2016/02/09 by Arthur Lutz

    David and I went to FOSDEM and cfgmgmtcamp this year to attend some talks, give two presentations, and discuss with members of the open source communities we contribute to.


    At FOSDEM, we started early with a presentation at 9.00 am in the "Configuration Management devroom", which, to our surprise, was a large room that was almost full.

    I presented "Once you've configured your infrastructure using salt, monitor it by re-using that definition". (mirrored on slideshare. The main part was a demo, the code being published on bitbucket.


    The presentation was streamed live (I came across someone who watched it on the Internet to "sleep in") and should be available to watch once it has been encoded on http://video.fosdem.org/.

    [photo: FOSDEM video box]

    We then saw the following talks:

    • Unified Framework for Big Data Foreign Data Wrappers (FDW) by Shivram Mani in the Postgresql Track
    • Mainflux Open Source IoT Cloud
    • EzBench, a tool to help you benchmark and bisect the Graphics Stack's performance
    • The RTC components in the debian infrastructure
    • CoreOS: A Linux distribution designed for application containers that scale
    • Using PostgreSQL for Bibliographic Data (since we've worked on http://data.bnf.fr/ with http://cubicweb.org/ and PostgreSQL)
    • The FOSDEM infrastructure review

    Congratulations to all the FOSDEM organisers, volunteers and speakers. We will hopefully be back for more.

    We then took the train to Gent where we spent two days learning and sharing about Configuration Management Systems and all the ecosystem around it (orchestration, containers, clouds, testing, etc.).

    More on our cfgmgmtcamp experience in another blog post.

    Photos under Creative Commons CC-BY, by Ludovic Hirlimann and Deborah Bryant.


  • DebConf15 wrap-up

    2015/08/25 by Julien Cristau
    [photo: Heidelberg panorama]

    I just came back from two weeks in Heidelberg for DebCamp15 and DebConf15.

    In the first week, besides helping out DebConf's infrastructure team with network setup, I tried to make some progress on the library transitions triggered by libstdc++6's C++11 changes. At first, I spent many hours going through header files for a bunch of libraries, trying to figure out if the public API involved std::string or std::list. It turns out that this is time-consuming, error-prone, and pretty efficient at making me lose the will to live. So I ended up stealing a script from Steve Langasek to automatically rename library packages for this transition. This resulted in 29 non-maintainer uploads to the NEW queue, quickly processed by the FTP team. Sadly the transition is not quite there yet, as making progress with the initial set of packages reveals more libraries that need renaming.

    Building on some earlier work from Laurent Bigonville, I've also moved the setuid root Xorg wrapper from the xserver-xorg package to xserver-xorg-legacy, which is now in experimental. Hopefully that will make its way to sid and stretch soon (need to figure out what to do with non-KMS drivers first).

    Finally, with the help of the security team, the security tracker was moved to a new VM that will hopefully not eat its root filesystem every week as the old one was doing the last few months. Of course, the evening we chose to do this was the night DebConf15's network was being overhauled, which made things more interesting.

    DebConf itself was the opportunity to meet a lot of people. I was particularly happy to meet Andreas Boll, who has been a member of pkg-xorg for two years now, working on our mesa package, among other things. I didn't get to see a lot of talks (too many other things going on), but did enjoy Enrico's stand-up comedy, the CitizenFour screening, and Jacob Appelbaum's keynote. Thankfully, for the rest, the video team has done a great job as usual.

    Note

    Above picture is by Aigars Mahinovs, licensed under CC-BY 2.0


  • Going to DebConf15

    2015/08/11 by Julien Cristau

    On Sunday I travelled to Heidelberg, Germany, to attend the 16th annual Debian developers' conference, DebConf15.

    The conference itself is not until next week, but this week is DebCamp, a hacking session. I've already met a few of my DSA colleagues, who've been working on setting up the network infrastructure. My other plans for this week involve helping the Big Transition of 2015 along, and trying to remove the setuid bit from /usr/bin/X in the default Debian install (bug #748203 in particular).

    As for next week, there's a rich schedule from which I'll need to pick a few things to go see.


  • Experiments on building a Jenkins CI service with Salt

    2015/06/17 by Denis Laxalde

    In this blog post, I'll talk about my recent experiments on building a continuous integration service with Jenkins that is, as much as possible, managed through Salt. We've been relying on a Jenkins platform for quite some time at Logilab (Tolosa team). The service was mostly managed by me, with sporadic help from other team-mates, but I was never entirely satisfied with the way it was managed: it involved a lot of boilerplate configuration through the Jenkins user interface, which neither scales well nor makes long-term maintenance easy.

    So recently I took the plunge and decided to move to a Salt-based configuration and management of our Jenkins CI platform. There are actually two aspects here. The first concerns the setup of Jenkins itself (this includes installation, security configuration and plugins management, amongst other things). The second concerns the management of client projects (or jobs, in Jenkins jargon). For this second aspect, one of the design goals was to enable easy configuration of jobs by users not necessarily familiar with the Jenkins setup, and to make collaborative maintenance easy. To tackle these two aspects I've essentially been using (or developing) two distinct Salt formulas, which I'll detail hereafter.


    Core setup: the jenkins formula

    The core setup of Jenkins is based on an existing Salt formula, the jenkins-formula which I extended a bit to support map.jinja and which was further improved to support installation of plugins by Yann and Laura (see 3b524d4).

    With that, deploying a Jenkins server is as simple as adding the following to your states and pillars top.sls files:

    base:
      "jenkins":
        - jenkins
        - jenkins.plugins
    

    Base pillar configuration is used to declare anything that differs from the default Jenkins settings in a jenkins section, e.g.:

    jenkins:
      lookup:
        - home: /opt/jenkins
    

    Plugins configuration is declared in the plugins subsection as follows:

    jenkins:
      lookup:
        plugins:
          scm-api:
            url: 'http://updates.jenkins-ci.org/download/plugins/scm-api/0.2/scm-api.hpi'
            hash: 'md5=9574c07bf6bfd02a57b451145c870f0e'
          mercurial:
            url: 'http://updates.jenkins-ci.org/download/plugins/mercurial/1.54/mercurial.hpi'
            hash: 'md5=1b46e2732be31b078001bcc548149fe5'
    

    (Note that plugin dependencies are not handled by Jenkins when installing from the command line, nor by this formula. So in the preceding example, just having an entry for the Mercurial plugin would not have been enough, because this plugin depends on scm-api.)

    Other aspects (such as security setup) are not handled yet (neither by the original formula nor by our extension), but I tend to believe that it is acceptable to manage these "by hand" for now.

    Jobs management: the jenkins_jobs formula

    For this task, I leveraged the excellent jenkins-job-builder tool, which makes it possible to configure jobs using a declarative YAML syntax. The tool takes care of installing the job and also handles any housekeeping tasks such as checking configuration validity or deleting old configurations. With this tool, my goal was to let end users of the Jenkins service add their own project by providing, at a minimum, a YAML job description file. So for instance, a simple job description for a CubicWeb job could be:

    - scm:
        name: cubicweb
        scm:
          - hg:
             url: http://hg.logilab.org/review/cubicweb
             clean: true
    
    - job:
        name: cubicweb
        display-name: CubicWeb
        scm:
          - cubicweb
        builders:
          - shell: "find . -name 'tmpdb*' -delete"
          - shell: "tox --hashseed noset"
        publishers:
          - email:
              recipients: cubicweb@lists.cubicweb.org
    

    It consists of two parts:

    • the scm section declares, well, SCM information, here the location of the review Mercurial repository, and,

    • a job section which consists of some metadata (project name), a reference of the SCM section declared above, some builders (here simple shell builders) and a publisher part to send results by email.

    Pretty simple. (Note that most of the test-running configuration is declared within the source repository, via tox (another story), so that the CI bot holds minimal knowledge and fetches information from the source repository directly.)

    To automate the deployment of this kind of configuration, I made a jenkins_jobs-formula which takes care of:

    1. installing jenkins-job-builder,
    2. deploying YAML configurations,
    3. running jenkins-jobs update to push jobs into the Jenkins instance.

    In addition to installing the YAML file and triggering a jenkins-jobs update run upon changes of job files, the formula allows a job to list the distribution packages that it requires for building.

    Wrapping things up, a pillar declaration of a Jenkins job looks like:

    jenkins_jobs:
      lookup:
        jobs:
          cubicweb:
            file: <path to local cubicweb.yaml>
            pkgs:
              - mercurial
              - python-dev
              - libgecode-dev
    

    where the file entry indicates the source of the YAML file to install and pkgs lists build dependencies that are not managed by the job itself (typically non-Python packages in our case).

    So, as an end user, all one needs to provide is the YAML file and a pillar snippet similar to the above.

    Outlook

    This initial setup appears to be enough to greatly reduce the burden of managing a Jenkins server and to allow individual users to contribute jobs for their project through simple contributions to a Salt configuration.

    Later on, there are a few things I'd like to extend on the jenkins_jobs-formula side, most notably the handling of remote sources for the YAML configuration file (as well as, maybe, the packages list). I'd also like to experiment with configuring slaves for the Jenkins server, possibly relying on Docker (taking advantage of another of my experiments...).


  • Running a local salt-master to orchestrate docker containers

    2015/05/20 by David Douard

    In a recent blog post, Denis explained how to build Docker containers using Salt.

    What's missing there is how to have a running salt-master dedicated to Docker containers.

    There is no need for the salt-master to run as root for this. A test config of mine looks like:

    david@perseus:~$ mkdir -p salt/etc/salt
    david@perseus:~$ cd salt
    david@perseus:~salt/$ cat << EOF >etc/salt/master
    interface: 192.168.127.1
    user: david
    
    root_dir: /home/david/salt/
    pidfile: var/run/salt-master.pid
    pki_dir: etc/salt/pki/master
    cachedir: var/cache/salt/master
    sock_dir: var/run/salt/master
    
    file_roots:
      base:
        - /home/david/salt/states
        - /home/david/salt/formulas/cubicweb
    
    pillar_roots:
      base:
        - /home/david/salt/pillar
    EOF
    

    Here, 192.168.127.1 is the IP of my docker0 bridge. Also note that the paths in the file_roots and pillar_roots configuration entries must be absolute (they are not relative to root_dir; see the salt-master configuration documentation).

    Now we can start a salt-master that will be accessible to Docker containers:

    david@perseus:~salt/$ /usr/bin/salt-master -c etc/salt
    

    Warning

    With salt 2015.5.0, salt-master really wants to execute dmidecode, so add /usr/sbin to the $PATH variable before running salt-master as a non-root user.

    From there, you can talk to your test salt master by adding the -c ~/salt/etc/salt option to all salt commands. Fortunately, you can also set the SALT_CONFIG_DIR environment variable:

    david@perseus:~salt/$ export SALT_CONFIG_DIR=~/salt/etc/salt
    david@perseus:~salt/$ salt-key
    Accepted Keys:
    Denied Keys:
    Unaccepted Keys:
    Rejected Keys:
    

    Now you need a Docker image with salt-minion already installed, as explained in Denis' blog post. (I prefer using supervisord as PID 1 in my Docker containers, but that's not important here.)
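
    For reference, such an image can be built with a Dockerfile along these lines; this is a sketch, and the supervisor configuration file and its path are assumptions:

    FROM debian:jessie
    RUN apt-get update && apt-get install -y salt-minion supervisor
    # a supervisord program section that starts /usr/bin/salt-minion
    COPY salt-minion.conf /etc/supervisor/conf.d/salt-minion.conf
    # run supervisord in the foreground, as PID 1
    CMD ["/usr/bin/supervisord", "-n"]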

    david@perseus:~salt/$ docker run -d --add-host salt:192.168.127.1  logilab/salted_debian:wheezy
    53bf7d8db53001557e9ae25f5141cd9f2caf7ad6bcb7c2e3442fcdbb1caf5144
    david@perseus:~salt/$ docker run -d --name jessie1 --hostname jessie1 --add-host salt:192.168.127.1  logilab/salted_debian:jessie
    3da874e58028ff6dcaf3999b29e2563e1bc4d6b1b7f2f0b166f9a8faffc8aa47
    david@perseus:~salt/$ salt-key
    Accepted Keys:
    Denied Keys:
    Unaccepted Keys:
    53bf7d8db530
    jessie1
    Rejected Keys:
    david@perseus:~/salt$ salt-key -y -a 53bf7d8db530
    The following keys are going to be accepted:
    Unaccepted Keys:
    53bf7d8db530
    Key for minion 53bf7d8db530 accepted.
    david@perseus:~/salt$ salt-key -y -a jessie1
    The following keys are going to be accepted:
    Unaccepted Keys:
    jessie1
    Key for minion jessie1 accepted.
    david@perseus:~/salt$ salt '*' test.ping
    jessie1:
        True
    53bf7d8db530:
        True
    

    You can now build Docker images as explained by Denis, or test your sls config files in containers.


  • Mini-Debconf Lyon 2015

    2015/04/29 by Julien Cristau

    A couple of weeks ago I attended the mini-DebConf organized by Debian France in Lyon.

    It was a really nice weekend, and the first time a French mini-DebConf wasn't in Paris :)

    Among the highlights, Juliette Belin reported on her experience as a new contributor to Debian: she authored the awesome "Lines" theme which was selected as the default theme for Debian 8.


    As a non-developer and newcomer to the free software community, she had quite interesting insights and ideas about areas where development processes need to improve.

    And Raphael Geissert reported on the new httpredir.debian.org service (previously http.debian.net), an HTTP redirector that automagically picks the closest Debian archive mirror. So long, manual sources.list updates on laptops whenever travelling!


    Finally the mini-DebConf was a nice opportunity to celebrate the release of Debian 8, two weeks in advance.

    Now it's time to go and upgrade all our infrastructure to jessie.


  • Building Docker containers using Salt

    2015/04/07 by Denis Laxalde

    In this blog post, I'll talk about a way to use Salt to automate the build and configuration of Docker containers. I will not consider the deployment of Docker containers with Salt as this subject is already covered elsewhere (here for instance). The emphasis here is really on building (or configuring) a container for future deployment.

    Motivation

    Salt is a remote execution framework that can be used for configuration management. It's already widely used at Logilab to manage our infrastructure as well as on a semi-daily basis during our application development activities.

    Docker is a tool that helps automate the deployment of applications within Linux containers. It essentially provides a convenient abstraction and a set of utilities for system-level virtualization on Linux. Amongst other things, Docker provides container build helpers around the concept of Dockerfile.

    So the first question is: why would you use Salt to build Docker containers when you already have the Dockerfile build tool? My first motivation is to get around the limitations of the declarations one can insert in a Dockerfile. First limitation: you can only execute instructions sequentially in a Dockerfile; there is no way to declare dependencies between instructions, or even to make an instruction conditional (apart from using the underlying shell's conditional machinery, of course). Then, you have only limited possibilities for specializing a Dockerfile. Finally, it's not so easy to apply a configuration step by step, for instance during the development of said configuration.

    That's enough for an introduction to lay down the underlying motivation of this post. Let's move on to more practical things!

    A Dockerfile for the base image

    Before jumping into the usage of Salt for the configuration of a Docker image, the first thing you need to do is to turn a Docker container into a proper Salt minion.

    Assuming we're building on top of a base image of Debian flavour, subsequently referred to as <debian> (I won't tell you where it comes from, since you ought to build your own base image -- or find some friend you trust to provide you with one!), the following Dockerfile can be used to initialize a working image which will serve as the starting point for further configuration with Salt:

    FROM <debian>
    RUN apt-get update
    RUN apt-get install -y salt-minion
    

    Then run docker build -t docker_salt/debian_salt_minion . and you're done.

    Plugging the minion container into the Salt master

    The next thing to do with our fresh Debian+salt-minion image is to turn it into a container running salt-minion, waiting for the Salt master to instruct it.

    docker run --add-host=salt:10.1.1.1 --hostname docker_minion \
        --name minion_container \
        docker_salt/debian_salt_minion salt-minion
    

    Here:

    • --hostname is used to specify the network name of the container, for easier query by the Salt master,
    • 10.1.1.1 is usually the IP address of the host, which in our example will serve as the Salt master,
    • --name is just used for easier book-keeping.

    Finally,

    salt-key -a docker_minion
    

    will register the new minion's key into the master's keyring.

    If all went well, the following command should succeed:

    salt 'docker_minion' test.ping
    

    Configuring the container with a Salt formula

    Once the minion responds, the container can be configured like any other minion, by applying a specific formula or the full highstate:

    salt 'docker_minion' state.sls some_formula
    salt 'docker_minion' state.highstate
    
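    To make this concrete, some_formula can be any regular Salt state tree; a minimal, hypothetical example (the package and file names are illustrative) could be:

    # some_formula/init.sls (hypothetical)
    nginx:
      pkg.installed

    /etc/nginx/sites-enabled/default:
      file.managed:
        - source: salt://some_formula/files/default
        - require:
          - pkg: nginx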

    Final steps: save the configured image and build a runnable image

    (Optional step: clean up the salt-minion installation; see the sketch below.)
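
    For instance, a hypothetical one-liner run from the host, while the container is still running and before committing:

    docker exec minion_container apt-get purge -y salt-minion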

    Make a snapshot image of your configured container.

    docker stop minion_container
    docker commit -m 'Install something with Salt' \
        minion_container me/something
    

    Try out your new image:

    docker run -p 8080:80 me/something <entry point>
    

    where <entry point> will be the main program driving the service provided by the container (typically defined through the Salt formula).

    Make a fully configured image for your service:

    FROM me/something
    [...anything else you need, such as EXPOSE, etc...]
    CMD <entry point>
    
