Blog entries

Semantic Roundup - notes gathered at NextMediaCamp

2008/11/12 by Arthur Lutz
http://barcamp.org/f/NMC.PNG

For lack of time, here are the raw notes gathered last Thursday evening at the NextMediaBarCamp:

A BarCamp is rather fun, with a few too many dynamic young executives in ties for my taste, but never mind. Among the three mini-conferences I attended, one was about the semantic web. It was led in part by Fabrice Epelboin, who writes for the French edition of ReadWriteWeb, and I learned a few things. To sum up what I understood, there are two approaches to the semantic web:

  1. either take the existing content and turn it into machine-readable content with heavy-duty algorithms, as opencalais does (top-down)
  2. or write semantic web content ourselves with microformats, so that machines will gradually read more and more of the web (bottom-up)
http://microformats.org/wordpress/wp-content/themes/microformats/img/logo.gif

In the second case, the challenge is to make life easier for web authors so that they can easily publish semantic web content. The following tools were mentioned for that purpose: Zemanta and Glue (from the company AdaptiveBlue).

All this made me think that, while CubicWeb already publishes microformats, it still lacks an editing interface up to the task. For example, when you type an article and the application has the contact details of a person mentioned in it, it would automagically create a relation to that element. Worth digging into...

As for the other talks and the future of the media, according to the people present it looks rather grim: advertising-driven media, custom-made for the happiness of the individual, where money goes into content aggregators rather than into journalists in the field. For those interested, at this BarCamp I also discovered a short "funny" film about this worrying topic.


DBpedia 3.2 released

2008/11/19 by Nicolas Chauvat
http://wiki.dbpedia.org/images/dbpedia_logo.png

For those interested in the Semantic Web as much as we are at Logilab, the announcement of the new DBpedia release is very good news. Version 3.2 is extracted from the October 2008 Wikipedia dumps and provides three major improvements: the DBpedia Schema, a restricted vocabulary extracted from the Wikipedia infoboxes; RDF links from DBpedia to Freebase, the open-license database describing about a million things from various domains; and cleaner abstracts without the traces of Wikipedia markup that made them difficult to reuse.

DBpedia can be downloaded, queried with SPARQL or linked to via the Linked Data interface. See the about page for details.
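
For readers who have never used a SPARQL endpoint, here is a minimal sketch of querying DBpedia from Python. It assumes the SPARQLWrapper package, which is not mentioned in this post; the resource URI and query are purely illustrative.

# Minimal sketch: querying the DBpedia SPARQL endpoint from Python
# (assumes the SPARQLWrapper package is installed; the query is illustrative).
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label WHERE {
        <http://dbpedia.org/resource/Paris> rdfs:label ?label .
        FILTER (lang(?label) = "en")
    }
""")
sparql.setReturnFormat(JSON)
for binding in sparql.query().convert()["results"]["bindings"]:
    print(binding["label"]["value"])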

It is important to note that ontologies are usually meant as a common language for data exchange and broad re-use, which means that they cannot enforce too many restrictions. By contrast, database schemas are more restrictive and allow for more interesting inferences. For example, a database schema may enforce that the Publisher of a Document is a Person, whereas a more general ontology will have to allow a Publisher to be a Person or a Company.

DBpedia provides its schema and moves forward by adding a mapping from that schema to actual ontologies like UMBEL, OpenCyc and Yago. This enables DBpedia users to make inferences from facts fetched from different databases, like DBpedia + Freebase + OpenCyc. Moreover, 'checking' DBpedia's data against ontologies will help detect mistakes or oddities in Wikipedia's pages. For example, if data extracted from Wikipedia's infoboxes states that "Paris was_born_in New_York", reasoning and consistency checking tools will be able to point out that a person may be born in a city but a city cannot, hence the above fact is probably an error and should be reviewed.

With CubicWeb, one can easily define a schema specific to one's domain, quickly set up a web application, and easily publish the content of its database as RDF using a known ontology. In other words, CubicWeb makes almost no difference between a web application and a database accessible through the web.
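
To make the schema-versus-ontology point concrete, here is a rough sketch of what such a restrictive schema could look like with yams, CubicWeb's schema language. The entity and relation names are illustrative, not taken from an actual cube.

# Sketch of a domain schema (illustrative names) with yams, CubicWeb's schema language.
from yams.buildobjs import EntityType, String, SubjectRelation

class Person(EntityType):
    name = String(required=True, maxsize=128)

class Company(EntityType):
    name = String(required=True, maxsize=128)

class Document(EntityType):
    title = String(required=True)
    # the database schema can be stricter than a general-purpose ontology:
    # here a Document is published by exactly one Person or Company
    publisher = SubjectRelation(('Person', 'Company'), cardinality='1*')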


Release of CubicWeb 3.0

2009/01/05 by Nicolas Chauvat
http://www.cubicweb.org/index-cubicweb.png

As some readers of this blog may be aware, Logilab has been developing its own framework since 2001. It evolved over the years, trying to reach the main goal (managing and publishing data with style) and to incorporate the good ideas seen in other Python frameworks Logilab developers had used. Now, companies other than Logilab have started providing services for this framework and it is stable enough for the core team to be confident in recommending it to third parties willing to build on it without suffering from the tasmanian devil syndrome.

CubicWeb version 3.0 was released on the last day of 2008. That's 7 years of research and development and (at least) three rewrites that were needed to get this in shape. Enjoy it at http://www.cubicweb.org/ !


French-speaking meetup on software forges

2009/01/19
http://farm1.static.flickr.com/25/61448490_5f9a023307_m.jpg

Logilab will attend the French-speaking meetup on software forges on Wednesday, January 21. The talks and discussions will cover:

  • data exchange between forges, interoperability
  • definition of an open integration model
  • multi-forge search
  • permission management and identity sharing
  • interaction between the forge and the client workstation

Logilab hopes to benefit from everyone's experience and to gather ideas for future evolutions, in order to eventually integrate them into its CubicWeb-based forge project.

All the information about the event is available here.

photo by StormyDog under a Creative Commons license.


Apycot big version change

2009/01/26 by Arthur Lutz

The version convention we use is pretty straightforward and standard: a version is composed of three numbers separated by dots. What are the rules for incrementing each of these numbers?

  • The last number is incremented when bugs are fixed
  • The middle number is incremented when stories (functionalities) are added to the software
  • The first number is incremented when we have a major change of technology

Well... if you've been paying attention, apycot just turned 1.0.0: the major change of technology is that it is now integrated with CubicWeb (instead of just generating HTML files). So for a project in your forge, you describe its apycot configuration, and the quality assurance tests are launched on a regular basis. We're still in the process of stabilizing it (the latest release right now is 1.0.5), but it already runs on the CubicWeb projects; see the screenshot below:

http://www.logilab.org/image/7682?vid=download

You should also know that apycot now has two components: apycotbot, which runs the tests, and cubicweb-apycot, which displays the results in CubicWeb (download cubicweb-apycot-1.0.5.tar.gz and apycotbot-1.0.5.tar.gz).


Summary of the French-speaking meetup on software forges, January 21, 2009

2009/01/27

Definition of a forge

Let me start with a fairly complete definition of a software forge.

"A forge, or software project hosting platform, is a set of tools bringing together collaborative work and software engineering technologies to enable the coordinated development of software by a team. The basic services of such a platform revolve around file sharing (source code, data and executables) and group coordination. They support collaborative writing and programming, and ease communication within the group through project-related tools such as mailing list managers, task tracking software and bug report management. Using such a platform improves the quality of the deliverables by speeding up exchanges between developers and software release cycles, while making it easier to involve users in detecting errors or highlighting the relevant features of the software."

Taken from http://overcrowded.anoptique.org/ForgesEtatArt (Art Libre license, original author Benoît Sibaud, distributable under LAL, CC By, CC By-SA and GFDL).

See:

Identified problems

image by http://flickr.com/photos/fastjack/ under creative commons

Here is a list of a few problems that seemed to me to emerge from the day's discussions.

  • numerous forks of the GForge project, with little collaboration between them
  • a technical scope often imposed on the debate, which hampers the functional definition of a forge
  • a strong hierarchy of responsibilities, with the 'developer' role limited in favour of the 'project manager' role
  • a strong web orientation with a strategy of integrating third-party products, which puts the homogeneity of the solutions in a delicate position because of the limitations of the HTTP protocol
  • the software design methodology is not brought up when choosing a solution (and there is no Agile approach in general).

Talks

ALM, software life cycle and client workstation

See:

http://gforge.org/themes/gforgegroup/images/logo.jpg
NovaForge project

A very mature forge, where an MDA approach was implemented to generate test cases from use cases described in UML.

Continuous integration

Several tools were mentioned. I put the list on the dedicated wiki page.

Semantic interoperability

xfmigration project

This project aims at migrating the content of a forge (BerliOS) to GForge without going through the database backend. The idea is to use different ontologies to map the concepts of the two forges.

The Cross-Forge Migration Tool is a concept for facilitating the migration of project metadata between forge platforms. It uses SWRL rules for the mapping and the OWL-S standard for exporting the mapping results.

See:

HELIOS project

The talk presented a use case of the semantic web for searching and sorting bug reports across separate forges. The example was fetching, then selecting, tickets related to the Konqueror project from Debian's bugzilla and from the project's own. This makes it possible to detect some duplicates or version inconsistencies.

http://www.cubicweb.org/index-cubicweb.png
CubicWeb forge project

I was able to present the CubicWeb framework and its component architecture. The talk was not planned initially, but I could give a demonstration based on Logilab's public forge.

Starting from a schema enriched by the installed components, it is possible to run RQL queries (similar to SPARQL) over entities and remote sources (databases, LDAP, subversion, ...). Logilab is currently working on adding ontologies to the manipulation of this schema.
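
As an illustration of what such queries look like from Python, here is a rough sketch using the CubicWeb client API of the time (the dbapi module); the instance name, credentials and query are invented for the example.

# Rough sketch (instance name, credentials and query are invented):
# running an RQL query against a CubicWeb instance with the dbapi client module.
from cubicweb import dbapi

cnx = dbapi.connect(database='forge', login='guest', password='guest')
cursor = cnx.cursor()
rset = cursor.execute('Any P, N WHERE P is Project, P name N')
for project_eid, name in rset:
    print(name)
cnx.close()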

Acknowledgements

I would like to thank the organizers of this 2009 edition of the meetup, as well as all the speakers, for fostering the exchange of points of view.


Fetching book descriptions and covers

2009/05/11 by Nicolas Chauvat
http://www.logilab.org/image/9139?vid=download

We recently added the book cube to our intranet in order for books available in our library to show up in the search results. Entering a couple of books using the default HTML form, even with the help of copy/paste from Google Books or Amazon, is boring enough to make one seek out other options.

As a Python and Debian user, I put the python-gdata package on my list of options, but quickly realized that the version in Debian is not current and that the books service is not yet accessible with the python gdata client. Both problems could be easily overcome since I could update Debian's version from 1.1.1 to the latest 1.3.1 and patch it with the book search support that will be included in the next release, but I went on exploring other options.

Amazon is the first answer that comes to mind when speaking of books on the net, and pyAWS looks like a nice wrapper around the Amazon Web Service. The quickstart example on the home page does almost exactly what I was looking for. Trying to find a Debian package of pyAWS, I only came across boto, which appears to be general purpose.

Registering with Amazon and Google to get a key and use their web services is doable, but one wonders why something as common as books and public libraries would have to be accessed through private companies. It turns out Wikipedia knows of many book catalogs on the net, but I was looking for a site publishing data as RDF or part of the Linked Open Data initiative. I ended up with almost exactly what I needed.

The Open Library features millions of books and covers, publicly accessible as JSON using its API. There is even a dump of the database. End of search, be happy.

The next step is to use this service to enhance the cubicweb-book cube by allowing a user to add a new book to their collection by simply entering an ISBN. All the data about the book can be fetched from the Open Library, including the cover and information about the author. You can expect such a new version soon... and we will probably get a new demo of CubicWeb online in the process, since all that data available as a dump is screaming for reuse, as others have already found out by making it available as RDF on AppEngine!
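
As a sketch of what that fetching step could look like (the exact integration into the cubicweb-book cube is not shown here, and the ISBN is just an example), the Open Library Books API can be queried like this:

# Sketch: fetching title, authors and cover URL for an ISBN from the
# Open Library Books API (the ISBN and field handling are illustrative).
import json
import urllib.request

isbn = '0451526538'
url = ('https://openlibrary.org/api/books?bibkeys=ISBN:%s'
       '&format=json&jscmd=data' % isbn)
with urllib.request.urlopen(url) as response:
    data = json.load(response)

book = data.get('ISBN:%s' % isbn, {})
print(book.get('title'))
print([author['name'] for author in book.get('authors', [])])
print(book.get('cover', {}).get('medium'))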


We are going to PyConFr 2009

2009/05/25 by Arthur Lutz

On May 30 and 31 (next Saturday and Sunday) we will be at the 2009 edition of PyConFr; we are a partner of the event and will talk about CubicWeb there. To be more precise, Nicolas Chauvat will present "CubicWeb pour publier DBpedia et OpenLibrary" (CubicWeb to publish DBpedia and OpenLibrary). He has already touched on these topics on this site: Fetching book descriptions and covers and DBpedia 3.2 released.

If you plan to attend, don't hesitate to come and say hello.

http://pycon.fr/images/logo_pyconfr_small.png

Semantic web technology conference 2009

2009/06/17 by Sandrine Ribeau

The Semantic Web Technology Conference takes place every year in San Jose, California, and is meant to be the world's symposium on the business of semantic technologies. Essentially, it is about semantic search, improving access to data, and making sense of structured, but mostly unstructured, content. Some exhibitors were more NLP oriented, focusing on concept extraction (such as SemanticV), while others concentrated on providing scalable storage (essentially RDF storage). Most of the solutions include a data aggregator/unifier that combines multi-source data into a single store from which ontologies can be defined, with an enhanced search engine on top. They concentrate on internal enterprise data rather than on using the Web as a resource. Those who built a web application on top of the data chose Flex as their framework (Metatomix).

Of all the exhibitors, the ones that caught my attention were the Anzo suite (an open source project), ORDI and the AllegroGraph RDF store.

Developed in Java by Cambridge Semantics, the Anzo suite, especially Anzo on the Web and the Anzo collaboration server, is the closest tool to CubicWeb: it provides a multi-source data server and an AJAX/HTML interface to develop semantic web applications and customize views of the data using a templating language. It is available as open source. The feature I found most interesting is an assistant that loads data into the application and then helps users define the data model based on that data. The internal representation of the content is totally transparent to the user; types, as well as relations, are inferred by the application.

I did not get a demo of ORDI, but it was mentioned to me as an open source equivalent of CubicWeb, which I am not so sure about after looking at their web site. It does data integration into RDF.

The AllegroGraph RDF store is a potential candidate for another source type in CubicWeb. It is already supported by the Jena and Sesame frameworks. They developed a Python client API to attract Pythonistas into the Java world.

They all agreed on one thing: SPARQL should be the standard query language. I also briefly heard about the Knowledge Interchange Format (KIF), which seems to be an interesting knowledge representation used for multilingual applications. If there was one buzzword to remember from the conference, I would choose ontology :)

EuroPython 2009

2009/07/06 by Nicolas Chauvat
http://www.logilab.org/image/9580?vid=download

Once again Logilab sponsored the EuroPython conference. We would like to thank the organization team (especially John Pinner and Laura Creighton) for their hard work. The Conservatoire is a very central location in Birmingham and walking around the city center and along the canals was nice. The website was helpful when preparing the trip and made it easy to find places where to eat and stay. The conference program was full of talks about interesting topics.

I presented CubicWeb and spent a large part of my talk explaining what the semantic web is and what features we need in the tools we will use to be part of that web of data. I insisted on the fact that CubicWeb is made of two parts, the web engine and the data repository, and that the repository can be used without the web engine. I demonstrated this with a TurboGears application that used the CubicWeb repository as its persistence layer. RQL in TurboGears! See my slides and Reinout Van Rees' write-up.

Christian Tismer took over the development of Psyco a few months ago. He said he recently removed some bugs that were show stoppers, including one that was generating way too many recompilations. His new version looks very promising. Performance improved, long numbers are supported, 64bit support may become possible, generators work... and Stackless is about to be rebuilt on top of Psyco! Psyco 2.0 should be out today.

I had a nice chat with Cosmin Basca about the Semantic Web. He suggested using Mako as a templating language for CubicWeb. Cosmin is doing his PhD at DERI and develops SurfRDF which is an Object-RDF mapper that wraps a SPARQL endpoint to provide "discoverable" objects. See his slides and Reinout Van Rees' summary of his talk.

I saw a lightning talk about the Nagare framework, which refuses to use templating languages for the same reason we do not use them in CubicWeb. Is their h.something the right way of doing things? The example reminds me of the C++ concatenation operator. I am not really convinced by the continuation idea, since I have been a happy user for years of the reactor model implemented in frameworks like Twisted. Read the blog and documentation for more information.

I had a chat with Jasper Op de Coul about Infrae's OAI Server and the work he did to manage RDF data in Subversion and a relational database before publishing it within a web app based on YUI. We commented code that handles books and library catalogs. Part of my CubicWeb demo was about books in DBpedia and cubicweb-book. He gave me a nice link to the WorldCat API.

Souheil Chelfouh showed me his work on Dolmen and Menhir. For several design problems and framework architecture issues, we compared the solutions offered by the Zope Toolkit library with the ones found by CubicWeb. I will have to read more about Martian and Grok to make sure I understand the details of that component architecture.

I had a chat with Martijn Faassen about packaging Python modules. A one sentence summary would be that the Python community should agree on a meta-data format that describes packages and their dependencies, then let everyone use the tool he likes most to manage the installation and removal of software on his system. I hope the work done during the last PyConUS and led by Tarek Ziadé arrived at the same conclusion. Read David Cournapeau's blog entry about Python Packaging for a detailed explanation of why the meta-data format is the way to go. By the way, Martijn is the lead developer of Grok and Martian.

Godefroid Chapelle and I talked a lot about Zope Toolkit (ZTK) and CubicWeb. We compared the way the two frameworks deal with pluggable components. ZTK has adapters and a registry. CubicWeb does not use adapters as ZTK does, but has a view selection mechanism that requires a registry with more features than the one used in ZTK. The ZTK registry only has to match a tuple (Interface, Class) when looking for an adapter, whereas CubicWeb's registry has to find the views that can be applied to a result set by checking various properties (a toy sketch of this idea follows the list):

  • interfaces: all items of first column implement the Calendar Interface,
  • dimensions: more than one line, more than two columns,
  • types: items of first column are numbers or dates,
  • form: form contains key XYZ that has a value lower than 10,
  • session: user is authenticated,
  • etc.
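
Since the post only sketches the idea, here is a toy illustration (not CubicWeb's actual API) of how a registry could score candidate views against a result set with several predicates and keep the best match:

# Toy illustration (not CubicWeb's actual API) of predicate-based view selection:
# each predicate scores a (request, result set) pair, a zero score vetoes the view,
# and the view with the highest total score wins.
def authenticated(req, rset):
    return 1 if req.get('user') else 0

def min_rows(n):
    return lambda req, rset: 1 if len(rset) >= n else 0

REGISTRY = {
    'calendar': [authenticated, min_rows(2)],
    'table': [min_rows(1)],
}

def select_view(req, rset):
    scores = {}
    for view, predicates in REGISTRY.items():
        values = [predicate(req, rset) for predicate in predicates]
        if 0 in values:          # one failing predicate rules the view out
            continue
        scores[view] = sum(values)
    return max(scores, key=scores.get) if scores else None

print(select_view({'user': 'alice'}, [('2009-07-06', 'talk')]))  # -> 'table'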

As for Grok and Martian, I will have to look into the details to make sure nothing evil is hiding there. I should also find time to compare zope.schema and yams and write about it on this blog.

And if you want more information about the conference:


Logilab at OSCON 2009

2009/07/28 by Sandrine Ribeau
http://assets.en.oreilly.com/1/event/27/oscon2009_oscon_11_years.gif

OSCON, Open Source CONvention, takes place every year and promotes Open Source for technology. It is one of the meeting hubs for the growing open source community. This was the occasion for us to learn about new projects and to present CubicWeb during a BAYPIGgies meeting hosted by OSCON.

http://www.openlina.com/templates/rhuk_milkyway/images/header_red_left.png

I had the chance to talk with some of the folks working at OpenLina, where they presented LINA. LINA is a thin virtual layer that enables developers to write and compile code using ordinary Linux tools, then package that code into a single executable that runs on a variety of operating systems. LINA runs invisibly in the background, enabling the user to install and run LINAfied Linux applications as if they were native to that user's operating system. They were curious about CubicWeb and took it as a challenge to package it with LINA... maybe soon on LINA's applications list.

Two open source projects caught my attention as potential semantic data publishers. The first one is FamilySearch, which provides a tool to search family history and genealogy; they are also working with Open Library to define a standard format for exchanging citations. Democracy Lab provides an application to collect votes and build geographic statistics based on political interests. They will at some point publish their data semantically so that their application data can be consumed.

It was also an occasion for us to introduce CubicWeb to the BayPIGgies folks, with the same presentation as the one given at EuroPython 2009. I'd like to take the opportunity to answer a question I did not manage to answer at the time. The question was: how different is CubicWeb from Freebase Parallax in terms of interface and view filters? Before answering this question, let's detail what Freebase Parallax is.

Freebase Parallax provides a new way to browse and explore data in Freebase. It allows browsing from one set of data to a related set of data, and enables aggregate visualizations. For instance, given the set of US presidents, different types of views can be applied, such as a timeline view, where the user can set the start and end dates used to draw the timeline. So generic views (which apply to any data) are customizable by the user.

http://res.freebase.com/s/f64a2f0cc4534b2b17140fd169cee825a7ed7ddcefe0bf81570301c72a83c0a8/resources/images/freebase-logo.png

The search powered by Parallax is very similar to CubicWeb faceted search, except that Parallax provides the user with a list of suggested filters to add in addition to the default ones, and the user can even remove a filter. That is something we could think about for CubicWeb: providing a generated faceted search so that the user can decide which filters to use.

Parallax also provides topics related to the current data set, which eases navigation between sets of data. The main difference I could see between the view filters offered by Parallax and CubicWeb is that Parallax provides the same views for any type of data, whereas CubicWeb has specific views depending on the data type plus generic views that apply to any type of data. This is a nice web interface to browse data, and it could be a good source of inspiration for CubicWeb.

http://www.zgeek.com/forum/gallery/files/6/3/2/img_228_96x96.jpg

During this talk, I mentioned that CubicWeb now understands SPARQL queries thanks to the fyzz parser.


You can now register on our sites

2009/09/03 by Arthur Lutz

With the new version of CubicWeb deployed on our "public" sites, we would like to welcome a new (much awaited) functionality: you can now register directly on our websites. Getting an account will give you access to a bunch of functionalities:

http://farm1.static.flickr.com/53/148921611_eadce4f5f5_m.jpg
  • subscribing to a project's activity will get you automated email reports of what is happening on that project
  • you can directly add tickets on projects instead of talking about it on the mailing lists
  • you can bookmark content
  • tag stuff
  • and much more...

This is also a way of testing out the CubicWeb framework (in this case the forge cube), which you can take home and host yourself (Debian recommended). Just click on the "register" link on the top right, or here.

Photo by wa7son under creative commons.


Now publishing blog entries under creative commons

2010/03/15 by Arthur Lutz

Logilab is proud to announce that the blog entries published on the blogs of http://www.logilab.org and http://www.cubicweb.org are now licensed under a Creative Commons Attribution-Share Alike 2.0 License (check out the footer).

http://creativecommons.org/images/deed/seal.png

We often use creative commons licensed photographs to illustrate this blog, and felt that being developers of open source software it was quite logical that some of our content should be published under a similar license. Some of the documentation that we release also uses this license, for example the "Building Salome" documentation. This license footer has been integrated to the cubicweb-blog package that is used to publish our sites (as part of cubicweb-forge).


CubicWeb sprint at Logilab - last-minute announcement

2010/04/29 by Arthur Lutz

Logilab is currently hosting a sprint around the CubicWeb platform. The main goal of this 5-day sprint is to improve the use of JavaScript and CSS in CubicWeb:

http://www.logilab.org/image/28586?vid=downloadhttp://codesnip.net/wp-content/uploads/javascript.png
  • a clean, tested and documented JavaScript API
  • making it easy to change the style of a CubicWeb application
  • bundle management for JavaScript and CSS
  • documentation of the coding standards for JS and CSS files in CubicWeb

The sprint will take place from Thursday, April 29, 2010 to May 5, 2010 (weekend excluded - the offices will be closed). You are welcome to contribute, lend a hand, do some pair programming... or simply meet CubicWeb developers. You can even come for a single afternoon or day. For those bringing laptops, network access will be available.

Address: 104 Boulevard Auguste-Blanqui, Paris. Ring the "Logilab" bell.

Metro: St Jacques or Corvisart (Glacière is the closest station but will be closed from Monday on)

Contact: http://www.logilab.fr/contact

Dates: from 29/04/2010 to 30/04/2010 and from 03/05/2010 to 05/05/2010


AgileFrance 2010: feedback on the agile management of the Pylos project

2010/10/07

Last spring, at the AgileFrance 2010 conference, I gave a feedback talk on the agile management of the Pylos project. My "customer" was kind enough to take part in preparing the presentation and to co-present it with me.

After a long delay, here are the slides of the presentation (the text is at the end, together with the speaker notes). Enjoy the read.

Thanks to Christine and to the conference organizers.

http://blog.xebia.fr/wp-content/uploads/2010/05/speaker-2010.png

Open Data meetup in Nantes: challenges and opportunities for the cultural sector

2011/11/17 by Arthur Lutz

We attended the event organized by Stereolux and Libertic about Open Data in the cultural sector in Nantes. Here is a short summary of the points we took away from the talks.

General introduction to Open Data by Libertic

There are enough articles about Open Data on the web that we do not feel the need to describe it here, but we want to stress that Open Data is not simply making information available. For data to qualify as open, it must comply with about ten principles, among which accessibility, exploitability (raw data) and reusability (license).

https://libertic.files.wordpress.com/2010/02/logo-libertic.png?w=300&h=180

Claire Gallon gave several examples of Open Data in the cultural sector:

  • making attendance data for a museum available makes it possible to build a service that tells you the best time to visit it. See When Should I Visit Tate Modern
  • Marseille-Provence 2013 (European Capital of Culture) is opening its data and expects third parties to write applications (mobile ones in particular).

An important idea is that public services must address the general public and cannot devote their resources to building niche services. Making the data available allows third parties to fill those niches.

To conclude, Claire Gallon insisted on the need to include community management in any data opening initiative. The next priority of the Open Data actors will be co-production, both for writing applications and for improving the data.

Presentation of the data.bnf.fr project by Romain Wenz

http://data.bnf.fr/data/logo-bnf.gifhttp://data.bnf.fr/data/logo-data.gif

Romain Wenz from the Bibliothèque nationale de France presented http://data.bnf.fr from the angle of openness: opening up to a different audience, to a different way of searching (people search the internet before going to the library), and to the social networks where the public shares references to content it likes (twitter, facebook, etc.). Such openness necessarily requires an indexable web, where the URL of a piece of content can easily be shared (so much for search portals with sessions and HTTP variables). If a site is not indexable, its content can still be found by connecting to it directly, but it will remain part of the "invisible" or "deep" web.

Romain Wenz insisted on the importance of the technologies used: on one side the open standards formalized by the W3C, notably in terms of semantic web (RDF, RDFa, opengraph, schema.org, etc.), and on the other the benefit of relying on free software. In the case of http://data.bnf.fr, that software is CubicWeb.

Presentation of the collaborations between Wikimedia France and public institutions in Toulouse

https://upload.wikimedia.org/wikipedia/commons/thumb/4/41/Commons-logo-en.svg/75px-Commons-logo-en.svg.png

The transition from the BnF to Wikimedia is easy: Wikisource (a library of royalty-free books) signed a partnership with Gallica, which provided it with scans of books that have entered the public domain.

Wikimedia France presented two successful projects co-produced with institutions in Toulouse:

  • the Phoebus project gave volunteers access to the archives of the Muséum de Toulouse
  • the Wikimedia Commons community helped enrich the metadata of the collection devoted to the photographer Eugène Trutat.

Open Data presentation by the Nantes Métropole city council

http://nantes.fr/webdav/site/nantesfr/shared/fileadmin/images/Puces/autrespuces/logo64_queue.png

Frédéric Vasse briefly presented the Open Data approach of the City of Nantes. The platform will be launched next Monday at the Cantine Numérique in Nantes. According to him, the goal for Nantes is to succeed at co-production with the local actors.

Conclusion and outlook on a concrete Open Data project for cultural actors

Libertic concluded by proposing to the cultural actors a project for an aggregator of information about cultural events in Nantes. We hope to be able to give you more information about this project soon.

Another report (meeting notes): http://www.scribd.com/doc/72810587/Opendata-Culture


Open Data in Nantes: an aggregator of cultural events

2011/12/12 by Arthur Lutz

On Thursday, December 8, 2011, we took part in the working meeting on opening up event data.

Licensing issues

A first problem is that the license proposed by LiberTIC is Creative Commons CC-BY, whereas data producers often do not hold the rights to all the data they distribute (for instance the photo illustrating a concert). They will therefore find it hard to publish all of it under a CC-BY license. Let's hope that the Creative Commons license becomes a habit and that this does not slow down the project too much.

Today, usage looks like Fair Use: the reuse of copyrighted content is tolerated because it serves the spread of information.

We wondered whether it is possible to mix two licenses within one data feed, or whether two separate but linked feeds are needed.

https://creativecommons.org/images/license-layers.png

Usage issues

A second problem is that reusers will not be interested if the data is too poor and does not include images or videos. A common ground that satisfies both producers and reusers must therefore be found.

Imports, or big clunky forms?

Given the complexity of the data model that emerged from the discussions (many special cases), it was proposed to provide a form for entering an event. In our opinion, "manual" entry should remain an exceptional case (for example a cultural actor who has no site to publish on); otherwise the platform risks being, for producers, just one more site to fill in when publishing their calendar.

An example of good practice is the very popular GoodRelations, which offers a form so that a user who has not integrated the format into their online shop can easily generate the file and host it themselves, thus favouring a decentralized model patterned on that of search engines.

Formats

It therefore seems important to us to focus on the standard formats that the platform could import and export.

Here is a non-exhaustive list:

Further reading

Looking to combine existing vocabularies (so as not to reinvent a format that would have to be translated into another vocabulary to be reusable), we came across the following articles:

http://cdn1.iconfinder.com/data/icons/transformers/network-connections.pnghttp://cdn1.iconfinder.com/data/icons/transformers/Internet-Explorer.pnghttp://cdn1.iconfinder.com/data/icons/transformers/entire-network.png

Conclusion

It seems important to us not to get the chosen directions wrong:

  • use standard formats and combine existing namespaces rather than invent a new format
  • offer several export formats for different uses (json, ical, etc.), even if that means leaving out some of the available content when the format does not lend itself to it (see the sketch after this list)
  • do not create yet another API; favour semantic web standards instead, by publishing RDF and, if possible, providing SPARQL access
  • prefer the distributed publication of data by its producers, aggregated by the platform, rather than expecting producers to fill in yet another form.
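
As a small sketch of the "several export formats" point above, here is what producing an iCalendar export for a single event could look like in Python, assuming the icalendar package (the event data is invented):

# Sketch, assuming the icalendar package: exporting one (invented) cultural
# event as an iCalendar file, one of the formats suggested above.
from datetime import datetime
from icalendar import Calendar, Event

cal = Calendar()
cal.add('prodid', '-//hypothetical Nantes events aggregator//')
cal.add('version', '2.0')

event = Event()
event.add('summary', 'Concert at Stereolux')          # illustrative data
event.add('dtstart', datetime(2011, 12, 15, 20, 30))
event.add('location', 'Nantes')
cal.add_component(event)

with open('events.ics', 'wb') as output:
    output.write(cal.to_ical())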

We are eagerly awaiting the next steps. According to LiberTIC, the platform will be developed as free software, with collaborative tools to manage the project.

CubicWeb is a free software platform that has already proven itself and was designed to build applications of the kind of aggregator described above: importing and exporting data in various formats, using standard semantic web technologies. We hope that those who will have to build the aggregator will choose CubicWeb as the technical foundation for this project.


An open and distributed social network with CubicWeb

2012/07/18 by Nicolas Chauvat

What is a social network?

  • descriptions of people (profile, history, etc.)
  • links with the other members (address book, etc.)
  • group creation
  • content sharing (photos, videos, presentations, etc.)
  • discussion (blog, comments, forums, messages, microblog)
  • putting people in touch (work, fun, dating, etc.)
  • recommendations (links, books, purchases, movies, music, etc.)
  • presence (doing what, with whom, where, etc.)

What about interoperability?

  • many applications / platforms
  • mostly centralized and closed
  • progressive opening: protocols and APIs under development
  • open and distributed networks: appleseed, diaspora, onesocialweb, etc.
  • could we do it differently?

APIs: the open stack

  • discovery = XRD
  • identity = OpenID
  • access control = OAuth
  • activities / streams = Activity Streams
  • people = Portable Contacts
  • applications = OpenSocial

What about using Web standards instead?

An open and distributed architecture

  • RDF vocabularies and the HTTP + REST protocol
  • everyone runs their own server
  • GET, and possibly a local copy
  • subscription if needed (pubsub, xmpp, atom?)
  • permissions managed locally

=> social semantic network
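
As a sketch of the "RDF vocabularies + HTTP GET" idea above, fetching a FOAF profile and walking its knows relations can be done in a few lines of Python with rdflib (the profile URL is invented):

# Sketch: fetching a (hypothetical) FOAF profile over HTTP and listing
# the people it declares to know, using rdflib.
from rdflib import Graph, Namespace

FOAF = Namespace('http://xmlns.com/foaf/0.1/')

graph = Graph()
graph.parse('http://example.org/~alice/foaf.rdf')   # each member hosts their own profile

for person in graph.subjects(FOAF.name, None):
    for friend in graph.objects(person, FOAF.knows):
        print(person, 'knows', friend)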

Why CubicWeb?

  • a semantic web framework
  • designed as components to be assembled
  • everyone can define their own tailor-made application
  • made to publish HTML and RDF side by side
  • made to export and import data
  • already speaks foaf, skos, sioc, doap, rss, atom, etc.

Example

  • (micro)blog + book + link + file
  • could add: music, photos, etc.
  • but also: a diary, apartment hunting, etc.

What next?

A long time ago...

  • discovery = who and cat /etc/passwd | cut -d: -f1
  • identity = login
  • access control = chmod, chgrp, su
  • activities = .plan
  • people = .addressbook
  • applications = vim ~/public_html/me.html

Note

This text was presented in August 2010 at the French Python users conference (PyCon-Fr 2010).


Agile Tour Nantes 2012 report - presentation and leads to explore

2012/12/04 by Arthur Lutz

We have been using agile methods since Logilab was founded. We have sometimes taken liberties with the formalism of the well-known methods, adapting our practices to our customers and our specificities. Along the way, we have developed our own tools geared towards our software development activity (version control, ticket workflows, continuous integration, etc.).

https://www.logilab.org/file/113044?vid=download

It is sometimes good to dive back into theory and exchange good practices about agility. That is why we took part in the Nantes stop of the Agile Tour.

Free software and agility

Rather than being mere spectators, we presented our agile practices, which are strongly tied to free software, one undeniable advantage of which is that anyone can modify it to adapt it to their needs.

First, by using the CubicWeb web platform, we were able to build a forge whose data model we control. Management processes can therefore be tailored, and the applications' data can be tightly integrated. For example, although the software base is the same, the ticket validation workflow on the extranet is not identical to the one on our public forges. Another example: the versions delivered on the extranet appear directly in the intranet tool used for business tracking and time accounting (CRM/ERP).

Second, we chose mercurial (hg) largely because it is written in Python, which allowed us to integrate it with our other tools, but also to contribute to it (cf. evolve).

Our presentation can be viewed on slideshare:

http://www.logilab.org/file/113040?vid=download

or downloaded as a PDF.

Behaviour Driven Development

BDD (Behaviour Driven Development) goes hand in hand with high-level functional tests that can be described using a syntactic formalism often associated with the Gherkin language. These test scenarios can then be turned into code and executed. On the Python side, we found behave and lettuce. As with Selenium (web navigation test scenarios), the difficulty of this kind of tests lies more in their maintenance than in the initial writing.
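
As a minimal sketch of what this looks like with behave (the scenario and step names are invented, not taken from our test suites):

# features/steps/ticket_steps.py -- minimal behave sketch (invented scenario):
#
#   Feature: ticket workflow
#     Scenario: a validated ticket shows up in the release notes
#       Given a project with an open ticket
#       When the ticket is validated
#       Then the ticket appears in the release notes
#
from behave import given, when, then

@given('a project with an open ticket')
def step_open_ticket(context):
    context.ticket = {'state': 'open'}

@when('the ticket is validated')
def step_validate_ticket(context):
    context.ticket['state'] = 'validated'

@then('the ticket appears in the release notes')
def step_release_notes(context):
    assert context.ticket['state'] == 'validated'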

https://www.logilab.org/file/113042?vid=download

This high-level language can nevertheless be a communication channel with a customer who writes tests. To date, several of our customers have taken the time to write test sheets that we then "translate" into unit tests. While a customer is not necessarily ready to learn Python and its unit tests, they might be willing to write tests using this formalism.


Nazca is out !

2012/12/21 by Simon Chabot

What is it for ?

Nazca is a Python library that helps you align data. But what does "align data" mean? For instance, you have a list of cities, described by their name and country, and you would like to find their URI on dbpedia to get more information about them, such as their longitude and latitude. With two or three cities this can be done by hand, but not with hundreds or thousands of cities. Nazca provides all the pieces needed to do it.

This blog post introduces how this library works and how it can be used. Once you have understood the main concepts behind it, don't hesitate to try Nazca online!

Introduction

The alignment process is divided into three main steps:

  1. Gather and format the data we want to align. In this step, we define two sets called the alignset and the targetset. The alignset contains our data, and the targetset contains the data on which we would like to make the links.
  2. Compute the similarity between the items gathered. We compute a distance matrix between the two sets according to a given distance.
  3. Find the items having a high similarity thanks to the distance matrix.

Simple case

  1. Let's define alignset and targetset as simple python lists.
alignset = ['Victor Hugo', 'Albert Camus']
targetset = ['Albert Camus', 'Guillaume Apollinaire', 'Victor Hugo']
  2. Now we have to compute the similarity between the items. For that purpose we use the Levenshtein distance [1], which is well suited to computing the distance between short strings. Such a function is provided in the nazca.distances module.

    The next step is to compute the distance matrix according to the Levenshtein distance. The result is given in the following table:

                    Albert Camus   Guillaume Apollinaire   Victor Hugo
    Victor Hugo     6              9                       0
    Albert Camus    0              8                       6

  3. The alignment process ends by reading the matrix and declaring that items whose distance is below a given threshold are identical (see the sketch below).

[1]Also called the edit distance, because the distance between two words is equal to the number of single-character edits required to change one word into the other.
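
To make this concrete, here is a sketch of how the matrix above could be computed with Nazca itself, reusing the cdist call shown in the more complex example below (the exact module path may differ between Nazca versions):

# Sketch: computing the distance matrix of the simple example with Nazca's
# cdist helper (same call as in the more complex example below; the module
# path may differ between Nazca versions).
import nazca.matrix

alignset = ['Victor Hugo', 'Albert Camus']
targetset = ['Albert Camus', 'Guillaume Apollinaire', 'Victor Hugo']

matrix = nazca.matrix.cdist(alignset, targetset,
                            'levenshtein', matrix_normalized=False)
print(matrix)   # expected to match the table above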

A more complex one

The previous case was simple, because we had only one attribute to align (the name), but it is frequent to have several attributes to align, such as the name, the birth date and the birth city. The steps remain the same, except that three distance matrices will be computed, and items will be represented as nested lists. See the following example:

alignset = [['Paul Dupont', '14-08-1991', 'Paris'],
            ['Jacques Dupuis', '06-01-1999', 'Bressuire'],
            ['Michel Edouard', '18-04-1881', 'Nantes']]
targetset = [['Dupond Paul', '14/08/1991', 'Paris'],
             ['Edouard Michel', '18/04/1881', 'Nantes'],
             ['Dupuis Jacques ', '06/01/1999', 'Bressuire'],
             ['Dupont Paul', '01-12-2012', 'Paris']]

In such a case, two distance functions are used, the Levenshtein one for the name and the city and a temporal one for the birth date [2].

The cdist function of nazca.distances enables us to compute those matrices :

  • For the names:
>>> nazca.matrix.cdist([a[0] for a in alignset], [t[0] for t in targetset],
>>>                    'levenshtein', matrix_normalized=False)
array([[ 1.,  6.,  5.,  0.],
       [ 5.,  6.,  0.,  5.],
       [ 6.,  0.,  6.,  6.]], dtype=float32)
  Dupond Paul Edouard Michel Dupuis Jacques Dupont Paul
Paul Dupont 1 6 5 0
Jacques Dupuis 5 6 0 5
Edouard Michel 6 0 6 6
  • For the birthdates:
>>> nazca.matrix.cdist([a[1] for a in alignset], [t[1] for t in targetset],
>>>                    'temporal', matrix_normalized=False)
array([[     0.,  40294.,   2702.,   7780.],
       [  2702.,  42996.,      0.,   5078.],
       [ 40294.,      0.,  42996.,  48074.]], dtype=float32)
  14/08/1991 18/04/1881 06/01/1999 01-12-2012
14-08-1991 0 40294 2702 7780
06-01-1999 2702 42996 0 5078
18-04-1881 40294 0 42996 48074
  • For the birthplaces:
>>> nazca.matrix.cdist([a[2] for a in alignset], [t[2] for t in targetset],
>>>                    'levenshtein', matrix_normalized=False)
array([[ 0.,  4.,  8.,  0.],
       [ 8.,  9.,  0.,  8.],
       [ 4.,  0.,  9.,  4.]], dtype=float32)
  Paris Nantes Bressuire Paris
Paris 0 4 8 0
Bressuire 8 9 0 8
Nantes 4 0 9 4

The next step is gathering those three matrices into a global one, called the global alignment matrix. Thus we have :

  0 1 2 3
0 1 40304 2715 7780
1 2715 43011 0 5091
2 40304 0 43011 48084

Allowing for some misspellings (for example Dupont and Dupond are very close), the matching threshold can be set to 1 or 2. We can then see that item 0 of our alignset is the same as item 0 of the targetset, and likewise that item 1 of the alignset matches item 2 of the targetset: the links can be made!

It is important to notice that even though item 0 of the alignset and item 3 of the targetset have the same name and the same birthplace, they are unlikely to be identical because of their very different birth dates.

You may have noticed that working with matrices by hand, as I did in the example, is a little tedious. The good news is that Nazca does all this work for you: you just provide the sets and the distance functions. Another piece of good news is that the project comes with the functions needed to build the sets!

[2]Provided in the nazca.distances module.

Real applications

Just before we start, we will assume the following imports have been done:

from nazca import dataio as aldio   #Functions for input and output data
from nazca import distances as ald  #Functions to compute the distances
from nazca import normalize as aln  #Functions to normalize data
from nazca import aligner as ala    #Functions to align data

The Goncourt prize

On wikipedia, we can find the Goncourt prize winners, and we would like to establish a link between the winners and their URI on dbpedia (let's imagine the Goncourt prize winners category does not exist on dbpedia).

We simply copy/paste the winners list from wikipedia into a file and replace all the separators (- and ,) with #. So the beginning of our file is:

1903#John-Antoine Nau#Force ennemie (Plume)
1904#Léon Frapié#La Maternelle (Albin Michel)
1905#Claude Farrère#Les Civilisés (Paul Ollendorff)
1906#Jérôme et Jean Tharaud#Dingley, l'illustre écrivain (Cahiers de la Quinzaine)

When using the high-level functions of this library, each item must have at least two elements: an identifier (the name, or the URI) and the attribute to compare. With the previous file, we will use the name (column number 1) both as the identifier (we don't have a URI to use here) and as the attribute to align. This is expressed in Python with the following code:

alignset = aldio.parsefile('prixgoncourt', indexes=[1, 1], delimiter='#')

So, the beginning of our alignset is:

>>> alignset[:3]
[[u'John-Antoine Nau', u'John-Antoine Nau'],
 [u'Léon Frapié', u'Léon, Frapié'],
 [u'Claude Farrère', u'Claude Farrère']]

Now let's build the targetset thanks to a SPARQL query and the dbpedia endpoint. We ask for the list of French novelists, described by their URI and their name in French:

query = """
     SELECT ?writer, ?name WHERE {
       ?writer  <http://purl.org/dc/terms/subject> <http://dbpedia.org/resource/Category:French_novelists>.
       ?writer rdfs:label ?name.
       FILTER(lang(?name) = 'fr')
    }
 """
targetset = aldio.sparqlquery('http://dbpedia.org/sparql', query)

Both functions return nested lists as presented before. Now, we have to define the distance function to be used for the alignment. This is done thanks to a python dictionary where the keys are the columns to work on, and the values are the treatments to apply.

treatments = {1: {'metric': ald.levenshtein}} # Use a levenshtein on the name
                                              # (column 1)

Finally, the last thing we have to do is to call the alignall function:

alignments = ala.alignall(alignset, targetset,
                       0.4, #This is the matching threshold
                       treatments,
                       mode=None,#We'll discuss about that later
                       uniq=True #Get the best results only
                      )

This function returns an iterator over the alignments found. You can display the results with the following code:

for a, t in alignments:
    print '%s has been aligned onto %s' % (a, t)

It may be important to apply some pre-processing to the data to align. For instance, names can be written in lower or upper case, with extra characters such as punctuation, or with unwanted information in parentheses, and so on. That is why we provide some functions to normalize your data. The most useful may be the simplify() function (see the docstring for more information). The treatments can then be given as follows:

def remove_after(string, sub):
    """ Remove the text after ``sub`` in ``string``
        >>> remove_after('I like cats and dogs', 'and')
        'I like cats'
        >>> remove_after('I like cats and dogs', '(')
        'I like cats and dogs'
    """
    try:
        return string[:string.lower().index(sub.lower())].strip()
    except ValueError:
        return string


treatments = {1: {'normalization': [lambda x:remove_after(x, '('),
                                    aln.simply],
                  'metric': ald.levenshtein
                 }
             }

Cities alignment

The previous case with the Goncourt prize winners was pretty simple, because the number of items was small and the computation fast. But in a more realistic use case, the number of items to align may be huge (thousands or millions…). In such a case it's unthinkable to build the global alignment matrix, because it would be too big and it would take (at least...) a few days to complete the computation. So the idea is to make small groups of possibly similar data and compute smaller matrices (i.e. a divide and conquer approach). For this purpose, we provide some functions to group/cluster data, for both text and numerical data.

This is the code we used; we will explain it below:

targetset = aldio.rqlquery('http://demo.cubicweb.org/geonames',
                           """Any U, N, LONG, LAT WHERE X is Location, X name
                              N, X country C, C name "France", X longitude
                              LONG, X latitude LAT, X population > 1000, X
                              feature_class "P", X cwuri U""",
                           indexes=[0, 1, (2, 3)])
alignset = aldio.sparqlquery('http://dbpedia.inria.fr/sparql',
                             """prefix db-owl: <http://dbpedia.org/ontology/>
                             prefix db-prop: <http://fr.dbpedia.org/property/>
                             select ?ville, ?name, ?long, ?lat where {
                              ?ville db-owl:country <http://fr.dbpedia.org/resource/France> .
                              ?ville rdf:type db-owl:PopulatedPlace .
                              ?ville db-owl:populationTotal ?population .
                              ?ville foaf:name ?name .
                              ?ville db-prop:longitude ?long .
                              ?ville db-prop:latitude ?lat .
                              FILTER (?population > 1000)
                             }""",
                             indexes=[0, 1, (2, 3)])


treatments = {1: {'normalization': [aln.simply],
                  'metric': ald.levenshtein,
                  'matrix_normalized': False
                 }
             }
results = ala.alignall(alignset, targetset, 3, treatments=treatments, #As before
                       indexes=(2, 2), #On which data build the kdtree
                       mode='kdtree',  #The mode to use
                       uniq=True) #Return only the best results

Let's explain the code. We have two data sources, each providing a list of cities we want to align: the first column is the identifier, the second is the name of the city, and the last one is the location of the city (longitude and latitude), gathered into a single tuple.

In this example, we want to build a kdtree on the (longitude, latitude) pair to split our data into small groups of candidates. This clustering is coarse, and is only used to reduce the set of potential candidates without losing any more refined possible matches.

In the next step, we define the treatments to apply. They are the same as before, but we ask for a non-normalized matrix (i.e. the raw output of the levenshtein distance). Then we call the alignall function: indexes is a tuple giving the position of the point on which the kdtree must be built, and mode is the mode used to find neighbours [3].

Finally, uniq asks the function to return only the best candidate (i.e. the one having the shortest distance below the given threshold).

The function returns a generator yielding tuples where the first element is the identifier of the alignset item and the second is the targetset one (it may take some time before the first tuples are yielded, because all the computation must be done first…).

[3]The available modes are kdtree, kmeans and minibatch for numerical data and minhashing for text one.

Try it online !

We have also built a little application on top of Nazca, using CubicWeb. This application provides a user interface for Nazca, helping you choose what you want to align. You can use sparql or rql queries, as in the previous example, or import your own csv file [4]. Once you have chosen what you want to align, you can click the Next step button to customize the treatments you want to apply, just as you did before in Python! Once done, clicking Next step again starts the alignment process. Wait a little, and you can either download the results as a csv or rdf file, or view them directly online by choosing the html output.

[4]Your csv file must be tab-separated for the moment…

LMGC90 Sprint at Logilab in March 2013

2013/03/28 by Vladimir Popescu

LMGC90 Sprint at Logilab

At the end of March 2013, Logilab hosted a sprint on the LMGC90 simulation code in Paris.

LMGC90 is an open-source software developed at the LMGC ("Laboratoire de Mécanique et Génie Civil" -- "Mechanics and Civil Engineering Laboratory") of the CNRS, in Montpellier, France. LMGC90 is devoted to contact mechanics and is, thus, able to model large collections of deformable or undeformable physical objects of various shapes, with numerous interaction laws. LMGC90 also allows for multiphysics coupling.

Sprint Participants

https://www.logilab.org/file/143585/raw/logo_LMGC.jpghttps://www.logilab.org/file/143749/raw/logo_SNCF.jpghttps://www.logilab.org/file/143750/raw/logo_LaMSID.jpghttps://www.logilab.org/file/143751/raw/logo_LOGILAB.jpg

More than ten hackers joined in from:

  • the LMGC, which leads LMCG90 development and aims at constantly improving its architecture and usability;
  • the Innovation and Research Department of the SNCF (the French state-owned railway company), which uses LMGC90 to study railway mechanics, and more specifically, the ballast;
  • the LaMSID ("Laboratoire de Mécanique des Structures Industrielles Durables", "Laboratory for the Mechanics of Ageing Industrial Structures") laboratory of EDF / CNRS / CEA, which has strong expertise in Code_ASTER and LMGC90;
  • Logilab, as the developer, for the SNCF, of a CubicWeb-based platform dedicated to the simulation data and knowledge management.

After a great introduction to LMGC90 by Frédéric Dubois and some preliminary discussions, teams were quickly constituted around the common areas of interest.

Enhancing LMGC90's Python API to build core objects

As of the sprint date, LMGC90 is mainly developed in Fortran, but also contains Python code for two purposes:

  • Exposing the Fortran functions and subroutines in the LMGC90 core to Python; this is achieved using Fortran 2003's ISO_C_BINDING module and Swig. These Python bindings are grouped in a module called ChiPy.
  • Making it easy to generate input data (so called "DATBOX" files) using Python. This is done through a module called Pre_LMGC.

The main drawback of this approach is the double modelling of data that this architecture implies: once in the core and once in Pre_LMGC.

It was decided to build a unique user-level Python layer on top of ChiPy, that would be able to build the computational problem description and write the DATBOX input files (currently achieved by using Pre_LMGC), as well as to drive the simulation and read the OUTBOX result files (currently by using direct ChiPy calls).

This task has been met with success, since, in the short time span available (half a day, basically), the team managed to build some object types using ChiPy calls and save them into a DATBOX.

Using the Python API to feed a computation data store

This topic involved importing LMGC90 DATBOX data into the numerical platform developed by Logilab for the SNCF.

This was achieved using ChiPy as a Python API to the Fortran core to get:

  • the bodies involved in the computation, along with their materials, behaviour laws (with their associated parameters), geometries (expressed in terms of zones);
  • the interactions between these bodies, along with their interaction laws (and associated parameters, e.g. friction coefficient) and body pair (each interaction is defined between two bodies);
  • the interaction groups, which contain interactions that have the same interaction law.

There is still a lot of work to be done (notably regarding the loads applied to the bodies), but this is already a great achievement. This could only have happened in a sprint, where all the needed expertise was available:

  • the SNCF experts were there to clarify the import needs and check the overall direction;

  • Logilab implemented a data model based on CubicWeb, and imported the data using the ChiPy bindings developed on-demand by the LMGC core developer team, using the usual-for-them ISO_C_BINDING/ Swig Fortran wrapping dance.

    https://www.logilab.org/file/143753/raw/logo_CubicWeb.jpg
  • Logilab undertook the data import; to this end, it asked the LMGC how the relevant information from LMGC90 can be exposed to Python via the ChiPy API.

Using HDF5 as a data storage backend for LMGC90

The main point of this topic was to replace the in-house DATBOX/OUTBOX textual format used by LMGC90 to store input and output data, with an open, standard and efficient format.

Several formats have been considered, like HDF5, MED and NetCDF4.

MED has been ruled out for the moment, because it lacks support for storing body contact information. HDF5 was finally chosen because of the quality of its Python libraries, h5py and pytables, and the ease of use that tools like h5fs provide.

https://www.logilab.org/file/143754/raw/logo_HDF.jpg

Alain Leufroy from Logilab quickly presented h5py and h5fs usage, and the team started its work, measuring the performance impact of the storage pattern of LMGC90 data. This was quickly achieved, as the LMGC experts made it easy to set up tests of various sizes, and as the Logilab developers managed to understand the concepts and implement the required code in a fast and agile way.
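
As an illustration of the kind of code that was benchmarked (the dataset names and shapes are invented, not LMGC90's actual layout), writing simulation arrays with h5py looks like this:

# Sketch (invented dataset names and shapes): storing simulation arrays
# in HDF5 with h5py, the kind of pattern whose performance was measured.
import numpy as np
import h5py

coordinates = np.random.rand(1000, 3)     # e.g. positions of 1000 bodies
velocities = np.zeros((1000, 3))

with h5py.File('outbox.h5', 'w') as h5file:
    step = h5file.create_group('step_0001')
    step.create_dataset('coordinates', data=coordinates, compression='gzip')
    step.create_dataset('velocities', data=velocities, compression='gzip')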

Debian / Ubuntu Packaging of LMGC90

This topic turned out to be more difficult than initially assessed, mainly because LMGC90 depends on non-packaged external libraries, which thus had to be packaged first:

  • the Matlib linear algebra library, written in C,
  • the Lapack95 library, which is a Fortran95 interface to the Lapack library.

Logilab kept working on this after the sprint and produced packages that are currently being tested by the LMGC team. Some changes are expected (for instance, Python modules should be prefixed with a proper namespace) before the packages can be submitted for inclusion into Debian. The expertise of Logilab regarding Debian packaging was of great help for this task. This will hopefully help to spread the use of LMGC90.

https://www.logilab.org/file/143755/raw/logo_Debian.jpg

Distributed Version Control System for LMGC90

As you may know, Logilab is really fond of Mercurial as a DVCS. Our company invested a lot into the development of the great evolve extension, which makes Mercurial a very powerful tool to efficiently manage the team development of software in a clean fashion.

This is why Logilab presented Mercurial's features and advantages over the current VCS used to manage LMGC90 sources, namely svn, to the other participants of the sprint. This was appreciated and will hopefully benefit LMGC90's ease of development and its spread in the Open Source community.

https://www.logilab.org/file/143756/raw/logo_HG.jpg

Conclusions

All in all, this two-day sprint on LMGC90, involving participants from several industrial and academic institutions, has been a great success. A lot of code has been written but, more importantly, several stepping stones have been laid, such as:

  • the general LMGC90 data access architecture, with the Python layer on top of the LMGC90 core;
  • the data storage format, namely HDF5.

Somewhat collaterally, several other results have also been achieved:

  • partial LMGC90 data import into the SNCF CubicWeb-based numerical platform,
  • Debian / Ubuntu packaging of LMGC90 and dependencies.

On a final note, we greatly appreciated the cooperation between the participants, which we found pleasant and efficient. We look forward to finding more occasions to work together.