The EuroSciPy2009 conference was held in Leipzig at the end of July and was
sponsored by Logilab and other companies. It started with three talks about speed.
In his keynote, Francesc Alted talked about starving CPUs. Thirty years back,
memory and CPU frequencies were about the same. Memory speed kept up for about
ten years with the evolution of CPU speed before falling behind. Nowadays,
memory is about a hundred times slower than the cache which is itself about
twenty times slower than the CPU. The direct consequence is that CPUs are
starving and spend many clock cycles waiting for data to process.
In order to improve the performance of programs, it is now required to know
about the multiple layers of computer memory, from disk storage to CPU. The
common architecture will soon count six levels: mechanical disk, solid-state
disk, RAM, level 3 cache, level 2 cache and level 1 cache.
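To get a feeling for the cost of crossing these layers, here is a small
numpy sketch (mine, not from the talk) that sums the same number of
elements once contiguously and once with a large stride. On typical
hardware the strided sum is several times slower, because it wastes most
of every cache line fetched from RAM::

    import timeit

    import numpy

    a = numpy.zeros(2 ** 24)        # 128 MB of float64, far larger than any cache

    contiguous = a[: len(a) // 16]  # first sixteenth, adjacent in memory
    strided = a[::16]               # same element count, 128-byte stride

    # Both sums touch the same number of elements, but the strided one
    # pulls a whole cache line from RAM for every single element it uses.
    t1 = timeit.timeit(lambda: contiguous.sum(), number=100)
    t2 = timeit.timeit(lambda: strided.sum(), number=100)
    print("contiguous: %.3fs  strided: %.3fs" % (t1, t2))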
Using optimized array operations, taking striding into account, processing data
blocks of the right size and using compression to diminish the amount of data
that is transferred from one layer to the next are four techniques that go a long
way on the road to high performance. Compression algorithms like Blosc increase
throughput because they strike the right balance between being fast and providing
good compression ratios. Blosc compression will soon be available in PyTables.
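To illustrate where that fits in PyTables (my example, not Francesc's),
compression is chosen through a Filters object when a dataset is created;
'zlib' and 'lzo' work today, and 'blosc' is presumably the value to pass
once the announced support is released::

    import numpy
    import tables

    data = numpy.random.normal(size=(1000, 1000))

    # Pick the compression library and level for the on-disk array;
    # swap 'zlib' for 'blosc' once PyTables ships it.
    filters = tables.Filters(complevel=5, complib='zlib')

    h5 = tables.openFile('demo.h5', 'w')
    arr = h5.createCArray('/', 'data', tables.Float64Atom(), data.shape,
                          filters=filters)
    arr[:] = data
    h5.close()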
Francesc also mentioned the numexpr extension to numpy, and its combination with
PyTables named tables.Expr, which nicely and easily accelerates the computation
of some expressions involving numpy arrays. In his list of references, Francesc
cited Ulrich Drepper's article What Every Programmer Should Know About Memory.
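To give an idea of what numexpr does, here is a minimal sketch (mine):
instead of evaluating each operator separately and allocating a large
temporary array for every intermediate result, numexpr compiles the whole
expression and streams the operands through the cache in blocks::

    import numexpr
    import numpy

    a = numpy.random.random(10 ** 7)
    b = numpy.random.random(10 ** 7)

    # Plain numpy builds temporaries for a**3, 2*a**3 and 4*b before
    # the final addition; numexpr evaluates the expression blockwise.
    slow = 2 * a ** 3 + 4 * b
    fast = numexpr.evaluate("2 * a**3 + 4 * b")
    assert numpy.allclose(slow, fast)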
Maciej Fijalkowski started his talk with a general presentation of the PyPy
framework. One uses PyPy to describe an interpreter in RPython, then generates
the actual interpreter code and its JIT.
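The canonical small example looks something like the following sketch (my
reconstruction, assuming the pypy.rlib.jit API of the time, so details may
differ): the interpreter loop is plain RPython plus a JitDriver annotation
telling the JIT where the interpreted program's loops are::

    from pypy.rlib.jit import JitDriver

    # greens identify a position in the interpreted program, reds the
    # mutable state of the interpreter.
    jitdriver = JitDriver(greens=['pc', 'program'], reds=['acc'])

    def interpret(program):
        pc = 0
        acc = 0
        while pc < len(program):
            jitdriver.jit_merge_point(pc=pc, program=program, acc=acc)
            op = program[pc]
            if op == '+':
                acc += 1
            elif op == '-':
                acc -= 1
            pc += 1
        return acc

    def entry_point(argv):
        print interpret(argv[1])
        return 0

    # The PyPy translator turns this into a C interpreter; translating
    # with --opt=jit also generates a tracing JIT for it.
    def target(driver, args):
        return entry_point, None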
Since PyPy has become more of a framework to write interpreters than a
reimplementation of Python in Python, I suggested changing its misleading name to
something like GcGc, the Generic Compiler for Generating Compilers. Maciej
answered that there are discussions on the mailing list to split the project in
two and make the implementation of the Python interpreter distinct from the GcGc
framework.
Maciej then focused his talk on his recent effort to rewrite in RPython the part
of numpy that exposes the underlying C library to Python. He said the benefits
of using PyPy's JIT to speed up that wrapping layer are already visible; details
are available on the PyPy blog. Gaël Varoquaux added that David Cournapeau has
started working on making the C/Python split in numpy cleaner, which would
further ease the job of rewriting it in RPython.
Damien Diederen talked about his work on CrossTwine Linker and compared it
with the many projects that are actively attacking the problem of speed that
dynamic and interpreted languages have been dragging along for years. Parrot
tries to be the über virtual machine. Psyco offers very nice acceleration, but
currently only on 32-bit systems. PyPy might be what he calls the Right
Approach, but still needs a lot of work. Jython and IronPython modify the
language a bit but benefit from the qualities of the JVM or the CLR. Unladen
Swallow is probably the one that's most similar to CrossTwine.
CrossTwine treats CPython as a library and uses a set of C++ classes to
generate efficient interpreters that make calls to CPython's
internals. CrossTwine is a tool that helps improve performance by
hand-replacing some code paths with very efficient code that does the same
operations but bypasses the interpreter and its overhead. An interpreter built
with CrossTwine can be viewed as a JIT'ed branch of the official Python
interpreter that should be feature-compatible (and bug-compatible) with CPython.
Damien calls his approach "punching holes in C substrate to get more speed" and
says it could probably be combined with Psyco for even better results.
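For reference, enabling Psyco on a hot spot is a one-liner; here is a
minimal sketch of its usual usage (keeping in mind it only works on
32-bit x86)::

    import psyco

    def inner_loop(n):
        # the kind of hot numeric loop Psyco specializes well
        total = 0
        for i in range(n):
            total += i * i
        return total

    # Compile just this function to specialized machine code;
    # psyco.full() would instead instrument the whole program.
    psyco.bind(inner_loop)
    print inner_loop(10 ** 6)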
CrossTwine works on 64-bit systems, but it is not (yet?) free software. It
focuses on some use cases to greatly improve speed and is not to be considered a
general-purpose interpreter able to make any Python code faster.