Martin Cracauer is a software engineer for Google’s Travel team and a dedicated Lisp enthusiast. Below, he shares his impressions of the recent European Lisp Symposium.
In April, I attended the 8th European Lisp Symposium in London. It was good to be there and I'm proud to have played a part by giving a talk about unwanted memory retention.
More than anything, I was struck by the professionalism of the performance-oriented Lisp programmers giving talks. The Lisp community has moved beyond fighting with their compilers and settling for a couple of useless micro-optimizations. At a modern Lisp conference like this one, the same terms used at any other performance computing conference rain down upon the audience. There isn't a generic "probably didn't fit the cache" -- now we talk about specific caches and cache-line counts. We don't say "swapping" -- we give specific counts of major and minor page faults and recognize the relative cost of each. We have a deep awareness of what will end up being a system call, and of which ones are cheap in which OS. I drew a lot of interest at the 2006 European Common Lisp Meeting by describing how ITA uses Lisp only at compile time and gets full performance at runtime. In 2015, that’s just normal.
There’s still work to do, however. I think Lisp should become the ideal language both for SIMD computing (via new primitives that let the programmer tell the compiler what is true, instead of relying on arbitrarily smart compilers) and for speculative execution (letting the programmer make promises, and crashing if they turn out untrue). It isn't there yet, and I'm always hoping somebody (else) will kick off that effort.
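To make "promises to the compiler" concrete, here is a minimal sketch using nothing but standard Common Lisp: type declarations are the closest thing we have today, and the primitives I'm wishing for would go well beyond them.

```lisp
;; Standard Common Lisp already has a limited form of "programmer
;; promises": type declarations that the compiler may trust.  This is
;; a sketch of the existing mechanism, not of new SIMD or speculation
;; primitives.
(defun sum-floats (v)
  (declare (type (simple-array single-float (*)) v) ; promise: element type
           (optimize (speed 3) (safety 0)))         ; trust the promise, don't check it
  (let ((acc 0.0f0))
    (declare (type single-float acc))
    (dotimes (i (length v) acc)
      (incf acc (aref v i)))))
```

At high safety settings an implementation will typically check such declarations; at (safety 0) they are simply trusted, which is the same trade-off an explicit speculation primitive would make visible to the programmer.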
The second thing that struck me was how much people at this conference leverage two of Lisp’s major advantages:
- compile time computing (having the full runtime language at compile time to expand your compiled code from whatever abstraction is most suitable)
- interactive, incremental development (building and reshaping a live program as you go)
Several presenters combined those features to construct 3D objects, and even built a bridge between computed 3D objects and objects built interactively in a graphical editor. One of those sessions was Dave Cooper’s tutorial. Both sessions could create sets of 3D objects that mixed computed objects with interactive building at an astonishing rate.
Breanndán Ó Nualláin’s talk, "Executable Pseudocode for Graph Algorithms", was useful to me because it gave a digestible example of more complex compile time computing. The concept is difficult to illustrate, but Breanndán used Lisp’s power as a "programmable programming language" to build a frontend that expresses pseudocode for algorithms in an s-expression syntax tailored to the task. The result is readable, executable, and fast. In addition, you can easily create a backend that targets LaTeX, so you can put your running algorithm straight into a textbook: the LaTeX for your algorithms paper is derived from proven, working code. This is a great illustration of what the power of a "programmable programming language" really means.
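To give a flavor of the idea -- this is my own toy sketch, not Breanndán's actual system -- a single macro is enough to give graph pseudocode an s-expression surface syntax that runs as ordinary Lisp:

```lisp
;; Toy sketch only.  A "pseudocode" macro expands into plain Lisp, so
;; the same notation is both the paper's algorithm and the running program.
(defmacro for-each ((var in set) &body body)
  "Pseudocode-style iteration: (for-each (v in vertices) ...)."
  (declare (ignore in))
  `(dolist (,var ,set) ,@body))

(defun reachable (graph start)
  "Return all vertices reachable from START.
   GRAPH is a hash table mapping a vertex to a list of its neighbors."
  (let ((seen (list start))
        (frontier (list start)))
    (loop while frontier do
      (let ((v (pop frontier)))
        (for-each (w in (gethash v graph))
          (unless (member w seen)
            (push w seen)
            (push w frontier)))))
    seen))
```

A second backend walking the same for-each forms could emit LaTeX markup instead of code, which is what keeps the paper and the program in sync.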
To me, the most jaw-dropping talk of the conference was Christian E. Schafmeister’s "Clasp - A Common Lisp that Interoperates with C++ and Uses the LLVM Backend". The title is the understatement of the year. What is actually going on here is the construction of tiny, 3-nanometer protein-like structures to do useful things like cure cancer and destroy sarin. Although many C++ libraries exist for building such structures, it would be too painful to glue them all together in C++. Instead of feeding C++ through a layer of C and back into some object representation (like the rest of us, cough), Christian presented a Common Lisp implementation running in LLVM, using the LLVM runtime libraries that provide introspection to interface directly with C++. He was kind enough to give the talk again at our Google Cambridge office, where it was recorded.
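For contrast, the "layer of C" route the rest of us take tends to look like the sketch below, using CFFI; the library name libchem and the function chem_energy are made up purely for illustration.

```lisp
;; The usual "through a layer of C" glue, for contrast with Clasp's
;; direct C++ interop.  Requires the CFFI library, e.g. (ql:quickload "cffi").
;; The library and function names below are hypothetical.
(cffi:load-foreign-library "libchem.so")   ; a C wrapper around the C++ code

(cffi:defcfun ("chem_energy" chem-energy) :double
  "Call a hypothetical extern \"C\" wrapper that hides the C++ objects."
  (molecule :pointer))
```

Every C++ class ends up flattened into opaque pointers and extern "C" entry points like this; Clasp's point is that you can skip that layer and talk to the C++ objects directly.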
At large scale, Lisp exposes some rough spots. A lot of Googlers like really clean modularization, but Common Lisp packages don't quite provide it; this used to be a big problem for CMUCL, shrinking the pool of people who could build it. Robert Strandh talked about “First-class Global Environments in Common Lisp”, and I am sure people would love to see that in SBCL. I also liked Paul van der Walt's talk, which put forward ideas for better supporting restricted runtime environments (such as mobile devices) while keeping their dependencies easy to describe.
In my own talk on “Unwanted Memory Retention”, I didn’t just limit the discussion to Lisp, SBCL, and its garbage collector. I addressed group culture and perception bias: how imbalanced performance tradeoffs come about in long-running software projects, how they mix with rapid changes in the computing environment around your Lisp, and how Lisp is just a bit more flexible dealing with them.
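As a minimal illustration of the kind of retention the talk is about (my own example here, not a case study from it): a global that quietly keeps every result reachable means the garbage collector can never hand the memory back.

```lisp
;; Minimal illustration of unwanted retention: results accumulate in a
;; global, so they -- and everything they reference -- stay reachable
;; and can never be collected.
(defvar *results* '())

(defun expensive-computation (item)
  ;; Stand-in for real work: allocate something sizable.
  (make-array 1000000 :initial-element item))

(defun process (item)
  (let ((result (expensive-computation item)))
    (push result *results*)   ; the quiet leak: retained "just in case"
    result))

;; The fix is usually mundane: drop (or bound) the reference when done.
(defun reset-results ()
  (setf *results* '()))
```

Spotting such retention in a long-running project is the hard part, which is where the discussion of group culture and perception bias comes in.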
This conference was an enlightening experience, and I hope slides and videos will become available. For now, many of the topics covered are also discussed in the Symposium’s peer-reviewed papers (16MB PDF). But honestly, just reading doesn't do the conference justice. The people there were great presenters, too, and attending in person was inspirational.
by Martin Cracauer, Travel team
2015-06-24 Edited to clarify the subject of Martin's 2006 talk.