Mesa and LLVM

I've been toying for a while with the idea of rewriting the programmable pipeline in Mesa. The most obvious reason is that fragment shaders are absolutely crucial to modern graphics. We use them extensively in Qt, and I use them all over the place in my special effects library. As those small (mostly GLSL-based) programs become more and more complicated, the need for an extensive compiler framework with especially good optimization support becomes apparent. We already have such a framework in LLVM.

I managed to convince the person I consider an absolute genius when it comes to compiler technology, Roberto Raggi, to donate some of his time and rewrite the programmable pipeline in Mesa together with me. The day after I told Roberto that we needed to lay Mesa on top of LLVM, I got an email from him with a GLSL parser he wrote (and holy crap, it's so good...). After picking up what was left of my chin from the floor, I removed the current GLSL implementation from Mesa, integrated the code Roberto sent me, did some ninja magic (which is part of my job description) and pushed the newly created git repository to

So between layers of pure and utter black magic (of course not "voodoo"; voodoo and graphics just don't mix), what does this mean, you ask (at least for the purpose of this post)? As I pointed out in the email to the Mesa list last week:

  • it means that Mesa gets an insanely comprehensive shading framework,
  • it means that we get insane optimization passes for free, and largely in a fairly controllable fashion (strong emphasis on "insane"; they're so cool I drool about being able to execute shading languages with this framework, and I drool very rarely nowadays...),
  • it means we get a well documented and well understood IR,
  • it means we get maintenance of parts of the code for free (the parts especially difficult for graphics people),
  • it means that there's less code in Mesa,
  • it means that we can, basically for free, add execution of C/C++ code (and soon Python, Java and likely other languages) on GPUs, because frontends for those are already available or in the works for LLVM (and even though I'm not a big fan of Python, the idea of executing it on a GPU gives me goose-bumps the way only some Japanese horror movies can).
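To make the "optimization passes for free" point a bit more concrete, here's a minimal, purely illustrative sketch (not Mesa or LLVM code; the toy expression-tree shape and the `fold` function are my own invention) of the kind of pass LLVM already ships with: constant folding, which collapses constant subexpressions of the sort shader compilers emit constantly.

```python
# Toy illustration of a constant-folding pass: replace any subexpression
# whose operands are all constants with its computed value. LLVM does
# this (and far more) out of the box on its real IR.

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def fold(expr):
    """Recursively fold constant subexpressions in a ('op', lhs, rhs) tree."""
    if not isinstance(expr, tuple):          # leaf: a constant or a variable name
        return expr
    op, lhs, rhs = expr
    lhs, rhs = fold(lhs), fold(rhs)
    if isinstance(lhs, (int, float)) and isinstance(rhs, (int, float)):
        return OPS[op](lhs, rhs)             # both operands constant: fold
    return (op, lhs, rhs)

# gl_Color * (0.5 + 0.25) folds down to gl_Color * 0.75
print(fold(("*", "gl_Color", ("+", 0.5, 0.25))))  # ('*', 'gl_Color', 0.75)
```

The point is that a shading framework built on LLVM never has to write this pass, or the dozens of harder ones, itself.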

I think it has all the potential to be by far the best shading framework in any of the OpenGL implementations out there. Now, having said that, there are a lot of tricky parts that we haven't even begun solving. Most of them are due to the fact that a lot of modern graphics hardware is, well, to put it literally, "freaking wacky" (it's a technical term). We'll need to add a pretty extensive lowering pass, and most likely some kind of transformation pass that does something sensible with branch instructions for hardware that doesn't support them. We'll cross that bridge once we get to it. Plus we'll need to port drivers... But for now we'll bask in the sheer greatness of this project.
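The branch-transformation idea can be sketched in a few lines. This is a hypothetical illustration (the function names are mine, and real lowering works on IR, not Python): on hardware with no flow control, an if/else over per-fragment values can be flattened into evaluating both arms and selecting between them arithmetically, a technique usually called predication or if-conversion.

```python
# What the shader source says: a per-fragment branch.
def shade_with_branch(x, a, b):
    return a if x > 0.0 else b

# What a lowering pass might emit for branch-free hardware: evaluate
# both arms, then blend with a 0.0/1.0 mask derived from the comparison,
# so no jump instruction is ever needed.
def shade_flattened(x, a, b):
    mask = float(x > 0.0)                 # comparison result as 0.0 or 1.0
    return mask * a + (1.0 - mask) * b    # select via pure arithmetic

for x in (-1.0, 0.0, 2.5):
    assert shade_with_branch(x, 2.0, 5.0) == shade_flattened(x, 2.0, 5.0)
```

The cost, of course, is that both arms always execute, which is exactly why such a pass has to be smarter than this sketch about when flattening is actually a win.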

Ah, the git repository is at ;a=shortlog;h=llvm (Roberto and I have tons of unpushed changes, though). Of course this is an ongoing research project that both Roberto and I work on in our very limited spare time (in fact Roberto seems to now have almost what you'd call a "life". Apparently those take time. Personally I still enjoy sleepless nights and a diet of starvation patched by highly suspicious activities in between, which, by the way, does wonders for my figure; if this is not going to work out, I'll try my luck as a male super-model), so we can only hope that it will all end up as smoothly as we think it should. And in KDE 4, most graphics code will be able to utilize eye-popping effects with virtually no CPU cost.
