
QML Scene Graph in Master

Earlier this week, we integrated the QML Scene Graph into qtdeclarative-staging.git#master. Grab the modularized Qt 5 repositories and start hacking!

The primary way to make use of it is by running qmlscene or by instantiating a QSGView and feeding it your own .qml file. The import name has been upgraded to QtQuick 2.0, so update your .qml files accordingly. The qmlscene binary will try to load QtQuick 1.0 files as 2.0, but QDeclarativeItem-based plugins will not be loaded.
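For illustration, a minimal C++ host for a QtQuick 2.0 file might look like the sketch below. The file name is a placeholder, and the include names follow the current state of master, so treat this as illustrative rather than final documentation:

```cpp
// Sketch: show a QtQuick 2.0 file in a QSGView (class names as in current master).
#include <QApplication>
#include <QUrl>
#include <QSGView>

int main(int argc, char **argv)
{
    QApplication app(argc, argv);

    QSGView view;
    // "main.qml" is a placeholder; the file must start with "import QtQuick 2.0".
    view.setSource(QUrl::fromLocalFile("main.qml"));
    view.show();

    return app.exec();
}
```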

For a quick tour of what the QML Scene Graph is all about, we've compiled this video:

The source code for the video is available here. It uses the QML presentation system.

I'll answer the question of why we are not using an existing system, to preempt the comment: we wanted something small and lightweight to solve the use cases for QML. The scene graph core is less than 10K lines of code (including class documentation) and tailored to our use case. Everything else is integration with QML and Qt, code we would have had to write for any system. We could not have achieved something so lean for QML based on existing technologies.

Disclaimer: Do not take this blog post as the final documentation. I'm explaining the current state of things. They may change in the time to come.

Primary Features

The scene graph is not so much about offering new features as it is about changing the core infrastructure of our graphics stack to ensure that Qt and QML work their best. Some of my colleagues and I did a series of posts outlining our existing graphics stack and some of the issues with it. In the initial scene graph blog I explained how we intend to address these issues.

The QML team is working on additions to QML, like the new particle system, but I'll let them comment on that when they feel they are ready.

  • Kim already talked about the ShaderEffectItem in his post The Convenient Power of QML Scene Graph. The main idea of the shader effect item is to open up the floodgates and let creativity run loose.
  • We're using a new method of text drawing. The default is now based on distance fields, which gives us scalable glyphs from just a single texture. This technique supports floating-point positioning, and when GPU processing power allows, we can also use it to do sub-pixel anti-aliasing. This effectively makes Text elements in QML faster, nicer, and more flexible than before. We also have a different font rendering mechanism in place that is similar to native font rendering (how we draw glyphs with QPainter today), but it is not enabled at the moment.
  • We have changed some of the internals of Qt's animation framework to be able to drive animations based on the vertical blank signal. I talked about the concept in my post Velvet and the QML Scene Graph.

Public API and Back-end API

We have split the API into two parts: the public API, which is what we expect application developers to use, and the back-end API, which we expect system integrators to use. The public API contains all the classes needed to render graphics in QML, to introduce new primitives and custom shading, plus some convenience on top of the low-level API. All files that are visible in the generated documentation are to be considered public API. I wish I could point to the public docs, but we don't have automatic documentation generation for the modularized repositories yet, so that will have to come later.

The back-end API includes things like the renderer and texture implementations. The idea is that we can optimize certain parts of the system on a per-hardware basis where needed. The back-end API is in private headers. Some of it might move into the public API over time, but we are not comfortable with locking down this API just yet.

Rendering Model

The scene graph is fundamentally a tree of predefined nodes. Rendering is done by populating the tree with geometry nodes. A geometry node consists of a geometry, which defines the vertices or mesh to be rendered, and a material, which defines what to do with that geometry. The material is essentially a QGLShaderProgram with some added logic.
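As a sketch of how that looks in code (class names as they currently stand in master; treat this as illustrative, not final documentation), a geometry node pairing a triangle with a flat color material could be built like this:

```cpp
// Sketch: build a geometry node that renders a solid red triangle.
#include <QSGGeometryNode>
#include <QSGGeometry>
#include <QSGFlatColorMaterial>

QSGNode *createTriangleNode()
{
    QSGGeometryNode *node = new QSGGeometryNode;

    // Three vertices; the default attribute set is just a 2D position.
    QSGGeometry *geometry =
        new QSGGeometry(QSGGeometry::defaultAttributes_Point2D(), 3);
    geometry->setDrawingMode(GL_TRIANGLES);
    QSGGeometry::Point2D *v = geometry->vertexDataAsPoint2D();
    v[0].set(0, 0);
    v[1].set(100, 0);
    v[2].set(50, 100);
    node->setGeometry(geometry);
    node->setFlag(QSGNode::OwnsGeometry);

    // The material decides how the geometry is shaded.
    QSGFlatColorMaterial *material = new QSGFlatColorMaterial;
    material->setColor(Qt::red);
    node->setMaterial(material);
    node->setFlag(QSGNode::OwnsMaterial);

    return node;
}
```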

When it's time to draw the nodes, a renderer takes the tree and renders it. The renderer is free to reorganize the tree as it sees fit to improve performance, as long as the visual result looks the same. The default renderer separates opaque geometry from translucent geometry. The opaque geometry is rendered first, sorted by material to minimize state changes, and is written to both the color buffer and the depth buffer. The depth of items is decided by their original ordering in the graph. When drawing the translucent geometry, depth testing is enabled, so if opaque geometry is covering transparent geometry, the GPU will not be doing any work for the transparent pixels. The default renderer also has a switch for rendering the opaque geometry strictly front-to-back (right now enabled by passing --opaque-front-to-back to qmlscene). Custom renderers that use different algorithms for rendering the tree can be implemented through the back-end API.

The scene graph operates on a single OpenGL context, but does not create or own it. The context can come from anywhere, and in the case of QSGView, the context comes from the QGLWidget base-class. With the graphics stack inversion which is expected to land in master later this summer, the OpenGL context will come from the window itself.

Threading Model

The QML Scene Graph is thread agnostic: it can run on any thread that has an OpenGL context bound. However, once the scene graph is set up in that context/thread, it cannot be moved. Initially we wanted to run QML animations and all the OpenGL calls in a dedicated rendering thread, but because of how QML works, this turned out not to be possible. QML animations are bound to QML properties, which live on QObjects and can trigger C++ code. If we processed animations in the rendering thread and they called into C++, we would have a synchronization mess, so instead we came up with a different approach.

The OpenGL context and all scene graph related code runs on the rendering thread, unless explicitly stated otherwise in the documentation. Animations are run in the GUI thread, but driven by an event sent from the rendering thread. Before a frame rendering starts, there is a short period where we block the GUI thread to copy the QML tree and changes in it into the scene graph. The scene graph thus represents a snapshot of the QML tree at that point in time. In terms of user API, this happens during QSGItem::updatePaintNode(). I tried to visualize this in a diagram.
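To make that concrete, here is a sketch of a custom item (the class is hypothetical; node and flag names follow the current master API) that copies its state into the scene graph during updatePaintNode():

```cpp
// Sketch: updatePaintNode() is called on the render thread while the GUI
// thread is blocked, so it is the safe place to copy item state into nodes.
#include <QSGItem>
#include <QSGSimpleRectNode>

class ColorItem : public QSGItem
{
    Q_OBJECT
public:
    ColorItem(QSGItem *parent = 0) : QSGItem(parent)
    {
        // Tells the scene graph that this item has visual content
        // and that updatePaintNode() should be called.
        setFlag(ItemHasContents);
    }

protected:
    QSGNode *updatePaintNode(QSGNode *oldNode, UpdatePaintNodeData *)
    {
        QSGSimpleRectNode *node = static_cast<QSGSimpleRectNode *>(oldNode);
        if (!node)
            node = new QSGSimpleRectNode;

        // Snapshot the item's current geometry into the node.
        node->setRect(boundingRect());
        node->setColor(Qt::green);
        return node;
    }
};
```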

The benefit of this model is that, during animations, we can calculate the animations and evaluate the JavaScript related to bindings for the upcoming frame on the GUI thread while we are rendering the current frame on the render thread. Advancing the animations and evaluating the JavaScript typically takes quite a bit longer than doing the rendering, so that continues to happen while the render thread blocks for the next vsync signal. So even on a single-core CPU, there is a benefit: animations are advanced while the render thread is idly waiting for the vsync signal, typically inside swapBuffers().

Integration with QPainter

The QML Scene Graph does not use QPainter itself, but there are different ways to integrate it. The most obvious way is to make use of the QSGPaintedItem class. This class will open a painter on a QImage or an FBO depending on what the user requests and has a virtual paint() function which will be called when the user has requested an update() on the item. This is the primary porting class when changing a QDeclarativeItem to work with the scene graph. The two classes are API equivalent, but their internal workings are somewhat different. The paint() function is by default called during the "sync" phase when the GUI thread is blocked to avoid threading issues, but it can be toggled to also run on the rendering thread, decoupled from the GUI thread.
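A minimal port might look like the following sketch (the item itself is hypothetical; the paint() virtual follows the current master API):

```cpp
// Sketch: a QPainter-based item ported to the scene graph via QSGPaintedItem.
#include <QSGPaintedItem>
#include <QPainter>

class CirclePainter : public QSGPaintedItem
{
    Q_OBJECT
public:
    CirclePainter(QSGItem *parent = 0) : QSGPaintedItem(parent) { }

    void paint(QPainter *painter)
    {
        // By default this runs during the "sync" phase, while the GUI
        // thread is blocked, so reading item state here is safe.
        painter->setRenderHint(QPainter::Antialiasing);
        painter->setBrush(Qt::blue);
        painter->drawEllipse(boundingRect());
    }
};
```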

Another option is to manually render to an FBO or a QImage and add the result to a QSGSimpleTextureNode or similar.

Integration with OpenGL

We have three primary ways of integrating with OpenGL.

  • The QSGEngine::beforeRendering() signal is emitted on the rendering thread after the rendering context is bound. This signal can be used to execute GL code in the background of the scene graph, with the QML UI rendered on top. Naturally, the scene graph must not clear the background when used in this mode, and there are properties in QSGEngine to help with that. A typical use case here would be to have a 3D game engine render in the background and have a QML UI on top.
  • The QSGEngine::afterRendering() signal is emitted on the rendering thread after the scene graph has completed its rendering, but before swapping happens. This can be used to render for instance 3D content on top of a QML UI.
  • Render to an FBO and compose the texture inside the scene graph. This is the preferred way to embed content inside QML that should conform to QML states such as opacity, clipping, and transformation. Below is an example of Ogre3D embedded into the QML Scene Graph using an offscreen texture. An easy way to do this is to subclass QSGPaintedItem, use FBO-based rendering, and call QPainter::beginNativePainting() in the paint() function.
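The first of these could be wired up roughly as follows. This is a sketch only: the engine accessor and the exact mechanism for suppressing the background clear are assumptions, so check the headers in master for the real names.

```cpp
// Sketch (assumed names): draw raw GL under the QML scene by hooking
// QSGEngine::beforeRendering(). The connection must be direct so the
// slot runs on the rendering thread, where the GL context is bound.
#include <QObject>
#include <QtOpenGL>

class Underlay : public QObject
{
    Q_OBJECT
public slots:
    void renderBackground()
    {
        // Raw GL calls; these execute before the scene graph draws on top.
        glClearColor(0.1f, 0.1f, 0.2f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
    }
};

// Setup on the GUI thread (the engine accessor name is an assumption):
//   QSGView view;
//   view.setSource(QUrl::fromLocalFile("ui.qml"));
//   Underlay *underlay = new Underlay;
//   QObject::connect(view.sceneGraphEngine(), SIGNAL(beforeRendering()),
//                    underlay, SLOT(renderBackground()), Qt::DirectConnection);
//   // Also configure the QSGEngine so the scene graph does not clear
//   // the background before rendering (see its properties).
```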

 

Debug Modes

Right now we offer a few environment variables to help track down performance problems, such as runaway animations.

  • QML_TRANSLUCENT_MODE=1 Setting this environment variable makes all geometry nodes render with an opacity of 0.5. Some materials may choose to completely ignore opacity, in which case the variable has no effect on them, but these should be few. This is helpful for finding expensive QML elements that are completely obscured by something else.
  • QML_FLASH_MODE=1 Setting this environment variable is similar to the QT_FLUSH_PAINT mechanism we have in Qt. Any QML element that has any kind of graphical update happening to it will get a flash rectangle drawn on top of it for one frame. This is helpful in tracking down runaway animations.

Together, these two can be used to track down, for instance, runaway animations running behind the current view.
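For example, from a shell (the application file name is hypothetical):

```shell
# Enable both debug modes for a single run of qmlscene.
QML_TRANSLUCENT_MODE=1 QML_FLASH_MODE=1 qmlscene myapp.qml
```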

Where to find us?

When you start using the new stuff, you might find bugs to report, missing features, or suggestions for improvement. The relevant places to contact us are:

  • Bugs and features: http://bugreports.qt.nokia.com. For scene graph related topics, meaning the rendering APIs, there is a "SceneGraph" component.
  • IRC: #qt-graphics on freenode.net is where most of the graphics people are.
  • Mail: There is the qt5-feedback mailing list, which was set up a few weeks back.
  • We will also have a discussion on QML Scene Graph during the Qt Contributor Summit.

Some Numbers

We thought it would be nice to share a few numbers on where we are right now. Below are the numbers from running the photoviewer demo under demos/declarative, with QML 1 using the raster engine, OpenGL, and Mesa software rendering via LLVMpipe, and with QML 2 using OpenGL and Mesa/LLVMpipe. It was run on an Intel Sandy Bridge i7-2600K with the on-die Intel HD Graphics 3000 GPU, on Linux, with Qt 5 HEAD using the XCB back-end (equivalent to X11 in 4.8 for the raster and OpenGL paint engines).

As you can see, the QML Scene Graph gives an overall 2.5x speed-up on an arbitrary QML example compared to the graphics stack we have in QML 1. The other interesting part is that LLVMpipe is in the same range as our software raster engine; in fact, it is a little bit faster. This is not too surprising, given that they are essentially doing the exact same thing. With QML 2, the multi-threaded LLVMpipe version is in fact faster than OpenGL-based QML 1. I hope this helps reduce some of the concerns that have been raised about Qt 5's dependency on OpenGL.

