Layered rendering, part 2: it helps solve many problems… :-)

As part of Qt 4.5, we added QGraphicsItem::opacity. Which is great! But it doesn't work as well as it could, and we've received a few comments about how the implementation could be improved. The trouble is that in Qt 4.5 we only have two ways of rendering: direct, and indirect ("cached", e.g., ItemCoordinateCache). And to apply opacity properly, you really need full composition support... :-/

Here's a rundown of the trouble with today's opacity support:

  • The current behavior modifies the input opacity of the painter
    • ...so you can override this opacity on a per-item basis (intentionally or not!)
    • ...and each primitive (line, rect, pixmap) you draw with QPainter will be composed one at a time inside the item (which is bad!)
  • The behavior is slightly different depending on whether you cache an item or not.
    • ...if you use caching, the offscreen pixmap is rendered with an opacity, whereas the item itself doesn't have any opacity set on its painter.
  • Each item applies opacity locally; even though opacity propagates to children, each item is still rendered by itself, causing each individual item to be transparent (as opposed to rendering the subtree as a whole with one opacity)

All these problems could be solved if we treated opacity as an effect applied to one layer as a whole, instead of to each item. I think this is how opacity should have worked in the first place... but that's how it goes sometimes. Fear not, though; we can fix this in a later release! :-)

To illustrate the current behavior, here's what you get if you construct a scene with four items, each a child of the previous one, colored red, green, blue, and yellow. The logical structure, in hypothetical markup:


Rect {
    color: red;
    rect: QRectF(0, 0, 100, 100);

    child: Rect {
        color: green;
        pos: QPointF(25, 25);
        rect: QRectF(0, 0, 100, 100);
        opacity: 0.5;

        child: Rect {
            color: blue;
            pos: QPointF(25, 25);
            rect: QRectF(0, 0, 100, 100);
            rotation: 45;

            child: Rect {
                color: yellow;
                pos: QPointF(25, 25);
                rect: QRectF(0, 0, 100, 100);
                scale: 2x2;
            };
        };
    };
};

In Qt 4.5, this renders the following output:

Opacity, in Qt 4.5

What's important to notice here is how all elements are transparent; so you can see the blue through the yellow item, the green through the blue item, and the red item through the green item. But the yellow item doesn't actually have any opacity assigned. It inherits opacity from the green item (opacity: 0.5).

By rendering the "green subtree" into a separate layer, we can combine all the items and apply one uniform opacity as part of composing them together. In my last blog I wrote about off-screen rendering. This work has progressed and is in quite a usable state (although the code is really ugly). It works! The rendering output for the same scene as above looks like this:

Opacity, in Qt 4.6

The essential difference is how the "green subtree" is treated as if it had one _combined_ opacity. The yellow item, for example, isn't transparent by itself at all; you can't see the blue through the yellow. By "collapsing" the subtree into a single layer we also avoid unnecessary rerendering of items (i.e., if you move the green item around, the children are not repainted, not even from cache). Which is pretty cool!

(We get this at the cost of allocating and spending an extra pixmap, and the first time we render there's an extra level of indirection.)

But there are even more applications of this technique... :-) With code for handling explicit composition of layers, we can now use pixmap filters and shaders to compose one layer onto another. This is very useful for any future effects API we might add to Qt in general. Imagine applying a Gaussian blur effect to the layer represented by the green item. The result is below:

Opacity w/blur, in Qt 4.6

The result is very neat; the whole layer is blurred before it's rendered, so you don't get the funny artifacts that might have occurred in overlapping regions if each of the items were blurred individually.

None of this gives any mind-blowing screenshots (at least not yet), but it's an important step: it provides faster rendering of subtrees (sliding and transforming a complex graph of items ends up being just matrix operations on a single pixmap/texture, and in software this prevents overdrawing), more accurate processing of QGraphicsItem's opacity property, and it makes it very easy to apply fancy composition effects to groups of items.

Questions: should layers be explicitly defined, or implicitly handled by logic inside Graphics View? In the example above, I've used the opacity property to detect whether an item's subtree should be rendered into a separate layer. This is easy to do and very clear/unsurprising. But what if you want to collapse a subtree for performance reasons? An alternative, or possibly additional, approach would be an explicit setting: QGraphicsItem::setLayer(true), or setCacheMode(QGraphicsItem::DeepItemCoordinateCache). I'm torn. Experimentation and feedback will tell (yes, I know I need to push this code out; once our SCM goes public this will be much easier!).

There's also the problem of those darn flags, QGraphicsItem::ItemIgnoresParentOpacity and QGraphicsItem::ItemDoesntPropagateOpacityToChildren: there's no way to make them work with a layered rendering approach. But are these flags really important, or could we just (*cough* *cough*) disable/deprecate them? ;-)

Finally, I still haven't figured out how to handle ItemIgnoresTransformations. The most probable solution is that such items are automatically rendered into a separate layer.

Happy hacking! :-)

