When I started working on our multimedia framework (aka Phonon), our goal was pretty simple and clear: we needed sound and video support in Qt. We had three platforms to support and thus three completely different underlying APIs to implement our backends on. It was tough and we worked hard on it. But then we got some cool demos working and saw that basic multimedia integration in a cross-platform way is actually possible. We struggled a lot, but we got there. The mediaplayer demo is a nice little app that's fully cross-platform and easy to understand.
Having a look at the bigger picture, we realized that another great feature of Qt 4.4 is the ability to put widgets on the canvas (aka QGraphicsView). All our features need to work correctly together; that's the basic principle of a framework. Displaying video is usually hardware-accelerated, and every multimedia API/system uses its own path to the graphics card. That means no composition with other widgets, no ability to paint over the video and, of course, no perspective transformations. My first thought at the time was that we would never achieve the goal of fully integrating our own features (i.e. having videos on the canvas), and that felt bad.
We still tried, and our target was Qt 4.5. The biggest challenge was to write our own video renderer. Until Qt 4.4 we had always used the native video renderers (you know, the ones with no composition at all and no transforms). The first thing you have to do is take over the rendering of video frames and route it through the Qt paint engine; this way you get all the power of Qt. So after a few weeks of development, tweaking and bug fixing, I had something working.
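The core idea of such a renderer can be sketched like this. This is a simplified, hypothetical widget (not the actual backend code, which converts and schedules frames asynchronously): it receives decoded frames as QImages and paints them in paintEvent(), so the frames go through the Qt paint engine and compose with everything else.

```cpp
#include <QtGui/QWidget>
#include <QtGui/QPainter>
#include <QtGui/QImage>

// Hypothetical sketch of a software video renderer: instead of handing frames
// to a native video overlay, the widget paints them itself with QPainter.
class SoftwareVideoWidget : public QWidget
{
public:
    // Called by the decoding backend whenever a new frame is ready.
    void setCurrentFrame(const QImage &frame)
    {
        m_frame = frame;
        update(); // schedule a repaint through Qt's paint engine
    }

protected:
    void paintEvent(QPaintEvent *)
    {
        QPainter p(this);
        if (!m_frame.isNull())
            p.drawImage(rect(), m_frame); // scaling done by the paint engine
        else
            p.fillRect(rect(), Qt::black);
    }

private:
    QImage m_frame;
};
```

Because the pixels now flow through QPainter, the widget behaves like any other Qt widget: it can be composed, painted over and transformed.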
So TADA! (If you were at dev days you've already seen that)
Now you can simply do something like this with your VideoWidget:

    QGraphicsScene scene;
    QGraphicsView view(&scene);
    scene.addWidget(videoWidget);
I'm a bit lazy, so I didn't put in the code that creates the Phonon objects; you can find out how to do that in the documentation and examples. And now you can apply transformations to your video, add things on top with alpha-blending... The possibilities are unlimited. You're probably thinking that there's a catch, and you're right: if you look at your CPU usage, it will be high, because now everything is done in software, including resizing/transforming/compositing/drawing. So we optimized and rewrote some components provided by the system ("OMG, how can the colorspace conversion in MS DirectShow be so slow?!").
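For readers who want a complete starting point, the Phonon setup alluded to above looks roughly like this. This is a sketch, not the canonical version from the docs; "video.avi" is a placeholder path, and the surrounding main() is mine:

```cpp
#include <QtGui/QApplication>
#include <QtGui/QGraphicsScene>
#include <QtGui/QGraphicsView>
#include <phonon/mediaobject.h>
#include <phonon/mediasource.h>
#include <phonon/videowidget.h>
#include <phonon/path.h>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    // Create the Phonon pipeline: a media object producing the stream,
    // connected to a video widget that renders it.
    Phonon::MediaObject *media = new Phonon::MediaObject;
    Phonon::VideoWidget *videoWidget = new Phonon::VideoWidget;
    Phonon::createPath(media, videoWidget);

    // Embed the video widget in the canvas like any other widget.
    QGraphicsScene scene;
    QGraphicsView view(&scene);
    scene.addWidget(videoWidget);

    media->setCurrentSource(Phonon::MediaSource("video.avi")); // placeholder file
    media->play();

    view.show();
    return app.exec();
}
```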
Still, there is a way to greatly improve performance. Simply add this line to your code:
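The snippet itself seems to have gone missing here; in Qt 4 the standard way to render a QGraphicsView through OpenGL is to give it a QGLWidget viewport, so presumably the line was along these lines (assuming the view from the earlier snippet; QGLWidget lives in the QtOpenGL module, i.e. "QT += opengl" in the .pro file):

```cpp
#include <QtGui/QApplication>
#include <QtGui/QGraphicsScene>
#include <QtGui/QGraphicsView>
#include <QtOpenGL/QGLWidget>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QGraphicsScene scene;
    QGraphicsView view(&scene);
    view.setViewport(new QGLWidget); // render the whole canvas through OpenGL
    view.show();
    return app.exec();
}
```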
This magical line enables OpenGL for the whole canvas, including your video, so scaling/transformation/composition is done by your graphics card. Plus, on Windows I also use a fragment shader to do the conversion from YUV to RGB (colorspace conversion) on the GPU.
To make it even clearer, I put together an additional demo. I simply took the "embedded dialogs" demo from the graphicsview guys and changed it slightly. You can run this videowall demo. You have to pass at least one parameter, which is the path of either a video file or a directory that contains video files. So for example: videowall c:\videos. An additional parameter is the dimension of the wall; by default it is 1, meaning one video. If you pass 3, for example, it will build a video wall of 3x3 videos. I ran it successfully with nine videos (a 3x3 wall) using WMV HD content that I got from here. If you want to watch those videos, make sure you have the necessary codecs (Windows has them by default).