Touch support in Qt?

A few weeks ago, Denis blogged about some thoughts he had on how gesture recognition could work in Qt. One of the things he mentioned, and that was also brought up in the comments, was support for multi-point touch gestures. As a MacBook Pro owner, I already use these gestures every single day. Needless to say, he got my attention. I would certainly like to see multi-point touch events and gestures in Qt so that I can start using them myself. The only question is... how do we do it?

Before answering that question, I think we need to look at two other important questions. The answers to these will affect how we design the API for adding touch support to Qt.

What is multi-point touch?

We're already seeing consumer multi-point touch devices that you can buy today. Apple has the iPhone and iPod touch, Dell and HP have multi-point touch Tablet PCs. There are things like the DiamondTouch table top as well. Have you seen the TED Talk video demoing multi-point touch using only the IR sensor on a Wii remote?

There are many differences, form factor being the biggest, but there are some commonalities between the above examples.

  1. A single input device reports multiple input/contact points.
  2. Positions are reported in absolute coordinates.
  3. User input is superimposed over the display (i.e. the user interacts directly with the screen).

What is multi-point touch not?

An equally important question that needs to be answered. I feel I can safely say that multi-point touch is not multiple devices reporting single points (i.e. two mice != multi-point touch). Peter Hutterer wrote a good summary of the difference between multi-point touch, multi-pointer, and multi-device.

One gray area is devices that report relative cursor position changes (like the trackpad on my MacBook Pro). Such devices tend not to be attached to the display (so the input is not superimposed). But it should still be possible to get multi-point touch events from this kind of device and feed them into a gesture recognizer. This will require more thought and testing.

Ideas for a Touch API in Qt

After thinking about the above, and also having a look at the Windows 7 and iPhone SDKs, we've come up with the idea of having one event (call it QTouchEvent for now) that contains a list of all the touch points reported by the device. Each touch point would have a state (pressed, moved, stationary, released), a unique identifier, and a position (in both widget-local and global coordinates). I like that the QGraphicsSceneMouseEvent API includes the previously reported and starting positions, so I would say those are a must in this API. Devices may also report other information, such as pressure.

So, we end up with something like this:

namespace Qt {
    enum TouchPointState {
        TouchPointPressed,
        TouchPointMoved,
        TouchPointStationary,
        TouchPointReleased
    };
}

class QTouchEventTouchPointPrivate;
class Q_GUI_EXPORT QTouchEvent : public QInputEvent
{
public:
    class Q_GUI_EXPORT TouchPoint
    {
    public:
        TouchPoint(int id = -1);
        ~TouchPoint();

        int id() const;

        Qt::TouchPointState state() const;

        const QPointF &pos() const;
        const QPointF &startPos() const;
        const QPointF &lastPos() const;

        const QPointF &globalPos() const;
        const QPointF &startGlobalPos() const;
        const QPointF &lastGlobalPos() const;

        qreal pressure() const;

    private:
        QTouchEventTouchPointPrivate *d;

        friend class QApplicationPrivate;
    };

    QTouchEvent(QEvent::Type type, Qt::KeyboardModifiers modifiers, const QList<TouchPoint *> &touchPoints);

    inline const QList<TouchPoint *> &touchPoints() const { return _touchPoints; }

protected:
    QList<TouchPoint *> _touchPoints;

    friend class QApplicationPrivate;
};

The QGraphicsView framework would also have a QGraphicsSceneTouchEvent that looks almost identical, the only difference being that the global coordinate functions would be replaced by scene and screen coordinate functions.
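To make that concrete, the scene variant's touch point might look something like this. This is purely a hypothetical sketch; every name below is my guess at what such an API could look like, not settled API:

```cpp
// Hypothetical sketch only: a QGraphicsSceneTouchEvent::TouchPoint mirroring
// QTouchEvent::TouchPoint, with scene/screen coordinates in place of the
// global coordinate functions.
class Q_GUI_EXPORT QGraphicsSceneTouchEvent : public QGraphicsSceneEvent
{
public:
    class TouchPoint
    {
    public:
        int id() const;
        Qt::TouchPointState state() const;

        const QPointF &pos() const;        // item-local coordinates
        const QPointF &scenePos() const;   // replaces globalPos()
        const QPointF &screenPos() const;
        // ...plus the corresponding start/last variants of each

        qreal pressure() const;
    };

    const QList<TouchPoint *> &touchPoints() const;
};
```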

That means that, with a simple addition, the scribble example could be modified to support multi-point touch painting by doing something like this:

bool ScribbleArea::event(QEvent *event)
{
    switch (event->type()) {
    case QEvent::TouchBegin:
    case QEvent::TouchUpdate:
    case QEvent::TouchEnd:
    {
        QList<QTouchEvent::TouchPoint *> touchPoints = static_cast<QTouchEvent *>(event)->touchPoints();
        foreach (QTouchEvent::TouchPoint *touchPoint, touchPoints) {
            switch (touchPoint->state()) {
            case Qt::TouchPointStationary:
                // don't do anything if this touch point hasn't moved
                continue;
            default:
            {
                int diameter = int(50 * touchPoint->pressure());
                QRectF rectF(0, 0, diameter, diameter);
                rectF.moveCenter(touchPoint->pos());
                QRect rect = rectF.toRect();

                QPainter painter(&image);
                painter.setPen(Qt::NoPen);
                painter.setBrush(myPenColors.at(touchPoint->id()));
                painter.drawEllipse(rectF);
                painter.end();

                modified = true;
                int rad = 2;
                update(rect.adjusted(-rad, -rad, +rad, +rad));
            }
                break;
            }
        }

        event->accept();
        return true;
    }
    default:
        break;
    }
    return QWidget::event(event);
}

Other things to consider

First off, what about existing code? Adding a new event type for a new kind of input device means that none of the existing widgets out there will support touch events at the time of release. The solution here, as I see it, is simple. We do the same thing we did with tablet support and send mouse events to widgets that don't handle the touch events. We could even go a step further and say that we only send touch events to widgets and QGraphicsItems that explicitly enable touch event support (we could add a function or widget attribute for this). I kind of like this idea, since then we don't spend time sending events to widgets that usually won't accept them.
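The delivery policy above can be modelled in a few lines of plain C++. This is a sketch of the idea only, using stand-in structs rather than real Qt classes; the `acceptsTouchEvents` flag stands in for the hypothetical opt-in widget attribute mentioned above:

```cpp
#include <vector>

// Minimal stand-ins for a touch point and a widget; not Qt classes.
struct TouchPoint { int id; double x, y; };

struct Widget {
    bool acceptsTouchEvents = false;   // stand-in for an opt-in widget attribute
    int touchEventsReceived = 0;
    int mouseEventsReceived = 0;

    void touchEvent(const std::vector<TouchPoint> &) { ++touchEventsReceived; }
    void mousePressEvent(double, double) { ++mouseEventsReceived; }
};

// Deliver a set of touch points: touch-aware widgets get the full list;
// legacy widgets get a synthesized mouse press at the first touch point,
// analogous to the mouse-event fallback used for tablet support.
void deliverTouch(Widget &w, const std::vector<TouchPoint> &points)
{
    if (points.empty())
        return;
    if (w.acceptsTouchEvents)
        w.touchEvent(points);
    else
        w.mousePressEvent(points[0].x, points[0].y);
}
```

A legacy widget never sees a touch event; it only ever receives the synthesized mouse events, so existing code keeps working unchanged.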

Second, what about interleaving mouse events and touch events? Should the API allow a widget to receive both touch events and mouse events at the same time? I don't think so. I'm inclined to say that a widget that accepts the first touch event should get only touch events after that; otherwise, it gets only mouse events.
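That "touch or mouse, never both" rule amounts to a tiny per-widget state machine. Again, a hypothetical sketch with illustrative names, not real Qt API:

```cpp
// Which event stream a widget has been committed to.
enum class Stream { Undecided, Touch, Mouse };

struct DeliveryState {
    Stream stream = Stream::Undecided;

    // Called when a touch sequence begins. The first decision is sticky:
    // a widget that accepts its first touch event stays on the touch
    // stream; one that ignores it stays on the mouse stream.
    // Returns true if the widget should receive this as a touch event.
    bool beginSequence(bool widgetAcceptedTouchBegin)
    {
        if (stream == Stream::Undecided)
            stream = widgetAcceptedTouchBegin ? Stream::Touch : Stream::Mouse;
        return stream == Stream::Touch;
    }

    bool wantsMouseEvents() const { return stream != Stream::Touch; }
};
```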

Lastly, what about touch events that affect multiple widgets or QGraphicsItems? For example, what if an application had a big, round volume knob that you rotate with two fingers, but your second finger happens to press the screen just outside the knob's shape? And what if there happens to be another item or widget under that second touch point? I can certainly see this happening on a small screen, or on Tablet PCs with lots of elements on the screen. I'm unsure what to do here, so for now I'll say that perhaps the best approach is for the item or widget under the first touch point to get ALL touch points, regardless of where they land. This means that you couldn't, for example, use two hands to turn two separate knobs at the same time (in some sort of DJ mixing application, say). Some prototyping and testing will be necessary before we decide what to do.
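The first-point-wins routing can be sketched as a simple hit test. Widgets are reduced to axis-aligned rectangles here purely for illustration; in Qt proper, hit testing would of course go through the widget and item hierarchy:

```cpp
#include <vector>
#include <cstddef>

struct Point { double x, y; };

struct Rect {
    double x, y, w, h;
    bool contains(const Point &p) const
    { return p.x >= x && p.x < x + w && p.y >= y && p.y < y + h; }
};

// Returns the index of the widget that should receive the whole touch event,
// or -1 if the first touch point hits no widget. Later points are ignored
// for routing, even if they fall on other widgets.
int targetWidget(const std::vector<Rect> &widgets, const std::vector<Point> &touchPoints)
{
    if (touchPoints.empty())
        return -1;
    for (std::size_t i = 0; i < widgets.size(); ++i)
        if (widgets[i].contains(touchPoints[0]))
            return static_cast<int>(i);
    return -1;
}
```

With two knobs side by side, whichever one is under the first finger receives every touch point, including a second finger resting on the other knob.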

There are probably other things to think about that I've not covered here. Please let me know if you can think of any, or if you have any good ideas and comments on the above thoughts, API, assumptions, etc.

Hopefully we can get something working in Qt at some point so that I can finally stop getting frustrated with my Linux laptop when my two- (scroll) and three-finger (back, forward) gestures don't work... :P

