MultiTouch


This document presents an overview of the Multi Touch input handling and gesture recognition inside Ventuz. It describes the technical aspects of what happens internally from a touch being registered to an interaction node reacting to it. Although much of the complexity of implementing Multi Touch has successfully been hidden from the user, a deeper understanding of what really goes on can significantly improve the way a user designs a Multi Touch scene.

General Concept

Due to the complexity of implementing proper Multi Touch handling, the Multi Touch implementation does not follow the usual Ventuz philosophy of providing the user with raw input information (as, for example, in the form of the Mouse node) and the means to process it. Instead, ease of use was the primary concern.

Ventuz supports three forms of touch input: the Windows mouse, TUIO devices and Windows Touch displays (each described under Hardware Setup below).

All these sources of input are internally combined into one virtual touch device and are therefore treated exactly the same. As a consequence, it is not possible to restrict the input of a particular hardware device to specific nodes.


When a new touch is registered, Ventuz figures out which geometry has been touched and relates that information to all interaction nodes associated with it, i.e. all interaction nodes that lie on the path from the geometry to the scene root. The interaction nodes perform some calculations to translate the 2D touch position into meaningful values, such as the angle of rotation implied by the user's movement in the case of a Touch Rotation node. Those values are provided as output properties and are also used by the nodes themselves. For example, a Touch Translation node contains an internal axis and automatically performs the appropriate motion for the touch movement. A Touch Paint node works like an Overlay Rectangle and automatically displays the texture painted by the touch movement.


In the example shown here, touching any of the three rectangles and moving the touch will cause the Touch Translation to update its internal axis and thus move all nodes in its subtree. The sphere, however, is neither affected by this nor does it react to touch input itself.

No bindings are required. Just inserting the interaction node automatically associates it with all geometries underneath it.
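
As a rough sketch of this association (invented class and method names, not the Ventuz API), the dispatch can be thought of as walking from the touched geometry up to the scene root and handing the touch to every interaction node encountered on the way:

    # Illustrative sketch only; class and method names are invented, not the Ventuz API.
    class Node:
        def __init__(self, parent=None):
            self.parent = parent

    class InteractionNode(Node):
        def on_touch_begin(self, touch):
            # e.g. a Touch Translation would store the initial intersection point here
            pass

    def notify_interaction_nodes(hit_geometry, touch):
        # Walk from the touched geometry towards the scene root and notify
        # every interaction node that lies on that path.
        node = hit_geometry
        while node is not None:
            if isinstance(node, InteractionNode):
                node.on_touch_begin(touch)
            node = node.parent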


Intersection

When a new touch is detected, Ventuz casts a ray to determine which geometry (or Volume) the user has hit when the touch began (the same is done for hovering touches). Ray intersection is a performance-intensive operation due to the scene flexibility of Ventuz: there is no bounding volume hierarchy that could be used for acceleration, and a ray's origin and direction through the scene can only be calculated once the mesh is reached, since camera/view nodes can be on any level of the hierarchy. Therefore intersection is only calculated for meshes beneath interaction nodes and by default only against an object's bounding volume (this can be changed to mesh intersection for each interaction node). The result of this intersection is the mesh that has the smallest z-value with respect to the viewer, regardless of the z-function render options or alpha values specified in the scene. Starting at the mesh hit, Ventuz traverses the scene hierarchy towards the scene root and notifies interaction nodes on its way.
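
The hit selection itself can be pictured as in the following sketch (a simplified illustration, not the Ventuz implementation): every candidate mesh beneath an interaction node contributes an intersection distance, and the hit closest to the viewer wins, irrespective of render state:

    # Illustrative sketch only, not the Ventuz implementation.
    def pick(hits):
        # hits: list of (mesh, z) pairs, one per candidate mesh beneath an
        # interaction node whose bounding volume (or mesh, if configured so)
        # was intersected by the viewing ray. Returns the mesh closest to the
        # viewer; alpha and z-function render options play no role.
        return min(hits, key=lambda hit: hit[1])[0] if hits else None

    # Usage: pick([("cube", 4.2), ("sphere", 1.7)]) returns "sphere".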

It is often advisable to use simplified intersection proxies under a gesture and bind its output values to a separate axis that performs the motion. The simplest way to achieve this is to use an Alpha node with an alpha of zero and its auto-block property deactivated, then add the appropriate primitive(s) underneath.


There are three different types of interaction nodes: those based on an object in the scene, those based on screen space, and those based on a specific marker object. The type of node determines both how the touch coordinates are mapped and how interaction nodes are activated. In general, screen-space nodes (like Touch Paint) are only activated if the touch does not activate any object-based interaction node (e.g. a translation).

At the time of writing, intersection testing ignores Render Targets and Overlay Rectangles.


Once a touch is accepted and bound to an interaction node (which happens when the touch hits the control surface and is no longer hovering), it will trigger no more intersection tests. Instead, changes in position are directly transmitted to that node. As a result, an object can be moved underneath another interaction area and the touch will not be re-assigned to the interaction node in front of it.
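
A minimal sketch of this capture behavior might look as follows (all names are invented placeholders; the helper stands in for the intersection test and hierarchy walk described above):

    # Illustrative sketch of the capture behavior; all names are invented.
    def pick_and_collect_interaction_nodes(position):
        # Placeholder for the intersection test and hierarchy walk described above.
        return []

    captured = {}   # touch id -> interaction nodes bound to that touch

    def on_touch_down(touch_id, position):
        # Intersection is performed exactly once, when the touch is accepted.
        captured[touch_id] = pick_and_collect_interaction_nodes(position)

    def on_touch_move(touch_id, position):
        # No re-intersection: updates go straight to the bound nodes, even if
        # another interaction area has moved in front of the touch since then.
        for node in captured.get(touch_id, ()):
            node.on_touch_move(position)

    def on_touch_up(touch_id):
        captured.pop(touch_id, None)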

Mapping

When an interaction node receives touch information, it performs some calculations of its own to map the 2D touch position to meaningful values. For example, the Touch Translation node first computes the intersection point of the viewing ray with the X/Y-plane of the world coordinate system as it exists at the translation node's position in the hierarchy. Any Axis above the interaction node will therefore affect how the touch movement is mapped: if such an axis contains a rotation of 45 degrees around the Y-axis, the translation will map and move along the rotated X-axis.


The translation node will output the number of units that the internal axis would have to be shifted along the X- and Y-axes so that the object stays fixed with respect to the touch. Other nodes have similar properties that provide the interpreted values.
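
The underlying mapping step amounts to a standard ray/plane intersection expressed in the interaction node's coordinate system. The following sketch shows the math under that assumption; it is plain vector algebra, not the actual Ventuz source:

    # Ray/plane intersection sketch (assumed math, not the Ventuz source).
    import numpy as np

    def intersect_xy_plane(ray_origin, ray_dir, node_world_matrix):
        # Intersect a viewing ray with the X/Y-plane of the interaction node's
        # coordinate system and return the hit point in node-local X/Y units.
        inv = np.linalg.inv(node_world_matrix)
        o = (inv @ np.append(ray_origin, 1.0))[:3]   # ray origin in node space
        d = (inv @ np.append(ray_dir, 0.0))[:3]      # ray direction in node space
        if abs(d[2]) < 1e-9:
            return None                              # ray parallel to the plane
        t = -o[2] / d[2]                             # solve o.z + t * d.z == 0
        hit = o + t * d
        return hit[0], hit[1]

    # The translation output is then (current hit - initial hit) in these X/Y units.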

Except for the initial intersection point, no part of the mapping process uses the geometry that was hit. What matters is therefore the position/orientation of the interaction node rather than that of the geometry.


The Touch Transformation nodes have Limit properties that can artificially restrict the interval into which the touch information is mapped. For example, a Touch Translation can be restricted to always have a Y-value of zero, or a Touch Rotation can be restricted to only rotate between plus and minus 45 degrees. These values are also based on the world coordinate system at the level of the interaction node.
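
In effect, the Limit properties clamp the mapped value into the configured interval, as this small sketch with hypothetical numbers illustrates:

    def apply_limit(value, minimum, maximum):
        # Clamp a mapped touch value into the configured limit interval.
        return max(minimum, min(maximum, value))

    # A Touch Rotation limited to plus/minus 45 degrees:
    apply_limit(80.0, -45.0, 45.0)   # -> 45.0
    # A Touch Translation whose Y limits are both zero never moves in Y:
    apply_limit(12.3, 0.0, 0.0)      # -> 0.0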

To re-iterate: the way touch movement is mapped, and thus translated into numeric values, does not depend on the shape/size/position of the object but on the position/scaling of the interaction node itself. Inserting a translating/rotating/scaling axis as a parent of an interaction node therefore produces a different result than inserting it as a child!

Stacked Gestures

Things get more complicated when an interaction node is used as a child of another interaction node. Starting from the intersected mesh, all object-based interaction nodes will be bound to the touch but will not create any changes in output yet. Instead, each interaction node continuously maps the touch's movement into its own mapping system and estimates the amount of effective movement. For example, if an interaction node is limited to a minimum X translation value, any movement in that direction will at best go up to this limit. All interaction nodes bound to this touch perform this calculation until one node's estimated effective movement lies above the motion threshold specified in the Project Properties. Once the threshold is reached, that node claims exclusiveness for all its bound touches and starts processing, i.e. generating output values. The motion threshold should be as small as possible to minimize the movement it takes before a node reacts, but on the other hand it must be large enough that the motion can be positively associated with one of the nodes.

Example: Two translation nodes are stacked on top of each other in the scene hierarchy. Both have activated limits, but one has a min/max value of zero in the X direction and the other in the Y direction. Assume the motion threshold is 5 pixels. When the user touches a mesh beneath those interaction nodes, nothing happens until the touch moves at least 5 pixels along the free mapping axis of one of the nodes. Once this happens, that node will start updating its values and thus move the mesh.
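
A sketch of this claiming logic, using the two stacked translation nodes from the example and an assumed motion threshold of 5 pixels (the structure and names are invented for illustration):

    # Illustrative sketch of the claiming logic; names and structure are invented.
    MOTION_THRESHOLD_PX = 5.0    # assumed value of the Project Properties setting

    class StackedTranslation:
        def __init__(self, name, x_limits, y_limits):
            self.name, self.x_limits, self.y_limits = name, x_limits, y_limits
            self.claimed = False

        def effective_movement(self, dx, dy):
            # Clamp the movement into the node's limits and measure what survives.
            cx = max(self.x_limits[0], min(self.x_limits[1], dx))
            cy = max(self.y_limits[0], min(self.y_limits[1], dy))
            return (cx * cx + cy * cy) ** 0.5

    def update(nodes, dx, dy):
        # Give exclusiveness to the first node whose effective movement
        # exceeds the motion threshold; the others stay inactive.
        for node in nodes:
            if node.effective_movement(dx, dy) >= MOTION_THRESHOLD_PX:
                node.claimed = True
                return node
        return None

    # The example above: one node locked in Y, the other locked in X.
    x_only = StackedTranslation("X only", x_limits=(-1e9, 1e9), y_limits=(0.0, 0.0))
    y_only = StackedTranslation("Y only", x_limits=(0.0, 0.0), y_limits=(-1e9, 1e9))
    update([x_only, y_only], dx=7.0, dy=1.0)   # the "X only" node claims the touch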

Hardware Setup

There are two main scenarios supported by the Ventuz Multi Touch system:

Using multiple touch input devices at the same time (for example a 3x3 wall of displays, each delivering its own touch input) is not yet supported.


The Project Properties contain a number of settings that influence touch processing. Most notably, they contain the parameters that control smoothing of the raw input as well as duration settings for single-tap and tap-and-hold events. The Machine Configuration contains the hardware-dependent settings, such as which input device types are to be used for touch handling.

Windows Mouse

The mouse position and left mouse button state are used to create an artificial touch. Through this mechanism, all interaction nodes respond to mouse input in the same way as with a single touch coming from a Multi Touch device.
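
Conceptually this amounts to something like the following sketch, where touch_device and the handler names are placeholders rather than Ventuz internals:

    MOUSE_TOUCH_ID = 0           # the single artificial touch created from the mouse
    _button_was_down = False

    def on_mouse(x, y, left_button_down, touch_device):
        global _button_was_down
        if left_button_down and not _button_was_down:
            touch_device.touch_down(MOUSE_TOUCH_ID, x, y)   # press: the touch begins
        elif left_button_down:
            touch_device.touch_move(MOUSE_TOUCH_ID, x, y)   # drag: the touch moves
        elif _button_was_down:
            touch_device.touch_up(MOUSE_TOUCH_ID, x, y)     # release: the touch ends
        _button_was_down = left_button_down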

TUIO 1.1

The TUIO network protocol is an OSC-based protocol for transmitting multi touch information from hardware input devices to software applications. Except for simple touch LCD displays (which usually implement Windows Touch instead), pretty much any multi touch device worth its name supports this protocol. For more information, see TUIO.

TUIO 2.0 is currently not implemented and no tests with TUIO 1.0 have been performed.
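
For reference, a minimal TUIO 1.1 cursor listener can look roughly like this. It is not part of Ventuz (which consumes TUIO directly); the sketch uses the third-party python-osc package and the common TUIO default port 3333:

    # Minimal TUIO 1.1 /tuio/2Dcur listener sketch (requires the python-osc package).
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def on_2dcur(address, *args):
        # Cursor profile: "alive" lists session ids, "set" carries normalized
        # x/y plus velocity/acceleration, "fseq" closes a frame bundle.
        if args and args[0] == "set":
            session_id, x, y = args[1], args[2], args[3]
            print(f"touch {session_id}: x={x:.3f} y={y:.3f}")

    dispatcher = Dispatcher()
    dispatcher.map("/tuio/2Dcur", on_2dcur)
    BlockingOSCUDPServer(("0.0.0.0", 3333), dispatcher).serve_forever()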


Most TUIO devices do not seem to deliver more than 20-30 position updates per second. Since this is lower than the rendering update rate, moving objects may produce jerky motion. Use the input processing capabilities provided in the Project Properties and the input diagnostics visualization to improve results on such devices.

Windows Touch

The Windows Touch API was introduced by Microsoft to create a uniform way in which touch displays can be supported by the Windows Operating System. Since Windows is a mouse-based operating system, Windows Touch has certain limitations, for example when it comes to mouse focus.

Windows Touch is by design directly tied to the standard mouse. By default, a Windows Touch device will emit legacy Windows mouse messages in addition to touch messages. The Ventuz input handling automatically blocks those events so that a clear separation between the touch and mouse devices exists, but the Windows mouse cursor's position is still affected. In particular, moving the mouse will interfere with the touch and cause a "touch up" window message. It is therefore advisable to remove/disable the Windows mouse device when running presentations based on Windows Touch (or at least to make sure the mouse is not moved and does not produce micro-movements due to jitter even while sitting still).


If Windows Touch is activated in the Machine Configuration, Ventuz will ignore any artificial mouse information emitted by the touch display. To use a touch display with Mouse I/O nodes, deactivate Windows Touch in the Machine Configuration.


A number of settings in the Windows Control Panel are important for correct operation. Make sure Windows Touch is active and mapped to the correct display:

Input Diagnostics

A button has been added to the renderer window toolbar that presents an overlay with helpful statistics such as the average number of updates per frame, the input sub-system mode and so on. While at least one touch is inside the render window, markers are drawn to visualize the raw touch input:


Input Processing, Smoothing & Interpolation

To improve behavior for hardware devices that deliver fewer updates than the render loop, two capabilities have been added to smooth the motion of touched objects. Both can be set in the Project Properties. The Smoothing Weight specifies how strongly positions from previous frames are weighted into the final position. The Interpolation Delay specifies the number of frames between the render loop and the point in time at which input positions are evaluated; the larger this value, the higher the chance of being able to interpolate values when the hardware device was not able to deliver any updates. Both introduce some delay but result in much smoother gestures.
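
As an illustration of what such processing can look like, the sketch below combines an exponential moving average with a frame-based interpolation delay. The exact filter used by Ventuz is not specified here, so treat the formulas purely as an example:

    # Illustrative smoothing/interpolation sketch, not the exact Ventuz filter.
    def smooth(previous, raw, smoothing_weight):
        # Blend the previous output position into the new raw position;
        # a weight of 0 passes the raw input through unchanged.
        return smoothing_weight * previous + (1.0 - smoothing_weight) * raw

    def sample_input(samples, render_frame, interpolation_delay):
        # samples: dict of frame -> position as delivered by the device.
        # Evaluating a few frames in the past allows interpolation even when
        # the device skipped the current frame.
        target = render_frame - interpolation_delay
        earlier = max((f for f in samples if f <= target), default=None)
        later = min((f for f in samples if f >= target), default=None)
        if earlier is None or later is None:
            return samples.get(earlier if later is None else later)
        if earlier == later:
            return samples[earlier]
        t = (target - earlier) / (later - earlier)
        return samples[earlier] + t * (samples[later] - samples[earlier])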