Multimodal and Multi-Touch Interaction

Involved members: 

Over the past few years, multi-touch user interaction has evolved from research prototypes into mass-market products. This evolution has mainly been driven by devices such as Apple’s iPad or Microsoft’s Surface tabletop computer. However, the realisation of multi-touch applications is often time-consuming and expensive since existing multi-touch development frameworks lack adequate software engineering abstractions. In our research on multi-touch gesture recognition, we are developing a rule-based language to improve the extensibility and reusability of multi-touch gestures. Our Midas solution for declarative gesture definition and detection is based on logical rules that are defined over a set of input facts.
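To illustrate the idea of rules over input facts, the following sketch shows a toy rule that fires when a sequence of touch facts forms a rightward swipe. The fact representation, rule function and thresholds are hypothetical and only hint at the principle; Midas's actual declarative rule language works differently.

```python
from dataclasses import dataclass

# Hypothetical fact type; Midas's real fact model is richer.
@dataclass
class TouchFact:
    cursor_id: int
    x: float
    y: float
    t: float  # timestamp in seconds

def swipe_right_rule(facts):
    """Fires when a single cursor moves right by more than 100 px
    within 0.5 s while staying roughly horizontal (toy thresholds)."""
    by_cursor = {}
    for f in facts:
        by_cursor.setdefault(f.cursor_id, []).append(f)
    for trace in by_cursor.values():
        trace.sort(key=lambda f: f.t)
        first, last = trace[0], trace[-1]
        if (last.x - first.x > 100 and last.t - first.t < 0.5
                and abs(last.y - first.y) < 30):
            return True
    return False

facts = [TouchFact(1, 10, 50, 0.0), TouchFact(1, 80, 55, 0.2),
         TouchFact(1, 150, 52, 0.4)]
print(swipe_right_rule(facts))  # True: net x-shift of 140 px in 0.4 s
```

A declarative engine would let developers state such conditions as rules and compose or reuse them, instead of hard-coding the matching logic as above.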

A similar approach can be applied not only to multi-touch interaction but also to the development of multimodal interfaces. For this purpose, we are currently developing Mudra, an innovative multimodal gesture interaction framework that enables the declarative description of gestures for different input modalities and devices, including, for example, the Microsoft Kinect, digital pen and paper solutions or the Emotiv EPOC brain interface. We are confident that our research will support the implementation and investigation of novel multimodal and multi-touch interactions that go beyond the current state of the art.
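The multimodal case can be sketched as fusing facts from different input sources within a time window, in the spirit of a "point and speak" command. The event type, function name and one-second window below are illustrative assumptions, not Mudra's actual fusion model.

```python
from dataclasses import dataclass

# Hypothetical event type covering several modalities.
@dataclass
class Event:
    modality: str   # e.g. "touch", "speech"
    value: object
    t: float        # timestamp in seconds

def fuse_point_and_speak(events, window=1.0):
    """Pairs each spoken command with any touch location that
    occurred within `window` seconds of the utterance."""
    touches = [e for e in events if e.modality == "touch"]
    speech = [e for e in events if e.modality == "speech"]
    fused = []
    for s in speech:
        for tch in touches:
            if abs(s.t - tch.t) <= window:
                fused.append((s.value, tch.value))
    return fused

events = [Event("touch", (120, 80), 0.3), Event("speech", "delete", 0.9)]
print(fuse_point_and_speak(events))  # [('delete', (120, 80))]
```

Expressing such cross-modality combinations declaratively is what allows the same gesture descriptions to be reused across devices and modalities.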


Multimodal Interaction


Based on specific requirements of our pen and paper-based user interfaces, we have developed iGesture, a general gesture recognition framework. The Java-based iGesture solution targets application developers who want to add gesture recognition functionality to their applications as well as designers of new gesture recognition algorithms. Recently, iGesture has been extended to support new input devices: in addition to traditional screen- and mouse-based interaction and digital pen and paper input, iGesture now also provides support for the Wii Remote and for TUIO devices.
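The dual audience of application developers and algorithm designers suggests a pluggable recogniser abstraction: algorithms implement a common interface and are registered by name. The class names, registry and toy bounding-box heuristic below are purely illustrative; iGesture's actual Java API differs.

```python
class Recogniser:
    """Illustrative base class; iGesture's real Java interfaces differ."""
    def recognise(self, stroke):
        raise NotImplementedError

class AspectRatioRecogniser(Recogniser):
    """Toy algorithm: classifies a stroke (list of (x, y) points)
    by the aspect ratio of its bounding box."""
    def recognise(self, stroke):
        xs = [p[0] for p in stroke]
        ys = [p[1] for p in stroke]
        width, height = max(xs) - min(xs), max(ys) - min(ys)
        return "horizontal-line" if width > 2 * height else "other"

# Applications pick an algorithm by name; designers add new entries.
registry = {"aspect-ratio": AspectRatioRecogniser()}

stroke = [(0, 0), (50, 2), (100, 3)]
print(registry["aspect-ratio"].recognise(stroke))  # horizontal-line
```

Such a separation lets new algorithms or input devices be added without touching application code, which is the extensibility property the framework aims for.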