Research Projects of Bruno Dumas

ArtVis - Exploring Information Through Advanced Visualisation Techniques  (Finished)

The ArtVis project investigates advanced visualisation techniques combined with a tangible user interface to explore a large source of information (Web Gallery of Art) about European painters and sculptors from the 11th to the mid-19th century. Specific graphical and tangible controls allow the user to explore the vast collection of artworks along different dimensions (faceted browsing), such as the name of the painter, the museum where an artwork is located, the type of art or a specific period of time. The name ArtVis reflects the fact that we bring together artworks and the field of Information Visualisation (InfoVis) to achieve a playful and highly explorative user experience that leads to a broader understanding of the collection of artworks.
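
As a rough illustration of the faceted browsing idea, the Python sketch below filters a small, hypothetical artwork collection by painter, museum, type of art and period. The field names and sample data are illustrative assumptions and do not reflect the actual Web Gallery of Art schema or the ArtVis implementation.

```python
# Hypothetical sketch of faceted browsing over an artwork collection.
# Field names and data are illustrative, not the actual Web Gallery of Art schema.
from dataclasses import dataclass

@dataclass
class Artwork:
    title: str
    painter: str
    museum: str
    art_type: str   # e.g. "painting", "sculpture"
    year: int

def facet_filter(artworks, painter=None, museum=None, art_type=None, period=None):
    """Keep only artworks matching every facet that is set (None = facet ignored)."""
    result = []
    for a in artworks:
        if painter is not None and a.painter != painter:
            continue
        if museum is not None and a.museum != museum:
            continue
        if art_type is not None and a.art_type != art_type:
            continue
        if period is not None and not (period[0] <= a.year <= period[1]):
            continue
        result.append(a)
    return result

# Example: all paintings created between 1600 and 1650.
collection = [
    Artwork("Girl with a Pearl Earring", "Vermeer", "Mauritshuis", "painting", 1665),
    Artwork("The Night Watch", "Rembrandt", "Rijksmuseum", "painting", 1642),
]
print(facet_filter(collection, art_type="painting", period=(1600, 1650)))
```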

Mudra - A Unified Multimodal Interaction Framework  (Finished)

Mudra is a unified multimodal interaction framework supporting the integrated processing of low-level data streams as well as high-level semantic inferences. Our solution is based on a central fact base in combination with a declarative rule-based language to derive new facts at different abstraction levels. The architecture of the Mudra framework consists of three layers. At the infrastructure level, we support the incorporation of arbitrary input modalities, including skeleton tracking via Microsoft's Xbox Kinect, multi-touch via TUIO and Midas, voice recognition via CMU Sphinx and accelerometer data via SunSPOTs. In the core layer, a highly efficient inference engine (CLIPS) was substantially extended for the continuous processing of events. The application layer provides flexible handlers for end-user applications or fission frameworks, with the possibility to feed application-level entities back to the core layer.
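
The following minimal sketch illustrates the general idea of deriving a higher-level fact from low-level events stored in a central fact base. It is written in Python purely for illustration; Mudra itself relies on CLIPS rules, and the fact structure, rule and time window used here are hypothetical.

```python
# Hypothetical sketch of rule-based multimodal fusion over a central fact base.
# Mudra itself uses CLIPS rules; this only illustrates deriving a higher-level
# "select-target" fact from a pointing gesture and a spoken command.
import time

fact_base = []   # each fact is a dict with a modality, a value and a timestamp

def assert_fact(fact):
    fact_base.append(fact)

def rule_point_and_speak(window=1.0):
    """If a pointing gesture and the spoken word 'select' occur within `window`
    seconds of each other, derive a higher-level 'select-target' fact."""
    gestures = [f for f in fact_base if f["modality"] == "gesture" and f["value"] == "point"]
    speech   = [f for f in fact_base if f["modality"] == "speech" and f["value"] == "select"]
    for g in gestures:
        for s in speech:
            if abs(g["time"] - s["time"]) <= window:
                assert_fact({"modality": "derived", "value": "select-target",
                             "target": g.get("target"), "time": max(g["time"], s["time"])})

now = time.time()
assert_fact({"modality": "gesture", "value": "point", "target": "painting-42", "time": now})
assert_fact({"modality": "speech",  "value": "select", "time": now + 0.3})
rule_point_and_speak()
print([f for f in fact_base if f["modality"] == "derived"])
```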

Midas - A Declarative Multi-Touch Interaction Framework  (Finished)

The main goal of the Midas framework is to provide developers with adequate software engineering abstractions to close the gap between the evolution of multi-touch technology and the software mechanisms used to detect gestures. We advocate the use of a rule language that allows programmers to express gestures in a declarative way. Complex gestures that are extremely hard to implement with traditional approaches can be expressed in one or multiple easy-to-understand rules.
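
To give a flavour of the declarative style, the sketch below expresses a simple two-finger tap as a condition over touch events rather than as imperative event-handling code. It is a Python illustration under assumed event fields and thresholds; Midas itself provides its own rule language, not this code.

```python
# Hypothetical sketch of a declaratively expressed multi-touch gesture.
# Midas uses its own rule language; this only illustrates stating a gesture as
# a condition over touch events instead of hand-written event-handling logic.
from dataclasses import dataclass

@dataclass
class Touch:
    finger_id: int
    x: float
    y: float
    t: float        # timestamp in seconds

def two_finger_tap(touches, max_dt=0.2, max_dist=0.3):
    """Two different fingers touch down almost simultaneously, close together."""
    return any(
        a.finger_id != b.finger_id
        and abs(a.t - b.t) <= max_dt
        and ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5 <= max_dist
        for a in touches for b in touches
    )

events = [Touch(1, 0.40, 0.50, 1.00), Touch(2, 0.45, 0.52, 1.05)]
print(two_finger_tap(events))   # True: recognised as a two-finger tap
```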

MobiCraNT - Second Generation Mobile Cross-media Applications - Scalability, Heterogeneity and Legacy  (Finished)

The MobiCraNT project aims to establish software engineering principles and patterns for the development of mobile cross-media applications that operate in a heterogeneous distributed setting and interact using cross-media technology.