Multimodal Interaction for a Semantically Enhanced Presentation Tool

Type of Thesis: 
Master Thesis

With more than 30 million PowerPoint presentations created every single day, we are all familiar with modern presentation tools such as Microsoft's PowerPoint or Apple's Keynote. However, it is not difficult to see that most of these tools are based on the same essential ideas and thus share similar flaws and limitations. Our new semantically enhanced presentation tool takes a radically new approach to creating, sharing and delivering presentations. By stepping away from current presentation standards and by designing our tool to address the unfulfilled needs of the people involved in presentations, we plan to make every single step in the process more enjoyable for both the presenter and the audience.

Our tool separates content and visualisation by providing a LaTeX-like language for the easy creation of content. This content is then automatically visualised by an HTML5 visualisation layer, which makes use of user-definable templates to provide a competitive aesthetic quality. The tool differentiates itself from current tools through a number of innovative features, such as a plug-in architecture, extreme portability, support for multimodal input, semantic linking and navigation of information, non-linear traversal, a zoomable user interface, transclusion, interactivity, innovative ways of visualising specific types of information and much more.
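
To make the separation of content and visualisation more concrete, the following minimal Java sketch shows one possible way such an architecture could be modelled. All names in it (SlideContent, HtmlVisualiser, the {{title}} and {{bullets}} placeholders) are hypothetical illustrations and not part of the actual tool:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical content model: captures what is said, not how it looks.
    class SlideContent {
        final String title;
        final List<String> bullets = new ArrayList<>();

        SlideContent(String title) {
            this.title = title;
        }

        void addBullet(String text) {
            bullets.add(text);
        }
    }

    // Hypothetical visualisation layer: turns content into HTML5
    // by filling in a user-definable template.
    class HtmlVisualiser {
        private final String template; // e.g. loaded from a template file

        HtmlVisualiser(String template) {
            this.template = template;
        }

        String render(SlideContent slide) {
            StringBuilder items = new StringBuilder();
            for (String bullet : slide.bullets) {
                items.append("<li>").append(bullet).append("</li>");
            }
            return template
                    .replace("{{title}}", slide.title)
                    .replace("{{bullets}}", items.toString());
        }
    }

    class Demo {
        public static void main(String[] args) {
            SlideContent slide = new SlideContent("Multimodal Input");
            slide.addBullet("Physical gestures");
            slide.addBullet("Vocal cues");

            HtmlVisualiser visualiser = new HtmlVisualiser(
                    "<section><h1>{{title}}</h1><ul>{{bullets}}</ul></section>");
            System.out.println(visualiser.render(slide));
        }
    }

Because the content model knows nothing about its rendering, the same content could be visualised with different templates without being rewritten.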

The goal of this thesis is to investigate different types of input and identify those that are useful for giving presentations. The student will propose ways of using different input modalities for giving presentations and implement one of them (or multiple if time permits) for use in our semantically enhanced presentation tool. As multimodal input is a prominent research topic in the WISE lab, you could potentially use some of our other projects (e.g. Midas for multi-touch gestures) to facilitate the implementation process. Some examples of multimodal input include, but are not limited to, the following (a sketch of how such input could drive the presentation tool follows the list):

  • Physical gestures for navigation
  • Using PaperPoint for annotations
  • Using a mobile device for multi-touch gestures and annotation
  • Vocal cues 
  • ...
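
As a rough illustration of the kind of implementation work involved, the following minimal Java sketch maps recognised gestures to navigation commands of the presentation tool. The Gesture, PresentationController and GestureNavigator types are hypothetical placeholders; in a real implementation such events could, for example, be delivered by a framework like Midas:

    // Hypothetical gesture types a recogniser might report.
    enum Gesture { SWIPE_LEFT, SWIPE_RIGHT, CIRCLE }

    // Hypothetical facade over the presentation tool's navigation.
    interface PresentationController {
        void nextSlide();
        void previousSlide();
        void showOverview(); // e.g. zoom out in a zoomable user interface
    }

    // Maps recognised gestures to navigation commands.
    class GestureNavigator {
        private final PresentationController controller;

        GestureNavigator(PresentationController controller) {
            this.controller = controller;
        }

        // Called by the input layer (e.g. a multi-touch or camera-based
        // recogniser) whenever a gesture has been detected.
        void onGesture(Gesture gesture) {
            switch (gesture) {
                case SWIPE_LEFT:  controller.nextSlide();     break;
                case SWIPE_RIGHT: controller.previousSlide(); break;
                case CIRCLE:      controller.showOverview();  break;
            }
        }
    }

    class GestureDemo {
        public static void main(String[] args) {
            PresentationController controller = new PresentationController() {
                public void nextSlide()     { System.out.println("next slide"); }
                public void previousSlide() { System.out.println("previous slide"); }
                public void showOverview()  { System.out.println("slide overview"); }
            };
            new GestureNavigator(controller).onGesture(Gesture.SWIPE_LEFT);
        }
    }

Decoupling the recogniser from the navigation logic in this way would make it straightforward to plug in additional modalities (e.g. vocal cues) without touching the presentation tool itself.
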
Background Knowledge: 
  • Java
  • HTML5
Technical challenges: 
  • You will review the current state of the art in multimodal input (literature study), identify input modalities that could be used for presentations and investigate how they could be applied
  • You will implement one or more of your ideas for use in the presentation tool
  • Optionally, you could conduct a small user study to compare your implementation(s) against "regular" input methods
Contact: 
Beat Signer
Academic Year: 
2012-2013