The Perfect Presentation: A Training Tool for Presentations
Existing presentation guidelines suggest that aspects such as body language, voice and gestures are almost as important as the actual content when conveying a message to an audience. Not everyone is born a great presenter, but everyone has the potential to become one. The goal of this thesis is to develop a training tool that automatically points out commonly made mistakes, so that presenters can improve their presentations and become more confident in their presentation skills.
For instance, your laptop's camera and microphone could be used to monitor your body language (e.g. body stance and movement of your legs and hands), the intonation and volume of your voice, or the amount of eye contact with the audience. Feedback could be given after the practice session, perhaps with a video clip of the moment when you did something wrong, but also during the presentation (e.g. "slow down" or "stand straight").
To implement such a training tool, a framework will need to be created that supports a variety of physical factors in an extensible manner. One plug-in might count the number of filler words ("uh") used while another plug-in measures voice volume. This allows us (and others) to extend the system with more sophisticated ideas at a later point in time, for instance using a heart rate monitor to measure stress levels. You will also need to integrate a rule engine into the system, which defines what is good and what is not. As an example, a rule could state that you should not use more than 5 filler words ("uh") per minute. An additional benefit is that different sets of rules can be used for different scenarios. For instance, frequent eye contact might be desirable in some cultures but considered disrespectful in others.
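As a concrete illustration, a minimal sketch of such a plug-in interface and a threshold rule might look like the following. All names and interfaces here are hypothetical; designing the actual framework is part of the thesis.

```python
from typing import Optional


class Plugin:
    """Base class for metric plug-ins (hypothetical interface)."""
    name: str = ""

    def measure(self, window: dict) -> float:
        raise NotImplementedError


class FillerWordCounter(Plugin):
    """Counts filler words per minute in a transcribed time window."""
    name = "filler_words_per_minute"
    FILLERS = {"uh", "um", "er"}

    def measure(self, window: dict) -> float:
        # 'window' is assumed to hold a speech transcript and its duration in seconds
        words = window["transcript"].lower().split()
        fillers = sum(1 for w in words if w in self.FILLERS)
        return fillers * 60.0 / window["duration_s"]


class Rule:
    """Threshold rule: produce a feedback message when a metric exceeds a limit."""

    def __init__(self, metric: str, limit: float, message: str):
        self.metric = metric
        self.limit = limit
        self.message = message

    def check(self, metrics: dict) -> Optional[str]:
        value = metrics.get(self.metric)
        if value is not None and value > self.limit:
            return self.message
        return None


# Example: the "at most 5 filler words per minute" rule from the proposal
plugins = [FillerWordCounter()]
rule = Rule("filler_words_per_minute", 5.0, "Try to use fewer filler words.")

window = {"transcript": "so uh today I will uh talk uh about uh my uh thesis uh",
          "duration_s": 30}
metrics = {p.name: p.measure(window) for p in plugins}
feedback = rule.check(metrics)  # 6 "uh"s in 30 s = 12/min, so the rule fires
```

Keeping plug-ins and rules separate in this way is what allows swapping in a different rule set per scenario (e.g. per culture) without touching the measurement code.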
- Good programming skills (the programming language is open for discussion)
- Experience with one or more of the following technologies is not mandatory but is a plus:
  - computer vision
  - speech recognition
  - rule engines
- You will create a framework for gathering physical information via various plug-ins (e.g. voice volume or use of filler words)
- You will integrate a rule engine to process the data from the various plug-ins in order to give feedback to the presenter
- You will need to visualise this feedback to the presenter after or during the presentation
- You will implement a basic set of plug-ins, based on the ideas presented in this proposal or on your own ideas