
Evidential reasoning for multimodal fusion in human computer interaction

Posted on: 2008-03-24
Degree: M.A.Sc
Type: Thesis
University: University of Waterloo (Canada)
Candidate: Reddy, Bakkama Srinath
Full Text: PDF
GTID: 2448390005971854
Subject: Engineering
Abstract/Summary:
Fusion of information from multiple modalities in Human-Computer Interfaces (HCI) has attracted considerable attention in recent years and has far-reaching implications for many areas of human-machine interaction. However, a major limitation of current HCI fusion systems is that the fusion process tends to ignore the semantic nature of the modalities, which may reinforce, complement, or contradict each other over time. Moreover, most systems do not robustly represent the ambiguity inherent in human gestures. In this work, we investigate an evidential-reasoning approach to intelligent multimodal fusion and apply it to a proposed multimodal system consisting of a hand-gesture sensor and a Brain-Computer Interface (BCI). This work makes three major contributions to the area of human-computer interaction. First, we propose an algorithm for reconstructing the 3D hand pose from 2D input video. Second, we develop a BCI based on Steady-State Visually Evoked Potentials and show how a multimodal system combining the two sensors improves efficiency and reduces complexity while retaining the same level of accuracy. Finally, we propose a semantic fusion algorithm based on Transferable Belief Models, which fuses information from the two sensors to form meaningful concepts and resolve ambiguity. We also analyze the robustness of this system under various operating scenarios.
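
To make the fusion step concrete, below is a minimal Python sketch of the Transferable Belief Model's unnormalized conjunctive combination rule, the core operation behind the semantic fusion approach named in the abstract. The frame of discernment ("select" vs. "move"), the sensor labels, and all mass values are hypothetical illustrations, not figures from the thesis; in the TBM, mass remaining on the empty set after combination quantifies conflict between the sources instead of being normalized away.

    from itertools import product

    def tbm_conjunctive(m1, m2):
        """Unnormalized (TBM) conjunctive combination of two mass functions.
        Focal sets are frozensets of hypothesis labels; mass assigned to the
        empty set is retained as a measure of conflict between the sources."""
        combined = {}
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b  # intersection of the two focal sets
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        return combined

    # Hypothetical two-command frame of discernment.
    SELECT, MOVE = frozenset({"select"}), frozenset({"move"})
    EITHER = SELECT | MOVE  # ignorance: mass the sensor cannot commit

    # Illustrative mass functions for a hand-gesture sensor and an SSVEP BCI
    # (values are invented for this example, not taken from the thesis).
    m_gesture = {SELECT: 0.6, EITHER: 0.4}
    m_bci = {SELECT: 0.5, MOVE: 0.3, EITHER: 0.2}

    fused = tbm_conjunctive(m_gesture, m_bci)
    for focal, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
        label = ",".join(sorted(focal)) or "conflict (empty set)"
        print(f"{label}: {mass:.2f}")

Running the sketch shows the two sources reinforcing "select" (combined mass 0.62) while 0.18 of the mass lands on the empty set, flagging their partial contradiction; this is the sense in which the fused beliefs both form concepts and expose ambiguity.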
Keywords/Search Tags: Human computer, Fusion, Multimodal, System