
Towards a robust framework for visual human-robot interaction

Posted on: 2013-01-28
Degree: Ph.D
Type: Thesis
University: McGill University (Canada)
Candidate: Sattar, Junaed
GTID: 2458390008968094
Subject: Engineering
Abstract/Summary:
This thesis presents a vision-based interface for human-robot interaction and control of autonomous robots in arbitrary environments. Vision has the advantage of being a low-power, unobtrusive sensing modality, and the advent of robust algorithms together with a significant increase in available computational power has made it practical for widespread integration in robotic systems. The research presented in this dissertation treats visual sensing as an intuitive and uncomplicated means for a human operator to communicate with a mobile robot at close range. The communication paradigms we investigate include, but are not limited to, visual tracking and servoing, programming of robot behaviors with visual cues, visual feature recognition, mapping, identification of individuals through gait characteristics using spatio-temporal visual patterns, and quantitative evaluation of these human-robot interaction approaches. The proposed framework enables a human operator to control and program a robot without any complicated input interface, and also enables the robot to learn about its environment and the operator through the visual interface. We investigate the applicability of machine learning methods, supervised learning in particular, to train the vision system using stored training data. A key aspect of our work is a system for human-robot dialog that supports safe and efficient task execution under uncertainty. We present extensive validation through a set of human-interface trials, and also demonstrate the applicability of this research in the field on the Aqua amphibious robot platform in the underwater domain. While our framework is not specific to robots operating underwater, underwater vision is affected by a number of issues, such as lighting variations and color degradation. Evaluating the approach in such difficult operating conditions provides a rigorous validation of our framework.
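The abstract mentions training the vision system with supervised learning over stored training data. As an illustrative sketch only (not the thesis implementation; all data, labels, and function names below are hypothetical), a minimal supervised classifier could map stored visual feature vectors, such as descriptors of operator gestures, to command labels via nearest-centroid classification:

```python
# Hypothetical sketch: supervised training of a visual-cue classifier.
# Each command label is learned as the mean ("centroid") of its stored
# training feature vectors; new observations get the nearest label.

def train_centroids(samples):
    """samples: dict mapping command label -> list of feature vectors.
    Returns one mean feature vector (centroid) per label."""
    centroids = {}
    for label, vectors in samples.items():
        dim = len(vectors[0])
        centroids[label] = [
            sum(v[i] for v in vectors) / len(vectors) for i in range(dim)
        ]
    return centroids

def classify(centroids, feature):
    """Assign the label whose centroid is nearest in squared
    Euclidean distance to the observed feature vector."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], feature))

# Toy training set: 2-D features for two hypothetical gesture commands.
training = {
    "follow": [[0.9, 0.1], [0.8, 0.2]],
    "stop":   [[0.1, 0.9], [0.2, 0.8]],
}
model = train_centroids(training)
print(classify(model, [0.85, 0.15]))  # -> follow
print(classify(model, [0.15, 0.85]))  # -> stop
```

In practice the features would come from the robot's vision pipeline (e.g. color histograms or spatio-temporal gait descriptors), and a richer supervised model could replace the centroid rule without changing the train/classify structure.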
Keywords/Search Tags: Robot, Visual, Framework