
Robust real-time vision modules for a personal service robot in a home visual sensor network

Posted on: 2008-07-20    Degree: Ph.D    Type: Dissertation
University: University of Southern California    Candidate: Kim, Kwangsu    Full Text: PDF
GTID: 1448390005472997    Subject: Artificial Intelligence
Abstract/Summary:
The Intelligent Home, which integrates information, communication, and sensing technologies into everyday objects, is emerging as a viable environment in industrialized countries. It promises to provide security for the population at large and possibly to assist members of an aging population. In the intelligent-home context, personal service robots are expected to play an important role as interactive assistants, since their mobility and capacity for action complement the other sensing nodes in the home network. As an interactive assistant, a personal service robot must be endowed with visual perception abilities, such as the detection and identification of people in its vicinity and the recognition of their actions or intentions.

We propose to frame these problems in terms of a distributed sensor network architecture with fixed visual sensing nodes (wall-mounted cameras) and mobile sensing/actuating nodes such as one or more personal service robots. Each fixed node processes its video input stream to detect and track people and to perform some level of behavior analysis, given its limited resolution. It may also communicate with the robot, directing it to move to a specific area. The robot, in addition to navigating, must process visual input from its on-board camera(s) to perform person detection and tracking as well, but at a finer level, since it is closer to the person. In particular, it should locate a person's face, and possibly identify the person, in order to interact with humans in a social setting. Each sensor node is connected to the intelligent home network and performs its task independently, according to its range of interaction and object of perception. Each fixed camera node on the wall detects and tracks people in its field of view and analyzes their behavior.
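The handoff from a fixed camera node to the robot might be sketched as follows. This is a minimal illustration, not the dissertation's implementation: the `DispatchRequest` message, the `face_px` resolution field, and the threshold value are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical message a fixed camera node sends when it wants the
# robot to approach a tracked person for finer-grained analysis.
@dataclass
class DispatchRequest:
    node_id: str    # which wall-mounted camera raised the request
    track_id: int   # identifier of the tracked person
    area: tuple     # (x, y) position in a shared home coordinate frame
    reason: str     # why closer sensing is needed

def fixed_node_step(node_id, tracks, min_face_px=24):
    """One processing step of a fixed node: find tracks whose face
    resolution is too low for identification and request the robot."""
    requests = []
    for t in tracks:
        # Fixed cameras have limited resolution; below a face-size
        # threshold, identification is delegated to the mobile robot.
        if t["face_px"] < min_face_px:
            requests.append(DispatchRequest(
                node_id=node_id,
                track_id=t["id"],
                area=t["pos"],
                reason="face too small for identification",
            ))
    return requests

tracks = [
    {"id": 1, "pos": (3.0, 1.5), "face_px": 12},  # too far: delegate
    {"id": 2, "pos": (1.0, 0.5), "face_px": 40},  # close enough
]
requests = fixed_node_step("cam-livingroom", tracks)
```

Here only track 1 generates a request, so the robot is sent to `(3.0, 1.5)` while the fixed node keeps tracking both people.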
It may then trigger other sensor nodes, or communicate with the robot for further sensing and closer analysis, integrating multiple sensing nodes at various levels according to the range of interaction, mobility, or required resolution. We also extend this strategy to the fusion of different kinds of sensing, such as sound and vision, since human-robot interaction is multi-modal.

This fusion strategy can provide robustness and efficiency, compared to traditional image-level analysis from a single camera, through a certain level of redundancy as well as cooperation among the sensor nodes. We have obtained encouraging results with this framework on a real robot in a realistic environment: multiple sensor nodes of different types in several different places, changing illumination, and uninterrupted processing for long periods of time.
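One simple way such multi-modal redundancy can be realized is confidence-weighted averaging of position estimates from heterogeneous sensors. This is an illustrative sketch under assumed inputs, not the fusion method of the dissertation.

```python
def fuse_estimates(estimates):
    """Confidence-weighted fusion of person-position estimates from
    heterogeneous sensors (e.g. a camera and a microphone array).
    Each estimate is a tuple (x, y, confidence)."""
    total = sum(c for _, _, c in estimates)
    if total == 0:
        return None  # no sensor is confident; report no position
    x = sum(xi * c for xi, _, c in estimates) / total
    y = sum(yi * c for _, yi, c in estimates) / total
    return (x, y)

# Vision is precise but degrades under changing illumination; sound
# localization is coarse but illumination-independent. Redundancy
# keeps the fused estimate usable when either modality weakens.
vision = (2.0, 1.0, 0.9)  # (x, y, confidence), values are made up
audio  = (2.4, 1.2, 0.3)
fused = fuse_estimates([vision, audio])
```

With these numbers the fused position lands near (2.1, 1.05), pulled toward the higher-confidence vision estimate; if the lights go out and the vision confidence drops, the same formula smoothly shifts weight to the audio cue.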
Keywords/Search Tags: Sensor, Personal service robot, Home, Nodes, Visual, Sensing, Network