
Appearance-Based Hand Gesture Recognition and Study on Human-Robot Interaction

Posted on: 2009-11-09
Degree: Doctor
Type: Dissertation
Country: China
Candidate: L Z Gu
Full Text: PDF
GTID: 1118360242995151
Subject: Control theory and control engineering
Abstract/Summary:
To coexist with humans in society, it is critical for a humanoid robot to interact with people naturally. From the robot's side, interaction involves a perception modality and an effector modality: the perception modality receives information from the environment, while the effector modality acts on the environment and steers what information is received during human-robot interaction. As a natural and intuitive interaction modality, gesture plays an important role in human-robot interaction. It is not only a channel for conveying intent but also a necessary medium for learning from demonstration, and it lets ordinary users communicate with and control robots easily. During interaction, coordinated head-eye motion control helps the robot acquire environment information and attend to the focus of interest, which further enhances the naturalness of human-robot interaction.

Starting from the goal of natural human-robot interaction, this dissertation studies view-independent and user-independent hand gesture recognition. Based on a self-developed, general-purpose humanoid robot head platform, a natural gesture-based human-robot interaction system has been constructed, paving the way for future research such as learning from demonstration. The main work is summarized as follows:

(1) For hand posture recognition, the features used in existing frontal-view recognition systems are insufficient to represent natural hand postures, so a complete feature set is proposed that describes hand posture characteristics equivalently. The set is determined by how well the hand posture image can be reconstructed from the Zernike moments extracted from it. During Zernike moment extraction, the concept of a minimal square borderline is proposed so that, for a given moment order, the moments retain maximal capability to represent the hand posture; a sketch of this extraction step appears after this summary. In addition, the number of equivalent features reveals the complexity of a hand posture set and objectively predicts the recognition rate attainable for it. For feature dimensionality reduction, a new method based on CN-Isomap (Category Neighbor Isomap) is proposed for classification tasks: category information is used to preserve the intrinsic geometric structure of the original data set during dimensionality reduction, so that the reduction itself does not distort the class structure.

(2) For hand gesture recognition, existing methods are first surveyed, and the characteristics of hand gestures during natural human-robot interaction are analyzed. A novel modeling method based on temporal compression is proposed to describe hand gestures and eliminate the spatio-temporal variation among instances of the same gesture; this model is also the basis for recognizing gestures issued by different users from different viewing directions. Based on how gesture states transition, a velocity edge detector is proposed to spot gestures, that is, to locate their start and end points. Tracking the hand in the image plane yields a sequence of discrete trajectory points for each gesture, and, following the perceptual laws established in Gestalt psychology, a cubic B-spline is fitted to these points to obtain the gesture's appearance trajectory (see the sketch below). Curve moments and affine curve invariant moments are then introduced to represent the appearance trajectory. Because the samples are irregularly distributed in the feature space, a multivariate piecewise linear decision tree (MPLDT) is proposed to discriminate hand gestures; compared with a common multivariate decision tree, it has a smaller tree size and better generalization performance.
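As an illustration of the Zernike-moment extraction in (1), the following minimal Python sketch computes rotation-invariant Zernike descriptors for a binary hand-silhouette mask. It is not the dissertation's implementation: the `mahotas` library and the `hand_zernike_features` helper are illustrative choices, and cropping the silhouette to its smallest enclosing square is only one plausible reading of the "minimal square borderline" idea (the square bounds the disk over which the Zernike basis is defined, so a tight square lets the moments cover the hand maximally).

```python
import numpy as np
import mahotas  # assumption: mahotas is one of several libraries offering zernike_moments

def hand_zernike_features(mask, degree=8):
    """Zernike-moment descriptor of a binary hand silhouette (hypothetical helper)."""
    ys, xs = np.nonzero(mask)
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]  # tight bounding box
    h, w = crop.shape
    side = max(h, w)  # smallest enclosing square: one reading of the minimal square borderline
    square = np.zeros((side, side), dtype=mask.dtype)
    oy, ox = (side - h) // 2, (side - w) // 2
    square[oy:oy + h, ox:ox + w] = crop  # center the silhouette inside the square
    # magnitudes of Zernike moments up to `degree` are rotation-invariant descriptors
    return mahotas.features.zernike_moments(square, radius=side / 2.0, degree=degree)
```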
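The gesture spotting and trajectory fitting steps in (2) can be pictured with the sketch below. The fixed speed threshold merely stands in for the proposed velocity edge detector, whose precise form is not reproduced here; the cubic B-spline approximation uses SciPy's `splprep`/`splev`, and the function name and parameter values are illustrative.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def spot_and_fit(track, speed_thresh=2.0, n_samples=100):
    """Segment a hand track by speed and fit a cubic B-spline to the gesture part.

    `track` is an (N, 2) array of image-plane hand positions, one per frame.
    """
    speed = np.linalg.norm(np.diff(track, axis=0), axis=1)  # per-frame hand speed
    moving = np.where(speed > speed_thresh)[0]              # frames above the threshold
    if len(moving) < 4:                  # a cubic fit needs at least k+1 = 4 points
        return None
    seg = track[moving[0]:moving[-1] + 2]  # frames between the rising and falling speed edges
    # cubic (k=3) B-spline approximation smooths tracking jitter into a
    # continuous appearance trajectory, in the spirit of the Gestalt-based fit
    tck, _ = splprep([seg[:, 0], seg[:, 1]], k=3, s=len(seg))
    u = np.linspace(0.0, 1.0, n_samples)
    x, y = splev(u, tck)
    return np.column_stack([x, y])       # resampled appearance trajectory
```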
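The MPLDT construction itself is a contribution of the dissertation and is not reproduced here. The sketch below shows only the general idea of a multivariate (oblique) decision tree, that is, a tree whose internal nodes split on linear combinations of features rather than on single attributes; a linear SVM serves as a hypothetical split rule, and NumPy arrays X, y with binary labels {0, 1} are assumed.

```python
import numpy as np
from sklearn.svm import LinearSVC

class ObliqueTree:
    """Toy multivariate decision tree: each internal node holds a linear split."""

    def __init__(self, depth=0, max_depth=4, min_samples=10):
        self.depth, self.max_depth, self.min_samples = depth, max_depth, min_samples
        self.clf = None    # linear split at this node (None at a leaf)
        self.label = None  # majority class at a leaf

    def fit(self, X, y):
        # stop when the node is pure, too small, or the depth limit is reached
        if (len(np.unique(y)) == 1 or len(y) < self.min_samples
                or self.depth >= self.max_depth):
            self.label = int(np.bincount(y).argmax())
            return self
        self.clf = LinearSVC(dual=False).fit(X, y)  # hypothetical multivariate split rule
        side = self.clf.decision_function(X) >= 0
        if side.all() or not side.any():            # degenerate split: fall back to a leaf
            self.clf, self.label = None, int(np.bincount(y).argmax())
            return self
        self.left = ObliqueTree(self.depth + 1, self.max_depth, self.min_samples).fit(X[~side], y[~side])
        self.right = ObliqueTree(self.depth + 1, self.max_depth, self.min_samples).fit(X[side], y[side])
        return self

    def predict_one(self, x):
        if self.clf is None:
            return self.label
        branch = self.right if self.clf.decision_function(x[None, :])[0] >= 0 else self.left
        return branch.predict_one(x)
```

A piecewise linear decision boundary arises because each root-to-leaf path composes several half-space tests, which is why such trees suit irregularly distributed samples.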
(3) For gesture-based human-robot interaction, coordinated head-eye motion control on a general robot head is studied. Interaction is a two-way process, and the robot should respond to the user's command gestures; on a head with multiple degrees of freedom, the head and the two eyes must be coordinated effectively to keep the interaction continuous and the captured images clear. By making the two eyes fixate on the same point, the coordinates of the fixation point and the desired motion angles of the head and eyes are obtained (see the triangulation sketch below). Compensatory eye-movement models and the corresponding control algorithm are studied for the case in which the head itself is moving. A six-degree-of-freedom humanoid robot head platform has been developed; from a bionics standpoint, these six degrees of freedom are the most representative ones and reproduce the major movements of the human head with the fewest joints. Gaze experiments on the multi-degree-of-freedom head show that the derived model achieves accurate three-dimensional coordinated head-eye motion control.
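In a top-down two-dimensional view, the fixation-point computation in (3) reduces to intersecting the gaze lines of the two eyes. The sketch below shows this triangulation under assumed conventions (eyes on the x-axis looking along +y, pan angles in radians, positive toward +x); the dissertation's full three-dimensional model with tilt and head joints is not reproduced.

```python
import numpy as np

def fixation_point(theta_l, theta_r, baseline=0.1):
    """Triangulate the 2-D fixation point of two verging eyes (top-down view).

    Eyes sit at (-baseline/2, 0) and (+baseline/2, 0). The gaze lines are
    x = -b/2 + y*tan(theta_l) and x = +b/2 + y*tan(theta_r); equating them
    yields the depth y of the common fixation point.
    """
    tl, tr = np.tan(theta_l), np.tan(theta_r)
    if abs(tl - tr) < 1e-9:        # parallel gaze lines: no finite fixation point
        return None
    y = baseline / (tl - tr)       # depth of the fixation point
    x = -baseline / 2.0 + y * tl
    return np.array([x, y])

# Example: eyes 10 cm apart, each verging inward by 5 degrees,
# fixate roughly 0.57 m straight ahead: y = 0.1 / (2 * tan(5 deg))
p = fixation_point(np.deg2rad(5.0), np.deg2rad(-5.0))
```

Given the fixation point, the desired pan angles of the head and eyes follow from the head's joint geometry; that inverse computation is specific to the dissertation's platform and is omitted here.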
Keywords/Search Tags: human-robot interaction, hand gesture recognition, complete feature set, feature dimensionality reduction, temporal compression, multivariate piecewise linear decision tree, head-eye coordination, humanoid robot head, learning from demonstration