
Research On Gesture Segmentation And Recognition For Human-computer Interaction

Posted on: 2021-03-11    Degree: Master    Type: Thesis
Country: China    Candidate: J L Fan    Full Text: PDF
GTID: 2428330611999284    Subject: Mechanical engineering
Abstract/Summary:
In recent years, with the rapid development of artificial intelligence technology, electronic products have become increasingly intelligent, and efficient, natural human-computer interaction has become a pressing need. As a nearly universal mode of communication, gestures are expected to be applied to human-computer interaction so that, during human-machine collaboration, humans can move beyond traditional input devices such as the mouse and keyboard and control intelligent devices more efficiently and naturally. Gesture recognition, as a new form of human-computer interaction, has already developed to a certain extent and has been realized in many ways, drawing on visual recognition, motion information acquisition, and EMG signals. Research on the recognition of isolated gestures is fairly mature, but research on continuous gesture sentences, especially military sign language, remains relatively insufficient, and recognition of continuous sign language is essential for efficient human-computer interaction. Based on inertial sensors and a rule-matching recognition algorithm, this paper studies their application to the recognition of continuous sign language. The main research contents of this paper are as follows:

(1) Arm gestures are modeled with several kinds of rules. The rule-combination recognition method decomposes gestures into primitives, which are then recognized and combined generatively. A limited set of primitives can thus be combined into a wide variety of gestures, giving the method good extensibility.

(2) A continuous gesture segmentation and recognition method based on a syntax model is proposed. In isolated gesture recognition it is not necessary to consider the semantic structure of sign language, but in sign language sentence recognition, meaningful gesture sentences must be separated from the continuous gesture stream. Sign language used in daily life has a grammatical structure, so the stream can be segmented according to the completeness of the syntactic structure, giving the gesture recognition system the ability to recognize sign language sentences.

(3) To solve the problems of improper collocation and ambiguity in continuous gesture sentences, a gesture semantic modeling method based on an n-gram model is proposed. During sign language sentence recognition, a sentence that satisfies the syntactic structure may still be ambiguous; to address this, the n-gram model is introduced to constrain the collocation relationships between sign words and thereby reduce ambiguity in the recognized sign language (a minimal illustrative sketch of rule-based primitives and the n-gram constraint follows the abstract).

(4) A human-machine interaction application platform is built, using a crawler (tracked) robot car as the interaction platform.

(5) Real-time communication between the car and the host computer system is implemented, together with gesture control of the car's mobile grasping function. Nine kinds of single-arm gestures are used to control the human-machine platform and to describe complex human-machine interaction instructions.
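The abstract gives no implementation details, so the sketch below only illustrates the general pattern described in items (1) and (3): rule-based matching of gesture primitives followed by n-gram scoring of candidate sign-word sentences. The primitive rules, thresholds, sign-word vocabulary, and bigram counts are hypothetical placeholders invented for illustration and are not the thesis's actual rule set or data.

```python
# Illustrative sketch only: hypothetical primitive rules and a toy bigram model,
# not the thesis's actual implementation.
from collections import defaultdict
from math import log

# --- Rule-based primitive recognition (cf. item 1) -------------------------
# Each primitive is a predicate over a windowed inertial feature vector
# (here: mean angular velocities wx, wy, wz). Thresholds are made up.
PRIMITIVE_RULES = {
    "raise": lambda f: f["wy"] > 1.0,                 # forearm rotates upward
    "lower": lambda f: f["wy"] < -1.0,                # forearm rotates downward
    "swing": lambda f: abs(f["wz"]) > 1.5,            # lateral swing
    "hold":  lambda f: max(abs(f["wx"]), abs(f["wy"]), abs(f["wz"])) < 0.2,
}

def recognize_primitives(windows):
    """Map each feature window to the first matching primitive (or None)."""
    return [next((p for p, rule in PRIMITIVE_RULES.items() if rule(f)), None)
            for f in windows]

# --- Bigram (n = 2) constraint on sign-word sequences (cf. item 3) ---------
# Counts would normally be estimated from a sign language corpus;
# these numbers are toy values for illustration.
bigram_counts = defaultdict(int, {
    ("<s>", "robot"): 8, ("robot", "forward"): 6, ("robot", "grasp"): 4,
    ("forward", "</s>"): 6, ("grasp", "object"): 4, ("object", "</s>"): 4,
})
unigram_counts = defaultdict(int)
for (w1, _), c in bigram_counts.items():
    unigram_counts[w1] += c

def sentence_logprob(words, vocab_size=20):
    """Add-one smoothed bigram log-probability of a candidate sign sentence."""
    seq = ["<s>"] + words + ["</s>"]
    return sum(log((bigram_counts[(w1, w2)] + 1) /
                   (unigram_counts[w1] + vocab_size))
               for w1, w2 in zip(seq, seq[1:]))

# Usage: classify primitives from inertial windows, then pick the candidate
# sign sentence with the higher bigram score to resolve ambiguity.
windows = [{"wx": 0.0, "wy": 1.4, "wz": 0.1},    # matches "raise"
           {"wx": 0.1, "wy": 0.0, "wz": 0.05}]   # matches "hold"
print("primitives:", recognize_primitives(windows))

candidates = [["robot", "forward"], ["robot", "object"]]
print("chosen sentence:", max(candidates, key=sentence_logprob))
```

Add-one smoothing keeps unseen sign-word pairs from receiving zero probability, so the n-gram score acts as a soft collocation constraint that ranks syntactically valid candidates rather than a hard filter that rejects them outright.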
Keywords/Search Tags:gesture recognition, inertial sensor, syntactic model, gesture semantic modeling, human-machine interaction