
Research On Gesture Recognition Technology Based On Vision

Posted on: 2022-04-01
Degree: Master
Type: Thesis
Country: China
Candidate: T P Shao
Full Text: PDF
GTID: 2518306491492514
Subject: Mechanical engineering
Abstract/Summary:
Gesture recognition technology has broad application prospects in human-computer interaction and has brought great convenience to production and daily life in fields such as education, medicine, and autonomous driving. In recent years, gesture recognition has gradually shifted from relying on external devices such as data gloves to acquiring gesture information through computer vision. In daily life, however, hands usually appear against complex natural backgrounds, so hand shapes and gesture movements are easily disturbed by the environment and by lighting, which degrades recognition performance. Traditional vision-based gesture recognition algorithms have weak robustness to background clutter and illumination changes, achieve low recognition accuracy, and require hand-crafted gesture features. To address these shortcomings, this thesis carries out the following research on vision-based gesture recognition.

For static gestures, convolutional neural networks (CNNs) in deep learning were used as the basic framework to build a Faster R-CNN model and an improved YOLOv2 model with Darknet-19 as the backbone network, and both were used as gesture recognition models. A large number of gesture samples were collected under different environments, lighting conditions, and viewing angles, and the samples were augmented to form a gesture training data set. This data set was used to train both models, whose detection accuracy and speed were then compared. Relative to traditional vision-based gesture recognition algorithms, detection accuracy improved and the models were less easily disturbed by background and illumination. Weighing recognition accuracy against detection speed, the YOLOv2 model was selected for static gesture recognition, and interaction experiments were conducted between the recognition results and a hexapod robot: four static gesture control instructions were designed to control the robot's movement.

For dynamic gesture recognition, four stages were studied: gesture segmentation, gesture centroid calculation, gesture trajectory feature extraction, and gesture trajectory recognition. To address the low accuracy of traditional dynamic gesture recognition methods and their susceptibility to varying illumination, depth information collected by a Kinect sensor was used to segment the target gesture from the video sequence. In the centroid calculation stage, the contour of the segmented gesture is extracted and its image moments are computed to locate the centroid. The centroid positions over time form a trajectory sequence, which is quantized into a one-dimensional feature vector using chain-code rules. Finally, to avoid having to explicitly determine the start and end points of a gesture, the Dynamic Time Warping (DTW) algorithm was used to match the one-dimensional feature vector and obtain the dynamic gesture recognition result. Interaction experiments were again carried out between the recognition results and the hexapod robot, and four dynamic gesture control instructions were designed to control its movement.
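As a rough illustration of the sample-augmentation step used to build the static-gesture training set, the sketch below applies a horizontal flip and brightness jitter to an image array. The function name, parameters, and transforms are illustrative assumptions; the thesis does not specify its exact augmentation pipeline.

```python
import numpy as np

def augment_gesture_image(img, rng, brightness_range=0.2):
    """Return augmented copies of one H x W x 3 uint8 gesture image.

    A horizontal flip mirrors the hand left-to-right, and brightness
    jitter imitates the varying illumination the samples were collected
    under. (Illustrative sketch only, not the thesis's pipeline.)
    """
    out = [img]
    # Mirror the hand along the width axis.
    out.append(img[:, ::-1, :].copy())
    # Random brightness scaling, clipped back to the valid pixel range.
    scale = 1.0 + rng.uniform(-brightness_range, brightness_range)
    bright = np.clip(img.astype(np.float32) * scale, 0, 255).astype(np.uint8)
    out.append(bright)
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
augmented = augment_gesture_image(img, rng)
print(len(augmented))  # original + flip + brightness = 3
```

Each collected sample thus yields several training images, which is one common way to make a detector less sensitive to lighting and viewpoint.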
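The dynamic-gesture pipeline described above (centroid from image moments, chain-code quantization of the centroid trajectory, DTW matching) can be sketched as follows. This is a minimal illustration assuming binary gesture masks and 8-direction chain codes; it is not the thesis's implementation.

```python
import numpy as np

def centroid(mask):
    """Centroid (cx, cy) of a binary gesture mask via image moments:
    for a binary mask, xs.mean() = M10/M00 and ys.mean() = M01/M00."""
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

def chain_code(traj):
    """Quantize a centroid trajectory into an 8-direction chain code
    (0 = east, counting counter-clockwise) -- the 1-D feature vector."""
    codes = []
    for (x0, y0), (x1, y1) in zip(traj, traj[1:]):
        ang = np.arctan2(y1 - y0, x1 - x0)
        codes.append(int(round(ang / (np.pi / 4))) % 8)
    return codes

def dtw_distance(a, b):
    """Classic DTW between two chain-code sequences: tolerant of
    different gesture speeds and fuzzy start/end points."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Circular difference between 8-direction codes.
            d = min((a[i - 1] - b[j - 1]) % 8, (b[j - 1] - a[i - 1]) % 8)
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A rightward swipe vs. the same swipe sampled at a different speed:
swipe_fast = chain_code([(0, 0), (2, 0), (4, 0)])
swipe_slow = chain_code([(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)])
up_move = chain_code([(0, 4), (0, 2), (0, 0)])  # image y grows downward
print(dtw_distance(swipe_fast, swipe_slow))  # 0.0 -- same direction pattern
print(dtw_distance(swipe_fast, up_move))     # > 0 -- different trajectory
```

The DTW step is what lets two executions of the same gesture match even when one is performed faster than the other, which is why the abstract notes it avoids pinning down exact start and end frames.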
Keywords/Search Tags: Static gesture recognition, Convolutional neural network, Dynamic gesture recognition, Human-computer interaction