
Research And Implementation Of Hand Gesture Recognition Based On Kinect

Posted on: 2019-12-15  Degree: Master  Type: Thesis
Country: China  Candidate: Z Q Zhao  Full Text: PDF
GTID: 2428330596965436  Subject: Information and Communication Engineering
Abstract/Summary:
With the continuous development of technologies such as robotics and virtual reality, traditional modes of human-computer interaction are increasingly unable to meet the need for natural interaction between humans and computers. As a novel human-computer interaction technology, machine-vision-based hand gesture recognition has received widespread attention from researchers at home and abroad. However, the performance of color cameras is limited by their optical sensors, which makes it difficult to cope with complex lighting conditions and cluttered backgrounds. Depth cameras such as the Kinect, which provide additional information, have therefore become an important tool for gesture recognition research. Although Kinect-based gesture recognition has advanced considerably in recent years, there is still room for improvement. To address the shortcomings of existing methods, this thesis studies hand detection and segmentation, feature extraction, and gesture classification. The main research work is as follows:

(1) An object-detection-based hand segmentation method is proposed. The hand is treated as a deformable object, and a hand detection method based on a deep learning framework is studied; a real-time, accurate hand detector is trained on a self-collected hand detection dataset. On top of the detection result, a hand segmentation method based on skin color and an adaptive depth threshold is proposed, achieving robust hand detection and segmentation.

(2) A unified spatial feature extraction method is designed. A convolutional auto-encoder (CAE) is used to unify the feature extraction of color images and depth images. The CAE is trained in an unsupervised manner, so the trained network can serve as a common spatial feature extractor for both static and dynamic gestures. The classification performance of the CAE is compared with that of a supervised CNN on the collected gesture recognition dataset, which verifies the effectiveness of the CAE.

(3) A CAE-based gesture recognition method is proposed. For static gestures, the pre-trained CAE extracts spatial features and a Softmax classifier assigns the gesture class; the proposed static gesture classifier achieves extremely high classification accuracy. For dynamic gestures, the CAE likewise extracts per-frame spatial features, which are fed into a two-layer Long Short-Term Memory (LSTM) network that models the temporal characteristics of the gesture; a simple CNN then further extracts spatial-temporal features, and the gesture is finally classified by a Softmax classifier. The proposed dynamic gesture classifier also obtains good results.

(4) A multi-model fusion method based on a Random Forest Classifier (RFC) is proposed. Considering the different nature of the color and depth data provided by the Kinect, separate gesture classifiers are trained for the color stream and the depth stream, and the outputs of the two classifiers are fused by the RFC at the classification stage, further improving gesture recognition accuracy.
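For the segmentation step in (1), the following is a minimal illustrative sketch, assuming Python with OpenCV and NumPy, of how a skin-color mask could be combined with an adaptive depth threshold inside a detected hand bounding box. The YCrCb skin range, the depth margin, and the function and variable names are illustrative assumptions, not the thesis implementation.

import cv2
import numpy as np

def segment_hand(color_bgr, depth_mm, hand_box, depth_margin=80):
    """Binary hand mask inside hand_box = (x, y, w, h); names are illustrative."""
    x, y, w, h = hand_box
    roi_color = color_bgr[y:y+h, x:x+w]
    roi_depth = depth_mm[y:y+h, x:x+w]

    # Skin-color mask in YCrCb space (a commonly used heuristic range).
    ycrcb = cv2.cvtColor(roi_color, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # Adaptive depth threshold: keep pixels close to the median hand depth,
    # estimated from the valid (non-zero) depth values inside the box.
    valid = roi_depth[roi_depth > 0]
    if valid.size == 0:
        return np.zeros_like(skin)
    hand_depth = np.median(valid)
    near = ((roi_depth > hand_depth - depth_margin) &
            (roi_depth < hand_depth + depth_margin)).astype(np.uint8) * 255

    # Hand pixels must satisfy both the color cue and the depth cue.
    return cv2.bitwise_and(skin, near)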
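For the feature extractor in (2), the sketch below shows, under PyTorch and purely as an assumption-laden illustration, a convolutional auto-encoder whose encoder could be reused as a shared spatial feature extractor for color and depth frames. The 64x64 single-channel input and the layer sizes are illustrative assumptions.

import torch
import torch.nn as nn

class CAE(nn.Module):
    """Convolutional auto-encoder; the encoder doubles as a spatial feature extractor."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Unsupervised training: minimise reconstruction error, no gesture labels required.
# cae = CAE(); loss = nn.MSELoss()(cae(batch), batch)

Because the reconstruction loss needs no labels, such an encoder can be trained once and then reused by both the static and the dynamic gesture classifiers.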
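For the dynamic gesture pipeline in (3), the following PyTorch sketch gives one possible interpretation of the abstract: per-frame CAE encoder features feed a two-layer LSTM, a small one-dimensional CNN over the LSTM outputs extracts spatial-temporal features, and a final Softmax layer classifies the gesture. All dimensions and names are illustrative assumptions.

import torch
import torch.nn as nn

class DynamicGestureNet(nn.Module):
    """Two-layer LSTM over per-frame CAE features, then a small temporal CNN and Softmax."""
    def __init__(self, feat_dim=64 * 8 * 8, hidden=256, num_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.temporal_cnn = nn.Sequential(
            nn.Conv1d(hidden, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(128, num_classes)  # Softmax applied via the loss / argmax

    def forward(self, frame_feats):             # frame_feats: (batch, time, feat_dim)
        seq, _ = self.lstm(frame_feats)         # (batch, time, hidden)
        seq = seq.transpose(1, 2)               # (batch, hidden, time) for Conv1d
        pooled = self.temporal_cnn(seq).squeeze(-1)
        return self.classifier(pooled)          # class logits for the whole sequence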
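For the fusion step in (4), the sketch below illustrates decision-level fusion with scikit-learn, under the assumption that the class-probability outputs of the color-stream and depth-stream classifiers are concatenated and a Random Forest makes the final decision. Variable names and hyperparameters are illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# probs_color, probs_depth: (n_samples, n_classes) Softmax outputs of the two streams
# labels: (n_samples,) ground-truth gesture labels of a held-out fusion set
def train_fusion(probs_color, probs_depth, labels):
    fusion_features = np.hstack([probs_color, probs_depth])
    rfc = RandomForestClassifier(n_estimators=200, random_state=0)
    rfc.fit(fusion_features, labels)
    return rfc

def predict_fused(rfc, probs_color, probs_depth):
    return rfc.predict(np.hstack([probs_color, probs_depth]))

Training the fusion classifier on predictions from a held-out split, rather than on the stream classifiers' own training data, lets it model how the two streams actually err instead of memorising their training behaviour.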
Keywords/Search Tags: Machine Vision, Kinect, Hand gesture recognition, Deep learning