With the rapid development of artificial intelligence and VR technology, gesture recognition plays an increasingly important role in human-computer interaction. Vision-based gesture recognition is the most common approach in human-computer interaction systems, but it is susceptible to interference from the external environment and other factors, which reduces its effectiveness. In view of this, this paper studies the fusion of gesture features collected by multiple sensors, including vision, surface electromyography (SEMG), and data gloves, so as to improve the robustness and recognition rate of gesture recognition in human-computer interaction. The specific research work and innovations of this paper are as follows:

(1) A vision-based static gesture recognition method is studied. First, the three-dimensional coordinates of the 21 key points in the hand image are extracted by a deep neural network, and a gesture skeleton is constructed from the relative positions of the extracted key points. Finally, the proposed geometric template matching method is used to match and classify the constructed gesture skeleton. The robustness of the method in human-computer interaction is evaluated through online gesture recognition.

(2) A dynamic gesture recognition method based on surface EMG signals is studied. First, the MYO wristband is used to collect gesture data and establish a database. The samples in the database are then filtered, and features are extracted as integrated EMG values. Finally, an LDA classifier is trained on the extracted SEMG features, achieving accurate and fast online dynamic gesture recognition.

(3) A dynamic gesture recognition method based on VR data gloves and upper-limb nodes is studied. After the IMU-based motion capture system captures the quaternions of the motion gesture, the proposed mathematical logic operation is applied to the quaternion signal to extract the curvature features of each finger. The single-finger joint angles are then binarized (to 0 and 1), and different gesture features are defined by this logic. The extracted finger curvature features are classified according to the logic of the defined gesture features, achieving accurate and fast dynamic gesture recognition. This method is not only more robust in actual human-computer interaction than the previous two methods, but also verifies the reliability of the limb spatial pose obtained from the IMU.

(4) A gesture recognition method based on the fusion of surface EMG signals and image features is studied. First, the MYO wristband and a camera are used to collect surface electromyographic (SEMG) signals and RGB images of the limbs and establish a database. Filtering, integration, and a deep neural network are then used to extract the SEMG features and image features from the samples in the database, and the extracted features are fused. Finally, the fused features are classified by an LDA classifier. This method not only improves the gesture recognition rate, but also verifies the complementarity of fusing image information with SEMG information.

(5) A gesture recognition method based on the fusion of surface electromyographic signals and inertial signals is studied. First, the gesture SEMG and IMU signals collected by the MYO wristband are used to establish a Chinese sign language gesture database. The SEMG and IMU signals in the database are then filtered separately, and the root mean square (RMS) method is used to extract the SEMG features. Finally, the SEMG and IMU features are fused, and the fused features are classified end-to-end using BiLSTM+CTC. This method not only avoids the reduction in gesture recognition rate caused by inaccurate gesture segmentation, but also verifies the complementarity of fusing SEMG and IMU signals.
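The two window-level SEMG features mentioned above, integrated EMG in (2) and RMS in (5), can be sketched as follows. This is an illustrative sketch only, not code from the thesis; the window length, step size, and 8-channel layout (the MYO wristband has 8 SEMG electrodes) are assumed values.

```python
import numpy as np

def rms_features(semg, window=200, step=100):
    """Sliding-window RMS features from a (samples, channels) SEMG array."""
    feats = []
    for start in range(0, semg.shape[0] - window + 1, step):
        seg = semg[start:start + window]
        # RMS per channel: sqrt of the mean squared amplitude in the window
        feats.append(np.sqrt(np.mean(seg ** 2, axis=0)))
    return np.array(feats)

def iemg_features(semg, window=200, step=100):
    """Sliding-window integrated EMG (sum of absolute amplitudes) per channel."""
    feats = []
    for start in range(0, semg.shape[0] - window + 1, step):
        seg = semg[start:start + window]
        feats.append(np.sum(np.abs(seg), axis=0))
    return np.array(feats)

# Synthetic 8-channel signal standing in for filtered MYO data
x = np.random.randn(1000, 8)
f = rms_features(x)   # one feature row per window
g = iemg_features(x)
print(f.shape, g.shape)  # (9, 8) (9, 8)
```

Either feature matrix can then be fed to a classifier such as LDA, one row per window, as described in contributions (2) and (5).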