The commercial application of dexterous prosthetic hands depends on the development of their control algorithms. Traditional prosthetic hand control algorithms can no longer meet the demands of multi-degree-of-freedom prostheses. Among current approaches, EMG coding control is unintuitive, lacks real-time performance, and requires long-term training; EMG pattern recognition control is unstable and prone to misclassification under confounding factors; and multi-degree-of-freedom synchronous control handles only a few degrees of freedom (2~3), is mostly concentrated on wrist control, performs unreliably, and remains at the research stage. To address the control problem of dexterous prosthetic hands, this paper bypasses the above idea of broadening human-machine information interaction capability and instead studies a prosthetic hand control method based on RGB-D grasp pattern recognition. To implement this method, the main research contents of this paper include: establishing an RGB-D image database of objects graspable with one hand, investigating convolutional neural network models for grasp pattern recognition under RGB-D multi-modal data fusion, and studying a prosthetic hand control strategy and system based on RGB-D-EMG.

This paper first reviews the research status, at home and abroad, of myoelectric control of prosthetic hands, deep learning, RGB-D object recognition based on deep learning, RGB-D image datasets, and the classification of object grasp patterns, identifies problems in current research, and then determines the main research content of this paper.

(1) To address the problem that existing RGB-D image datasets are not suited to grasp pattern recognition, this paper selects 121 typical objects covering the four basic grasp patterns (cylindrical grasp, spherical grasp, three-finger grasp, and lateral grasp) and establishes an RGB-D dataset containing 47,245 pairs of RGB-D images. Compared with existing datasets, it includes images of single-hand graspable objects in different poses, classifies them independently, and pays more attention to differences in object size and shape when selecting items.

(2) To construct the RGB-D object grasp pattern recognition model, this paper first experiments with a convolutional network on three single-modal inputs: RGB, Gray (grayscale RGB), and Depth. Depth performs best, Gray second, and RGB worst, which verifies the importance of Depth data, carrying three-dimensional information, for grasp pattern recognition, and shows that color features contribute little to the task. To further improve recognition accuracy, this paper fuses the Depth data with RGB and Gray data and proposes a dual-stream convolutional network model. The multi-modal RGB-D and Gray-D models reach 92.388% and 92.168% accuracy respectively, an improvement of roughly 10% over the traditional single-modal RGB approach.
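As a rough illustration of the dual-stream fusion idea, a minimal sketch is given below. The use of PyTorch, the branch depths, the input resolution, and the concatenation-based late fusion are assumptions made for illustration; the abstract does not specify the exact architecture of the thesis's network.

```python
# Minimal sketch of a dual-stream RGB-D grasp classifier (assumed architecture,
# not the thesis's exact network).
import torch
import torch.nn as nn

class StreamBranch(nn.Module):
    """One convolutional stream (appearance stream: RGB or Gray; or depth stream)."""
    def __init__(self, in_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )

    def forward(self, x):
        return torch.flatten(self.features(x), 1)   # (N, 64*4*4)

class DualStreamGraspNet(nn.Module):
    """Fuses an appearance stream and a depth stream by feature concatenation,
    then classifies into the four grasp patterns
    (cylindrical, spherical, three-finger, lateral)."""
    def __init__(self, appearance_channels=3, num_classes=4):
        super().__init__()
        self.appearance = StreamBranch(appearance_channels)  # 3 for RGB, 1 for Gray
        self.depth = StreamBranch(1)
        self.classifier = nn.Linear(2 * 64 * 4 * 4, num_classes)

    def forward(self, appearance_img, depth_img):
        fused = torch.cat([self.appearance(appearance_img),
                           self.depth(depth_img)], dim=1)
        return self.classifier(fused)

# Usage sketch:
# net = DualStreamGraspNet()
# logits = net(torch.randn(8, 3, 224, 224), torch.randn(8, 1, 224, 224))
```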
(3) To embed the recognition model obtained in (2) into a dexterous prosthetic hand system and thereby solve its control problem, this paper proposes a control strategy based on the fusion of RGB-D and EMG. First, the dual-stream convolutional model recognizes the grasp pattern of a tabletop object in real time. Then four myoelectric modes are designed: one mode controls whether the current recognition result is passed to the prosthetic hand, and the remaining three control the “close”, “open”, and “relax” actions of the prosthetic hand. This control strategy achieved a success rate of 95.74% in 96 grasping experiments.
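A schematic of this control flow might look like the following sketch. The mode names, the dependency-injected functions (capture_rgbd, classify_grasp, read_emg_mode), and the hand command methods are hypothetical placeholders, since the abstract describes the strategy only at a high level.

```python
# Minimal sketch of the RGB-D + EMG fusion control loop (assumed interfaces).
from enum import Enum, auto

class EmgMode(Enum):
    CONFIRM = auto()   # pass the current vision recognition result to the hand
    CLOSE = auto()     # close the hand with the latched grasp pattern
    OPEN = auto()      # open the hand
    RELAX = auto()     # relax the hand
    IDLE = auto()      # no command

def control_loop(capture_rgbd, classify_grasp, read_emg_mode, hand):
    """capture_rgbd, classify_grasp, read_emg_mode, and hand are supplied by the
    caller; their names and signatures are illustrative, not the thesis's API."""
    latched_pattern = None
    while True:
        rgb, depth = capture_rgbd()              # RGB-D frame of the tabletop object
        pattern = classify_grasp(rgb, depth)     # dual-stream CNN prediction
        mode = read_emg_mode()                   # one of the four designed EMG modes
        if mode is EmgMode.CONFIRM:
            latched_pattern = pattern            # only now hand over the vision result
            hand.set_grasp_pattern(latched_pattern)
        elif mode is EmgMode.CLOSE and latched_pattern is not None:
            hand.close()
        elif mode is EmgMode.OPEN:
            hand.open()
        elif mode is EmgMode.RELAX:
            hand.relax()
```

The design choice sketched here is that vision proposes the grasp pattern while the user's EMG signal retains final authority: the prosthetic hand only adopts a recognition result when the confirm mode is detected, which matches the strategy described in the abstract.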