As a kind of unmanned equipment controlled by radio, the control accuracy of a UAV depends heavily on the strength of its communication signal. In complex scenes, problems frequently arise: the UAV is difficult to control and command, equipment is incompatible, and data transmission is easily interfered with. It is therefore necessary to find a more efficient, more compatible, and safer human-computer interaction method. With the development of deep learning, more and more algorithms have emerged to solve the action recognition problem in human-computer interaction. However, most of these methods jointly train on images and optical flow, which is computationally expensive and easily affected by shallow visual features. Based on a pose estimation algorithm, this thesis proposes a method for the automatic recognition of UAV commands, which realizes real-time and accurate recognition of the control commands that the commander issues to the UAV.

The research contents of this thesis are as follows:

1. Aiming at the difficulty of interacting with radio-controlled command UAVs, this thesis studies command recognition by computer vision. Because previous action recognition methods are not robust enough and their computational cost is high, an action feature extraction technique based on pose estimation is proposed to improve the accuracy and real-time performance of the classification model.

2. A set of standardized guidance and command gesture rules is defined, and the corresponding data sets are collected and annotated. By defining the action specification, the start and end positions of an action can be continuously located in the video stream, ensuring that automatic command recognition proceeds smoothly.

3. A set of classification rules for action analysis is designed on the basis of the feature model. To address the lack of skeleton spatial information in action recognition, this thesis proposes to transform the skeleton into a graph structure and then use a spatial-temporal graph convolutional network to extract temporal and spatial action features (an illustrative sketch of this skeleton-as-graph idea is given at the end of this abstract). In addition, to handle the speed mismatch between the feature extraction model and the action recognition model at test time, a scheme for supplementing simulated frames is proposed to maintain high recognition accuracy.

4. A UAV command automatic recognition system is designed and implemented. The system realizes real-time and accurate conversion from the commander's actions to the UAV's control commands. To cut complete actions out of the continuous skeleton stream, template matching is used (a minimal segmentation sketch also follows at the end of this abstract). In addition, to support airborne deployment, the system is migrated to a more lightweight embedded device, and the performance of the airborne recognition system is optimized, which greatly improves model inference speed.

Finally, the performance of the airborne recognition system was tested in complex scenarios. The average accuracy is 94.29% and the real-time processing rate is about 15 FPS. These results demonstrate the effectiveness of the scheme for UAV interactive control: it greatly simplifies the control process, improves compatibility, and has good application prospects amid the rapid development of UAVs.
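To make the skeleton-as-graph idea in item 3 concrete, the minimal sketch below builds a normalized adjacency matrix for a skeleton and applies one spatial graph convolution to a clip of pose keypoints, roughly in the spirit of a spatial-temporal graph convolutional network. The 18-joint layout, edge list, channel sizes, and single-layer structure are illustrative assumptions for demonstration, not the thesis's actual model.

```python
# Illustrative sketch only: one spatial graph convolution over a skeleton graph.
# The joint count and edge list below are assumptions, not the thesis's config.
import torch
import torch.nn as nn

NUM_JOINTS = 18  # assumed OpenPose-style keypoint count

# Assumed (partial) skeleton edges as (joint, joint) pairs.
EDGES = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7),
         (1, 8), (8, 9), (9, 10), (1, 11), (11, 12), (12, 13)]

def normalized_adjacency(num_joints, edges):
    """Build a symmetrically normalized adjacency matrix with self-loops."""
    A = torch.eye(num_joints)
    for i, j in edges:
        A[i, j] = 1.0
        A[j, i] = 1.0
    deg = A.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    return d_inv_sqrt @ A @ d_inv_sqrt

class SpatialGraphConv(nn.Module):
    """One spatial graph convolution: mix joint features along skeleton edges."""
    def __init__(self, in_channels, out_channels, A):
        super().__init__()
        self.register_buffer("A", A)               # (V, V) fixed skeleton graph
        self.proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        # x: (batch, channels, frames, joints)
        x = self.proj(x)
        # Aggregate each joint's neighbors according to the adjacency matrix.
        return torch.einsum("nctv,vw->nctw", x, self.A)

if __name__ == "__main__":
    A = normalized_adjacency(NUM_JOINTS, EDGES)
    layer = SpatialGraphConv(in_channels=3, out_channels=64, A=A)
    # Dummy clip: batch of 2, (x, y, confidence) per joint, 30 frames, 18 joints.
    clip = torch.randn(2, 3, 30, NUM_JOINTS)
    print(layer(clip).shape)  # torch.Size([2, 64, 30, 18])
```

A full model would stack such spatial layers with temporal convolutions over the frame axis; this sketch only shows how spatial skeleton structure enters the computation.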
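Item 4 mentions template matching for cutting a complete action out of the continuous skeleton stream. The sketch below shows one simple way such matching could work: each frame's normalized pose is compared against assumed start and end pose templates by Euclidean distance. The normalization, distance metric, and threshold value are assumptions for illustration, not the thesis's actual segmentation rule.

```python
# Illustrative sketch only: segment an action from a continuous skeleton stream
# by matching frames against start/end pose templates (assumed approach).
import numpy as np

def normalize_pose(pose):
    """Center the pose on its mean joint and scale it to unit norm."""
    pose = pose - pose.mean(axis=0, keepdims=True)
    scale = np.linalg.norm(pose)
    return pose / scale if scale > 0 else pose

def match_score(pose, template):
    """Smaller is better: Euclidean distance between normalized poses."""
    return np.linalg.norm(normalize_pose(pose) - normalize_pose(template))

def segment_action(stream, start_template, end_template, threshold=0.3):
    """Return (start_idx, end_idx) of the first matched action, or None.

    stream: sequence of (num_joints, 2) arrays of joint coordinates.
    """
    start_idx = None
    for i, pose in enumerate(stream):
        if start_idx is None:
            if match_score(pose, start_template) < threshold:
                start_idx = i          # start template matched: action begins
        elif match_score(pose, end_template) < threshold:
            return start_idx, i        # end template matched: action complete
    return None
```

Frames between the returned indices would then be passed to the recognition model as one complete action clip.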