With the development of science and technology, robotic teleoperation has greatly improved the efficiency of industrial production and daily life. At present, industrial robots mostly rely on fixed programming and teach-by-demonstration to complete predefined repetitive tasks, while flexible tasks are still performed mainly by human workers. To improve production efficiency while freeing humans from heavy labor, a production model in which humans and robots work together has received increasing attention. In this thesis, computer vision and deep learning are used to detect human keypoints and reconstruct them in 3D, and virtual reality technology is combined with an industrial robot to achieve remote human-robot interaction. The main work is as follows:

First, vision-based human pose estimation is studied, and the two typical deep-learning network structures, Top-Down and Bottom-Up, are analyzed. Considering the robot teleoperation scenario of this thesis, the Top-Down structure is selected for human keypoint detection, and a human target detector and a human keypoint detector suited to the actual scene requirements are constructed within the Top-Down framework. The networks are then trained on the dataset, and the resulting model achieves good real-time performance and accuracy, providing reliable perception of human motion for the human-robot interaction scenario of robot teleoperation.

Next, the camera model and the principles of multi-view geometry are studied, and a multi-view human pose capture platform is designed and built. Preliminary camera intrinsic and extrinsic parameters are obtained through lens distortion correction and multi-view camera calibration, and both sets of parameters are then refined using Bundle Adjustment. Based on this platform, the thesis proposes a Thresholded Triangulation method, which reduces the errors caused by self-occlusion during keypoint detection and reconstructs a robust 3D model of the human keypoints.

Finally, a vision-based remote human-robot interaction platform is designed and built, and human-robot interaction experiments are completed. The platform consists of a multi-view human pose perception system, a 6-axis ELITE collaborative robot, and a virtual reality remote visual feedback system. Using this platform, the operator can interact with the robot through body movements, and the working scene of the remote robot is streamed back to the operator in real time from a first-person perspective through a VR headset, so that the remote human-robot interaction system forms a closed loop. In the human-robot interaction experiments, two interaction methods based on a human vector model are proposed, and the robot is controlled by these methods to perform human action imitation and remote grasping. The experimental results show that the proposed methods enable the remote robot to perceive human motion, imitate human actions, and grasp objects.
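For reference, the refinement step mentioned above is commonly formulated as minimizing the total reprojection error over all cameras and 3D points; the abstract does not state the exact parameterization used in the thesis, so the expression below is the conventional bundle adjustment objective:

$$
\min_{\{K_i,\,R_i,\,\mathbf{t}_i\},\,\{\mathbf{X}_j\}} \;\sum_{i}\sum_{j} v_{ij}\,\bigl\| \pi\!\left(K_i, R_i, \mathbf{t}_i, \mathbf{X}_j\right) - \mathbf{x}_{ij} \bigr\|^{2}
$$

where $\pi$ projects the 3D point $\mathbf{X}_j$ into camera $i$ with intrinsics $K_i$ and extrinsics $(R_i, \mathbf{t}_i)$, $\mathbf{x}_{ij}$ is the observed 2D keypoint, and $v_{ij}$ indicates whether point $j$ is visible in camera $i$.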
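The abstract also does not give implementation details of the Thresholded Triangulation step. The following is a minimal Python sketch of what a confidence-thresholded multi-view triangulation could look like, assuming per-view detector confidence scores are available and using a confidence-weighted DLT solve; the function name, the threshold value, and the weighting scheme are illustrative assumptions rather than the thesis's exact method.

```python
# Minimal sketch: confidence-thresholded multi-view triangulation (weighted DLT).
# The threshold and weighting below are assumptions, not the thesis's exact scheme.
import numpy as np

def triangulate_keypoint(proj_mats, points_2d, confidences, conf_thresh=0.5):
    """Triangulate one 3D keypoint from multiple calibrated views.

    proj_mats   : list of 3x4 camera projection matrices K[R|t]
    points_2d   : list of (u, v) pixel detections, one per view
    confidences : per-view detector confidence scores in [0, 1]
    conf_thresh : views below this score (e.g. self-occluded joints) are dropped
    Returns a length-3 array, or None if fewer than two views survive.
    """
    rows = []
    for P, (u, v), c in zip(proj_mats, points_2d, confidences):
        if c < conf_thresh:                 # discard unreliable, likely occluded views
            continue
        rows.append(c * (u * P[2] - P[0]))  # DLT row for u, weighted by confidence
        rows.append(c * (v * P[2] - P[1]))  # DLT row for v, weighted by confidence
    if len(rows) < 4:                       # need at least two views (four equations)
        return None
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)             # homogeneous least-squares solution
    X = vt[-1]
    return X[:3] / X[3]                     # de-homogenize
```

In this sketch, views whose detection confidence falls below the threshold (typically the self-occluded joints) are simply excluded before triangulation, which is one straightforward way to keep occlusion-induced mis-detections from corrupting the reconstructed 3D keypoints.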