
Research on the Control Method for Grasping Tasks Oriented to Target Objects

Posted on: 2021-09-08  Degree: Master  Type: Thesis
Country: China  Candidate: S F Liu  Full Text: PDF
GTID: 2518306353953309  Subject: Mechanical and electrical engineering
Abstract/Summary:
Deep-learning-based robotic grasping algorithms can control a robotic arm to grasp small objects accurately. However, because of the size limits of the end effector, a single robotic arm cannot reliably grasp large objects; these must be handled through cooperative dual-arm operation, which in this work is driven by somatosensory interaction. Through somatosensory interaction, a human operator controlling the robot can complete manipulation tasks on large objects, but for relatively small objects the limited somatosensory accuracy keeps the grasping success rate low. In this thesis, a teleoperated somatosensory control scheme is therefore used to grasp large objects, and a deep-learning grasping system is used to grasp small objects. The main research contents are as follows:

(1) A complete somatosensory interaction framework is built under the Robot Operating System (ROS) using a Kinect depth camera, a Baxter robot, a computer, and a router. Human skeleton information is obtained through the Kinect; based on ROS topic communication, the UDPROS protocol is selected to transmit the joint data to the computer, and a hybrid filter is designed to smooth the joint data and eliminate joint jitter (a minimal filtering sketch is given after this abstract).

(2) The D-H method of robot kinematics is used to model and analyze both the robotic arm and the human arm. A space vector method computes the human joint angles, and the maximum ranges of motion are used for angle mapping, realizing human-to-robot dual-arm motion transfer; somatosensory follow-up experiments verify the effectiveness of the system (see the joint-angle sketch below).

(3) A target grasping system based on deep learning is established. Under the Darknet framework, a pre-trained YOLOv3 model is fine-tuned on our own data samples, which are collected in both real and virtual environments and labeled with LabelImg; this strengthens the network's learning and finally yields a high-accuracy YOLOv3 target detection model. The model accurately provides the bounding-box position of the target object; the virtual Kinect is modeled and analyzed to obtain the object's three-dimensional coordinates, the joint angles are solved through the robot's inverse kinematics, and the robotic arm then performs the grasping action (see the back-projection sketch below).

(4) A complete experimental platform is set up and grasping experiments are performed in a virtual environment. Through the somatosensory interaction system, the human operator drives both arms to grasp larger objects cooperatively; through the deep-learning target grasping system, the robot's end effector accurately grasps smaller objects.
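The thesis describes the joint smoothing in step (1) only as a "hybrid filter". The following is a minimal sketch of one plausible composition, assuming a sliding median (to reject Kinect skeleton spikes) followed by an exponential moving average (to damp residual jitter); the window size and smoothing factor are illustrative values, not parameters taken from the thesis.

```python
# Hypothetical hybrid joint-smoothing filter: sliding median + exponential
# moving average over one joint's 3D position from the Kinect skeleton.
from collections import deque
import numpy as np

class HybridJointFilter:
    def __init__(self, window=5, alpha=0.3):
        self.window = deque(maxlen=window)  # recent raw joint positions
        self.alpha = alpha                  # EMA weight of the newest sample
        self.ema = None                     # current smoothed estimate

    def update(self, joint_xyz):
        """joint_xyz: (3,) x, y, z of one joint; returns the smoothed position."""
        self.window.append(np.asarray(joint_xyz, dtype=float))
        median = np.median(np.stack(self.window), axis=0)  # spike rejection
        if self.ema is None:
            self.ema = median
        else:
            self.ema = self.alpha * median + (1.0 - self.alpha) * self.ema
        return self.ema

# Usage: feed each new Kinect sample for, e.g., the right elbow.
# filt = HybridJointFilter()
# smoothed = filt.update([0.12, 0.45, 1.80])
```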
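For step (2), a space vector method computes a human joint angle as the angle between two limb vectors formed from skeleton points, which is then mapped onto the corresponding robot joint range. The sketch below illustrates this for an elbow; the robot limits shown approximate Baxter's elbow (e1) joint and are used only for illustration, and the joint zero conventions and mapping table are assumptions rather than the thesis's actual mapping.

```python
# Space-vector joint angle from skeleton points, plus linear range mapping.
import numpy as np

def vector_angle(p_from, p_mid, p_to):
    """Angle (rad) at p_mid between the vectors p_mid->p_from and p_mid->p_to."""
    u = np.asarray(p_from, float) - np.asarray(p_mid, float)
    v = np.asarray(p_to, float) - np.asarray(p_mid, float)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_t, -1.0, 1.0))

def map_to_robot(angle, human_range=(0.0, np.pi), robot_range=(-0.05, 2.618)):
    """Linearly rescale a human joint angle into the robot joint's limits."""
    h_lo, h_hi = human_range
    r_lo, r_hi = robot_range
    ratio = (angle - h_lo) / (h_hi - h_lo)
    return r_lo + ratio * (r_hi - r_lo)

# Example: elbow flexion from smoothed shoulder/elbow/wrist positions
# (0 rad = straight arm), mapped to an assumed Baxter e1 range.
# elbow_flexion = np.pi - vector_angle(shoulder_xyz, elbow_xyz, wrist_xyz)
# baxter_e1     = map_to_robot(elbow_flexion)
```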
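For step (3), the object's three-dimensional coordinates are recovered from the YOLOv3 bounding box and the (virtual) Kinect depth image. A minimal back-projection sketch using the standard pinhole model is given below; the intrinsics FX, FY, CX, CY are hypothetical example values, not calibration results from the thesis.

```python
# Back-project the bounding-box centre into the camera frame using depth.
import numpy as np

FX, FY = 525.0, 525.0   # assumed focal lengths in pixels
CX, CY = 319.5, 239.5   # assumed principal point

def bbox_to_camera_xyz(bbox, depth_image):
    """bbox: (x_min, y_min, x_max, y_max) in pixels; depth_image in metres."""
    x_min, y_min, x_max, y_max = bbox
    u = int((x_min + x_max) / 2)        # bounding-box centre, pixel column
    v = int((y_min + y_max) / 2)        # bounding-box centre, pixel row
    z = float(depth_image[v, u])        # depth at the centre pixel
    x = (u - CX) * z / FX               # pinhole back-projection
    y = (v - CY) * z / FY
    return np.array([x, y, z])
```

The camera-frame point would then be transformed into the robot base frame and passed to the arm's inverse kinematics to obtain the grasp joint angles, as described in the abstract.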
Keywords/Search Tags: depth camera, convolutional neural network, somatosensory interaction, motion transfer, robotic grasping