
Research on Robot Grasping System Based on Visual Detection

Posted on: 2021-11-12
Degree: Master
Type: Thesis
Country: China
Candidate: S C Ma
Full Text: PDF
GTID: 2518306464977919
Subject: Control Engineering
Abstract/Summary:
In recent years, basic research on and industrialization of intelligent robot grasping technology have developed rapidly. Grasping robots can replace humans in high-intensity work and have high application value in industrial sorting, agricultural picking and other fields, so robot grasping has become a research hotspot. Because grasping a target object is affected by the object's shape and pose and by a complex environment, the task is difficult to solve with traditional analytical formulas. Most previous studies relied on computer vision and robotics techniques alone; although these achieve a certain effect, their degree of intelligence is still not high. A grasping operation requires the robot to perceive the environment, process the acquired sensor information, and finally make a decision and complete the grasp. Deep learning performs well in target perception and can raise the intelligence level of robot grasping. This thesis therefore combines deep learning with computer vision and robotics, takes object perception, localization and robotic grasping as the research goal, and carries out research on robot kinematics, camera calibration and hand-eye calibration. The main contents are as follows:

(1) For robot kinematics, the D-H model and the forward and inverse kinematics of a six-DOF manipulator are studied and analyzed, and a control method for the manipulator is obtained. Control of the manipulator through MoveIt! is also studied, using the Action communication mechanism to drive the arm (see the kinematics and MoveIt! sketches after this list).

(2) Calibration of the vision system formed by the depth camera and the manipulator is studied. Camera calibration is completed with Zhang Zhengyou's method: the color camera is calibrated with the MATLAB calibration toolbox, the positional relationship between the color camera and the infrared camera is calibrated with the calibration library supplied with the depth camera, the reprojection error and distortion are analyzed, and the camera intrinsics are obtained. An Eye-to-hand system is then constructed, and the transformation between the camera frame and the manipulator base frame is obtained by detecting an ArUco marker and applying the Tsai hand-eye calibration method (see the calibration and hand-eye sketches after this list).

(3) Visual detection and localization for the manipulator are studied. After comparing template matching with deep-learning-based object detection, deep learning with YOLOv3 is selected. To reduce computational complexity and improve detection speed while maintaining high accuracy, two improvements are made to YOLOv3: the lightweight MobileNet network replaces the Darknet-53 backbone, and the focal loss replaces the traditional cross-entropy loss (the focal loss is given after this list). The improved YOLOv3 model detects the object and returns its pixel coordinates and depth value in the image.

(4) Object detection and grasping tests are carried out in a real environment. The pixel coordinates and depth value of the target are obtained from the improved YOLOv3 model and the depth sensor, the target position in the camera frame is recovered by coordinate transformation, that position is transferred to the robot base frame through the hand-eye calibration result, and finally the inverse kinematics solution drives the manipulator to grasp the target (see the back-projection sketch after this list).
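The abstract does not reproduce the kinematic equations of item (1); as a reference, the standard D-H link transform and forward-kinematics product that such a model builds on are (standard textbook form, not copied from the thesis):

    \[
    T_{i-1}^{\,i} = \mathrm{Rot}_z(\theta_i)\,\mathrm{Trans}_z(d_i)\,\mathrm{Trans}_x(a_i)\,\mathrm{Rot}_x(\alpha_i)
    = \begin{bmatrix}
        \cos\theta_i & -\sin\theta_i\cos\alpha_i &  \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\
        \sin\theta_i &  \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\
        0            &  \sin\alpha_i             &  \cos\alpha_i             & d_i \\
        0            &  0                        &  0                        & 1
      \end{bmatrix},
    \qquad
    T_{0}^{\,6} = \prod_{i=1}^{6} T_{i-1}^{\,i}.
    \]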
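A minimal sketch of the MoveIt!-based control in item (1), using the standard moveit_commander Python interface (which talks to the move_group node over ROS Actions); the planning-group name "manipulator" and the target pose are assumptions, not values from the thesis:

    import sys
    import rospy
    import moveit_commander
    from geometry_msgs.msg import Pose

    # Connect to MoveIt!'s move_group node; planning-group name is an assumption.
    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("grasp_demo", anonymous=True)
    arm = moveit_commander.MoveGroupCommander("manipulator")

    # Hypothetical Cartesian goal for the end effector.
    target = Pose()
    target.position.x, target.position.y, target.position.z = 0.4, 0.0, 0.3
    target.orientation.w = 1.0          # identity orientation for the sketch
    arm.set_pose_target(target)

    arm.go(wait=True)                   # plan and execute (Action-based under the hood)
    arm.stop()
    arm.clear_pose_targets()
    moveit_commander.roscpp_shutdown()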
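For item (2), the thesis calibrates the color camera with the MATLAB toolbox; an equivalent Zhang-style calibration sketch with OpenCV is shown below. The 9x6 chessboard, 25 mm square size and image folder are assumptions:

    import glob
    import cv2
    import numpy as np

    # Zhang-style intrinsic calibration sketch (OpenCV stand-in for the MATLAB toolbox).
    pattern = (9, 6)                                  # inner-corner count, assumed
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025

    obj_pts, img_pts = [], []
    for path in glob.glob("calib/*.png"):             # hypothetical image folder
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)

    # Mean reprojection error, the quantity analysed in the thesis.
    err = 0.0
    for i in range(len(obj_pts)):
        proj, _ = cv2.projectPoints(obj_pts[i], rvecs[i], tvecs[i], K, dist)
        err += cv2.norm(img_pts[i], proj, cv2.NORM_L2) / len(proj)
    print("intrinsics:\n", K, "\nmean reprojection error:", err / len(obj_pts))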
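The Eye-to-hand calibration of item (2) can be sketched with OpenCV's Tsai solver and ArUco pose estimation; the marker dictionary, 5 cm marker size and the pose-sample container are assumptions, ArUco API names vary slightly across OpenCV versions, and K, dist are taken from the calibration sketch above:

    import cv2

    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

    R_base2grip, t_base2grip = [], []    # robot poses, inverted for the eye-to-hand case
    R_tgt2cam, t_tgt2cam = [], []        # marker poses seen by the fixed camera

    # samples: hypothetical list of (gray_image, (R_gripper2base, t_gripper2base)) pairs.
    for gray, (R_g2b, t_g2b) in samples:
        corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
        rvec, tvec, _ = cv2.aruco.estimatePoseSingleMarkers(corners, 0.05, K, dist)
        R_tgt2cam.append(cv2.Rodrigues(rvec[0])[0])
        t_tgt2cam.append(tvec[0].reshape(3, 1))
        # Fixed camera: feed base->gripper, i.e. the inverse of the measured gripper->base pose.
        R_base2grip.append(R_g2b.T)
        t_base2grip.append(-R_g2b.T @ t_g2b)

    # With inverted robot poses, the Tsai solver returns the camera->base transform.
    R_cam2base, t_cam2base = cv2.calibrateHandEye(
        R_base2grip, t_base2grip, R_tgt2cam, t_tgt2cam,
        method=cv2.CALIB_HAND_EYE_TSAI)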
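The "focal loss" that replaces cross entropy in item (3) has the standard form below (from Lin et al.; the common defaults gamma = 2, alpha = 0.25 are not necessarily the values chosen in the thesis):

    \[
    \mathrm{FL}(p_t) = -\,\alpha_t\,(1 - p_t)^{\gamma}\,\log(p_t),
    \qquad
    p_t = \begin{cases} p, & y = 1,\\ 1 - p, & \text{otherwise.} \end{cases}
    \]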
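Finally, a minimal sketch of the coordinate chain in item (4): the detected pixel (u, v) and its depth Z are back-projected with the pinhole model and mapped into the robot base frame. T_cam2base is assumed to be the 4x4 homogeneous matrix assembled from the hand-eye result above; the example numbers are made up, not thesis data:

    import numpy as np

    def pixel_to_base(u, v, Z, K, T_cam2base):
        """Pixel + depth -> 3-D point in the robot base frame (pinhole back-projection)."""
        fx, fy = K[0, 0], K[1, 1]
        cx, cy = K[0, 2], K[1, 2]
        X = (u - cx) * Z / fx                      # camera-frame coordinates
        Y = (v - cy) * Z / fy
        p_cam = np.array([X, Y, Z, 1.0])
        return (T_cam2base @ p_cam)[:3]            # grasp target in the base frame

    # Example call with made-up values:
    # grasp_xyz = pixel_to_base(320, 240, 0.55, K, T_cam2base)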
Keywords/Search Tags: Robot Kinematics, Hand-eye Calibration, Deep Learning, Object Detection and Localization, Object Grasping