
Research On Robot Grasping Posture Estimation Based On Vision Guidance

Posted on: 2022-07-21
Degree: Master
Type: Thesis
Country: China
Candidate: Q Lu
Full Text: PDF
GTID: 2518306551480744
Subject: Instrument Science and Technology

Abstract/Summary:
Grasping is one of the main operations performed by robots in human-robot collaborative systems. Using vision to give the robot scene understanding and to make grasping more intelligent is a current research hotspot in robot applications. However, existing vision-guided robot operation still suffers from problems such as overly simple application scenarios, ineffective handling of occlusion, recognition and positioning accuracy that needs improvement, and the high cost of obtaining labeled data. Against this background, this thesis applies deep learning to the estimation of the spatial pose of grasping targets, proposes an object pose estimation method based only on RGB images, and verifies it through physical grasping experiments. The main work of this thesis is as follows:

(1) The development status and open problems of vision-guided robot operation at home and abroad are reviewed. In robot grasping, the commonly used planar grasping method has difficulty grasping target objects in complex scenes, so this thesis grasps by estimating the 6D pose of the object. The usual inputs for 6D pose estimation, point clouds or RGB-D images, are difficult to acquire and computationally expensive, so this thesis designs a scheme that estimates the 6D pose of an object from RGB images alone.

(2) The overall framework of vision-guided robot grasping is designed. A grasping platform is built around the laboratory's ABB robot, the camera's intrinsic parameters are obtained through camera calibration, an "Eye-in-Hand" hand-eye calibration setup is built, and the pose transformation between the camera and the robot is obtained for the actual configuration (a sketch of the hand-eye relation is given after this abstract).

(3) The CDPN (Coordinates-based Disentangled Pose Network) is improved. A CBAM (Convolutional Block Attention Module) is added to CDPN's feature extraction network to introduce channel and spatial attention, suppressing useless features and enhancing useful ones. At the outputs of the feature extraction network and the rotation estimation network, a PPM (Pyramid Pooling Module) extracts information from the feature map at different scales and fuses it into the features of the current pixel, providing a more effective feature representation for per-pixel classification (minimal sketches of both modules are given after this abstract).

(4) An experimental platform for robotic grasping is constructed for verification. A pose estimation dataset and an object detection dataset are built from items in the laboratory, and actual grasping experiments are carried out to verify the reliability and robustness of the algorithm in practical applications.
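Note on item (2): the following is a minimal sketch of the standard eye-in-hand calibration relation; the frame names and symbols are illustrative assumptions and are not taken from the thesis itself.

    % Eye-in-hand calibration: the unknown X is the fixed transform from the robot
    % end-effector (gripper) frame to the camera frame. With the robot at stations i and j,
    % T^{base}_{ee} read from the controller and T^{cam}_{board} measured from a
    % calibration board, the classical AX = XB formulation follows from the board
    % being fixed in the robot base frame:
    \[
      \underbrace{\bigl(T^{\mathrm{base}}_{\mathrm{ee},j}\bigr)^{-1} T^{\mathrm{base}}_{\mathrm{ee},i}}_{A_{ij}}\, X
      \;=\;
      X \,\underbrace{T^{\mathrm{cam}}_{\mathrm{board},j} \bigl(T^{\mathrm{cam}}_{\mathrm{board},i}\bigr)^{-1}}_{B_{ij}}
    \]
    % Solving this system over several robot motions yields X, i.e. the camera-to-robot
    % pose relationship mentioned in item (2).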
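Note on item (3), first module: a minimal PyTorch sketch of the CBAM idea, i.e. channel attention followed by spatial attention applied to a backbone feature map. Module names, channel sizes, and the insertion point are illustrative assumptions, not the thesis code.

    import torch
    import torch.nn as nn


    class ChannelAttention(nn.Module):
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
            )

        def forward(self, x):
            b, c, _, _ = x.shape
            avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
            mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
            scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
            return x * scale                      # reweight channels


    class SpatialAttention(nn.Module):
        def __init__(self, kernel_size: int = 7):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

        def forward(self, x):
            avg = x.mean(dim=1, keepdim=True)     # per-pixel channel average
            mx = x.amax(dim=1, keepdim=True)      # per-pixel channel max
            scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
            return x * scale                      # reweight spatial locations


    class CBAM(nn.Module):
        def __init__(self, channels: int):
            super().__init__()
            self.ca = ChannelAttention(channels)
            self.sa = SpatialAttention()

        def forward(self, x):
            return self.sa(self.ca(x))


    # Example: refining a hypothetical backbone feature map (512 channels, 32x32).
    feat = torch.randn(2, 512, 32, 32)
    refined = CBAM(512)(feat)
    print(refined.shape)  # torch.Size([2, 512, 32, 32])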
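Note on item (3), second module: a minimal PyTorch sketch of the PPM idea, which pools the feature map at several scales, projects each pooled map, upsamples, and concatenates the result with the original features so each pixel also sees multi-scale context. Bin sizes and channel counts are illustrative assumptions, not the thesis configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class PyramidPooling(nn.Module):
        def __init__(self, in_channels: int, bins=(1, 2, 3, 6)):
            super().__init__()
            branch_ch = in_channels // len(bins)
            self.branches = nn.ModuleList(
                nn.Sequential(
                    nn.AdaptiveAvgPool2d(bin_size),                    # pool to bin_size x bin_size
                    nn.Conv2d(in_channels, branch_ch, 1, bias=False),  # project pooled features
                    nn.BatchNorm2d(branch_ch),
                    nn.ReLU(inplace=True),
                )
                for bin_size in bins
            )
            self.out_channels = in_channels + branch_ch * len(bins)

        def forward(self, x):
            h, w = x.shape[2:]
            pyramids = [
                F.interpolate(branch(x), size=(h, w), mode="bilinear", align_corners=False)
                for branch in self.branches
            ]
            return torch.cat([x] + pyramids, dim=1)  # original features + multi-scale context

    # Example: a hypothetical 512-channel feature map gains 4 x 128 context channels.
    feat = torch.randn(2, 512, 32, 32)
    ppm = PyramidPooling(512)
    print(ppm(feat).shape)  # torch.Size([2, 1024, 32, 32])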
Keywords/Search Tags: Vision guidance, Robot grasping, Deep learning, Pose estimation, CDPN