
Research On Key Technologies Of Robot Grasping Based On 3D Vision For Disordered Stacked Parts

Posted on: 2024-09-21    Degree: Doctor    Type: Dissertation
Country: China    Candidate: T J Zhang    Full Text: PDF
GTID: 1528306917496444    Subject: Mechanical engineering
Abstract/Summary:
Intelligent manufacturing is the inevitable path for the transformation, upgrading and development of traditional manufacturing industries, and robotics and artificial intelligence technologies are expected to greatly alleviate the adverse effects of population aging. With policy support for the intelligent manufacturing and robotics industries, traditional manufacturing dominated by human labor is gradually shifting toward a production model dominated by industrial robots. Currently, however, most industrial robots have a low level of intelligence and poor flexibility, making it difficult for them to adapt to complex and changeable scenarios, especially robot grasping. Intelligent manufacturing therefore combines machine vision perception with industrial robots to enhance their perception ability and achieve vision-guided robot grasping. Demand for disorderly robot grasping is increasing in intelligent manufacturing, yet the disorderly stacking and mutual occlusion of parts make it difficult for traditional 2D vision-based grasping to locate parts accurately. How to accurately locate parts in disordered scenes for grasp detection is thus a key problem, often described as a "holy grail" of industrial robot operations, and solving it plays an important role in promoting the development of intelligent manufacturing. Unlike 2D vision, 3D vision acquires the three-dimensional information of the scene and can effectively address this problem. However, owing to severe stacking and occlusion of parts in disordered scenes, and to external light intensities that exceed the camera's dynamic range, 3D visual information suffers from reduced measurement accuracy or incomplete measurements, which greatly degrades the performance of grasp detection algorithms. Research on accurate, efficient and robust grasping algorithms built on high-quality 3D visual information is therefore of great significance for improving work efficiency, reducing production costs and risks, and enhancing production flexibility.

Taking the grasping of disorderly stacked parts in manufacturing as the application background and combining machine vision with artificial intelligence technology, this dissertation studies the key technologies of robot disorderly grasping based on 3D vision, covering depth estimation of stacked parts, suction region prediction, planar grasp detection and six-degree-of-freedom (6-DOF) grasp detection, and builds a robot disorderly grasping platform to verify the feasibility of the algorithms. The main research contents and implementation scheme are summarized as follows.

(1) A robot disorderly grasping platform is built to verify the feasibility of the proposed methods. Firstly, the functional requirements of the platform are determined through a demand analysis. Then, the software and hardware systems of the platform are designed and built according to these requirements, providing the foundation for the subsequent feasibility verification of the proposed methods and the effectiveness verification of the software and hardware systems.

(2) To address the problem of acquiring high-quality depth images, a depth estimation method combining virtual and real-world data is proposed for disordered scenes of stacked parts, bringing depth estimation to industrial settings. An image relighting model is used to learn the illumination of disordered scenes, and the learned illumination is applied to scenes synthesized from part poses and Blender material rendering, generating scene images and ideal depth images close to the real environment and thereby solving the difficulty of obtaining depth datasets. A depth estimation method based on the RGB image and learned surface normals is then proposed to estimate depth for disordered scenes of stacked parts. Experimental results demonstrate that the method overcomes the limitation of traditional algorithms that require multiple views to recover scene depth, generalizes well, and produces high-quality depth images with clear contour information.
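To make the coupling between estimated depth and learned normals concrete, the following is a minimal sketch, not the dissertation's implementation: it derives per-pixel normals from a depth map by back-projecting it with assumed camera intrinsics (fx, fy, cx, cy) and measures their agreement with network-predicted normals, the kind of consistency term that can regularize or sanity-check single-view depth estimation.

```python
# Illustrative sketch (not the dissertation's method): couple a depth map with
# learned surface normals by deriving normals from the depth and comparing them.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (H, W) into a camera-frame point map (H, W, 3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

def normals_from_depth(depth, fx, fy, cx, cy):
    """Estimate per-pixel normals from depth via finite differences on the point map."""
    pts = depth_to_points(depth, fx, fy, cx, cy)
    dx = np.gradient(pts, axis=1)          # tangent along image columns
    dy = np.gradient(pts, axis=0)          # tangent along image rows
    n = np.cross(dx, dy)                   # surface normal from the two tangents
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8
    return n

def normal_consistency(depth, pred_normals, fx, fy, cx, cy):
    """Mean cosine error between depth-derived normals and predicted normals."""
    n_depth = normals_from_depth(depth, fx, fy, cx, cy)
    cos = np.sum(n_depth * pred_normals, axis=-1)
    return float(np.mean(1.0 - np.abs(cos)))   # 0 when the two normal fields agree
```

Such a term can serve either as an auxiliary training loss or as a post-hoc quality check on the estimated depth image.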
(3) To address the weak theoretical grounding of end-to-end suction region prediction models, a knowledge-to-learning suction region prediction method is proposed for disordered scenes. Based on a theoretical analysis of suction, a suction reliability matrix computed from the depth image is proposed, which accounts for the influence of the suction cup model, the part model, the centroid and the depth on suction reliability. A suction region prediction model based on a fully convolutional neural network is then trained on a disordered suction dataset annotated with the suction reliability matrix. Experimental results show that the model can be customized to the suction cup and part models, realizing the knowledge-to-learning transformation, and can be quickly redeployed for disorderly suction of different parts, with short detection time, high accuracy and good generalization.

(4) To resolve the trade-off between detection efficiency and accuracy that separates end-to-end planar grasp detection methods from sampling-and-evaluation methods, as well as the difficulty of labeling datasets, a planar grasp detection method based on self-supervised learning is proposed for disordered scenes, applying self-supervised learning to grasping angle classification. Firstly, a grasping angle classification model based on self-supervised learning is proposed. Secondly, an automatic annotation method for the disordered grasp dataset is developed on top of the grasping angle classification model. On this basis, the proposed fully convolutional grasp detection network is trained on the labeled disordered grasping dataset to predict pixel-level grasping possibility. Finally, a planar grasp detection model with two-stage training and end-to-end detection is obtained for disordered scenes. Simulation and experimental results show that the method combines the advantages of the two families of planar grasp detection methods, offers strong generalization, high efficiency and high accuracy, and resolves the labeling problems caused by the disorderly stacking and self-occlusion of parts and by the rapid turnover of small-batch part categories.
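As an illustration of how pixel-level predictions become executable planar grasps, here is a minimal decoding sketch under assumed output shapes (a quality map and a discretized angle map); it is a generic top-k decoding step, not the dissertation's network head.

```python
# Illustrative sketch: decode a pixel-wise grasp quality map and a per-pixel
# grasp-angle classification map into planar grasp candidates (row, col, theta, score).
# The map shapes and number of angle bins are assumptions for this example.
import numpy as np

def decode_planar_grasps(quality_map, angle_logits, num_bins=18, top_k=5):
    """
    quality_map : (H, W) pixel-level grasping possibility in [0, 1]
    angle_logits: (num_bins, H, W) scores for discretized grasp angles in [0, pi)
    Returns the top_k grasps as (row, col, angle_rad, score).
    """
    h, w = quality_map.shape
    order = np.argsort(quality_map.ravel())[::-1][:top_k]   # best pixels first
    grasps = []
    for idx in order:
        r, c = divmod(int(idx), w)
        bin_id = int(np.argmax(angle_logits[:, r, c]))       # most likely angle bin
        theta = bin_id * np.pi / num_bins
        grasps.append((r, c, theta, float(quality_map[r, c])))
    return grasps
```

In practice a non-maximum suppression step around each selected pixel would keep the candidates from clustering on a single part.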
(5) To address the reachability problem of 6-DOF grasp detection caused by robot motion and bin constraints, as well as the difficulty of dataset acquisition and long detection times, a 6-DOF grasp detection method combining the depth image and the point cloud is proposed. A hybrid collision detection algorithm, combining a custom space partition with AABB-tree collision detection, checks the reachability of candidate 6-DOF grasp configurations transferred into the disordered scene, ensuring reachability under both robot motion and bin constraints. Since the same grasp point and approach vector may correspond to different combinations of in-plane rotation, approach distance and opening width, the method first uses a pixel-level classification model on the depth image to obtain the grasp point and approach vector of the grasp operation, and then uses a multi-label classification model on the point cloud to obtain the in-plane rotation, approach distance and opening width of the end effector (a pose-construction sketch is given after this summary). Experimental results show the feasibility and good generalization of the proposed method.

(6) A disordered scene of metal cylindrical parts is selected as the experimental scene to test the robot disorderly grasping platform. The experimental results verify the feasibility of the proposed methods and the effectiveness of the software and hardware systems of the robot disorderly grasping platform.
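The 6-DOF parameterization in contribution (5), a grasp point, an approach vector, an in-plane rotation, an approach distance and an opening width, can be assembled into a homogeneous gripper pose. The sketch below uses one common convention (gripper z-axis along the approach vector) and is an illustration rather than the dissertation's exact formulation.

```python
# Illustrative sketch: build a 4x4 gripper pose from a grasp point, a unit approach
# vector, an in-plane rotation and an approach distance (convention assumed, not
# taken from the dissertation). The opening width is handled separately by the gripper.
import numpy as np

def grasp_pose(grasp_point, approach, in_plane_rot, approach_dist):
    """
    grasp_point  : (3,) target point on the part, in the camera/world frame
    approach     : (3,) vector from the gripper toward the part
    in_plane_rot : rotation (rad) of the gripper jaws about the approach axis
    approach_dist: stand-off distance along -approach before closing in
    """
    a = np.asarray(approach, dtype=float)
    a /= np.linalg.norm(a)
    ref = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(ref, a)) > 0.9:                 # avoid a near-parallel reference
        ref = np.array([0.0, 1.0, 0.0])
    b = np.cross(ref, a)
    b /= np.linalg.norm(b)                        # first jaw direction, orthogonal to a
    c = np.cross(a, b)                            # second in-plane direction
    # Rotate the jaw direction about the approach axis by the in-plane rotation.
    x_axis = np.cos(in_plane_rot) * b + np.sin(in_plane_rot) * c
    y_axis = np.cross(a, x_axis)
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2] = x_axis, y_axis, a
    T[:3, 3] = np.asarray(grasp_point) - approach_dist * a
    return T
```

Candidate poses built this way would then be filtered by the collision and reachability checks before being sent to the robot.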
Keywords/Search Tags: Stacked parts, disorderly grasping, depth estimation, suction region prediction, planar grasp detection, 6-DOF grasp detection