
Study On 3D Object Recognition And Localization Based On Robot Virtual Binocular

Posted on: 2019-07-25    Degree: Master    Type: Thesis
Country: China    Candidate: L W Xue    Full Text: PDF
GTID: 2428330545973298    Subject: Mechanical engineering
Abstract/Summary:
The recognition and localization of 3D objects has long been a difficult problem in the field of automation. Existing approaches include binocular stereo vision, fringe structured-light assistance, line-laser scanning, fixed-point ranging, ultrasonic distance measurement, and nuclear magnetic resonance. This thesis studies a robot-based virtual binocular system built on an eye-in-hand configuration, in which a CCD camera is rigidly mounted on a six-degree-of-freedom robot, and proposes it as a new method for 3D object recognition and localization. The robot swings the camera, whose pose relative to the end effector is fixed, to two different positions and captures an image of the target object at each. Features of the target object are identified in both images, stereo matching is performed on these features, and the three-dimensional information of the object is recovered, thereby realizing 3D object recognition and localization. Compared with existing methods, the virtual binocular approach offers a simple structure, flexible motion, and wide applicability.

First, the working principle of the robot-based virtual binocular system is described theoretically. The principle of binocular stereo 3D reconstruction is combined with robot control: the CCD camera is swung to two different positions to capture images, and, by the triangulation principle, the differing image coordinates of the same point in the two views are used to recover its actual position in space. The camera calibration involved in this process is carried out, the hand-eye calibration between the camera and the robot is analyzed and demonstrated theoretically, and a theoretical analysis is given of how the spatial coordinates of a point are obtained from its two image projections.

Second, to obtain better matching results and achieve 3D reconstruction of the object, the locations of the two shots must be planned. A convolutional neural network within a deep learning framework is used to estimate the object pose from the first image. A set of training samples is constructed under manual supervision, and the model parameters are optimized on these samples. The pose estimated from the first actual image is then used to plan the robot's posture for capturing the second image.

Finally, 3D matching theory is studied: how feature points are recognized in the images, how epipolar geometry is applied to match feature points between the two images, and how the 3D reconstruction of the object is then carried out. The error sources affecting the 3D reconstruction are analyzed, methods for eliminating their influence are investigated, and the research is demonstrated by experiment.

In summary, the thesis studies the measurement principle of the robot-based virtual binocular, the planning of the robot's pose and posture, the principles of 3D matching and 3D reconstruction, and the error sources and their compensation in 3D reconstruction, ultimately realizing 3D object recognition and localization.
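As a concrete illustration of the triangulation step summarized above, the following is a minimal sketch in Python with OpenCV. The intrinsic matrix, the two camera poses, and the matched pixel coordinates are illustrative placeholders, not values from the thesis.

```python
import numpy as np
import cv2

# Assumed camera intrinsics (focal length and principal point in pixels).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Camera pose at the two shooting positions (world -> camera), taken here as
# known from the robot kinematics and the hand-eye calibration.
R1, t1 = np.eye(3), np.zeros((3, 1))
R2 = cv2.Rodrigues(np.array([[0.0], [0.2], [0.0]]))[0]   # small rotation about Y
t2 = np.array([[-0.1], [0.0], [0.0]])                    # ~10 cm baseline

P1 = K @ np.hstack([R1, t1])   # 3x4 projection matrix for the first shot
P2 = K @ np.hstack([R2, t2])   # 3x4 projection matrix for the second shot

# Pixel coordinates of the same matched feature point in the two images.
pt1 = np.array([[350.0], [260.0]])
pt2 = np.array([[300.0], [258.0]])

# Triangulate: recover the point's homogeneous world coordinates, then normalize.
X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)
X = (X_h[:3] / X_h[3]).ravel()
print("Reconstructed 3D point:", X)
```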
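The hand-eye calibration between the camera and the robot can likewise be sketched with OpenCV's `cv2.calibrateHandEye`. The abstract does not detail the thesis's own calibration procedure, so the robot poses and target observations below are synthetic data generated from an assumed ground-truth transform, purely to show the call pattern.

```python
import numpy as np
import cv2

def T(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = np.asarray(t).ravel()
    return M

# Assumed ground-truth camera-to-gripper transform, used only to synthesize data.
X = T(cv2.Rodrigues(np.array([[0.1], [0.2], [0.05]]))[0], [0.03, 0.01, 0.08])

# Assumed fixed pose of the calibration target in the robot base frame.
target = T(cv2.Rodrigues(np.array([[0.0], [0.0], [0.3]]))[0], [0.5, 0.1, 0.2])

rng = np.random.default_rng(0)
R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):
    # Gripper pose reported by the robot controller at this shot.
    G = T(cv2.Rodrigues(rng.uniform(-0.5, 0.5, (3, 1)))[0], rng.uniform(-0.3, 0.3, 3))
    R_g2b.append(G[:3, :3])
    t_g2b.append(G[:3, 3].reshape(3, 1))
    # Target pose seen by the camera (in practice obtained with cv2.solvePnP).
    C = np.linalg.inv(G @ X) @ target
    R_t2c.append(C[:3, :3])
    t_t2c.append(C[:3, 3].reshape(3, 1))

R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print("Estimated camera-to-gripper rotation:\n", R_est)
print("Estimated camera-to-gripper translation:", t_est.ravel())
```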
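The epipolar-geometry-based feature matching can be sketched as follows, assuming the relative camera motion between the two shots is known from the robot. The ORB detector, the image file names, and the tolerance on the epipolar residual are assumptions for illustration, not the thesis's exact choices.

```python
import numpy as np
import cv2

# Assumed intrinsics and relative motion between the two shots (from the robot).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = cv2.Rodrigues(np.array([[0.0], [0.2], [0.0]]))[0]
t = np.array([-0.1, 0.0, 0.0])

# Fundamental matrix from the known pose: F = K^-T [t]_x R K^-1.
t_x = np.array([[0.0, -t[2], t[1]],
                [t[2], 0.0, -t[0]],
                [-t[1], t[0], 0.0]])
F = np.linalg.inv(K).T @ t_x @ R @ np.linalg.inv(K)

# Detect and match ORB features in the two images (file names are placeholders).
img1 = cv2.imread("shot_position_1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("shot_position_2.png", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

# Keep only matches whose homogeneous pixel coordinates nearly satisfy the
# epipolar constraint x2^T F x1 = 0 (the tolerance is an assumed value).
good = []
for m in matches:
    x1 = np.append(kp1[m.queryIdx].pt, 1.0)
    x2 = np.append(kp2[m.trainIdx].pt, 1.0)
    if abs(x2 @ F @ x1) < 0.05:
        good.append(m)
print(f"{len(good)} of {len(matches)} matches fit the epipolar geometry")
```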
Keywords/Search Tags: Eye-in-hand system, Calibration, Stereo vision, Three-dimensional reconstruction