
3D Visual Perception For Robotic Flexible Spray-painting Production Line

Posted on: 2023-01-14
Degree: Doctor
Type: Dissertation
Country: China
Candidate: J H Ge
Full Text: PDF
GTID: 1521307097974149
Subject: Control Science and Engineering
Abstract/Summary:
With the rapid development of intelligent manufacturing, robotic flexible spray-painting production lines have been widely adopted in industrial manufacturing for their efficiency, flexibility, environmental friendliness and safety. However, the traditional fixed-operation mode, designed for large quantities of standard products, struggles to meet the growing demand for customization characterized by small batches and multiple varieties. To achieve flexible spray-painting, large enterprises employ 3D visual perception technology to acquire workpiece topography features and to perform workpiece identification and spray-path planning online, but the exorbitant cost of the imaging equipment and technology has deterred many small and medium-sized enterprises (SMEs). To lower the cost and technical barriers of a robotic flexible spray-painting production line, this dissertation proposes a set of online 3D measurement solutions based on low-cost RGB-D cameras, focusing on 3D imaging, workpiece recognition, workpiece reconstruction, pose estimation and other key visual perception technologies, and providing support for robot path planning and autonomous spray-painting operation. The main contents and contributions are as follows:

1) Aiming at the workpiece-measurement requirements of robotic flexible spray-painting, an online 3D imaging system for pipeline workpieces was designed based on low-cost RGB-D cameras. The system employs two RGB-D cameras installed on opposite sides to capture data streams from a moving workpiece in all directions in real time without affecting the production takt time. To achieve high-quality imaging, the system is calibrated in two respects. First, a compensation method based on multi-error-factor modeling is proposed for the depth measurement error caused by the low precision of the RGB-D camera. Using the measurement principle of the RGB-D camera and the geometric attributes of the workpiece, this method analyzes the influence of changes in viewing angle and spatial distance on the depth error, and compensates the measured values by multivariate high-order modeling. Second, to address the extrinsic calibration of camera systems without an overlapping field of view, an intersection-coplanar joint optimization method is proposed based on a double-sided calibration plate designed around the consistency of corresponding points in physical space. By establishing a distance-error objective function constrained by point-line coplanarity and a prior physical model, the reference points are optimized to achieve accurate calibration of the extrinsic parameters of the imaging system.

2) To meet the need to recognize small-batch, multi-variety workpieces on a robotic spray-painting production line, a recognition method combining multi-modal features was proposed to distinguish workpieces with similar structures. First, a Mask R-CNN model trained offline on an augmented small-sample dataset clusters similar workpieces, exploiting its invariance to scale and orientation in 2D images. Then, building on this preliminary recognition and positioning, the coarsely classified workpieces are further recognized accurately through key-point detection, vectorization and feature matching, exploiting the strong discriminative power of 3D point cloud features for local details. To address mismatched features between similar workpieces, a step-by-step verification algorithm is proposed for corresponding feature pairs: the linear correlation of the vectorized features serves as the initial matching criterion, and the consistency of the topological structure and the spatial transformation relationship are then imposed as constraints to achieve accurate matching of the 3D features. In experiments, more than 1000 workpieces were used to verify the proposed algorithm. The results show a recognition accuracy as high as 99.26%, with a running time of less than 1.5 s.

3) An online 3D reconstruction method based on structure from motion (SfM) and multi-view RGB-D image fusion was proposed to address the lack of workpiece models for autonomous robot programming on a flexible spray-painting production line. It uses the multi-view data stream to achieve complete modeling of the sprayed workpiece through workpiece segmentation, pose estimation and point cloud fusion. First, an RGB-D workpiece segmentation algorithm guided by the color image is proposed. Exploiting the accuracy and reliability of the workpiece's appearance in the gray and spatial domains of the color image, a Background Matting model segments the workpiece accurately in the color image; this segmentation then serves as the mask for guided filtering to enhance the corresponding noise-disturbed depth map, achieving fast, high-quality segmentation of the workpiece. Then, a locally improved ICP with global closed-loop optimization is proposed to quickly estimate the workpiece pose across multiple frames. Drawing on the geometric properties of the workpiece and its approximately linear motion, the algorithm improves ICP to estimate the workpiece pose between adjacent frames, and then globally optimizes the accumulated pose error over multiple frames under the constraint of minimum distance between paired points, achieving accurate pose estimation. Finally, based on the spatial probability density distribution of the point cloud and the correlation between point cloud quality and viewing-angle change, a jointly weighted point cloud fusion algorithm is proposed to filter noise and erroneous points from the aligned point clouds and achieve high-quality fusion of the multi-view clouds. Experiments show that the modeling error is less than 5 mm and the running time is about 1.5 s, which meets actual production requirements.

4) A pose measurement method based on the enhanced deep-learning feature En-3DMatch was proposed to calculate the pose deviation between a workpiece and the reference during robot operation. First, an automatic labeling method is proposed to address the low labeling efficiency of 3D datasets. Based on the internal mapping from the 2D image to the 3D point cloud in RGB-D data, a large number of extracted and paired ORB feature points are transformed into degraded, matched 3D feature point patches that serve as labels; the labeled data are then used to train a 3D ConvNet model to obtain the more descriptive and discriminative En-3DMatch feature. Then, to suppress the interference of mismatched points and outliers among the feature pairs, the AdaLAM algorithm for 2D images is extended to 3D point clouds to accurately filter mismatched features. Finally, by minimizing the distance between paired feature points, the workpiece pose is accurately measured by nonlinear optimization. Experiments show that the proposed algorithm measures workpiece poses accurately and quickly: with data collected by a low-resolution consumer RGB-D camera, the measurement error is 4.022 mm and the running time is less than 1 s.
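The depth-compensation step in contribution 1 can be sketched as a least-squares fit of a multivariate polynomial error surface over viewing angle and distance. The synthetic calibration data, the second-order model and the `compensate` helper below are illustrative assumptions; the dissertation does not publish its exact error model.

```python
import numpy as np

def design_matrix(dist, ang, order=2):
    """Multivariate polynomial terms dist^i * ang^j with i + j <= order."""
    cols = [dist**i * ang**j
            for i in range(order + 1)
            for j in range(order + 1 - i)]
    return np.stack(cols, axis=-1)

# Hypothetical calibration data: depth error (mm) observed at known
# distances (m) and viewing angles (rad) against a reference target.
rng = np.random.default_rng(0)
dist = rng.uniform(0.5, 2.0, 200)
ang = rng.uniform(0.0, 0.6, 200)
true_err = 3.0 * dist**2 + 5.0 * dist * ang   # stand-in ground truth
err = true_err + rng.normal(0.0, 0.05, 200)   # sensor noise

# Least-squares fit of the error surface.
A = design_matrix(dist, ang)
coef, *_ = np.linalg.lstsq(A, err, rcond=None)

def compensate(depth_mm, dist_m, ang_rad):
    """Subtract the modeled error from a raw depth reading."""
    return depth_mm - design_matrix(np.asarray(dist_m), np.asarray(ang_rad)) @ coef

# Correct one raw reading taken at 1.0 m, 0.3 rad incidence.
corrected = compensate(1500.0, 1.0, 0.3)
```

In practice the model order and the set of error factors would be chosen from the calibration residuals rather than fixed at two.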
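Contributions 3 and 4 both estimate a rigid pose by minimizing the distance between paired points; the inner step of each ICP iteration, like the final alignment of matched 3D features, reduces to the SVD (Kabsch) closed form sketched below. The synthetic "frames" and the `rigid_align` name are assumptions for illustration, not the dissertation's implementation.

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t with R @ p + t ≈ q
    for paired rows of P and Q, via the SVD (Kabsch) closed form."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Hypothetical check: recover a known motion between two point sets.
rng = np.random.default_rng(1)
P = rng.normal(size=(50, 3))
a = 0.4
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
t_true = np.array([0.1, -0.2, 0.05])
Q = P @ Rz.T + t_true

R, t = rigid_align(P, Q)
residual = np.abs(Q - (P @ R.T + t)).max()   # ~0 on noiseless pairs
```

Within ICP this solve alternates with re-pairing points by nearest neighbor; the dissertation additionally constrains the pairing using the workpiece's approximately linear motion.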
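The jointly weighted fusion of contribution 3 can be illustrated with a toy example: aligned observations of one surface point are averaged with weights that fall off as the view tilts away from the surface normal. The cosine-squared weighting and the noise model are assumptions for illustration; the dissertation's joint weights also draw on the point cloud's spatial probability density, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)
true_pt = np.array([0.10, 0.20, 1.50])        # meters

# Five aligned views observe the same point; depth noise grows as the
# incidence angle (view direction vs. surface normal) increases.
angles = np.deg2rad([5, 15, 30, 50, 70])
noise_sd = 0.002 / np.cos(angles)             # oblique views are noisier
obs = true_pt + rng.normal(0.0, 1.0, (5, 3)) * noise_sd[:, None]

# Angle-derived weights: trust near-frontal observations more.
w = np.cos(angles) ** 2
fused = (w[:, None] * obs).sum(axis=0) / w.sum()

err_fused = np.linalg.norm(fused - true_pt)   # a few millimeters at most
```

A real pipeline would apply such weights per point over the whole aligned cloud, and reject samples whose weight falls below a threshold as noise or error points.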
Keywords/Search Tags: Flexible Production Line, Robotic Spray-Painting, 3D Visual Perception, RGB-D Sensor, 3D Imaging System, Workpiece Recognition, 3D Reconstruction, Pose Estimation