Assembly robots have long been widely used in industrial production. With the advent of intelligent manufacturing, assembly robots guided by vision technology have received increasing attention in both theoretical research and actual production. At present, vision-guided manipulators usually recognize and locate target parts with traditional image detection techniques such as geometric matching, feature analysis and pattern learning. However, these methods are only suitable for cases where the part contour is simple or the features are easy to extract. Under complicated conditions, with parts placed randomly and environmental factors varying, traditional target detection algorithms based on hand-crafted features have great limitations and often suffer from low recognition accuracy, slow detection speed and large positioning error. To achieve accurate recognition and positioning of parts by assembly robots in complex situations, and drawing on the powerful representation and modeling capabilities of deep learning, deep learning techniques are applied to the robot vision system to improve the robustness of assembly robot applications. Images of the parts are collected by an industrial camera, a trained deep neural network model detects each image to obtain the category and position of the target part, and the coordinates of the part in the robot base coordinate system are then obtained through camera calibration and hand-eye calibration.

The main work is as follows:

(1) By comparing the characteristics of the mainstream deep learning target detection algorithms, the YOLOv3 algorithm, which performs well in both real-time speed and detection accuracy, is selected. At the same time, because small parts are prone to feature loss under repeated convolution operations and because the application scene may change, the Darknet-53 feature extraction network in YOLOv3 is optimized and improved.

(2) Enough images of the parts in realistic complex situations are collected, and the data set is completed with an annotation tool. A K-means clustering algorithm based on particle swarm optimization is used to statistically analyze the rectangular annotation boxes in the data set and obtain the number and sizes of the anchor boxes required for training.

(3) Based on Zhang Zhengyou's calibration algorithm, camera calibration taking multiple distortion factors into account is completed, and the camera parameters and distortion coefficients are solved. Hand-eye calibration is performed according to how the camera is mounted on the robot, yielding the transformation between the camera coordinate system and the robot base coordinate system. Combining the two calibration results gives the mapping between the part coordinate system and the robot base coordinate system. Finally, the hardware of the assembly robot part positioning system is introduced and the software system is designed based on the preceding work.
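The anchor-box statistics described in (2) can be sketched with plain K-means over box widths and heights using a 1 − IoU distance, as is common for YOLO-family detectors. This is a minimal sketch: the particle swarm initialization used in the work above is replaced by random seeding for brevity, and all function names are illustrative.

```python
import numpy as np

def iou_wh(boxes, centroids):
    # IoU between boxes (N, 2) and centroids (K, 2), both given as
    # (width, height) pairs anchored at a common corner.
    inter = np.minimum(boxes[:, None, 0], centroids[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], centroids[None, :, 1])
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (centroids[:, 0] * centroids[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    # Cluster annotation-box sizes into k anchors; distance = 1 - IoU,
    # so assignment picks the centroid with the highest IoU.
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, centroids), axis=1)
        new = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    # Return anchors sorted by area, small to large.
    return centroids[np.argsort(centroids.prod(axis=1))]
```

In a real pipeline the `boxes` array would be filled from the annotated data set, and the resulting anchors written into the YOLOv3 configuration.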
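The chain of transformations in (3) — pixel coordinates to camera frame via the intrinsics, then camera frame to robot base frame via the hand-eye result — can be illustrated with a small back-projection helper. This is a sketch under stated assumptions: the intrinsic matrix `K` and the homogeneous transform `T_base_cam` are assumed to come from the camera and hand-eye calibrations described above, distortion is assumed already corrected, and the depth of the part is assumed known (for example, a fixed working plane).

```python
import numpy as np

def pixel_to_base(u, v, depth, K, T_base_cam):
    # Back-project pixel (u, v) at a known depth into the camera frame
    # using the pinhole model, then map the point into the robot base
    # frame with the 4x4 homogeneous hand-eye transform T_base_cam.
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    p_cam = np.array([(u - cx) * depth / fx,
                      (v - cy) * depth / fy,
                      depth,
                      1.0])
    return (T_base_cam @ p_cam)[:3]
```

A pixel at the principal point with `T_base_cam` equal to the identity maps to the point `(0, 0, depth)` straight ahead of the camera, which is a quick sanity check on both calibrations.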