At present, machine vision remains a key technology restricting the efficiency of robot sorting and assembly in automobile parts factories. Most research in China on the positioning and grasping of diverse auto parts by industrial robots in unstructured environments still relies on traditional machine vision methods, which suffer from slow detection, low precision, and large residual errors, greatly reducing the efficiency of assembly, sorting, and other production processes. In recent years, GPU-based deep learning object detection algorithms have greatly improved detection speed and accuracy, but they demand substantial hardware resources, have large numbers of parameters, and perform forward inference slowly; they therefore cannot meet the real-time requirements of industrial production and are difficult to deploy on low-power embedded or mobile devices. A lightweight convolutional neural network with high detection accuracy and good real-time performance thus needs to be developed. To solve these problems, this paper proposes an industrial robot grasping system that integrates a lightweight part detection network improved from YOLOv4 with a fully convolutional network, and that completes part classification, positioning, and grasping on a Jetson Xavier NX embedded device. The main contents of this paper are as follows:

1. According to the actual requirements of target recognition and localization, and after comparing the strengths and weaknesses of current mainstream object detection algorithms, YOLOv4, which is open source and balances detection speed and accuracy, was selected as the base algorithm. To address slow recognition on the NX embedded device, this paper replaces the CSPDarknet-53 feature extraction backbone of YOLOv4 with the lightweight MobileNetV2 network fused with Inception modules and a parallel hybrid attention module; at the same time, the adaptively spatial feature fusion (ASFF) structure is used to strengthen the feature fusion ability of the PANet detection neck, and the anchor boxes are optimized by K-means++ clustering (a minimal sketch of this clustering step is given after this summary). Finally, the TensorRT inference engine is used to reduce the detection time of the improved YOLOv4 algorithm on the embedded device. Comparing the detection results on the NX embedded device before and after the improvement, experiments show that the improved network detects faster, is more accurate, and produces bounding boxes that fit the parts more closely.

2. The binocular camera is calibrated based on Zhang Zhengyou's calibration method: the calibration experiment is completed with a calibration program written with OpenCV and verified against the calibration results of MATLAB, and the mapping between the camera coordinate system and the robot coordinate system is then obtained by hand-eye calibration (sketches of the calibration and centroid triangulation steps are given after this summary). To address the inaccurate positioning of irregular parts, a method that integrates a fully convolutional network with the improved YOLOv4 network is designed: through pixel-level segmentation and image processing of irregular parts, positioning accuracy is improved while the required detection accuracy is maintained. The identified part category and centroid coordinates are then stereo matched to obtain the part classification information and the grasping pose.
3. According to the process and requirements of part grasping, an experimental platform for target part recognition and grasping based on NX embedded binocular vision was established. The hardware and software of the recognition and grasping system were designed following a modular approach, forward and inverse kinematics analysis was carried out for the small NACHI robot (a forward-kinematics sketch is given after this summary), and the motion trajectory of the industrial robot was simply planned. Finally, the system software was developed with a Qt interface, and the tasks of part identification, classification, and grasping were realized on the embedded experimental platform.
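As referenced in item 1, the anchor boxes of the improved detector are re-clustered with K-means++. Below is a minimal sketch of that step, assuming the annotated box sizes are available as (width, height) pairs; the function and variable names are illustrative, and plain Euclidean K-means++ is used here, whereas the thesis may use an IoU-based distance as in the original YOLO anchor clustering.

```python
# Minimal sketch: cluster annotated (width, height) box sizes into 9 anchors
# for the three YOLOv4 detection scales. Names and toy data are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def cluster_anchors(box_sizes: np.ndarray, n_anchors: int = 9) -> np.ndarray:
    """Return n_anchors (w, h) pairs, sorted by box area."""
    km = KMeans(n_clusters=n_anchors, init="k-means++", n_init=10, random_state=0)
    km.fit(box_sizes)
    anchors = km.cluster_centers_
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sizes = rng.uniform(20, 300, size=(500, 2))   # stand-in for real annotation statistics
    print(np.round(cluster_anchors(sizes)))
```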
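For the binocular calibration in item 2, a minimal sketch of Zhang-style calibration of one camera with OpenCV is shown below; the chessboard pattern size, square size, and image paths are placeholders (the thesis additionally verifies the results with MATLAB).

```python
# Minimal sketch: Zhang-style camera calibration from chessboard images with OpenCV.
# Pattern size, square size, and image paths are placeholders; assumes at least one
# chessboard is detected.
import glob
import cv2
import numpy as np

pattern = (9, 6)          # inner corners of the chessboard (placeholder)
square = 0.025            # square size in metres (placeholder)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("calib/left_*.png"):        # placeholder image path
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("Intrinsic matrix:\n", K)
```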
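Item 2 ends by stereo matching the detected centroid to obtain the grasping pose. The sketch below shows the triangulation step for a rectified stereo pair, assuming placeholder intrinsics, baseline, and hand-eye transform; real values come from the calibration steps above.

```python
# Minimal sketch: recover the 3-D centroid of a part from a rectified stereo pair
# and express it in the robot base frame. All constants are placeholders.
import numpy as np

fx, fy, cx, cy = 1200.0, 1200.0, 640.0, 360.0   # intrinsics in pixels (placeholder)
B = 0.06                                         # baseline in metres (placeholder)
T_robot_cam = np.eye(4)                          # camera -> robot transform from hand-eye calibration

def centroid_to_robot(uL, vL, uR):
    """Left/right pixel centroids (same row after rectification) -> robot-frame point."""
    disparity = uL - uR
    Z = fx * B / disparity                # depth from disparity
    X = (uL - cx) * Z / fx                # back-project into the left camera frame
    Y = (vL - cy) * Z / fy
    p_cam = np.array([X, Y, Z, 1.0])
    return (T_robot_cam @ p_cam)[:3]      # express the point in the robot base frame

print(centroid_to_robot(700.0, 400.0, 660.0))
```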
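For the kinematics analysis in item 3, a minimal forward-kinematics sketch based on standard Denavit-Hartenberg parameters is given below; the DH table is a placeholder rather than the actual parameters of the NACHI robot used in the experiments.

```python
# Minimal sketch: forward kinematics of a 6-axis arm from standard DH parameters.
# The DH table below is a placeholder; real values depend on the NACHI model used.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one link from standard DH parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joints, dh_table):
    """Chain the link transforms: base -> flange pose for the given joint angles."""
    T = np.eye(4)
    for theta_i, (d, a, alpha) in zip(joints, dh_table):
        T = T @ dh_transform(theta_i, d, a, alpha)
    return T

# Placeholder (d, a, alpha) rows; joint angles in radians.
dh_table = [(0.35, 0.05, -np.pi / 2), (0.0, 0.33, 0.0), (0.0, 0.04, -np.pi / 2),
            (0.34, 0.0, np.pi / 2), (0.0, 0.0, -np.pi / 2), (0.08, 0.0, 0.0)]
print(np.round(forward_kinematics([0.0] * 6, dh_table), 3))
```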