With the ongoing integration of industrialization and informatization, science, technology, and industry are converging ever more closely. Intelligent systems represented by robots continue to flourish and play important roles in highly dangerous radiation environments, emergency rescue, and the prevention and control of COVID-19 outbreaks. Grasping with a robotic arm is one of the key applications of robotic systems. Because the arm's grasping range is limited, the first step of a grasping experiment is to use visual navigation to move the robot close to the target object. Once the object is within the arm's reach, a pose estimation method is also needed to identify the object's pose so that the arm approaches it from an accurate grasping direction. To meet the practical requirements of such grasping scenarios, this paper studies visual navigation and pose estimation algorithms based on convolutional neural networks and, on that basis, assembles a robotic grasping system to complete grasping tasks on target objects.

Among traditional visual navigation approaches, SLAM suffers from error accumulation, limited real-time performance, and high computational cost, while GPS cannot provide indoor localization. To address these problems, this paper proposes a visual navigation algorithm based on a multi-task network. The network consists of a backbone and two branch networks; the branches output the robot's movement direction and its collision probability, which together determine the robot's next movement decision. To improve accuracy, edge features are extracted as the model input, and the network structure is improved on the basis of a deep residual network. Comparative experiments on a public dataset show that the algorithm performs well.

In existing pose estimation algorithms, accuracy suffers because of complex backgrounds and other external interference. To solve this problem, this paper proposes a pose estimation algorithm that fuses color information with point cloud information and uses confidence scores to reject contaminated information. The method first applies semantic segmentation to isolate the target object, then extracts and fuses the object's color and point cloud features, improves the network structure by decoupling the global feature into 3D translation and 3D rotation, and finally refines the estimated pose with the ICP algorithm. Comparative experiments on public datasets show that the algorithm achieves better accuracy.

Finally, based on the visual navigation and pose estimation algorithms, this paper independently assembles a robotic grasping system, sets up a physical experiment scene, and builds an experimental platform. Experiments on a self-made dataset of real objects and a camera-to-arm hand-eye calibration experiment are completed, and the visual navigation and pose estimation algorithms are then used to carry out the robot's visual navigation experiments and comprehensive grasping experiments in indoor scenes.
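To make the multi-task navigation network described above more concrete, the sketch below shows one way such a model could be organized in PyTorch: a shared convolutional backbone over an edge-feature input feeding a steering-direction branch and a collision-probability branch. The layer sizes, the single-channel edge-map input, and the `NavNet` name are illustrative assumptions, not the exact architecture used in this paper.

```python
import torch
import torch.nn as nn

class NavNet(nn.Module):
    """Minimal multi-task navigation sketch: a shared convolutional backbone
    followed by two branches predicting (a) a steering direction and
    (b) a collision probability.  All sizes are assumptions."""
    def __init__(self):
        super().__init__()
        # Shared backbone over a 1-channel edge map (e.g. a Canny edge image).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Branch 1: movement direction as a value in [-1, 1].
        self.steer_head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1), nn.Tanh())
        # Branch 2: collision probability in [0, 1].
        self.collision_head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x):
        feats = self.backbone(x)
        return self.steer_head(feats), self.collision_head(feats)

if __name__ == "__main__":
    net = NavNet()
    edge_map = torch.randn(1, 1, 200, 200)     # placeholder edge-feature input
    steer, p_collision = net(edge_map)
    print(steer.shape, p_collision.shape)      # torch.Size([1, 1]) torch.Size([1, 1])
```

The two outputs can then be combined into the next movement decision, for example by steering along the predicted direction while scaling speed down as the collision probability rises.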
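The pose estimation pipeline fuses per-point color and geometry features, decouples the global feature into 3D translation and 3D rotation, and keeps only confident predictions. The fragment below is a minimal sketch of that idea, assuming per-point feature vectors have already been produced by a segmentation and feature-extraction stage; the feature dimensions, the quaternion rotation parameterization, and the `FusionPoseHead` name are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FusionPoseHead(nn.Module):
    """Sketch of a fused pose head: per-point color and geometry embeddings
    are concatenated, then decoupled into a translation branch and a
    rotation branch (quaternion), with a per-point confidence used to
    discard unreliable (contaminated) predictions."""
    def __init__(self, color_dim=64, geo_dim=64):
        super().__init__()
        fused = color_dim + geo_dim
        self.trans_head = nn.Sequential(nn.Linear(fused, 128), nn.ReLU(), nn.Linear(128, 3))
        self.rot_head = nn.Sequential(nn.Linear(fused, 128), nn.ReLU(), nn.Linear(128, 4))
        self.conf_head = nn.Sequential(nn.Linear(fused, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, color_feat, geo_feat):
        # color_feat, geo_feat: (N_points, dim) features of the segmented object.
        fused = torch.cat([color_feat, geo_feat], dim=-1)
        t = self.trans_head(fused)                 # per-point translation predictions
        q = self.rot_head(fused)
        q = q / q.norm(dim=-1, keepdim=True)       # normalize quaternions
        c = self.conf_head(fused)                  # per-point confidence
        best = c.squeeze(-1).argmax()              # keep the most confident prediction
        return t[best], q[best]

if __name__ == "__main__":
    head = FusionPoseHead()
    t, q = head(torch.randn(500, 64), torch.randn(500, 64))
    print(t, q)
```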
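The final ICP refinement step could look roughly like the following, here sketched with Open3D's point-to-point ICP. The correspondence threshold and the random placeholder clouds are assumptions; in practice the object model, the segmented scene cloud, and the network's predicted pose would be used.

```python
import numpy as np
import open3d as o3d

# Placeholders: object model points, segmented scene points, and the
# network's initial 4x4 pose estimate.
model_points = np.random.rand(1000, 3)
scene_points = np.random.rand(1000, 3)
T_init = np.eye(4)

source = o3d.geometry.PointCloud()
source.points = o3d.utility.Vector3dVector(model_points)
target = o3d.geometry.PointCloud()
target.points = o3d.utility.Vector3dVector(scene_points)

# Refine the initial pose with point-to-point ICP.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.02,   # assumed threshold in metres
    init=T_init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
T_refined = result.transformation       # refined 4x4 object pose
print(T_refined)
```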
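For the hand-eye calibration experiment mentioned above, one common route is OpenCV's `calibrateHandEye`, which recovers the camera-to-gripper transform from paired arm and camera poses. The sketch below uses random placeholder poses purely to show the call pattern; real use would substitute recorded arm poses and calibration-target detections, and this paper's exact calibration procedure may differ.

```python
import numpy as np
import cv2

# Hedged sketch of hand-eye calibration with OpenCV. In practice,
# R_gripper2base/t_gripper2base come from the arm's forward kinematics and
# R_target2cam/t_target2cam from detecting a calibration board; the random
# values below are placeholders only.
def random_rotation():
    return cv2.Rodrigues(np.random.uniform(-np.pi, np.pi, (3, 1)))[0]

n = 10  # number of recorded pose pairs
R_gripper2base = [random_rotation() for _ in range(n)]
t_gripper2base = [np.random.rand(3, 1) for _ in range(n)]
R_target2cam = [random_rotation() for _ in range(n)]
t_target2cam = [np.random.rand(3, 1) for _ in range(n)]

R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base,
    R_target2cam, t_target2cam,
    method=cv2.CALIB_HAND_EYE_TSAI,
)
print(R_cam2gripper, t_cam2gripper)
```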