In the context of the continuous progress of human society and science, information technology, artificial intelligence, the Internet, mechatronics, and related industries have developed rapidly, and indoor mobile robots are increasingly used in industry and daily life. SLAM (simultaneous localization and mapping) is the key technology for realizing autonomous localization and navigation of indoor mobile robots. In recent years, as robot application scenarios have continued to expand, the demands on robot technology have risen accordingly. For robots operating in large, complex environments, a single sensor can no longer provide adequate perception of environmental information, so multi-sensor fusion technology has gradually attracted researchers in related fields. Beyond sensor fusion, the robot's autonomous working ability in indoor environments, its localization and navigation accuracy, and the quality of its map construction remain problems in urgent need of solution. This thesis proposes improvement methods for these existing problems of indoor robots and verifies them experimentally.

At present, a robot using a single sensor for indoor mapping cannot meet the requirements of map construction in large scenes or the accuracy requirements of map construction in complex scenes. A single sensor obtains only one kind of environmental information, and this limited information cannot provide enough environmental features to assist the robot's localization, resulting in low mapping quality and low navigation accuracy. To solve this problem, this thesis proposes a mapping method that fuses lidar and a binocular camera. The fusion algorithm has two stages. The first stage fuses the pose estimates of the lidar and the camera: the Kalman filter algorithm for nonlinear systems is used to fuse the two pose estimates to
output a better pose. To address the accumulated error that arises during mapping and prevents the robot's map from closing the loop, a loop-closure detection method fusing lidar and vision is proposed; the two loop-detection methods are fused so that each compensates for the other's shortcomings. Experimental results show that the proposed algorithm improves mapping accuracy by 9.6% compared with the Gmapping algorithm.

Current global and local path planning algorithms suffer from poor positioning accuracy. To solve this problem, this thesis proposes a sensor fusion method that fuses a binocular camera with an inertial measurement unit (IMU). The main idea is to use binocular visual odometry to predict the robot's initial pose, add an IMU pre-integration constraint in the optimization stage of the initial pose, and use the IMU measurements to correct the initial pose parameters, thereby eliminating the accumulated error and further improving the robot's localization and navigation accuracy. Compared with the A* navigation algorithm, the proposed algorithm improves positioning accuracy by 7.9% and improves the robot's localization and navigation efficiency.
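To illustrate the first-stage fusion idea, the following is a minimal sketch of fusing two pose estimates (e.g. from lidar and from the camera) with a covariance-weighted Kalman-style correction. It assumes both sensors report poses in a common frame with Gaussian noise; the function name and the 2D state are illustrative placeholders, not the thesis's actual implementation, which operates on a nonlinear system.

```python
import numpy as np

def fuse_poses(pose_a, cov_a, pose_b, cov_b):
    """Fuse two pose estimates with a covariance-weighted update,
    as in the correction step of a Kalman filter.
    Assumes both estimates are expressed in the same frame."""
    # Kalman gain: weights the update toward the less uncertain estimate
    K = cov_a @ np.linalg.inv(cov_a + cov_b)
    fused_pose = pose_a + K @ (pose_b - pose_a)
    fused_cov = (np.eye(len(pose_a)) - K) @ cov_a
    return fused_pose, fused_cov

# Example: 2D position (x, y); the camera estimate is noisier than lidar,
# so the fused result stays closer to the lidar estimate.
lidar_pose = np.array([1.0, 2.0])
cam_pose   = np.array([1.2, 1.8])
lidar_cov  = np.diag([0.01, 0.01])
cam_cov    = np.diag([0.04, 0.04])
pose, cov = fuse_poses(lidar_pose, lidar_cov, cam_pose, cam_cov)
# pose → [1.04, 1.96]; fused covariance 0.008 per axis, below either input
```

The fused covariance is smaller than either sensor's alone, which is the motivation for fusing the two pose sources before mapping.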
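The IMU pre-integration constraint mentioned above can be sketched as follows: raw accelerometer and gyroscope samples between two camera keyframes are integrated once into relative rotation, velocity, and position deltas, which then constrain the pose optimization. This is a simplified 2D, bias-free sketch with an assumed interface, not the thesis's implementation.

```python
import numpy as np

def preintegrate_imu(accels, gyros, dt):
    """Accumulate relative rotation, velocity, and position deltas
    between two keyframes from body-frame IMU samples.
    Simplification: planar (2D) motion and bias-free measurements."""
    d_theta = 0.0        # accumulated rotation (rad)
    d_vel = np.zeros(2)  # velocity change, expressed in the start frame
    d_pos = np.zeros(2)  # position change, expressed in the start frame
    for a, w in zip(accels, gyros):
        # Rotate the body-frame acceleration into the start frame
        c, s = np.cos(d_theta), np.sin(d_theta)
        R = np.array([[c, -s], [s, c]])
        a_start = R @ a
        d_pos += d_vel * dt + 0.5 * a_start * dt**2
        d_vel += a_start * dt
        d_theta += w * dt
    return d_theta, d_vel, d_pos

# Example: 1 s of constant 1 m/s^2 forward acceleration, no rotation
accels = [np.array([1.0, 0.0])] * 100
gyros = [0.0] * 100
d_theta, d_vel, d_pos = preintegrate_imu(accels, gyros, dt=0.01)
# d_vel[0] ≈ 1.0 m/s, d_pos[0] ≈ 0.5 m, d_theta = 0
```

Because the deltas depend only on the IMU samples between the two keyframes, they can be computed once and reused as a fixed constraint while the optimizer adjusts the keyframe poses predicted by the visual odometry.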