
Research on an Autonomous Navigation System for a Mobile Robot Based on Multi-Sensor Fusion

Posted on: 2023-05-08
Degree: Master
Type: Thesis
Country: China
Candidate: J Li
Full Text: PDF
GTID: 2558306914962609
Subject: Electronic Science and Technology
Abstract/Summary:
The popularity of artificial intelligence has driven the rapid development of robotics. Autonomous navigation has become an indispensable part of robotics, involving modules such as SLAM (Simultaneous Localization and Mapping), path planning, and perception and decision-making. As a core link in autonomous robot navigation, the SLAM module has become a hot research topic. For a mobile robot to move autonomously like a human, it needs accurate positioning information and high-precision map information. A single sensor cannot meet these requirements because of inherent limitations: a camera is affected by lighting, lidar has weak representation ability, and an IMU (Inertial Measurement Unit) accumulates error over time. It has therefore become a trend to combine the complementary strengths of individual sensors and perform multi-sensor fusion for localization and mapping.

Building high-precision maps requires accurate and robust odometry. This thesis focuses on how to build an accurate and robust lidar odometry. By introducing image semantic information and laser odometry constraints, the recall rate and accuracy of loop closure detection are improved, and a highly accurate and robust odometry is realized. On this basis, to verify the performance of the proposed algorithms, a four-wheel-drive hardware platform is built and robot positioning and navigation tests are completed in practical scenarios. The research content and main results of this thesis comprise the following three parts:

(1) To address the poor accuracy and robustness of localization and mapping in large-scale indoor and outdoor scenes, this thesis proposes a 3D lidar odometry based on image semantic information constraints. Because multi-line lidar point clouds are sparse, robot mapping algorithms struggle to extract features in low-texture and open environments. This thesis introduces image semantic information to provide prior constraints for lidar SLAM mapping. Semantic regions are first extracted from the image; then, from the depth information of the point cloud, the depth of each pixel in the region is obtained by interpolation, yielding the 3D coordinates of the semantic pixels. Finally, the extracted semantic information is used as landmarks that form constraints with the point cloud at the back end of the lidar odometry, and the pose of the lidar SLAM system is optimized. Experiments are carried out on the KITTI dataset and in an actual campus scene. The results show that adding semantic constraints effectively improves the accuracy of loop closure detection in the laser SLAM algorithm; according to the evo evaluation results, the absolute error of the lidar odometry is reduced by 1.64% on average after adding image semantic constraints.

(2) To address the problem that a single sensor, owing to its own properties, is prone to false loop closures or fails to detect loop closures in similar-looking scenes or scenes with changing lighting, this thesis proposes a loop closure detection algorithm based on multi-sensor fusion. Global descriptors are extracted from point clouds and images with the MinkLoc++ network; candidate frames are then retrieved with the Faiss library, and a clustering step eliminates abnormal candidates. The point cloud is converted into an image, and SuperGlue computes the constraint between the query frame and each candidate frame. Finally, multiple global consistency checks yield the correct loop closure frame. Experiments on the KITTI dataset and in actual scenes show that the proposed loop closure detection algorithm performs 6.95% better than the Scan-Context algorithm on the KITTI dataset and 14.6% better than the Intensity-Scan-Context algorithm.

(3) To verify the localization and mapping performance of the proposed algorithms in actual scenarios, a low-speed unmanned-driving algorithm verification hardware platform is designed on a four-wheel-drive chassis. The construction of the autonomous navigation system is completed, with software development and experimental verification covering the data processing, mapping, path planning, relocalization, and multi-point navigation modules. A 16-line lidar, a 6-axis IMU, a binocular camera, and an industrial computer are selected as the hardware; the software architecture is based on Ubuntu 18.04 and ROS (Robot Operating System). On the verification platform built in this thesis, autonomous navigation is realized on the headquarters campus of Beijing University of Posts and Telecommunications, with precise positioning, mapping, and path planning, which fully verifies the performance of the proposed algorithms and demonstrates the feasibility and robustness of the designed low-speed unmanned-driving verification platform.
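The semantic-landmark step in part (1) relies on recovering a 3D point for each semantic pixel by interpolating the sparse lidar depth around it. A minimal sketch of that idea is shown below; the function name, the inverse-distance-weighted interpolation, and the pinhole intrinsics are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

# Illustrative pinhole intrinsics (fx, fy, cx, cy); real values come from
# camera calibration.
FX, FY, CX, CY = 718.856, 718.856, 607.193, 185.216

def backproject_semantic_pixel(u, v, depth_samples):
    """Back-project a semantic pixel (u, v) to a 3D point in the camera frame.

    depth_samples: iterable of (u_i, v_i, d_i) lidar points already projected
    into the image near (u, v). Because the point cloud is sparse, the pixel's
    depth is estimated by inverse-distance-weighted interpolation of these
    nearby samples, then back-projected through the pinhole model.
    """
    pts = np.asarray(depth_samples, dtype=float)
    d2 = (pts[:, 0] - u) ** 2 + (pts[:, 1] - v) ** 2
    w = 1.0 / (d2 + 1e-6)                       # inverse-distance weights
    depth = float(np.sum(w * pts[:, 2]) / np.sum(w))
    # Pinhole back-projection: pixel coordinates + depth -> 3D coordinates.
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return np.array([x, y, depth])
```

The resulting 3D semantic points can then serve as the landmarks that constrain the lidar odometry back end, as the abstract describes.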
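The candidate-retrieval step in part (2) can be pictured as a nearest-neighbour search over global descriptors. The sketch below uses brute-force NumPy search in place of the Faiss index named in the abstract, and the recent-frame exclusion window is an assumed detail (loop-closure pipelines typically skip frames adjacent to the query so the robot's immediate trail is not reported as a loop).

```python
import numpy as np

def find_loop_candidates(query_desc, db_descs, k=3, exclude_recent=50,
                         query_idx=None):
    """Return the k nearest global descriptors as loop-closure candidates.

    query_desc: 1-D descriptor of the query frame (e.g. from MinkLoc++).
    db_descs:   2-D array, one descriptor per past keyframe.
    Frames within `exclude_recent` of query_idx are masked out so the
    odometry's own neighbourhood is never proposed as a loop.
    """
    dists = np.linalg.norm(db_descs - query_desc, axis=1)  # L2 distances
    if query_idx is not None:
        lo = max(0, query_idx - exclude_recent)
        dists[lo:query_idx + 1] = np.inf                   # mask recent frames
    order = np.argsort(dists)
    return [(int(i), float(dists[i])) for i in order[:k]]
```

In the thesis pipeline, the returned candidates would next be clustered to reject outliers and then geometrically verified (via SuperGlue matching and global consistency checks) before a loop closure is accepted.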
Keywords/Search Tags:Unmanned driving, autonomous navigation, lidar SLAM, semantic segmentation, multi-sensor fusion