
Research On SLAM Technology Of Mobile Service Robot Based On Multi-Sensor Fusion

Posted on: 2024-08-18
Degree: Master
Type: Thesis
Country: China
Candidate: Y X Sun
Full Text: PDF
GTID: 2568306923956199
Subject: Electronic information
Abstract/Summary:
With the rapid development of artificial intelligence and autonomous-driving technology worldwide, intelligent mobile robots are now widely used in national defense and security, urban construction, industrial manufacturing, medical rehabilitation, and other fields. A mobile service robot must provide basic localization while also perceiving information about its surroundings, so Simultaneous Localization and Mapping (SLAM) is the core technology of the mobile-robot field. SLAM refers to a mobile robot moving through an unknown environment, observing that environment with its on-board sensors, estimating its own pose from environmental features, and building a map of the environment at the same time.

This thesis combines the advantages of visual and laser SLAM. A lidar, a monocular camera, and an inertial measurement unit (IMU) are mounted on the robot body, and the information they provide is complemented and fused at several levels by algorithms to build a multi-sensor fusion SLAM system with higher fault tolerance, accuracy, and robustness, yielding more target data and better-optimized observations. The main research work is as follows.

For mobile service robots working in complex dynamic environments, the motion distortion of the lidar point cloud is compensated using the IMU to obtain a motion estimate of the vehicle body. The visual tracking module adopts ORB feature extraction, and the robustness of the extraction is enhanced by histogram equalization filtering. Visual and IMU residual constraints are built while jointly optimizing the pose, velocity, and bias variables. The loop-closure factor is provided by two sources, visual information and lidar data: double loop detection is performed with a bag-of-words model and with Scan Context, respectively, to reduce accumulated error. Finally, global pose-graph optimization combines the lidar odometry factor, the visual-inertial odometry factor, the IMU pre-integration factor, and the loop-closure factor; these multi-factor constraints improve the accuracy and robustness of localization and mapping in complex dynamic environments.

The multi-sensor fusion algorithm is implemented and verified on ROS. Multiple ablation experiments on the public KITTI dataset compare the trajectory accuracy of single-sensor and multi-sensor SLAM algorithms and measure the influence of different loop-closure modules and of visual auxiliary information on the fused algorithm; on the 2011_10_03_drive_0027 sequence, the visual auxiliary information of the proposed multi-sensor-sc-ci variant reduces the relative-translation RMSE by 0.3171%. Further ablation experiments in real scenarios confirm the superiority of the multi-sensor fusion SLAM algorithm. The thesis also compares the different schemes qualitatively by visualizing their positioning trajectories in multiple scenarios, and the effectiveness and reliability of the proposed multi-sensor fusion algorithm are verified in extreme environments.
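The motion-distortion compensation described above can be illustrated with a minimal 2-D sketch. This is not the thesis's implementation: it assumes a constant yaw rate and velocity from the IMU over one lidar sweep, and the function names (`deskew`, `rot2d`) and parameters are illustrative. Each point is re-expressed in the scan-start frame using its timestamp.

```python
import numpy as np

def rot2d(a):
    """2-D rotation matrix for angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def deskew(points_xy, stamps, yaw_rate, vel_xy):
    """Re-express each lidar point in the scan-start frame, assuming the
    IMU reports a constant yaw rate (rad/s) and velocity (m/s) over the sweep."""
    out = np.empty_like(points_xy)
    for k, (p, t) in enumerate(zip(points_xy, stamps)):
        # undo the sensor's rotation and translation accumulated by time t
        out[k] = rot2d(yaw_rate * t) @ p + vel_xy * t
    return out

# Simulate a sweep of a static scene while the sensor turns and moves:
# each point is observed in the sensor frame at its own timestamp.
rng = np.random.default_rng(1)
world = rng.uniform(-10.0, 10.0, (50, 2))      # true (static) point positions
stamps = np.linspace(0.0, 0.1, 50)             # per-point timestamps in one sweep
yaw_rate, vel = 0.5, np.array([2.0, 0.0])
observed = np.array([rot2d(-yaw_rate * t) @ (p - vel * t)
                     for p, t in zip(world, stamps)])
corrected = deskew(observed, stamps, yaw_rate, vel)
```

With the motion model known exactly, `corrected` recovers the static scene; in practice the thesis integrates real IMU measurements rather than a constant-rate assumption.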
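The lidar half of the double loop detection can likewise be sketched. The following is a minimal NumPy version of the Scan Context idea (not the thesis's code): the x-y plane is split into radial rings and azimuthal sectors, each cell keeps the maximum point height, and a yaw-invariant distance is obtained by searching over circular shifts of the sector axis. Bin counts and the distance form are illustrative.

```python
import numpy as np

def scan_context(points, n_rings=20, n_sectors=60, max_range=80.0):
    """Build a Scan Context descriptor: max point height per (ring, sector) cell."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)                    # azimuth in [-pi, pi]
    keep = r < max_range
    ring = (r[keep] / max_range * n_rings).astype(int)
    sector = ((theta[keep] + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    desc = np.zeros((n_rings, n_sectors))
    np.maximum.at(desc, (ring, sector), z[keep])  # max height per cell
    return desc

def scan_context_distance(d1, d2):
    """Yaw-invariant distance: minimum mean column-wise cosine distance
    over all circular shifts of the sector (azimuth) axis."""
    best = np.inf
    for shift in range(d2.shape[1]):
        shifted = np.roll(d2, shift, axis=1)
        num = np.sum(d1 * shifted, axis=0)
        den = np.linalg.norm(d1, axis=0) * np.linalg.norm(shifted, axis=0)
        valid = den > 0
        if valid.any():
            best = min(best, 1.0 - (num[valid] / den[valid]).mean())
    return best

# A random cloud and the same cloud rotated by 5 sectors' worth of yaw
rng = np.random.default_rng(42)
n = 5000
r = rng.uniform(1.0, 70.0, n)
th = rng.uniform(-np.pi, np.pi, n)
pts = np.stack([r * np.cos(th), r * np.sin(th), rng.uniform(0.0, 5.0, n)], axis=1)
ang = 5 * 2 * np.pi / 60
R = np.array([[np.cos(ang), -np.sin(ang), 0.0],
              [np.sin(ang),  np.cos(ang), 0.0],
              [0.0, 0.0, 1.0]])
d1 = scan_context(pts)
d2 = scan_context(pts @ R.T)
dist = scan_context_distance(d1, d2)
```

The shift search is what makes the descriptor usable for loop closure: revisiting a place from a different heading still yields a small distance.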
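Finally, the effect of combining odometry and loop-closure factors in a global pose graph can be shown on a toy 1-D problem. This is purely illustrative (the thesis optimizes full 6-DoF poses with lidar, visual-inertial, pre-integration, and loop factors): five scalar poses, chained odometry measurements with a small bias, and one loop-closure factor tying the last pose to the first. Solving the joint least-squares problem spreads the accumulated drift over the whole trajectory.

```python
import numpy as np

def solve_pose_graph(odom, loops, n_poses):
    """Solve a 1-D pose graph by linear least squares.
    odom/loops: lists of (i, j, z) meaning x_j - x_i ~ z; x_0 is anchored at 0."""
    factors = odom + loops
    A = np.zeros((len(factors) + 1, n_poses))
    b = np.zeros(len(factors) + 1)
    for k, (i, j, z) in enumerate(factors):
        A[k, i], A[k, j], b[k] = -1.0, 1.0, z   # residual: (x_j - x_i) - z
    A[-1, 0], b[-1] = 1.0, 0.0                  # prior factor anchoring x_0
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

true_step = 1.0
odom = [(i, i + 1, true_step + 0.1) for i in range(4)]  # biased odometry: drift
loops = [(0, 4, 4 * true_step)]                          # loop closure: x_4 - x_0
x = solve_pose_graph(odom, loops, n_poses=5)
```

Dead reckoning alone would put the last pose at 4.4 (0.4 of drift); with the loop-closure factor the solution lands at 4.08, with the residual error distributed evenly across the four steps. This is the same mechanism by which the multi-factor graph above suppresses accumulated error.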
Keywords/Search Tags: SLAM, Lidar sensor, Vision sensor, IMU, Multi-sensor fusion