
Research On Robot Mapping And Object Tracking Based On The Fusion Of Lidar And Camera

Posted on: 2021-01-07    Degree: Master    Type: Thesis
Country: China    Candidate: H Chen    Full Text: PDF
GTID: 2428330611999802    Subject: Mechanical engineering
Abstract/Summary:
Autonomous driving is a technology that uses multiple kinds of sensors to perceive the surrounding environment and a computer to control the operation of the vehicle. Mature autonomous driving technology allows a vehicle to operate without human intervention and is an important component of intelligent transportation systems. In this thesis, a monocular camera and a multi-line lidar are used as sensors to study two problems in the driverless scenario: point cloud map construction, and multi-target detection and tracking.

A point cloud map can serve as a prior in autonomous navigation, enabling high-precision localization of the vehicle. This thesis fuses camera and lidar for mapping. First, the monocular camera estimates the pose transformation matrix of the vehicle during motion and provides it to the lidar as a prior value. Lidar frame-to-frame feature matching then computes the vehicle pose, and the current point cloud frame is matched against a local map at a lower frequency to construct the initial point cloud map and establish weakly constrained edges in the pose graph. For the case where the initial point cloud map becomes inconsistent, visual keyframes are extracted from the monocular camera and loop closure detection runs continuously during mapping. The pose graph is then iteratively optimized with the Levenberg-Marquardt method to obtain the optimal vehicle poses and refine the point cloud map.

Multi-target detection and tracking detects and tracks the multiple moving targets around the vehicle during navigation, realizing the task of environmental awareness. This thesis casts multi-target tracking as a bipartite graph matching problem. First, sensing data from the monocular camera and the 3-D lidar are used to detect the dynamic targets around the vehicle, and the detections from the two sensors are associated. Next, a Kalman filter predicts each target's position at the next time step, and an affinity matrix between detected targets and tracked targets is built from position information and color-feature information. Finally, the Hungarian algorithm associates targets between consecutive frames and updates the targets' trajectories.

An outdoor park scene (150 m x 200 m) is selected for the fusion-mapping and multi-target tracking experiments. In the mapping experiment, the point cloud map built using only frame-to-frame feature matching is compared with the map built under the graph-optimization framework, verifying that loop closure detection and global pose optimization improve map consistency. In the tracking experiment, the proposed multi-target tracking algorithm is compared with other mainstream algorithms on the public KITTI dataset, and tracking experiments in a real campus scene demonstrate the tracking performance in several challenging scenarios.
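The Kalman prediction step described above can be sketched as follows. This is a minimal constant-velocity filter for one coordinate of a tracked target (state = [position, velocity]), not the thesis implementation; since the measurement is position only, the innovation covariance is a scalar and no general matrix inverse is needed.

```python
class KalmanCV1D:
    """Constant-velocity Kalman filter for a single coordinate."""

    def __init__(self, dt=1.0, q=0.01, r=0.1):
        self.dt = dt
        self.x = [0.0, 0.0]                 # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q = q                          # process noise (per element)
        self.r = r                          # measurement noise variance

    def predict(self):
        dt = self.dt
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # x <- F x  with  F = [[1, dt], [0, 1]]
        self.x = [self.x[0] + dt * self.x[1], self.x[1]]
        # P <- F P F^T + Q
        self.P = [
            [p00 + dt * (p01 + p10) + dt * dt * p11 + self.q, p01 + dt * p11],
            [p10 + dt * p11, p11 + self.q],
        ]
        return self.x[0]                    # predicted position of the target

    def update(self, z):
        # innovation and its (scalar) covariance, since H = [1, 0]
        y = z - self.x[0]
        s = self.P[0][0] + self.r
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s   # Kalman gain
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        # P <- (I - K H) P
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        self.P = [
            [(1 - k0) * p00, (1 - k0) * p01],
            [p10 - k1 * p00, p11 - k1 * p01],
        ]
```

Feeding measurements from a target moving at one unit per step, the position and velocity estimates converge toward the true motion, and `predict()` supplies the position used to score candidate detections in the next frame.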
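The frame-to-frame association step can be illustrated with a toy example. The thesis uses the Hungarian algorithm; for small numbers of targets, a brute-force search over permutations finds the same optimal assignment, so this stdlib-only sketch (hypothetical function and cost, assuming squared distance between a track's predicted position and each detection, and no more tracks than detections) stands in for it.

```python
from itertools import permutations

def associate(tracks, detections, gate=25.0):
    """Match each track (predicted x, y) to a detection (x, y), minimizing
    total squared distance. Assumes len(tracks) <= len(detections); pairs
    whose cost exceeds `gate` are left unmatched (e.g. a target that left
    the scene). Returns a list of (track_idx, det_idx) pairs."""
    def cost(t, d):
        return (t[0] - d[0]) ** 2 + (t[1] - d[1]) ** 2

    best, best_cost = None, float("inf")
    # enumerate every way of assigning one distinct detection to each track
    for perm in permutations(range(len(detections)), len(tracks)):
        c = sum(cost(tracks[i], detections[j]) for i, j in enumerate(perm))
        if c < best_cost:
            best, best_cost = perm, c
    return [(i, j) for i, j in enumerate(best)
            if cost(tracks[i], detections[j]) <= gate]
```

In a full tracker, matched detections feed the Kalman update, unmatched detections spawn new tracks, and unmatched tracks are aged out; a real implementation would replace the permutation search with the O(n^3) Hungarian algorithm.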
Keywords/Search Tags: sensor fusion, simultaneous localization and mapping, Kalman filter, multi-target tracking