In recent years, with the rapid development of intelligent driving technology and the continuous enrichment of autonomous driving application scenarios, environment perception technology for intelligent vehicles faces ever higher demands. Real-time reconstruction of the terrain in front of the vehicle is one of the key technologies for intelligent vehicle environment perception. Compared with perception and positioning systems equipped with only a single sensor, a multi-sensor positioning and perception system provides richer information, higher robustness, higher positioning accuracy, and better reconstruction results. At present, real-time terrain reconstruction for vehicles is limited in positioning precision, the reconstructed models carry only a single color and lack realistic, rich color information, and the road area is not segmented, so such models cannot provide better perception for autonomous vehicles. To address these problems, this paper fuses data from multiple sensors, namely a lidar, an inertial measurement unit, and a color camera, to achieve real-time 2.5D reconstruction of the terrain in front of the vehicle. An experimental platform is built to verify the horizontal positioning accuracy, reconstruction quality, road segmentation quality, and real-time performance of the system. The main work of the paper is as follows:

1. To achieve spatial synchronization, the multiple sensors are calibrated first. Zhang Zhengyou's calibration method is used to calibrate the intrinsic parameters of the camera, and a checkerboard plane calibration method is used to calibrate the extrinsic parameters. By changing the pose of the calibration board several times, multiple sets of constraint equations are constructed, and solving them yields the homogeneous transformation matrix between the camera coordinate system and the lidar coordinate system. Using the camera intrinsic matrix and the camera-lidar extrinsic matrix, each three-dimensional point in the lidar coordinate system is mapped to the corresponding two-dimensional pixel in the image, and the RGB value of that pixel is assigned to the 3D point, realizing the coloring operation.
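The coloring step described above amounts to a standard pinhole projection followed by a color lookup. The following is a minimal Python/NumPy sketch, assuming an intrinsic matrix K from Zhang Zhengyou's method and a 4x4 lidar-to-camera extrinsic transform T_cam_lidar from the checkerboard calibration; the function and variable names are illustrative and not the paper's actual implementation.

```python
import numpy as np

def colorize_points(points_lidar, image, K, T_cam_lidar):
    """Assign RGB colors to lidar points by projecting them into the camera image.

    points_lidar : (N, 3) array of 3D points in the lidar frame.
    image        : (H, W, 3) RGB image.
    K            : (3, 3) camera intrinsic matrix (from Zhang's calibration).
    T_cam_lidar  : (4, 4) homogeneous transform from the lidar to the camera frame.
    Returns an (M, 6) array of [x, y, z, r, g, b] for points visible in the image.
    """
    # Transform points from the lidar frame to the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    in_front = pts_cam[:, 2] > 0.1
    pts_cam = pts_cam[in_front]
    pts_kept = points_lidar[in_front]

    # Pinhole projection: pixel = K @ (X/Z, Y/Z, 1).
    uv = (K @ (pts_cam / pts_cam[:, 2:3]).T).T[:, :2]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    # Discard projections that fall outside the image bounds.
    h, w = image.shape[:2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Sample the RGB value of the corresponding pixel for each valid point.
    colors = image[v[valid], u[valid]].astype(float)
    return np.hstack([pts_kept[valid], colors])
```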
2. The overall scheme is designed and the hardware platform is built. Based on the vehicle-mounted sensors (lidar, inertial measurement unit, and color camera), the system acquires lidar point clouds, images of the scene in front of the vehicle, and motion data such as the acceleration and angular velocity of the vehicle body, and estimates the vehicle pose with an extended Kalman filter. Using the extrinsic and intrinsic matrices between the sensors, the data of each sensor are transformed into the same coordinate system; finally, the terrain in front of the vehicle is reconstructed in real time from the obtained vehicle pose, lidar point clouds, and image data.

3. To improve the positioning accuracy, a pose estimation method based on an iterated error-state Kalman filter is adopted. The point cloud is first preprocessed and its features are extracted: for the point cloud distortion caused by motion, the pose error is computed by linear interpolation and the distortion is removed; ground point segmentation is then performed, and edge features and surface features are extracted from the ground point set and the non-ground point set, respectively. Finally, the iterated error-state Kalman filter fuses the motion data of the inertial measurement unit with the undistorted feature point cloud to estimate the pose of the vehicle body.

4. To reconstruct the road surface in front of the vehicle, a sliding-window method for building a local point cloud map in front of the vehicle is proposed, and the terrain is represented as a 2.5D elevation map (see the sketch at the end of this summary). The elevation map is colored from the raw camera images and the height values; lane areas are then recognized with the FastSCNN semantic segmentation network, and lane-area segmentation of the 2.5D terrain model is realized, which enhances the perception ability of the vehicle and supports future decision making, control, and path planning. To match human driving observation habits, a driving-perspective image is obtained by projecting the terrain model in the 3D scene.

5. An outdoor real-vehicle experiment was designed, and a horizontal pose estimation accuracy experiment, a reconstruction quality experiment, a road segmentation experiment, and a real-time performance test of the reconstruction were carried out to verify the effectiveness of the algorithms proposed in this paper.
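As referenced in item 4, the following is a minimal Python/NumPy sketch of how a 2.5D elevation grid in front of the vehicle can be filled from the sliding-window local point cloud. The grid extent, the resolution, and the rule of keeping the maximum height per cell are assumptions made for illustration and are not claimed to match the paper's exact design.

```python
import numpy as np

def build_elevation_map(points_vehicle, x_range=(0.0, 20.0), y_range=(-10.0, 10.0),
                        resolution=0.2):
    """Rasterize a local point cloud (vehicle frame, x forward, y left) into a
    2.5D elevation grid in which each cell stores a single height value.

    points_vehicle : (N, 3) points already transformed into the vehicle frame
                     using the estimated pose (the sliding-window local map).
    Returns a 2D array of heights; cells without measurements are NaN.
    """
    nx = int((x_range[1] - x_range[0]) / resolution)
    ny = int((y_range[1] - y_range[0]) / resolution)
    elevation = np.full((nx, ny), np.nan)

    # Keep only points that fall inside the mapped area in front of the vehicle.
    x, y, z = points_vehicle[:, 0], points_vehicle[:, 1], points_vehicle[:, 2]
    mask = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    ix = ((x[mask] - x_range[0]) / resolution).astype(int)
    iy = ((y[mask] - y_range[0]) / resolution).astype(int)

    # Each cell keeps the highest measured point as its elevation value
    # (one height per cell is what makes the map 2.5D rather than full 3D).
    for i, j, h in zip(ix, iy, z[mask]):
        if np.isnan(elevation[i, j]) or h > elevation[i, j]:
            elevation[i, j] = h
    return elevation
```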