
Research On Obstacle Detection Method Of Unmanned Vehicle Based On Multi-sensor Fusion

Posted on: 2020-06-23
Degree: Master
Type: Thesis
Country: China
Candidate: J Han
Full Text: PDF
GTID: 2492306308950739
Subject: Transportation planning and management
Abstract/Summary:
The autonomous unmanned system combines technologies such as traffic engineering, communication technology and computer technology to make human travel more convenient. In the future, the widespread use of autonomous unmanned vehicles could greatly reduce traffic congestion and the traffic accident rate, so their development has received widespread attention. Such a vehicle first collects traffic environment information through the sensors on its platform, then identifies the collected data with multiple recognition algorithms, and then calibrates and corrects the identified results by decision-level fusion to output more accurate and effective obstacle targets. Finally, an appropriate target tracking model predicts the position of each moving target, providing a strong technical guarantee for the safe driving of the autonomous unmanned vehicle.

This work relies on sensors such as LiDAR and a vision camera to collect traffic environment information in real time. To accurately output the position, shape and confidence of different obstacles in different driving environments, an improved YOLO (You Only Look Once) target recognition algorithm is applied to the collected data. A fusion distance measurement matrix is then used to build a data fusion platform that merges the multi-sensor data, and the model is corrected with cues such as cross-sectional area so that obstacles are identified accurately. Finally, a Kalman filter tracking algorithm is introduced to predict the positions of moving targets. Based on the above, the paper covers the following research contents.

Firstly, the working principles of the two kinds of sensors, LiDAR and vision camera, are introduced. On this basis, combined with deep-learning target recognition, the YOLO convolutional neural network algorithm is selected for detailed analysis. A YOLO model is built on the Caffe framework, and the sample selection process and training-step settings are analyzed to prepare for the optimization of the subsequent target recognition algorithm.

Secondly, the data from the LiDAR and the vision camera are processed by pixel-set calculation and noise reduction to generate depth images and monocular color images respectively, which are fed into the improved YOLO algorithm. The improved algorithm can perform secondary detection on less distinct targets such as pedestrians and non-motor vehicles. A large number of image samples are used to train the YOLO algorithm, obtain its parameters and establish a target detection model, and test samples are then used in decision-level fusion to verify the soundness of the model. The results show that the proposed model performs best when the training step is set to 10,000 and the learning rate to 0.01. At the same time, the secondary detection scheme of the improved YOLO algorithm can decide, within 39 ms, whether to run further image detections according to the identified target type, so that pedestrians and non-motor vehicles are lost as rarely as possible.

Finally, a Kalman filter target tracking algorithm based on YOLO is proposed. A Kalman filter is first used to establish a tracking model for moving targets; then a state vector model is built on the YOLO detections, and after the covariance matrix is defined, the target is tracked by continuously updating the model as it moves. The final results show that the error between the position predicted by the Kalman filter and the actually detected position is within 5 pixels, and across different frames the intersection-over-union (cross ratio) of the same tracked target remains above 0.7. Compared with other methods, the proposed method has a smaller overall error and more accurately reflects the relationship between the predicted target model and the actual one.
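The cross ratio (intersection-over-union) used above to confirm that the same target is being followed across frames can be sketched as follows. The corner-coordinate box format and the sample boxes are illustrative assumptions, not taken from the thesis:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    Illustrative helper; the box format is an assumption."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

# The same target detected in two consecutive frames overlaps heavily,
# so its IoU stays above the 0.7 threshold and the tracks can be associated.
print(iou((10, 10, 60, 60), (14, 12, 64, 62)) > 0.7)
```

A per-frame association would compute this score between every predicted box and every new detection and match the pairs with the highest overlap.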
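The Kalman filter position prediction described above can be sketched as a minimal constant-velocity filter over 2-D pixel positions. The state layout, noise covariances and simulated detections below are illustrative assumptions, not the thesis's actual parameters:

```python
import numpy as np

# State x = [u, v, du, dv]^T (pixel position and velocity);
# measurement z = [u, v]^T (detected box centre). One step per frame.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # observe position only
Q = np.eye(4) * 0.01   # process noise (assumed)
R = np.eye(2) * 1.0    # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle; z is the detector's (u, v) centre."""
    # Predict the next position from the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct the prediction with the new detection.
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a simulated target moving 3 px/frame in u, 2 px/frame in v.
x = np.array([0.0, 0.0, 0.0, 0.0])
P = np.eye(4) * 10.0
rng = np.random.default_rng(0)
errs = []
for t in range(1, 30):
    z = np.array([3.0 * t, 2.0 * t]) + rng.normal(0, 0.5, 2)
    x, P = kf_step(x, P, z)
    errs.append(np.hypot(x[0] - 3.0 * t, x[1] - 2.0 * t))

print(round(errs[-1], 2))  # steady-state error stays at a few pixels
```

After the filter converges, the gap between predicted and detected position stays small, which is the kind of bound (within 5 pixels) reported in the abstract.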
Keywords/Search Tags:Autonomous unmanned vehicles, Multi-sensor, Data fusion, Obstacle detection, Target tracking