Driverless cars are a focal point and frontier of automotive research worldwide, integrating environment perception, route planning, and assisted driving into a comprehensive intelligent system. The emergence of driverless cars has improved traffic efficiency, increased driving safety, and, to a certain extent, freed drivers' hands. A driverless car must be able to complete tasks independently, so autonomous navigation technology is paramount, and within it environment perception is the most critical and fundamental problem. Environment perception means that the driverless car uses radar, laser, cameras, and other sensors to perceive its surroundings; using cameras to perform environment perception for autonomous navigation constitutes the car's visual navigation method. In this paper, computer vision techniques and deep learning algorithms are used to process the visually perceived environmental data, and the visual navigation method of driverless vehicles is studied. The main research contents are as follows:

(1) To address the weak robustness and susceptibility to environmental factors of current lane line detection algorithms, a lane line detection algorithm based on an improved DeepLabv3+ is proposed, which recasts lane line detection as a binary semantic segmentation task. To handle the slender and unevenly distributed shape of lane lines, the DeepLabv3+ model is optimized: a feature pyramid network is added and the ASPP outputs are concatenated along the channel dimension, which improves the model's detection of lane-line edges and recovers detail information lost during DeepLabv3+'s downsampling. Finally, an attention mechanism is introduced so that the model concentrates training on effective feature information. The improved DeepLabv3+ model achieves an mIoU of 77.02% on the TuSimple dataset, demonstrating good accuracy.

(2) A dynamic target detection method based on an improved YOLOv5m is proposed, informed by analysis of the BDD100k dataset combined with the actual conditions of the driverless car's visual navigation scene. First, the number of prediction layers is increased from three to five to strengthen detection of targets at different scales; second, the BiFPN structure is introduced to make feature fusion richer and more efficient; finally, the Soft-NMS non-maximum suppression method is added to reduce missed detections in complex, occlusion-prone scenes. The improved model raises mAP from 69.2% to 73.4% on the BDD100k dataset; although the FPS drops to 67, it still meets real-time requirements.

(3) Morphological processing and lane line fitting are performed on the binarized images output by the improved DeepLabv3+, providing a basis for lane departure warning and lane keeping, and a multi-task fusion detection model is built to detect lane lines and dynamic targets simultaneously.
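The Soft-NMS step in (2) keeps overlapping detections but decays their confidence instead of discarding them outright, which helps in occlusion-heavy scenes. A minimal NumPy sketch of the Gaussian variant follows; the box format, `sigma`, and score threshold here are illustrative assumptions, not the settings used in this work:

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, format [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay scores of boxes that overlap the current best."""
    scores = scores.astype(float).copy()
    keep = []
    idxs = np.arange(len(scores))
    while len(idxs) > 0:
        best = idxs[np.argmax(scores[idxs])]   # highest remaining score
        keep.append(int(best))
        idxs = idxs[idxs != best]
        if len(idxs) == 0:
            break
        ious = iou(boxes[best], boxes[idxs])
        scores[idxs] *= np.exp(-(ious ** 2) / sigma)  # Gaussian decay, not removal
        idxs = idxs[scores[idxs] > score_thresh]      # drop boxes decayed to noise
    return keep, scores
```

Unlike hard NMS, a heavily occluded true positive only loses confidence in proportion to its overlap with a stronger detection, so it can still survive the final score threshold.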
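The lane line fitting in (3) can be sketched as fitting a polynomial through the foreground pixels of the binarized segmentation mask. The sketch below, using NumPy only, is an assumption about the general approach; the actual morphological kernel sizes and fit order used in this work are not specified here:

```python
import numpy as np

def fit_lane(mask, order=2):
    """Fit x = f(y) through the foreground pixels of a binary lane mask.

    Lane lines are near-vertical in road images, so fitting x as a
    polynomial in y is better conditioned than fitting y as a function of x.
    Morphological closing would normally be applied to the mask first to
    remove small holes left by the segmentation network.
    """
    ys, xs = np.nonzero(mask)
    if len(ys) <= order:
        return None  # not enough lane pixels to fit
    return np.polyfit(ys, xs, order)  # coefficients, highest power first

# Toy binary mask: a straight "lane line" at x = y // 2
mask = np.zeros((20, 20), dtype=np.uint8)
for y in range(20):
    mask[y, y // 2] = 1
coeffs = fit_lane(mask, order=1)  # a linear fit suffices for this toy line
```

The fitted curve gives the lane's lateral position at any image row, which is what lane departure warning and lane keeping consume downstream.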