Whether for assisted driving, autonomous driving, or future all-weather, all-area unmanned driving, every change in driving mode depends on advances in intelligent detection algorithms, and reliable perception of the traffic environment is the most fundamental requirement at every stage. In traffic scenes, especially car-following scenes, which account for a large proportion of driving, vehicles and lane lines are the key targets, and their detection is a core technology. Capturing vehicles and lane lines with visual sensors greatly reduces the cost of detection hardware; at the same time, with the growth of GPU computing power, perception algorithms can be progressively optimized to improve both real-time performance and accuracy. Research on deep-learning-based detection methods for vehicles and lane lines is therefore an urgent need in current traffic environment perception.

For vehicle detection, this paper uses a YOLOv4-CBAM model to quickly and accurately detect the type and location of vehicles in road scenes; for lane line detection, an asymmetric SegNet model is used to greatly reduce detection time; for joint detection of vehicles and lane lines, an improved MobileNetV3-based joint detection method is used to ensure optimal joint detection performance. The joint detection method studied in this paper provides good decision-making support not only for car-following scenes but also for the path planning of driving vehicles. The specific research contents are as follows:

(1) To meet the accuracy requirements of vehicle detection in intelligent driver-assistance systems, this paper starts from the feature-extraction capability of the YOLOv4 network and introduces an attention mechanism, applying spatial attention and channel attention simultaneously to further strengthen the model's feature extraction and thereby improve detection accuracy. The improved YOLOv4-CBAM is trained and tested on the BDD100K dataset and performs well on precision, recall, F1 score, and mAP, showing that introducing the attention mechanism effectively improves detection accuracy.

(2) To meet the real-time and hardware-memory requirements of intelligent driver-assistance systems, this paper proposes a lane line detection and recognition method that combines an asymmetric SegNet algorithm with connected-component constraints. The symmetric SegNet network is changed to an asymmetric structure to extract lane lines pixel by pixel: convolution and pooling extract lane line features, the binarized image is associated with connected-component constraints to classify the lane feature points, and finally feature points of the same class are fitted into lane lines. The improved asymmetric SegNet algorithm is trained and tested on the TuSimple dataset and performs excellently on precision, recall, and F1 score, showing that the asymmetric SegNet algorithm combined with connected-component post-processing greatly reduces detection time and supports real-time lane line detection.

(3) For the two common targets in car-following scenes, vehicles and lane lines, this paper proposes a joint detection method suited to such scenes. Its essence is a multi-task learning method for real-time vehicle and lane line detection based on a lightweight network. The method adopts an encoder-decoder structure and multi-task ideas: the feature-extraction network and the feature enhancement and fusion module are shared, while separate detection branches detect vehicles and lane lines respectively. The model was trained and tested on the BDD100K dataset and additionally evaluated on four other datasets and on Chongqing road images, with a focus on detection performance in car-following scenes. Experiments show that the improved multi-task network model outperforms single-task YOLOv4 and SegNet in precision, recall, F1 score, and detection time, fully demonstrating the feasibility and effectiveness of the proposed improvements and the practical value of the designed joint vehicle and lane line detection method in car-following scenes.
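To illustrate the attention mechanism used in contribution (1), the following is a minimal NumPy sketch of the CBAM idea (channel attention followed by spatial attention), not the thesis's actual implementation: the shared MLP weights `w1`/`w2` and the spatial kernel `k` are illustrative assumptions, and the 7×7 spatial convolution of standard CBAM is simplified here to a per-pixel weighted sum of the pooled maps.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """x: feature map of shape (C, H, W); w1: (C//r, C), w2: (C, C//r)."""
    avg = x.mean(axis=(1, 2))  # global average-pooled descriptor, (C,)
    mx = x.max(axis=(1, 2))    # global max-pooled descriptor, (C,)
    # shared two-layer MLP applied to both descriptors, then summed
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0))
    return x * att[:, None, None]  # reweight each channel

def spatial_attention(x, k):
    """Simplified spatial attention: k weights the pooled maps per pixel."""
    avg = x.mean(axis=0)  # channel-wise average map, (H, W)
    mx = x.max(axis=0)    # channel-wise max map, (H, W)
    att = sigmoid(k[0] * avg + k[1] * mx)  # stands in for the 7x7 conv
    return x * att[None, :, :]  # reweight each spatial location

def cbam(x, w1, w2, k):
    # CBAM applies channel attention first, then spatial attention
    return spatial_attention(channel_attention(x, w1, w2), k)
```

Because both attention maps lie in (0, 1), the block only rescales the input feature map; inserted after a YOLOv4 backbone stage, it lets the network emphasize informative channels and locations without changing tensor shapes.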