The detection of objects in front of the vehicle is a key part of vehicle environment perception and the basis for decision-making in vehicle assisted-driving systems. Deep learning methods based on camera sensors achieve good detection performance and have been widely applied to automotive environment perception. However, the driving environment is complex and good lighting conditions cannot be maintained at all times, which causes missed detections and false detections. Moreover, deep networks involve a huge amount of computation and parameters, making it difficult to guarantee real-time target detection on resource-constrained vehicle terminals. To address these problems, this paper uses radar-vision fusion to compensate for the shortcomings of a single camera sensor, and makes improvements in both deep learning model compression and the sensor fusion strategy to ensure the accuracy and real-time performance of target detection. The main research content and conclusions are as follows:

To improve target detection speed, a multi-layer parameter compression method for YOLOv4 is proposed that reduces the number of parameters, shrinks the model size, and increases detection speed. First, the network is trained with channel-level sparsity using L1 regularization, and an initial channel-pruning threshold is determined from the magnitudes of the scaling factors after sparse training. Second, the least squares method with channel weighting is used to reconstruct the error between the network before and after pruning, and the pruning threshold best suited to each convolutional layer is obtained by minimizing this error, after which pruning is completed. Finally, the pruned model is quantized to complete the compression. Experiments show that the compressed YOLOv4 network has 4.6 times fewer parameters and 1.6 times faster detection, which greatly reduces computation and model size and improves the target detection speed of YOLOv4.

To improve target detection accuracy, the sensor fusion strategy is improved and a radar-vision fusion method is proposed that combines spatiotemporally aligned detections with weighted least squares decision-making. The method aligns the millimeter-wave radar and the vision sensor in both space and time, obtains detections from the two sensors at the same instant in the same coordinate frame, and computes the intersection over union (IoU) between radar and vision detections to determine which detections can be fused; the final fused detection result is then produced by weighted least squares. Experimental results show that the fused target detection algorithm reaches 86% detection accuracy, 2.7% higher than the single vision sensor, at a detection speed of 31.8 FPS.
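As an illustration of the compression pipeline described above, the following minimal sketch (PyTorch assumed; all function names are illustrative, not the thesis implementation) shows the L1 sparsity penalty on batch-normalization scaling factors, the initial global pruning threshold taken from their distribution, and a per-layer threshold refinement that minimizes the least squares reconstruction error between a layer's output before and after pruning. For simplicity the sketch zeroes pruned channels rather than physically removing them.

```python
import torch
import torch.nn as nn

def l1_sparsity_penalty(model, lam=1e-4):
    """L1 penalty on BN scaling factors; add this to the task loss during
    sparse training so the gammas of unimportant channels shrink to zero."""
    return lam * sum(bn.weight.abs().sum()
                     for bn in model.modules() if isinstance(bn, nn.BatchNorm2d))

def initial_threshold(model, prune_ratio=0.5):
    """Initial global pruning threshold: the prune_ratio quantile of all
    BN scaling factors after sparse training."""
    gammas = torch.cat([bn.weight.detach().abs().flatten()
                        for bn in model.modules() if isinstance(bn, nn.BatchNorm2d)])
    return float(torch.quantile(gammas, prune_ratio))

def refine_layer_threshold(conv, bn, x, init_thr, steps=11):
    """Per-layer refinement: pick the threshold near init_thr that minimizes
    the least squares error between the layer output before and after pruning.
    Call with the model in eval() mode; x is a batch of sample inputs."""
    with torch.no_grad():
        y_ref = bn(conv(x))                      # unpruned reference output
        best_thr, best_err = init_thr, float("inf")
        for thr in torch.linspace(0.5 * init_thr, 1.5 * init_thr, steps):
            mask = (bn.weight.abs() > thr).float()  # surviving channels
            y = y_ref * mask.view(1, -1, 1, 1)      # zero out pruned channels
            err = float(((y - y_ref) ** 2).mean())  # reconstruction error
            if err < best_err:
                best_thr, best_err = float(thr), err
    return best_thr
```

In an actual deployment the surviving channels would be copied into a smaller network before quantization; the mask here only approximates that removal for the purpose of the threshold search.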
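The fusion decision can be sketched as follows (Python/NumPy; the IoU threshold and sensor variances are assumed values for illustration, not the thesis calibration). Radar detections projected into the image plane are matched to vision detections by IoU, and each matched pair is fused by weighted least squares, which for two direct measurements of the same box reduces to an inverse-variance weighted average.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def fuse_wls(z_radar, z_vision, var_radar, var_vision):
    """Weighted least squares estimate of x minimizing
    (z_r - x)^2/var_r + (z_v - x)^2/var_v, i.e. inverse-variance weighting."""
    w_r, w_v = 1.0 / var_radar, 1.0 / var_vision
    return (w_r * z_radar + w_v * z_vision) / (w_r + w_v)

def fuse_detections(radar_boxes, vision_boxes, iou_thr=0.5,
                    var_radar=4.0, var_vision=1.0):
    """Match radar boxes (already projected into the image plane) to vision
    boxes by IoU; fuse matched pairs by weighted least squares and pass
    unmatched vision detections through unchanged."""
    fused, used = [], set()
    for rb in radar_boxes:
        best_j, best_s = -1, iou_thr
        for j, vb in enumerate(vision_boxes):
            s = iou(rb, vb)
            if j not in used and s >= best_s:
                best_j, best_s = j, s
        if best_j >= 0:
            used.add(best_j)
            fused.append(fuse_wls(np.asarray(rb, float),
                                  np.asarray(vision_boxes[best_j], float),
                                  var_radar, var_vision))
    fused += [np.asarray(vb, float)
              for j, vb in enumerate(vision_boxes) if j not in used]
    return fused
```

Weighting by inverse variance lets the more reliable sensor dominate each fused box; in practice the variances would come from calibrating each sensor's localization error rather than the fixed values used here.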