With the transition of intelligent driving from basic driver assistance to high-level automated driving, vehicles, as independent intelligent agents, have an ever-growing demand for information about the external environment. A single independent sensing unit's perception of the external environment is limited by the sensor's own characteristics and by the algorithm level, and it cannot obtain comprehensive sensing information on its own under conditions such as rain, fog, and night. The development of multi-sensor fusion technology is therefore of great significance for resolving this perception dilemma. Based on the complementary sensing performance, cost-effectiveness, and maturity of the independent detection technologies of vehicle-mounted sensing components, this paper proposes an extended-network fusion target detection algorithm suitable for radar and vision fusion, and systematically studies the network structure design and the testing and analysis of the algorithm. The specific research contents are as follows:

(1) Based on an analysis of the technical routes for radar and vision fusion, feature-level fusion is adopted in this paper; algorithm training and test evaluation are carried out on the nuScenes dataset and on test data from a self-built data acquisition platform.

(2) To address the inconsistent sampling frequencies and trigger times of different sensors, this paper proposes OOSM (out-of-sequence measurement) and non-OOSM temporal fusion mechanisms, and introduces the spatial fusion technology of the sensors.

(3) To address the irregular motion of targets in the time series caused by radar detection noise, this paper proposes a target tracking and filtering algorithm based on "EKF-Mahalanobis distance-Hungarian matching" for data preprocessing, and introduces a "spawn-live-death" mechanism to maintain radar target information. To determine the observation covariance matrix used in the filtering process, a covariance fitting test based on RTK ground-truth data was carried out, and the feasibility of the filtering and tracking algorithm was verified.
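For concreteness, a minimal sketch of the "EKF-Mahalanobis distance-Hungarian matching" association step follows, assuming a constant-velocity motion model and a linear position measurement (under which the EKF reduces to a standard Kalman filter). The matrices F, H, Q, R, the frame period, and the gate threshold are illustrative placeholders, not values from this thesis:

```python
# Sketch of one tracking cycle: predict each track, gate candidate detections
# by Mahalanobis distance, assign with the Hungarian algorithm, then update.
# All model matrices and the gate threshold are assumed for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import chi2

DT = 0.05                      # radar frame period [s], assumed
F = np.array([[1, 0, DT, 0],   # constant-velocity state transition
              [0, 1, 0, DT],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # radar observes position (x, y) only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01           # process noise, assumed
R = np.eye(2) * 0.25           # observation covariance (fitted from RTK data in the thesis)
GATE = chi2.ppf(0.99, df=2)    # 99% Mahalanobis gate for a 2-D measurement

def predict(x, P):
    """Time update for one track state x (4,) and covariance P (4, 4)."""
    return F @ x, F @ P @ F.T + Q

def associate(tracks, detections):
    """Gate with the Mahalanobis distance, then solve the assignment with the
    Hungarian algorithm. tracks: list of (x, P); detections: array (m, 2)."""
    cost = np.full((len(tracks), len(detections)), 1e6)
    for i, (x, P) in enumerate(tracks):
        S_inv = np.linalg.inv(H @ P @ H.T + R)     # inverse innovation covariance
        for j, z in enumerate(detections):
            v = z - H @ x                          # innovation
            d2 = float(v @ S_inv @ v)              # squared Mahalanobis distance
            if d2 < GATE:
                cost[i, j] = d2
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < GATE]

def update(x, P, z):
    """Measurement update with a matched detection z (2,)."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

In such a scheme, detections left unmatched would spawn new tracks and tracks unmatched for several consecutive frames would be deleted, which corresponds to the "spawn-live-death" maintenance mechanism described above.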
(4) The radar targets are spatially transformed to generate radar images, mapping from the radar coordinate system to the image coordinate system; the projected radar point cloud is extended vertically in the image plane, and the projection together with the targets' radial distances serves as the pixel values of a two-channel radar image used for feature-fusion research with the visual images (a hedged sketch of this projection is appended at the end of this abstract).

(5) To introduce radar images as auxiliary information for visual-image target detection, this paper studies the extension of the RetinaNet one-stage target detection algorithm with a VGG-16+FPN backbone, using two-channel radar images (360×640×2) and three-channel visual images (360×640×3) as the inputs of the fusion network, and proposes an Extended VGG-16 network and an Extended Feature Pyramid Network (E-FPN) suited to radar and vision fusion. Through the serial fusion of multi-level radar features and visual features, a deep extended fusion target detection network for radar and vision is proposed (a toy sketch of this serial fusion is also appended below). The test results show that, compared with the reference network for purely visual target detection, the mAP of the proposed network improves by 2.9% and the detection accuracy for small targets improves by 18.73%, which verifies the detection capability of the proposed extended fusion network for visually insensitive targets and the feasibility of the algorithm.

The research in this paper has reference value for multi-sensor fusion technology for intelligent driving vehicles, especially in the field of fusion target detection, and is of positive significance for breaking through the detection bottleneck of single-type sensors and improving comprehensive perception performance.
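As a rough illustration of the radar-image generation in (4), the sketch below projects radar points into the image plane with a pinhole model and extends each point vertically. The intrinsics, extrinsics, the fixed 3 m extension height, and the reading of the two channels as an occupancy mask plus radial distance are all assumptions for illustration, not values or definitions taken from the thesis:

```python
# Sketch: radar point cloud -> two-channel radar image (occupancy, radial
# distance), with vertical extension of each projected point. Camera
# parameters and extension height are placeholders.
import numpy as np

W, H_IMG = 640, 360
K = np.array([[500.0, 0.0, 320.0],      # assumed camera intrinsics
              [0.0, 500.0, 180.0],
              [0.0, 0.0, 1.0]])
T_CAM_RADAR = np.eye(4)                 # assumed radar-to-camera extrinsics

def radar_image(points_xyz):
    """Project radar points (n, 3), given in radar coordinates, into a
    (H_IMG, W, 2) image; each point is extended 3 m upward in the image
    plane so the sparse point cloud covers the height of the target."""
    img = np.zeros((H_IMG, W, 2), dtype=np.float32)
    homo = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    cam = (T_CAM_RADAR @ homo.T).T[:, :3]          # radar -> camera frame
    for p_radar, p_cam in zip(points_xyz, cam):
        if p_cam[2] <= 0:                          # behind the camera
            continue
        dist = np.linalg.norm(p_radar)             # radial distance channel
        ends = []
        for dz in (0.0, 3.0):                      # point and a copy lifted 3 m
            uvw = K @ (p_cam + np.array([0.0, -dz, 0.0]))  # camera y points down
            ends.append((int(round(uvw[0] / uvw[2])),
                         int(round(uvw[1] / uvw[2]))))
        (u0, v0), (_, v1) = ends
        if 0 <= u0 < W:
            lo, hi = sorted((np.clip(v0, 0, H_IMG - 1),
                             np.clip(v1, 0, H_IMG - 1)))
            img[lo:hi + 1, u0, 0] = 1.0            # occupancy channel
            img[lo:hi + 1, u0, 1] = dist           # distance channel
    return img
```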
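Finally, a toy sketch of the serial (channel-wise concatenation) fusion of radar and visual features described in (5). A single fusion point with assumed layer widths and class names is shown; the actual Extended VGG-16 + E-FPN of the thesis fuses at multiple levels and is not reproduced here:

```python
# Sketch: two-branch VGG-style backbone fusing a 3-channel visual image and a
# 2-channel radar image by channel concatenation. Widths are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions plus max-pooling, in the VGG style."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class SerialFusionBackbone(nn.Module):
    """Encode the visual and radar images separately, then fuse serially."""
    def __init__(self):
        super().__init__()
        self.vis_stem = conv_block(3, 64)      # visual branch, 360x640x3 input
        self.rad_stem = conv_block(2, 16)      # radar branch, 360x640x2 input
        self.fused = conv_block(64 + 16, 128)  # serial fusion after concat

    def forward(self, visual, radar):
        v = self.vis_stem(visual)              # (B, 64, 180, 320)
        r = self.rad_stem(radar)               # (B, 16, 180, 320)
        x = torch.cat([v, r], dim=1)           # channel-wise concatenation
        return self.fused(x)                   # (B, 128, 90, 160)

# Shape check with dummy tensors sized like the thesis inputs.
if __name__ == "__main__":
    net = SerialFusionBackbone()
    out = net(torch.zeros(1, 3, 360, 640), torch.zeros(1, 2, 360, 640))
    print(out.shape)                           # torch.Size([1, 128, 90, 160])
```

In a full detector, feature maps of this kind would feed the FPN levels and the RetinaNet classification and regression heads.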