Accurate recognition of traffic signs is a prerequisite for driverless and assisted-driving technologies, and the first step in recognition is detecting the signs. With the development and application of deep learning, detection algorithms based on deep learning have advanced rapidly, but they still struggle to locate small targets such as traffic signs, which leads to difficult recognition and low detection accuracy.

To adapt to domestic (Chinese) road scenes, the TT100K traffic sign dataset is analyzed and improved. First, for categories with few samples, image augmentation is used to increase their data volume and balance the dataset. Second, weather-style augmentation combining conditions such as cloud, rain, snow, fog and wind is applied to part of the data to increase image diversity. Models are trained and tested on both the original and the improved TT100K dataset; the results show that the improved dataset yields clear gains in both accuracy and recall and alleviates the problem of scarce category data hampering model training.

To address the difficulty of the YOLOv5s algorithm in locating and recognizing small traffic sign targets and its low detection accuracy, an improved YOLOv5s traffic sign detection algorithm is proposed. The Coordinate Attention module, a spatially aware attention mechanism, is added after the backbone network to capture the relationship between positional information and channels, so that the model can locate the target region more accurately, extract image features better, and improve small-target detection accuracy. After the 17th layer of the original network structure, the feature map is upsampled again and an additional detection layer is introduced, producing a 160×160 feature map at the 19th layer that is concatenated and fused with the feature map from the 2nd layer of the backbone, giving a larger feature map for small-target detection. In the training phase, the original CIoU localization loss is replaced by the α-CIoU loss to obtain higher-quality anchor boxes. The test results show that detection accuracy improves by 1.9% to 95.1%, recall improves by 2.8% to 93.8%, small-target detection accuracy improves by 7.2% to 67.6%, medium-target accuracy by 1.8% to 83.4%, and large-target accuracy by 0.8% to 80.3%; the full-category mean average precision (mAP) improves by 1.4%. The per-image detection time increases from 0.4 ms to 0.5 ms, a small overhead that still meets real-time detection requirements.

To lighten the model, the lightweight feature extraction network GhostNet replaces the CSPDarknet53 backbone, generating the redundant feature maps with a structure that requires less computation and has fewer parameters. In the feature fusion module, the PANet structure is replaced by BiFPN, which retains the feature information produced by the original structure while fusing more higher-level feature information. Finally, the EIoU loss function is adopted to better reflect differences in width and height as well as localization confidence, thereby speeding up network convergence and addressing the overall lightweighting of YOLOv5s.
With a further 0.1% improvement in the full-category average precision, the number of model parameters is reduced by about one third and the per-image detection time is reduced from 0.4 ms to 0.3 ms. The algorithm therefore offers strong real-time performance and high practical value.
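As an illustration of the weather-style augmentation described above, the sketch below uses the albumentations library; the choice of library, the specific transforms, and their probabilities are assumptions, since the text does not name a tool or parameter settings.

```python
# Minimal sketch of the weather-style augmentation used to diversify TT100K images.
# The albumentations library, the transform choices and the probabilities are
# assumptions; the original work does not specify a tool or parameters.
import cv2
import albumentations as A

weather_aug = A.Compose([
    A.OneOf([
        A.RandomRain(p=1.0),   # rain streaks
        A.RandomSnow(p=1.0),   # snow speckles
        A.RandomFog(p=1.0),    # fog / haze
    ], p=0.7),
    A.RandomBrightnessContrast(p=0.3),  # cloudy or dim lighting
])

image = cv2.imread("tt100k_sample.jpg")          # hypothetical input image
augmented = weather_aug(image=image)["image"]    # extra sample for an under-represented class
```

Weather-style transforms do not move objects, so the existing TT100K bounding-box annotations can be copied unchanged to each augmented image, which is what makes this a cheap way to enlarge categories with little data.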
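The Coordinate Attention block added after the backbone could be implemented along the lines of the public CA design; the reduction ratio and activation below are assumptions, not values taken from the text.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate Attention: factorizes global pooling into two 1-D poolings so the
    attention weights encode both channel and positional information."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # pool along width  -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # pool along height -> (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        _, _, h, w = x.shape
        x_h = self.pool_h(x)                          # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)      # (B, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # height-wise weights
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # width-wise weights
        return x * a_h * a_w                          # re-weight the backbone feature map

# Example: applied to the last backbone feature map of YOLOv5s (channel count assumed)
feat = torch.randn(1, 512, 20, 20)
out = CoordinateAttention(512)(feat)   # same shape, attention-weighted
```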
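The extra small-object branch described above, which upsamples the head feature once more and concatenates it with the 2nd backbone layer, can be illustrated with tensor shapes for a 640×640 input; the channel counts below are assumptions rather than the exact model definition.

```python
import torch
import torch.nn as nn

# For a 640x640 input, the standard YOLOv5s head stops at an 80x80 map (stride 8).
# The modification upsamples once more to 160x160 (stride 4) and concatenates it with
# the high-resolution backbone feature so a fourth detect head can see small signs.
p3 = torch.randn(1, 128, 80, 80)    # feature after the 17th layer of the head (assumed shape)
p2 = torch.randn(1, 64, 160, 160)   # feature from the 2nd backbone layer (assumed shape)

upsampled = nn.Upsample(scale_factor=2, mode="nearest")(p3)   # 80x80 -> 160x160
fused = torch.cat([upsampled, p2], dim=1)   # 160x160 map fed to the new small-target detect layer
print(fused.shape)                          # torch.Size([1, 192, 160, 160])
```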
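For reference, the regression losses mentioned above are commonly written as follows; the notation mirrors the published CIoU, α-IoU and EIoU definitions, and the usual choice α = 3 is an assumption rather than a value stated in the text. Here ρ is the Euclidean distance between box centers and c is the diagonal of the smallest enclosing box.

```latex
% CIoU and its power generalization (alpha-CIoU) used for anchor-box regression.
\mathcal{L}_{\mathrm{CIoU}} = 1 - \mathrm{IoU}
    + \frac{\rho^{2}\!\left(\mathbf{b},\mathbf{b}^{gt}\right)}{c^{2}} + \beta v,
\qquad
v = \frac{4}{\pi^{2}}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^{2},
\qquad
\beta = \frac{v}{(1-\mathrm{IoU}) + v}

\mathcal{L}_{\alpha\text{-}\mathrm{CIoU}} = 1 - \mathrm{IoU}^{\alpha}
    + \frac{\rho^{2\alpha}\!\left(\mathbf{b},\mathbf{b}^{gt}\right)}{c^{2\alpha}}
    + (\beta v)^{\alpha}

% EIoU replaces the aspect-ratio term with separate width and height penalties,
% normalized by the width C_w and height C_h of the smallest enclosing box.
\mathcal{L}_{\mathrm{EIoU}} = 1 - \mathrm{IoU}
    + \frac{\rho^{2}\!\left(\mathbf{b},\mathbf{b}^{gt}\right)}{c^{2}}
    + \frac{\rho^{2}\!\left(w,w^{gt}\right)}{C_{w}^{2}}
    + \frac{\rho^{2}\!\left(h,h^{gt}\right)}{C_{h}^{2}}
```

With α > 1 the loss and its gradient are relatively larger for boxes that already overlap the ground truth well, which is the usual explanation for why α-CIoU yields the higher-quality anchor boxes noted above.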
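The Ghost module that underlies the GhostNet backbone swap can be sketched as follows; the 2:1 split between primary and "cheap" feature maps and the kernel sizes are assumptions based on the public GhostNet design.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Produce part of the output channels with a normal 1x1 convolution and the rest
    with a cheap depthwise convolution over those primary maps, so the redundant
    feature maps cost far fewer parameters and FLOPs."""
    def __init__(self, in_ch, out_ch, ratio=2, dw_kernel=3):
        super().__init__()
        init_ch = out_ch // ratio        # primary maps (out_ch assumed divisible by ratio)
        cheap_ch = out_ch - init_ch      # cheap "ghost" maps
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, cheap_ch, kernel_size=dw_kernel,
                      padding=dw_kernel // 2, groups=init_ch, bias=False),
            nn.BatchNorm2d(cheap_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        primary = self.primary(x)
        return torch.cat([primary, self.cheap(primary)], dim=1)

# Example: a Ghost replacement for a 64 -> 128 channel backbone convolution
block = GhostModule(64, 128)
y = block(torch.randn(1, 64, 80, 80))   # (1, 128, 80, 80)
```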
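In the neck, the main difference between PANet and BiFPN is BiFPN's learnable, normalized fusion weights (plus its extra cross-scale connections). A minimal sketch of that weighted fusion step is shown below; channel counts are chosen for illustration, and the published BiFPN uses a depthwise separable convolution where a plain convolution is used here.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """BiFPN-style 'fast normalized fusion': each same-resolution input gets a
    learnable non-negative weight, normalized before the weighted sum."""
    def __init__(self, num_inputs, channels):
        super().__init__()
        self.w = nn.Parameter(torch.ones(num_inputs))
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.SiLU()

    def forward(self, inputs):
        w = torch.relu(self.w)
        w = w / (w.sum() + 1e-4)                          # normalize the fusion weights
        fused = sum(wi * x for wi, x in zip(w, inputs))   # weighted sum of feature maps
        return self.conv(self.act(fused))

# Example: fuse a top-down (upsampled) feature with a lateral feature of the same size
fuse = WeightedFusion(num_inputs=2, channels=128)
top_down = torch.randn(1, 128, 40, 40)
lateral = torch.randn(1, 128, 40, 40)
out = fuse([top_down, lateral])   # (1, 128, 40, 40)
```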