With the continuous advancement of computer technology, artificial intelligence based on deep learning has developed rapidly, extending even to low-power, low-cost embedded devices. Current intelligent transportation systems contain large numbers of microprocessor-based embedded devices, widely distributed along traffic arteries, that provide perception of common elements of traffic scenes such as pedestrians, vehicles, and obstacles. Real-time monitoring of pedestrians in particular is the top priority of traffic safety assurance, and using object detection algorithms to quickly detect pedestrians in captured video is a key technology in intelligent transportation systems. Traditional pedestrian detection solutions often rely on high-precision sensors, which are expensive and offer limited detection capability. Alternatively, cameras can stream video back to a data center for centralized image-algorithm detection, which incurs long latency and consumes substantial bandwidth. Pedestrian detection based on such traditional methods is therefore costly and inefficient. In the field of edge intelligence, edge devices are expected to have a certain computing capability, so that many deep learning algorithms can run directly on large numbers of distributed embedded devices, greatly improving the real-time performance of the intelligent transportation system. This paper conducts in-depth research on inference frameworks and object detection algorithms for fast pedestrian edge detection in intelligent traffic scenarios, and builds a complete fast pedestrian detection system from hardware to software, verified on a microprocessor. The specific research work is as follows:

(1) We propose Micro Infer, a novel deep learning inference framework suitable for microprocessors. It mainly addresses the difficult deployment, high memory overhead, and poor portability of existing deep learning frameworks: it can complete the deployment of a deep learning model on the target chip with one click and obtain the best performance possible. Micro Infer combines an upper (host) computer with a lower (target) computer. The upper computer automatically performs model quantization, model pre-compilation, operator matching, and automatic code generation; it estimates in advance important information such as the memory overhead required on the target chip and the inference acceleration scheme, and automatically generates a customized AI software package. The lower computer uses an improved memory management strategy to execute algorithm inference in the smallest possible memory space without affecting speed. In addition, an operating system, Xidian OS, was specially customized for Micro Infer's target processors, and an AI framework layer was designed with good support for various back-end AI inference frameworks. Experiments show that the Micro Infer inference framework designed in this paper greatly reduces memory usage compared with other inference frameworks.

(2) A pedestrian object detection algorithm and deployment method based on a cropped YOLO-fastest is proposed. The YOLO-fastest model is further lightweighted by trimming the feature pyramid and backbone network. The trimmed model compresses the computational load and model weights to close to 1/10 of the original, with only a slight drop in inference accuracy. During deployment, optimized operators adapted to the target platform greatly accelerate inference, and the post-processing algorithm for the YOLO model is implemented on the MCU. Experiments show that the tailored YOLO-fastest model achieves large improvements in inference speed and memory usage.
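To make the host-side quantization step in (1) concrete, the following is a minimal sketch of per-tensor affine int8 quantization, one common scheme for preparing weights for MCU inference. The function names and the affine (scale, zero-point) formulation are illustrative assumptions, not Micro Infer's actual implementation.

```python
import numpy as np

def quantize_per_tensor(w, num_bits=8):
    """Affine per-tensor quantization of a float weight array.

    Returns int8 values plus the (scale, zero_point) needed to
    dequantize. Illustrative only -- the real scheme may differ.
    """
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    w_min, w_max = float(w.min()), float(w.max())
    # Widen the range to include 0 so it stays exactly representable.
    w_min, w_max = min(w_min, 0.0), max(w_max, 0.0)
    scale = (w_max - w_min) / (qmax - qmin) or 1.0
    zero_point = int(round(qmin - w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from quantized ones."""
    return (q.astype(np.float32) - zero_point) * scale
```

Per-tensor quantization keeps the on-chip bookkeeping minimal (one scale and zero point per tensor), which suits memory-constrained targets; finer-grained per-channel schemes trade extra metadata for accuracy.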
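The abstract does not detail the lower computer's memory management strategy; a common approach on MCUs is to plan all intermediate tensors into one static arena, letting tensors whose lifetimes do not overlap share the same bytes. The greedy first-fit planner below is an assumed sketch of that idea, not Micro Infer's actual algorithm.

```python
def plan_arena(tensors):
    """Greedy offset assignment for tensors in a static memory arena.

    tensors: list of (size_bytes, first_use_op, last_use_op).
    Returns (offsets, arena_size). Two tensors may share bytes only
    if their [first_use, last_use] op ranges do not overlap.
    Illustrative sketch -- real planners add alignment, in-place
    operators, and better packing heuristics.
    """
    # Place the biggest tensors first: a classic packing heuristic.
    order = sorted(range(len(tensors)), key=lambda i: -tensors[i][0])
    placed = []  # (offset, size, first_use, last_use)
    offsets = [0] * len(tensors)
    arena_size = 0
    for i in order:
        size, first, last = tensors[i]
        offset = 0
        while True:
            # A conflict is a placed tensor whose lifetime AND byte
            # range both overlap the candidate placement.
            conflict = next(
                (p for p in placed
                 if p[2] <= last and first <= p[3]
                 and p[0] < offset + size and offset < p[0] + p[1]),
                None)
            if conflict is None:
                break
            offset = conflict[0] + conflict[1]  # jump past the conflict
        offsets[i] = offset
        placed.append((offset, size, first, last))
        arena_size = max(arena_size, offset + size)
    return offsets, arena_size
```

Because the arena size is computed offline, the host can report the exact RAM requirement before anything is flashed to the chip, which matches the abstract's claim that memory overhead is estimated in advance.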
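The on-MCU post-processing mentioned in (2) typically decodes raw YOLO head outputs into scored boxes and then applies non-maximum suppression (NMS) to remove duplicates. A minimal greedy NMS sketch follows; the thresholds and the corner-coordinate box format are assumptions for illustration.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_thresh=0.45, score_thresh=0.25):
    """Greedy NMS: keep the highest-scoring box, drop heavy overlaps.

    Returns the indices of the boxes to keep, best score first.
    """
    cand = sorted((i for i, s in enumerate(scores) if s >= score_thresh),
                  key=lambda i: -scores[i])
    keep = []
    for i in cand:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```

The same logic ports directly to C on the MCU; its cost is quadratic in the number of candidate boxes, which is why filtering by `score_thresh` before NMS matters on a slow core.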