With the development of information technology, deep neural networks have become the primary means of processing large volumes of data. However, as parameter counts grow, real-time processing becomes harder, so it is imperative to simplify the network and reduce inference latency. Lightweight neural networks greatly simplify the network: with roughly one percent of the parameters and operations, they achieve accuracy comparable to ordinary neural networks. At the same time, with the development of high-level synthesis (HLS), developing a low-power accelerator on an FPGA has become much easier, greatly increasing the inference speed of neural networks. In this paper, an FPGA-based MobileNet accelerator architecture is proposed. First, the paper introduces the neural network and the optimization methods available in OpenCL. We then design a flexible and scalable neural network accelerator architecture for MobileNet using parallel computing. The accelerator is divided into four modules that accelerate MobileNet in a pipelined fashion, reducing data exchange with DDR memory and improving the utilization of computing resources; this both speeds up inference and lowers the accelerator's power consumption. We run the accelerator on a quantized MobileNetV1, using an Intel Arria 10 FPGA programmed in OpenCL. The accelerator reaches an inference latency of 39.85 ms at a power of 22.5 W, and its peak throughput reaches 48.8 GOPS. The accelerator achieves a 2.7x speedup over a CPU and 3x the energy efficiency of a GPU.