
Convolutional Neural Network Accelerator Based On Dynamic Hardware Reconfiguration

Posted on: 2022-07-19
Degree: Master
Type: Thesis
Country: China
Candidate: F L Yuan
Full Text: PDF
GTID: 2518306323462444
Subject: Computer system architecture
Abstract/Summary:
In recent years, artificial neural network technology has been widely applied in artificial intelligence, computer vision, speech recognition, and other fields, achieving remarkable success and becoming a current research hotspot. To solve more complex and abstract problems and to pursue higher recognition accuracy, the scale and depth of network models keep increasing, and their computational complexity and workload grow accordingly, so deploying neural networks on general-purpose computing platforms faces severe performance and energy-efficiency problems. FPGA-based neural network accelerators can fully exploit the parallelism of CNN algorithms and are an efficient solution; however, previous static-reconfiguration design methods suffer from low resource utilization and limited on-chip resources. To address the resource limitation encountered when implementing convolutional neural networks, this thesis proposes a neural network accelerator based on FPGA dynamic reconfiguration, which time-multiplexes FPGA resources and makes full use of the FPGA's dynamic partial reconfiguration capability. The research work of this thesis includes:

1. Unlike traditional static-reconfiguration accelerators, an accelerator that is reconfigured at runtime incurs a dynamic reconfiguration overhead that is an important factor in overall acceleration performance, yet methods that can accurately estimate this overhead in the early stage of reconfigurable hardware design are still lacking. This thesis therefore analyzes the bitstream configuration files of mainstream FPGAs and proposes a method that estimates the size of the corresponding partial reconfiguration bitstream from the computation and storage properties of the reconfigurable functional module; on this basis, a runtime reconfiguration performance cost model is constructed (a minimal illustrative sketch of such a model is given after this abstract).

2. A dynamically reconfigurable FPGA-based neural network accelerator architecture is proposed, in which the computation is organized as a pipeline and each pipeline stage is dynamically configured onto the FPGA in turn. The computation core uses the Winograd algorithm to save DSP resources (see the Winograd sketch after this abstract). A combinatorial optimization model is then built for the accelerator architecture design, and the design space is explored with a genetic algorithm.

3. The dynamic reconfiguration performance cost model proposed in this thesis is verified: by accelerating the Winograd algorithm with dynamic reconfiguration on the FPGA platform, different bitstream files and reconfiguration times are obtained and compared with the values predicted by the cost model, showing that the model's prediction accuracy meets practical requirements. Finally, the performance of the proposed dynamically reconfigurable neural network accelerator is analyzed: deployment and simulation of the VGG-16 network model on the FPGA platform show that, with this design method, higher performance and DSP efficiency are achieved than with previous accelerators.
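The abstract does not give the concrete form of the reconfiguration cost model, so the following Python fragment is only a minimal sketch under assumed relationships: the partial bitstream size is taken to grow linearly with the CLB, BRAM, and DSP columns occupied by the reconfigurable module, and the reconfiguration time is the bitstream size divided by the configuration-port bandwidth (for example, a 32-bit ICAP clocked at 100 MHz moves roughly 400 MB/s). The per-column byte counts, function names, and example figures are placeholders, not values from the thesis.

    # Illustrative sketch of a runtime-reconfiguration cost model (assumed form,
    # not the thesis' actual model); per-column sizes are placeholders.
    def estimate_partial_bitstream_bytes(clb_cols, bram_cols, dsp_cols,
                                         bytes_per_clb_col=15_000,
                                         bytes_per_bram_col=15_000,
                                         bytes_per_dsp_col=15_000):
        """Rough partial-bitstream size from the resource columns occupied by
        the reconfigurable region."""
        return (clb_cols * bytes_per_clb_col
                + bram_cols * bytes_per_bram_col
                + dsp_cols * bytes_per_dsp_col)

    def estimate_reconfig_time_s(bitstream_bytes, port_bandwidth=400e6):
        """Reconfiguration time = bitstream size / configuration-port bandwidth
        (e.g. a 32-bit ICAP at 100 MHz gives about 400 MB/s)."""
        return bitstream_bytes / port_bandwidth

    # Example: a module occupying 20 CLB, 4 BRAM and 2 DSP columns.
    size = estimate_partial_bitstream_bytes(20, 4, 2)
    print(f"estimated bitstream: {size / 1e6:.2f} MB, "
          f"reconfiguration time: {estimate_reconfig_time_s(size) * 1e3:.2f} ms")

Such a model lets the reconfiguration penalty of each candidate pipeline stage be weighed against its computation time during early design-space exploration, before any bitstream is actually generated.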
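The abstract only states that the compute core uses the Winograd algorithm to save DSP resources. The sketch below shows the standard Winograd F(2x2, 3x3) tile convolution (the transform matrices follow Lavin and Gray's formulation and are not taken from the thesis): a direct 3x3 convolution producing a 2x2 output tile needs 36 multiplications, while the Winograd form needs only 16 element-wise multiplications, which is the usual source of DSP savings on an FPGA.

    import numpy as np

    # Standard Winograd F(2x2, 3x3) transform matrices (Lavin & Gray): a 4x4
    # input tile and a 3x3 kernel yield a 2x2 output using 16 element-wise
    # multiplies instead of 36 multiply-accumulates for direct convolution.
    B = np.array([[1, 0, -1, 0],
                  [0, 1,  1, 0],
                  [0, -1, 1, 0],
                  [0, 1,  0, -1]], dtype=np.float32)
    G = np.array([[1.0,  0.0, 0.0],
                  [0.5,  0.5, 0.5],
                  [0.5, -0.5, 0.5],
                  [0.0,  0.0, 1.0]], dtype=np.float32)
    A = np.array([[1, 1,  1,  0],
                  [0, 1, -1, -1]], dtype=np.float32)

    def winograd_f2x2_3x3(tile4x4, kernel3x3):
        """Compute one 2x2 output tile of a 'valid' 3x3 correlation."""
        U = G @ kernel3x3 @ G.T      # kernel transform, 4x4
        V = B @ tile4x4 @ B.T        # input transform, 4x4
        M = U * V                    # 16 element-wise multiplies (the DSP work)
        return A @ M @ A.T           # output transform, 2x2

    # Quick check against direct correlation on one tile.
    x = np.random.rand(4, 4).astype(np.float32)
    k = np.random.rand(3, 3).astype(np.float32)
    ref = np.array([[np.sum(x[i:i + 3, j:j + 3] * k) for j in range(2)]
                    for i in range(2)])
    assert np.allclose(winograd_f2x2_3x3(x, k), ref, atol=1e-4)

In a hardware pipeline the kernel transform U can be precomputed offline, so only the input and output transforms (additions and shifts) plus the 16 multiplies remain on-chip per tile.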
Keywords/Search Tags: Convolutional Neural Network, Dynamic reconfiguration, Cost model, FPGA, Hardware acceleration