
Design Of Convolutional Neural Network Accelerator Based On ZedBoard

Posted on: 2022-07-30
Degree: Master
Type: Thesis
Country: China
Candidate: J Q Li
Full Text: PDF
GTID: 2518306542491364
Subject: Computer Science and Technology
Abstract/Summary:
With the rapid development of artificial intelligence technology, its advantages in data processing have become increasingly apparent, especially in applications such as intelligent voice, mobile edge devices, image classification, and intelligent vehicles. As the depth and structure of neural networks grow more complex, traditional processor-based computation can no longer meet the power-consumption and latency requirements of mobile edge devices. The FPGA (Field Programmable Gate Array) supports parallel computing, which suits scenarios with large data volumes and complex calculations, and it offers clear advantages over other platforms in power consumption and cost. In view of this situation, a design of a convolutional neural network accelerator based on ZedBoard is proposed. The research content is divided into the following parts.

(1) A compression method for convolutional neural networks that combines iterative pruning and binary quantization is proposed. The importance of compressing convolutional neural networks is analyzed from the perspectives of latency, power consumption, and cost. Taking the characteristics of the FPGA into account, iterative pruning is applied to the convolutional neural network AlexNet, and the weight distribution of AlexNet after pruning is given. Analysis of the pruned model shows that a large number of floating-point multiply-accumulate (MAC) operations remain, so the network is further quantized with binarization. Comparing the quantized model with the original, the accuracy loss is less than 2% while the memory usage is greatly reduced. Finally, the sparse weight matrices are stored using a row-concatenation scheme.

(2) The framework for accelerating the convolutional neural network on the FPGA is optimized: the traditional single-buffer data transmission scheme is improved, and the convolution and pooling modules are designed around an optimized row-buffer (line-buffer) structure. First, the overall structure of the framework is designed and the data transmission between the PL side and the PS side for reading and writing is analyzed. Then the design is modularized and optimized for the FPGA, and the principle by which the FPGA accelerates convolutional neural network computation is analyzed. Finally, the modular design of the convolutional network is completed.

(3) The FPGA-based convolutional neural network accelerator is implemented and its performance metrics are analyzed. The FPGA development platform is described, the IP cores of the whole framework are generated with Vivado HLS, and experiments are carried out on the ZedBoard. The acceleration effect of the accelerator is analyzed through experimental data, focusing on the acceleration results and the power-consumption ratio, and the results are compared with those of other platforms. Compared with the CPU and GPU, the accelerator achieves a significantly better acceleration effect and also shows clear advantages in power consumption.
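As a rough illustration of the compression pipeline described in part (1), the following sketch shows one round of magnitude-based pruning followed by binary quantization of the surviving weights. It is a minimal NumPy model with hypothetical layer sizes and a generic XNOR-Net-style scaling factor; it is not the thesis's actual training code.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude weights until the target sparsity is reached."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) > threshold
    return weights * mask, mask

def binarize(weights, mask):
    """Replace surviving weights with +/-alpha, where alpha is the mean absolute
    value of the remaining weights (a common binarization scaling choice)."""
    alpha = np.abs(weights[mask]).mean() if mask.any() else 0.0
    return np.sign(weights) * alpha * mask

# One pruning + quantization iteration on a random layer (hypothetical shape).
w = np.random.randn(256, 512).astype(np.float32)
w_pruned, mask = prune_by_magnitude(w, sparsity=0.7)   # keep ~30% of the weights
w_binary = binarize(w_pruned, mask)
print("non-zero weights:", int(mask.sum()), "distinct values:", np.unique(w_binary).size)
```

In an iterative scheme, this prune-then-retrain step would be repeated, tightening the sparsity target each round, before the final binarization pass.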
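The row-buffer (line-buffer) convolution structure mentioned in part (2) can also be modeled in software for reference. The sketch below streams an input feature map row by row and keeps only the most recent K rows buffered, mimicking how an FPGA convolution engine holds a sliding window on chip; all sizes and function names are illustrative assumptions, not taken from the thesis.

```python
import numpy as np
from collections import deque

def line_buffer_conv(stream_rows, kernel):
    """Valid KxK convolution over a row-streamed feature map, buffering only K rows,
    as a software model of an on-chip line-buffer convolution engine."""
    k = kernel.shape[0]
    buf = deque(maxlen=k)            # holds the most recent k input rows
    out_rows = []
    for row in stream_rows:          # rows arrive one at a time, as from a stream
        buf.append(row)
        if len(buf) == k:
            window_rows = np.stack(buf)              # k x W slice of the input
            width = row.shape[0]
            out = np.array([(window_rows[:, c:c + k] * kernel).sum()
                            for c in range(width - k + 1)])
            out_rows.append(out)
    return np.stack(out_rows)

# Example: an 8x8 input streamed row by row through a 3x3 averaging kernel.
img = np.arange(64, dtype=np.float32).reshape(8, 8)
kern = np.ones((3, 3), dtype=np.float32) / 9.0
print(line_buffer_conv(iter(img), kern).shape)   # -> (6, 6)
```

The point of the structure is that only K rows ever reside in on-chip memory, so the external-memory traffic per output pixel stays constant regardless of image height.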
Keywords/Search Tags: ZedBoard, Convolutional Neural Network, FPGA, Acceleration