
Research On Binarization Of Convolutional Neural Network And FPGA Implementation

Posted on: 2020-09-02  Degree: Master  Type: Thesis
Country: China  Candidate: Y F Bai  Full Text: PDF
GTID: 2428330575456608  Subject: Information and Communication Engineering
Abstract/Summary:
In recent years, with the continuous development of convolutional neural networks, network depth has kept increasing, placing ever higher demands on the computing power and storage capacity of hardware devices. Reducing the resource consumption of convolutional neural networks is therefore of great significance for deploying deep learning on embedded platforms. Binary neural networks can effectively improve the computing efficiency of hardware devices: they not only accelerate hardware computation but also reduce memory overhead, providing a new approach to deploying deep learning on embedded devices. FPGA chips contain abundant logic and computing units; their high performance and low power consumption make them well suited to embedded computing and able to meet the computational needs of deep learning algorithms.

In this thesis, aiming to realize a highly efficient binary neural network on an embedded platform, a new type of binary neural network is proposed and its forward-computation acceleration on an FPGA platform is realized. The contributions of this thesis are as follows:

(1) To improve the performance of the binary convolutional neural network, the back-propagation algorithm of the binary neural network is studied and improved. Dense connections are used to enhance the representational capacity of the binary network, which effectively improves network performance while ensuring that the number of model parameters does not increase.

(2) To migrate the binary neural network model to the hardware platform effectively, the parallelism of the network model is first studied and parallel schemes at multiple levels are determined. Secondly, based on these parallel schemes, the overall parallel architecture of the binarized convolutional neural network is designed. Finally, for the characteristics of binary network operations, an operator optimization method is studied, which further improves computation speed and reduces the resource consumption of the hardware devices.

(3) The forward accelerator for the binary convolutional neural network is implemented on an FPGA platform; it accomplishes the image recognition task correctly and achieves a better computing acceleration effect than CPU and GPU platforms.
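As context for contribution (1): training a binary network requires back-propagating through the non-differentiable sign() used for binarization. A common device for this is the straight-through estimator (STE); the sketch below is a generic illustration of that idea, not the exact improved back-propagation scheme of this thesis. The clipping threshold of 1.0 is the usual "hard tanh" variant and is an assumption here.

```python
# Generic sketch of binarization with a straight-through estimator (STE).
# Assumption: the standard clipped-STE rule, not this thesis's exact scheme.

def sign_forward(x):
    """Forward pass: binarize a real activation/weight to {-1, +1}."""
    return 1.0 if x >= 0 else -1.0

def sign_backward_ste(x, grad_out):
    """Backward pass: sign() has zero gradient almost everywhere, so the
    STE passes the incoming gradient through unchanged where |x| <= 1
    and zeroes it outside that range (hard-tanh clipping)."""
    return grad_out if abs(x) <= 1.0 else 0.0
```

In practice frameworks implement this as a custom autograd function so that full-precision "latent" weights are updated by the clipped gradients while only their binarized copies are used in the forward pass.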
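As context for the operator optimization in contributions (2)-(3): the key arithmetic saving of binary networks is that a {-1, +1} dot product can be computed with XNOR and popcount instead of multiply-accumulate, which maps directly onto FPGA logic. The following is a minimal software sketch of that equivalence; the function names and bit-packing convention are illustrative assumptions, not the thesis's actual FPGA kernel.

```python
# Sketch (assumption, not the thesis's hardware design): a binary dot
# product via XNOR + popcount. Vectors in {-1, +1} are packed into an
# integer bitmask with the convention +1 -> bit 1, -1 -> bit 0.

def binarize(values):
    """Map real values to {-1, +1} via the sign function."""
    return [1 if v >= 0 else -1 for v in values]

def pack_bits(binary):
    """Pack a {-1, +1} vector into an integer (+1 -> 1, -1 -> 0)."""
    word = 0
    for i, b in enumerate(binary):
        if b == 1:
            word |= 1 << i
    return word

def xnor_popcount_dot(a_bits, w_bits, n):
    """Equals sum(a[i] * w[i]) for two packed {-1, +1} vectors of length n.
    XNOR marks positions where the signs agree; each agreement contributes
    +1 and each disagreement -1, so dot = 2 * popcount(xnor) - n."""
    mask = (1 << n) - 1
    agree = (~(a_bits ^ w_bits)) & mask
    return 2 * bin(agree).count("1") - n
```

On an FPGA the XNOR and popcount stages reduce to wide LUT logic and an adder tree, which is why this formulation both speeds up the convolution and cuts resource usage relative to full-precision multipliers.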
Keywords/Search Tags:convolutional neural networks, binary neural networks, FPGA, parallel architecture, image classification