
Software and Hardware Acceleration Design of Shift Convolutional Neural Networks

Posted on: 2020-05-28    Degree: Master    Type: Thesis
Country: China    Candidate: Z C Liu    Full Text: PDF
GTID: 2428330620460083    Subject: Electronic Science and Technology
Abstract/Summary:
In recent years, convolutional neural networks have made huge breakthroughs in the field of machine recognition, and in many of these areas they can even surpass human accuracy. However, this high accuracy comes at the cost of high computational complexity: a typical deep neural network has parameters on the order of millions, placing heavy demands on computing and storage resources, and the power consumption required to deploy such a network on embedded devices is difficult to satisfy. This thesis proposes a convolutional neural network based on shift operations. A weight training method based on powers of two is proposed, and the inference accuracy of the network is still maintained. Two FPGA-based shift convolutional neural network accelerators are designed. The first is a parameter-configurable neural network accelerator that achieves a throughput of 4.3 GOPS when configured as LeNet-5. The second is a general-purpose shift convolution accelerator based on the TVM compiler that achieves a throughput of 3.2 GOPS. Both accelerators greatly reduce DSP usage on the FPGA, saving approximately 20% of overall power. In addition, an ASIC prototype of the shift convolutional neural network is preliminarily established, and a shift convolution operation array based on shift-accumulate (SAC) units is designed. The energy efficiency of the core part reaches 2.3×10⁻¹¹ J/op, which is about 1/7 that of a conventional MAC array.
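The abstract does not detail the training or hardware design, but the core idea of a shift convolution is that weights constrained to signed powers of two let each multiplication be replaced by a bit shift. The Python sketch below illustrates this under that assumption; the function names (quantize_power_of_two, shift_dot) and the exponent range are hypothetical choices for illustration, not the thesis's actual method.

```python
import numpy as np

def quantize_power_of_two(w, min_exp=-7, max_exp=0):
    """Round each weight to the nearest signed power of two.

    Returns (sign, exponent) arrays so that w is approximated by sign * 2**exponent.
    Zero weights are encoded with sign 0.
    """
    sign = np.sign(w).astype(np.int8)
    mag = np.abs(w)
    exp = np.full(w.shape, min_exp, dtype=np.int8)
    nonzero = mag > 0
    exp[nonzero] = np.clip(np.round(np.log2(mag[nonzero])), min_exp, max_exp).astype(np.int8)
    return sign, exp

def shift_dot(x, sign, exp):
    """Dot product using shifts instead of multiplications.

    x is an integer activation vector; (sign, exp) encode power-of-two weights.
    Multiplying by 2**e becomes a left shift for e >= 0 or a right shift for e < 0.
    """
    acc = 0
    for xi, s, e in zip(x, sign, exp):
        if s == 0:
            continue  # pruned (zero) weight contributes nothing
        shifted = xi << e if e >= 0 else xi >> (-e)  # shift replaces the multiply
        acc += s * shifted
    return acc

# Toy usage: compare the shift-based result against the float reference.
w = np.array([0.52, -0.26, 0.125, 0.0])
x = np.array([64, 32, 16, 8], dtype=np.int32)
sign, exp = quantize_power_of_two(w)
print(shift_dot(x, sign, exp), float(np.dot(x, sign * (2.0 ** exp))))
```

On hardware, this is what makes a SAC (shift-accumulate) array cheaper than a MAC array: the shifter and adder replace the multiplier, which is also why the FPGA designs described above can largely avoid DSP blocks.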
Keywords/Search Tags: convolutional neural networks, FPGA, energy efficiency ratio, shift