
The Design Of Energy-Efficient DNN Accelerator Based On Hybrid Bit-Width Approximate Calculation

Posted on: 2019-11-16    Degree: Master    Type: Thesis
Country: China    Candidate: X Ruan    Full Text: PDF
GTID: 2428330590975460    Subject: Integrated circuit engineering
Abstract/Summary:
In recent years, Deep Neural Network (DNN) accelerator design has become a research hotspot in both academia and industry. As DNNs evolve toward more accurate prediction and more powerful functionality, their scale and computational complexity keep growing, which makes traditional DNN accelerators face bottlenecks in both memory access and computation. Starting from these challenges, this thesis designs and implements an energy-efficient DNN accelerator based on hybrid bit-width approximate calculation. Based on the algorithmic features of typical DNN models, the thesis discusses network compression methods, a hardware unit design scheme based on energy-efficient approximate calculation, and the basic structure and working mechanism of the DNN accelerator; it then analyzes the specific characteristics of the data flow and control flow in the accelerator system.

In terms of network compression, this thesis studies pruning, hierarchical quantization, hybrid-precision storage, and Huffman coding. Applied progressively, this series of compression schemes greatly reduces the size of a typical DNN model, effectively addressing the memory-access problem of the neural network. For approximate calculation, this thesis introduces an energy-efficient iterative logarithmic multiplier into the DNN accelerator system and couples it closely with the compression algorithm, which greatly simplifies its design and solves the computation problem in the accelerator. Finally, by studying the specific characteristics of data flow and control flow in the DNN accelerator system, this thesis proposes and implements an efficient network mapping scheme and a system scheduling scheme that eliminate redundant calculations and pipeline bubbles, achieving higher scheduling performance with lower hardware overhead.

Experiments show that the proposed network compression method reduces the memory footprint of a deep neural network by 7x-8x with negligible accuracy loss. The critical path of the designed DNN accelerator is 1.25 ns, and the post-layout area is about 4.34 mm^2. At a power consumption of 120 mW, the accelerator delivers 51.2 GOPS working directly on a compressed network, corresponding to 409.6 GOPS on an uncompressed network. Compared with the state-of-the-art architectures EIE and Thinker, this work achieves over 2.5x and 2.7x higher energy efficiency, respectively.
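The pruning and quantization stages of such a compression pipeline can be illustrated with a minimal sketch. This is not the thesis's exact scheme (the thesis uses hierarchical quantization and hybrid-precision storage, whose details are not given in the abstract); it shows the generic idea of magnitude pruning followed by symmetric uniform quantization, with the `sparsity` and `bits` parameters chosen for illustration only.

```python
import numpy as np

def prune_and_quantize(w, sparsity=0.75, bits=4):
    """Illustrative compression sketch: magnitude pruning, then
    symmetric uniform quantization of the surviving weights.

    Returns the quantized integer weights, the scale factor, and
    the pruning mask (needed to reconstruct sparse storage)."""
    # Prune: zero out all weights below the sparsity-quantile magnitude.
    thresh = np.quantile(np.abs(w), sparsity)
    mask = np.abs(w) >= thresh
    pruned = w * mask

    # Quantize: map surviving weights to signed (bits)-bit integers.
    scale = np.abs(pruned).max() / (2 ** (bits - 1) - 1)
    if scale == 0:
        scale = 1.0  # all-zero tensor edge case
    q = np.round(pruned / scale).astype(np.int8)
    return q, scale, mask
```

After this stage, the small alphabet of integer levels (here, -7..7) is what makes a subsequent entropy-coding pass effective.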
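The Huffman-coding stage assigns shorter bit strings to more frequent quantization levels. A standard heap-based construction (not taken from the thesis, which does not detail its coder) can be sketched as:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix-free Huffman code table {symbol: bitstring}
    for a sequence of (e.g. quantized-weight) symbols."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate alphabet needs at least one bit
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-break index, partial code table).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    idx = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, idx, merged))
        idx += 1
    return heap[0][2]
```

Because pruning makes zero by far the most common level, the zero symbol receives the shortest code, which is where most of the storage saving comes from.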
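The iterative logarithmic multiplier can be understood from Mitchell's approximation: writing a = 2^k1 + x1 and b = 2^k2 + x2, the product a*b = 2^(k1+k2) + x1*2^k2 + x2*2^k1 + x1*x2, and the cheap first approximation simply drops the x1*x2 term; each further iteration applies the same approximation to the residue product. The sketch below models this in software for unsigned integers (the thesis's hardware design, including sign handling and its coupling with the compression format, is not specified in the abstract):

```python
def ilm(a, b, iters=2):
    """Iterative logarithmic multiplier sketch (unsigned operands).

    Each iteration adds the exact terms of Mitchell's decomposition
    and defers only the residue product x1*x2 to the next round, so
    the result always under-approximates and converges to a*b."""
    if a == 0 or b == 0:
        return 0
    total = 0
    for _ in range(iters):
        k1, k2 = a.bit_length() - 1, b.bit_length() - 1  # leading-one positions
        x1, x2 = a - (1 << k1), b - (1 << k2)            # residues
        total += (1 << (k1 + k2)) + (x1 << k2) + (x2 << k1)
        a, b = x1, x2
        if a == 0 or b == 0:  # residue product is zero: result is exact
            break
    return total
```

For example, ilm(5, 7, iters=1) gives 32 against the exact 35, and a second iteration recovers the exact product; in hardware, truncating the iteration count is what trades accuracy for energy.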
Keywords/Search Tags: DNN, network compression, hybrid bit-width, approximate calculation, DNN accelerator