
Design And Implementation Of An Energy-Efficient Binary Neural Network Accelerator

Posted on: 2022-01-24
Degree: Master
Type: Thesis
Country: China
Candidate: S H Liang
Full Text: PDF
GTID: 2518306536476214
Subject: Electronic Science and Technology
Abstract/Summary:
Deep neural networks deliver excellent performance in many fields, such as machine vision and natural language processing, owing to their bionic characteristics. With the wave of AIoT, the demand for intelligent data processing with neural networks on edge devices is increasing day by day. However, traditional high-precision neural networks have high computational complexity and storage requirements, making them difficult to deploy on resource-constrained edge devices. In addition, general-purpose processors are not suitable for power-constrained edge computing because of their low energy efficiency during neural network inference. Therefore, the research and design of energy-efficient, lightweight neural network models and their dedicated hardware architectures is of great practical significance.

In view of the problems above, this thesis proposes a hardware architecture for a binary neural network accelerator with high energy efficiency, low power consumption, and low cost, which assists a general-purpose processor in completing the inference of binary neural networks. The thesis studies energy-efficient, lightweight binary neural networks, establishes an energy-efficient ensemble binary neural network model, and designs a binary neural network accelerator based on a data-stream mechanism. In addition, the accelerator is simulated and implemented on FPGA, and an embedded SoC verification system comprising an ARM processor and the binary neural network accelerator IP is built.

The main innovations of this thesis are as follows:

1) To meet the edge-side requirement for lightweight neural network models, the EBMLP-4-4 and EBCNN-7-4 lightweight neural network models are proposed using ensemble learning and binary quantization, which maintain accuracy while reducing the amount of computation and the number of parameters.

2) To address energy-efficient neural network computing, a hardware architecture for a data-stream-based binary neural network accelerator is designed, and an ensemble pipelined hardware architecture is proposed to realize energy-efficient computing of the ensemble binary neural network.

3) To address hardware-resource-efficient neural network computing, hardware micro-architectures such as a binary convolution module with a lookup-table structure and a feature-map buffer combining padding and caching are designed to ensure performance while reducing cost.

After simulation and testing, the accelerators show excellent performance and work normally and stably. The EBMLP-4-4 accelerator has an on-chip power consumption of 1.7 W, an energy efficiency of 2.9 TOPS/W, and a resource efficiency of 151 GOPS/kLUT. The EBCNN-7-4 accelerator has an on-chip power consumption of 8.3 W, an energy efficiency of 1.3 TOPS/W, and a resource efficiency of 88 GOPS/kLUT. Compared with some advanced similar designs, the accelerators designed in this thesis achieve higher energy efficiency and resource efficiency.
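The binary quantization referred to in the abstract typically constrains weights and activations to {-1, +1}, so a dot product reduces to an XNOR followed by a population count — the kind of operation a lookup-table binary convolution module maps naturally onto LUTs. The abstract does not give the exact scheme, so the following is only an illustrative sketch of the standard XNOR-popcount identity:

```python
import numpy as np

def binarize(x):
    # Map real values to {-1, +1}; ties at 0 go to +1 (an arbitrary convention).
    return np.where(x >= 0, 1, -1).astype(np.int8)

def xnor_popcount_dot(a_bits, w_bits):
    # a_bits, w_bits: {0, 1} arrays encoding -1 as 0 and +1 as 1.
    # XNOR marks positions where the two signs agree; popcount counts them.
    agree = int(np.sum(a_bits == w_bits))
    n = a_bits.size
    # Each agreement contributes +1 and each disagreement -1 to the dot product.
    return 2 * agree - n

# Check against a plain {-1, +1} dot product.
rng = np.random.default_rng(0)
a = binarize(rng.standard_normal(64))
w = binarize(rng.standard_normal(64))
assert xnor_popcount_dot((a > 0).astype(np.uint8),
                         (w > 0).astype(np.uint8)) == int(a.astype(int) @ w.astype(int))
```

In hardware this replaces multiply-accumulate units with bitwise logic and counters, which is the main source of the energy and LUT savings reported below.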
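The abstract does not specify how the ensemble members of EBMLP/EBCNN are combined. A common choice for an ensemble of classifiers is majority voting over each member's predicted class; the sketch below assumes that scheme purely for illustration:

```python
import numpy as np

def ensemble_predict(member_scores):
    # member_scores: (n_members, n_classes) class scores, one row per
    # binarized sub-network. Each member votes for its argmax class and
    # the class with the most votes wins (ties break toward the lower index).
    votes = np.argmax(np.asarray(member_scores), axis=1)
    return int(np.bincount(votes).argmax())

# Three hypothetical members over four classes: two of them vote for class 2.
scores = [[0.1, 0.2, 0.9, 0.0],
          [0.3, 0.1, 0.8, 0.2],
          [0.7, 0.1, 0.2, 0.5]]
print(ensemble_predict(scores))  # 2
```

Because each member is an independent binarized network, such an ensemble pipelines naturally in hardware — one member per pipeline stage or lane — which matches the "ensemble pipelined hardware architecture" of innovation 2).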
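The reported figures imply each accelerator's raw throughput, since energy efficiency (TOPS/W) multiplied by on-chip power (W) gives TOPS. A small check on the numbers quoted in the abstract:

```python
def implied_throughput_tops(energy_eff_tops_per_w: float, power_w: float) -> float:
    # Throughput implied by the reported energy efficiency and on-chip power.
    return energy_eff_tops_per_w * power_w

print(round(implied_throughput_tops(2.9, 1.7), 2))  # EBMLP-4-4: ~4.93 TOPS
print(round(implied_throughput_tops(1.3, 8.3), 2))  # EBCNN-7-4: ~10.79 TOPS
```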
Keywords/Search Tags: Deep Learning, Binary Neural Network, Ensemble Learning, Accelerator for AI Computing