
Research On A Deep Learning Neural Network Circuit Design Based On Floating Gate Transistor

Posted on: 2022-10-11
Degree: Master
Type: Thesis
Country: China
Candidate: H Z Xun
GTID: 2518306605469974
Subject: Master of Engineering

Abstract:
With the development of artificial intelligence, neuromorphic computation poses new challenges for hardware devices in terms of performance and energy consumption. The real bottleneck is not the algorithm but the hardware implementation, so developing specialized, efficient chips for deep learning is of great scientific value. To break the bottleneck of the conventional computer architecture, this thesis focuses on in-memory computing based on non-volatile memory cells. As a highly integrated, rewritable non-volatile memory, FLASH is widely used in commercial applications. The floating-gate (FG) transistor is the basic cell of FLASH memory and is far more mature than other emerging memories. Thanks to this technological maturity and high density, the FG transistor has the potential to implement deep neural networks (DNNs) as an analog synaptic device.

This design adopts an analog circuit methodology based on the FG cell structure of FLASH memory: the tunneling effect controls the number of electrons stored on the floating gate and thereby shifts the threshold voltage. On this basis, an analog crossbar-array classifier circuit for deep learning is built. The drain voltages applied to the data lines of the array serve as the input values, and the summed bit-line voltage serves as the output value. The change in threshold voltage of each FLASH cell represents the change of a weight value, yielding a memory array whose output conforms to the weighted product sum.

The main research work and innovations of this thesis are as follows:

(1) The current characteristics of the BSIM3 transistor model with variable threshold voltage are simulated in HSPICE, and the vector-matrix multiplication principle relating current and voltage is analyzed for the FG cell operating in the linear region. Based on the linear-region current characteristics of FG transistors, a synaptic circuit cell with good linearity is proposed.

(2) Based on the proposed FG synaptic cell, a single-crossbar-array deep neural network analog circuit with a feedback circuit is designed and built. The array uses a group of FG cells to represent one weight, performing multiplication through the linear-region current characteristics and addition through Kirchhoff's current law. With the new fully analog in-memory-computing crossbar array and its peripheral circuits, a forward-pass access time of 90 ns and an ultra-low energy of 17.25 nJ are achieved, a clear advantage over von Neumann architecture processors.

(3) Using Cadence Virtuoso and HSPICE, analog peripheral circuits are designed and simulated, including a voltage subtraction circuit, a ReLU activation-function circuit, and a voltage follower based on a differential amplifier. An analog 4-5-3 neural network circuit for Iris flower recognition and a three-layer 784-64-10 circuit based on 51,664 FG transistors for MNIST classification are proposed in a Cadence Virtuoso 180 nm process with a 1.8 V supply voltage. On this basis, circuit visualization is completed.

(4) Taking 0.9 V as the mathematical zero voltage, a fully analog, complete circuit schematic for DNN forward-inference computation is proposed. The function of the proposed neural network circuit is verified on the Iris flower dataset and the MNIST handwritten digit dataset. In a Python environment, training on the Iris dataset with stochastic gradient descent reaches 98% accuracy after 1100 rounds of training; in HSPICE, the model circuit is verified and matches the software accuracy. The extended array proposed in this thesis, based on NOR FLASH, is applied to the MNIST dataset: simulation of the single crossbar array of FG cells yields 94.8% hardware recognition accuracy against 97% software recognition accuracy.

In summary, the linear region of the FG transistor is used to realize multiplication in DNNs, and the hardware circuit in a 180 nm process shows clear advantages as an ultra-low-power accelerator.
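The crossbar computation described above can be sketched numerically. The following is a minimal model, not the thesis circuit: it assumes an idealized linear-region cell whose drain current is conductance times drain voltage, a differential pair of cell conductances per signed weight, and the 0.9 V zero reference; the function name `crossbar_vmm` and the exact readout correction are illustrative assumptions.

```python
import numpy as np

V_ZERO = 0.9  # the thesis takes 0.9 V as the mathematical zero voltage


def crossbar_vmm(x, g_pos, g_neg):
    """Idealized vector-matrix multiply on a differential FG crossbar.

    Each signed weight W[i, j] is represented by a pair of cell
    conductances g_pos[i, j] - g_neg[i, j].  Inputs are encoded as
    drain voltages offset from the 0.9 V reference; per-cell currents
    follow Ohm's law and sum on each bit line by Kirchhoff's current
    law.  The final term removes the constant offset contribution.
    """
    v = x + V_ZERO                 # encode signed inputs around the zero reference
    i_pos = v @ g_pos              # bit-line current sums, positive columns
    i_neg = v @ g_neg              # bit-line current sums, negative columns
    offset = V_ZERO * (g_pos.sum(axis=0) - g_neg.sum(axis=0))
    return (i_pos - i_neg) - offset
```

Splitting each weight into a non-negative pair is one common way to represent signed values with cells whose conductance cannot be negative; the subtraction would be performed by the voltage-subtraction peripheral circuit.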
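The software training flow from item (4) can be outlined in Python. This is a hedged sketch, not the thesis code: it uses synthetic 4-D cluster data as a stand-in for the real Iris measurements, full-batch gradient descent rather than the per-sample variant, and arbitrary initialization; only the 4-5-3 topology, ReLU activation, and 1100 training rounds follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for Iris: three Gaussian clusters in 4-D, 50 samples each.
X = np.concatenate([rng.normal(loc=c, scale=0.3, size=(50, 4))
                    for c in (0.0, 1.0, 2.0)])
y = np.repeat(np.arange(3), 50)

# 4-5-3 network matching the analog circuit topology.
W1 = rng.normal(0, 0.5, (4, 5)); b1 = np.zeros(5)
W2 = rng.normal(0, 0.5, (5, 3)); b2 = np.zeros(3)


def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)   # ReLU, as in the analog activation circuit
    return h, h @ W2 + b2

lr = 0.05
for epoch in range(1100):              # the thesis reports 98% after 1100 rounds
    h, logits = forward(X)
    # Softmax cross-entropy gradient w.r.t. logits.
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = p.copy(); grad[np.arange(len(y)), y] -= 1; grad /= len(y)
    # Backpropagate through the two layers.
    gW2 = h.T @ grad; gb2 = grad.sum(0)
    gh = grad @ W2.T; gh[h <= 0] = 0
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

acc = (forward(X)[1].argmax(1) == y).mean()
```

In the thesis flow, the trained weights would then be mapped to threshold-voltage shifts and programmed into the FG crossbar, after which the HSPICE circuit reproduces the software accuracy.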
Keywords/Search Tags:Floating Gate Transistor, Neuromorphic Calculation, Deep Neural Network, Processing in Memory, MNIST Dataset