Since the third wave of artificial intelligence (AI) development, high-performance deep learning models such as the deep belief network, AlexNet, ResNet, and VGG have been proposed in succession. While the accuracy of AI models keeps breaking records, the scale of the models is also growing day by day. The complicated computation and the huge amount of data make it difficult for the traditional von Neumann architecture to meet the needs of these models. To solve the "memory wall" and "power wall" problems caused by the separation of storage and computing, the new compute-in-memory (CIM) architecture has gained extensive attention. With the development of novel non-volatile memory (NVM), its excellent scalability, simple peripheral circuitry, and virtually zero leakage power have made NVM a competitive contender for future CIM architectures. Magnetic random access memory (MRAM) is a kind of NVM with high endurance, fast read/write speed, and low power consumption, as well as a relatively mature process technology and good scalability. Moreover, the voltage-controlled magnetic anisotropy (VCMA) effect of MRAM makes it particularly suitable for implementing on-chip neurons that emulate the characteristics of biological neurons and synapses. At the same time, the binary property of the magnetic tunnel junction (MTJ), the core component of MRAM, makes it appropriate for realizing the weight storage and operations of efficient binary convolutional neural networks. Therefore, the voltage-controlled MTJ enabled on-chip AI accelerator is studied in this thesis. The work mainly includes the following aspects:

(1) Targeting the VCMA effect of the MTJ device, a voltage-assisted spin-Hall effect (SHE) switching MTJ is proposed, together with a macrospin approximation model established on the basis of the LLGS equation. It is proved that the writing current density and pulse width in the heavy metal layer can be effectively reduced by a VCMA voltage with appropriate amplitude and pulse width. In addition, to account for the influence of thermal fluctuation on the MTJ's magnetization precession, a voltage-controlled stochastically switching SHE-MTJ model based on the s-LLGS equation is proposed, and a stochastic device with a sigmoid-type probabilistic switching curve is designed.

(2) Based on the voltage-controlled stochastic switching property of the designed SHE-MTJ model, a neuron capable of performing the Gibbs sampling operation of the restricted Boltzmann machine (RBM) is proposed. In the meantime, an RBM-specific NVM enabled CIM acceleration array is designed, which supports both positive and negative weight representations and is capable of on-chip training. The MNIST dataset is used to verify the feature extraction performance. The proposed on-chip training scheme shows good accuracy and excellent resistance to temperature variation, while the off-chip scheme greatly reduces the number of activated neurons by optimizing the training process and improves the accuracy to a certain extent.

(3) Based on the designed voltage-controlled stochastically switching SHE-MTJ model, a neuron capable of performing the XNOR-Net input binarization operation while introducing stochasticity is proposed, and a CIM acceleration scheme is designed for SqueezeNet. In this scheme, an MTJ crossbar array is used for weight storage in the modified SqueezeNet. The XNOR operation is performed through different configurations of MTJ states and inputs, and the sum of the XNOR results is accumulated through the resistance network to complete the binary convolution operation. The MTJ neurons are then sampled according to the computation results, making them closer to biological characteristics and avoiding network overfitting. The CIFAR-10 dataset is used to verify the performance of the proposed scheme, which maintains good classification accuracy with an extremely low parameter count and has advantages in the speed and power consumption of the CIM architecture.
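For reference, the macrospin dynamics underlying the model in (1) are presumably of the standard LLGS form; a common textbook formulation (the symbols here are conventional ones, not necessarily the thesis's exact notation) is

\[
\frac{d\mathbf{m}}{dt} = -\gamma \mu_0\, \mathbf{m}\times\mathbf{H}_{\mathrm{eff}} + \alpha\, \mathbf{m}\times\frac{d\mathbf{m}}{dt} + \frac{\gamma \hbar\, \theta_{\mathrm{SH}}\, J_{\mathrm{HM}}}{2 e \mu_0 M_s t_{\mathrm{FM}}}\, \mathbf{m}\times(\boldsymbol{\sigma}\times\mathbf{m}),
\]

where \(\mathbf{m}\) is the unit magnetization, \(\mathbf{H}_{\mathrm{eff}}\) the effective field (into which the VCMA effect enters as a voltage-dependent anisotropy contribution), \(\alpha\) the damping constant, \(\theta_{\mathrm{SH}}\) the spin-Hall angle, \(J_{\mathrm{HM}}\) the current density in the heavy metal layer, \(M_s\) the saturation magnetization, \(t_{\mathrm{FM}}\) the free-layer thickness, and \(\boldsymbol{\sigma}\) the spin polarization direction. The s-LLGS variant adds a zero-mean thermal fluctuation field \(\mathbf{H}_{\mathrm{th}}\) to \(\mathbf{H}_{\mathrm{eff}}\), commonly modeled as white noise with correlation

\[
\langle H_{\mathrm{th},i}(t)\, H_{\mathrm{th},j}(t') \rangle = \frac{2 \alpha k_B T}{\gamma \mu_0 M_s V}\, \delta_{ij}\, \delta(t-t'),
\]

which is what gives rise to the sigmoid-type probabilistic switching curve described above.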
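At a behavioral level, a stochastic neuron with a sigmoid switching probability can perform the RBM Gibbs sampling described in (2). The sketch below is a minimal software analogue, not the thesis's circuit: the device parameters `v0` and `sigma` and the function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mtj_neuron(activation, v0=0.0, sigma=1.0):
    """Behavioral model of the stochastic SHE-MTJ neuron: the switching
    probability follows a sigmoid of the activation-dependent voltage.
    v0 (threshold) and sigma (slope) are illustrative, not thesis values."""
    p_switch = 1.0 / (1.0 + np.exp(-(activation - v0) / sigma))
    return (rng.random(p_switch.shape) < p_switch).astype(np.float64)

def gibbs_step(v, W, b_h, b_v):
    """One Gibbs sampling step of an RBM, with both hidden and visible
    units sampled by the stochastic neuron instead of an ideal sigmoid."""
    h = mtj_neuron(v @ W + b_h)        # sample hidden given visible
    v_new = mtj_neuron(h @ W.T + b_v)  # sample visible given hidden
    return v_new, h

# Toy usage: 6 visible units, 4 hidden units, random weights.
W = rng.normal(0.0, 0.1, (6, 4))
v = rng.integers(0, 2, 6).astype(np.float64)
v, h = gibbs_step(v, W, np.zeros(4), np.zeros(6))
```

In hardware, the matrix-vector products `v @ W` would be computed in the CIM crossbar, and the neuron's intrinsic switching randomness replaces the explicit random-number comparison.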
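The binary convolution in (3) reduces each dot product of ±1-valued inputs and weights to an XNOR followed by a count of matching bits, which is what the MTJ crossbar accumulates as a resistance sum. A software equivalent of that identity (with 0/1 encoding -1/+1):

```python
import numpy as np

def xnor_dot(x_bits, w_bits):
    """Binary dot product via XNOR + popcount.
    x_bits, w_bits: 0/1 arrays encoding -1/+1 values.
    matches - mismatches = 2*popcount(XNOR) - N, which equals the
    ordinary dot product of the corresponding +/-1 vectors."""
    xnor = np.logical_not(np.logical_xor(x_bits, w_bits))
    matches = int(xnor.sum())
    return 2 * matches - x_bits.size

# Check against the direct +/-1 dot product.
x = np.array([1, 0, 1, 1, 0])
w = np.array([1, 1, 0, 1, 0])
ref = int(((2 * x - 1) * (2 * w - 1)).sum())
assert xnor_dot(x, w) == ref  # both give 1 here
```

In the crossbar implementation, each XNOR outcome is set by the combination of the MTJ state (weight) and the applied input, and the popcount corresponds to summing the resulting conductances along a column.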