
Neural Network Adaptation And Handwriting Recognition In Edge Computation Devices

Posted on: 2021-05-28  Degree: Master  Type: Thesis
Country: China  Candidate: W F Liu  Full Text: PDF
GTID: 2428330614965720  Subject: Electronic and communication engineering

Abstract/Summary:
With the increasing demand for sophisticated Artificial Intelligence (AI) and machine learning applications on edge computing devices, techniques that support micro machine learning are constantly emerging. Researchers originally held that deep learning algorithms are generally unsuitable for small samples, but it is now recognized that small-sample learning also has applicable scenarios, such as optimized machine learning (ML) that can be deployed in edge computing settings to realize edge intelligence. In an edge computing scenario, although a device can send the collected data to a data center for training and obtain deep learning models, it remains subject to many constraints, such as the device's memory during inference, the device's computing power, and communication delay. With a suitable choice of scale quantization scheme together with fixed-point and low-precision integer arithmetic, a compressed neural network model can approach the performance of its floating-point counterpart in edge applications. After discussing the basics of neural network scale compression and micro machine learning methods, this thesis focuses on neural network adaptation in edge computing devices, which is a form of micro machine learning and neural network scale compression.

Based on these ideas, the first work of this thesis proposes a distributed training architecture suitable for edge devices and an optimized design of feedforward neural networks with 8-bit quantization. This may facilitate miniaturized, fast distributed training and inference across multiple devices and reduce overall network latency. The second work builds a shallow convolutional neural network and compresses it using the quantization Application Programming Interface (API) of PyTorch, realizing 8-bit weight quantization for this case. Experimental results on the MNIST handwriting dataset under a CPU configuration show that the performance loss caused by the 8-bit quantization design is less than 1%. Since the quantization and compression of deep convolutional neural networks is a central topic in network-scale quantization and compression, the third work surveys the recent literature on deep convolutional neural network quantization. Finally, a compression case based on hybrid quantization is evaluated on the CIFAR-10 dataset; the experimental results under a GPU configuration likewise show that the performance loss caused by the quantization design is small.
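As a concrete illustration of the PyTorch quantization workflow referred to above, the following is a minimal sketch of post-training static 8-bit quantization for a shallow convolutional network. The network architecture, layer sizes, and calibration data here are placeholders chosen for illustration only; they are not the actual model or setup used in the thesis.

```python
import torch
import torch.nn as nn

# A small CNN for 28x28 grayscale inputs (MNIST-like); architecture is illustrative.
class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # fp32 -> int8 entry point
        self.conv = nn.Conv2d(1, 16, 3, padding=1)
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(2)
        self.fc = nn.Linear(16 * 14 * 14, 10)
        self.dequant = torch.quantization.DeQuantStub()  # int8 -> fp32 exit point

    def forward(self, x):
        x = self.quant(x)
        x = self.pool(self.relu(self.conv(x)))
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return self.dequant(x)

model = SmallCNN().eval()
# Default 8-bit qconfig for the x86 (fbgemm) CPU backend: weights and activations
# are mapped to int8 via an affine scheme, roughly q = round(x / scale) + zero_point.
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)          # insert observers

# Calibration pass with representative data (random tensors here; MNIST batches in practice).
with torch.no_grad():
    model(torch.randn(8, 1, 28, 28))

torch.quantization.convert(model, inplace=True)          # swap in int8 kernels
print(model)
```

After conversion, the Conv2d and Linear layers are replaced by quantized counterparts that store int8 weights and run integer arithmetic, which is what allows the compressed model to approach floating-point accuracy while reducing memory and compute cost on edge hardware.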
Keywords/Search Tags: micro machine learning, neural network, model compression, weight quantization, handwriting recognition