
Research On Deep Learning Framework And Its Basic Implementation

Posted on: 2019-08-19
Degree: Master
Type: Thesis
Country: China
Candidate: Z Sun
Full Text: PDF
GTID: 2428330548461898
Subject: Software engineering

Abstract/Summary:
As a popular branch of machine learning, deep neural networks have achieved remarkable results in computer vision, intelligent search, autonomous driving, pattern recognition, and other fields, and this rapid development is expected to continue as deep learning is applied more widely. In recent years, as deep learning models have grown more and more complex, general-purpose programming methods can no longer meet the demand: developers spend a great deal of time implementing basic algorithms by hand, which is an unnecessary waste of research effort. Many companies and research institutions want faster and more efficient ways to conduct deep learning research, and in response Caffe, TensorFlow, Torch, and many other deep learning frameworks have appeared. These frameworks not only automate complicated symbolic computation and low-level GPU acceleration, but also provide coarse-grained neural network modules that can be used directly, offering a convenient development model for both researchers and industry. Most frameworks also ship with commonly used deep learning models, so developers can study or modify existing network architectures directly.

This thesis presents the following exploration and practice around deep learning frameworks.

First, it briefly introduces deep learning and neural networks in different fields, the three mainstream deep learning frameworks, and CUDA, the technology most commonly used by these frameworks to accelerate training. After describing the structure of the perceptron and of neural networks, it explains the main features of the convolutional neural network and its algorithms.

The thesis then designs and implements a simple deep learning framework, SCNN. SCNN describes a neural network as a stack of "layers". Its basic data type is the "matrix", which stores the data and derivative information of the network as tensors. The layer is the basic module of SCNN: a layer can be a layer of a neural network or a single operation such as matrix multiplication. SCNN builds a neural network by adding layers, and multiple layers are assembled into a complete model by a NET, which handles operations such as initializing the network, reading data, training, and prediction. The thesis first presents the overall design of SCNN, then describes the design and implementation of the three basic modules: matrix, layer, and NET. It also designs a GPU module to accelerate network training, with particular attention to the convolution layer, the pooling layer, and the CUDA-based parallel optimization of the fully connected layer.

Finally, neural networks built with SCNN are used for algorithm and acceleration tests. The experiments have two parts. First, a convolutional neural network with two convolution layers and two pooling layers is built with SCNN and trained on the MNIST data set, reaching over 96% accuracy. Second, a parallel-efficiency test uses a fully connected network built with SCNN to evaluate the GPU parallel module; the GPU version achieves a speedup of more than five times. The experiments show that SCNN can implement CNN and DNN models and that its parallel modules reach acceptable speed. Although SCNN is still rough compared with mature deep learning frameworks, it already has most of the features of a deep learning framework, so the main objective has been met.
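To make the matrix/layer/NET structure concrete, the following is a minimal sketch in Python with NumPy of the design described above. It is illustrative only: the thesis does not give SCNN's actual interface, SCNN itself also targets GPU execution through CUDA, and all class and method names here (Matrix, Layer, FullyConnected, Net, add, forward, backward) are assumptions for exposition, not SCNN's real API.

    import numpy as np

    class Matrix:
        """Basic data type: stores values and the gradient accumulated in backprop."""
        def __init__(self, data):
            self.data = np.asarray(data, dtype=np.float32)
            self.grad = np.zeros_like(self.data)

    class Layer:
        """Basic module: either a network layer or a single operation such as matmul."""
        def forward(self, x):
            raise NotImplementedError
        def backward(self, grad_out):
            raise NotImplementedError

    class FullyConnected(Layer):
        """Example layer: y = x @ w + b, with gradients stored on the Matrix objects."""
        def __init__(self, in_dim, out_dim):
            self.w = Matrix(np.random.randn(in_dim, out_dim) * 0.01)
            self.b = Matrix(np.zeros(out_dim))
            self.x = None
        def forward(self, x):
            self.x = x
            return Matrix(x.data @ self.w.data + self.b.data)
        def backward(self, grad_out):
            self.w.grad += self.x.data.T @ grad_out
            self.b.grad += grad_out.sum(axis=0)
            return grad_out @ self.w.data.T   # gradient passed to the previous layer

    class Net:
        """The NET strings layers together and drives forward and backward passes."""
        def __init__(self):
            self.layers = []
        def add(self, layer):
            self.layers.append(layer)
        def forward(self, x):
            for layer in self.layers:
                x = layer.forward(x)
            return x
        def backward(self, grad_out):
            for layer in reversed(self.layers):
                grad_out = layer.backward(grad_out)

In this sketch a model is assembled exactly as the abstract describes: layers are added one by one to a Net, the Net runs the forward pass layer by layer, and the backward pass walks the layers in reverse, leaving gradients on each Matrix for a training step to consume.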
Keywords/Search Tags: GPU, Deep Learning, Deep Learning Framework, Convolutional Neural Network