
Research Of Deep Neural Networks Optimization Based On GPU

Posted on: 2016-12-17
Degree: Master
Type: Thesis
Country: China
Candidate: Y M Chen
Full Text: PDF
GTID: 2348330479954701
Subject: Computer technology
Abstract/Summary:
Recently, deep neural networks have achieved great success in the field of computer vision. However, because of the large number of training parameters and the large scale of the data sets, training a deep neural network is too time-consuming for the technique to be easily applied in other fields. Based on the characteristics of the GPU, this thesis proposes a novel look-up-table-based algorithm for computing convolutions that improves training efficiency.

The key to improving training efficiency is speeding up the convolution computation. Exploiting the features of CUDA (Compute Unified Device Architecture), all convolution kernels are first decomposed into small ones. Second, a novel look-up-table-based algorithm is proposed to accelerate convolutional networks with small filters on the GPU. By transforming the multiplications in the convolution computation into table-based summation operations, the overhead of the convolution computation is reduced substantially. Both building the table and looking it up are well suited to parallelization on the GPU. In addition, this thesis designs a GPU memory storage scheme tailored to the data and operations of each layer.

The optimization scheme is evaluated on three data sets: MNIST, CIFAR-10 and CALTECH-101. Experimental results show that the proposed approach improves the speed of the convolution computation by 20%-30% over existing state-of-the-art methods without loss of precision, demonstrating that the optimization scheme is practical and efficient.
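The abstract does not spell out how the look-up table replaces multiplications, so the following is only a minimal CPU sketch of one plausible reading: if the inputs are quantized to a small number of discrete levels, the product of every kernel weight with every possible input value can be precomputed into a table, and the convolution then reduces to table lookups followed by summation. The function names (`build_tables`, `lut_conv2d`), the 256-level quantization, and the single-channel 2-D layout are all illustrative assumptions, not the thesis's actual implementation (which targets the GPU with CUDA).

```python
import numpy as np

def build_tables(kernel, levels=256):
    """Precompute weight * value for every kernel weight and every
    possible quantized input value (0 .. levels-1).

    Returns an array of shape (k*k, levels); entry [r, v] is the
    product of the r-th kernel weight with input value v.
    """
    flat = kernel.ravel().astype(np.float32)
    values = np.arange(levels, dtype=np.float32)
    return np.outer(flat, values)

def lut_conv2d(image_q, kernel, levels=256):
    """Valid 2-D cross-correlation where every multiplication is
    replaced by a table lookup (illustrative sketch).

    image_q: 2-D array of integer indices in [0, levels).
    """
    kh, kw = kernel.shape
    tables = build_tables(kernel, levels)
    H, W = image_q.shape
    out = np.zeros((H - kh + 1, W - kw + 1), dtype=np.float32)
    rows = np.arange(kh * kw)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image_q[i:i + kh, j:j + kw].ravel()
            # Table-based summation instead of multiply-accumulate:
            # pick tables[r, patch[r]] for each weight r and sum.
            out[i, j] = tables[rows, patch].sum()
    return out
```

On a GPU, both the table construction and the per-pixel lookups are independent, which matches the abstract's claim that "creating the table and looking up the table" parallelize well; the table itself would be a natural candidate for shared or constant memory.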
Keywords/Search Tags: Deep neural network, Convolutional Neural Networks (CNN), Convolution Computation, GPU, Deep Learning