
Research On Parallelization Method For Deep Neural Networks

Posted on: 2019-04-06
Degree: Master
Type: Thesis
Country: China
Candidate: D S Rong
Full Text: PDF
GTID: 2428330548487383
Subject: Engineering

Abstract/Summary:
As an important branch of machine learning, deep neural networks (DNNs) are widely used in fields such as automatic speech recognition, image classification, and natural language processing. The convolutional neural network (CNN) is a classical deep neural network model; compared with ordinary neural network models, its local connectivity and weight sharing give it significant advantages. As data volumes grow, network models contain more parameters and become structurally more complex, so making model training more efficient has become a topic worth studying in the deep neural network field.

The computational workload of a convolutional neural network is concentrated in the convolution operations, and overfitting tends to occur during model training. Traditional approaches train neural networks with serial programs, which cannot fully exploit the computer's hardware and scale poorly. This thesis analyzes the standard convolutional neural network model and its training process and combines them with existing parallel frameworks. The main contributions are as follows:

(1) We optimize the convolutional neural network model by approximating convolution kernels as linear combinations of separable filters. To reduce or avoid overfitting, we apply data preprocessing, an improved Dropout algorithm, and parameter regularization.

(2) For a single-node environment, we propose a CNN parallelization method based on the OpenMP shared-memory multithreading model. For a multi-process environment, we propose a CNN parallelization method based on the MPI programming model.

(3) To address the poor scalability of the OpenMP model on high-performance clusters, we use MPI for inter-node, cross-process communication and propose a parallel CNN approach based on a hybrid MPI+OpenMP programming model.

Finally, experiments verify the feasibility of the optimized model and the speedup obtained from parallelization.
Keywords: DNN, CNN, Parallel computing, Parallel programming model