
Research On Optimization And Acceleration Method Of GPU-based Parallel Recurrent Neural Networks Model

Posted on: 2021-08-26
Degree: Master
Type: Thesis
Country: China
Candidate: L Peng
Full Text: PDF
GTID: 2518306470970339
Subject: Software engineering
Abstract/Summary:
Deep neural networks are widely used across industries, and recurrent neural networks in particular perform well in time series prediction and speech recognition. Models designed for real applications are complex: recurrent neural networks usually contain a large number of trainable parameters, and training a model that performs well takes a long time. How to improve the performance of deep learning methods on complex prediction problems is therefore a research hotspot in computational intelligence. Existing recurrent neural network prediction methods struggle with large-scale, computationally intensive problems, so an efficient and stable learning model is needed. To address this problem, graphics processing units (GPUs) are used to accelerate the optimization of learning models. This study proposes an integrated (ensemble) model on the GPU platform to optimize recurrent neural networks in parallel and to predict light curves.

First, the recurrent neural networks studied in this paper are analyzed, and a suitable network is selected as the sub-network of the integrated model according to its characteristics; the impact of various optimization algorithms on the convergence rate is then studied so that the amount of computation for training is reduced and the training speed is improved, optimizing the model without affecting its accuracy.

Second, the acquisition and processing of light curves are studied in depth; sampling, data augmentation, and feature engineering are used to bring the data in line with the requirements of the model and to construct more informative features that accelerate training.

Third, the sub-model networks are optimized and GPU utilization is improved: the high computational cost of recurrent neural networks is addressed from the perspectives of model parallelism and data parallelism, the training framework is optimized, and the sub-models are aggregated into an integrated model for parallel training.

Finally, experiments are conducted on the GPU platform with the designed model and optimization method to study the acceleration achieved by single-GPU and multi-GPU training; the parameters are then tuned and the ensemble method is applied to obtain the best sub-model and the best integrated model, respectively.

In summary, this paper takes astronomical light curves as the research object, uses recurrent neural network models as sub-learners, builds a multi-layer deep learning model, and aggregates it into an integrated model, which greatly improves training speed while still predicting stellar brightness. Methods for extracting time series features and the related optimization and training algorithms are studied through the design and implementation of GPU-based parallel recurrent neural networks. A series of experiments evaluates the performance of the parallel accelerated learning method using several model evaluation metrics. The results show that the model improves training efficiency without reducing prediction accuracy, demonstrating that the proposed method is reasonable and practical.
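As an illustration of the single-GPU/multi-GPU training scheme described above, the following is a minimal sketch in PyTorch. It is not the thesis author's implementation: the LSTM architecture, the window length of 50, the synthetic placeholder data, and the use of torch.nn.DataParallel are all assumptions chosen only to make the idea of data-parallel training of a light-curve sub-model concrete.

```python
# Minimal sketch (assumed, not the thesis implementation): an LSTM sub-model
# for light-curve time series prediction, trained with data parallelism
# across whatever GPUs are available via torch.nn.DataParallel.
import torch
import torch.nn as nn

class LightCurveLSTM(nn.Module):
    """Predicts the next brightness value from a window of past values."""
    def __init__(self, input_size=1, hidden_size=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):             # x: (batch, window, 1)
        out, _ = self.lstm(x)         # out: (batch, window, hidden)
        return self.head(out[:, -1])  # predict from the last time step

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LightCurveLSTM().to(device)
if torch.cuda.device_count() > 1:
    # Split each mini-batch across GPUs; gradients are averaged automatically.
    model = nn.DataParallel(model)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder data standing in for preprocessed light-curve windows:
# 1024 windows of 50 brightness samples each, with the next sample as target.
x = torch.randn(1024, 50, 1)
y = torch.randn(1024, 1)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(x, y), batch_size=128, shuffle=True)

for epoch in range(5):
    for xb, yb in loader:
        xb, yb = xb.to(device), yb.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
```

One common way to realize the aggregation step described in the abstract would be to train several such sub-models with different hyperparameters and average their predictions; the specific ensemble strategy used in the thesis is not detailed here.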
Keywords/Search Tags: Recurrent neural networks, GPU, time series prediction, optimization and acceleration