
Research On Deep Learning Syntax Extension And Compilation Method Of COStream Language

Posted on: 2021-05-21  Degree: Master  Type: Thesis
Country: China  Candidate: B Q Yu  Full Text: PDF
GTID: 2518306104988279  Subject: Computer application technology
Abstract/Summary:
Deep learning has achieved strong results in image recognition, natural language processing, and other fields. As the scale of deep learning models and training data grows, neural network training places ever higher demands on computing performance. COStream is a dataflow programming language that can make full use of multi-core computing resources, exploiting the parallelism exposed by the dataflow model to reduce training time. However, COStream's original syntax is poorly suited to describing neural network graphs.

To address this problem, a new structure called "sequential", together with its compilation process, is proposed as an extension of COStream's syntax. The sequential structure uses a linear sequence model to describe the structure of a feedforward neural network, and supports fully connected, convolutional, pooling, activation, and dropout layers. During parsing in the compiler, each layer declared in the sequential structure is unfolded into forward-propagation actors and back-propagation actors; dataflow edges then connect these computation actors to form the dataflow graph.

Because COStream uses phased pipeline parallelism, the data-dependent computation pattern of neural network training can introduce inconsistencies between parameters and data batches. A scheduling method of "synchronization within a group, asynchronization between groups" is proposed to resolve this.

The experiments use an X86-64 multi-core processor as the target platform and test the performance of fully connected and convolutional neural network programs generated by COStream with the sequential structure. The results show that the sequential structure effectively reduces both the source code size of neural network programs and their execution time on multi-core platforms.
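The unfolding step described above can be illustrated with a minimal sketch. This is not COStream code or its actual compiler API; the `Actor` class, `build_dataflow_graph` function, and layer names are all hypothetical, chosen only to show how a linear layer list might be expanded into forward- and back-propagation actors connected by dataflow edges:

```python
# Hypothetical sketch of the "sequential" unfolding idea: each declared
# layer becomes one forward-propagation actor and one back-propagation
# actor, chained by dataflow edges. Not actual COStream compiler code.
from dataclasses import dataclass, field

@dataclass
class Actor:
    name: str
    successors: list = field(default_factory=list)  # outgoing dataflow edges

def build_dataflow_graph(layers):
    """Unfold a linear layer sequence into a dataflow graph of actors."""
    forward = [Actor(f"{name}_fwd") for name in layers]
    backward = [Actor(f"{name}_bwd") for name in layers]

    # Forward chain: layer i feeds layer i+1.
    for a, b in zip(forward, forward[1:]):
        a.successors.append(b)

    # The last forward actor feeds the last backward actor,
    # where gradient propagation begins.
    forward[-1].successors.append(backward[-1])

    # Backward chain runs in reverse: layer i+1's gradients feed layer i.
    for a, b in zip(backward[::-1], backward[-2::-1]):
        a.successors.append(b)

    return forward + backward

# Usage: a small feedforward network declared as a layer list.
graph = build_dataflow_graph(["fc1", "relu", "dropout", "fc2"])
```

In this sketch the graph is a simple chain, matching the linear sequence model the abstract describes; a real compiler would additionally attach parameter-update actors and assign actors to pipeline stages.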
Keywords/Search Tags: Deep Learning, COStream compiler, Dataflow programming model, Pipeline parallelism, Data parallelism