
The Optimization Of Program Recognition Algorithm Based On Deep Learning

Posted on: 2019-08-15    Degree: Master    Type: Thesis
Country: China    Candidate: Z D Li    Full Text: PDF
GTID: 2428330572951515    Subject: Computer software and theory
Abstract/Summary:
Deep learning is an important branch of machine learning. Its main advantage lies in capturing highly complex data features and implementing complex nonlinear mappings, and it has become a mainstream machine learning method. At present, deep learning has achieved notable research results in fields such as natural language processing and computer vision; however, research on program language processing is still not mature. Program algorithm recognition is a hotspot in software engineering: by recognizing program functionality, it provides a way to evaluate algorithm behavior, program function, and system complexity, and it is of great significance for software module reuse, system maintenance, and improving software development efficiency. However, programming languages have rich and strict structural features and cannot be trained effectively with traditional natural language processing methods. At the same time, gradient diffusion and overfitting during training prevent the network model from extracting useful program structure features, so the algorithm recognition effect is not ideal. Therefore, optimizing the existing program recognition algorithm is very important for improving recognition performance.

This paper first introduces four activation functions commonly used in the unsupervised pre-training stage. By comparing the advantages and disadvantages of each function, and combining the strengths of the non-saturating rectified functions Softplus and ReLU, a segmented non-saturating rectified activation function, Softplus-ReLU, is constructed. The Stochastic Gradient Descent (SGD) algorithm is used to implement a program vector representation model based on the Softplus-ReLU function. The improved model was trained alongside network models based on the other four activation functions, and their convergence was compared. The results
show that the improved TCNN raises the recognition accuracy of the program algorithm to 95.8% and shortens the supervised training process by nearly one third. This completes the improvement of the pre-training method in the program algorithm recognition model.

Secondly, because of the model's high complexity and the presence of data noise during supervised training, the network inevitably suffers from overfitting. Therefore, this paper further optimizes the supervised training process of the TCNN model: a Dropout layer with a drop rate of 0.6 is added after the fully connected layer, and the generalization ability of the optimized model is verified experimentally. With the Dropout strategy, the gap between validation-set and training-set accuracy at training convergence is held to about 0.5%, compared with a gap of 3.6% before, so the degree of overfitting is significantly reduced.

Finally, program algorithm recognition is completed with the network model that combines the improved activation function and Dropout; the model's recall and F1 score are both above 97%. Comprehensive analysis shows that after optimizing both the unsupervised and supervised learning processes of the TCNN model, the recognition effect of the algorithm is greatly improved. Compared with existing program classification models, the improved model achieves good recognition performance on generalized program algorithms.
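The segmented Softplus-ReLU activation described above can be sketched as follows. The abstract does not give the exact piecewise definition, so the split point at zero, the identity (ReLU) branch for positive inputs, and the ln(2) shift that keeps the function continuous at zero are all assumptions of this sketch, not the thesis's stated formula:

```python
import math

def softplus(x):
    # Softplus: a smooth approximation of ReLU, ln(1 + e^x).
    return math.log1p(math.exp(x))

def softplus_relu(x):
    # Hypothetical segmented Softplus-ReLU activation:
    # the identity (ReLU) branch for x > 0 keeps positive gradients
    # unsaturated, while the shifted Softplus branch for x <= 0 retains a
    # small nonzero gradient, avoiding ReLU's "dead neuron" problem.
    # Subtracting ln(2) makes the function continuous at x = 0, since
    # softplus(0) = ln 2.
    if x > 0:
        return x
    return softplus(x) - math.log(2.0)
```

Under this construction the function matches ReLU exactly for positive inputs, equals zero at the origin, and decays smoothly toward -ln 2 for large negative inputs instead of clipping to zero.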
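The Dropout regularization used in the supervised stage can be sketched as a minimal, framework-free function; the thesis's actual TCNN layers are not specified in the abstract, so this only illustrates the standard inverted-dropout mechanism with the stated drop rate of 0.6:

```python
import random

def dropout(xs, p, training, rng=random):
    # Inverted dropout: during training, each activation is zeroed with
    # probability p, and survivors are scaled by 1 / (1 - p) so the
    # expected activation is unchanged; at inference the input passes
    # through untouched.
    if not training:
        return list(xs)
    keep = 1.0 - p
    return [x / keep if rng.random() < keep else 0.0 for x in xs]
```

With p = 0.6 as in the thesis, roughly 60% of the fully connected layer's activations are dropped on each training step, discouraging co-adaptation of units; this is the mechanism behind the reported narrowing of the train/validation accuracy gap from 3.6% to about 0.5%.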
Keywords/Search Tags:Program Algorithm Recognition, Deep Learning, Activation Function, Convolutional Neural Network, Fully Connected Layer