
Neural Network Structure Search Based On Reinforcement Learning And Parameter Sharing

Posted on: 2022-06-11
Degree: Master
Type: Thesis
Country: China
Candidate: B Xie
Full Text: PDF
GTID: 2518306521979929
Subject: Computer application technology
Abstract/Summary:
Deep learning is widely used in many fields, a success that owes much to the exquisite network structures designed by experts. With the development of technology and the growth of practical needs, people naturally hope that machines or algorithms can automatically generate excellent neural network structures. Automatic network design has therefore become one of the most popular directions in deep learning, and more and more researchers are entering the field of neural architecture search (NAS). Building on ENAS, one of the current mainstream NAS methods, this paper proposes a method for reducing the search space, which speeds up the ENAS search process and mitigates the bias problem caused by ENAS's parameter sharing. The effectiveness of the method is demonstrated by experiments, and excellent results are obtained on the CIFAR-10 [1] dataset.

Firstly, addressing the training process of the traditional ENAS model and the drawbacks of its overly wide search space, this paper proposes pre-training before the ENAS search to obtain prior information about the network structures suited to a specific dataset, and uses this prior to guide the settings of the subsequent NAS search, which greatly reduces the resource requirements and computational overhead of network training. Experiments show that this first step improves both the training of the super-network and the sub-network models found by the search.

Secondly, the model structure encodings and the corresponding accuracies obtained from pre-training are assembled into a training dataset for an artificial neural network (ANN), which is then trained to predict the accuracy of a candidate network structure and thus the optimal result. This prediction method reduces the scale and complexity of the pre-training.

Finally, addressing the problem that traditional super-network training generally pursues balance across sub-networks and ignores the useful information contained in previously trained sub-networks, this paper proposes that sub-networks with similar structures share parameters during the pre-training stage, which preserves training efficiency while avoiding training bias, and can correct the subsequent ENAS search. This paper also analyses conventional parameter-sharing methods in detail, discusses different parameter-sharing schemes for the super-network, and puts forward corresponding sharing methods that improve the effectiveness of the algorithm.
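The accuracy-predictor idea described above can be sketched as follows: architectures sampled during pre-training are encoded as fixed-length vectors, paired with their measured accuracies, and a small feed-forward network is fit to predict accuracy for unseen encodings. The one-hot encoding scheme, layer sizes, and training loop below are illustrative assumptions, not the thesis's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_architecture(ops, num_op_types=4):
    """One-hot encode a sequence of layer operation choices (assumed scheme)."""
    x = np.zeros(len(ops) * num_op_types)
    for i, op in enumerate(ops):
        x[i * num_op_types + op] = 1.0
    return x

class AccuracyPredictor:
    """Tiny two-layer MLP trained by plain gradient descent on an MSE loss."""

    def __init__(self, in_dim, hidden=16, lr=0.1):
        self.W1 = rng.normal(0, 0.5, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.5, (hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def predict(self, X):
        self._h = np.tanh(X @ self.W1 + self.b1)   # hidden activations, cached
        return (self._h @ self.W2 + self.b2).ravel()

    def train_step(self, X, y):
        pred = self.predict(X)
        err = (pred - y)[:, None]                   # dL/dpred (constant factor folded into lr)
        grad_W2 = self._h.T @ err / len(X)
        grad_b2 = err.mean(0)
        dh = err @ self.W2.T * (1 - self._h ** 2)   # backprop through tanh
        grad_W1 = X.T @ dh / len(X)
        grad_b1 = dh.mean(0)
        self.W2 -= self.lr * grad_W2; self.b2 -= self.lr * grad_b2
        self.W1 -= self.lr * grad_W1; self.b1 -= self.lr * grad_b1
        return float((err ** 2).mean())
```

Once trained on the pre-training pairs, such a predictor can score candidate encodings cheaply instead of training each sub-network, which is how it reduces the scale of the pre-training.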
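The parameter sharing discussed in the final step can be sketched in the ENAS style: each sub-network is a sequence of operation choices, and all sub-networks that pick the same operation at the same position reuse one shared weight tensor, so structurally similar sub-networks share most of their parameters. The tensor shapes and the position-wise notion of "similarity" here are illustrative assumptions, not the thesis's exact scheme.

```python
import numpy as np

class SharedParameterPool:
    """One shared weight matrix per (position, operation) pair in the super-network."""

    def __init__(self, num_positions, num_op_types, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.pool = {
            (pos, op): rng.normal(0, 0.1, (dim, dim))
            for pos in range(num_positions)
            for op in range(num_op_types)
        }

    def weights_for(self, arch):
        """Collect the shared weight tensors a given sub-network uses."""
        return [self.pool[(pos, op)] for pos, op in enumerate(arch)]

def shared_fraction(arch_a, arch_b):
    """Fraction of positions (hence weight tensors) two sub-networks share."""
    same = sum(a == b for a, b in zip(arch_a, arch_b))
    return same / len(arch_a)
```

Because updates made while training one sub-network land in the shared tensors, later sub-networks with similar structures start from partially trained weights, which is the efficiency gain (and the source of the bias) that the abstract refers to.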
Keywords/Search Tags: Neural Network Architecture Search, AutoML, ENAS, Search Space, Reinforcement Learning