With the expansion of deep learning into more and more fields, task scenarios with large-scale, high-performance requirements are increasingly common, and the design of deep learning models has become correspondingly complex. Designing a reasonable, high-performance deep learning model usually requires substantial human involvement, which implies a professional knowledge background, considerable manpower, and time, all of which limit the rapid development and deployment of deep learning. Neural network architecture search is the mainstream research direction for addressing this problem, but existing neural network architecture search algorithms still have shortcomings. This thesis therefore focuses on optimizing neural network architecture search, in particular on improving search efficiency and reducing search time. The main research contents of this thesis are as follows.

First, a graph neural network architecture search optimization algorithm based on a Monte Carlo search tree and a prediction model is proposed to address the low search efficiency and time-consuming evaluation caused by the overly large search space in graph neural network architecture search. The algorithm uses a Monte Carlo search tree to partition the search space in a tree structure, dividing it hierarchically into well-performing and poorly performing regions. The tree supplies the search strategy with potentially valuable sub-regions of the search space, helping the strategy avoid unnecessary exploration and thus improving search efficiency. The algorithm also uses a prediction model to accelerate evaluation. Experiments show that relying on the prediction model continuously for long periods degrades search performance, so an alternating update algorithm is proposed to preserve accelerated evaluation while reducing the impact of prediction errors
on the search strategy. In addition, the algorithm uses reinforcement learning as the search strategy. Traditional reinforcement-learning-based neural network architecture search algorithms update with the policy gradient algorithm, which suffers from low sample utilization and unstable training, so this thesis uses the proximal policy optimization (PPO) algorithm instead to further improve search efficiency and training stability.

Second, a neural network architecture search optimization algorithm based on prediction models and mixed batches is proposed to address the problem of using a reinforcement learning search strategy and a prediction model together efficiently. First, the impact of the prediction model on the search strategy is analyzed, and a general form of monotonic performance improvement is proposed. Then two errors of the prediction model, the generalization error and the policy error, are quantified and further analyzed, and two specific forms of monotonic performance improvement (Forms 1 and 2) are proposed in turn. Finally, an optimization algorithm based on a mixed-batch mechanism is proposed for the specific Form 2. In addition, an adaptive update algorithm is proposed to address the inflexibility of the fixed update schedule in the mixed-batch mechanism.

Finally, the proposed optimization models are compared experimentally with other state-of-the-art optimization algorithms on several datasets, and the results demonstrate their advantages. Ablation experiments are also conducted on each proposed optimization model, and the results confirm the soundness and effectiveness of every component.
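The tree-structured partition and the alternating surrogate update described for the first contribution can be sketched as follows. This is a toy illustration, not the thesis's implementation: the operator pool, `SurrogatePredictor`, and `true_score` are all assumed stand-ins, and the "tree" is reduced to repeatedly descending into the predicted-better half of the space.

```python
import random

# Hypothetical toy setup: an "architecture" is a tuple of layer choices,
# and true_score stands in for expensive training + validation.
OPS = ["gcn", "gat", "sage"]

def true_score(arch):
    # Stand-in for the real (costly) evaluation of a GNN architecture.
    return sum({"gcn": 0.5, "gat": 0.9, "sage": 0.7}[op] for op in arch) / len(arch)

class SurrogatePredictor:
    """Toy prediction model: scores an architecture by the mean observed
    score of architectures sharing its operators."""
    def __init__(self):
        self.history = []  # (arch, true score) pairs seen so far

    def fit(self, pairs):
        self.history.extend(pairs)

    def predict(self, arch):
        if not self.history:
            return 0.5
        def op_score(op):
            vals = [s for a, s in self.history if op in a]
            return sum(vals) / len(vals) if vals else 0.5
        return sum(op_score(op) for op in arch) / len(arch)

def partition(archs, predictor, depth=0, max_depth=2):
    """Hierarchically split a region into a predicted-good and a
    predicted-bad half, descending into the promising half, mimicking
    the tree-structured partition of the search space."""
    if depth == max_depth or len(archs) <= 2:
        return archs  # leaf region: candidates for real evaluation
    ranked = sorted(archs, key=predictor.predict, reverse=True)
    good = ranked[: len(ranked) // 2]
    return partition(good, predictor, depth + 1, max_depth)

random.seed(0)
space = [tuple(random.choice(OPS) for _ in range(3)) for _ in range(32)]

predictor = SurrogatePredictor()
# Alternating update: each round, a few architectures from the promising
# region are truly evaluated and fed back into the surrogate, so that
# prediction error does not accumulate over long periods of surrogate use.
for _ in range(3):
    region = partition(space, predictor)
    evaluated = [(a, true_score(a)) for a in region[:4]]
    predictor.fit(evaluated)

best = max(predictor.history, key=lambda p: p[1])
print(best)
```

The alternation here is the key point: the surrogate is never trusted indefinitely, because every round injects fresh ground-truth evaluations into its training history.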
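The mixed-batch mechanism for the second contribution, combined with PPO's clipped surrogate objective and an adaptive mixing rule, could look roughly like the sketch below. The batch composition, the error threshold, and the adaptation step are illustrative assumptions; only the clipped objective follows PPO's standard form.

```python
import random

def ppo_clipped_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate objective for one sample:
    min(r * A, clip(r, 1 - eps, 1 + eps) * A)."""
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)

def make_mixed_batch(true_evals, pred_evals, true_fraction, batch_size):
    """Compose an update batch: a true_fraction share of samples comes
    from real evaluation, the rest from the cheap prediction model."""
    n_true = max(1, round(batch_size * true_fraction))
    n_pred = batch_size - n_true
    return random.sample(true_evals, n_true) + random.sample(pred_evals, n_pred)

def adapt_fraction(true_fraction, pred_error, threshold=0.1, step=0.1):
    """Adaptive rule (an assumption, replacing a fixed schedule): rely
    more on real evaluations when the measured prediction error is high,
    and less when the surrogate is accurate."""
    if pred_error > threshold:
        return min(1.0, true_fraction + step)
    return max(0.1, true_fraction - step)

random.seed(1)
# (architecture id, reward) pairs; predicted rewards are noisier/cheaper.
true_evals = [("arch%d" % i, random.uniform(0.6, 0.9)) for i in range(20)]
pred_evals = [("arch%d" % i, random.uniform(0.4, 0.9)) for i in range(20, 200)]

frac = 0.5
batch = make_mixed_batch(true_evals, pred_evals, frac, batch_size=16)
# One PPO-style update over the mixed batch (ratios and the advantage
# baseline are toy values here, not a full policy-network update).
loss = -sum(ppo_clipped_objective(random.uniform(0.8, 1.2), score - 0.7)
            for _, score in batch) / len(batch)
frac = adapt_fraction(frac, pred_error=0.15)
print(len(batch), round(frac, 2))
```

The design intent mirrors the abstract: the prediction model supplies most of the (cheap) samples in each batch, while the adaptive rule shifts weight back to real evaluations whenever the surrogate's measured error grows.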