The Generative Adversarial Network (GAN) is a key technology for generating high-fidelity images. It has been widely used in image generation, style transfer, data augmentation, and other computer vision tasks, and has also achieved promising results in further areas such as multi-modal image synthesis. However, how well a GAN learns depends on the careful design of its network architecture, so researchers must spend a great deal of effort on architecture design and hyper-parameter fine-tuning, which is very time-consuming. The emergence of neural architecture search (NAS) has greatly reduced this human effort. However, GAN training is unstable and sensitive to the architecture design, and the search process is constrained by performance estimation; these problems create many obstacles to applying NAS to GANs. The purpose of this thesis is therefore to propose solutions based on the prototype algorithm (Auto GAN) and to improve the efficiency and results of the NAS algorithm. Furthermore, this thesis applies GANs to data augmentation, including high-fidelity image synthesis from small data samples, so that the research can be put into practice and improve the practical effectiveness of GANs.

To address the long performance-evaluation time of GAN architectures (mainly caused by numerical computation on large-scale images), this thesis proposes Distributed GAN, which uses the GAN's discriminator to evaluate the generated images directly and converts this value into a reward that guides the architecture search. Furthermore, at the search stage, progressive neural architecture search ignores the overall consistency between earlier and later local architectures. Distributed GAN therefore introduces a global architecture search method to enhance the integrity of all local architectures, together with a reinforcement-learning-based reward-shaping technique that provides a personalized reward for each local network to enhance the diversity of the local architectures. Experiments show that Distributed GAN is more than twice as efficient as Auto GAN: the search takes 0.8 days instead of 2 days, while the selected GANs achieve the same level of performance.

However, because GAN training is unstable, directly taking the discriminator's output as the reward is noisy and ineffective, which undermines the stability of the search process. Therefore, building on Distributed GAN, this thesis proposes Multi Self GAN, which uses Monte Carlo tree search to obtain the reward and thereby stabilizes the search process. In addition, although Distributed GAN uses reward shaping to distribute the reward values, these rewards are ultimately aggregated into a single controller for optimization, making it difficult to distinguish the contributions of the different local architectures. A multi-controller search scheme is therefore proposed, in which each controller is responsible for searching a different local architecture. Experiments show that Multi Self GAN not only maintains search efficiency but also improves search results and stability; that is, good GAN architectures can be found consistently across multiple searches. Compared with Auto GAN, Multi Self GAN reduces the distance between the generated images and the original dataset, e.g., by 1.34 on CIFAR-10.
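To make the search mechanism described above more concrete, the sketch below illustrates, purely as an assumption about tooling (PyTorch), how a discriminator's scores on generated samples could be turned into a reward for REINFORCE-style controllers, with one controller per local architecture as in the multi-controller scheme. All names (Controller, discriminator_reward, reinforce_step) and hyper-parameters are hypothetical and do not reproduce the thesis's actual implementation.

```python
# Minimal sketch (not the thesis implementation): the discriminator's score on
# generated images serves as a cheap proxy reward for RL controllers that
# sample candidate generator cells.
import torch
import torch.nn as nn

NUM_OPS = 5    # assumed number of candidate operations per cell
NUM_CELLS = 3  # assumed number of local (cell-level) architectures

class Controller(nn.Module):
    """One RL controller per local architecture (multi-controller scheme)."""
    def __init__(self, num_ops):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_ops))

    def sample(self):
        dist = torch.distributions.Categorical(logits=self.logits)
        op = dist.sample()
        return op, dist.log_prob(op)

def discriminator_reward(discriminator, generator, z_dim=128, n=64):
    """Convert the discriminator's mean score on generated samples into a reward."""
    with torch.no_grad():
        z = torch.randn(n, z_dim)
        fake = generator(z)
        # A higher discriminator output on fakes means the generator fools D better.
        return torch.sigmoid(discriminator(fake)).mean().item()

def reinforce_step(controllers, optimizers, rewards, log_probs, baseline):
    """REINFORCE update; each controller receives its own (shaped) reward."""
    for ctrl, opt, r, lp in zip(controllers, optimizers, rewards, log_probs):
        loss = -(r - baseline) * lp
        opt.zero_grad()
        loss.backward()
        opt.step()

# Usage sketch: one controller per local architecture. In the search loop, each
# controller samples an operation, the resulting generator is briefly trained,
# and discriminator_reward(...) supplies the per-controller reward for
# reinforce_step(...).
controllers = [Controller(NUM_OPS) for _ in range(NUM_CELLS)]
optimizers = [torch.optim.Adam(c.parameters(), lr=3e-4) for c in controllers]
```

In this reading, the discriminator score acts as an inexpensive stand-in for a full image-quality evaluation, which is what allows the search to avoid the costly performance estimation mentioned above.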
Finally, this thesis applies the NAS algorithm to image generation on small-sample datasets, with the aim of further improving the synthesis results. It proposes Auto Info GAN, which uses a two-stage search method and a dynamic restart scheme to avoid the overfitting caused by over-training on small samples. When retraining the optimal GAN found by the search, a contrastive loss function based on unsupervised learning is used to improve the GAN's learning process. Comparative experiments against the Fast GAN model on 11 datasets show that Auto Info GAN achieves state-of-the-art results.
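The abstract does not specify the exact form of the contrastive objective; the sketch below shows a generic InfoNCE-style loss on features of two augmented views of the same real images, as one plausible way to add unsupervised contrastive learning to the retraining stage. The function name, the feature extractor, and the weighting coefficient are assumptions, not the thesis's definition.

```python
# Minimal sketch (assumption, not the thesis's exact loss): an InfoNCE-style
# contrastive term on features of two augmented views of the same real images,
# added to the GAN loss when retraining on small datasets.
import torch
import torch.nn.functional as F

def info_nce(feat_a, feat_b, temperature=0.1):
    """feat_a, feat_b: (N, D) features of two views of the same N images."""
    a = F.normalize(feat_a, dim=1)
    b = F.normalize(feat_b, dim=1)
    logits = a @ b.t() / temperature          # (N, N) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    # Matching views (diagonal entries) are positives; all other pairs are negatives.
    return F.cross_entropy(logits, targets)

# Hypothetical usage inside a discriminator update:
#   feats_1 = feature_extractor(augment(real_images))
#   feats_2 = feature_extractor(augment(real_images))
#   d_loss = adversarial_loss + lambda_con * info_nce(feats_1, feats_2)
```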