
Optimization Method Of Generative Adversarial Networks Based On Tsallis

Posted on: 2024-07-19
Degree: Master
Type: Thesis
Country: China
Candidate: L Y Jiang
Full Text: PDF
GTID: 2568307064955709
Subject: Computer technology
Abstract/Summary:
With the development of big data technology and the growing popularity of intelligent applications, deep learning has been widely applied across many fields. As a representative deep learning model, the Generative Adversarial Network (GAN) has become one of the main research hotspots. However, its performance depends on the network structure and parameter settings, and the key to improving performance while also controlling model complexity lies in model optimization. GAN training still faces the following challenges: (1) high model complexity makes overfitting likely, which degrades generalization and leads to mode collapse (i.e., the model generates repetitive, near-identical samples); (2) training instability prevents the model from generating realistic samples, which reduces its fitting ability. To address these problems, this thesis takes improving generalization ability and fitting ability as its entry points and proposes corresponding optimization methods. The main contributions are as follows:

(1) A Tsallis entropy regularization method. A regularization term is introduced into the objective function of the GAN to address the problem of high model complexity. Since information entropy measures the uncertainty of information, introducing an entropy regularization term can reduce the uncertainty of the generated data distribution. However, the traditional Shannon entropy relies on logarithmic calculation, which can easily cause numerical overflow, so the Tsallis entropy, which takes an exponential form, is introduced instead. By converting the original logarithmic calculation into an exponential one, an optimization method for generalization ability based on a Tsallis entropy regularization term is obtained. Dirac-GAN is used to visualize convergence behavior, and the models with the best convergence on Dirac-GAN are then equipped with Tsallis entropy regularization to demonstrate its effectiveness. Experimental results on the CIFAR-10 dataset show a clear optimization effect for WGAN-GP and WGAN-div with Tsallis entropy regularization: the time cost of parameter search is reduced, and execution efficiency improves by 7.6% and 16.4%, respectively. The Inception Score (IS) and Fréchet Inception Distance (FID) also improve significantly, verifying that the proposed method reduces model complexity and helps the model converge better.
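For reference, the standard (discrete) Tsallis entropy is S_q(p) = (1 - sum_i p_i^q) / (q - 1), which recovers the Shannon entropy -sum_i p_i log p_i as q -> 1 while replacing the logarithm with a power of p_i. The abstract does not give the thesis's exact regularizer, so the sketch below is only a minimal illustration of how such a term might be attached to a generator loss; the entropy index q, the weight lam, and the choice of a categorical distribution probs derived from generated samples are all assumptions, not the thesis's formulation.

    import torch

    def tsallis_entropy(p: torch.Tensor, q: float = 2.0, eps: float = 1e-8) -> torch.Tensor:
        # Standard Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1).
        # p: batch of categorical distributions, shape (batch, num_classes),
        # rows summing to 1. As q -> 1 this reduces to Shannon entropy,
        # so the q == 1 case falls back to -sum p log p.
        if abs(q - 1.0) < 1e-6:
            return -(p * (p + eps).log()).sum(dim=-1)   # Shannon limit
        return (1.0 - (p ** q).sum(dim=-1)) / (q - 1.0)  # no logarithm involved

    # Hypothetical use inside a generator update: penalize the Tsallis entropy
    # of a distribution derived from generated samples. adv_loss, probs, and
    # lam are placeholders, not values from the thesis.
    # g_loss = adv_loss + lam * tsallis_entropy(probs, q=2.0).mean()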
(2) The Tsallis Generative Adversarial Network (TGAN). The root cause of training instability lies in a defect of the divergence used in the objective function to pull the real and generated distributions together. To solve this problem, a Tsallis divergence based on the Tsallis entropy is derived and introduced into the objective function to measure the distance between the real and fake distributions, yielding the TGAN model. By minimizing the Tsallis divergence, the two distributions are brought closer without causing mode collapse, and the gradient does not vanish because the objective function does not become constant. In addition, the gradient of the discriminator's loss function is further restricted to satisfy a Lipschitz continuity constraint, and a Lipschitz continuous regularization term is introduced so that more realistic samples are generated. To verify its effectiveness, the proposed method is compared with state-of-the-art GAN models on the CIFAR-10, STL-10, and CelebA datasets. The experimental results show that, compared with WGAN, TGAN improves the IS and FID scores by 0.6 and 6.5, respectively, which demonstrates that the proposed method effectively improves the model's fitting ability without causing mode collapse.
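For reference, one standard form of the Tsallis relative divergence between distributions P and R is D_q(P || R) = (sum_i p_i^q r_i^(1-q) - 1) / (q - 1), which recovers the KL divergence as q -> 1; the abstract does not state which exact form the thesis derives. The sketch below illustrates the Lipschitz regularization idea with a WGAN-GP-style gradient penalty, a common way of restricting the discriminator's gradient; the thesis's actual regularizer may differ, and discriminator, real, fake, and lam here are placeholders.

    import torch

    def gradient_penalty(discriminator, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
        # WGAN-GP-style Lipschitz regularizer: penalize deviations of the
        # discriminator's input-gradient norm from 1 on random interpolates
        # between real and fake samples.
        alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
        x_hat = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
        d_out = discriminator(x_hat)
        grads = torch.autograd.grad(d_out.sum(), x_hat, create_graph=True)[0]
        grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
        return ((grad_norm - 1.0) ** 2).mean()

    # Hypothetical discriminator loss with the penalty term; lam is a
    # placeholder weight, not a value from the thesis.
    # d_loss = fake_score.mean() - real_score.mean() + lam * gradient_penalty(D, real, fake)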
Keywords/Search Tags:Deep Learning, Generative Adversarial Network, Tsallis Entropy, Entropy Regularization, Tsallis Divergence