
Optimization Design For Deep Belief Network And Its Applications

Posted on: 2020-07-22    Degree: Doctor    Type: Dissertation
Country: China    Candidate: G M Wang    Full Text: PDF
GTID: 1368330623956533    Subject: Control Science and Engineering
Abstract/Summary:
Deep learning (DL) is essentially a class of training techniques for artificial neural networks (ANNs) with deep structures, designed to simulate the hierarchical mechanism by which the brain's neural system processes perceptual signals. The deep belief network (DBN) was proposed as a deep structure that simplifies the difficult logical reasoning of the logistic belief network, and it is now one of the most popular approaches to implementing DL. A DBN is constructed from stacked restricted Boltzmann machines (RBMs), and its training consists of two stages: unsupervised pre-training and supervised fine-tuning. Unsupervised pre-training trains the stacked RBMs with an unsupervised method, after which the backpropagation (BP) algorithm fine-tunes the whole DBN. This staged training has contributed to successes in training deep structures for image recognition, natural language processing, regression and prediction, and so on. However, the existing DBN has many limitations when it comes to complex data processing, such as time-consuming pre-training and low fine-tuning accuracy. Moreover, the structure of the existing DBN is usually determined by human experience and the amount of available training data, and it remains fixed once it is determined before training; the DBN then adjusts only its weight parameters to cope with the different dynamics of tasks. Designing a DBN whose structure and weight parameters both keep changing during training is therefore a development trend, and it remains an open, unsolved problem.

Research on structure-optimization design for the DBN aims not only to overcome time-consuming pre-training and low fine-tuning accuracy, but also to obtain an effective method for dynamic structure design. According to the design requirements, training a DBN with a dynamic structure likewise consists of two stages, and the structure keeps changing throughout the training process, so existing learning algorithms developed for fixed structures face many difficulties when applied to a variable structure. To overcome these problems, and based on a detailed analysis of current DBN research, the main contributions of this thesis are: designing an adaptive learning rate; building a novel supervised fine-tuning model for the DBN based on partial least squares regression (PLSR) and its sparse representation; studying a series of self-organizing structure methods, including growing and growing-pruning structures; deriving learning algorithms for the weight parameters during the adjustment of the dynamic structure; and analyzing in detail the convergence of the algorithms under a dynamic structure. Furthermore, the DBN is exploratively combined with reinforcement learning (RL), the generative adversarial network (GAN) and model predictive control (MPC) to form effective learning and optimization-control models. The main contents and innovations of this thesis are as follows:

(1) Improvements to unsupervised learning. Because the unsupervised learning of the DBN suffers from time-consuming training, an improved algorithm is designed. Since feature extraction in unsupervised learning and reconstruction of the original input are conducted simultaneously, the fixed learning rate of the contrastive divergence (CD) algorithm limits the speed of pre-training. Based on the difference between every two successive parameter-update directions in the CD algorithm, an adaptive learning rate is proposed that dynamically increases or decreases the learning-rate value. Such adaptive learning not only accelerates pre-training but also helps avoid local minima.
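The abstract does not spell out the exact update rule, so the following is only a minimal NumPy sketch under an explicit assumption: the learning rate of each weight is enlarged when two successive CD-1 update directions agree in sign and shrunk when they disagree. The function name, the factors 1.2 and 0.5, and the bias handling are illustrative choices, not the author's implementation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_adaptive_step(v0, W, b, c, lr, prev_grad, up=1.2, down=0.5):
    # Initialise with lr = np.full_like(W, 0.1) and prev_grad = np.zeros_like(W)
    # Positive phase: hidden activations driven by the data
    h0_prob = sigmoid(v0 @ W + c)
    h0 = (np.random.rand(*h0_prob.shape) < h0_prob).astype(float)
    # Negative phase: one Gibbs step gives the reconstruction
    v1_prob = sigmoid(h0 @ W.T + b)
    h1_prob = sigmoid(v1_prob @ W + c)
    # CD-1 approximation of the log-likelihood gradient w.r.t. W
    grad = (v0.T @ h0_prob - v1_prob.T @ h1_prob) / v0.shape[0]
    # Adaptive rule (assumed): grow the rate where successive update
    # directions agree in sign, shrink it where they oppose each other
    lr = np.where(np.sign(grad) == np.sign(prev_grad), lr * up, lr * down)
    W += lr * grad
    b += lr.mean() * (v0 - v1_prob).mean(axis=0)
    c += lr.mean() * (h0_prob - h1_prob).mean(axis=0)
    return W, b, c, lr, grad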
The experimental results show that this improvement is effective in raising both learning speed and feature-extraction efficiency.

(2) Improvements to supervised learning. To address the poor robustness and low accuracy of supervised learning, an adaptive sparse DBN with PLSR (AS-PLSR-DBN) is proposed to improve robustness and fine-tuning accuracy. First, an adaptive learning rate is designed to accelerate RBM training, and two regularization terms are introduced into this process to realize a sparse representation. Second, the initial weights derived from the AS-RBMs are further optimized through layer-by-layer PLSR modeling from the top layer to the bottom one. Third, the convergence and stability of the proposed method are analyzed. Finally, the method is tested on Mackey-Glass time-series prediction, two-dimensional function approximation and complex system identification. The simulation results show that it attains higher learning accuracy and faster learning speed, and that it builds a more robust model than the existing ones.

(3) Self-organizing deep belief network. Although the DBN has succeeded in many kinds of applications, its structure is almost always determined from human experience and remains fixed during training once it is determined. This fixed structure, similar to hyperparameter assignment, cannot satisfy the requirement of data diversity, which undermines the effective learning of the DBN. For the structure-design problem, a self-organizing method is proposed that is based on the spiking intensity of the hidden neurons and the descent rate of the error. During training, a neuron with a small spiking intensity is pruned, while a neuron with a large spiking intensity is split into two neurons (this neuron-level rule is sketched after item (4)); meanwhile, when the descent rate of the error increases with the training iterations, a new hidden layer is added, and when the descent rate of the error decreases for the first time, a hidden layer is pruned. Experimental results on nonlinear system modeling, effluent total phosphorus (TP) prediction in the wastewater treatment process (WWTP) and air-pollutant concentration prediction demonstrate the effectiveness of the proposed algorithm.

(4) Growing deep belief network based on transfer learning. To reduce the time-consuming training caused by the repeated weight initialization of a self-organizing structure, a growing DBN with transfer learning (TL-GDBN) is proposed. First, a DBN with a single hidden layer is initialized and pre-trained, and the pre-trained DBN is fixed and regarded as the knowledge source domain. Second, new hidden layers and neurons are added to the initial DBN and regarded as the target domain, and transfer learning is used to transfer knowledge from the source domain to the target domain to accelerate training, so the structure keeps growing until a stopping criterion is satisfied. Finally, the stopping criterion is designed according to the output error; once it is met, the structure size is fixed and TL-GDBN proceeds to supervised learning. TL-GDBN is tested on CATS missing time-series prediction and effluent TP concentration prediction in the WWTP, and the experimental results show that it achieves better modeling performance, faster learning speed and a more robust structure.
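As an illustration of the neuron-level rule in item (3), the sketch below takes the spiking intensity of a hidden neuron to be its mean activation probability over a batch (an assumption; the thesis defines its own measure), prunes weak neurons and splits highly active ones into two perturbed copies. The thresholds, the noise scale and the NumPy form are illustrative, not the author's implementation.

import numpy as np

def self_organize_hidden_layer(W, c, h_prob, low=0.05, high=0.95, noise=0.01):
    # Spiking intensity (assumed here): mean activation probability per hidden neuron
    intensity = h_prob.mean(axis=0)
    # Prune neurons whose intensity is too small
    keep = intensity > low
    W, c, intensity = W[:, keep], c[keep], intensity[keep]
    # Split neurons whose intensity is too large into two slightly perturbed copies
    split = intensity > high
    if split.any():
        W_new = W[:, split] + noise * np.random.randn(W.shape[0], int(split.sum()))
        W = np.hstack([W, W_new])
        c = np.concatenate([c, c[split]])
    return W, c  # the layer above must be resized to match the new hidden width

The layer-level decisions in items (3) and (4), adding or removing a whole hidden layer according to the descent rate of the error or an output-error stopping criterion, follow the same pattern at a coarser granularity.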
(5) Deep reinforcement learning network. According to existing analyses, feature perception is a powerful way to achieve knowledge proliferation, and decision-making is an effective way to recognize key features. Deep learning can obtain prior knowledge through its powerful unsupervised feature-extraction capability, while reinforcement learning, which interacts with the environment through a trial-and-error mechanism, maximizes a cumulative reward signal to learn the optimal decision-making strategy. To improve the recognition accuracy of handwritten digits in the MNIST dataset, an adaptive DBN with a Q-learning strategy (Q-ADBN) is proposed. First, Q-ADBN extracts the features of the original images using an adaptive deep auto-encoder (ADAE), and the extracted features are taken as the current states of the Q-learning algorithm. Second, Q-ADBN receives the Q-function (reward signal) while recognizing the current states, and the final handwritten-digit recognition is carried out by maximizing the Q-function with the Q-learning algorithm. Finally, experimental results on the well-known MNIST dataset show that Q-ADBN outperforms other similar methods in terms of accuracy and running time.

(6) Generative adversarial deep belief network based on an energy function. The generative adversarial network (GAN) has become a hot research topic in artificial intelligence and is attracting wide attention from scholars. To address the low efficiency of the generative model and the vanishing gradient of the discriminative model, a GAN based on an energy function (RE-GADBN) is proposed, in which the reconstruction error (RE) acts as the energy function. First, an adaptive deep belief network (ADBN) is presented as the generative model, which quickly learns the probability distribution of the given sample data and then generates new data with a similar distribution. Second, the reconstruction error of an adaptive deep auto-encoder (ADAE) acts as the energy function evaluating the performance of the discriminative model: the smaller the energy function, the closer the learning-optimization process of the GAN is to a Nash equilibrium, and vice versa. Meanwhile, the stability of the proposed model is analyzed using the inverse-inference method. Finally, simulation results on the MNIST and CIFAR-10 benchmark datasets show that, compared with several existing similar models, the proposed model achieves significant improvements in learning speed and data-generation capability.

(7) Model predictive control based on the deep belief network. During the structure-optimization design of the DBN, it is found that a DBN with a dynamic structure has a strong capability for nonlinear system modeling. Model predictive control (MPC) is a process-control method based on rolling optimization, and it relies on an accurate predictive model. A deep-learning model predictive control (DL-MPC) scheme is therefore exploratively proposed to model and control nonlinear systems. The proposed DL-MPC consists of a growing deep belief network (GDBN) and an optimal controller. First, the GDBN with transfer learning serves as the predictive model of the nonlinear system; this model can approximate the dynamics of the nonlinear system with a uniformly ultimately bounded error. Second, a quadratic optimization is developed to solve for the optimal controller, which yields an effective manipulated variable. Finally, the proposed approach is applied to modeling and controlling nonlinear dynamical systems, and the experimental results demonstrate that DL-MPC achieves satisfactory tracking and anti-disturbance performance.
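The rolling-optimization idea behind DL-MPC can be sketched as follows, with the GDBN predictor abstracted as a user-supplied callable predict and a simple quadratic tracking-plus-control-move cost; the horizon length, the weights q and r, the input bounds and the use of scipy.optimize.minimize are illustrative assumptions rather than the thesis's actual controller.

import numpy as np
from scipy.optimize import minimize

def mpc_step(predict, y_ref, u_prev, horizon=5, q=1.0, r=0.1, u_bounds=(-1.0, 1.0)):
    # predict(u_seq) stands in for the GDBN model: it must return the predicted
    # outputs over the horizon for a candidate control sequence of that length.
    def cost(u_seq):
        y_pred = predict(u_seq)                      # multi-step prediction from the learned model
        track = q * np.sum((y_ref - y_pred) ** 2)    # quadratic tracking-error term
        moves = np.diff(np.concatenate([[u_prev], u_seq]))
        return track + r * np.sum(moves ** 2)        # penalty on control moves
    u0 = np.full(horizon, u_prev)                    # warm start from the previous input
    res = minimize(cost, u0, bounds=[u_bounds] * horizon)
    return res.x[0]                                  # apply only the first control move

At run time only the first element of the optimized sequence is applied, the horizon is shifted one step forward, and the optimization is repeated, which is the receding-horizon principle of MPC.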
Keywords/Search Tags: Deep learning, Deep belief network, Unsupervised learning, Self-organizing structure, Transfer learning, Deep reinforcement learning, Deep model predictive control