
Research And Design On Optimization Algorithms For Complex-Valued Feedforward Neural Networks

Posted on: 2021-03-17
Degree: Master
Type: Thesis
Country: China
Candidate: S F Zhang
Full Text: PDF
GTID: 2428330605474754
Subject: Information and Communication Engineering
Abstract/Summary:
As one of the typical models for complex-domain problems, complex-valued feedforward neural networks have been widely studied and applied in many engineering fields. In research on these networks, training algorithms have always been an active topic. Complex-valued feedforward neural networks have structures analogous to their real-valued counterparts, and most of their training algorithms are generalized from the real domain. Among them, the complex-valued gradient descent algorithm is particularly common; however, it suffers from a slow convergence rate and a tendency to become trapped in local minima. Depending on the information-processing task, not only must the choice of parameter optimization algorithm be considered, but how to design and optimize the network structure is also crucial. To address these issues, this thesis focuses on parameter optimization algorithms and structure optimization algorithms for complex-valued feedforward neural networks, and proposes several improved algorithms.

Firstly, a first-order hybrid incremental algorithm combining gradient descent and least squares is proposed to train split complex-valued neural networks (see the first sketch after this abstract). Compared with the traditional complex gradient descent algorithm, the hybrid algorithm significantly reduces the number of parameters that must be adjusted during training, which effectively lowers the computational complexity and accelerates convergence. To determine the size of the network structure adaptively, new hidden neurons are added one by one through an incremental mechanism, allowing training to escape local minima of the cost function.

Secondly, the Levenberg-Marquardt (LM) algorithm is an efficient second-order optimization algorithm with a fast convergence rate. However, it must store the entire Jacobian matrix, which is impractical for large-scale data sets; handling such data sets therefore requires reducing the memory footprint. In this thesis, the traditional complex-valued LM algorithm is improved to train fully complex-valued neural networks. Instead of storing the entire Jacobian matrix, the proposed algorithm stores only one row of the Jacobian at a time for the subsequent multiplications, which improves training efficiency (see the second sketch below).

Finally, to guarantee the desired performance with a compact network structure, this thesis proposes a complex-valued second-order hybrid constructive algorithm, which uses the complex-valued LM algorithm to train the nonlinear parameters between the input and hidden layers, and the least squares algorithm to train the parameters between the hidden and output layers. When training falls into a local minimum, an incremental constructive mechanism adjusts the network structure in a timely manner, and the structure with the minimum error on the validation set is selected as the final structure (see the third sketch below). In addition, each new network is trained starting from the previously optimized parameters, which greatly reduces the computational cost and speeds up convergence.

The three improved algorithms proposed in this thesis are applied to practical problems such as real-valued classification, function approximation, and channel equalization. Experimental results show that the proposed algorithms achieve better performance than several previous algorithms.
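First sketch. The hybrid first-order scheme can be illustrated for a single-hidden-layer split complex-valued network: one gradient descent step updates the hidden-layer weights, while the output weights are recomputed exactly by a complex least-squares solve, so only the hidden-layer weights remain as trainable parameters. The network shape, the split tanh activation, and all function names are assumptions made for this sketch; the thesis does not publish its implementation.

    import numpy as np

    def split_tanh(z):
        # Split-type activation: tanh applied to real and imaginary parts separately.
        return np.tanh(z.real) + 1j * np.tanh(z.imag)

    def hybrid_step(W, X, T, lr=0.01):
        """One step of the hybrid scheme (hypothetical sketch).

        W : (n_hidden, n_in) complex hidden-layer weights, updated by gradient descent.
        X : (n_samples, n_in) complex inputs; T : (n_samples, n_out) complex targets.
        The output weights V are not stored as trainable parameters: they are
        recomputed by a complex least-squares solve, which is where the reduction
        in adjustable parameters comes from.
        """
        A = X @ W.T                                # hidden pre-activations
        H = split_tanh(A)                          # hidden outputs
        V, *_ = np.linalg.lstsq(H, T, rcond=None)  # output weights by least squares
        E = H @ V - T                              # output residual
        D = E @ V.conj().T                         # error propagated back to H
        Gr = D.real * (1.0 - H.real ** 2)          # grad w.r.t. Re(A); tanh' = 1 - tanh^2
        Gi = D.imag * (1.0 - H.imag ** 2)          # grad w.r.t. Im(A)
        dWr = Gr.T @ X.real + Gi.T @ X.imag        # real part of the gradient
        dWi = Gi.T @ X.real - Gr.T @ X.imag        # imaginary part of the gradient
        W = W - lr * (dWr + 1j * dWi)              # gradient descent on W only
        return W, V, 0.5 * np.linalg.norm(E) ** 2

Because the least-squares solve makes the output layer optimal for the current hidden layer at every step, the gradient search is confined to the hidden-layer weights alone.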
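Second sketch. The memory saving in the improved complex-valued LM algorithm comes from never materializing the full Jacobian: the Gauss-Newton matrix J^H J and the gradient J^H e can be accumulated one Jacobian row at a time. Below is a minimal sketch of this accumulation, assuming a user-supplied jacobian_row(params, i) callback; that interface is hypothetical, not taken from the thesis.

    import numpy as np

    def lm_update(params, residuals, jacobian_row, damping=1e-2):
        """One complex-valued Levenberg-Marquardt update without storing J.

        params       : (n_params,) complex parameter vector.
        residuals    : (n_samples,) complex residual vector e.
        jacobian_row : callable returning the (n_params,) complex Jacobian row
                       for sample i (hypothetical interface for this sketch).
        Accumulates A = J^H J and g = J^H e row by row, so memory is
        O(n_params^2) instead of O(n_samples * n_params).
        """
        n = params.size
        A = np.zeros((n, n), dtype=complex)
        g = np.zeros(n, dtype=complex)
        for i, e_i in enumerate(residuals):
            j_i = jacobian_row(params, i)      # one row of the Jacobian at a time
            A += np.outer(j_i.conj(), j_i)     # rank-1 update of J^H J
            g += j_i.conj() * e_i              # accumulate J^H e
        # Damped Gauss-Newton (LM) step: (J^H J + mu*I) dp = -J^H e
        dp = np.linalg.solve(A + damping * np.eye(n), -g)
        return params + dp

Since A is Hermitian positive semidefinite, the solve could also use a Cholesky factorization; the damping term plays the usual LM role of interpolating between Gauss-Newton and gradient descent.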
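Third sketch. The constructive mechanism of the third algorithm can be sketched as a growth loop that adds one hidden neuron at a time, keeps all previously obtained hidden weights, refits the output weights, and returns the structure with the lowest validation error. To keep the sketch self-contained and runnable, the LM refinement of the hidden weights is deliberately replaced here by random initialization (a random-feature simplification, not the thesis's method); only the constructive selection logic is illustrated.

    import numpy as np

    def constructive_training(X, T, X_val, T_val, max_hidden=30, seed=0):
        """Incremental constructive growth (hypothetical, random-feature variant).

        Hidden weights are appended one neuron at a time and kept afterwards,
        output weights are refit by complex least squares, and the structure
        with the lowest validation error is returned.
        """
        rng = np.random.default_rng(seed)
        act = lambda z: np.tanh(z.real) + 1j * np.tanh(z.imag)  # split activation
        W = np.empty((0, X.shape[1]), dtype=complex)            # hidden weights
        best = (float("inf"), None, None)
        while W.shape[0] < max_hidden:
            # Add one hidden neuron; previously obtained rows of W are reused.
            w_new = rng.standard_normal(X.shape[1]) + 1j * rng.standard_normal(X.shape[1])
            W = np.vstack([W, w_new])
            H, H_val = act(X @ W.T), act(X_val @ W.T)
            V, *_ = np.linalg.lstsq(H, T, rcond=None)           # refit output weights
            err = np.linalg.norm(H_val @ V - T_val) ** 2        # validation error
            if err < best[0]:
                best = (err, W.copy(), V)
        _, W_best, V_best = best
        return W_best, V_best

Warm-starting each larger network from the previous weights, as the loop does by keeping the existing rows of W, is what lets the method grow the structure without retraining from scratch.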
Keywords/Search Tags: Complex-valued neural network, Complex-valued LM algorithm, Complex-valued least squares algorithm, Incremental mechanism, Optimization