
Study Of Learning Algorithms Of Complex-Valued Feedforward Neural Networks With Applications

Posted on: 2019-04-15    Degree: Master    Type: Thesis
Country: China    Candidate: R R Wu    Full Text: PDF
GTID: 2428330545471736    Subject: Information and Communication Engineering
Abstract/Summary:
Complex-valued feedforward neural networks have been widely applied in many fields owing to their strong computing power and good generalization capability. When designing such networks, the activation functions should ideally be both bounded and analytic. However, by Liouville's theorem, a function satisfying both conditions over the entire complex plane must be a constant. There are two ways to deal with this issue. The first is to split a complex number into two parts, either real and imaginary or amplitude and phase, and process each part separately with a real-valued activation function that is bounded and differentiable in the real domain; however, splitting the complex number loses some information. The second is to adopt fully complex-valued activation functions, which have singularities in the complex domain, so measures must be taken to avoid these singularities during training. Considering these issues, this thesis studies three kinds of complex-valued feedforward neural networks: real-and-imaginary type, amplitude-and-phase type, and fully complex-valued networks. The first two are the so-called split complex-valued neural networks, each suited to different applications.

The backpropagation-based stochastic gradient descent algorithm is the most common method for training complex-valued feedforward neural networks, but it suffers from drawbacks such as slow convergence and a tendency to become trapped in local minima. To address these issues, this thesis proposes an L-BFGS-based algorithm in the complex domain and combines it with the backpropagation method to train complex-valued neural networks. This algorithm accelerates convergence during training and, owing to the limited-memory structure of L-BFGS, requires considerably less memory, so the efficiency of the designed complex-valued neural networks can be further improved.

Beyond the learning algorithm, many other factors contribute to network training, and suitable optimization methods are adopted for each structure. For real-and-imaginary type networks, a new method for adjusting the gain parameters of the activation functions is proposed to reduce the effect of their saturation regions; in addition, empirical guidelines are given for selecting proper initial values of the networks, with which the global minimum can be reached faster. For amplitude-and-phase type networks, a regularization term is added to the loss function to improve the generalization capability of the network. For fully complex-valued networks, a complex-valued learning step is used to accelerate convergence.

The three types of complex-valued feedforward neural networks studied in this thesis are evaluated respectively on real-valued classification problems, on coherent wave signal processing and complex-valued function approximation problems, and on simulating a nonlinear channel equalizer. The experimental results show that the L-BFGS-based algorithm, combined with these optimization methods, enables complex-valued neural networks to handle information efficiently with comparatively little memory.
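As an illustration of the split approach described above, the following minimal NumPy sketch applies a real-valued tanh separately to the real and imaginary parts of a complex signal, which is the idea behind the real-and-imaginary type networks; the function and variable names are hypothetical and not taken from the thesis:

```python
import numpy as np

def split_tanh(z):
    """Split-type activation: apply the real-valued tanh to the real and
    imaginary parts of z separately, so each part stays bounded in (-1, 1).
    This sidesteps Liouville's theorem, at the cost of treating the two
    parts independently."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def layer_forward(z, W, b):
    """One layer of a split complex-valued feedforward network:
    a complex affine map followed by the split activation."""
    return split_tanh(W @ z + b)

# Tiny example: a layer mapping 2 complex inputs to 3 complex outputs.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
b = rng.standard_normal(3) + 1j * rng.standard_normal(3)
z = np.array([1.0 + 0.5j, -0.3 + 2.0j])
out = layer_forward(z, W, b)
```

Both the real and imaginary parts of every output are bounded by tanh, which is the property the split construction is designed to guarantee.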
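The thesis's complex-domain L-BFGS algorithm is not reproduced here; as a rough stand-in, the sketch below packs complex parameters into a real vector and applies SciPy's real-valued L-BFGS implementation to a toy complex least-squares problem. All names are hypothetical, and the gradient expression follows the standard real-imaginary decomposition:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: recover complex weights w_true from y = X @ w_true.
# Complex parameters are stored as a real vector [Re(w); Im(w)] so a
# real-valued L-BFGS routine can optimize them.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 4)) + 1j * rng.standard_normal((20, 4))
w_true = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = X @ w_true

def unpack(p):
    """Rebuild the complex parameter vector from its packed real form."""
    n = p.size // 2
    return p[:n] + 1j * p[n:]

def loss(p):
    """Squared error |X w - y|^2, a real-valued function of the packed p."""
    r = X @ unpack(p) - y
    return float(np.real(np.conj(r) @ r))

def grad(p):
    """Gradient of the loss w.r.t. the packed real parameters:
    d/dRe(w) = 2 Re(X^H r), d/dIm(w) = 2 Im(X^H r)."""
    r = X @ unpack(p) - y
    g = X.conj().T @ r
    return np.concatenate([2 * g.real, 2 * g.imag])

res = minimize(loss, np.zeros(8), jac=grad, method="L-BFGS-B")
w_hat = unpack(res.x)
```

Because the problem is a convex quadratic in the packed real parameters, L-BFGS converges in a handful of iterations while storing only a short history of curvature pairs, which is the low-memory property the abstract attributes to the proposed algorithm.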
Keywords/Search Tags: Complex-valued feedforward neural networks, L-BFGS algorithm, Gain parameters, Classification, Signal processing