
Learning Algorithms Of Complex-valued Neural Network Based On C-R Differential Operator

Posted on: 2017-06-07
Degree: Master
Type: Thesis
Country: China
Candidate: J Dong
Full Text: PDF
GTID: 2310330518972312
Subject: Applied Mathematics
Abstract/Summary:
The complex-valued neural network (CVNN) has been widely studied and applied in recent years, and it has great application value in engineering, biology, and physics. According to the type of activation function, complex-valued neural networks are divided into split complex-valued neural networks and fully complex-valued neural networks. A split CVNN uses a split complex-valued activation function: although the outputs of its neurons are bounded, the activation function does not satisfy the Cauchy-Riemann conditions and therefore has no complex derivative. A fully complex-valued neural network uses an analytic activation function, which guarantees that the activation function is complex-differentiable and convenient to compute.

In practical applications, nonlinear optimization problems involving real-valued functions of complex variables are frequently encountered. Optimizing such problems usually requires a first- or second-order approximation of the objective function to generate a new descent direction. However, this approach cannot be applied directly to real-valued functions of complex variables: such functions are non-analytic in their complex arguments, so they admit no ordinary complex Taylor series expansion. To overcome this difficulty, the objective function is usually rewritten as a function of the real and imaginary parts of the complex variables, so that standard real-valued optimization algorithms can be applied. However, this approach may neglect the internal structure of the complex data. We therefore aim to generalize the treatment of the objective function.
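To make the C-R (Wirtinger) operator concrete, the following minimal sketch (not taken from the thesis; the function `wirtinger_derivs` and the test function are illustrative choices) numerically evaluates the two C-R derivatives of a real-valued, non-analytic function f(z) = |z|^2 from their standard definitions, ∂/∂z = (∂/∂x − i ∂/∂y)/2 and ∂/∂z̄ = (∂/∂x + i ∂/∂y)/2, assuming NumPy is available:

```python
import numpy as np

def wirtinger_derivs(f, z, h=1e-6):
    """Numerical C-R (Wirtinger) derivatives of f at the complex point z.

    Uses central differences along the real axis (x) and imaginary axis (y),
    then combines them via the C-R operator definitions:
        d/dz    = (d/dx - 1j * d/dy) / 2
        d/dz*   = (d/dx + 1j * d/dy) / 2
    """
    dfdx = (f(z + h) - f(z - h)) / (2.0 * h)          # partial w.r.t. x
    dfdy = (f(z + 1j * h) - f(z - 1j * h)) / (2.0 * h)  # partial w.r.t. y
    d_dz = 0.5 * (dfdx - 1j * dfdy)
    d_dzbar = 0.5 * (dfdx + 1j * dfdy)
    return d_dz, d_dzbar

# f(z) = |z|^2 = z * conj(z): real-valued, hence non-analytic in z,
# but analytically d f / dz = conj(z) and d f / d(conj z) = z.
f = lambda z: (z * np.conj(z)).real
z0 = 1.0 + 2.0j
dz, dzbar = wirtinger_derivs(f, z0)
```

For this f the numerical results match the closed forms conj(z0) and z0, illustrating why treating z and z̄ as a pair of independent quantities recovers a usable calculus for non-analytic functions.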
Using the C-R differential operator, we can obtain a Taylor series expansion of a non-analytic function by treating the complex variable and its conjugate as a pair of independent quantities in the analysis; real-valued optimization algorithms can thereby be generalized to the complex domain.

In this thesis, the unconstrained optimization of real-valued functions of complex variables is studied, and the resulting optimization algorithms are applied to CVNNs. Based on the C-R differential operator, we give a comprehensive analysis of the properties of the complex derivatives and of the Hessian matrix, and we derive the Taylor expansion of a non-analytic function. From this, the iterative formulas of the optimization algorithms are derived, and the convergence of the algorithms is proved.

Three classes of complex-valued learning algorithms based on unconstrained optimization are studied. First, the complex-valued Newton method is introduced: using the C-R operator, the Schwartz symmetry restriction on the activation function of the complex-valued neural network is removed, and the convergence of the complex-valued Newton method is proved. Next, using the mapping between complex variables and real variables, the quasi-Newton equation and the update formula of the DFP algorithm are obtained, and the feasibility and convergence of the DFP algorithm are proved. Finally, by defining a notion of conjugacy between vectors with respect to a matrix, the update formula of the complex-valued conjugate gradient method is obtained, and the convergence of the algorithm is analyzed. In the simulations, the algorithms are applied to complex-valued neural networks, and their convergence behavior is demonstrated by comparison with the existing complex-valued gradient descent method.
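As a small illustration of the baseline the thesis compares against, the following sketch (illustrative only; the single-weight model, step size `mu`, and data are assumptions, not taken from the thesis) implements complex-valued gradient descent for a one-weight linear neuron with error E(w) = mean|d − w·x|^2. The steepest-descent direction is −∂E/∂w̄, and here ∂E/∂w̄ = −mean(e·conj(x)) with e = d − w·x:

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = 0.8 - 0.3j                     # target complex weight (assumed)
x = rng.standard_normal(50) + 1j * rng.standard_normal(50)  # complex inputs
d = w_true * x                          # desired outputs

w = 0.0 + 0.0j
mu = 0.05                               # step size (assumed)
for _ in range(200):
    e = d - w * x                       # residual
    # Steepest-descent update: w <- w - mu * dE/d(conj w),
    # where dE/d(conj w) = -mean(e * conj(x)) by the C-R calculus.
    w = w + mu * np.mean(e * np.conj(x))
```

After the loop, `w` has converged to `w_true`; the Newton, DFP, and conjugate gradient methods studied in the thesis replace this fixed-step gradient direction with curvature-informed updates to accelerate exactly this kind of iteration.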
Keywords/Search Tags: Complex-valued neural network, C-R differential operator, Newton method, DFP algorithm, Conjugate gradient method