
Learning Algorithms Of Complex-Valued Neural Networks With Logarithmic Performance Index

Posted on: 2018-08-27
Degree: Master
Type: Thesis
Country: China
Candidate: X Y Xu
Full Text: PDF
GTID: 2348330542491465
Subject: Applied Mathematics

Abstract/Summary:
Complex-valued neural networks (CVNNs) have received extensive attention in recent years, and they hold great potential in fields such as cognitive science, intelligent systems, and radar signal processing. Their characteristic feature is that the inputs, outputs, and weights are all complex-valued. According to the choice of activation function, CVNNs fall into two classes: split complex-valued networks and fully complex-valued networks. The activation function of a split complex-valued network does not satisfy the Cauchy-Riemann conditions, so it has no complex derivative. The activation function of a fully complex-valued network, by contrast, is analytic; this guarantees complex differentiability and makes the derivation of learning algorithms straightforward. This paper analyzes both kinds of network.

In traditional studies of CVNNs, the squared error function is usually chosen as the objective function. However, the squared error accounts only for the amplitude of the error and ignores its argument, so an error function that takes both amplitude and phase into account is needed. Moreover, although back-propagation training is widely used in practice, its weights tend to diverge. An effective remedy is to add a penalty term to the error function, which drives superfluous weights toward zero. The most common penalty terms are the L1 and L2 norms of the weights. Recently, the Chinese academician Zongben Xu proposed a comparatively novel alternative, the L1/2 regularization method. In this paper, L1/2 regularization is applied to complex-valued neural networks.

Taking a logarithmic error function as the objective of the complex-valued network, this paper first studies an online gradient algorithm for fully complex-valued networks characterized by a logarithmic objective function. With the help of the CR (Wirtinger) differential operators, the restriction of Schwarz symmetry on the complex network is removed; this simplifies the derivation, yields the iterative gradient update, and allows a convergence theorem for the algorithm to be stated and proved.

Next, taking the fully complex-valued network algorithm as the model and analyzing its weights, the paper examines the singularity problem that causes learning to pause, namely the plateau phenomenon that may occur during steepest-descent learning. Combining the CR differential operators yields the gradient update, and incorporating the natural-gradient matrix reduces the influence of the singularity problem on the convergence of the algorithm.

Finally, carrying the regularization method over to the complex domain, the paper presents a complex gradient algorithm with an L1/2 regularization term for non-analytic activation functions, studies the convergence of the batch gradient algorithm with the regularization term, and gives a detailed proof.
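To make the logarithmic performance index concrete: for a target d and network output y, the error e = log d - log y splits as e = ln|d/y| + i(arg d - arg y), so the cost E = |e|^2/2 penalizes the amplitude mismatch and the phase mismatch simultaneously. The sketch below trains a single fully complex-valued neuron with a Wirtinger (CR) gradient step on this cost plus a smoothed L1/2 penalty. It is a minimal illustration rather than the thesis's algorithm: the tanh activation, the smooth surrogate (|w|^2 + eps)^(1/4) standing in for the thesis's smoothing of |w|^(1/2), the learning rate, and the toy data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(z):
    # Analytic activation of the fully complex-valued neuron (assumed: tanh).
    return np.tanh(z)

def f_prime(z):
    return 1.0 - np.tanh(z) ** 2

def log_error(d, y):
    # e = log d - log y = ln|d/y| + i(arg d - arg y): amplitude and phase errors.
    return np.log(d) - np.log(y)

def train_step(w, x, d, eta=0.01, lam=1e-3, eps=1e-8):
    """One Wirtinger-gradient step on E = |e|^2/2 + lam * sum (|w|^2 + eps)^(1/4)."""
    z = np.dot(x, w)          # net input z = w^T x (no conjugation)
    y = f(z)
    e = log_error(d, y)
    # e is holomorphic in z, so the cogradient dE/d(conj w) of the data term
    # reduces to -(1/2) * e * conj(f'(z)/f(z)) * conj(x).
    grad_data = -0.5 * e * np.conj(f_prime(z) / y) * np.conj(x)
    # Smooth L1/2 surrogate: d/d(conj w_i) (w_i conj(w_i) + eps)^(1/4)
    #   = (1/4) (|w_i|^2 + eps)^(-3/4) * w_i.
    grad_pen = 0.25 * lam * (np.abs(w) ** 2 + eps) ** (-0.75) * w
    return w - eta * (grad_data + grad_pen)

# Toy run: fit one neuron to a fixed complex target (hypothetical data).
n = 4
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
d = f(np.dot(x, w_true))
w = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
for _ in range(5000):
    w = train_step(w, x, d)
print("final |log-error|:", abs(log_error(d, f(np.dot(x, w)))))
```

Because the cost is real-valued while the weights are complex, steepest descent follows the conjugate Wirtinger gradient dE/d(conj w); this is the device that removes the need for the objective itself to be analytic and is what the CR differential operators buy in the derivations above.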
Keywords/Search Tags: Complex-valued neural networks, Polarization function, Smooth L1/2 regularization, C-R differential operator, Convergence