
Study On Generalized Congruence Neural Networks

Posted on: 2006-10-03
Degree: Master
Type: Thesis
Country: China
Candidate: Y Chen
Full Text: PDF
GTID: 2178360182961623
Subject: Computer application technology
Abstract/Summary:
Over the past decades, many achievements have emerged in both the theoretical research and the practical application of neural networks. As a classic neural network model, the BP network has developed extremely rapidly, yet it also exhibits some deficiencies, such as a slow convergence rate and difficulty of digital implementation. As theoretical research and applications progress, neural networks with much faster convergence and larger scale are required, and solving these problems has become imperative.

Recent studies on the Generalized Congruence Neural Network (GCNN) show that its convergence rate is faster than that of the BP network. However, the GCNN still has drawbacks, such as weak learning ability, difficulty in setting the modulus of the generalized congruence function, difficulty in developing an effective learning algorithm, and the lack of a rigorous theoretical foundation.

This thesis investigates these problems of the GCNN and gives corresponding solutions. A novel GCNN architecture with its learning algorithm is proposed, some theoretical analysis of it is carried out, and the resulting GCNN is applied to e-mail filtering. The major contributions of this thesis are summarized as follows:

Firstly, a novel GCNN architecture is proposed based on an improved generalized congruence activation function, for which the modulus is easy to set. It is also proved that such a GCNN with a single hidden layer can approximate any continuous function to arbitrary accuracy.

Secondly, two gradient descent learning algorithms, namely the modified GCNN BP algorithm and the Large Margin algorithm, are proposed. The time complexity and the convergence properties of the modified GCNN BP algorithm are analyzed. Experimental results show that the proposed GCNN performs better than the standard BP network and some of its improved variants in terms of convergence rate and approximation/classification accuracy.

Thirdly, the reason for the fast convergence of the GCNN is discussed. Theoretical analysis and experimental results show that the fast convergence is due to the multiple minima of the error function, which are generated by the generalized congruence function.

Fourthly, the proposed GCNN is successfully applied to e-mail filtering. Experimental results show that the GCNN achieves better classification accuracy than several other machine learning techniques and the BP network, and requires less learning time than the BP network.
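The abstract does not reproduce the exact form of the improved generalized congruence activation function or of the modified GCNN BP algorithm, so the following is only a minimal sketch of the general idea: a single-hidden-layer network whose hidden units pass the weighted sum through a modulo-style (congruence) activation and are trained by plain gradient descent. The modulus M, the folding of the residue into [-M/2, M/2), the layer sizes, and the learning rate are illustrative assumptions, not the thesis's actual design.

```python
# Hypothetical sketch of a GCNN-style network: a modulo-based ("generalized
# congruence") hidden activation trained with gradient descent. All concrete
# choices below (modulus, sizes, data, learning rate) are assumptions for
# illustration only.
import numpy as np

M = 4.0  # modulus of the congruence activation (assumed value)

def gc_activation(x, m=M):
    """Fold x into [-m/2, m/2) by taking its residue modulo m."""
    return (x + m / 2.0) % m - m / 2.0

def gc_activation_grad(x, m=M):
    """The derivative is 1 almost everywhere (wrap points have measure zero)."""
    return np.ones_like(x)

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 2, 8, 1
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
lr = 0.05

# Toy regression data: approximate y = sin(x1) + cos(x2).
X = rng.uniform(-np.pi, np.pi, size=(200, n_in))
y = np.sin(X[:, :1]) + np.cos(X[:, 1:2])

for epoch in range(500):
    # Forward pass through the congruence hidden layer.
    z1 = X @ W1
    h = gc_activation(z1)
    y_hat = h @ W2

    # Squared-error loss and backpropagated gradients.
    err = y_hat - y
    grad_W2 = h.T @ err / len(X)
    grad_z1 = (err @ W2.T) * gc_activation_grad(z1)
    grad_W1 = X.T @ grad_z1 / len(X)

    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print("final MSE:", float(np.mean(err ** 2)))
```

Because the congruence activation wraps the weighted sum, the error surface acquires many local minima; the thesis attributes the fast convergence of the GCNN to this multiplicity of minima, which the sketch above reproduces only in spirit.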
Keywords/Search Tags: neural network, generalized congruence, activation function, learning algorithm, convergence analysis, e-mail filtering