
The Application And Comparison On Shape Classification Of Leaves Based On Three Type Neural Networks

Posted on: 2008-05-10
Degree: Master
Type: Thesis
Country: China
Candidate: E M Shen
Full Text: PDF
GTID: 2178360212996473
Subject: Computational Mathematics
Abstract/Summary:
Classification is a problem frequently faced in real life: people care a great deal about how to distinguish unknown objects correctly and quickly. Artificial neural networks have the capability to process huge amounts of data and to perform large-scale parallel computation. A neural network is a highly nonlinear, dynamic, self-organized system that can be used to describe cognition, decision-making, control, and other intelligent behaviors. Owing to their powerful ability to learn and to approximate, artificial neural networks are frequently used to solve classification problems: after learning from a set of training samples, a neural network can recognize unknown objects and classify them.

From their birth in the 1940s until now, neural networks have made great progress. During the last ten years, many researchers have entered this area and produced a large number of new theoretical and practical results, proposing hundreds of neural network models in fields including associative memory, self-learning, self-organization, and computer vision.

In this thesis, we choose three different types of neural networks to implement the classification of leaves in the Iris data set.
The three neural networks are the Back Propagation (BP) neural network, the Radial Basis Function (RBF) neural network, and the Self-Organizing Map (SOM) neural network.

Multi-layer Feed-Forward Neural Networks with the Back Propagation Algorithm

Back propagation algorithm:
1° Initialize the weights to small random values.
2° Randomly choose an input pattern xμ.
3° Propagate the signal forward through the network.
4° Compute δiL in the output layer (oi = yiL), where hjl denotes the net input to the ith unit in the lth layer and g' is the derivative of the activation function g.
5° Compute the δ for the preceding layers by propagating the errors backwards.
6° Update the weights.
7° Go to step 2° and repeat for the next pattern until the error in the output layer is below a prespecified threshold or a maximum number of iterations is reached.

Radial Basis Function Neural Networks

First apply K-Average Clustering Algorithm II to choose the centers of the basis functions:
1° Initialize the clustering centers {cp}.
2° Randomly choose a training vector xj from the training sample.
3° Assign xj to a cluster according to its distance from {ci}, i = 1, ..., p.
4° Update cp (η > 0 is the chosen learning rate).
Then update the weights with the Orthogonal Least Squares algorithm and the pseudo-inverse method until the output error is below a prespecified threshold.

Self-Organizing Map Neural Networks

SOM learning algorithm with normalization:
1° Initialize the weights to small random numbers; set the initial learning rate and neighborhood.
2° Present a pattern xt and evaluate the network outputs.
3° Select the unit (ci, cj) with the minimum output.
4° Update the weights according to the learning rule, where Ncicj(t) is the neighborhood of the unit (ci, cj) at time t and η(t) is the learning rate.
5° Decrease the value of η(t) and shrink the neighborhood Ncicj(t).
6° Repeat steps 2° through 5° until the change in the weight values is less than a prespecified threshold or a maximum number of iterations is reached.

We implement the classification task with these three types of neural networks and compare their
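The back-propagation steps 1° through 7° above can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the 2-2-1 sigmoid architecture, the AND-function training patterns, and the learning rate are assumptions chosen only to keep the example small and self-contained.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    """Activation function g; its derivative is g'(h) = g(h) * (1 - g(h))."""
    return 1.0 / (1.0 + math.exp(-x))

# Toy training patterns (inputs and targets): the logical AND function.
patterns = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

# Step 1: initialize weights (and biases) to small random values.
w_hidden = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(2)]
b_hidden = [random.uniform(-0.5, 0.5) for _ in range(2)]
w_out = [random.uniform(-0.5, 0.5) for _ in range(2)]
b_out = random.uniform(-0.5, 0.5)
eta = 0.5  # learning rate

def forward(x):
    # Step 3: propagate the signal forward through the network.
    h = [sigmoid(sum(w_hidden[j][i] * x[i] for i in range(2)) + b_hidden[j])
         for j in range(2)]
    o = sigmoid(sum(w_out[j] * h[j] for j in range(2)) + b_out)
    return h, o

def total_error():
    """Sum of squared output errors over all patterns."""
    return sum((t - forward(x)[1]) ** 2 for x, t in patterns)

err_before = total_error()
for _ in range(2000):  # Step 7: repeat for a fixed number of iterations.
    # Step 2: randomly choose an input pattern.
    x, t = random.choice(patterns)
    h, o = forward(x)
    # Step 4: delta in the output layer, using g'(h) = g(h)(1 - g(h)).
    delta_o = (t - o) * o * (1 - o)
    # Step 5: propagate the error back to the hidden layer.
    delta_h = [delta_o * w_out[j] * h[j] * (1 - h[j]) for j in range(2)]
    # Step 6: update the weights and biases.
    for j in range(2):
        w_out[j] += eta * delta_o * h[j]
        for i in range(2):
            w_hidden[j][i] += eta * delta_h[j] * x[i]
        b_hidden[j] += eta * delta_h[j]
    b_out += eta * delta_o
err_after = total_error()
```

After training, the total squared error is substantially smaller than before training, which is the stopping behavior described in step 7°.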
performance. The results show that, with a reasonable architecture, learning algorithm, and parameters, all three neural networks perform well on this task.
Keywords/Search Tags: Classification