
A Study Of Tea Taste Signals Identification Based On Minimal Uncertainty Neural Networks

Posted on: 2005-05-05    Degree: Master    Type: Thesis
Country: China    Candidate: Y Wang    Full Text: PDF
GTID: 2168360125950570    Subject: Computer application technology
Abstract/Summary:
Machine vision, hearing, touch and force sensing have developed greatly in the field of robotics, and some of them have already been put to practical use. Machine taste and smell sensing have wide applications in intelligent management of the food industry, food quality inspection, the evaluation of taste and smell, and so on. However, the progress made in machine taste and smell sensing is far from satisfactory. The most challenging tasks in this field are to construct taste and smell sensors with high sensitivity and to build recognition systems with high classification accuracy. Many scientists in Japan have pursued studies in this area since the 1980s. By now, not only have the basic tastes of sourness, sweetness, bitterness, umami and saltiness been measured successfully, but considerable headway has also been made in the quantitative sampling analysis of various foods and beverages, such as coffee, tea, mineral water and rice.

How to determine the structure and parameters of a neural network promptly and efficiently has long been a difficult point in neural network research [47][48]. The basic idea for solving this problem is to extract suitable information from the research data and then to guide the construction of the network with that information, for example by constructing the network in the light of Bayes' theorem [49][50]. Optimizing neural networks with Particle Swarm Optimization (PSO) has also been proposed in recent years [51][52].

This paper discusses a new model, the minimal uncertainty neural network (MUNN), in which the network structure is constructed from the principle of Minimal Uncertainty Adjudgment.

Theorem 1. Let X = (x1, x2, ..., xN) be an N-dimensional input vector whose attributes x1, x2, ..., xN are mutually independent, and let P(k) denote the probability of event k; then the quantity given by Eq. (1) is called the uncertainty of X to y.

Corollary 1. When Y = {y1, y2, ..., yM} is the set of classes, the uncertainty of X to yj (j in {1, 2, ..., M}) is determined by Eq. (2). When classifying, the class with the minimal uncertainty is chosen as the final adjudgment; this rule is defined as Minimal Uncertainty Adjudgment.

A network that carries out this identification is defined as a minimal uncertainty neural network (MUNN), and its structure is determined as shown in Fig. 1. The meaning of each layer is as follows:
Layer [A] (input samples): A is the observation set, and x_ii' is a property of the sample x_i.
Layer [B] (weight selection): O_ii' = 1 if x_ii' is in A, and O_ii' = 0 otherwise.
Layer [C] (transmitted calculation): S_j is obtained from Eq. (1) and Eq. (2).
Layer [D] (output uncertainty): the output uncertainty is obtained with Eq. (3).
Fig. 1. A simple minimal uncertainty neural network with 2 input attributes and 2 classes (N = 2).

The MUNN is combined with Bayes' theorem and with Particle Swarm Optimization (PSO) for training. When the weights and biases of the MUNN are determined by Bayes' theorem, some counters are calculated first, where r is the number of patterns in the training set, the indicator variables mark whether a given attribute value and a given class appear in a pattern, and the "strength" of a pattern is typically 1.
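To make the Minimal Uncertainty Adjudgment rule and the Bayesian counting step concrete, the sketch below interprets the per-class uncertainty as a negative log-probability accumulated over the independent attributes and estimates it from co-occurrence counters. Since Eqs. (1)-(3) are not reproduced in this abstract, the exact formula, the smoothing constant alpha and the helper names (fit_counts, uncertainty, classify) are illustrative assumptions rather than the thesis' own definitions.

```python
import math
from collections import defaultdict

# Sketch of a Minimal Uncertainty Adjudgment classifier. Illustrative assumption:
# the uncertainty of X to class y is taken as -log of a naive-Bayes style product
# over the independent attributes; the thesis' exact Eqs. (1)-(3) are not shown here.

def fit_counts(samples, labels, alpha=1e-3):
    """Count attribute-value/class co-occurrences, analogous to the 'counters'
    computed before the Bayesian determination of the MUNN weights."""
    class_count = defaultdict(float)
    joint_count = defaultdict(float)           # (attr index, attr value, class) -> count
    for x, y in zip(samples, labels):
        class_count[y] += 1.0                  # pattern "strength" assumed to be 1
        for i, v in enumerate(x):
            joint_count[(i, v, y)] += 1.0
    return class_count, joint_count, alpha

def uncertainty(x, y, class_count, joint_count, alpha, n_total):
    """Uncertainty of input X with respect to class y (smaller = more certain)."""
    u = -math.log(class_count[y] / n_total)
    for i, v in enumerate(x):
        p = (joint_count[(i, v, y)] + alpha) / (class_count[y] + alpha)
        u += -math.log(p)
    return u

def classify(x, class_count, joint_count, alpha):
    """Minimal Uncertainty Adjudgment: choose the class with minimal uncertainty."""
    n_total = sum(class_count.values())
    return min(class_count,
               key=lambda y: uncertainty(x, y, class_count, joint_count, alpha, n_total))

# Example with two attributes and two classes, echoing Fig. 1 (N = 2):
X = [("sweet", "low"), ("sweet", "high"), ("bitter", "high"), ("bitter", "low")]
Y = ["green", "green", "black", "black"]
model = fit_counts(X, Y)
print(classify(("bitter", "high"), *model))    # -> "black"
```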
Therefore, the weights and biases are deduced from these counters; a is usually a very small number, and in the later experiments we assume that a is 1/C, so the classification error is close to log(1/C²).

When the weights and biases of the MUNN are determined by PSO, the weights and biases of the minimal uncertainty neural network model act directly as the parameters of the particles, and the number of misclassified samples is used as the fitness value. The particles' velocities and positions are updated with the following formulae:

v(t+1) = v(t) + c1·rand()·(pBest(t) − Present(t)) + c2·rand()·(gBest(t) − Present(t))
Present(t+1) = Present(t) + v(t+1)

where v(t) is the particle velocity, Present(t) is the current particle position, pBest(t) and gBest(t) are the individual best and the global best, rand() is a random number in [0, 1], and c1, c2 are learning factors, usually c1 = c2 = 2. The final result of...
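As a companion sketch for the PSO training step, the code below implements the velocity and position updates quoted above with c1 = c2 = 2 and treats the misclassification count as the fitness to be minimised. The particle dimension, the velocity clamp v_max and the evaluation function passed in as `fitness` are illustrative assumptions, since the abstract does not give these details.

```python
import random

# Sketch of the particle swarm update used to train the MUNN. Assumptions:
# each particle's position is the flattened vector of MUNN weights and biases,
# and fitness() returns the number of misclassified training samples.

C1 = C2 = 2.0          # learning factors, usually c1 = c2 = 2

def pso(fitness, dim, n_particles=20, n_iter=100, v_max=0.5):
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                        # individual best positions
    pbest_fit = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]       # global best

    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                # v(t+1) = v(t) + c1*rand()*(pBest - Present) + c2*rand()*(gBest - Present)
                vel[i][d] += (C1 * random.random() * (pbest[i][d] - pos[i][d])
                              + C2 * random.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-v_max, min(v_max, vel[i][d]))   # clamp the velocity
                # Present(t+1) = Present(t) + v(t+1)
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f < gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit

# Usage (hypothetical evaluation function and training set):
# best_weights, best_err = pso(lambda w: count_misclassified(w, train_set), dim=10)
```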
Keywords/Search Tags: Identification