
Bayesian Network Inference Algorithms and Their Application to Image Classification

Posted on: 2007-09-10
Degree: Master
Type: Thesis
Country: China
Candidate: P Wang
Full Text: PDF
GTID: 2178360182996283
Subject: Computer software and theory
Abstract/Summary:
Uncertain information processing is an important research area in artificial intelligence. Processing approaches fall into two categories: rule-based and model-based. The Bayesian network was developed in the 1980s and has received increasing attention since the 1990s. Compared with the early rule-based approaches, it has clearer semantics and usually draws more reasonable conclusions, but requires much more computation.

Inference in general Bayesian networks is NP-hard. Dozens of inference algorithms have been developed to make Bayesian networks as practical as possible. They fall into two categories: exact and approximate. An early exact algorithm is variable elimination; an early approximate one is logic sampling. The basic idea of logic sampling is to simulate the events that the Bayesian network describes; the inference result is the proportion of samples in which the query event happened among all samples. This algorithm laid the foundation for other sampling-based inference algorithms, such as importance sampling and likelihood weighting.

In this thesis, the accuracy and complexity of several sampling algorithms are described, following an introduction to the basic concepts of Bayesian networks. First, their mathematical foundations are presented from a statistical point of view. Second, the algorithms and their implementations are given in detail from a programming point of view. Last, the complexity and accuracy of each algorithm are analyzed.

In the probabilistic logic sampling algorithm, all random events are simulated a large number of times, and the inference result is the fraction of samples in which the query event happened. The accuracy increases with the number of samples, and any two samples are independent. A sample is discarded if it is not consistent with the evidence nodes.
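The logic sampling procedure described above can be sketched on a toy network. This is a minimal illustration, not the thesis's implementation: the two-node network Rain → WetGrass and its probabilities are made up for the example. Samples are drawn in topological order, samples that contradict the evidence (WetGrass = True) are discarded, and the answer is the fraction of kept samples in which the query event (Rain = True) occurred.

```python
import random

# Toy network (assumed for illustration): Rain -> WetGrass.
P_RAIN = 0.3                             # P(Rain = True)
P_WET_GIVEN = {True: 0.9, False: 0.1}    # P(WetGrass = True | Rain)

def logic_sampling(n_samples, seed=0):
    """Estimate P(Rain=True | WetGrass=True) by logic (rejection) sampling."""
    rng = random.Random(seed)
    kept = hits = 0
    for _ in range(n_samples):
        rain = rng.random() < P_RAIN            # sample the root node first
        wet = rng.random() < P_WET_GIVEN[rain]  # then its child
        if not wet:
            continue          # sample contradicts the evidence: discard it
        kept += 1
        hits += rain          # count kept samples where the query event happened
    return hits / kept if kept else 0.0
```

With enough samples the estimate approaches the exact posterior 0.3·0.9 / (0.3·0.9 + 0.7·0.1) ≈ 0.794, while roughly two thirds of the generated samples are thrown away, which is exactly the inefficiency the discarding step introduces.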
So the complexity of the algorithm is O(N·M·2^K), where N is the number of nodes, M is the number of samples, and K is the number of evidence nodes. In the following part, a new inference algorithm, the samples-mean algorithm, is presented. Its principle is an extension of an integral method from the continuous univariate case to the discrete multivariate case. An entire sample is obtained by combining random samples of the non-evidence, non-query nodes with the evidence nodes and query nodes; repeating this process yields a large number of samples. The inference...
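The 2^K factor comes from discarding evidence-inconsistent samples. Likelihood weighting, mentioned earlier as a successor of logic sampling, avoids this cost by clamping evidence nodes and weighting each sample by the probability of the evidence given its sampled parents. A minimal sketch on the same made-up Rain → WetGrass network (an illustration, not the thesis's samples-mean algorithm, whose description is cut off above):

```python
import random

# Same toy network as before (probabilities assumed for illustration).
P_RAIN = 0.3
P_WET_GIVEN = {True: 0.9, False: 0.1}

def likelihood_weighting(n_samples, seed=0):
    """Estimate P(Rain=True | WetGrass=True); no sample is ever discarded."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_samples):
        rain = rng.random() < P_RAIN   # sample only the non-evidence node
        w = P_WET_GIVEN[rain]          # weight = P(WetGrass=True | sampled Rain)
        num += w * rain
        den += w
    return num / den                   # weighted fraction of Rain=True samples
```

Every generated sample contributes to the estimate, so the per-query cost drops from O(N·M·2^K) to O(N·M), at the price of weight variance when the evidence is unlikely.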
Keywords/Search Tags: Classification