
Inference, Learning And Classification Based On Belief Networks

Posted on: 2006-03-21
Degree: Doctor
Type: Dissertation
Country: China
Candidate: C Zhang
Full Text: PDF
GTID: 1118360155472604
Subject: Computer software and theory
Abstract/Summary:
Belief networks are a core method for uncertainty representation and reasoning in artificial intelligence. This dissertation focuses on efficient exact inference in belief networks, learning belief networks from data, and classification using belief networks.

The belief propagation algorithm (BPA) is commonly used for exact inference on tree and polytree belief networks. Because its computation is local, each node of the network can be regarded as a processor. A new computation model is presented that evenly assigns BPA's workload across a variable number of processors, yielding an effective parallel computation method.

Exact inference on general belief networks is NP-hard. The main difficulty is triangulating the network and constructing a join tree of minimum weight. The dissertation proposes a new triangulation algorithm, MsLB-Triang. Based on both the Dirac property and the LB-simple property of triangulated graphs, MsLB-Triang outperforms the widely used minimum-weight heuristic triangulation algorithm in both the total weight and the number of fill-in edges of the induced graph.

Many methods exist for inducing belief network structures from data. When a genetic algorithm (GA) is used to learn belief network structures, the internal representation of DAGs is crucial. Direct approaches search over the space of all possible DAGs, and an obvious problem with them is the generation of infeasible solutions (i.e., digraphs containing cycles). A new encoding method is given with the same complexity as adjacency-matrix encoding; under this encoding no offspring becomes illegal after crossover and mutation, which improves learning efficiency.

Belief networks can be learned in batch or incremental mode; incremental learning is the task of updating a belief network in the light of new cases. This dissertation proposes an incremental learning algorithm built on two incremental updating rules and a selection criterion: the rules iteratively refine the structure and parameters of the network as new data arrive, and the criterion selects the most suitable candidate as the current result. Numerical experiments show that it performs well.

The naive Bayes classifier is widely used in machine learning because of its computational efficiency and competitive accuracy, but its conditional attribute-independence assumption can degrade performance on real-world problems. Many techniques relax this assumption to improve accuracy, usually at a much higher computational cost. This dissertation investigates enhancing the naive Bayes classifier with a feature-weighting technique based on rough set theory: the weighting coefficients are induced directly from the rough upper approximations of the attributes and can be read as the significance of each attribute when evaluating the posterior probability of a class value. Experimental results show that the new algorithm, feature-weighted naive Bayes (FWNB), matches the classification performance of more elaborate classifiers such as TAN and NBTree while running much faster and using fewer resources.
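As a rough illustration of the feature-weighting idea, the sketch below implements a weighted naive Bayes classifier in Python in which each attribute's log-likelihood is scaled by a per-attribute weight. In FWNB those weights would be derived from rough upper approximations; here they are simply supplied by the caller, and the class name, attribute values, and weight values are illustrative assumptions rather than the dissertation's formulation.

```python
import math
from collections import defaultdict

class WeightedNaiveBayes:
    """Naive Bayes over categorical attributes with per-attribute weights.

    The posterior is scored as  log P(c) + sum_i w_i * log P(x_i | c),
    so w_i acts as the 'significance' of attribute i. In FWNB the weights
    would come from rough-set upper approximations; here they are supplied
    by the caller (an illustrative assumption).
    """

    def __init__(self, weights):
        self.weights = weights                   # one weight per attribute
        self.class_counts = defaultdict(int)     # c -> count
        self.attr_counts = defaultdict(int)      # (i, value, c) -> count
        self.attr_values = defaultdict(set)      # i -> set of seen values

    def fit(self, X, y):
        for xs, c in zip(X, y):
            self.class_counts[c] += 1
            for i, v in enumerate(xs):
                self.attr_counts[(i, v, c)] += 1
                self.attr_values[i].add(v)
        return self

    def _log_posterior(self, xs, c):
        n = sum(self.class_counts.values())
        score = math.log(self.class_counts[c] / n)
        for i, v in enumerate(xs):
            # Laplace-smoothed conditional probability P(x_i | c),
            # raised (in log space) to the attribute's weight.
            num = self.attr_counts[(i, v, c)] + 1
            den = self.class_counts[c] + len(self.attr_values[i])
            score += self.weights[i] * math.log(num / den)
        return score

    def predict(self, xs):
        return max(self.class_counts, key=lambda c: self._log_posterior(xs, c))

# Toy usage: two categorical attributes, the second weighted more heavily.
X = [("sunny", "high"), ("rainy", "high"), ("sunny", "low"), ("rainy", "low")]
y = ["no", "no", "yes", "yes"]
clf = WeightedNaiveBayes(weights=[0.5, 1.5]).fit(X, y)
print(clf.predict(("sunny", "high")))   # -> "no"
```

When every weight equals 1 this reduces to the standard naive Bayes posterior, so the weighting only reshapes how strongly each attribute influences the classification.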
Automatically monitoring the run-time status of key software components is an important application in computer engineering, and such a system must also warn administrators promptly when a component enters an exceptional state. Based on the CPU's internal hardware performance counters and the naive Bayes method, a software-status diagnosis model is presented together with a parameter-learning method, and a software monitoring system named "SoftDiagnose" has been developed. Numerical experiments show that, using only a small amount of CPU performance data, the model recognizes more than 99 percent of unknown software exceptions, including resource insufficiency and viruses, under varying environments.
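The abstract does not specify which counters or model details SoftDiagnose actually uses, so the following is only a minimal sketch: it assumes a handful of hypothetical CPU performance-counter features and a plain Gaussian naive Bayes model that labels a sample as normal or exceptional.

```python
import math

# Hypothetical CPU hardware performance counter features per sample;
# the actual counters used by SoftDiagnose are not given in the abstract.
FEATURES = ["instructions_retired", "cache_misses", "branch_mispredictions"]

class GaussianNB:
    """Tiny Gaussian naive Bayes: one mean/variance per (class, feature)."""

    def fit(self, X, y):
        self.stats, self.priors = {}, {}
        for c in set(y):
            rows = [x for x, label in zip(X, y) if label == c]
            self.priors[c] = len(rows) / len(X)
            self.stats[c] = []
            for j in range(len(FEATURES)):
                col = [r[j] for r in rows]
                mean = sum(col) / len(col)
                var = sum((v - mean) ** 2 for v in col) / len(col) + 1e-9
                self.stats[c].append((mean, var))
        return self

    def predict(self, x):
        def log_post(c):
            # log prior plus Gaussian log-likelihood of each counter value
            s = math.log(self.priors[c])
            for j, (mean, var) in enumerate(self.stats[c]):
                s += -0.5 * math.log(2 * math.pi * var) - (x[j] - mean) ** 2 / (2 * var)
            return s
        return max(self.stats, key=log_post)

# Toy training data: counter readings labelled "normal" vs "exception".
X = [[1.0e9, 2.0e5, 1.0e4], [1.1e9, 2.2e5, 1.1e4],   # normal runs
     [2.0e8, 9.0e5, 8.0e4], [1.5e8, 8.5e5, 9.0e4]]   # exceptional runs
y = ["normal", "normal", "exception", "exception"]
model = GaussianNB().fit(X, y)
print(model.predict([1.8e8, 8.8e5, 8.5e4]))   # -> "exception"
```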
Keywords/Search Tags: belief networks, belief network reasoning, belief network learning, Bayesian classifiers, software status monitoring