
K-dependence Bayesian Classifiers For Strengthening Attribute Dependencies

Posted on: 2020-12-27
Degree: Master
Type: Thesis
Country: China
Candidate: H M Jiang
Full Text: PDF
GTID: 2428330575477302
Subject: Computer technology
Abstract/Summary:
In recent years, artificial intelligence has developed rapidly and is often called the key technology of the fourth industrial revolution. Many large companies have begun to invest heavily in artificial intelligence research and development, covering areas such as face recognition, intelligent security, and autonomous driving. The core aim of artificial intelligence is to let machines learn human judgment, and research represented by Bayesian networks has inherent advantages here: a Bayesian network expresses the decision factors of causal reasoning as probability distributions and presents the reasoning process graphically.

Among Bayesian network classifiers, the Naive Bayes (NB) model is famous for its simplicity and efficiency, but it assumes that the predictive attributes are conditionally independent of each other given the class variable. This assumption rarely holds in real scenarios, so in practice NB is best suited to time-critical applications. To weaken NB's conditional attribute independence assumption, several representative models have been proposed, such as Tree-Augmented Naive Bayes (TAN), Averaged One-Dependence Estimators (AODE), and K-Dependence Bayesian networks (KDB). Among them, NB and TAN are better suited to small data sets, while AODE performs well on large data sets but at high model complexity. Of the many Bayesian network classification models, KDB offers high classification accuracy and stability. However, KDB considers only each attribute's direct dependence on the class and neglects conditional dependence between attributes, so useful information is left out. In addition, KDB performs no feature selection: if a data set contains redundant attributes, they may increase the risk of over-fitting and lower classification accuracy.

To solve these problems, we extend KDB to KDBSM by representing and strengthening the dependency relationships between attributes. We derive local mutual information from the simplest local structure of a Bayesian network, and then propose the corresponding general information, which we use to find an optimal attribute order. In addition, attribute reduction is applied to remove redundant attributes and prevent over-fitting. Since the local structure over the first K attributes is already complete, no causal inference is required for them; starting from the (K+1)-th attribute, we carry out structure learning and attribute selection with a greedy search strategy. The proposed algorithm is tested on 21 UCI data sets and compared with other classical methods in terms of 0-1 loss, goal difference (GD), and macro-F1. The results show that our approach achieves higher classification accuracy and robustness.
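The independence assumptions contrasted above can be written out explicitly. These are the standard textbook factorizations of NB and KDB, not formulas reproduced from the thesis itself:

$$P_{\mathrm{NB}}(c \mid x_1,\dots,x_n) \;\propto\; P(c)\prod_{i=1}^{n} P(x_i \mid c)$$

KDB relaxes the independence assumption by letting each attribute $X_i$ condition on up to $k$ attribute parents $\mathrm{pa}_i$ in addition to the class:

$$P_{\mathrm{KDB}}(c \mid x_1,\dots,x_n) \;\propto\; P(c)\prod_{i=1}^{n} P(x_i \mid c, \mathrm{pa}_i), \qquad |\mathrm{pa}_i| \le k$$

NB is the special case $k = 0$, and TAN corresponds to $k = 1$ with a tree-shaped attribute structure.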
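As a concrete reference point, the baseline KDB structure-learning step that the thesis builds on (rank attributes by mutual information with the class, then give each attribute up to k higher-ranked parents chosen by conditional mutual information) can be sketched as follows. This is a minimal illustration of the standard KDB procedure using empirical counts; it is not the thesis's KDBSM variant with general information and attribute reduction, and the function names are my own:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y) in nats."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def conditional_mutual_information(xs, ys, cs):
    """I(X;Y|C) = sum over classes c of P(c) * I(X;Y | C=c)."""
    n = len(cs)
    cmi = 0.0
    for c, cnt in Counter(cs).items():
        idx = [i for i in range(n) if cs[i] == c]
        cmi += (cnt / n) * mutual_information([xs[i] for i in idx],
                                              [ys[i] for i in idx])
    return cmi

def kdb_structure(X, y, k=2):
    """Standard KDB structure learning: order attributes by I(Xi;C)
    descending; each attribute's parents (besides the class) are up to
    k earlier attributes with the highest I(Xi;Xj|C)."""
    d = len(X[0])
    cols = [[row[j] for row in X] for j in range(d)]
    order = sorted(range(d),
                   key=lambda j: mutual_information(cols[j], y),
                   reverse=True)
    parents = {}
    for pos, j in enumerate(order):
        earlier = order[:pos]
        ranked = sorted(earlier,
                        key=lambda p: conditional_mutual_information(
                            cols[j], cols[p], y),
                        reverse=True)
        parents[j] = ranked[:k]
    return order, parents
```

The first k attributes in the order necessarily receive fewer than k parents, which matches the abstract's remark that the local structure over the first K attributes is complete before the greedy search begins at attribute K+1.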
Keywords/Search Tags: Intelligent Reasoning, Bayesian Network, attribute order, attribute reduction, redundant attributes