
Extracting Optimal Explanations for Ensemble Trees via Logical Reasoning

Posted on: 2022-12-11
Degree: Master
Type: Thesis
Country: China
Candidate: G L Zhang
Full Text: PDF
GTID: 2518306752453084
Subject: Master of Engineering
Abstract/Summary:
In recent years, artificial intelligence technology has made great progress and is now widely used across industries and in daily life. From consumption, transportation, and medical treatment to credit, justice, and administration, artificial intelligence is exerting a profound influence on ever more fields. Along with this progress come concerns about the future of artificial intelligence: can this rapidly developing "partner" be trusted? Many recent studies therefore focus on explainable artificial intelligence, attempting to explain the predictive behavior of various machine learning algorithms from different perspectives. Tree ensembles are a popular class of machine learning models with high predictive performance. A single decision tree is usually considered a white box and is deemed explainable by nature. However, a tree ensemble contains a large number of decision trees, all of which contribute to the prediction; this makes the internal logic of the model very complex and turns it into a black-box model that is difficult to understand. We therefore study the explainability of tree ensembles. The main contributions of this thesis are as follows:

· Extracting explanations from tree ensembles. We propose an approach that faithfully extracts a global explanation of a random forest via logical reasoning, transforming a tree-ensemble model into a set of decision rules. Decision rules in IF-THEN form help people understand the prediction logic of the original model.

· Optimal explanations. We propose a criterion for evaluating explanations that balances the size of an explanation against its accuracy. Optimizing explanations under this criterion yields the optimal explanations (OptExplain). OptExplain is an interpretable surrogate model whose predictive ability is as close as possible to that of the original model.

· Profile of equivalence classes. Building on top of OptExplain, we propose a method called the profile of equivalence classes (ProClass), which simplifies the explanation even further by solving the maximum satisfiability problem (MAX-SAT). ProClass gives a profile of the classes and features from the perspective of the model.

· Analysis of explainability. Experiments on several datasets show that our approach provides high-quality explanations for large tree-ensemble models and outperforms recent top-performing methods.
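To make the first contribution concrete, the sketch below enumerates the IF-THEN rules implied by each tree of a small random forest: every root-to-leaf path becomes one rule whose conditions are the threshold tests along the path. This is a minimal illustration of rule extraction in general, not the thesis's actual algorithm; the dataset, feature names, and hyperparameters are assumptions.

```python
# Illustrative sketch only: path-to-rule extraction from a scikit-learn forest.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

def tree_to_rules(estimator, feature_names):
    """Enumerate root-to-leaf paths of one decision tree as IF-THEN rules."""
    t = estimator.tree_
    rules = []

    def walk(node, conds):
        if t.children_left[node] == -1:  # -1 marks a leaf in sklearn's arrays
            label = int(t.value[node].argmax())  # majority class at this leaf
            rules.append((tuple(conds), label))
            return
        name = feature_names[t.feature[node]]
        thr = t.threshold[node]
        walk(t.children_left[node], conds + [f"{name} <= {thr:.3f}"])
        walk(t.children_right[node], conds + [f"{name} > {thr:.3f}"])

    walk(0, [])
    return rules

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=5, max_depth=3, random_state=0).fit(X, y)
names = [f"x{i}" for i in range(X.shape[1])]  # hypothetical feature names
all_rules = [r for est in forest.estimators_ for r in tree_to_rules(est, names)]
cond, label = all_rules[0]
print("IF", " AND ".join(cond), "THEN class", label)
```

Such a raw rule set is faithful but large, which is exactly why the thesis then optimizes and simplifies it.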
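The size/accuracy trade-off behind OptExplain can be sketched as a single score that rewards fidelity to the black-box model while penalizing explanation size. The weighting `lam` and the exact form of the criterion here are assumptions for illustration; the thesis defines its own criterion.

```python
# Illustrative trade-off score, not the thesis's actual criterion.
def explanation_score(fidelity, n_rules, n_literals, lam=0.01):
    """Higher is better: reward agreement with the original model,
    penalize size (rule count plus total number of literals)."""
    return fidelity - lam * (n_rules + n_literals)

# A compact, slightly less faithful rule set can beat a huge, near-exact one:
big   = explanation_score(fidelity=0.99, n_rules=200, n_literals=900)
small = explanation_score(fidelity=0.95, n_rules=10,  n_literals=40)
print(big, small)
```

Under this weighting the small rule set wins, which captures the "balanced" evaluation the abstract describes: maximal fidelity alone would just reproduce the unreadable forest.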
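ProClass simplifies explanations by solving MAX-SAT: satisfy as much total clause weight as possible. A brute-force toy instance shows the problem shape; the clause encoding and weights are invented for illustration and a real system would use a dedicated MAX-SAT solver.

```python
# Toy weighted MAX-SAT by exhaustive search (3 variables), illustration only.
from itertools import product

# Each clause is (literals, weight); positive int i means x_i, negative means NOT x_i.
clauses = [([1, 2], 3), ([-1], 2), ([2, -3], 1), ([3], 2)]

def satisfied_weight(assign):
    """Total weight of clauses satisfied by a (x1, x2, x3) truth assignment."""
    return sum(
        w for lits, w in clauses
        if any(assign[abs(l) - 1] == (l > 0) for l in lits)
    )

best = max(product([False, True], repeat=3), key=satisfied_weight)
print(best, satisfied_weight(best))
```

Here the optimum satisfies every clause; in general MAX-SAT trades some clauses away, which is what lets ProClass drop detail while keeping the most important structure.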
Keywords/Search Tags:Explainable Artificial Intelligence(XAI), Tree Ensembles, Classification, Decision Rule Extraction