Verification And Analysis Of Fairness Of Tree Model Based On Probabilistic Model Checking

Posted on: 2022-11-14    Degree: Master    Type: Thesis
Country: China    Candidate: Y Wang    Full Text: PDF
GTID: 2518306722471744    Subject: Master of Engineering
Abstract/Summary:
More and more social decisions are made by machine learning models, including legal decisions, financial decisions and so on. For these decisions, the fairness of the algorithm is extremely important; in fact, one of the goals of introducing machine learning in these settings is to circumvent or reduce human bias in decision-making. However, datasets often contain sensitive features or carry historical biases, leading machine learning algorithms to produce biased models. Tree-based models, including decision trees and tree ensembles, are widely used in decision systems across many fields because of their high efficiency, easy implementation and strong generalization ability. However, because they are data-driven and depend heavily on the characteristics of the data, bias can easily be introduced into the model, resulting in unfairness. Many existing fairness verification and improvement algorithms are based on heuristics and lack formal guarantees, and verifying group fairness of tree ensemble models remains a challenge. In this paper, a method based on probabilistic model checking is proposed to formally verify the fairness of tree-based models. The main work and contributions of this paper are as follows:
1. We transform the fairness problem into a probabilistic verification problem and propose a fairness verification framework for tree-based models based on probabilistic model checking, which can measure and verify the fairness of a model under different fairness metrics (one such metric, written as a probability property, is sketched after this abstract). The framework supports both single decision trees and tree ensembles, scales to larger models, and can handle compound sensitive attributes.
2. Building on the verification framework, we further study the discovery and analysis of a model's implicit biases, and provide verification and analysis of implicit discrimination from different perspectives such as attribute correlation and causal fairness.
3. We select three datasets of different sizes and characteristics from commonly used fairness datasets and use them to demonstrate the effectiveness of the framework. The framework can not only verify the fairness of a model, but also evaluate the effectiveness of fairness improvement algorithms. In addition, compared with existing fairness verification methods, the framework shows clear performance advantages on large-scale models.
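The abstract does not name the specific fairness metrics the framework evaluates. As an illustration only, one commonly used group fairness metric, demographic (statistical) parity, can be written as exactly the kind of probability property a probabilistic model checker can evaluate. The notation below is assumed for this sketch: f is the tree-based classifier, x the non-sensitive features, s the sensitive attribute with groups a and b, and \varepsilon a tolerance threshold.

```latex
% Demographic (statistical) parity as a probability property (illustrative).
\[
  \bigl| \Pr[f(x, s) = 1 \mid s = a] \;-\; \Pr[f(x, s) = 1 \mid s = b] \bigr| \le \varepsilon
\]
```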
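The core of reducing fairness verification to a probability computation is that, for a fixed distribution over inputs, the probability of a decision tree returning a positive prediction equals the total probability of all root-to-leaf paths ending in a positive leaf. The toy Python sketch below illustrates this idea under the simplifying assumption of independent categorical features; the tree encoding and function names are hypothetical and do not reproduce the thesis's actual implementation, which encodes models for a probabilistic model checker.

```python
# Illustrative sketch only: computes Pr[tree predicts 1] for a decision tree
# over independent categorical features, by summing the probabilities of all
# root-to-leaf paths that end in a positive leaf.

# A node is either a leaf {"leaf": 0 or 1} or an internal node
# {"feature": name, "children": {value: subtree, ...}}.
def positive_probability(node, dist):
    """dist maps each feature name to a dict {value: probability}."""
    if "leaf" in node:
        return float(node["leaf"])
    total = 0.0
    for value, child in node["children"].items():
        p_branch = dist[node["feature"]].get(value, 0.0)
        total += p_branch * positive_probability(child, dist)
    return total

# Example: a toy tree that predicts 1 whenever gender == "female",
# evaluated under two conditional input distributions (one per group).
tree = {"feature": "gender",
        "children": {"female": {"leaf": 1}, "male": {"leaf": 0}}}

p_female = positive_probability(tree, {"gender": {"female": 1.0, "male": 0.0}})
p_male = positive_probability(tree, {"gender": {"female": 0.0, "male": 1.0}})
print(abs(p_female - p_male))  # demographic-parity gap of this toy tree: 1.0
```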
Keywords/Search Tags:Fairness Verification, Decision Tree Ensemble, Probabilistic Model Checking, Trustworthy Machine Learning