
Research And Implementation Of Artificial Intelligence Interpretability Evaluation Method On Rule-Based Surrogate Model

Posted on: 2023-02-22    Degree: Master    Type: Thesis
Country: China    Candidate: Y Li    Full Text: PDF
GTID: 2558306914483544    Subject: Cyberspace security
Abstract/Summary:
In recent years, with the rapid development of artificial intelligence, AI models have become increasingly complex, making it difficult for people to understand the logic behind their decisions and thereby limiting their application in critical fields such as medicine and the military. Explainable artificial intelligence (XAI) has emerged in response. Many interpretation techniques are now available, but effective methods for evaluating XAI interpretability are still lacking; only a systematic and scientific evaluation of interpretability can improve the trustworthiness and security of XAI. Focusing on the problem of XAI interpretability evaluation, and targeting the widely used rule-based surrogate-model interpretation technique, this thesis makes the following contributions:

(1) Design of XAI interpretability evaluation indicators and construction of an indicator set. Combining the principles of scientific indicator-system design with the characteristics of surrogate-model-based XAI interpretability, and building on extensive preliminary research, a total of 21 quantitative indicators are defined to evaluate XAI interpretability from seven perspectives: consistency, complexity, causality, clarity, sufficiency, effectiveness, and stability. The indicator set includes both commonly used indicators, such as those for consistency and complexity, and new indicators, such as those for causality, effectiveness, and stability.

(2) A multidimensional quantitative assessment method for XAI interpretability. Combining scientific indicator-system design principles with the use of XAI in real scenarios, interpretability is evaluated along five dimensions: consistency, user comprehension, causality, stability, and effectiveness. An objective evaluation model built on the entropy weight method and the TOPSIS evaluation method fuses the multiple indicators within each dimension to obtain that dimension's final evaluation result. The method is then applied to six surrogate-model-based XAIs in several practical application scenarios, verifying its effectiveness and producing an analysis of, and recommendation for, the most appropriate XAI for users to choose.

(3) A comprehensive quantitative evaluation method for XAI interpretability. The mutually independent indicators in the existing indicator set are retained in full, and a hierarchical indicator system is constructed from the seven perspectives of consistency, complexity, causality, clarity, sufficiency, effectiveness, and stability. This indicator system is then combined with the analytic hierarchy process (AHP), the fuzzy comprehensive evaluation method, and the TOPSIS evaluation method to establish two comprehensive evaluation models that assess the overall interpretability level of an XAI. Finally, the method is used to evaluate the interpretability of three rule-based surrogate-model XAIs in practical application scenarios, and a user experiment is designed for comparison with the evaluation results. The experimental results verify the effectiveness of the evaluation method.
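As a rough illustration of the indicator-fusion step in contribution (2), the sketch below computes objective indicator weights with the entropy weight method and ranks alternatives by TOPSIS closeness. The matrix shape and all numbers are hypothetical placeholders, not the thesis's actual indicator values, and all indicators are assumed to be benefit-type (larger is better) with positive values.

```python
import numpy as np

def entropy_weights(X):
    """Objective weights for n indicators from an (m alternatives x n indicators) matrix."""
    m = X.shape[0]
    P = X / X.sum(axis=0)                               # column-wise proportions
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(m)  # entropy of each indicator
    d = 1.0 - E                                         # degree of divergence
    return d / d.sum()                                  # normalized weights

def topsis(X, w):
    """Closeness of each alternative to the ideal solution, in [0, 1]."""
    V = w * X / np.linalg.norm(X, axis=0)               # weighted vector-normalized matrix
    best, worst = V.max(axis=0), V.min(axis=0)          # ideal and anti-ideal points
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)                 # higher means closer to ideal

# three XAIs scored on three indicators (illustrative data)
X = np.array([[0.9, 0.8, 0.7],
              [0.5, 0.4, 0.3],
              [0.7, 0.6, 0.5]])
scores = topsis(X, entropy_weights(X))
```

Cost-type indicators (smaller is better) would need to be inverted before this step; the thesis's actual preprocessing is not specified here.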
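The fuzzy comprehensive evaluation step in contribution (3) can be sketched as follows. The seven criterion weights (standing in for AHP-derived weights) and every membership degree are purely illustrative placeholders, not the thesis's actual values.

```python
import numpy as np

# hypothetical weights for the seven criteria:
# consistency, complexity, causality, clarity, sufficiency, effectiveness, stability
w = np.array([0.20, 0.10, 0.20, 0.10, 0.10, 0.15, 0.15])

grades = ["poor", "fair", "good", "excellent"]
# membership matrix R: row i holds criterion i's membership degree in each grade
# (illustrative numbers; each row sums to 1)
R = np.array([
    [0.0, 0.1, 0.5, 0.4],   # consistency
    [0.1, 0.2, 0.4, 0.3],   # complexity
    [0.0, 0.2, 0.5, 0.3],   # causality
    [0.1, 0.3, 0.4, 0.2],   # clarity
    [0.0, 0.1, 0.6, 0.3],   # sufficiency
    [0.1, 0.1, 0.5, 0.3],   # effectiveness
    [0.0, 0.2, 0.4, 0.4],   # stability
])

B = w @ R                      # weighted fuzzy composition, M(., +) operator
verdict = grades[B.argmax()]   # maximum-membership principle
```

The resulting membership vector B could also feed into TOPSIS, as in the thesis's second comprehensive evaluation model, rather than being collapsed by the maximum-membership rule.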
Keywords/Search Tags: explainable artificial intelligence, interpretability evaluation, surrogate model, evaluation model