Many artificial intelligence models are unexplainable "black boxes": they can make accurate predictions, but they cannot explain the logic behind those predictions. In areas that affect people's lives, health, and property, such as medical diagnosis, autonomous driving, and financial lending, the consequences of a wrong decision are severe. Explainable artificial intelligence (XAI) helps users understand the process and logic behind model predictions through interpretation techniques such as feature importance, model distillation, and surrogate models. Among these, constructing a surrogate model is the mainstream interpretation method, since it preserves the prediction accuracy of the original model while enhancing its interpretability.

As for evaluating interpretability, most existing studies rely on user experiments, which demand substantial human and material resources and take a long time. Few studies address quantitative evaluation, and most of those that do discuss single indicators in isolation rather than combining multiple indicators into a comprehensive assessment.

To address these gaps in quantitative interpretability evaluation, this paper studies an interpretability evaluation method based on surrogate models. First, based on the characteristics of surrogate models, it proposes an interpretability evaluation index system covering four topics: fidelity, clarity, complexity, and stability. For the decision tree model commonly used as a surrogate, the four topics are subdivided into eight indexes in total, and a calculation formula is given for each index. Second, combining this index system with the analytic hierarchy process (AHP) and the entropy weight method from comprehensive evaluation modeling, it gives a comprehensive evaluation method for decision tree surrogate models. Finally, for a specific XAI application, a subjective-objective comparison between a user questionnaire experiment and the results computed by this evaluation model verifies that the proposed interpretability evaluation method is effective and feasible.

Building on this research, the paper also develops an interpretability evaluation system. The system supports XAI interpretability evaluation and analysis of evaluation results, and comprises five functional modules: data set management, model management, index calculation, evaluation, and registration/login. The system can be applied to the quantitative evaluation of XAI interpretability, enhancing users' trust in AI models and promoting the application of AI in many fields.
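To make the surrogate-model setup concrete, the following is a minimal sketch of training a decision tree to mimic a black-box classifier and scoring two of the abstract's topics: fidelity (agreement with the black-box predictions) and complexity (tree depth and leaf count). The dataset, the random-forest black box, and these particular definitions are illustrative assumptions, not the paper's eight indexes or exact formulas.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box model whose behavior we want to explain.
black_box = RandomForestClassifier(n_estimators=100, random_state=0)
black_box.fit(X_train, y_train)

# Train the surrogate on the black box's *predictions*, not the true labels,
# so the tree approximates the black box rather than the task itself.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: fraction of held-out points where surrogate and black box agree.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))

# Simple complexity proxies for a tree surrogate (illustrative only).
print(f"fidelity = {fidelity:.3f}")
print(f"depth = {surrogate.get_depth()}, leaves = {surrogate.get_n_leaves()}")
```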
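The comprehensive evaluation method weights the indexes with AHP (subjective, from expert judgments) and the entropy weight method (objective, from the data). A textbook AHP sketch follows: weights come from the principal eigenvector of a reciprocal pairwise-comparison matrix, with a consistency ratio (CR) check. The pairwise judgments over the four topics are hypothetical numbers for illustration.

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray):
    """AHP weights from a reciprocal pairwise-comparison matrix via the
    principal eigenvector, plus Saaty's consistency ratio (CR)."""
    n = pairwise.shape[0]
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)      # principal eigenvector is one-signed
    w = w / w.sum()
    ri = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45][n - 1]
    ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
    cr = ci / ri if ri > 0 else 0.0
    return w, cr

# Hypothetical judgments over (fidelity, clarity, complexity, stability),
# saying fidelity matters most; the matrix must stay reciprocal.
A = np.array([
    [1.0, 3.0, 5.0, 3.0],
    [1/3, 1.0, 3.0, 1.0],
    [1/5, 1/3, 1.0, 1/3],
    [1/3, 1.0, 3.0, 1.0],
])
w, cr = ahp_weights(A)
print(w, f"CR = {cr:.3f}")  # CR < 0.1 is conventionally acceptable
```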
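The entropy weight method supplies the objective side: indexes whose computed scores vary more across the evaluated models carry more information and receive higher weight. A standard sketch is below, assuming an m x n matrix of non-negative, benefit-type index scores (the scores shown are hypothetical); the paper's exact variant may differ.

```python
import numpy as np

def entropy_weights(scores: np.ndarray) -> np.ndarray:
    """Objective weights from an m x n score matrix
    (m alternatives, n indexes) via the entropy weight method."""
    m, n = scores.shape
    # Normalize each column so it sums to 1.
    p = scores / scores.sum(axis=0, keepdims=True)
    # Column-wise entropy; 0 * log(0) is treated as 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)
    # Lower entropy (more dispersion) means higher weight.
    d = 1.0 - e
    return d / d.sum()

# Hypothetical example: 3 surrogate models scored on 4 indexes.
scores = np.array([
    [0.95, 0.80, 0.60, 0.90],
    [0.90, 0.85, 0.70, 0.85],
    [0.85, 0.90, 0.80, 0.80],
])
print(entropy_weights(scores))
```

A common way to fuse the two weight vectors is a convex combination, w = a * w_AHP + (1 - a) * w_entropy for some a in [0, 1]; the abstract does not specify which combination scheme the paper adopts, so this should be read as one conventional option.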