With the continuous development and widespread application of machine learning, machine learning models have achieved great success. However, owing to their inherent black-box nature and opaque learning processes, it is difficult for people to gain a deep understanding of their behavior and rationale. It is therefore crucial to provide explanations for machine learning models in order to increase their transparency and credibility. As a typical application of machine learning, image classification models have also received extensive attention with respect to their explanation. However, current research on explaining image classification models remains limited. On the one hand, existing explanation approaches cannot guarantee explanations that are both sufficient and necessary for image classification models; on the other hand, although image segmentation methods are widely used in the explanation of image classification models, their impact on explanation approaches has yet to be analyzed and studied. In response to these problems, this thesis focuses on explanation approaches for image classification models and carries out work on two fronts.

First, this thesis proposes DDImage, a new method for generating local explanations for image classification models that efficiently produces explanations that are both sufficient and necessary. Given an image classification model and an input image, DDImage applies a series of reduction operations to the input image and then validates each reduced image against the sufficiency and necessity of the explanation, ensuring that a small, sufficient, and necessary local explanation is ultimately obtained. Comparative experiments between DDImage and the state-of-the-art explanation approaches BayLIME and SEDC were carried out on the ImageNet and Roaming-panda datasets with the MobileNetV2 and ResNetV2 models. The experimental data show that DDImage generates sufficient and necessary explanations in 100% of cases, and that it outperforms the other methods both in its ability to generate small-scale explanations and in its stability when processing similar input images.
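The reduction-and-validation idea can be illustrated with a small sketch. The Python code below is not the thesis's actual algorithm; it is a minimal greedy variant, assuming a hypothetical predict callable that returns class probabilities and a precomputed segment mask. Removing segments while the target prediction survives yields a reduced image that is sufficient (it still produces the target class) and, at the fixed point, necessary (dropping any single remaining segment flips the prediction).

```python
import numpy as np

def ddimage_sketch(image, segments, predict, target_class):
    """Illustrative DDImage-style reduction loop (not the thesis's exact method).

    image:        H x W x 3 array
    segments:     H x W integer mask assigning each pixel to a segment
    predict:      hypothetical callable mapping a batch of images to class probabilities
    target_class: class index the model predicts for the full image
    """
    baseline = np.zeros_like(image)           # occlusion value for removed segments
    kept = list(np.unique(segments))          # start from all segments: sufficient by construction

    def compose(seg_ids):
        # Keep only the pixels belonging to the retained segments.
        mask = np.isin(segments, seg_ids)
        return np.where(mask[..., None], image, baseline)

    def sufficient(seg_ids):
        # The reduced image is sufficient if it still yields the target class.
        probs = predict(compose(seg_ids)[None, ...])[0]
        return np.argmax(probs) == target_class

    # Greedily drop segments while the prediction is preserved. At termination,
    # removing any single remaining segment changes the prediction (necessity).
    changed = True
    while changed:
        changed = False
        for seg in list(kept):
            candidate = [s for s in kept if s != seg]
            if candidate and sufficient(candidate):
                kept = candidate
                changed = True
    return compose(kept), kept
```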
Second, this thesis conducts an empirical study on open datasets, models, and methods to reveal the impact of image segmentation methods on explanation approaches. The study selects three mainstream image segmentation methods and two mainstream local explanation approaches, and carries out comprehensive experiments on the ImageNet and Roaming-panda datasets with the MobileNetV2 and ResNetV2 models (a minimal harness for this kind of setup is sketched at the end of this abstract). The results show that the choice of image segmentation method affects multiple aspects of an explanation approach, including its ability to generate sufficient and necessary explanations and its ability to generate small-scale explanations, and that the gap between different segmentation methods is large.

The main contributions of this thesis are summarized as follows: (1) This thesis proposes DDImage, an explanation approach for image classification models, and verifies its feasibility and effectiveness experimentally. DDImage can generate local explanations that are both sufficient and necessary, addressing a shortcoming of current explanation approaches for machine learning models and providing a reference for the subsequent development of local explanation approaches. (2) This thesis conducts an empirical study of the impact of image segmentation methods on model explanation approaches, revealing the correlation between the two and the degree of influence through experiments. The results of this empirical study provide reference and guidance for the subsequent use of image segmentation techniques in the explanation of machine learning models.
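As referenced above, the empirical setup can be illustrated with a minimal harness. The sketch below assumes the same hypothetical predict callable and a pluggable explain function standing in for the explanation approaches studied in the thesis; it runs one explainer under three widely used segmentation methods from scikit-image (SLIC, quickshift, and Felzenszwalb). These three are plausible choices for illustration only; the abstract does not name the thesis's exact methods.

```python
from skimage.segmentation import slic, quickshift, felzenszwalb

# Illustrative candidate segmenters; parameter values are arbitrary defaults.
SEGMENTERS = {
    "slic":         lambda img: slic(img, n_segments=50, compactness=10, start_label=0),
    "quickshift":   lambda img: quickshift(img, kernel_size=4, max_dist=200, ratio=0.2),
    "felzenszwalb": lambda img: felzenszwalb(img, scale=100, sigma=0.5, min_size=50),
}

def compare_segmentations(image, predict, target_class, explain):
    """Run the same explainer under each segmentation method.

    explain(image, segments, predict, target_class) -> list of kept segment ids.
    Returns the number of segments in each explanation, a crude proxy
    for explanation size.
    """
    results = {}
    for name, segment in SEGMENTERS.items():
        segments = segment(image)
        kept = explain(image, segments, predict, target_class)
        results[name] = len(kept)
    return results

# Example wiring with the earlier sketch, which returns (reduced_image, kept_ids):
# compare_segmentations(image, predict, target_class,
#                       lambda img, seg, p, t: ddimage_sketch(img, seg, p, t)[1])
```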