
Research On Interpretation Methods Based On Class Activation Mapping In Image Classification

Posted on: 2022-08-19
Degree: Master
Type: Thesis
Country: China
Candidate: A D Li
Full Text: PDF
GTID: 2518306575466484
Subject: Computer technology

Abstract/Summary:
In recent years, many deep learning models have achieved excellent performance in image classification. However, these models often behave as black boxes, making their decisions difficult to understand, and this lack of interpretability severely limits the application of deep learning models in academia and industry. Interpretable methods have emerged to alleviate this problem. One representative line of work in image classification is the interpretation method based on class activation mapping, which uses back-propagation to compute a weighted sum of the feature maps of the last convolutional layer and displays the regions the network attends to as a heatmap on the input image. However, the resulting interpretation is heuristic: it falls short in explaining the importance of individual features and in aiding human understanding. Moreover, such methods usually explain only a single decision, so it is difficult to detect problems in the network or the data from a macro perspective. In addition, image classification studies often either ignore the semantic correlation between background and foreground or constrain the network directly by encoding that correlation as parameters derived from prior knowledge, so whether the neural network actually learns and exploits the semantic correlation remains hidden. To address these problems, corresponding methods are proposed and effective results are obtained. The main contributions are as follows:

An interpretable feature joint representation framework is proposed. The framework mitigates the bias in computing feature importance by jointly constraining activation values and weight coefficients. It discovers representative features and defines feature attribute tags from statistics of the features' contributions to the target class. These tags are combined with image semantic labels and interpretation results to attach semantic concepts to features and to uncover problems in the network. The explanation specifies each feature's contribution to the classification through a linear combination, and feature visualization enriches the explanatory information. Experiments demonstrate that the framework avoids the shortcomings of heuristic results, improves human comprehension, and identifies problems in the network.

A method for discovering latent semantic correlation in neural networks is proposed. The method delimits the range over which semantic correlation takes effect by constructing background-free data, judges the semantic correlation with the interpretable joint representation framework, and expresses the semantic information explicitly. Experiments show that the method reveals latent semantic correlation and uncovers its influence on image classification.
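As a point of reference for the weighted-summation idea described above, the following is a minimal Grad-CAM-style sketch, assuming a pretrained torchvision ResNet-18. The model, hooked layer, and function names are illustrative assumptions and do not reproduce the thesis's joint representation framework.

```python
# Minimal sketch: weight the last convolutional layer's feature maps by
# back-propagated gradients and form a class-discriminative heatmap.
# Model and layer choice (ResNet-18, layer4) are illustrative.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def save_activation(module, inp, out):
    activations["value"] = out.detach()

def save_gradient(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# Hook the last convolutional block to capture its feature maps and gradients.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

def class_activation_map(image, target_class=None):
    """Return a [0, 1] saliency map of shape (H, W) for one input image."""
    logits = model(image)                      # image: (1, 3, H, W)
    if target_class is None:
        target_class = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, target_class].backward()         # back-propagate the class score

    acts = activations["value"]                # (1, C, h, w) feature maps
    grads = gradients["value"]                 # (1, C, h, w) gradients
    weights = grads.mean(dim=(2, 3), keepdim=True)            # per-channel weights
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))   # weighted summation
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    return cam[0, 0]                           # (H, W) heatmap values

# Example: a random tensor stands in for a preprocessed input image.
heatmap = class_activation_map(torch.randn(1, 3, 224, 224))
```

The heatmap can then be overlaid on the original image to visualize which regions the network relies on for the predicted class, which is the kind of heuristic interpretation result the thesis seeks to improve upon.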
Keywords/Search Tags: interpretable machine learning, image classification, class activation mapping, semantic correlation, neural network