
Research On Interpretability Method Of Deep Recognition Network

Posted on: 2022-01-30
Degree: Master
Type: Thesis
Country: China
Candidate: Y X Wang
Full Text: PDF
GTID: 2518306572459124
Subject: Instrumentation engineering
Abstract/Summary:
With the advancement of deep learning, deep neural networks are widely used in many fields, and their interpretability has attracted increasing attention. Current deep recognition networks lack mathematical theoretical support and their internal mechanisms remain unclear, which makes it difficult to accurately evaluate the risk, reliability, adaptability, and robustness of target recognition and limits the practical deployment of such models. Research on interpretability methods for deep recognition networks is therefore urgently needed. Building on an in-depth study of interpretability, this thesis investigates the interpretability of deep recognition networks at three methodological levels: visualization, local approximation, and black-box testing.

To interpret the feature-deduction process of a deep recognition network, a network interpretation method based on feature-map visualization is proposed. While an input image sample propagates through the deep neural network, the method visualizes the feature maps of each layer, showing which regions of the image activate the network and how the neurons of the inner layers respond. A visual-interpretation software tool was designed that analyzes in detail and integrates six algorithms from two families, class-activation mapping and gradient-based backpropagation, and it was experimentally verified on a remote sensing image classification model. The results show that visualizing how feature maps evolve during forward propagation effectively reveals the basis on which the network model makes its decisions.

To expose the internal connection between the features and the decisions of a deep target recognition network, a model-agnostic, locally interpretable explanation method is studied. The method probes the deep model with input perturbations, collects its response feedback, and fits a local linear model to these data. This linear model serves as a simplified surrogate of the deep model around a specific input, explaining each sample individually; the trained weights represent feature importance. The results of the local approximation method show that the weight coefficients of the linear model quantify how much each feature contributes to the decision. The method uncovers the internal relationship between features and model decisions, allowing users to intuitively understand, from both semantic and visual perspectives, the decision logic and evidence the model applies to an input sample.

To interpret the performance of the "black-box" deep target recognition network, the measurement of the model's performance boundary under random noise and adversarial-attack noise is studied; a perturbed-sample evaluation method and a quantification method for model performance indices are designed, and both are tested and verified. Experimental results show that this approach evaluates network performance more comprehensively, provides a basis for interpreting performance boundaries, and strengthens the reliability guarantees of the black-box model.
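To make the visualization level concrete, the following is a minimal sketch of one gradient-based class-activation-mapping variant (Grad-CAM style) in PyTorch. The thesis integrates six such algorithms, which are not specified here; the model, target layer, and input shapes are assumptions for illustration.

```python
# Hypothetical sketch of feature-map visualization via a Grad-CAM-style
# heatmap; `model` and `target_layer` are placeholders, not the thesis's
# actual software components.
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    """Return a class-activation heatmap for one input image (1, C, H, W)."""
    feats, grads = {}, {}

    def fwd_hook(module, inputs, output):
        feats["v"] = output            # feature maps of the chosen layer

    def bwd_hook(module, grad_input, grad_output):
        grads["v"] = grad_output[0]    # gradients flowing into that layer

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)

    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()

    # Weight each channel by its average gradient, then ReLU the weighted sum.
    weights = grads["v"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * feats["v"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

Overlaying the returned heatmap on the input image highlights the regions that most activate the predicted class, which is the "decision basis" the abstract refers to.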
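The local approximation level follows the general pattern of LIME-style surrogate fitting. Below is a hedged sketch of that pattern: perturb one sample, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients rank feature importance. The `predict_fn` interface and the zero-masking perturbation scheme are assumptions, not the thesis's exact design.

```python
# Minimal sketch of a model-agnostic local linear surrogate;
# the masking scheme and kernel are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(predict_fn, x, n_samples=1000, kernel_width=0.75):
    """x: 1-D feature vector; predict_fn: maps (n, d) inputs to (n,) scores."""
    d = x.shape[0]
    masks = np.random.randint(0, 2, size=(n_samples, d))  # on/off per feature
    perturbed = masks * x                                  # zero out features
    preds = predict_fn(perturbed)                          # black-box queries

    # Weight perturbed samples by proximity to the original input.
    dist = np.linalg.norm(masks - 1, axis=1) / np.sqrt(d)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)

    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, preds, sample_weight=weights)
    return surrogate.coef_   # per-feature importance for this one sample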
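For the black-box testing level, the core measurement is accuracy as a function of perturbation strength under both random and adversarial noise. The sketch below uses an FGSM step as the adversarial perturbation; the thesis's own sample-evaluation and index-quantification schemes are not public, so the loader, model, and the assumption that pixels lie in [0, 1] are all placeholders.

```python
# Hedged sketch of performance-boundary measurement under noise;
# FGSM stands in for the unspecified adversarial-attack noise.
import torch
import torch.nn.functional as F

def accuracy_under_noise(model, loader, eps, adversarial=False):
    model.eval()
    correct = total = 0
    for x, y in loader:
        if adversarial:
            x = x.clone().requires_grad_(True)
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            x_adv = (x + eps * x.grad.sign()).clamp(0, 1)   # FGSM step
        else:
            x_adv = (x + eps * torch.randn_like(x)).clamp(0, 1)
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total

# Tracing the performance boundary: sweep the perturbation magnitude.
# for eps in (0.0, 0.01, 0.02, 0.05, 0.1):
#     print(eps, accuracy_under_noise(model, loader, eps, adversarial=True))
```

Plotting accuracy against eps for both noise types yields the kind of performance-boundary curve the abstract describes, with the gap between the two curves indicating sensitivity to targeted versus random interference.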
Keywords/Search Tags: Deep recognition network, interpretability, visualization, local approximation, black-box testing