With the in-depth research on and wide application of deep neural networks, new deep network models continue to emerge. How to construct interpretable deep neural network models, and how to improve the interpretability of existing ones, has become a hot issue in the field of deep learning. This thesis analyzes the current state of domestic and international research and, addressing the open problems in interpretability, studies interpretability methods for deep neural networks that integrate reasoning and decision-making theory. The specific research work is as follows:

(1) An interpretable model construction method based on dynamic pruning inference decision-making and information bottleneck verification is proposed. It helps the deep neural network model discard redundant information while retaining the essential features, improving the interpretability of model decisions. A dynamic pruning reasoning decision algorithm is designed: the key feature vectors in the convolutional layers are selected and the remaining redundant features are removed, which reduces the computational cost of the overall model (an illustrative sketch follows this summary). A method for verifying model interpretability based on information bottleneck theory is also proposed; it provides explanations and analyses verified by the information bottleneck theory, improving the accuracy and understandability of attributions for the entire network model.

(2) An interpretable model construction method based on human-in-the-loop reasoning and a hierarchical decision-making mechanism is proposed. It improves both the model's ability to correct errors during training and users' understanding of the model's internal mechanism. A Draw CAM method is proposed for human-in-the-loop manipulation of the key features of a deep neural network; it is used to manage key feature maps and update convolutional-layer parameters. By masking the target region in a class activation map drawn by experts, the model is guided to attend to and learn the important parts of the target region (a hedged sketch of such guidance follows this summary). A hierarchical learning structure with a sequential decision tree is designed, and the decision path is displayed intuitively through saliency-map visualization of key points, providing strong interpretability for the fully connected layers of the deep neural network.

(3) An interpretable model construction method integrating reasoning and decision theory is constructed. Task accuracy, human-in-the-loop interaction, and interpretability techniques are combined to build interpretable reasoning items and interpretable decision items, thereby improving the transparency of deep models, engineers' trust in the models, and users' understanding of model results. An online interpretable regularization library is built, providing technical support for constructing interpretable deep neural network models for various application scenarios. An adaptive selection algorithm is designed to choose the best interpretable item for a given scenario, and its rationality is verified through experimental analysis.

(4) An interpretability evaluation system that integrates data, models, and results is proposed, namely the Data-Model-Result (DMR) evaluation metric, which is used to evaluate the interpretability of the data, the model, and the results. This metric provides a comprehensive evaluation of the credibility of the data, the comprehensibility of the model, and the interpretability of the results, achieving interpretability evaluation over the whole deep neural network pipeline (a minimal aggregation sketch follows this summary).
(5) A deep neural network interpretable model construction and analysis system that integrates reasoning and decision-making theory is designed and implemented. According to the application scenario, evaluation metrics are selected at the three levels of data, model, and result, which determines the dataset, the explainable reasoning items, and the explainable decision items. The system also provides views of the model and interpretable displays of the results during both the training and testing phases.
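A minimal, hypothetical sketch of the dynamic pruning idea in contribution (1) is given below in Python (PyTorch). The channel-importance criterion (mean absolute activation) and the keep ratio are illustrative assumptions, not the algorithm defined in the thesis; the sketch only shows how low-importance convolutional feature channels could be zeroed out at inference time to cut computation.

    import torch

    def prune_feature_maps(features: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
        """Zero out the least informative channels of a convolutional feature map.

        features: (batch, channels, height, width); keep_ratio is an assumed hyperparameter.
        """
        b, c, _, _ = features.shape
        # Channel importance: mean absolute activation (an assumed, simple proxy).
        importance = features.abs().mean(dim=(2, 3))      # shape (b, c)
        k = max(1, int(c * keep_ratio))
        kept = importance.topk(k, dim=1).indices          # indices of channels to keep
        mask = torch.zeros(b, c, device=features.device)
        mask.scatter_(1, kept, 1.0)                       # 1 for kept channels, 0 for pruned
        return features * mask.view(b, c, 1, 1)           # redundant channels become zero

    # Usage: apply between convolutional blocks at inference time.
    x = torch.randn(2, 64, 14, 14)
    pruned = prune_feature_maps(x, keep_ratio=0.25)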
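The next sketch illustrates, under stated assumptions, how an expert-drawn mask over a class activation map could steer training in the spirit of the Draw CAM method in contribution (2): activation energy falling outside the expert-marked region is penalized, nudging the convolutional layers toward the annotated target area. The loss form, normalization, and weighting are assumptions for illustration, not the thesis's implementation.

    import torch

    def cam_guidance_loss(cam: torch.Tensor, expert_mask: torch.Tensor) -> torch.Tensor:
        """Penalize class-activation energy outside an expert-drawn region.

        cam: (batch, H, W) raw class activation map; expert_mask: (batch, H, W),
        1 inside the region the expert marked as important, 0 elsewhere.
        """
        cam = torch.relu(cam)
        cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-6)  # normalize each map to [0, 1]
        outside = cam * (1.0 - expert_mask)                      # activation outside the drawn region
        return outside.mean()

    # Usage (lambda_cam is an assumed weighting hyperparameter):
    #   total_loss = task_loss + lambda_cam * cam_guidance_loss(cam, expert_mask)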
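Finally, as a purely illustrative sketch of contribution (4), the snippet below aggregates three per-level scores into a single Data-Model-Result (DMR) value with a weighted sum. The weighted-sum form and the equal default weights are assumptions; the thesis defines its own DMR metric.

    def dmr_score(data_credibility: float, model_comprehensibility: float,
                  result_interpretability: float,
                  weights: tuple = (1.0 / 3, 1.0 / 3, 1.0 / 3)) -> float:
        """Combine per-level scores (each assumed to lie in [0, 1]) into one DMR value."""
        wd, wm, wr = weights
        return wd * data_credibility + wm * model_comprehensibility + wr * result_interpretability

    # Usage: dmr_score(0.8, 0.6, 0.7) -> a single interpretability score for the whole pipeline.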