
Interpretability Study Based On Game Theory And Person Re-Identification

Posted on: 2024-06-08    Degree: Master    Type: Thesis
Country: China    Candidate: Y M Ma    Full Text: PDF
GTID: 2530307067972249    Subject: Cyberspace security
Abstract/Summary:
Deep learning has achieved great success in fields such as image recognition, natural language processing, and speech recognition, but its black-box nature has raised concerns about the opacity of the model decision process. As complex nonlinear models, deep neural networks have internal feature representations and parameters that are usually difficult to understand and interpret. This lack of interpretability may lead to unreliable and unfair decisions and limits the trustworthiness and reliability of the model in practical applications. Research on the interpretability of deep learning has therefore become a hot topic in the field. This dissertation studies the interpretability of deep learning from the following two aspects:

1. To study the interpretability of neural network models, the Shapley value is used to calculate the contribution of each input feature, and the multi-order interactions of adversarial perturbations are modeled based on game theory. The results show that adversarial perturbations mainly affect high-order rather than low-order interactions; that is, adversarial attacks primarily target the complex, global information shared among many pixels in an image. The same conclusion is obtained on person re-identification datasets and models, which provides a valuable reference for designing more robust person re-identification networks.

2. To address the black-box nature and lack of interpretability of person re-identification models, which can lead to security issues, a method for generating salient feature maps on such models is proposed. The method measures the similarity of two images by the cosine distance between their feature vectors, obtains the weights of the corresponding channel features by back-propagating this similarity, and applies the weights to the features to obtain the salient feature map of the image. Experiments on the Market-1501 and DukeMTMC-reID datasets validate the feasibility of the method. Analysis of the results shows that the salient feature map provides a basis for model decisions and offers a plausible explanation both for correct matches by person re-identification models and for successful adversarial sample attacks. In addition, the sensitivity of person re-identification models to color information is found to make them vulnerable to adversarial attacks. These conclusions improve the interpretability of person re-identification models and suggest possible research directions for improving their adversarial robustness.
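For illustration of the first part, the m-order interaction between two input features i and j is commonly defined in the game-theoretic literature as the expectation, over contexts S of size m, of f(S∪{i,j}) − f(S∪{i}) − f(S∪{j}) + f(S). The sketch below is a minimal Monte Carlo estimator of this quantity; the callable `model_score`, the feature indexing, and the sample count are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def multi_order_interaction(model_score, n_features, i, j, order_m,
                            n_samples=100, rng=None):
    """Monte Carlo estimate of the order-m interaction between features i and j.

    model_score(mask) is assumed to return the model's scalar output when only
    the features selected by the boolean mask are kept (the rest replaced by a
    baseline value). This is a sketch, not the dissertation's code.
    """
    rng = np.random.default_rng() if rng is None else rng
    others = [k for k in range(n_features) if k not in (i, j)]
    total = 0.0
    for _ in range(n_samples):
        # Sample a context S of size m from the remaining features.
        S = rng.choice(others, size=order_m, replace=False)
        base = np.zeros(n_features, dtype=bool)
        base[S] = True
        with_i = base.copy();   with_i[i] = True
        with_j = base.copy();   with_j[j] = True
        with_ij = with_i.copy(); with_ij[j] = True
        # Delta f(i, j, S) = f(S∪{i,j}) - f(S∪{i}) - f(S∪{j}) + f(S)
        total += (model_score(with_ij) - model_score(with_i)
                  - model_score(with_j) + model_score(base))
    return total / n_samples
```

For the second part, the described procedure (cosine similarity between embeddings, back-propagation of the similarity to obtain channel weights, weighted aggregation of the feature maps) resembles a Grad-CAM-style computation driven by a similarity score. The following PyTorch sketch assumes a backbone that returns a convolutional feature map and an embedding obtained by global average pooling; all names are placeholders rather than the dissertation's code.

```python
import torch
import torch.nn.functional as F

def reid_saliency_map(backbone, query_img, gallery_img):
    """Similarity-driven saliency map for a re-ID model (a minimal sketch).

    backbone(x) is assumed to return a feature map of shape (1, C, H, W);
    the embedding is taken as its global average pooling.
    """
    query_img = query_img.requires_grad_(True)
    feat_q = backbone(query_img)        # (1, C, H, W)
    feat_g = backbone(gallery_img)      # (1, C, H, W)

    emb_q = F.adaptive_avg_pool2d(feat_q, 1).flatten(1)   # (1, C)
    emb_g = F.adaptive_avg_pool2d(feat_g, 1).flatten(1)   # (1, C)

    # Similarity of the two images: cosine similarity of their embeddings.
    sim = F.cosine_similarity(emb_q, emb_g, dim=1).sum()

    # Back-propagate the similarity to the query feature map.
    grads = torch.autograd.grad(sim, feat_q)[0]            # (1, C, H, W)

    # Channel weights: spatially averaged gradients.
    weights = grads.mean(dim=(2, 3), keepdim=True)         # (1, C, 1, 1)

    # Weighted combination of channels, rectified and normalized to [0, 1].
    cam = F.relu((weights * feat_q).sum(dim=1)).squeeze(0)  # (H, W)
    cam = cam / (cam.max() + 1e-8)
    return cam.detach()
```

In practice such a map would be upsampled to the input resolution and overlaid on the pedestrian image to visualize which regions drive the matching decision.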
Keywords/Search Tags:deep neural networks, person re-identification, interpretability, feature complexity, saliency maps