
Research Of Interpretability Technology For Graph Neural Network

Posted on: 2024-08-30  Degree: Master  Type: Thesis
Country: China  Candidate: Y Z Liu  Full Text: PDF
GTID: 2530306941484094  Subject: Cyberspace security
Abstract/Summary:
Graph neural networks are now the state-of-the-art method for graph representation learning in many applications, such as molecular property prediction, social networks, recommendation systems, and knowledge graphs. However, due to their highly asymmetric structure and complex propagation rules, interpretability methods for graph neural networks face many challenges. Existing interpretability methods mainly target static graphs, but in the real world graphs usually evolve, with edges and nodes added and removed. In addition, a graph neural network may learn sensitive information (such as gender or age) from the training data, causing the model to lose fairness. The lack of interpretability and fairness hinders the use of graph neural networks in application fields related to fairness, privacy, and security.

To address these problems, this thesis first proposes an interpretability method for graph neural networks on dynamic graphs. From the perspective of differential geometry, the method regards the prediction probability distribution of the graph neural network as a low-dimensional manifold, embeds it into a high-dimensional Euclidean space, and reparameterizes curves on the manifold. To find the critical paths that serve as the explanation, we design a convex objective function and solve the resulting convex optimization problem. On nine graph datasets covering node classification, link prediction, and graph classification, we demonstrate that the proposed approach achieves better sparsity, fidelity, and intuitiveness than the state of the art under various graph-evolution scenarios.

In addition, this thesis proposes an interpretability method for fair graph learning. We define and analyze a novel upper bound on group fairness, which is used to optimize the adjacency matrix for fairness without significantly harming prediction accuracy. To understand why the fairness of a graph neural network changes, node-level and path-level explanation methods are further proposed. Important nodes are selected through stratified sampling and linear programming to explain the fairness of the graph neural network, and important paths are selected by solving a convex optimization problem to explain the probability changes of the key nodes. On seven graph datasets, we demonstrate that the novel upper bound achieves more efficient fairness-accuracy trade-offs and that the explanation methods can clearly pinpoint where the trade-off is improved.
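The group fairness the abstract refers to is commonly quantified as the gap in positive-prediction rates between demographic groups (demographic/statistical parity). A minimal sketch of that metric is shown below; the function name and toy data are illustrative and not taken from the thesis:

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between two groups
    defined by a binary sensitive attribute (e.g. gender)."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate0 = y_pred[sensitive == 0].mean()  # positive rate in group 0
    rate1 = y_pred[sensitive == 1].mean()  # positive rate in group 1
    return abs(rate0 - rate1)

# toy node predictions and a binary sensitive attribute
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
s      = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, s))  # 0.5
```

A fairness-aware method such as the one described above would aim to drive this gap toward zero (here, by adjusting the adjacency matrix) while preserving prediction accuracy.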
Keywords/Search Tags: graph neural network, fairness, evolving graphs, explanation