Deep learning, driven by massive data and powerful computation, has achieved remarkable results across many application areas thanks to its strong representation capability. Graphs are a highly expressive data type with a wide range of applications, but traditional deep learning models cannot be applied to graph data directly, because graphs lack the regular Euclidean structure of images and text. Researchers have therefore proposed graph deep learning models. Recent research has found that graph deep learning models inherit a weakness of other deep models: their performance degrades under adversarial attacks. Graph data often involve sensitive information about individuals, organizations, or social networks; if such data are attacked and exploited maliciously, serious privacy leakage and security problems follow. Studying adversarial attacks on graph neural networks is therefore one of the urgent problems in graph deep learning: understanding the vulnerability of graph neural networks through attack methods is important for improving model robustness, protecting privacy and security, and exploring potential threats. However, related studies are still at an early stage and face the following difficulties: (1) most attacks cannot be carried out when the attacker cannot manipulate all nodes; (2) existing graph backdoor attacks do not transfer to node classification tasks, because the coupling between nodes is tighter than that between graphs; (3) existing graph injection attacks focus only on attack effectiveness and ignore the imperceptibility of the fake nodes' features and structure, so the attack is easily exposed; (4) higher-order graphs have complex interactions, and existing graph attacks are difficult to apply to studying the robustness of higher-order graph neural networks. To address these difficulties, this thesis investigates adversarial attacks on graph deep learning models, focusing on the robustness of graph neural networks and hypergraph neural networks. The main results are as follows:

1. For graph adversarial attacks under extreme attack scenarios, this thesis proposes a single-node structure attack. Whereas traditional gradient attacks require access to information between all pairs of nodes, our attack only needs the training gradients between a single attacking node and the remaining nodes. A single-node attack, however, easily falls into local optima, making the attack inefficient; this thesis therefore adopts a sampled candidate-set strategy to address this problem.

2. For the problem that graph backdoor attacks cannot be implemented efficiently on node classification tasks, this thesis designs a graph backdoor attack based on feature triggers. We find that embedding feature triggers alone destroys the similarity structure of the original node feature space, so the attack cannot distinguish poisoned nodes from clean nodes well. This thesis therefore adaptively adjusts the graph structure so that neighboring nodes in the perturbed graph have similar features, which improves the accuracy of the backdoor attack.

3. For the problem that graph injection attacks are easily exposed, this thesis designs an imperceptible graph injection attack, in which the features and links of fake nodes are generated by normal-distribution sampling and a mask-learning mechanism, respectively. Because the high flexibility of graph injection attacks disrupts the homophily distribution of the original graph, this thesis applies a homophily-imperceptibility constraint to adjust the graph structure and node features, which enhances the invisibility of the attack.

4. For the robustness of higher-order graph neural networks, this thesis studies a momentum-gradient-based hypergraph feature attack and a derivative-graph-based hypergraph structure attack. (1) Based on the difference between the training processes of graph neural networks and hypergraph neural networks, this thesis proposes an untargeted attack launched before hypergraph modeling. The attacked node features are obtained by a momentum gradient algorithm; discrete and continuous features are modified using a direct-inversion mechanism and a gradient-sign mechanism, respectively. (2) This thesis proposes a hypergraph structure attack based on the derivative graph. The derivative graph quantifies node similarity in the hypergraph: a larger derivative-graph value indicates lower similarity between nodes and hence lower performance of the classification model. The attack first trains the hypergraph neural network to obtain the gradient of the incidence matrix, modifies the hypergraph structure according to a gradient rule to generate an attack candidate set, and then selects the candidate with the largest derivative-graph value as the optimal perturbed hypergraph.

Results on multiple graph-based classification tasks show that our models achieve high-performance attacks across different datasets and scenarios. The work in this thesis contributes to building more secure and reliable neural networks and promotes research and innovation in related fields.
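To make the feature-attack step of contribution 4 concrete, the following is a minimal sketch of a momentum-gradient feature perturbation. It is an illustrative assumption, not the thesis implementation: the function name `momentum_feature_attack`, the L1 normalization, the step sizes, and the bit-flip rule are all choices made here for clarity, combining a gradient-sign step for continuous features with direct inversion (bit flipping) for discrete ones.

```python
import numpy as np

def momentum_feature_attack(x, loss_grad, discrete_mask,
                            steps=10, epsilon=0.1, decay=0.9):
    """Perturb one node's features with an accumulated (momentum) gradient.

    x             : (d,) feature vector of the attacked node
    loss_grad     : callable returning d(loss)/d(x) at the current point
    discrete_mask : boolean (d,) mask, True where a feature is binary
    """
    x_adv = x.astype(float).copy()
    g = np.zeros_like(x_adv)              # momentum accumulator
    for _ in range(steps):
        grad = loss_grad(x_adv)
        # L1-normalize the fresh gradient before accumulating it,
        # as is common in momentum-based attacks.
        g = decay * g + grad / (np.abs(grad).sum() + 1e-12)
        # Continuous features: gradient-sign step (ascent on the loss).
        x_adv = np.where(discrete_mask, x_adv, x_adv + epsilon * np.sign(g))
    # Discrete features: direct inversion -- flip a bit only when the
    # accumulated gradient indicates the flip would increase the loss.
    flip_up = discrete_mask & (g > 0) & (x_adv < 0.5)
    flip_down = discrete_mask & (g < 0) & (x_adv > 0.5)
    x_adv[flip_up], x_adv[flip_down] = 1.0, 0.0
    return x_adv
```

In the setting described above, such a perturbation would be applied to node features before hypergraph modeling, so the poisoned features propagate through whatever hypergraph structure is built afterwards.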