Neural networks (NNs) have proved effective in many machine learning tasks. For example, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have achieved great success on Euclidean-domain data, in tasks such as image recognition, machine translation, and speech recognition. Modeling and representing non-Euclidean-domain data, such as complex networks with geometric information and structural manifolds, is more challenging; the most common approach is to map such data into a Euclidean space. Graph neural networks (GNNs) are representative of this line of work and have achieved state-of-the-art performance on various graph-based tasks. Based on spatial graph convolution, a GNN learns the representation of each node by recursively aggregating the representations of the node itself and its neighbors.

Although GNNs perform well in graph analysis, they still face problems in robustness and generalization. First, like other deep neural networks, GNNs are vulnerable to adversarial attacks: by adding small malicious perturbations to the input node features or the adjacency matrix, an attacker can mislead a GNN into making wrong predictions. Second, because GNNs optimize a supervised loss on the labeled data, they are prone to over-fitting, while large-scale graphs usually contain many out-of-distribution test nodes.

Among recently emerging techniques, adversarial training has been shown to benefit both robustness and generalization. In graph analysis, however, existing adversarial-training methods usually focus on enhancing robustness rather than improving generalization. Inspired by work in computer vision that uses adversarial training to improve generalization, we therefore propose a new co-adversarial training framework to improve the generalization ability of GNNs. Specifically, we first analyze the close connection between the loss landscape and the generalization of GNNs, and then propose co-adversarial training, which alternately flattens the weight and feature loss landscapes; the two kinds of adversarial training complement each other in this iterative flattening. In addition, we divide the training process into two stages: in the first stage, we train with the standard cross-entropy loss to ensure fast convergence of the GNN; in the second stage, we apply our co-adversarial training to keep the model from falling into sharp local minima. Extensive experiments on five real-world datasets show that our co-adversarial training framework improves the generalization performance of GNNs.

Building on this, we treat adversarial training as data augmentation to improve the performance and robustness of graph contrastive learning. Specifically, we analyze the robustness vulnerability of vanilla graph contrastive learning methods and motivate the use of adversarial view generation. We then propose a method that generates adversarial views through adversarial weight perturbation: by perturbing the weights of the encoder, we generate an adversarial view of the hidden representation. By explicitly maximizing the contrastive loss, adversarial contrastive learning learns the key information relevant to the downstream task and discards irrelevant, redundant information. The use of adversarial training also improves the robustness of graph contrastive learning. Compared with baseline methods on node classification across multiple datasets, our adversarial contrastive learning method shows significant improvements on both clean and attacked data, which illustrates its effectiveness in improving the representation ability and robustness of graph contrastive learning.
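The neighbor-aggregation step underlying spatial graph convolution can be sketched as a single mean-aggregation layer. This is a minimal illustrative toy in numpy, not the actual architecture used in this work; the function name and normalization choice are assumptions.

```python
import numpy as np

def gcn_layer(A, H, W):
    # One spatial graph-convolution step: each node averages its own
    # representation together with its neighbors', then applies a
    # linear map followed by a ReLU nonlinearity.
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # mean (degree) normalization
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)
```

Stacking such layers gives each node a receptive field covering its k-hop neighborhood, which is the recursive aggregation referred to above.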
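The two-stage schedule, which alternates weight-space and feature-space perturbations after a plain warm-up phase, can be sketched on a toy linear model with a squared-error loss. The step sizes, radii, and the SAM-style weight ascent and FGSM-style feature perturbation below are illustrative assumptions; the framework itself applies these ideas to GNN weights and node features.

```python
import numpy as np

def loss(w, X, y):
    return 0.5 * np.mean((X @ w - y) ** 2)

def grad_w(w, X, y):
    return X.T @ (X @ w - y) / len(y)

def grad_x(w, X, y):
    # gradient of the squared-error toy loss w.r.t. the inputs X
    return np.outer(X @ w - y, w) / len(y)

def co_adversarial_train(X, y, epochs=200, warmup=100,
                         lr=0.1, rho_w=0.05, rho_x=0.05):
    w = np.zeros(X.shape[1])
    for t in range(epochs):
        if t < warmup:
            # Stage 1: standard training for fast convergence.
            w -= lr * grad_w(w, X, y)
        elif t % 2 == 0:
            # Stage 2a: ascend in weight space, then descend from the
            # perturbed point -- flattens the weight loss landscape.
            g = grad_w(w, X, y)
            w_adv = w + rho_w * g / (np.linalg.norm(g) + 1e-12)
            w -= lr * grad_w(w_adv, X, y)
        else:
            # Stage 2b: perturb the features adversarially, then train on
            # the perturbed inputs -- flattens the feature loss landscape.
            X_adv = X + rho_x * np.sign(grad_x(w, X, y))
            w -= lr * grad_w(w, X_adv, y)
    return w
```

The two perturbation types alternate per epoch, mirroring the complementary flattening of the feature and weight landscapes described above.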
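The adversarial-view idea, perturbing the encoder weights in the direction that most increases the contrastive loss, can be sketched as follows. The simplified one-directional NT-Xent loss, the linear encoder, and the finite-difference gradient are all assumptions made so the toy stays self-contained; in practice the gradient would come from automatic differentiation.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    # Simplified one-directional NT-Xent: each node's two views are
    # positives; all other nodes in the second view are negatives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))

def adversarial_view(W, X, rho=0.05, eps=1e-5):
    # Estimate the gradient of the contrastive loss w.r.t. the encoder
    # weights by finite differences, then take one ascent step of radius
    # rho: the perturbed encoder produces the adversarial view of the
    # hidden representation.
    base = nt_xent(X @ W, X @ W)
    g = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp = W.copy()
            Wp[i, j] += eps
            g[i, j] = (nt_xent(X @ W, X @ Wp) - base) / eps
    delta = rho * g / (np.linalg.norm(g) + 1e-12)
    return X @ (W + delta)
```

Training the encoder to minimize the loss against this loss-maximizing view is what forces it to retain task-relevant information while discarding redundancy.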