In recent years, Graph Neural Networks (GNNs) have received great attention in the field of data mining. They have achieved great success in many tasks related to graph representation learning, such as node classification, community detection, and link prediction. Despite this progress, most GNNs suffer from over-smoothing and poor robustness. The main idea of GNNs is to learn expressive node representations through a message passing mechanism, which involves two key operations: 1) feature transformation, inherited from traditional neural networks, and 2) neighborhood aggregation, which updates the representation of a node by aggregating the representations of its neighbors. This mechanism can lead to both over-smoothing and poor robustness. Regarding over-smoothing: during message passing, the aggregated neighborhood representation is combined with the current node's representation to form an updated node representation; after this process is repeated several times, different nodes end up with similar representations, making it difficult to distinguish between different classes of nodes. Regarding robustness: the message passing mechanism forces each node to depend heavily on its neighbors, which makes nodes vulnerable to being misled by data noise and thus makes GNNs vulnerable to adversarial interference. Under graph adversarial attacks, the neighborhood aggregation operation incorporates the representations of noisy nodes into the representations of target nodes, so the learned node representations perform poorly in downstream tasks. As a result, GNNs are usually not robust in the face of graph adversarial attacks. This paper aims to address the problems of over-smoothing and poor robustness in GNNs through the following three approaches.

First, an Enhanced Attribute-aware and Structure-constrained Graph Convolutional Network (EAS-GCN) is proposed. EAS-GCN uses a degree prediction module to incorporate local graph structure information into the autoencoder-specific representations. A transfer mechanism is then designed to pass each autoencoder-specific representation to the corresponding Graph Convolutional Network (GCN) layer. The autoencoder mainly assists the GCN in learning enhanced attribute information, while the node degree prediction module assists the GCN in learning local structure information. In addition, we theoretically analyze how the autoencoder mitigates the over-smoothing problem in GCN. Extensive experimental results show that EAS-GCN achieves high node classification accuracy and better mitigates over-smoothing.

Second, a simple and effective Network Embedding framework Without Neighborhood Aggregation (NE-WNA) is proposed. NE-WNA removes the neighborhood aggregation operation from the message passing mechanism: it takes only node features as input and learns node representations with a basic autoencoder. To learn structural information, an Enhanced Neighboring Contrastive (ENContrast) loss is designed to incorporate the graph structure into the node representations. In the representation space, the ENContrast loss encourages lower-order neighbors to be closer to the target node than higher-order neighbors. Extensive experimental results show that NE-WNA achieves high accuracy on node classification tasks and is robust to graph adversarial attacks.
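As a rough illustration of this kind of objective (a minimal sketch, not the authors' ENContrast implementation), the PyTorch snippet below weights k-hop neighbors by 1/k so that lower-order neighbors are pulled closer to the target node than higher-order ones; the dense adjacency input, the 1/k hop weighting, and the temperature are assumptions made for the example.

    import torch
    import torch.nn.functional as F

    def neighbor_contrastive_loss(z, adj, num_hops=2, tau=0.5):
        # z:   (n, d) node embeddings produced by the feature encoder
        # adj: (n, n) dense binary adjacency matrix without self-loops
        z = F.normalize(z, dim=1)                    # unit-length embeddings
        sim = torch.exp(z @ z.t() / tau)             # temperature-scaled similarities
        n = adj.size(0)
        reached = torch.eye(n, device=adj.device)    # nodes already covered (self)
        walk = adj.clone()                           # k-step walk counts
        weights = torch.zeros_like(adj)
        for k in range(1, num_hops + 1):
            new = ((walk > 0).float() - reached).clamp(min=0)  # first reached at hop k
            weights = weights + new / k                        # closer hops weighted more
            reached = ((reached + new) > 0).float()
            walk = walk @ adj
        pos = (weights * sim).sum(dim=1)             # weighted similarity to neighbors
        denom = sim.sum(dim=1) - sim.diagonal()      # all other nodes act as the contrast
        return -torch.log(pos / denom + 1e-12).mean()

Here the denominator treats all non-self nodes as the contrastive reference, in the spirit of InfoNCE; the actual NE-WNA loss may define negatives and hop weights differently.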
Finally, a graph representation learning algorithm via Adaptive Multi-layer Neighborhood Diffusion Contrast (AM-NDC) is proposed. The algorithm removes the message passing mechanism and uses a feature encoder (e.g., an MLP) to learn node embeddings. Since a feature encoder can only learn attribute information from the nodes themselves, a Neighborhood Diffusion Contrast (NDC) loss is designed to efficiently incorporate graph structure information into the node embeddings by forcing the node embeddings of different layers and their multi-hop neighbor embeddings to approach each other in the representation space. In addition, the node embeddings of different layers are considered to contain multi-scale information; to fully integrate this information, an adaptive aggregation mechanism is designed to aggregate the node embeddings of different layers. Experimental results on five benchmark datasets show that AM-NDC achieves state-of-the-art results compared with existing baseline methods, better alleviates over-smoothing, and improves the robustness of the model.
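One plausible reading of the adaptive aggregation step is a per-node attention over the embeddings produced at different layers; the sketch below is an illustrative assumption, not AM-NDC's exact mechanism, and simply fuses per-layer embeddings with learned softmax weights.

    import torch
    import torch.nn as nn

    class AdaptiveLayerFusion(nn.Module):
        # Fuse per-layer node embeddings with learned, per-node softmax weights
        # instead of keeping only the final layer's output.
        def __init__(self, dim):
            super().__init__()
            self.score = nn.Linear(dim, 1)  # scores each layer's embedding per node

        def forward(self, layer_embeddings):
            # layer_embeddings: list of L tensors, each of shape (n, dim)
            h = torch.stack(layer_embeddings, dim=1)     # (n, L, dim)
            alpha = torch.softmax(self.score(h), dim=1)  # (n, L, 1) per-node layer weights
            return (alpha * h).sum(dim=1)                # (n, dim) fused embedding

For example, with three encoder layers one would call fuse = AdaptiveLayerFusion(64) and z = fuse([z1, z2, z3]) to obtain a single fused multi-scale embedding per node.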