In recent years, Graph Neural Networks have gained popularity in machine learning due to their powerful ability to extract node features and graph topology information. Compared to traditional neural networks, Graph Neural Networks excel at handling graph data, providing unique advantages in analyzing and mining a wide variety of graphs and achieving superior performance in graph-related domains such as social network analysis, drug research and development, recommender systems, and natural language processing. The functioning of Graph Neural Networks relies on the assumption that sufficient information has been aggregated for every node during Message Passing. In reality, however, node degrees in many graphs follow a power-law distribution, so most nodes are low-degree nodes with few and biased neighbors. Consequently, Graph Neural Networks struggle to attain optimal performance in downstream tasks. It is therefore paramount to narrow the gap between low-degree and high-degree node embeddings in order to improve the performance of Graph Neural Networks.

This paper proposes new information augmentation methods for Graph Neural Networks that capture additional information when generating low-degree node embeddings, so that these embeddings perform comparably to high-degree node embeddings in downstream tasks. We first analyze the information potentially missing from the many low-degree nodes in a graph and argue that, compared to high-degree nodes, low-degree nodes suffer from severe information loss in both node identity and neighborhood information. The research involves two primary aspects: (i) designing and implementing the OGT information augmentation method, which focuses on graph topology optimization, and (ii) designing and implementing AIC-GNN, a method that approaches the issue from the perspective of information completion. The two contributions are as follows.

(1) We introduce an information augmentation method called OGT that trains Graph Neural Networks on optimized graphs. OGT consists of two main steps: first, topological optimization is performed on the input graph, using the inherent information in the nodes and the model's prediction results to enhance the accessibility of low-degree nodes during Message Passing; second, the optimized graph is used to train the model, so that each low-degree node embedding incorporates additional information from similar nodes. OGT is applied to existing models without modifying their architecture or proposing a new Graph Neural Network model, acting much like a data pre-processing step. Comparisons with various state-of-the-art methods on benchmark citation datasets show that OGT improves the ability of low-degree nodes with limited neighbors to access more information, giving Graph Neural Networks greater expressive power.
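To make the topology-optimization step more concrete, the following is a minimal sketch of one way such an augmentation could be realized in PyTorch; the function and parameter names (augment_low_degree_nodes, degree_threshold, k) are illustrative assumptions and are not taken from the paper. The sketch connects each low-degree node to its most similar nodes, where similarity is computed from node features and the model's current predictions, before the augmented graph is used for training.

```python
import torch
import torch.nn.functional as F

def augment_low_degree_nodes(edge_index, features, preds, degree_threshold=3, k=2):
    """Hypothetical sketch of OGT-style topology optimization: connect each
    low-degree node to its k most similar nodes, where similarity combines
    raw node features and the model's soft predictions."""
    num_nodes = features.size(0)

    # Node degrees from the (assumed undirected) edge list of shape (2, E).
    deg = torch.bincount(edge_index[0], minlength=num_nodes)
    low_degree = (deg < degree_threshold).nonzero(as_tuple=True)[0]

    # Cosine similarity over concatenated features and predictions.
    sig = F.normalize(torch.cat([features, preds], dim=1), dim=1)
    sim = sig[low_degree] @ sig.t()                            # (n_low, num_nodes)
    sim[torch.arange(low_degree.size(0)), low_degree] = -1.0   # exclude self-matches

    # New edges: each low-degree node <-> its k most similar nodes.
    topk = sim.topk(k, dim=1).indices
    src = low_degree.repeat_interleave(k)
    dst = topk.reshape(-1)
    new_edges = torch.cat([torch.stack([src, dst]), torch.stack([dst, src])], dim=1)
    return torch.cat([edge_index, new_edges], dim=1)
```

The augmented edge list returned here would simply replace the original one during training, which matches the pre-processing character of OGT described above.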
(2) We propose the Adversarial Information Completion Graph Neural Network (AIC-GNN) as an information augmentation method. Instead of modifying the graph data, AIC-GNN uses a Generative Adversarial Network-based approach to directly mine and complete the information of low-degree node embeddings in the embedding space. Inspired by Generative Adversarial Networks, AIC-GNN includes a Graph Information Generator that learns the mapping from a node's local context information to the information missing from its embedding. Furthermore, a Graph Embedding Discriminator distinguishes between ideal and completed node embeddings, and adversarial training between the two further enhances the expressive power of the model. Meanwhile, a novel dual embedding alignment mechanism is introduced to guide the Graph Information Generator toward predicting high-quality missing information. Finally, we demonstrate the effectiveness of AIC-GNN on four benchmark datasets.
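As a rough illustration of the generator/discriminator interplay described above, here is a minimal PyTorch sketch. The class and function names (GraphInfoGenerator, GraphEmbDiscriminator, adversarial_step) and the choice to add the generated information residually to the low-degree embedding are assumptions made for the sketch, and the dual embedding alignment term is omitted.

```python
import torch
import torch.nn as nn

class GraphInfoGenerator(nn.Module):
    """Hypothetical generator: maps a node's local-context embedding to the
    information presumed missing from a low-degree node's embedding."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, context_emb):
        return self.net(context_emb)

class GraphEmbDiscriminator(nn.Module):
    """Hypothetical discriminator: scores whether an embedding looks like an
    ideal (high-degree-like) embedding or a completed low-degree one."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, emb):
        return self.net(emb)

def adversarial_step(gen, disc, g_opt, d_opt, low_emb, context_emb, ideal_emb):
    """One adversarial update: completed embedding = low-degree embedding plus
    the generator's predicted missing information (a sketch-level assumption)."""
    bce = nn.BCEWithLogitsLoss()

    # Discriminator step: real = ideal embeddings, fake = completed embeddings.
    completed = low_emb + gen(context_emb)
    d_loss = bce(disc(ideal_emb), torch.ones(ideal_emb.size(0), 1)) \
           + bce(disc(completed.detach()), torch.zeros(completed.size(0), 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make completed embeddings indistinguishable from ideal ones.
    completed = low_emb + gen(context_emb)
    g_loss = bce(disc(completed), torch.ones(completed.size(0), 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

In the full method, the generator's output would additionally be constrained by the dual embedding alignment mechanism mentioned above, so that completed low-degree embeddings remain consistent with the rest of the embedding space.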