Graphs are a common form of unstructured, complex data in the real world, such as protein molecular structures and social networks. However, this irregular, disordered data structure, in which samples are not independent of one another, is poorly handled by traditional deep network models such as convolutional and recurrent neural networks. Inspired by convolution operations in computer vision, researchers proposed graph neural networks based on the spectral and spatial domains to process graph-structured data, and achieved good results on node classification and graph classification tasks. Subsequently, to address downstream tasks such as link prediction and node clustering when label information is missing, researchers proposed graph autoencoders based on graph convolution, whose architecture consists of a graph convolutional encoder and an additional decoder; these have been successfully applied to social networks, drug generation, and network optimization.

However, existing graph autoencoders tend to aggregate the feature information of neighboring nodes from a single view, i.e., they obtain node embeddings from the adjacency matrix alone. Meanwhile, to prevent overfitting, graph autoencoders tend to have shallow structures of no more than four graph neural network layers, so they learn hidden information only from the adjacency matrix and aggregate the features of low-order neighbors, failing to learn more comprehensive and richer feature information. In addition, existing graph autoencoders do not fully exploit the feature information in the embedded representation during the encoding stage; key feature information is under-extracted, which further limits the generalization ability of the model.

To address these problems, this thesis proposes two solutions based on graph representation learning, graph multi-view learning, and graph self-supervised learning:

(1) A graph autoencoder method based on multi-view learning. In addition to the original view, views based on global topology and feature similarity are constructed and encoded simultaneously. An attention mechanism is then used to compute the importance of each of the three views, yielding an information-rich embedding representation that aggregates both low-order and high-order neighbors. Finally, the feature matrix and the adjacency matrix are reconstructed simultaneously, with the reconstructed feature matrix serving as an auxiliary task that supports link prediction.

(2) A graph autoencoder method based on self-supervised learning. Graph contrastive learning has excelled in the field of graph self-supervised learning; however, it relies on negative samples and on training techniques such as momentum updates and stop-gradient to optimize the model. This thesis proposes a hybrid self-supervised graph encoder method that combines contrastive and generative learning to extract key feature information. This approach improves model performance without negative samples or additional training techniques.

In summary, this thesis proposes two solutions to the problems that graph autoencoder models cannot learn comprehensive and rich feature information and under-extract key feature information. Experimental results demonstrate that the proposed methods achieve good performance.
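The attention-based fusion of the three views described in solution (1) can be sketched as follows. This is a minimal NumPy illustration, not the thesis's actual implementation: the function name `fuse_views` and the projection parameters `w`, `b`, `q` are assumptions, and the scoring form (a shared projection followed by an attention vector, as in common graph attention-fusion schemes) stands in for whatever scoring the thesis uses.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_views(view_embeddings, w, b, q):
    """Attention-weighted fusion of per-view node embeddings (sketch).

    view_embeddings: list of V arrays, each (N, d) -- one embedding matrix
        per view (e.g. original graph, global topology, feature similarity).
    w (d, h), b (h,), q (h,): assumed shared projection and attention vector.
    Returns the fused (N, d) embedding and the (N, V) attention weights.
    """
    # Per-node importance score of each view: q^T tanh(z W + b)
    scores = np.stack(
        [np.tanh(z @ w + b) @ q for z in view_embeddings], axis=1)  # (N, V)
    alpha = softmax(scores, axis=1)                                 # (N, V)
    z_all = np.stack(view_embeddings, axis=1)                       # (N, V, d)
    fused = (alpha[..., None] * z_all).sum(axis=1)                  # (N, d)
    return fused, alpha
```

The fused embedding would then feed both decoders, reconstructing the adjacency matrix (e.g. via an inner-product decoder) and the feature matrix.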
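The negative-sample-free hybrid objective of solution (2) can be sketched as a weighted sum of an alignment term (pulling two view embeddings of the same node together via cosine similarity, with no negative pairs, momentum encoder, or stop-gradient) and a generative reconstruction term. The function name `hybrid_loss`, the weight `lam`, and the specific choice of cosine alignment plus mean-squared reconstruction are assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np

def hybrid_loss(z1, z2, x, x_rec, lam=0.5):
    """Hybrid contrastive + generative objective without negatives (sketch).

    z1, z2: (N, d) embeddings of the same nodes from two views/augmentations.
    x, x_rec: (N, f) original and reconstructed feature matrices.
    lam: assumed trade-off weight between the two terms.
    """
    # Contrastive (alignment) term: 1 - mean cosine similarity, in [0, 2].
    z1n = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2n = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    align = 1.0 - (z1n * z2n).sum(axis=1).mean()
    # Generative term: mean-squared feature reconstruction error.
    recon = ((x - x_rec) ** 2).mean()
    return lam * align + (1.0 - lam) * recon
```

When the two embeddings coincide and the features are reconstructed exactly, the loss is zero; either a misaligned pair of views or a poor reconstruction increases it.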