As a data structure that models a set of objects and their relationships, the graph is widely used to represent data from many domains, ranging from social science (e.g., social networks) and natural science (e.g., physical systems and protein interaction networks) to knowledge graphs and other research areas. To model and analyze such unstructured non-Euclidean data, machine learning algorithms for graph data have been proposed to perform representation learning on graphs. Typical graph representation learning consists of two tasks: node representation learning and graph representation learning. The former aims to jointly model node features and topological structure to obtain expressive low-dimensional dense vectors, while the latter further maps all node information to a representation vector of the entire graph on the basis of node embeddings. With the rapid development of deep neural networks in recent years, end-to-end deep learning algorithms, i.e., graph neural networks, have been applied to learning node and graph representations and have attracted considerable attention from both academia and industry. Existing graph neural network methods broadly follow the message passing framework, which consists of a message passing phase and a readout phase: the message passing phase aggregates the representations of each node's neighbors to generate its new representation, and the readout phase then captures global graph information from the node representation space. Designing reasonable and efficient message passing and readout functions is a key problem in graph representation learning. Hence, this thesis focuses on these two major tasks of graph representation learning, conducts systematic research, and proposes a variety of effective message passing functions and architectures for modeling nodes and graphs. To summarize, the main research contents of this thesis are as follows:

● This thesis proposes a structured multi-head self-attention mechanism to learn graph representations. The proposed mechanism comprises three self-attention components: node-level, layer-level, and graph-level self-attention. To make full use of the graph's information, the node-level self-attention first aggregates neighbor node features in a scaled dot-product manner; the layer-level and graph-level self-attention then serve as the readout module, assigning attention weights that measure the importance of different nodes and layers to the model's output, to obtain the final graph representation embedding.

● This thesis proposes a propagation-enhanced message passing framework to learn graph representations. Addressing the issue that existing readout approaches focus only on the graph representation of the current step and pay no attention to preceding steps, the proposed algorithm first introduces a simple but efficient propagation-enhanced extension, self-connected neural message passing, which aggregates the node representations of the current step with the graph representation of the previous step. To further enlarge the receptive field of graph neural networks, we also propose densely self-connected neural message passing, which connects each layer to every other layer in a feed-forward fashion. Both architectures are applied at every iteration step, and the resulting graph representation is used as input to all subsequent steps. Moreover, both architectures can be combined with existing graph neural networks to achieve more effective graph representation learning.

● This thesis proposes a Markov clustering regularized multi-hop graph neural network for learning graph representations. Existing methods suffer from two main limitations: computational inefficiency and the limited representation ability of multi-hop neighbors. For the former, an iterative approach is
utilized to approximate the power of a complex adjacency matrix, achieving linear computational complexity. For the latter, regularized Markov clustering is introduced to regularize the flow matrix (i.e., the adjacency matrix) at each iteration step. The proposed algorithm consists of a node embedding module, which learns a multi-hop node representation vector, and a graph embedding module, which aggregates the node embeddings to generate a graph representation vector.

● This thesis proposes a hierarchical layer aggregation strategy and neighbor normalization for training deep graph neural networks to learn node and graph representations. To alleviate the difficulty of training, a deep hierarchical layer aggregation strategy is introduced: a block-based layer aggregation combines representations from different layers and transfers the output of each block to the subsequent block, so that deep models can be trained easily. Additionally, a novel normalization strategy, neighbor normalization, is developed to stabilize the training process: it normalizes the neighbors of each node to further address the training issue. Our analysis reveals that neighbor normalization smooths the gradient of the loss function, i.e., adding neighbor normalization makes the optimization landscape much easier to navigate.

● This thesis proposes a mutual information maximization method across the feature and topology views for learning node representations. Existing methods are typically effective at capturing information from the topology view but ignore the feature view. To circumvent this issue, a novel approach based on mutual information maximization across the feature and topology views is proposed. Specifically, a multi-view representation module is first employed to capture both local and global information across the feature and topology views. To model the information shared by the feature and topology spaces, a common representation module using mutual information maximization and reconstruction loss minimization is then developed. To explicitly encourage diversity, a disagreement regularization is introduced to enlarge the distance between representations from the same view.

To verify the effectiveness of the proposed algorithms, experiments are performed on popular benchmark datasets for node representation and graph representation. The results show that the deep-neural-network-based graph representation learning models proposed in this thesis achieve significant improvements over existing node and graph representation learning baselines, further demonstrating the effectiveness and applicability of the proposed algorithms.
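The message passing framework that underlies all of these contributions can be sketched in a few lines. This is a minimal illustration only, assuming mean-pooling aggregation and sum readout (illustrative choices, not the specific functions studied in this thesis):

```python
import numpy as np

def message_passing_step(H, A, W):
    """One message passing step: each node aggregates its neighbors'
    representations (mean pooling here) and applies a shared linear map."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)  # neighbor counts
    neighbor_mean = (A @ H) / deg
    return np.tanh(neighbor_mean @ W)                  # updated node representations

def readout(H):
    """Readout phase: collapse all node representations into one graph vector."""
    return H.sum(axis=0)

# Toy graph: 3 nodes on a path (0-1-2), 4-dimensional features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 4))

H = message_passing_step(H, A, W)  # message passing phase
g = readout(H)                     # graph-level representation
print(g.shape)  # (4,)
```

Stacking several such steps enlarges each node's receptive field by one hop per step, which is why the design of both the aggregation and the readout function matters.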
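The node-level self-attention of the first contribution aggregates neighbor features in a scaled dot-product manner. A hedged single-head sketch follows, where the projection matrices `Wq`, `Wk`, `Wv` and the masking scheme are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def node_level_attention(H, A, Wq, Wk, Wv):
    """Scaled dot-product attention restricted to each node's neighbors:
    non-neighbor scores are masked out before the softmax."""
    d = Wk.shape[1]
    scores = (H @ Wq) @ (H @ Wk).T / np.sqrt(d)    # pairwise attention logits
    scores = np.where(A > 0, scores, -1e9)         # keep only graph edges
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over neighbors
    return weights @ (H @ Wv)                      # attention-weighted aggregation

rng = np.random.default_rng(1)
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
H = rng.normal(size=(3, 4))
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
Z = node_level_attention(H, A, Wq, Wk, Wv)
print(Z.shape)  # (3, 4)
```

The layer-level and graph-level self-attention of the structured mechanism would apply the same weighting idea across layers and nodes at readout time rather than across neighbors.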
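The self-connected neural message passing of the second contribution feeds the previous step's graph representation back into the current step. A sketch under stated assumptions — sum readout, mean neighbor aggregation, and concatenation as the combine operator are all illustrative choices:

```python
import numpy as np

def self_connected_step(H, A, g_prev, W):
    """Self-connected neural message passing (sketch): each node combines
    the mean of its neighbors with the previous step's graph representation."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
    neighbor_mean = (A @ H) / deg
    # Broadcast the previous graph vector to every node, then concatenate.
    g_tiled = np.tile(g_prev, (H.shape[0], 1))
    H_new = np.tanh(np.concatenate([neighbor_mean, g_tiled], axis=1) @ W)
    g_new = H_new.sum(axis=0)      # readout; feeds the next step
    return H_new, g_new

rng = np.random.default_rng(2)
A = np.array([[0, 1], [1, 0]], dtype=float)
H = rng.normal(size=(2, 4))
g = np.zeros(4)                    # initial graph representation
W = rng.normal(size=(8, 4))        # maps the concatenated (4+4)-dim input to 4
for _ in range(3):                 # graph vector of step t-1 enters step t
    H, g = self_connected_step(H, A, g, W)
print(H.shape, g.shape)  # (2, 4) (4,)
```

The densely self-connected variant would instead pass each step's graph vector to every later step rather than only the next one.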
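The multi-hop contribution avoids materializing powers of the adjacency matrix by iterating sparse products. A linear-cost sketch, assuming simple row normalization (the thesis additionally regularizes the flow matrix with Markov clustering at each step, which is omitted here):

```python
import numpy as np

def multi_hop_propagate(H, A, K):
    """Approximate K-hop propagation by repeated products A @ H, which costs
    O(K * |E| * d) instead of forming the dense matrix power A^K."""
    A_norm = A / np.maximum(A.sum(axis=1, keepdims=True), 1)
    Z = H
    for _ in range(K):
        Z = A_norm @ Z  # one additional hop of propagation per iteration
    return Z

rng = np.random.default_rng(4)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))
Z = multi_hop_propagate(H, A, K=3)
print(Z.shape)  # (4, 3)
```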
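One plausible reading of the neighbor normalization used in the fourth contribution is to standardize each node's aggregated neighborhood before the update; this interpretation, and the per-feature-dimension statistics, are assumptions rather than the thesis's exact formulation:

```python
import numpy as np

def neighbor_normalize(H, A, eps=1e-5):
    """Normalize each node's aggregated neighbor features to zero mean and
    unit variance across the feature dimension (a sketch of the idea)."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
    agg = (A @ H) / deg                       # mean over neighbors
    mu = agg.mean(axis=1, keepdims=True)
    sigma = agg.std(axis=1, keepdims=True)
    return (agg - mu) / (sigma + eps)

rng = np.random.default_rng(3)
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
H = rng.normal(size=(3, 4))
Z = neighbor_normalize(H, A)
print(np.allclose(Z.mean(axis=1), 0))  # True: per-node zero mean
```

Like batch or layer normalization, keeping these statistics controlled is what smooths the loss gradient and eases optimization of deep stacks.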