As a data structure that is ubiquitous in nature, graphs consist mainly of nodes and the edges between them. Because a graph is non-Euclidean data, each node has a different number of neighbors and their ordering is undefined, which makes learning graph representations difficult. Graph learning is a branch of machine learning specializing in feature extraction from graph data. Graph Neural Networks (GNNs) represent graph nodes as low-dimensional dense feature vectors while retaining structural information, which has greatly advanced graph representation learning. In industrial applications of artificial intelligence, practical problems such as the lack of prior information and the high cost of label annotation are common. Self-supervised learning methods, which mine supervisory signals from the data itself instead of relying on labels, have therefore made great progress. Contrastive learning, one of the self-supervised learning paradigms, relies mainly on data augmentation and the sampling of positive and negative examples to learn effective features, so that it can distinguish instances by measuring the similarity between positive and negative samples. Graph contrastive learning applies this idea to graph representation learning, and most current graph contrastive learning frameworks are based on graph augmentation. However, graph data are usually abstracted from a variety of completely different real-world structures that differ greatly from one another. Moreover, graph properties are easily corrupted by data-sensitive random augmentation methods. These two characteristics of graph data lead to low-quality representations and to inefficient manual trials over a large number of augmentation methods in augmentation-based graph contrastive learning frameworks, which greatly limits the development of graph contrastive learning. Therefore, in this paper we combine the characteristics of graph convolutional neural networks, propose the concept of layer mutual information, and design several network models that avoid the defects and shortcomings of augmentation-based graph contrastive learning. The core contributions of this paper are as follows:

· Firstly, we propose the concept of layer mutual information (MI), which considers the correlations between the outputs of different layers in a GNN for graph contrastive learning (CL). Compared with MI between augmentation views, layer MI can adaptively take the characteristics of a dataset into account through the learnable parameters of the GNN. We propose Layer Mutual Information Graph Contrastive Learning (LMIGCL), which is trained by contrasting the outputs of each convolutional layer, and we evaluate the framework on several empirical datasets to validate the usefulness of layer-contrasting information.

· Secondly, we propose the improved framework Droplayer Graph Contrastive Learning (DLGCL). DLGCL modifies the contrasting process with a random droplayer operation when training the encoder. This optimization increases training randomness and reduces the amount of computation. Because of the learnable parameters, the two droplayer embeddings can also be regarded as fully learnable augmentations that combine the topology-level and attribute-level characteristics of the training dataset. We conduct systematic experiments on several common node classification datasets to evaluate the DLGCL models.

· Finally, we propose the Layer Mutual Information Evolve Graph Convolutional Network (LMI-EGCN) for dynamic graph representation learning based on layer mutual information. We conduct experiments on the link prediction task on several dynamic graph datasets. The results demonstrate consistent benefits of layer mutual information across different areas of graph learning.
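To make the idea of contrasting layer outputs concrete, the sketch below computes an InfoNCE-style contrastive loss between the embeddings produced by two consecutive GNN layers, treating the two embeddings of the same node as a positive pair and all other cross-layer pairs as negatives. This is only a minimal illustration under assumed design choices: the paper does not specify its exact objective, and the function name, temperature, and random embeddings here are hypothetical stand-ins for real layer outputs.

```python
import numpy as np

def layer_info_nce(z1, z2, tau=0.5):
    """InfoNCE-style loss between two layers' node embeddings.

    z1, z2: (num_nodes, dim) outputs of two GNN layers.
    Positive pair = same node across the two layers;
    negatives = all other cross-layer node pairs.
    """
    # Row-normalize so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = np.exp(z1 @ z2.T / tau)       # (n, n) similarity matrix
    pos = np.diag(sim)                  # same-node similarities
    return float(np.mean(-np.log(pos / sim.sum(axis=1))))

rng = np.random.default_rng(0)
h1 = rng.normal(size=(8, 16))                    # "layer 1" output: 8 nodes, 16 dims
h2 = h1 + 0.1 * rng.normal(size=(8, 16))         # "layer 2": a correlated view of h1
loss = layer_info_nce(h1, h2)
print(loss)
```

Because the two layers share the same nodes, the loss is minimized when each node's embeddings agree across layers while remaining distinguishable from other nodes; an uncorrelated pair of embeddings yields a noticeably higher loss than the correlated pair above.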