With the development of technologies such as the Internet, cloud computing, and the Internet of Things, graph-structured data is widely used in social networks, communication networks, transportation networks, and other fields. However, processing such large-scale graph-structured data requires specialized graph algorithms and substantial computing power. In recent years, graph convolutional neural networks (GCNs) have therefore developed rapidly and have been widely studied and applied in many fields, including social network analysis, chemical molecule analysis, and traffic network analysis. Among the downstream tasks of this research, node classification with GCNs has become one of the most active. In node classification, the richness of the information contained in the node embeddings greatly affects classification accuracy. For a GCN to learn more abstract and complex feature representations, its convolution depth must be increased to enlarge the receptive field. However, as the number of convolutional layers grows, GCNs suffer from over-smoothing: node features converge toward the same values, reducing the representation learning ability of the model. Moreover, deeper models also incur significantly higher computational and storage costs. How to optimize the learning process of deep GCNs and apply them to real-world scenarios has therefore become an urgent problem. To address these problems and challenges, this paper carries out the following work and makes the following contributions:

(1) To address the tendency of deep GCNs to over-smooth, a deep residual graph neural network model is proposed. The model designs a multi-input residual structure that combines initial residuals and high-order neighborhood residuals, achieving a balanced extraction of initial features and high-order neighborhood features at any convolutional layer. In addition, the model constructs a PageRank-based residual sampling algorithm for the high-order neighborhood residual, which samples the residual input feature matrix at the node level so that nodes prone to over-smoothing are added to the residual module, allowing the model to alleviate over-smoothing in a more targeted way. Finally, a series of node classification experiments verifies the proposed method. The model achieves accuracies of 86.2%, 74.5%, and 81.5% on the Cora, Citeseer, and Pubmed datasets, respectively, higher than the SOTA baseline models. The experimental results show that the multi-input residual structure and residual sampling enable the deep model to learn more effective embedding representations while alleviating over-smoothing, thereby improving its node classification ability.

(2) To address deep GCNs' insufficient use of node feature information, a multi-channel deep graph neural network model is proposed. The model preprocesses the node feature matrix with PCA dimensionality reduction and reconstructs the feature space of the graph structure in a low-dimensional space, resolving the inaccurate node-similarity distances caused by the high-dimensional sparsity of the graph feature matrix. It then uses S²GC and DropEdge regularization in each channel to build a channel propagation model, fuses the channels with a multi-head attention mechanism, and constructs graph consistency and disparity constraints to adaptively learn the intrinsic relationship between the topological space and the feature space, thereby improving the model's deep representation ability. Finally, a series of node classification experiments verifies the proposed method. The method achieves accuracies of 74.2%, 72.9%, 91.3%, 89.3%, and 80.6% on the Citeseer, UAI2010, ACM, BlogCatalog, and Flickr datasets, respectively, significantly higher than the SOTA baseline models, indicating that the model enhances the ability of deep GCNs to fuse feature information and structural information.

(3) To address the low robustness of deep GCNs and their weak generalization when labeled nodes are scarce, a multi-view graph contrastive learning neural network model is proposed. The model constructs an importance-driven edge perturbation strategy to generate multiple local sub-views; at the same time, a global view is generated with a graph diffusion technique, and two-view deep propagation is performed with the adjacency matrix as the local view. Consistency constraints regularizing the output features of the local sub-views are then constructed based on contrastive learning and, to use global information to assist the feature representation learning of local information, consistency constraints between the output features of different views are constructed as well. Finally, a series of node classification experiments verifies the proposed method. The model achieves accuracies of 86.4%, 76.2%, and 83.4% on the Cora, Citeseer, and Pubmed datasets, respectively, all higher than the SOTA baseline models, indicating that the model improves the classification accuracy of unlabeled nodes in the semi-supervised node classification task.
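The multi-input residual propagation of contribution (1) can be illustrated with a minimal NumPy sketch. The mixing rule, the coefficients `alpha` and `beta`, the power-iteration PageRank, and the node-mask construction below are illustrative assumptions on our part, not the thesis's exact formulation:

```python
import numpy as np

def normalize_adj(A):
    # Symmetrically normalized adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def pagerank_scores(A_norm, alpha=0.85, iters=50):
    # Power-iteration PageRank; the scores rank nodes so that those prone
    # to over-smoothing can be sampled into the residual module.
    n = A_norm.shape[0]
    P = A_norm / A_norm.sum(axis=0, keepdims=True)  # column-stochastic
    p = np.full(n, 1.0 / n)
    for _ in range(iters):
        p = alpha * (P @ p) + (1 - alpha) / n
    return p

def multi_input_residual_layer(A_norm, H, H0, Hk, mask, alpha=0.1, beta=0.1):
    # One propagation step that mixes the smoothed features with an initial
    # residual (H0) and a high-order neighborhood residual (Hk); `mask`
    # selects the PageRank-sampled nodes that receive the Hk residual.
    out = (1 - alpha - beta) * (A_norm @ H) + alpha * H0
    out += beta * (mask[:, None] * Hk)
    return out
```

With `alpha = beta = 0` the layer reduces to plain GCN propagation `A_norm @ H`, which makes the residual terms easy to sanity-check in isolation.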
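The per-channel pipeline of contribution (2) — PCA preprocessing, S²GC propagation, and DropEdge — can be sketched as follows. The attention-based multi-channel fusion and the consistency/disparity constraints are omitted, and all dimensions and hyperparameters are illustrative; the S²GC rule follows the published form (1/K) Σₖ ((1−α) Âᵏ X + α X):

```python
import numpy as np

def pca_reduce(X, k):
    # Project features onto the top-k principal components via reduced SVD,
    # reconstructing the feature space in a low-dimensional subspace.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def s2gc_propagate(A_norm, X, K=4, alpha=0.05):
    # Simple Spectral Graph Convolution: average the K propagated feature
    # matrices, each mixed with an alpha-weighted initial residual.
    H = X.copy()
    acc = np.zeros_like(X)
    for _ in range(K):
        H = A_norm @ H
        acc += (1 - alpha) * H + alpha * X
    return acc / K

def drop_edge(A, p, rng):
    # DropEdge regularization: randomly remove a fraction p of the
    # undirected edges, keeping the adjacency matrix symmetric.
    keep = rng.random(A.shape) > p
    keep = np.triu(keep, 1)
    keep = keep + keep.T
    return A * keep
```

Each channel would apply `drop_edge` to its adjacency, normalize it, and run `s2gc_propagate` on the PCA-reduced features before the fusion step.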
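The building blocks of contribution (3) can also be sketched minimally, assuming personalized-PageRank diffusion for the global view, a simple importance-weighted edge drop for the local sub-views, and an L2 consistency term between row-normalized view outputs — all of which are our assumptions, since the abstract does not fix the exact forms:

```python
import numpy as np

def ppr_diffusion(A_norm, alpha=0.15):
    # Personalized-PageRank graph diffusion for the global view:
    # S = alpha * (I - (1 - alpha) * A_norm)^{-1}
    n = A_norm.shape[0]
    return alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * A_norm)

def perturb_edges(A, edge_importance, p, rng):
    # Importance-driven edge perturbation: an edge is dropped with
    # probability p * (1 - normalized importance), so important edges
    # tend to survive in every generated local sub-view.
    drop_prob = p * (1.0 - edge_importance / edge_importance.max())
    keep = rng.random(A.shape) >= drop_prob
    keep = np.triu(keep, 1)
    keep = keep + keep.T
    return A * keep

def consistency_loss(Z1, Z2):
    # L2 consistency between row-normalized outputs of two views; used
    # both among local sub-views and between local and global views.
    Z1n = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2n =2 * 0 + Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    return float(np.mean(np.sum((Z1n - Z2n) ** 2, axis=1)))
```

Because the loss normalizes each row, it is invariant to the scale of the embeddings, which is one common design choice for cross-view consistency terms.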