Currently, graph neural networks and their variants, including graph convolutional networks, graph attention networks, and graph isomorphism networks, are among the mainstream algorithms in the field of graph machine learning. However, traditional graph neural network models face challenges in handling global structural information and in computational complexity, and the limitations of traditional graph convolutional networks make it difficult to model complex non-Euclidean structures. Research on and applications of graph Transformers have therefore become a hot topic in graph machine learning. Despite significant progress in processing graph-structured data, graph Transformers still have unresolved issues. To address these issues, this thesis presents three innovations:

(1) To address the performance degradation of traditional graph Transformer models caused by the loss of relative positional information between nodes and by improper selection of neighboring nodes, a masked attention graph Transformer model (MAGT) is proposed. The model first designs a positional encoding fusion mechanism that incorporates positional encodings into the node features output at each layer, strengthening positional information and enhancing the model's ability to learn the overall network topology. Second, a masked attention matrix is added to the attention mechanism to filter neighboring nodes, improving the model's learning efficiency (a minimal code sketch of this layer is given below). Finally, group normalization is applied to node features during training. Ablation studies and multi-model comparative experiments show that MAGT achieves superior detection performance and better robustness than other state-of-the-art baseline models.

(2) To address the feature collapse that occurs as model depth increases in graph Transformer models, a network residual graph Transformer model (NRGT) is proposed on the basis of MAGT. The model adds a residual Transformer structure on top of MAGT: the input features of each layer are added to its output features as cross-layer connections, avoiding feature collapse. In addition, pre-normalization and a gating mechanism weight information from different positions, reinforcing task-relevant node information while suppressing interference from task-irrelevant node information (see the second sketch below). Experimental analysis and comparison show that NRGT achieves higher accuracy and recall than the other compared models, demonstrating that the proposed approach further improves the model's accuracy.

(3) To address the insufficient expressive capability of edge features in traditional graph Transformer models, an edge feature enhancement graph Transformer model (EFEGT) is proposed. The model applies a non-linear enhancement to the input edge features, improving their expressive power. In addition, an edge residual mechanism aggregates the edge features of the previous layer with those of the current layer, enhancing the model's non-linear representation capability, and an edge gate weights task-relevant edge feature information (see the third sketch below). Experimental results show that EFEGT outperforms the other compared models on graph regression tasks, demonstrating its stronger ability to handle graph-structured data with edge information.
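As an illustration of the mechanisms described in (1), the following minimal single-head PyTorch sketch combines a masked attention matrix, per-layer positional encoding fusion, and group normalization. The class and argument names (MaskedAttentionLayer, pos_enc, adj_mask) are illustrative assumptions, not the thesis's actual MAGT implementation.

```python
import torch
import torch.nn as nn

class MaskedAttentionLayer(nn.Module):
    """Minimal single-head sketch: masked attention over neighbors,
    positional-encoding fusion at the layer output, group normalization."""

    def __init__(self, dim, num_groups=4):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Group normalization over each node's feature dimension.
        self.norm = nn.GroupNorm(num_groups, dim)

    def forward(self, x, pos_enc, adj_mask):
        # x: (N, dim) node features; pos_enc: (N, dim) positional encodings;
        # adj_mask: (N, N) boolean, True where node i may attend to node j
        # (the diagonal should be True so every node attends to itself).
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = (q @ k.transpose(-2, -1)) / (x.size(-1) ** 0.5)
        # Masked attention matrix: filter out non-neighboring nodes.
        scores = scores.masked_fill(~adj_mask, float("-inf"))
        out = torch.softmax(scores, dim=-1) @ v
        # Positional-encoding fusion: re-inject positional information
        # into the output node features of every layer.
        out = out + pos_enc
        return self.norm(out)


# Toy usage: 5 nodes, 16-dimensional features.
x = torch.randn(5, 16)
pe = torch.randn(5, 16)
mask = torch.eye(5, dtype=torch.bool) | (torch.rand(5, 5) > 0.5)
print(MaskedAttentionLayer(16)(x, pe, mask).shape)  # torch.Size([5, 16])
```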
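The cross-layer residual, pre-normalization, and gating of (2) can be sketched as follows. The block name and the specific choices of LayerNorm and a sigmoid gate are assumptions for illustration rather than the NRGT implementation; the point is the identity path that carries the layer input forward.

```python
import torch
import torch.nn as nn

class ResidualGatedBlock(nn.Module):
    """Sketch of a pre-normalized residual block with a feature gate."""

    def __init__(self, dim):
        super().__init__()
        self.pre_norm = nn.LayerNorm(dim)
        self.transform = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        # Pre-normalization before the layer transformation.
        h = self.transform(self.pre_norm(x))
        # Gate in [0, 1] reinforces task-relevant information and
        # damps task-irrelevant information.
        g = torch.sigmoid(self.gate(x))
        # Cross-layer connection: add the layer input to the gated
        # output, mitigating feature collapse as depth grows.
        return x + g * h


# Stacking many blocks stays stable because the identity path
# carries the input features through every layer.
deep = nn.Sequential(*[ResidualGatedBlock(16) for _ in range(12)])
print(deep(torch.randn(5, 16)).shape)  # torch.Size([5, 16])
```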
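Finally, for (3), a hypothetical sketch of the edge pathway: non-linear enhancement of the input edge features, an edge gate, and an edge residual aggregating the previous layer's edge features with the current layer's. All names and layer choices here are illustrative, not the EFEGT implementation.

```python
import torch
import torch.nn as nn

class EdgeEnhancementLayer(nn.Module):
    """Sketch of non-linear edge-feature enhancement with an
    edge residual and an edge gate."""

    def __init__(self, edge_dim):
        super().__init__()
        # Non-linear enhancement of the input edge features.
        self.enhance = nn.Sequential(
            nn.Linear(edge_dim, edge_dim), nn.GELU(),
            nn.Linear(edge_dim, edge_dim))
        self.gate = nn.Linear(edge_dim, edge_dim)

    def forward(self, e_prev, e_curr):
        # e_prev: (E, edge_dim) edge features from the previous layer;
        # e_curr: (E, edge_dim) edge features of the current layer.
        h = self.enhance(e_curr)
        # Edge gate: weight task-relevant edge information.
        g = torch.sigmoid(self.gate(e_curr))
        # Edge residual: aggregate previous-layer edge features with
        # the current layer's enhanced features.
        return e_prev + g * h


e_prev, e_curr = torch.randn(8, 16), torch.randn(8, 16)
print(EdgeEnhancementLayer(16)(e_prev, e_curr).shape)  # torch.Size([8, 16])
```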