Data in biomedical tasks generally carry multiple graph structures. Exploiting this property can help solve many biomedical problems, but it requires models that can combine the different graph structures in the data simultaneously. Existing graph neural networks (GNNs) handle single-graph data effectively, yet few works apply GNNs to learning from multi-graph data. Moreover, current multi-graph fusion models are still built on traditional multi-modal methods: instead of the advanced graph neural network architectures now available, they rely on shallow models, and therefore cannot capture either the high-dimensional nonlinear structure of the multi-modal graphs or the complex relationships between the graph structures.

Motivated by these problems, this paper focuses on fusion methods for graph-structured biomedical multimodal data, in particular the fusion of prior (topological) graph structures with graph structures derived from the feature space. The Transformer model, originally designed as a translation model for natural language processing (NLP), models the relationship between two modalities through its attention mechanism, and this design can serve as a framework for multi-graph fusion. Accordingly, this paper takes the Transformer as the framework, combines it with graph neural network modules to obtain graph-Transformer models as a unified multi-graph fusion framework for biomedical data, and develops a family of graph-Transformer-based multi-graph fusion methods within this unified framework. These methods cover the main types of graph fusion (fusion of two graphs of the same scale and composition, fusion of a graph structure with a non-graph structure, and interaction between graphs of different scales or attributes), and, based on the current state of research, five typical biomedical tasks involving these types of data interaction are chosen to address their specific problems and difficulties.

The main research achievements of this paper are as follows:

1) For the problem of fusing data that simultaneously carry a topological graph structure and a graph structure derived from the feature space, a Graph-Transformer (GT) framework for fusing two graphs of the same scale is proposed. It addresses the limitation that existing methods can only fuse at the decision level rather than at the feature level. In this method, the Graph-Transformer is built by combining graph structure modules with the Transformer architecture, and the non-graph sequence structure and the graph structure are fused on the decoder side of the Graph-Transformer. We apply this model to the multimodal feature fusion problem in protein molecular structure learning, where it outperforms state-of-the-art methods on standard benchmark datasets.

2) To make the graph-Transformer model applicable to large-scale graphs and to fusion tasks involving more than two graphs, a Gated Combination-Kernel Graph-Transformer (Gated CK-GT) is proposed to resolve the excessive complexity of the graph-Transformer model. To avoid the feature over-smoothing caused by the combination-kernel style of fusion, a gating mechanism is introduced that provides a soft inductive bias toward one graph, so that at each propagation step the model attends more to the features of that graph (an illustrative sketch of this gating idea is given after the list of contributions). We apply this model to the multi-graph fusion problem in protein binding-site prediction, where it outperforms state-of-the-art methods on standard benchmark datasets.

3) Existing
multi-graph clustering algorithms cannot be applied directly to clustering tasks in which a feature-space graph structure and a topological graph structure coexist. To address this, the similarity network fusion method is used to extend the Gated CK-GT model so that it can fuse multiple graph structures into a unified clustering result, resolving the shortcomings of existing multi-graph clustering algorithms. We apply this model to patient clustering over multi-graph structured data; experiments on a benchmark cancer patient classification dataset show that our method outperforms the state of the art.

4) For the fusion of graph structures at different scales, a graph-Transformer model for interactive graphs is proposed. The update scheme of heterogeneous graphs and the Gated CK-GT model are used to extend the graph-Transformer so that it can handle the interaction between two graphs of different scales (a sketch of such cross-graph attention is given below). We apply this model to the multi-graph interaction problems in protein-protein interaction surface prediction and in mouse action localization. On the protein task, the GNN-based NEA method outperforms state-of-the-art methods, and experimental results show that the proposed method also outperforms the state of the art on the mouse action localization task.
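As a rough illustration of the gated combination-kernel idea described in contribution 2), the following is a minimal sketch of an attention layer that mixes two same-scale graphs through a learned per-node gate. It is written under our own assumptions; all class and variable names are hypothetical, and it is not the implementation used in this paper.

# Illustrative sketch only; hypothetical names, not the code used in this paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedCombinationKernelAttention(nn.Module):
    """Fuses two same-scale graphs inside one attention layer via a gated kernel."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Per-node gate: how strongly to bias attention toward the prior graph
        # versus the feature-space graph.
        self.gate = nn.Linear(dim, 1)

    def forward(self, x, a_prior, a_feat):
        # x: (n, dim) node features; a_prior, a_feat: (n, n) adjacency matrices in {0, 1}.
        scores = self.q(x) @ self.k(x).T / x.shape[-1] ** 0.5
        g = torch.sigmoid(self.gate(x))                    # (n, 1) soft inductive bias
        kernel = g * a_prior + (1.0 - g) * a_feat          # gated combination of the two graphs
        attn = F.softmax(scores, dim=-1) * kernel          # structure-weighted attention
        attn = attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-9)
        return attn @ self.v(x)                            # (n, dim) fused node features

x = torch.randn(6, 32)                                     # toy node features
a1 = (torch.rand(6, 6) > 0.5).float()                      # toy prior/topological graph
a2 = (torch.rand(6, 6) > 0.5).float()                      # toy feature-space graph
out = GatedCombinationKernelAttention(32)(x, a1, a2)       # -> (6, 32)

In this sketch the gate simply reweights the two adjacency structures before they modulate the attention matrix; the actual Gated CK-GT may combine kernels and gates differently.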
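Similarly, the interaction between two graphs of different scales in contribution 4) can be pictured as cross-attention from the nodes of one graph to the nodes of the other. The sketch below is again only illustrative, with hypothetical names, and does not reproduce the heterogeneous-graph update actually used in this paper.

# Illustrative sketch only; hypothetical names, not the code used in this paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InteractiveGraphCrossAttention(nn.Module):
    """Cross-attention from the nodes of graph A (n nodes) to graph B (m nodes)."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)

    def forward(self, x_a, x_b, interact=None):
        # x_a: (n, dim), x_b: (m, dim); interact: optional (n, m) 0/1 cross-graph edge matrix.
        q = self.q(x_a)
        k, v = self.kv(x_b).chunk(2, dim=-1)
        scores = q @ k.T / q.shape[-1] ** 0.5              # (n, m) cross-graph scores
        attn = F.softmax(scores, dim=-1)
        if interact is not None:
            attn = attn * interact                         # keep only allowed cross-graph edges
            attn = attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-9)
        return attn @ v                                    # (n, dim): graph-A nodes updated with graph-B context

x_a = torch.randn(4, 32)                                   # coarse graph, 4 nodes
x_b = torch.randn(10, 32)                                  # fine graph, 10 nodes
out = InteractiveGraphCrossAttention(32)(x_a, x_b)         # -> (4, 32)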