In recent years, natural language processing (NLP) has been applied widely across many fields and has become an important technology for human-computer interaction and communication. Meanwhile, the development of Graph Neural Networks (GNNs) has made it possible to continuously mine the value of data in non-Euclidean spaces, and research on GNNs has led to new breakthroughs in NLP. However, natural language lacks an explicit graph structure to serve as the basis for representation learning with GNNs, so language graphs are generally constructed with the help of linguistic features such as syntax and semantics. Different NLP tasks also vary greatly in text length, domain, and other characteristics, which leads to differences in the constructed graphs. Sentence-level short-text tasks focus mainly on syntax- and semantics-based sentence representation, but an incorrect syntactic graph structure will degrade the result of representation learning. Document-level tasks focus mainly on combining the semantics of different sentences in a document, where it is difficult to integrate contextual semantic information effectively.

This paper studies natural language graph construction methods at both sentence-level and document-level granularity. First, a general natural language graph structure learning framework based on auxiliary features is proposed: different linguistic features assist the construction of language graphs, and this construction is combined with downstream GNN-based representation learning to optimize the performance of the whole model. On the basis of this framework, two models are further developed to validate its effectiveness on different problems in sentence-level and document-level NLP tasks. For sentence-level Aspect-based Sentiment Classification (ASC), the Learnable Dependency-based Double Graph Neural Network (LD2G) is proposed for syntax optimization; it constructs graphs from syntactic features and further optimizes the graph structure jointly with the representation learned by the downstream graph neural network. For document-level relation extraction, the Anchor-based Double Graph Neural Network (ADGCN) is proposed to model cross-sentence relations in a document, with a graph neural network used for representation learning. On both tasks, the proposed models outperform baseline models on most benchmark datasets, demonstrating the effectiveness of the framework and the models.
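As background for the syntax-based graph construction described above, the sketch below shows one common way a dependency parse can be turned into an adjacency matrix suitable as GNN input. This is a generic illustration, not the paper's actual method: the sentence, head indices, and the `dependency_adjacency` helper are all hypothetical, and in practice the head indices would come from a parser such as spaCy or Stanza.

```python
import numpy as np

# Hand-written dependency parse for illustration only.
# heads[i] is the index of token i's syntactic head (-1 marks the root).
tokens = ["The", "food", "was", "great"]
heads = [1, 2, -1, 2]  # "The"->"food", "food"->"was", "great"->"was"

def dependency_adjacency(heads, self_loops=True, symmetric=True):
    """Build an n x n adjacency matrix from dependency head indices.

    Edges connect each token to its head; symmetric edges and self-loops
    are common choices when feeding the matrix to a graph neural network.
    """
    n = len(heads)
    adj = np.zeros((n, n), dtype=np.float32)
    for child, head in enumerate(heads):
        if head >= 0:
            adj[head, child] = 1.0
            if symmetric:
                adj[child, head] = 1.0
    if self_loops:
        adj += np.eye(n, dtype=np.float32)
    return adj

A = dependency_adjacency(heads)
print(A.shape)  # (4, 4)
```

A learnable graph structure framework, as proposed in the paper, would go further and adjust such an initial syntax-derived matrix during training rather than treating it as fixed.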