
Research On Reading Comprehension Of Pre-Training Language Model Fused With Knowledge

Posted on: 2022-11-13
Degree: Master
Type: Thesis
Country: China
Candidate: Q Zhao
Full Text: PDF
GTID: 2518306752954249
Subject: Master of Engineering
Abstract/Summary:
Current mainstream reading comprehension models typically rely on multi-head self-attention to select the answer most similar to the question. Their success stems from the pre-trained language model's ability to learn similarities between language patterns, rather than from genuine abstraction and reasoning over natural language. As a result, existing reading comprehension models can only answer based on surface-level information in the question and still struggle with problems that require knowledge support or involve inference.

To address this, this thesis proposes a Knowledge-Enhanced Graph Attention Network, which enriches the semantic representation of the original text with information from ConceptNet and performs inference over the evidence chains between entities modeled by the related subgraphs. A Knowledge-Enhanced Embedding is also proposed and combined with an internal sharing mechanism to prevent the model from over-inferring or under-inferring commonsense knowledge. Experiments show that, compared with the baseline model, accuracy on ComVE subtask B improves by a relative 15.43%.

In addition, this thesis observes that the large amount of irrelevant information introduced along with knowledge-graph-based external knowledge can interfere with the final answer decision. It therefore proposes a knowledge-enhancement method based on dynamic routing (KeMaskFilter), which automatically injects beneficial external knowledge into the appropriate network layers and discards insignificant knowledge. Experiments show that, compared with the baseline model, accuracy improves by a relative 1.22%, and noise filtering is significantly better than in the unfiltered setting.

Furthermore, this thesis notes that there are essential differences between the representation of entities in the knowledge graph and the representation of semantics, which makes the corresponding information difficult to fuse. It proposes a feature fusion method based on heterogeneous spaces (SmoothKeBERT), which uses the knowledge graph together with the Wikipedia descriptions of related entities, combined with a graph attention network, to unify the representations of the semantic space and the symbol space. Experiments show that accuracy improves by a relative 0.63% compared with the model without space unification.
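To make the knowledge-injection idea concrete, the sketch below shows one common way such a per-layer filter can be realized: a learned gate decides how much of an aligned external knowledge vector to add to the language model's token states, so uninformative knowledge is suppressed while useful knowledge is injected. This is a minimal illustration in PyTorch, not the thesis's actual KeMaskFilter implementation; the module name, dimensions, and gating form are assumptions for demonstration only.

```python
import torch
import torch.nn as nn

class KnowledgeGate(nn.Module):
    """Illustrative gate for injecting external knowledge into one LM layer.

    Hypothetical sketch: projects knowledge vectors into the hidden space and
    uses a sigmoid gate to decide, per token, how much knowledge to add.
    """

    def __init__(self, hidden_size: int, knowledge_size: int):
        super().__init__()
        self.project = nn.Linear(knowledge_size, hidden_size)
        self.gate = nn.Linear(2 * hidden_size, 1)

    def forward(self, hidden: torch.Tensor, knowledge: torch.Tensor) -> torch.Tensor:
        # hidden:    (batch, seq_len, hidden_size)    token states from the LM layer
        # knowledge: (batch, seq_len, knowledge_size) entity embeddings aligned to tokens
        k = self.project(knowledge)
        g = torch.sigmoid(self.gate(torch.cat([hidden, k], dim=-1)))
        # Gate near 0 discards the knowledge signal; near 1 injects it fully.
        return hidden + g * k

# Toy usage with illustrative sizes
layer = KnowledgeGate(hidden_size=768, knowledge_size=100)
h = torch.randn(2, 16, 768)
kg = torch.randn(2, 16, 100)
out = layer(h, kg)  # (2, 16, 768)
```

Stacking such a gate at several encoder layers would let the model decide, layer by layer, where external knowledge is actually beneficial, which is the intuition behind routing knowledge to the appropriate layers.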
Keywords/Search Tags: Knowledge Graph, Commonsense Knowledge, Graph Neural Network, Pre-trained Language Model, Reading Comprehension