
Research On Knowledge Base Question Answering Based On Deep Learning

Posted on: 2024-09-21    Degree: Master    Type: Thesis
Country: China    Candidate: J. J. Huang    Full Text: PDF
GTID: 2568307142481894    Subject: Software engineering
Abstract/Summary:
With the growth of network information resources, Knowledge Base Question Answering (KBQA) technology has advanced rapidly, and large-scale knowledge bases have emerged both in China and abroad, such as the large-scale Chinese open-domain QA knowledge base provided by NLPCC and the English knowledge bases DBpedia and Freebase. KBQA enables users to obtain valuable answers quickly and accurately, even without knowledge of the underlying corpus, by understanding and parsing free-form questions and then inferring answers from a vast corpus. However, owing to the richness of language expression and the diversity of question types, existing KBQA systems usually rely on prior knowledge and manually designed and written rule templates to transform text into logical structures, which does not scale to large knowledge bases and generalizes poorly. This paper focuses on Chinese KBQA and makes the following contributions:

(1) To address the reliance of traditional KBQA on prior knowledge and hand-written rule templates, this paper improves on them with a contrastive learning approach and designs an end-to-end knowledge base QA model (TransCL). First, latent knowledge is mined from a large corpus knowledge base and augmented data is generated in the form of question-answer pairs. A contrastive learning asymmetric network is then introduced to aggregate vectors effectively, retaining the original information, and to construct a distance metric on the feature space of the augmented data, capturing deep matching features between data samples through contrast. This overcomes the reliance on manual rules and prior knowledge and improves model generalization.

(2) To address inadequate feature extraction and excessive noise in named entity recognition for KBQA, this paper improves existing named entity recognition methods and designs a named entity recognition algorithm (ATTLE) based on multi-head attention over word associations. To capture important word information from the matched lexicon words, different weights are assigned to related words according to their degree of importance, effectively controlling the flow of information from the beginning of the text to the end of the sentence. A multi-head attention mechanism then captures fine-grained correlations between text characters and matched words, improving network training efficiency while obtaining more comprehensive token-level information.

(3) This thesis uses a feature transformation method based on positive extrapolation to widen the distance between intermediate text feature pairs, and applies hybrid augmentation of higher-order semantic vectors by linear interpolation, transforming simple features into difficult positive features. This semantic enhancement effectively alleviates problems such as weak contrast between semantic features in contrastive learning networks.

In experiments on the large-scale Chinese dataset NLPCC-2016 KBQA, the proposed method achieves an F1 score of 85.5%, compared against other KBQA models. The experimental results show that the method overcomes the reliance on prior knowledge and manual rules while also enhancing the quality of the KBQA model.
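The contrastive objective in contribution (1) — learning a distance metric over question-answer pairs so that matching pairs are pulled together and non-matching pairs pushed apart — can be sketched as an in-batch contrastive (InfoNCE-style) loss. This is a minimal illustrative sketch, not the thesis's exact TransCL formulation; the function name, temperature parameter, and use of in-batch negatives are assumptions.

```python
import numpy as np

def info_nce_loss(q_vecs, a_vecs, temperature=0.1):
    """In-batch contrastive loss: each question's matching answer
    (same row index) is the positive; every other answer in the
    batch serves as a negative."""
    # L2-normalize so dot products become cosine similarities
    q = q_vecs / np.linalg.norm(q_vecs, axis=1, keepdims=True)
    a = a_vecs / np.linalg.norm(a_vecs, axis=1, keepdims=True)
    logits = q @ a.T / temperature            # (batch, batch) similarities
    # numerically stable row-wise log-softmax; diagonal = positives
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

When question and answer embeddings for matching pairs are nearly identical and distinct from the rest of the batch, the loss approaches zero, which is the training signal that replaces hand-written matching rules.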
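The mechanism in contribution (2) — each text character attending over its matched lexicon words, with per-word weights reflecting importance — can be sketched as multi-head scaled dot-product attention where characters are queries and matched words are keys and values. This is a hedged sketch of the general technique, not the ATTLE implementation; the function name, head count, and the choice to split the embedding dimension across heads are assumptions.

```python
import numpy as np

def multi_head_word_attention(char_vecs, word_vecs, num_heads=4):
    """Fuse matched-word information into character representations.
    Each head attends over the word vectors with its own slice of
    the embedding dimension, so different heads can weight different
    related words."""
    d = char_vecs.shape[1]
    assert d % num_heads == 0, "embedding dim must divide evenly across heads"
    dh = d // num_heads
    out = np.zeros_like(char_vecs)
    for h in range(num_heads):
        s = slice(h * dh, (h + 1) * dh)
        q, k, v = char_vecs[:, s], word_vecs[:, s], word_vecs[:, s]
        scores = q @ k.T / np.sqrt(dh)        # (num_chars, num_words)
        scores = scores - scores.max(axis=1, keepdims=True)
        weights = np.exp(scores)
        weights = weights / weights.sum(axis=1, keepdims=True)  # softmax rows
        out[:, s] = weights @ v               # weighted mix of word features
    return out
```

Because the attention weights are a softmax over matched words, unimportant matches receive near-zero weight, which is one way to suppress the lexicon noise the abstract mentions.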
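The feature transformation in contribution (3) — widening the distance between an anchor and its positive by linear interpolation/extrapolation to create "difficult" positives — can be sketched as follows. The exact transformation in the thesis is not specified here; the function name, the extrapolation form `anchor + lam * (positive - anchor)`, and the choice of `lam > 1` are assumptions.

```python
import numpy as np

def extrapolate_positive(anchor, positive, lam=1.5):
    """Assumed form of positive extrapolation: with lam > 1 the new
    positive lies past the original one on the anchor->positive line,
    widening the pair's distance so it acts as a hard positive for
    contrastive training. lam = 1 returns the original positive."""
    return anchor + lam * (positive - anchor)
```

Feeding such extrapolated pairs back into the contrastive loss forces the network to keep matching pairs together even when their features are far apart, which is how this kind of augmentation sharpens otherwise weak semantic contrasts.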
Keywords/Search Tags:Knowledge Base Question Answering, Contrastive Learning, Named Entity Recognition, Feature Transformation, Attention Mechanism