
Research On Key Technologies Of Intelligent Question Answering System

Posted on: 2019-12-30
Degree: Master
Type: Thesis
Country: China
Candidate: X Wang
Full Text: PDF
GTID: 2428330611993495
Subject: Computer Science and Technology
Abstract/Summary:
In recent years, the enormous information-carrying capacity of the Internet has ushered in the era of big data, and the way people learn and retrieve information has changed fundamentally. Traditionally, people retrieved information through search engines such as Baidu and Google. However, this kind of retrieval is mostly keyword-based and returns a large number of related pages that require further manual filtering. What people prefer is to ask questions in natural language and have the machine return concise, easy-to-understand answers. Natural Language Processing (NLP) is an important research direction in artificial intelligence; linguistic techniques such as lexical analysis and syntactic analysis, together with statistical learning techniques, have been proposed over time. Their main purpose is to bridge the semantic gap between humans and machines. Question Answering (QA), an important NLP topic, has long attracted attention. Recently, the emergence of deep learning has opened new prospects for NLP research, and the latest work on natural language QA is likewise based on neural network techniques. According to the knowledge source, QA can be divided into two research directions: text-based QA and knowledge graph-based QA. This thesis studies the two tasks separately.

For text-based QA, this thesis proposes a local reasoning model based on an information discarding mechanism. Because the text context in this setting is long, it contains a large number of clues and pieces of evidence. Through a study of the datasets, we find that not all clues play a positive role in the answer extraction step; some redundant information may interfere with the final reasoning. Previous models usually merge and process all passage representations in the information fusion step, which may introduce errors. We therefore propose a semantic information discarding mechanism that retains effective semantic vectors and discards interfering information with low relevance to the question. This local reasoning model effectively improves the accuracy of the predicted answer, and results on open-domain question answering datasets show that our model outperforms the benchmark model.

Knowledge graph-based QA is characterized by its structured knowledge source; the information evidence is simple, and the questions asked are usually simple as well. We select the subject entity and the predicate relation of the question in a pipelined way, and then choose the most likely answer entity through information integration. This thesis mainly improves the performance of the relation extraction task. We find that previous neural models mostly use only the question sentence to classify the relation; however, the question sentence often lacks context, which makes it difficult to exploit the full capacity of the neural network. To address this, we use the sentence's topic words and the entity's background information as auxiliary reasoning evidence, giving the model sufficient context. We use two attention-mechanism models to obtain a semantic vector fused with the auxiliary information. Results on the dataset show that our model effectively improves the accuracy of relation extraction and of the final task.
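The abstract does not give the exact form of the discarding mechanism, so the following is only a minimal NumPy sketch of one plausible version: each passage token vector is scored by cosine similarity to the question vector, and the least relevant vectors are zeroed out before fusion. The function name `discard_low_relevance` and the `keep_ratio` threshold are illustrative assumptions, not the thesis's actual design.

```python
import numpy as np

def discard_low_relevance(passage_vecs, question_vec, keep_ratio=0.5):
    """Sketch of a semantic information discarding step (assumed form).

    Scores each passage token vector by cosine similarity to the question
    vector and zeroes out all but the top keep_ratio fraction, so that
    low-relevance (interfering) information does not reach the fusion step.
    """
    q = question_vec / (np.linalg.norm(question_vec) + 1e-8)
    p = passage_vecs / (np.linalg.norm(passage_vecs, axis=1, keepdims=True) + 1e-8)
    scores = p @ q                          # relevance of each token to the question
    k = max(1, int(len(scores) * keep_ratio))
    keep = np.argsort(scores)[-k:]          # indices of the k most relevant tokens
    mask = np.zeros(len(scores))
    mask[keep] = 1.0
    return passage_vecs * mask[:, None], scores

# Toy usage: 4 passage token vectors, question vector along the first axis.
P = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.0]])
q = np.array([1.0, 0.0])
filtered, scores = discard_low_relevance(P, q, keep_ratio=0.5)
```

In a real model the hard mask would typically be replaced by a differentiable gate so the discarding decision can be trained end to end.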
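The fusion of the question with auxiliary information (topic words, entity background) via attention can likewise be sketched. The abstract does not specify the attention variant, so this assumes simple dot-product attention: the question vector attends over the auxiliary vectors, and the attention-weighted context is concatenated onto the question representation. The name `fuse_with_auxiliary` is a hypothetical placeholder.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_with_auxiliary(question_vec, aux_vecs):
    """Sketch of attention-based fusion (assumed dot-product attention).

    The question vector attends over the auxiliary vectors (topic-word and
    entity-background embeddings); the weighted sum is concatenated with the
    question vector to give a context-enriched semantic representation.
    """
    scores = aux_vecs @ question_vec           # attention score per auxiliary vector
    weights = softmax(scores)                  # normalized attention distribution
    context = weights @ aux_vecs               # weighted sum of auxiliary info
    return np.concatenate([question_vec, context])

# Toy usage: a 2-d question vector and two auxiliary context vectors.
q = np.array([1.0, 0.0])
aux = np.array([[2.0, 0.0], [0.0, 2.0]])
fused = fuse_with_auxiliary(q, aux)
```

The fused vector would then feed a relation classifier; using two such attention modules (one per auxiliary source) matches the abstract's description of two attention-mechanism models.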
Keywords/Search Tags:Question answering system, Natural Language Processing, Knowledge Graph, Neural Network