
Recognizing Inference In Text With Markov Logic Networks

Posted on: 2015-01-14    Degree: Master    Type: Thesis
Country: China    Candidate: L Cao    Full Text: PDF
GTID: 2308330464463287    Subject: Computer application technology
Abstract/Summary:
With the rapid growth of Artificial Intelligence (AI), computer systems are increasingly expected to understand natural language with high accuracy. As a generic paradigm of natural language understanding, the research area of Recognizing Textual Entailment (RTE) is attracting ever wider attention. The RTE task is relevant to a variety of Natural Language Processing (NLP) applications, including Information Retrieval (IR), Information Extraction (IE), Question Answering (QA), and Machine Translation (MT).

Given two text fragments, the RTE task is to automatically recognize the logical relation between them: entailment, equivalence, independence, or contradiction. State-of-the-art RTE systems are based on machine learning. This framework extracts lexical, syntactic, and semantic features from natural language and uses machine learning models such as KNN and SVM for training and classification.

Machine-learning-based RTE systems face a common problem: how to handle the variability of natural language when representing features. In natural language, a word may carry different meanings in different contexts, and one meaning may be expressed by different word forms. Moreover, the same bag of words can yield different sentence meanings depending on word order. This variability is one of the main difficulties of the RTE problem.

Motivated by this situation, this thesis proposes a novel approach to recognizing inference in text with the Markov logic networks (MLNs) framework, a probabilistic extension of first-order logic. By attaching weights to first-order formulas, MLNs provide an efficient way to represent features flexibly in first-order logic while handling uncertainty soundly with a probabilistic model. We first define a first-order knowledge base over different feature representations of natural language, including string, lexical, structural, and semantic information. We then extract knowledge from extended knowledge bases and transform it into first-order logic. Finally, we make predictions by MLN logical reasoning. The experimental results confirm that our method is a reasonable and efficient way to recognize inference in text.
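To make the weighted-formula idea concrete, the following minimal Python sketch shows how an MLN-style decision for a single Entails(t, h) query reduces to a log-linear combination of the weights of the first-order rules that fire for a given text/hypothesis pair. The predicate names, rule weights, and helper functions are invented for illustration and are not the thesis's actual knowledge base or inference engine; a full MLN system would learn the weights from data and perform general probabilistic inference over all groundings.

    import math

    # Hypothetical grounded rules for one (text, hypothesis) pair. Each weight is
    # the signed contribution to the log-odds of Entails(t, h) when the rule's
    # condition holds (a rule concluding !Entails contributes negatively).
    WEIGHTED_RULES = {
        "LexicalOverlap(t, h) => Entails(t, h)":    1.2,   # string/lexical evidence
        "DependencyMatch(t, h) => Entails(t, h)":   0.8,   # structural evidence
        "WordNetHypernym(t, h) => Entails(t, h)":   1.5,   # semantic evidence
        "NegationMismatch(t, h) => !Entails(t, h)": -2.0,  # evidence against entailment
    }

    def entailment_log_odds(fired):
        """Sum the signed weights of the grounded rules whose condition holds."""
        return sum(w for rule, w in WEIGHTED_RULES.items() if fired.get(rule, False))

    def entailment_probability(fired):
        """P(Entails(t, h)) for a single query atom under implication-style rules:
        the MLN conditional probability is a logistic function of the summed
        weights of the rules that fire."""
        return 1.0 / (1.0 + math.exp(-entailment_log_odds(fired)))

    if __name__ == "__main__":
        # Example: lexical and semantic evidence fire, no negation mismatch.
        evidence = {
            "LexicalOverlap(t, h) => Entails(t, h)": True,
            "WordNetHypernym(t, h) => Entails(t, h)": True,
        }
        print("P(Entails) = %.3f" % entailment_probability(evidence))

The sketch illustrates why the MLN framework suits RTE: evidence from heterogeneous feature representations (string, lexical, structural, semantic) can all be expressed as first-order rules, and the weights let conflicting evidence be resolved probabilistically rather than by a hard logical decision.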
Keywords/Search Tags: Recognizing Textual Entailment, Markov Logic Networks, Machine Learning, Natural Language Processing