
AMR Parsing With An Incremental Joint Model

Posted on: 2017-06-15 | Degree: Master | Type: Thesis
Country: China | Candidate: R Li | Full Text: PDF
GTID: 2428330488996685 | Subject: Computer software and theory
Abstract/Summary:
Abstract Meaning Representation (AMR) is a formalism that aims to express the semantic representation of a natural language sentence. AMR uses a rooted, directed, acyclic graph to represent the meaning of a sentence and abstracts away from syntactic notions. AMR can effectively support many natural language processing applications, such as information extraction, question answering, textual entailment, and machine translation. AMR parsing has recently attracted much attention in the community. In the past three years, a variety of algorithms for AMR parsing have been proposed; however, the performance of these parsers is still relatively low, and AMR parsing remains at a nascent stage. This thesis therefore focuses on improving the performance of AMR parsing. The main work includes the following three aspects:

(1) We first conducted an in-depth analysis and comparison of existing algorithms for AMR parsing. In general, existing AMR parsers fall into two types: the first line of work translates a sentence into an AMR graph directly, while the second transforms a sentence into an AMR graph based on syntactic parsing.

(2) We developed a new approach to improve AMR alignment. JAMR builds an automatic aligner that uses a set of rules to greedily align concepts to spans of words in the training data, which cannot handle more complex semantic relations. We first introduce a latent-variable log-linear model for word alignment with a contrastive approach that aims to differentiate observed training examples from noise, and then combine the rule-based AMR alignment with the word alignments produced by the latent-variable log-linear model (a sketch of this objective is given below). Experimental results show that the improved approach achieves better alignment performance.

(3) To alleviate error propagation in traditional pipelined models for AMR parsing, we present an incremental joint model. To implement AMR parsing as a joint task that performs the two subtasks, concept identification and relation identification, simultaneously, we first develop a novel component-wise beam search algorithm for relation identification in an incremental fashion. Second, we adopt a segment-based decoder similar to the multiple-beam algorithm for concept identification, and incorporate it into a unified framework that allows bi-directional information flow between the two subtasks within a single incremental model (a sketch of the decoder is also given below). Our joint model significantly outperforms its pipelined counterparts and also achieves better performance than other approaches to AMR parsing, without utilizing external semantic resources.
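To make the alignment objective in point (2) concrete, the following is a minimal sketch of a latent-variable log-linear model trained with contrastive estimation: the latent alignment is marginalized out, and the observed training example is scored against a neighborhood of noise examples. The arguments `features` and `candidate_alignments` are hypothetical placeholders, not the thesis's actual feature set or alignment space.

```python
import numpy as np

def log_sum_exp(scores):
    """Numerically stable log of a sum of exponentials."""
    m = np.max(scores)
    return m + np.log(np.sum(np.exp(scores - m)))

def contrastive_log_likelihood(w, observed, noise_neighborhood,
                               features, candidate_alignments):
    """Contrastive objective for a latent-variable log-linear alignment model.

    w                       -- weight vector (numpy array)
    observed                -- the training example actually seen
    noise_neighborhood      -- list of perturbed variants of `observed` used as noise
    features(x, a)          -- feature vector for example x under latent alignment a
    candidate_alignments(x) -- iterable of latent alignments considered for x

    The latent alignment is marginalized out in both the numerator (the observed
    example) and the denominator (the observed example plus its noise neighborhood),
    so training pushes probability mass toward the observed example without
    committing to a single alignment.
    """
    def log_marginal(x):
        scores = np.array([w @ features(x, a) for a in candidate_alignments(x)])
        return log_sum_exp(scores)

    numerator = log_marginal(observed)
    denominator = log_sum_exp(
        np.array([log_marginal(x) for x in [observed] + list(noise_neighborhood)]))
    return numerator - denominator
```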
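Similarly, the following sketch illustrates the flavor of the incremental joint decoding in point (3): a beam of partial analyses is extended left to right, and each newly identified concept is immediately connected to earlier concepts by scored relations (the component-wise step). The proposal functions `propose_concepts` and `propose_relations` are hypothetical stand-ins for the concept identification and relation identification components; this is not the thesis's actual decoder.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Hypothesis:
    score: float
    pos: int = field(compare=False, default=0)           # next unconsumed token
    concepts: tuple = field(compare=False, default=())   # identified concept fragments
    relations: tuple = field(compare=False, default=())  # arcs between concepts

def joint_decode(tokens, propose_concepts, propose_relations, beam_size=8):
    """Incremental joint decoding sketch: extend the concept segmentation token
    by token and, at each extension, immediately add scored relations between
    the new concept and the concepts already built, so the two subtasks share
    information inside a single beam.

    propose_concepts(tokens, pos) -- yields (span_length, concept, concept_score)
    propose_relations(old, new)   -- returns (relations, relation_score)
    """
    beam = [Hypothesis(0.0)]
    finished = []
    while beam:
        extended = []
        for hyp in beam:
            if hyp.pos >= len(tokens):
                finished.append(hyp)      # all tokens consumed
                continue
            for length, concept, c_score in propose_concepts(tokens, hyp.pos):
                # component-wise step: relate the new concept to earlier ones
                rels, r_score = propose_relations(hyp.concepts, concept)
                extended.append(Hypothesis(
                    score=hyp.score + c_score + r_score,
                    pos=hyp.pos + length,
                    concepts=hyp.concepts + (concept,),
                    relations=hyp.relations + tuple(rels)))
        beam = heapq.nlargest(beam_size, extended)  # prune to the top hypotheses
    return max(finished) if finished else None
```

Keeping both kinds of decisions inside one beam is what allows concept choices to be ranked in light of relation scores and vice versa, which a pipelined model cannot do.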
Keywords/Search Tags: AMR, Alignment, Beam search, Joint model, Log-linear model, Latent variable