
Research On Natural Language Syntactic Parsing Based On Deep Learning

Posted on: 2017-04-06
Degree: Master
Type: Thesis
Country: China
Candidate: Q Y Zhou
Full Text: PDF
GTID: 2308330503987197
Subject: Computer Science and Technology
Abstract/Summary:
Syntactic parsing is one of the fundamental research problems in the natural language processing community and has been studied for decades. It is essential to many tasks such as question answering, search query understanding, semantic parsing, and knowledge base construction. Recently, deep learning techniques have shown great success in many fields, and applying deep learning to natural language processing tasks has become a popular research direction. This dissertation therefore focuses on applying deep learning to syntactic parsing.

We first study a transition-based dependency parsing model built on a feed-forward neural network, which is simple and performs well. We re-implement this model and improve it by carefully tuning its parameters.

We then propose a sequence-learning dependency parsing model using long short-term memory (LSTM). The feed-forward neural network is first trained as a greedy model over rich local features and then reused as a feature extractor: the output of the pre-trained extractor becomes the input to the LSTM. We further train the LSTM network as the transition-action classifier of our dependency parser. The LSTM classifier learns not only the rich local features but also long-distance dependencies and parsing-action histories, which allows it to model the entire sentence sequence rather than isolated local parsing configurations. Experiments on the English Penn Treebank show that our model significantly outperforms the greedy feed-forward neural network model.

Finally, we study an end-to-end constituent parsing model. Experiments on a sequence-to-sequence model with a single attention mechanism show that its decoder relies heavily on decoding rules. We propose a BiAttention model that reduces the decoder's dependence on such rules by incorporating two attention mechanisms into the sequence-to-sequence model: it captures context information by attending to the source side and leverages history information by attending to the target side. Experimental results show that our model produces fewer malformed trees and improves the constituent parser's performance on correct trees.
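To make the second contribution concrete, the following is a minimal sketch of the described architecture: a feed-forward network whose hidden layer, pre-trained as a greedy parser, is reused as a feature extractor, with an LSTM over the per-configuration features acting as the transition-action classifier. This is not the thesis code; the PyTorch framing, layer names, dimensions, and the three-action inventory are illustrative assumptions.

```python
# Illustrative sketch only: feed-forward feature extractor + LSTM transition classifier.
import torch
import torch.nn as nn

class FeedForwardExtractor(nn.Module):
    """Greedy parser's network; its hidden layer is reused as a feature extractor."""
    def __init__(self, feat_dim, hidden_dim, num_actions):
        super().__init__()
        self.hidden = nn.Linear(feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, num_actions)  # only used when pre-training the greedy model

    def forward(self, local_feats):
        # Rich local features of a parsing configuration -> hidden representation.
        return torch.tanh(self.hidden(local_feats))

class LSTMTransitionClassifier(nn.Module):
    """LSTM over the sequence of parser configurations within one sentence."""
    def __init__(self, hidden_dim, lstm_dim, num_actions):
        super().__init__()
        self.lstm = nn.LSTM(hidden_dim, lstm_dim, batch_first=True)
        self.action = nn.Linear(lstm_dim, num_actions)

    def forward(self, config_feats):
        # config_feats: (batch, num_transitions, hidden_dim), one vector per configuration.
        out, _ = self.lstm(config_feats)   # carries action history and long-distance context across steps
        return self.action(out)            # per-step transition-action scores

# Toy usage: one sentence, 10 transitions, 48-dim local features, 3 actions (shift/left-arc/right-arc).
extractor = FeedForwardExtractor(feat_dim=48, hidden_dim=200, num_actions=3)
classifier = LSTMTransitionClassifier(hidden_dim=200, lstm_dim=128, num_actions=3)
local_feats = torch.randn(1, 10, 48)
scores = classifier(extractor(local_feats))   # (1, 10, 3) action scores, one row per parsing step
```

For the third contribution, a comparable sketch of a single decoder step with two attention mechanisms is given below: one attention over the encoded source sentence and one over the decoder's own output history. Again, the cell type, dot-product attention, dimensions, and variable names are assumptions for illustration, not the thesis implementation.

```python
# Illustrative sketch only: one decoder step with source-side and target-side attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

def attend(query, keys):
    # Dot-product attention: query (1, dim), keys (steps, dim) -> context (1, dim).
    weights = F.softmax(keys @ query.squeeze(0), dim=0)   # (steps,)
    return weights.unsqueeze(0) @ keys                    # (1, dim)

class BiAttentionDecoderStep(nn.Module):
    def __init__(self, dim, vocab_size):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)
        self.out = nn.Linear(3 * dim, vocab_size)  # decoder state + source context + history context

    def forward(self, y_embed, state, enc_states, dec_history):
        # y_embed, state: (1, dim); enc_states, dec_history: (steps, dim).
        state = self.cell(y_embed, state)
        src_ctx = attend(state, enc_states)    # source attention: context from the input sentence
        tgt_ctx = attend(state, dec_history)   # target attention: symbols already emitted
        logits = self.out(torch.cat([state, src_ctx, tgt_ctx], dim=-1))
        return logits, state

# Toy usage: decode one step of a linearized parse tree over an 8-token sentence.
dim, vocab = 64, 120
step = BiAttentionDecoderStep(dim, vocab)
enc_states = torch.randn(8, dim)      # encoder outputs for the source tokens
dec_history = torch.randn(3, dim)     # states of the three symbols decoded so far
logits, state = step(torch.randn(1, dim), torch.randn(1, dim), enc_states, dec_history)
```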
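The design intuition in both sketches is the same: a per-step local model provides the representation, while a recurrent or attention component supplies the global sentence-level and history information that a purely greedy model cannot see.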
Keywords/Search Tags: Natural Language Processing, Deep Learning, Dependency Parsing, Constituent Parsing, Long Short-Term Memory