
Research On Syntax Parsing Using Neural Networks

Posted on: 2018-10-19
Degree: Doctor
Type: Dissertation
Country: China
Candidate: H Zhou
Full Text: PDF
GTID: 1318330512497706
Subject: Computer Science and Technology
Abstract/Summary:
With the development of artificial intelligence, natural language processing (NLP) has become more and more important. Core NLP algorithms can be divided into three layers: lexical, syntactic, and semantic. Among these, syntax parsing is a key module: it takes in the results of the underlying lexical analysis and feeds syntactic structures to downstream semantic systems. Syntax parsing is therefore a key task in NLP.

Conventional syntax parsing models achieve considerable accuracy by employing large numbers of manually designed atomic features, together with many high-order features obtained by combining the atomic ones. This complex feature set requires a huge feature-engineering effort. In recent years, the rise of deep learning has triggered a number of neural-network parsing models, which use neural networks to model the features of syntax parsing. These models exploit the strong representational ability of neural networks and achieve much better results than their linear counterparts, with far less feature engineering. However, current neural parsing models still have several drawbacks:

1. The standard neural network is not necessarily the best fit for syntax parsing. The vanilla neural parsing model uses a simple feed-forward network, whereas parsing itself is a hierarchical process. A syntactic label depends on the syntactic structure, but current neural models treat syntactic labels and structural actions as the same kind of object in the output layer. This not only ignores the dependency between syntactic structures and labels, but also makes the computation from the hidden layer to the output layer very expensive.

2. Training by local optimization. Current neural parsing models are trained with local optimization, yet syntax parsing is a structured-prediction task. Given the complexity of syntax trees, a parsing model must perform many prediction steps to obtain the final tree. Local optimization can lead to error propagation, in which an early mistake strongly affects the following predictions.

3. Insufficient exploitation of global features. Parsing needs long-distance information to predict the parse tree, so global features are crucial. The conventional k-best reranking model is one way to use global features, but it is a static reranker: its training data is limited to the k-best list, which lacks diversity and leads to insufficient learning of global features.

In this dissertation, we propose a series of models that improve on the baseline in three respects: network architecture, global optimization, and better use of global features.

1. We propose a parsing model based on hierarchical neural networks, which matches the parsing task more closely by introducing a hierarchical output layer that first predicts the parsing structure and then predicts the syntactic labels. On both constituent and dependency parsing tasks, our hierarchical neural model is much faster than the baseline.

2. We propose a structured-prediction neural parsing model, which models whole syntactic trees directly through global optimization over the individual actions, with linear complexity. The proposed model greatly alleviates the label-bias and error-propagation problems and improves parsing accuracy.

3. From the perspective of global features, we propose a search-based dynamic reranking model for neural syntax parsing. Unlike conventional k-best reranking, the proposed model integrates search and learning through a dynamic action-revising process, using the reranking model both to guide modification of the base outputs and to rerank the candidates. This improves the diversity seen by the reranking model and yields better parsing performance.

4. Finally, we also explore the use of syntactic information in neural machine translation (NMT). In conventional NMT, the decoder generates a sentence word by word, packing all linguistic granularities into the same RNN time-scale. We propose a new type of decoder for NMT that splits the decoder state into two parts and updates them on two different time-scales. In this way, the target sentence is translated hierarchically, from chunks to words, leveraging information at different granularities. Experiments show that our proposed model significantly improves translation performance over the state-of-the-art NMT model.
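The two-time-scale decoder can be caricatured as two recurrent states running on different clocks: a slow, chunk-level state that updates only at chunk boundaries, and a fast, word-level state that updates at every step and reads the chunk state. The sketch below uses a plain tanh RNN cell; the cell type, dimensions, and the boundary signal are illustrative assumptions, not the model's actual equations.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # illustrative state size

def rnn_step(state, x, W, U):
    # A plain RNN cell, standing in for whatever gated cell is used.
    return np.tanh(W @ state + U @ x)

W_word, U_word = rng.normal(size=(DIM, DIM)), rng.normal(size=(DIM, DIM))
W_chunk, U_chunk = rng.normal(size=(DIM, DIM)), rng.normal(size=(DIM, DIM))

def decode(embeddings, boundaries):
    """boundaries[t] is True when word t starts a new chunk (assumed given here)."""
    chunk = np.zeros(DIM)   # slow state: updated once per chunk
    word = np.zeros(DIM)    # fast state: updated every word
    states = []
    for x, is_boundary in zip(embeddings, boundaries):
        if is_boundary:
            # Slow clock: summarize the finished chunk into the chunk state.
            chunk = rnn_step(chunk, word, W_chunk, U_chunk)
        # Fast clock: the word state sees both the input and the chunk state.
        word = rnn_step(word, x + chunk, W_word, U_word)
        states.append(word)
    return states
```

The point of the split is that chunk-level information changes on a coarser time-scale than word-level information, so the two states let the decoder leverage both granularities.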
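The hierarchical output layer described for the parsing model (contribution 1) can likewise be sketched as a factored prediction: score the structural action first, then score a label conditioned on the chosen action. The action inventory, label-set size, and weights below are invented for illustration and are not the thesis's actual architecture; the sketch only shows why the factorization shrinks the output-layer computation.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN, N_ACTIONS, N_LABELS = 50, 3, 40  # e.g. SHIFT / LEFT-ARC / RIGHT-ARC

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Flat output layer: one row per (action, label) combination.
W_flat = rng.normal(size=(N_ACTIONS * N_LABELS, HIDDEN))

# Hierarchical output layer: structure first, then a label given the structure.
W_action = rng.normal(size=(N_ACTIONS, HIDDEN))
W_label = rng.normal(size=(N_ACTIONS, N_LABELS, HIDDEN))

def predict_flat(h):
    # Scores N_ACTIONS * N_LABELS rows, treating action and label as one object.
    return divmod(int(softmax(W_flat @ h).argmax()), N_LABELS)

def predict_hier(h):
    a = int(softmax(W_action @ h).argmax())    # predict the syntactic structure
    l = int(softmax(W_label[a] @ h).argmax())  # then the label, given the structure
    return a, l
```

Per step, the flat layer scores N_ACTIONS * N_LABELS rows, while the hierarchical one scores only N_ACTIONS + N_LABELS, which is one source of the speed-up the abstract claims.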
Keywords/Search Tags: Natural Language Processing, Syntax Parsing, Deep Learning, Neural Networks, Machine Translation