
Linguistic Knowledge Transfer for Enriching Vector Representation

Posted on: 2018-07-29
Degree: Ph.D
Type: Dissertation
University: The Ohio State University
Candidate: Kim, Joo-Kyung
Full Text: PDF
GTID: 1449390002950925
Subject: Artificial Intelligence
Abstract/Summary:
Many state-of-the-art neural network models have a huge number of parameters, so a large number of labeled training examples is necessary to train them sufficiently. Such models may not be properly trained if there are not enough training examples for the target tasks. This dissertation focuses on transfer learning methods, which improve performance on the target tasks in such situations by leveraging external resources or models from other tasks. Specifically, we introduce transfer learning methods that enrich the word or sentence vector representations of neural network models by transferring linguistic knowledge.

Usually, the first layer of a neural network for Natural Language Processing (NLP) is a word embedding layer. Word embeddings represent each word as a real-valued vector, where semantically or syntactically similar words tend to have similar vector representations. The first part of this dissertation is mainly about word embedding enrichment, which is categorized as an inductive transfer learning methodology. We show that word embeddings can represent semantic intensity scales such as "good" < "great" < "excellent" in vector spaces, and that semantic intensity orderings of words can be used as knowledge sources to adjust word vector positions, improving word semantics as evaluated on word-level semantic tasks. We also show that word embeddings enriched with linguistic knowledge can improve the performance of a Bidirectional Long Short-Term Memory (BLSTM) model for intent detection, a sentence-level downstream task, especially when only small numbers of training examples are available.

The second part of this dissertation concerns sentence-level transfer learning for sequence tagging tasks. We introduce a cross-domain transfer learning model for dialog slot-filling, which is an inductive transfer learning method, and a cross-lingual transfer learning model for Part-of-Speech (POS) tagging, which is a transductive transfer learning method. Both models utilize a common BLSTM that enables knowledge transfer from other domains/languages, and private BLSTMs for domain/language-specific representations. We also use adversarial training and other auxiliary objectives, such as representation separation and bidirectional language models, to further improve transfer learning performance. We show that these sentence-level transfer learning models improve sequence tagging performance without exploiting any other cross-domain or cross-lingual knowledge.
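To make the shared/private architecture described above concrete, the following is a minimal illustrative sketch (not the dissertation's actual code) of a sequence tagger with a common BLSTM, a private BLSTM, and an adversarial domain classifier trained through gradient reversal. All module names and dimensions are assumptions for illustration; the auxiliary objectives mentioned in the abstract (representation separation, bidirectional language models) are omitted.

```python
# Illustrative sketch in PyTorch: shared/private BLSTM tagger with an
# adversarial domain classifier. Names and sizes are hypothetical.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

class SharedPrivateTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_tags, num_domains):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared BLSTM: captures knowledge common to all domains/languages.
        self.shared = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        # Private BLSTM: captures domain/language-specific representations.
        self.private = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.tagger = nn.Linear(4 * hidden_dim, num_tags)
        # Domain classifier on the shared representation; gradient reversal
        # pushes the shared BLSTM toward domain-invariant features.
        self.domain_clf = nn.Linear(2 * hidden_dim, num_domains)

    def forward(self, tokens):
        emb = self.embed(tokens)                       # (batch, seq, emb_dim)
        shared_out, _ = self.shared(emb)               # (batch, seq, 2*hidden)
        private_out, _ = self.private(emb)             # (batch, seq, 2*hidden)
        tag_logits = self.tagger(torch.cat([shared_out, private_out], dim=-1))
        # Adversarial branch: mean-pool shared states, reverse gradients.
        pooled = GradReverse.apply(shared_out.mean(dim=1))
        domain_logits = self.domain_clf(pooled)
        return tag_logits, domain_logits
```

In this sketch, the tagging loss is computed from tag_logits on the target task, while a cross-entropy loss on domain_logits, back-propagated through the reversed gradient, discourages the shared BLSTM from encoding domain identity.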
Keywords/Search Tags: Transfer, Models, Vector, Linguistic knowledge, Training examples, Improve, Word