
A neural network model for the representation of natural language

Posted on: 2005-01-31
Degree: Ph.D
Type: Thesis
University: Georgetown University
Candidate: Koutsomitopoulou, Eleni
Subject: Language
Abstract/Summary:
Current research in natural language processing demonstrates the importance of analyzing syntactic relationships (word order, topicalization, passivization, dative movement, particle movement, pronominalization) as dynamic resonant patterns of neuronal activation (Loritz, 1999). Following this line of research, this study demonstrates the importance of also analyzing conceptual relationships (polysemy, homonymy, ambiguity, metaphor, neologism, coreference) as dynamic resonant patterns represented in terms of neuronal activation. This view has implications for the representation of natural language (NL). By contrast, formal representation methods abstract away from the actual properties of real-time natural language input, and rule-based systems have limited representational power.

Since NL is a human neurocognitive phenomenon, we presume that it is best represented in a neural network model. This study focuses on a neural network simulation, the Cognitive Linguistic Adaptive Resonant Network (CLAR-NET), which models online, real-time associations among concepts. CLAR-NET is a simulated Adaptive Resonance Theory (ART; Grossberg, 1972 et seq.) model. Through a series of experiments, I address particular linguistic problems such as homonymy, neologism, polysemy, metaphor, constructional polysemy, contextual coreference, subject-object control, event-structure metaphor, and negation. The aim of this study is to infer natural-language-specific mappings of concepts in the human neurocognitive system on the basis of known facts and observations provided by conceptual metaphor theory (CMT) and adaptive grammar (AG; Loritz, 1999), two theories of linguistic analysis, together with known variables drawn from the brain and cognitive sciences and from previous neural network systems built for similar purposes. Additionally, the study investigates the extent to which these linguistic phenomena can be plausibly analyzed and accounted for within an ART-like neural network model.

My basic hypothesis is that the association among concepts is primarily an expression of domain-general cognitive mechanisms that depend on continuous learning, both of previously presented linguistic input and of everyday, direct experiential (i.e., sensory-physical) behaviors represented in natural language as "common knowledge" (or "common sense"). According to this hypothesis, complex conceptual representations are associated not with pre-postulated feature structures but with time-sensitive dynamic patterns of activation. These patterns can reinforce previous learning and/or create new "place-holders" in the conceptual system for future value binding.

This line of investigation holds implications for language learning, neurolinguistics, metaphor theory, information retrieval, knowledge engineering, case-based reasoning, knowledge-based machine translation systems, and related ontologies.
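The abstract does not reproduce any of CLAR-NET's implementation. As background to the resonance mechanism that ART-based models such as CLAR-NET build on, the following is a minimal sketch of a binary (ART-1-style) category search in Python; the function name, the vigilance value, and the choice parameter are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def art1_present(x, prototypes, rho=0.7, alpha=0.5):
    """Present one binary input to a minimal ART-1-style network.

    x          : binary input vector (numpy array of 0s and 1s, not all zero)
    prototypes : list of learned binary category templates (modified in place)
    rho        : vigilance in [0, 1]; higher values demand a closer match
                 before an existing category may resonate
    alpha      : choice parameter biasing the category-selection ranking

    Returns the index of the category that resonates with x, recruiting
    a fresh category node if every existing one is reset.
    """
    norm_x = x.sum()
    # Category choice: rank existing nodes by bottom-up match strength.
    ranked = sorted(
        ((np.logical_and(x, w).sum() / (alpha + w.sum()), j)
         for j, w in enumerate(prototypes)),
        reverse=True)
    for _, j in ranked:
        # Vigilance test: does the top-down template match the input well enough?
        if np.logical_and(x, prototypes[j]).sum() / norm_x >= rho:
            # Resonance: fast learning prunes the template toward the input.
            prototypes[j] = np.logical_and(x, prototypes[j]).astype(int)
            return j
        # Mismatch: this node is reset and the search continues.
    prototypes.append(x.astype(int).copy())  # no resonance: new "place-holder" node
    return len(prototypes) - 1

cats = []
art1_present(np.array([1, 1, 0, 0, 1]), cats)  # first input recruits category 0
art1_present(np.array([1, 1, 0, 0, 0]), cats)  # similar input resonates with, and refines, category 0
```

The reset-and-search loop is what gives ART networks their stability-plasticity balance: familiar inputs refine an existing cluster, while sufficiently novel inputs recruit a new node, which parallels the "place-holder" creation hypothesized above.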
This study finds that although short-term memory (STM) effects in ART-like networks are significant, long-term memory (LTM) calculation usually yields better semantic discrimination. It is suggested that the internal structure of lexical frames corresponding to clusters of congenial associations (in effect, neuronal subnetworks) is maintained as long as it resonates with new input patterns or is consolidated in long-term memory traces. Different degrees of similarity to (or deviation from) previously acquired knowledge clusters are computed as activation levels of the corresponding neuronal nodes and may be measured via differential equations of neuronal activity. The overall conclusion is that ART-like networks can model interesting linguistic phenomena in a neurocognitively plausible way.
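The abstract refers to differential equations of neuronal activity without stating them. A representative equation of this class from the ART literature the model draws on is Grossberg's shunting (membrane) equation for the STM activation of a node; whether CLAR-NET uses exactly this form is an assumption here:

$$\frac{dx_i}{dt} = -A\,x_i + (B - x_i)\,I_i^{+} - (C + x_i)\,I_i^{-}$$

where $x_i$ is the activation of node $i$, $A$ is a passive decay rate, $B$ and $-C$ are the upper and lower saturation bounds on activation, and $I_i^{+}$ and $I_i^{-}$ are the total excitatory and inhibitory inputs to the node. Degrees of similarity between an input and a learned cluster can then be read off the equilibrium values of $x_i$.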
Keywords: Natural language, model, network, linguistic, neuronal, representation