
Inference with classifiers: A study of structured output problems in natural language processing

Posted on: 2006-03-13
Degree: Ph.D
Type: Dissertation
University: University of Illinois at Urbana-Champaign
Candidate: Punyakanok, Vasin
Full Text: PDF
GTID: 1458390008474508
Subject: Computer Science
Abstract/Summary:
A large number of problems in natural language processing (NLP) involve outputs with complex structure. Conceptually, the task in such problems is to assign values to multiple variables that represent the outputs of several interdependent components. A natural approach is to formulate the task as a two-stage process. In the first stage, the variables are assigned initial values by machine-learning-based classifiers. In the second, an inference procedure uses the outcomes of the first-stage classifiers, along with domain-specific constraints, to derive a globally consistent final prediction.

This dissertation introduces a framework, inference with classifiers, for studying such problems. The framework is applied to two important and fundamental NLP problems with complex structured outputs: shallow parsing and semantic role labeling. Shallow parsing identifies syntactic phrases in sentences and has proved useful in a variety of large-scale NLP applications. Semantic role labeling identifies the predicate-argument structure of sentences, a crucial step toward a deeper understanding of natural language. For both tasks, we develop state-of-the-art systems that have been used in practice.

Within this framework, we show the significance of incorporating constraints into the inference stage as a way to correct and improve the decisions of the stand-alone classifiers. Although incorporating constraints into inference necessarily improves global coherence, there is no guarantee of improvement in performance measured by the accuracy of the local predictions, the metric of interest for most applications. We develop a better theoretical understanding of this issue: under a reasonable assumption, we prove a sufficient condition guaranteeing that using constraints cannot degrade performance with respect to Hamming loss.
In addition, we provide an experimental study suggesting that constraints can improve performance even when the sufficient condition is not fully satisfied.
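The two-stage process described above can be sketched in miniature. The example below is an illustrative reconstruction, not the dissertation's actual system: the classifier scores are hypothetical stand-ins for first-stage outputs over BIO chunk tags (as in shallow parsing), and the second stage is a Viterbi-style dynamic program enforcing the structural constraint that an `I` (inside) tag may only continue a phrase, never start one.

```python
# A minimal sketch of "inference with classifiers" for BIO chunk tagging.
# The scores are hypothetical classifier outputs (assumption, not data
# from the dissertation); inference enforces the constraint that "I"
# cannot begin a sentence or follow "O".

TAGS = ["B", "I", "O"]

def valid_transition(prev, cur):
    # Constraint: an I tag must continue a phrase begun by B or I.
    return not (cur == "I" and prev in (None, "O"))

def constrained_decode(scores):
    """scores: list of dicts mapping tag -> local classifier score.
    Returns the highest-scoring tag sequence satisfying the constraint."""
    # best[t] = (total score, best sequence ending in tag t)
    best = {t: (scores[0][t], [t]) for t in TAGS if valid_transition(None, t)}
    for local in scores[1:]:
        new_best = {}
        for t in TAGS:
            candidates = [
                (s + local[t], seq + [t])
                for prev, (s, seq) in best.items()
                if valid_transition(prev, t)
            ]
            if candidates:
                new_best[t] = max(candidates)
        best = new_best
    return max(best.values())[1]

# Greedily taking each position's best tag would yield O, I, I -- an
# invalid sequence. Constrained inference corrects it globally.
scores = [
    {"B": 0.2, "I": 0.1, "O": 0.7},
    {"B": 0.3, "I": 0.6, "O": 0.1},
    {"B": 0.1, "I": 0.5, "O": 0.4},
]
print(constrained_decode(scores))  # ['O', 'B', 'I']
```

Note how the second-stage inference overrides the locally best tag at position 1 (`I`) in favor of `B`, exactly the kind of correction of stand-alone classifier decisions that the abstract describes.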
Keywords/Search Tags: Natural language, Inference, Classifiers, NLP, Constraints