
Towards Modeling Collaborative Task Oriented Multimodal Human-human Dialogues

Posted on: 2015-07-29
Degree: Ph.D.
Type: Dissertation
University: University of Illinois at Chicago
Candidate: Chen, Lin
Full Text: PDF
GTID: 1478390017993416
Subject: Computer Science
Abstract/Summary:
This research took place in the larger context of building effective multimodal interfaces that help elderly people live independently. The final goal was to build a dialogue manager that could be deployed on a robot; the robot would help elderly people perform Activities of Daily Living (ADLs), such as cooking dinner and setting a table. In particular, I focused on building dialogue-processing modules to understand such multimodal dialogues. Specifically, I investigated the functions of gestures (e.g., pointing gestures and Haptic-Ostensive actions, which involve force exchange) in dialogues concerning collaborative ADL tasks.

This research employed an empirical approach: the machine-learning-based modules were trained on collected human experiment data. The ELDERLY-AT-HOME corpus was built from a data collection of human-human collaborative interactions in the elderly-care domain. Multiple categories of annotation were then added to build the Find corpus, which contains only the experiment episodes in which two subjects collaboratively search for objects (e.g., a pot or a spoon), an essential subtask of ADLs.

This research developed three main modules: coreference resolution, Dialogue Act classification, and task state inference. The coreference resolution experiments showed that modalities other than language play an important role in bringing antecedents into the dialogue context. The Dialogue Act classification experiments showed that multimodal features, including gestures, Haptic-Ostensive actions, and subject location, significantly improve accuracy. They also showed that dialogue games improve performance, even when the dialogue games are inferred dynamically.
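To make the multimodal feature idea concrete, the sketch below shows one plausible way to fold gestures, Haptic-Ostensive actions, and subject location into the feature set of a Dialogue Act classifier. The feature names, act labels, and rules are illustrative assumptions, not the dissertation's actual annotation scheme or learned model.

```python
# Hedged sketch: combining verbal and non-verbal evidence for
# Dialogue Act classification. All feature names and act labels
# here are hypothetical placeholders.

def extract_features(utterance, gesture=None, ho_action=None, location=None):
    """Build a multimodal feature dict for one utterance."""
    feats = {f"word={w.lower()}": 1 for w in utterance.split()}
    if gesture:
        feats[f"gesture={gesture}"] = 1      # e.g. a pointing gesture
    if ho_action:
        feats[f"ho={ho_action}"] = 1         # e.g. grasping an object
    if location:
        feats[f"loc={location}"] = 1         # subject's location in the room
    return feats

def classify_dialogue_act(feats):
    """Toy rule-based stand-in for the learned classifier."""
    if "word=where" in feats or "word=which" in feats:
        return "Info-Request"
    if any(k.startswith("ho=") for k in feats):
        return "Action-Directive" if "word=open" in feats else "Acknowledge"
    if any(k.startswith("gesture=") for k in feats):
        return "Statement"
    return "Other"

feats = extract_features("The pot is over there", gesture="pointing")
print(classify_dialogue_act(feats))  # prints "Statement"
```

In a learned setting these dicts would feed a standard feature-based classifier; the point of the sketch is only that non-verbal channels enter as additional features alongside the words.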
A heuristic, rule-based task state inference system that uses the results of Dialogue Act classification and coreference resolution was designed and evaluated; the experiments showed reasonably good results.

Compared to previous work, the contributions of this research are as follows: 1) built a multimodal corpus focusing on human-human collaborative task-oriented dialogues; 2) investigated coreference resolution from language to objects in the real world; 3) experimented with Dialogue Act classification using utterances, gestures, and Haptic-Ostensive actions; 4) implemented and evaluated a task state inference system.
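As a rough illustration of how a heuristic task-state tracker can be driven by upstream Dialogue Act labels and resolved referents, consider the sketch below. The state names, act labels, and update rules are hypothetical; the dissertation's actual rule set is not reproduced here.

```python
# Hedged sketch of a rule-based state tracker for a "Find" task,
# consuming one (dialogue act, referent) pair per utterance.
# Phase names and rules are illustrative assumptions.

def update_task_state(state, dialogue_act, referent=None):
    """Advance the task state given one classified utterance."""
    if dialogue_act == "Info-Request" and referent:
        return {"phase": "searching", "target": referent}   # search begins
    if dialogue_act == "Statement" and state.get("target") == referent:
        return {"phase": "located", "target": referent}     # object found
    if dialogue_act == "Acknowledge" and state.get("phase") == "located":
        return {"phase": "done", "target": state["target"]} # task complete
    return state                                            # no rule fires

state = {"phase": "idle", "target": None}
state = update_task_state(state, "Info-Request", referent="pot")
state = update_task_state(state, "Statement", referent="pot")
state = update_task_state(state, "Acknowledge")
print(state["phase"])  # prints "done"
```

Because each rule conditions on both the dialogue act and the coreference output, errors in either upstream module propagate into the inferred state, which is why the tracker is evaluated end to end.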
Keywords/Search Tags: Dialogue, Multimodal, Task, Haptic-Ostensive actions, Collaborative, Coreference resolution, Human-human, Gestures