
Applying human characteristics of trust to animated anthropomorphic software agents

Posted on: 2011-08-02
Degree: Ph.D.
Type: Dissertation
University: State University of New York at Buffalo
Candidate: Green, Brian Daniel
Full Text: PDF
GTID: 1468390011972357
Subject: Psychology
Abstract/Summary:
Trust has profound effects on humans' interactions with one another as well as with automated technologies (Lee and See, 2004). A series of empirical studies conducted by Reeves and Nass (2002) suggests that humans interact with different forms of media guided by the same psychosocial principles that govern their interactions with other humans. It is suggested that, because technology changes much more rapidly than our ability to adapt to new social systems, we treat all social interactions as similar to an interaction with another human.

The goal of the research conducted here was to identify physical and behavioral properties likely to produce halo effects when humans interact with one another and to apply them deliberately to autonomous anthropomorphic software agents. Three studies were developed and conducted to determine: (1) Can people readily interpret characteristics typically associated with trust in people when those characteristics are applied to a software agent? (2) Will people consistently rate agents with more characteristics associated with trust as more trustworthy? And (3) what effect, if any, does altering these properties have on task performance?

The first study used the Jian, Bisantz, and Drury (2000) Trust Scale to measure subjective levels of trust for a series of 120 agents. Three design factors were found to influence these agent ratings: the type of agent character (referred to as normality of form), the amount of agent eye movement, and the shape of the agent's chin. A fourth factor, the distance between the eyes, was found to have no effect on subjective trust ratings. An interaction effect between normality of form and eye movement was also found.

Sixteen agents, representing the levels of interest of the three significant factors, were selected from the 120 tested in the initial experiment. These agents were then presented to an independent sample in a paired-comparisons experiment.
Psychometric scaling of the results, as described by Guilford (1954), revealed three levels of agent trustworthiness: Distrusted, Neutral, and Trusted. These groupings were confirmed by dendrogram analysis. A GLM ANOVA confirmed that eye movement and normality of form were relevant factors but failed to replicate the first study's findings regarding chin shape.

The final study investigated three agents from the second study, along with a non-anthropomorphic agent condition and a control condition, to determine what role, if any, the appearance and reliability of the agent had on reading comprehension performance. A software program was developed in which agents appeared to read passages of text and provided visual feedback to the user regarding relevant (and sometimes irrelevant) content essential to successfully answering a series of reading comprehension questions. Performance varied with agent appearance. In particular, the alien agent, which was least trusted in the previous studies, actually led participants to perform better than the other agents did. It is likely that users disused this agent and defaulted to their own abilities rather than relying upon it (Lee and See, 2004); it was the users' superior ability relative to the agent's that produced the increased task performance.

This document reviews the relevant preliminary research done by others and explains the rationale for the current studies. In addition, the research conducted for this dissertation is explained in detail. The results of the first and second studies provide evidence that humans do in fact reliably and consistently apply the trust-related properties they use in assessing other humans to anthropomorphic agent designs. The results of the third study provide new evidence that humans tend to treat agents as if they were also human.
Finally, the potential impact that these results may have on anthropomorphic agent design, trust in automation, and human factors engineering is discussed. (Abstract shortened by UMI.)
Keywords/Search Tags:Agent, Human, Anthropomorphic, Software, Characteristics, Factors