A Study Of Machine Ethics In Human-Artificial Intelligence Interactions

Posted on: 2018-02-04
Degree: Master
Type: Thesis
Country: China
Candidate: H R Sun
Full Text: PDF
GTID: 2428330566988332
Subject: Management Science and Engineering
Abstract/Summary:
This study evaluates people's attitudes and preferences toward human-machine interaction from a machine ethics perspective. The objectives of this research are to explore how automation and decision-making approaches should be designed for AI facing ethical dilemmas under different scenarios, and to identify which stakeholder people tend to blame when an unexpected failure happens. Three factors were considered when building the scenario-based survey: severity, time limitation on the decision-making process, and relevance of the impact to the decision maker.

An interview was first conducted with 30 participants to gather their ideas and concerns about imagined future AI technology, in order to design a scenario-based survey that reflects the public's interests and concerns about future AI machines. The interview revealed the four most popular categories of future AI technology: smart homes, autonomous vehicles, AI professionals, and military combat. The scenarios presented in the survey were designed based on these four categories. The survey included a questionnaire, completed by 103 participants to collect quantitative data, and an in-depth interview with 30 participants to support and provide insight into the questionnaire results.

Results of a repeated measures ANOVA on the questionnaire data showed that all three factors have significant effects on people's choices of automation level, decision-making approach, and responsibility allocation. Monitored control, consensual control, or both were selected as the most preferred automation levels across the different scenarios, as opposed to manual control and full automation. The in-depth interview revealed that this preference was due to widespread distrust of AI technology. As for the preferred decision-making approach, the fairness/justice approach was most welcomed in low-severity and self-irrelevant scenarios, whereas the emotional/utilitarian approach was most welcomed in high-severity and self-relevant scenarios. This was because people wanted the AI to behave like humans and consider various aspects, including emotions, when making decisions on important issues such as those with high severity or self-relevance; for less serious and less important cases, people would like the AI to make decisions that are just and fair. Finally, the results showed that people believe the user/owner of the AI should bear the highest level of responsibility if an incident occurs. The in-depth interview revealed that most participants centered their concerns on the idea that, in their opinion, an AI's life does not have the same value as a human life; therefore, AIs should not have the power to assume responsibility or to make decisions that can impact or even end human lives.

The results of this study indicate that AI technology should be adaptively designed to suit specific situations with different combinations of influencing factors.
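For illustration only, below is a minimal sketch of a three-factor repeated-measures ANOVA of the kind described above, written in Python with pandas and statsmodels on simulated data. The variable names, factor codings, and data are assumptions for the sketch, not the thesis's actual dataset or analysis script.

# Hypothetical sketch: three within-subject factors (severity, time limit,
# self-relevance) and a numerically coded preference, with one response per
# participant per scenario cell. All names and values are illustrative.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subject in range(1, 104):  # 103 questionnaire participants
    for severity in ("low", "high"):
        for time_limit in ("ample", "urgent"):
            for relevance in ("self-irrelevant", "self-relevant"):
                # Preferred automation level coded 1 (manual) .. 4 (full automation)
                rating = rng.integers(1, 5)
                rows.append((subject, severity, time_limit, relevance, rating))

df = pd.DataFrame(rows, columns=["subject", "severity", "time_limit",
                                 "relevance", "automation_level"])

# Repeated-measures ANOVA across the three within-subject factors
res = AnovaRM(df, depvar="automation_level", subject="subject",
              within=["severity", "time_limit", "relevance"]).fit()
print(res.anova_table)

The resulting table lists F statistics and p-values for each main effect and interaction, which is the kind of output typically reported for a within-subject factorial design like the one used in the questionnaire.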
Keywords/Search Tags: Artificial Intelligence, Machine ethics, Human-machine interaction