Collaboration between humans and artificial intelligence is a growing trend, and artificial intelligence may eventually be seen as a member of society. Currently, artificial intelligence is increasingly used to make autonomous or assisted decisions for humans, including judicial decisions. However, incorrect decisions could cause harm or damage judicial fairness. Therefore, whenever artificial intelligence is applied, whether in the judicial field or elsewhere, both its practicability and its potential social and ethical issues must be considered. It is undeniable that all agents should be held morally responsible for the consequences of their actions; yet whether artificial intelligence can be an agent remains uncertain, and the issue of responsibility cannot be ignored. While mature models exist for human moral judgment and responsibility attribution, the moral judgment model for artificial intelligence is still in its early stages and requires further exploration. Anthropomorphism is crucial to exploring the ethical issues of artificial intelligence because it allows people to treat artificial intelligence as a human being and bring it into the scope of moral consideration, thereby viewing artificial intelligence as a moral agent. As artificial intelligence becomes more intelligent and anthropomorphic, its ethics deserve more attention. This article investigates the public's moral judgment model of anthropomorphic artificial intelligence, that is, how anthropomorphism affects people's moral judgments about artificial intelligence that makes wrong judicial decisions. A further aim is to investigate the psychological mechanism through which anthropomorphism affects moral judgments about artificial intelligence, drawing on machine heuristic theory. In three contextual studies (N = 664), we found that anthropomorphism leads to more lenient moral judgments: people make significantly more lenient moral judgments about a high-anthropomorphic artificial intelligence that makes wrong decisions than about a low-anthropomorphic one. Specifically, the behavior of high-anthropomorphic artificial intelligence is seen as less morally wrong (Studies 1-3), more permissible (Studies 1 and 3), and less blameworthy (Studies 2-3) than that of low-anthropomorphic artificial intelligence. Moreover, the effect of anthropomorphism on more lenient moral judgments is fully mediated by a chain of perceived advancement, machine heuristics, and moral justification. Together, the three studies demonstrate the predictive effect of anthropomorphism on moral judgments and reveal that anthropomorphism leads to more lenient moral judgments. They also establish a chain mediation model comprising perceived advancement, machine heuristics, and moral justification, providing insights into the design of judicial AI. For future directions, we suggest that more research is needed, including cross-context and cross-cultural studies on how anthropomorphism leads to more lenient moral judgments, and that other possible explanations for this effect are also worth exploring.