It may lead to improper aggravation or reduction of responsibility if the subjects of responsibility allocation are vague. A moral responsibility theory for artificial intelligence is therefore needed to provide a basis for the allocation of responsibility. In driving accidents under the joint control of human and machine, the human driver usually does not have full control of the car, while the artificial intelligence driving system is usually considered to lack the capacities necessary for responsibility, so neither of them can be fully responsible for driving accidents. This article explores several weak theories of moral responsibility, sorts out the differences among them, and answers the question of whether an artificial intelligence driving system can bear moral responsibility. We find that autonomous driving systems cannot meet the excessively demanding conditions for moral responsibility set by some of these theories, while the remaining theories have been criticized for assigning moral responsibility unfairly. In addition to sorting out existing problems in theories of moral responsibility, we confirm the legitimacy of the theoretical position from a new perspective and defend one of these theories, arguing that the fairness problem is not sufficient to refute it. Finally, we argue that existing consequentialist theories of moral responsibility can solve the problem of the moral responsibility of artificial intelligence agents. This conclusion prompts us to reflect on the relationship between moral responsibility and artificial intelligence. We summarize the thoughts generated in this research and list some key problems in this field that remain to be solved in the future.