As self-driving cars advance toward higher levels of automation, the tension between morality and law urgently needs to be reconciled, particularly the question of which subjects bear responsibility for self-driving car accidents that occur in ethical dilemmas, and legal standards for emerging AI technologies must be established that satisfy both theoretical and practical requirements. In ethical dilemmas, determining which decisions and actions are morally permissible, prohibited, or mandatory for self-driving cars is a well-known philosophical conundrum. Viewed from the perspective of the ethical dilemma, self-driving car accidents face three difficulties at once: technological, moral, and legal. Leading countries in the field of self-driving cars have begun to formulate rules that clarify the framework of liability, but China's current legal norms are insufficient to identify the responsible subjects of such ethically charged accidents and to allocate their liability. On the one hand, self-driving cars differ from other intelligent tools, so the special role of the vehicle and its driving system in an accident must be clarified; on the other hand, clear legal obligations must be established to reduce the friction between ethics and technology. To identify the responsible subjects of self-driving car accidents in ethical dilemmas more reasonably, a scenario experiment is designed to provide a theoretical reference for future practice and, ultimately, to achieve an equitable balance of interests. At present, academia has reached no clear judgment on who is responsible for self-driving car accidents in ethical dilemmas. Moreover, the decision-making system of the self-driving algorithm suffers from the "black box" problem, so the value judgments embedded in the system cannot be examined. International policy standards on these issues also have limited applicability in China, which makes it necessary to clarify the question of the responsible subject. As self-driving cars shift from human control to full autonomy, algorithms take on more and more of the "control" function. Although scholars disagree on whether AI should have the legal standing to bear responsibility, the public generally regards self-driving cars as human tools. To classify the responsible subjects of autonomous vehicle accidents in ethical dilemmas and to study their status, the morality of the autonomous driving algorithm must be assessed, since it is an important factor in determining the status of the responsible subject; whether the algorithm's decision model passes multiple rounds of ethical screening against market, algorithm-developer, and care-related options, and yields reasonable results, can serve as one auxiliary criterion. However, because self-driving car accidents in ethical dilemmas differ significantly from those outside them, the driver should not relinquish control in an ethical dilemma. In addition, an ethical dilemma scenario model is built from practical cases to identify other responsible subjects, such as algorithm developers and automobile manufacturers, and to define their legal obligations, so as to better explain how each subject assumes responsibility. After confirming that the autonomous driving system cannot bear responsibility independently, algorithm developers are assigned appropriate disclosure and warning obligations; combined with the driver's duty of care and duty to take over, this yields a relatively feasible solution to the question of liable subjects when an autonomous vehicle has an accident in an ethical dilemma.