Artificial intelligence is one of the most important engines of world economic development in this century. After half a century of vigorous development, the deep integration of artificial intelligence with every sector of society has brought about profound changes in the social structure. The opacity of its algorithms (the "black box") and its capacity to surpass the human brain in calculation and logic allow human beings to enjoy the technological dividend, while at the same time exposing society to the ethical risks the technology creates. On the one hand, artificial intelligence products have sprung up in every industry; on the other hand, traditional human-centered ethical theory struggles to cope with the resulting risks. From the perspective of risk ethics grounded in the principle of responsibility, this paper attempts to apportion ethical responsibility and explore countermeasures in order to avert these ethical risks.

The first chapter discusses the current state of artificial intelligence and the theory of risk ethics, and analyzes the underlying logic of applying risk ethics to artificial intelligence. Unlike traditional technology, which is merely instrumental, artificial intelligence has the ability to participate in practice autonomously, and its degree of autonomy (intelligence) directly determines its moral standing. The characteristics and trends of its application show that its technical risks are wide-ranging and irreversible. Risk ethics, built on the ethics of responsibility, takes technological risk as its object of study; it can constrain the risks of artificial intelligence and, under moral norms, ensure that the technology better serves people.

The second chapter analyzes the ethical risks caused by artificial intelligence in depth. With respect to the technology itself, artificial intelligence reaches decisions by analyzing data, with the algorithm at its core. The opacity of the algorithm, its built-in preferences, and its autonomous character give rise to ethical disputes concerning justice, responsibility, and moral subjecthood. With respect to social practice, the application of artificial intelligence brings risks to human life and property, the risk of alienation of personal rights, and a new human-machine relationship, all of which urgently require regulation by and a response from ethical theory.

The third chapter analyzes the causes of the ethical risks of artificial intelligence. From a technical point of view, defects in algorithms and the alienation of big data are the direct causes. The ethical risks arising from the autonomy of artificial intelligence reveal a split between subjectivity and objectivity in the theories of technological neutrality and instrumental rationality: the so-called rationality and neutrality are in fact rationality rooted in subjective will. By asking whether artificial intelligence possesses "substantial autonomy" or merely "formal autonomy", this chapter exposes the shortcomings of traditional ethical theory in dealing with the technical risks of artificial intelligence.

The fourth chapter draws on risk ethics to explore paths for averting the ethical risks of artificial intelligence. First, by examining the relationship between humans and artificial intelligence, it proposes a new composite subject in moral philosophy, one that takes the human being as the subject and artificial intelligence as a "quasi-subject" subordinate to the human. The basic principles for applying artificial intelligence are then formulated for this composite subject. The responsibilities of humans and of artificial intelligence are apportioned through the concept of responsibility in risk ethics, ensuring that the risks of the technology remain controllable at all times. In a technological society, the threat that risk poses to people blurs the boundary between morality and law. For the ethical risks of artificial intelligence, the institutionalized practice of moral responsibility requires not only targeted systems and regulations in the real world, but also a digital world founded on the value that technology serves people. Finally, because risk is uncertain, artificial intelligence needs a circuit-breaker mechanism that takes effect before or as an accident occurs, so that its negative impact is minimized.

Once artificial intelligence technology is abused, its impact will be extremely extensive. In spatial scope, its risks touch everyone; in temporal duration, they may harm future generations. Clearly, the current level of technology cannot eliminate these risks entirely, and they will accompany the development of human society for a long time. This requires us to bring future society within the scope of responsibility and to ensure that the risks handed down to future generations remain controllable. Only by actively guiding the development of artificial intelligence and confining its operation within a controllable range can the dividends of the technology be fully released while its risks are kept under control.