The development of artificial intelligence technology has injected new momentum into many industries, and the medical field is no exception. In medical decision-making, artificial intelligence has shown unique advantages in the diagnosis, prediction, and treatment of disease by virtue of its massive storage capacity, high-speed computation, and machine learning, and it has gradually become a capable assistant to physicians, with notable achievements in improving diagnostic efficiency and optimizing the allocation of medical resources.

However, the academic community holds differing attitudes toward the participation of artificial intelligence in medical decision-making. Many scholars argue that its application challenges existing ethics and that artificial intelligence faces genuine ethical problems. Measured against the concept of the moral subject in ethical thought, artificial intelligence has no free will and cannot form moral cognition or moral judgment; it therefore cannot perform moral action or bear moral responsibility, and it does not meet the conditions for being a moral subject. In essence, artificial intelligence is a machine driven by code, lacking self-will and inner conviction, so its use is confronted with a gap in moral responsibility. Conflicts among multiple ethical theories create a dilemma for moral decision-making, and there is still no consensus on which ethical norms should be followed. Proponents of moral computation face difficulties such as the dim prospects for its realization, the inadequacy of moral facts, the neglect of moral emotion, and the over-mechanization of moral evaluation. In addition, new ethical issues keep emerging: algorithmic bias appears to be on the rise, privacy protection is in tension with the use of big data, and worries about the future of humanity are no longer unfounded. These limitations create a barrier of trust between artificial intelligence and human beings, together with a natural aversion to its participation in decisions that involve human dignity.

To release the positive momentum of artificial intelligence development while avoiding its possible risks, it is necessary to explore the path and the space for artificial intelligence to participate in medical decision-making. At present, three approaches exist for artificial intelligence to participate in moral decision-making: a top-down approach based on deontology, a bottom-up approach based on virtue, and a combination of the two. A review of the historical development of moral subjects shows that this system is an open and expanding process, which provides a theoretical possibility for artificial intelligence to be incorporated into the moral system in some moral form, and which makes clear that there is space for artificial intelligence to participate in medical decision-making. Regarding the responsibility and trust mechanisms for artificial intelligence in medical decision-making, society should give it more support through positive measures and an inclusive attitude, which will help it better exert its advantages and value.

In the ethical construction of artificial intelligence machines, it is feasible to build a moral-capability processing center for artificial intelligence. Algorithmic programs should abide by the principles of fairness and transparency: the principle of fairness is the inevitable requirement of pursuing equality and eliminating prejudice, and the principle of transparency is the inevitable embodiment of making data and algorithms transparent. Ethical design enables an artificial intelligence system to embody the "goodness" and "rationality" of science and technology and ensures that artificial intelligence has a "good core".
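As one possible illustration of how the fairness principle might be operationalized, the following minimal sketch audits a model's positive-decision rates across patient groups for demographic parity. The data, group labels, metric choice, and tolerance are assumptions made purely for illustration, not methods or recommendations from this work.

```python
# Hypothetical illustration: auditing a diagnostic model's decisions against one
# simple fairness criterion (demographic parity). Data and threshold are made up.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-decision rates between groups, plus the rates.

    predictions: iterable of 0/1 model decisions (e.g. 1 = recommend treatment)
    groups:      iterable of group labels aligned with predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Fabricated example: two patient groups, A and B.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print(f"positive-decision rates by group: {rates}")
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.2:  # illustrative tolerance, not a normative recommendation
    print("warning: decision rates differ substantially between groups")
```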
Regarding ethical recommendations for artificial intelligence in medical decision-making, three aspects should be considered. First, artificial intelligence should be established in a consultative, auxiliary role, which includes strong sensitivity to possible harm and a weak-autonomy design for moral decision-making. The former relies mainly on the emotional-sensitivity design of artificial intelligence to identify and avoid possible harm; the latter means building an "intentionally not to do" module so that, for major decisions, the system asks humans for help instead of deciding alone. Second, the integration of moral philosophy and artificial intelligence should be strengthened, as should cooperation between technical experts and ethicists, in order to study in depth the salient moral issues in the field of artificial intelligence and to formulate effective ethical standards; the moral education and sense of responsibility of technical personnel should also be reinforced through regular training in moral responsibility. Third, ethical mechanisms and laws and regulations should be constructed. Ethical norms should be built into artificial intelligence systems through appropriate, mutually consistent code and refined through interaction with humans. Relevant laws and regulations should adhere to the core standard of putting people first, attend to the protection of user privacy, pursue fairness and eliminate bias, improve oversight and accountability mechanisms, and refine and clarify the relevant rights and obligations. Throughout the development and use of artificial intelligence, objective and comprehensive moral evaluation and effective whole-process supervision are indispensable; only in this way can the healthy development of artificial intelligence be safeguarded.
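As a minimal, hypothetical sketch of the weak-autonomy idea in the first recommendation above, where the system "intentionally does not do" certain things and asks a human for help on major decisions, the following code defers whenever a decision is flagged as major or the model's confidence falls below a threshold. The fields, threshold, and escalation format are illustrative assumptions rather than a specification.

```python
# Hypothetical sketch of a weak-autonomy ("intentionally not to do") module:
# the system only issues a recommendation when a decision is low-stakes and it
# is sufficiently confident; otherwise it explicitly defers to a human clinician.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str          # e.g. "adjust_dosage"
    confidence: float    # model confidence in [0, 1]
    major: bool          # True for high-stakes decisions (surgery, end-of-life care, ...)

def decide(decision: Decision, confidence_floor: float = 0.9):
    """Return a recommendation, or a deferral marker when the system should not act alone."""
    if decision.major:
        # Major decisions are never taken autonomously: escalate to a human.
        return {"status": "deferred", "reason": "major decision requires human judgement"}
    if decision.confidence < confidence_floor:
        return {"status": "deferred", "reason": "confidence below threshold"}
    return {"status": "recommended", "action": decision.action}

print(decide(Decision("adjust_dosage", confidence=0.95, major=False)))
print(decide(Decision("schedule_surgery", confidence=0.99, major=True)))
```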