The popularization of automated administration has transformed traditional government governance and effectively responded to the need to maintain order in digital society. Although automated administration introduces new vulnerabilities for government departments in the process of social governance, at the macro level it also helps raise the level of protection of citizens' basic rights. Yet while this emerging mode of governance is widely applicable, it also carries legal risks, and the analysis and regulation of those risks are the focus of this paper. The theoretical framework begins with the basic concept of automated administration and clarifies the research object: automated administration refers to administrative acts in which artificial intelligence participates in administrative activities and, to a certain extent or entirely, excludes the participation of administrative subjects, thereby assisting in or independently making decisions. The legal risks of applying automated administration are analyzed on the basis of three typical cases. On this basis, the development requirements of automated administration are defined as follows: highlighting the principle of risk prevention, integrating the core values of social subjects, and fitting the modernization concept of national governance.

The legal risks of automated administration fall into four categories. First, the risk of abuse of algorithmic power: the deep-learning characteristics of algorithms give them the capacity for independent decision-making, while administrative subjects expect pragmatic algorithmic governance, which leads to excessive reliance on, and even abuse of, algorithmic power. Second, the risk of missing procedural justice: the black-box nature of algorithmic technology makes the decision-making process opaque, and its extreme speed leaves the administrative counterpart no time to react; these two sides of the technology compress the counterpart's right to statement and defense and the administrative organ's ability to explain. Third, the risk of damage to citizens' legitimate rights and interests: technological support has improved the efficiency of government governance, but it is not matched by adequate information protection, the "green channel" for digitally vulnerable groups remains imperfect, and the "digital divide" is widening. Fourth, the risk of ambiguous attribution of responsibility, reflected mainly in unclear responsible subjects and unclear principles of attribution: because AI technology and external technology enterprises intervene in administration, the legal status of AI and the form of enterprise liability must be defined when holding them accountable, and once the responsible subject is determined, the principle of attribution should be established from the perspective of protecting the rights and interests of the counterpart.

The paths for regulating the legal risks of automated administration are: strengthening the internal regulation of automated administration, correcting automated administrative due process, strengthening the protection of citizens' rights, and improving the legal responsibility mechanism. Specifically, first, limit the scope of application of automated administration, improve the supervision mechanism for administrative subjects, and strengthen risk assessment. Second, protect the parties' right to statement and defense, supervise the performance of administrative organs' obligations to explain algorithms, and draw on foreign procedural restraint systems. Third, adhere to data protection as the core and narrow the "digital divide" faced by vulnerable groups. Finally, when automated administration damages the rights of the counterpart, the status of the administrative organ as the responsible subject should be clarified, and relief for citizens should be increased through the principle of no-fault liability.