Algorithm recommendation technology provides the necessary material conditions for the emergence and dissemination of information, and the automated generation and execution of recommendations strengthens the interaction between algorithm recommendation services and individual users. At the same time, a high degree of automation means that human intervention is largely eliminated, which affects users' personal interests. The shift of algorithm recommendation from a traditional tool-like technology to a socialized technology has also produced technological alienation and carries the risk of technology spinning out of control. While affirming that algorithm recommendation can improve and facilitate people's lives, an examination of real-world cases shows that, owing to its automated technical logic and the commercial interests served by it, it has infringed on users' autonomy as well as on their rights to personal information, privacy, equitable access to information, economic equality, and intellectual property. As algorithmic technology is updated, it inevitably extends into fields where traditional governance rules cannot apply, and it therefore needs to be regulated and improved.

To better regulate algorithm recommendation technology, attention should shift to identifying the entities behind intelligent algorithm recommendation and analyzing their types and responsibilities. Based on the operational principles and service mechanisms of intelligent algorithm recommendation, the potential infringement subjects fall into two categories: algorithm technology designers and algorithm service providers. However, identifying and enforcing liability for these potential infringement subjects still faces many problems, including the unclear boundary between producer liability and user liability, the determination of a single responsible subject purely on the basis of outcomes, and the difficulty of actually enforcing responsibility. Regulatory optimization in these respects is therefore necessary.

To offer clearer and more targeted regulatory suggestions, the root causes of the above problems are analyzed first, and two principal sources are identified: the algorithmic black box and the failure of existing law. Specifically, determining the infringement subject under product liability has involved debates about the essential attributes of algorithm recommendation and touches on some of its internal workings; fundamentally, however, a well-defined liability-bearing subject still cannot be delineated. Liability can therefore only oscillate between the producers and the users (algorithm service providers) of algorithm recommendation, with unclear boundaries and no specific legal provisions to delimit them, a situation that is itself caused by the algorithm's black-box characteristic. Uncovering the root causes of these regulatory difficulties is necessary in order to break through the black box of intelligent algorithm recommendation and improve the relevant legal norms. The proposals include building and refining a duty of algorithmic explanation, specifying the subject of explanation, the content of explanation, and the explainer, among other elements; demonstrating through empirical analysis that the principle of technology neutrality cannot serve as a defense; and clarifying the strict conditions under which that principle may be applied.

At the normative level, the main recommendations are to adopt a differentiated approach to attributing liability, to apply higher algorithmic standards when judging fault in infringement, to reduce the burden of proof on users, to lower the threshold for recognizing mental damage, and to improve the methods of compensation for damages.