With the development of the mobile Internet in recent years, edge devices generate large amounts of data almost constantly. This data has great potential: if used well, it can further enrich the services that mobile devices provide. At the same time, thanks to advances in chip and embedded technology, edge devices have not only gained far more computing power than a few years ago, but their hardware environments, such as memory and buses, have also developed considerably. Various artificial intelligence applications, such as iris recognition, have gradually appeared on devices. Given this trend, applying machine learning algorithms to the data generated by edge devices has become inevitable. Because this data is often closely tied to individual users, processing it requires attention to personal privacy and data security, and embedded devices themselves have relatively strict security requirements. This thesis therefore focuses on how to provide secure machine learning services on embedded devices.

This thesis first analyzes existing trusted execution environment technologies and studies TrustZone technology in depth. By analyzing the OP-TEE source code and following the GlobalPlatform (GP) specification, the GP specification interfaces are implemented on a self-developed trusted microkernel, enabling communication between the secure and non-secure worlds.

Next, this thesis analyzes the portability of ARMNN and ComputeLibrary. Using ARM's open-source ARMNN inference engine, a TensorFlow model is converted into an instruction sequence recognized by the ARM chip, so that a machine learning framework can run on an embedded platform. The ARMNN framework and ARM's underlying compute library, ComputeLibrary, are ported to the microkernel operating system, providing the foundation for machine learning services in a trusted execution environment.

Finally, this thesis identifies matrix multiplication as the computational bottleneck of existing machine learning frameworks. Following the Slalom framework, matrix multiplication is outsourced from the CPU to the GPU in the rich execution environment (REE), further improving the execution efficiency of the machine learning service on the ARM platform. In addition, the Freivalds algorithm is used to quickly verify the results of matrix multiplications computed in the non-secure world, so that machine learning is accelerated while the security required in the embedded environment is preserved.

In summary, this thesis analyzes the source code of ARMNN, ComputeLibrary, and OP-TEE, studies TrustZone technology, draws on existing investigations of machine learning computation bottlenecks on ARM, and builds on Florian Tramèr and Dan Boneh's Slalom framework to provide trusted machine learning services on the ARM platform and to further accelerate them.
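The Freivalds check mentioned above lets the trusted side verify an outsourced matrix product in O(n²) time per round instead of recomputing the O(n³) multiplication. The sketch below (a minimal illustration, not the thesis implementation; the function name and round count are assumptions) multiplies both the claimed product and the inputs by a random 0/1 vector: a wrong product passes a single round with probability at most 1/2, so k rounds bound the error by 2⁻ᵏ.

```python
import numpy as np

def freivalds_check(A, B, C, rounds=10):
    """Probabilistically verify that A @ B == C.

    Each round costs only a few matrix-vector products, so the
    verifier is far cheaper than recomputing A @ B. An incorrect
    C survives one round with probability at most 1/2.
    """
    n = B.shape[1]
    for _ in range(rounds):
        # Random 0/1 column vector, as in the classic algorithm.
        r = np.random.randint(0, 2, size=(n, 1))
        # Compare A(Br) with Cr; both sides are matrix-vector products.
        if not np.array_equal(A @ (B @ r), C @ r):
            return False
    return True
```

In a Slalom-style setup, the TEE would hand A and B to the untrusted GPU, receive C back, and accept it only if a check of this form passes.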