
Extraction And Analysis Towards API Security Issues For Android Native Function Layer

Posted on: 2021-01-29
Degree: Master
Type: Thesis
Country: China
Candidate: Z Li
Full Text: PDF
GTID: 2428330602983774
Subject: Computer Science and Technology
Abstract/Summary:
In the past decade, deep learning techniques have made great strides in a variety of challenging tasks such as computer vision, machine translation, and autonomous driving. Deep Neural Networks (DNNs) play a vital role in the development of deep learning as its core component, and most major technology companies are building their Artificial Intelligence (AI) products and services with DNNs as key components. While the application of DNNs has greatly advanced the field, building DNN models comes at a huge cost: massive labeled datasets, vast computational resources, and highly specialized domain knowledge. We therefore argue that the builder owns the intellectual property of the model, and it is essential to design a technique that protects the intellectual property of a deep neural network model and allows the builder to externally verify ownership.

To protect the intellectual property rights of DNN models, researchers have introduced digital watermarks, which have been widely used to protect multimedia intellectual property, into the field of deep learning. However, the watermarks in these studies differ significantly from the feature distribution of normal samples, so they either fail to defend against evasion attacks or fail to explicitly address fraudulent ownership claims by adversaries. Moreover, they cannot establish a clear connection between the model and the identity of its builder.

To fill these gaps, this paper proposes a novel Intellectual Property Protection (IPP) framework based on blind watermarks for watermarking deep neural networks. Our framework accepts normal samples and an exclusive logo as inputs and outputs newly generated samples as watermarks; it infuses these watermarks into DNN models by assigning them specific labels, leaving a backdoor as the basis for our copyright claim. The biggest advance over previous work is that these watermarks have exactly the same feature distribution as normal samples and are effective against both evasion attacks and fraudulent ownership claims. Further, because the exclusive logo is used as an input, our framework can establish a clear association between the model and the author's identity.

To evaluate the performance of the framework, we conducted extensive experiments on two well-known benchmark datasets and 15 popular, publicly available DNN models. The results show that our framework successfully verifies ownership of all host models (over 90% accuracy on watermark verification) with no significant performance degradation of the host model on its original task (0.14% on average). After fine-tuning, the host model still achieves high accuracy on watermark verification, demonstrating the extraordinary robustness of our framework. More importantly, by implementing both evasion attacks and fraudulent ownership attacks, we show that our framework achieves the current best undetectability and unforgeability against these two threats. These experimental results show that the proposed blind-watermark framework meets the multiple performance requirements for protecting the intellectual property of DNN models.

Finally, the work in this paper provides a new direction for research on digital-watermark-based intellectual property protection in deep learning, offering a strong solution to the problem of protecting DNN models. This helps safeguard the legal rights and economic interests of model owners and mitigate the corresponding economic risks.
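The embedding-and-verification workflow the abstract describes can be sketched as follows. Note the assumptions: the thesis uses a trained encoder network to blend the exclusive logo into normal samples so the watermarks match the normal feature distribution, whereas the simple alpha-blend `make_watermark` below is only an illustrative stand-in; all function names, parameters, and the 90% threshold usage are this sketch's own, not the thesis's API.

```python
import numpy as np

def make_watermark(sample, logo, alpha=0.05):
    # Stand-in for the paper's learned encoder: faintly blend the
    # owner's exclusive logo into a normal sample.
    return (1 - alpha) * sample + alpha * logo

def embed(train_x, train_y, logo, target_label, n_wm=10, alpha=0.05):
    # Build watermark samples from randomly chosen normal samples and
    # assign them the specific (backdoor) label, then append them to
    # the training set so the trained model memorizes the backdoor.
    idx = np.random.default_rng(0).choice(len(train_x), n_wm, replace=False)
    wm_x = np.array([make_watermark(train_x[i], logo, alpha) for i in idx])
    wm_y = np.full(n_wm, target_label)
    full_x = np.concatenate([train_x, wm_x])
    full_y = np.concatenate([train_y, wm_y])
    return full_x, full_y, wm_x, wm_y

def verify(model_predict, wm_x, wm_y, threshold=0.9):
    # Ownership claim: the suspect model classifies the watermark
    # samples with the backdoor label at above-threshold accuracy.
    acc = float(np.mean(model_predict(wm_x) == wm_y))
    return acc >= threshold, acc
```

In this sketch, only the holder of the logo (and the watermark set derived from it) can trigger the backdoor, which is what ties the model to the author's identity.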
Keywords/Search Tags:Deep neural networks, Intellectual property, Blind watermarks, Evasion attack, Fraudulent ownership