
Research On Intellectual Property Protection Methods For Neural Network Models And Data

Posted on: 2024-08-27    Degree: Master    Type: Thesis
Country: China    Candidate: Y H Jiang    Full Text: PDF
GTID: 2568307127453654    Subject: Software engineering
Abstract/Summary:
With the widespread application of deep learning in industry, the scale and complexity of deep learning models continue to grow. High-performing deep learning models typically represent a substantial investment of intellectual labor, time, and money. Training such models also requires large amounts of high-quality data, and in practice data acquisition is often too expensive, or sufficient data is simply unavailable. Generative models, which can produce large quantities of realistic data, have become an important tool for addressing this problem. Moreover, generative models have recently seen explosive development worldwide in fields such as painting and language interaction, so the data they produce is increasingly valuable. In recent years, protecting the intellectual property of these models and data has therefore become an active research direction.

Previous methods for protecting the intellectual property of deep learning models mainly embed specific watermarks in the model parameters or plant backdoors that can later be used to verify ownership. For generative models, trigger sets can be fed as inputs so that the generated outputs carry watermarks pointing to the rightful owner. However, if an attacker steals the generator model file and feeds it ordinary inputs outside the trigger set, the model still produces the expected high-quality data without any watermark, and the attacker can use that data to complete their own tasks. Moreover, conventional deep learning models such as ResNet can still extract most useful features from watermarked images. In other words, generated images can easily be reused for other tasks whether or not they carry watermarks, so watermarking alone cannot prevent data theft. Although model owners can pursue legal action, doing so is difficult and costly in practice. These methods are thus effective for declaring ownership, but verifying it requires either access to the model parameters (white-box verification) or input-output interaction with the model (black-box verification). To overcome this limitation, we propose a new self-adversarial perturbation training method for the generator, so that unauthorized inputs yield feature-level damaged outputs, preventing illegal theft and covert use of the model.

In summary, the main contributions of this paper are as follows:
1. To address unauthorized data generation by generative deep learning models, this paper proposes an input-space offset training method. It distinguishes authorized from unauthorized inputs, so the trained generator cannot be used by unauthorized users.
2. To resist potential fine-tuning attacks, we introduce a new regularization term, IFD (Intermediate Feature Distance), into the generator's training loss. It further degrades the outputs that unauthorized inputs obtain from the generator and gives the generator good resistance to fine-tuning attacks (a sketch of such a regularizer is given after this abstract).
3. This paper proposes a complete intellectual property protection method for generative models, EGAN (Encrypting Generative Adversarial Networks), which embeds a signature in the model parameters, embeds watermarks in the output data, and encrypts the generator model. Experiments verify the effectiveness of our method on different types of deep generative adversarial network models, and the protection carries over to data used in downstream tasks across different fields, such as object classification, object detection, and fetal heart rate monitoring.
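The abstract does not give the exact form of the IFD term or of the input-space offset. The following is a minimal, hypothetical PyTorch-style sketch of how such an intermediate-feature-distance regularizer and key-based input offset could be combined; the hook `generator.intermediate`, the secret offset `key`, and the hyperparameters `margin` and `lambda_ifd` are illustrative assumptions, not taken from the thesis.

```python
import torch
import torch.nn.functional as F

def ifd_regularizer(generator, z, key, margin=10.0, lambda_ifd=1.0):
    """Hinge-style penalty that pushes the intermediate features produced by
    unauthorized (raw) latents away from those of authorized (key-offset)
    latents by at least `margin`."""
    z_auth = z + key                             # authorized input: latent shifted by a secret key
    feats_auth = generator.intermediate(z_auth)  # assumed hook returning mid-layer activations
    feats_unauth = generator.intermediate(z)     # same hook on the unauthorized (raw) input
    distance = F.mse_loss(feats_unauth, feats_auth.detach())
    # Penalize only while the feature distance is still below the margin.
    return lambda_ifd * torch.relu(margin - distance)

# Usage inside an ordinary GAN generator step (adversarial loss omitted):
# g_loss = adversarial_loss + ifd_regularizer(G, z, secret_key)
```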
Keywords/Search Tags: Intellectual property rights, model encryption, adversarial perturbation, generative networks, data security