
Research On Noise Defense Methods To Deal With Adversarial Attacks On Deep Neural Networks

Posted on: 2024-06-05    Degree: Master    Type: Thesis
Country: China    Candidate: T Y Liao    Full Text: PDF
GTID: 2568307109487864    Subject: Software engineering
Abstract/Summary:
Artificial intelligence technologies, represented by deep learning, have already permeated many aspects of our lives, and as a result their security concerns have become increasingly prominent. Adversarial attacks, which deceive deep neural network models by injecting carefully designed, small, and imperceptible perturbations, have caused many serious problems. Existing defense methods mainly target specific attack types. This paper proposes a new defense method, the Noise-Fusion Method (NFM), which can effectively defend against attacks of unspecified types. The main contributions of this paper are as follows:

(1) The Noise-Fusion Method (NFM) is proposed for defending against attacks of unspecified types. It adds noise to the incoming (possibly adversarial) data at runtime and trains the model on noisy training data, which together provide an effective defense against adversarial attacks. The NFM requires neither knowledge of the attack's characteristics nor detailed information about the model, so it is applicable to various models and effective against different kinds of attacks. Experimental results demonstrate that the NFM not only provides strong defense capabilities but also improves the model's robustness and generalization.

(2) The defensive effect of different types of noise is studied. Four types of noise are used: Uniform, Gaussian, Poisson, and Perlin noise. They range from simple, spatially independent and identically distributed noise to more complex lattice noise with rich texture and visual characteristics, and are therefore broadly representative.

(3) A comprehensive and detailed set of experiments is designed to evaluate the proposed defense method. Three representative adversarial attack methods, four types of noise, and eleven noise amplitudes are used in 352 experiments on two benchmark datasets to identify the amplitude range that is most effective for defense. In addition, a comparative experiment against adversarial training is conducted. The results show that the NFM not only defends against the three aforementioned adversarial attacks but also outperforms adversarial training methods on complex datasets, demonstrating a generalized defensive effect against other adversarial attacks.
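To make the noise-fusion idea above concrete, the sketch below illustrates one way such a defense could be implemented. The thesis does not publish code, so the function names, the simple lattice noise used here as a stand-in for Perlin noise, and the exact placement of the fusion step are all assumptions made purely for illustration.

```python
# Illustrative sketch only: all names and design choices below are assumptions,
# not the thesis's released implementation.
import numpy as np

def uniform_noise(shape, amplitude, rng):
    # i.i.d. uniform noise in [-amplitude, +amplitude]
    return rng.uniform(-amplitude, amplitude, size=shape)

def gaussian_noise(shape, amplitude, rng):
    # i.i.d. zero-mean Gaussian noise with standard deviation = amplitude
    return rng.normal(0.0, amplitude, size=shape)

def poisson_noise(image, amplitude, rng):
    # Signal-dependent Poisson noise: sample photon-like counts around the
    # pixel intensities and return the deviation, scaled by the amplitude.
    base = np.clip(image, 0.0, 1.0)
    counts = rng.poisson(base * 255.0) / 255.0
    return amplitude * (counts - base)

def lattice_noise(shape, amplitude, rng, grid=8):
    # Simple value/lattice noise, used here as a lightweight stand-in for
    # Perlin noise: random values on a coarse grid, bilinearly upsampled.
    h, w = shape[:2]
    coarse = rng.uniform(-1.0, 1.0, size=(grid + 1, grid + 1))
    ys, xs = np.linspace(0, grid, h), np.linspace(0, grid, w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, grid), np.minimum(x0 + 1, grid)
    fy, fx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = coarse[y0][:, x0] * (1 - fx) + coarse[y0][:, x1] * fx
    bot = coarse[y1][:, x0] * (1 - fx) + coarse[y1][:, x1] * fx
    noise = top * (1 - fy) + bot * fy
    return amplitude * (noise[..., None] if len(shape) == 3 else noise)

def fuse_noise(image, kind, amplitude, rng):
    # Noise fusion: add the chosen noise to the (possibly adversarial) input
    # and clip back to the valid pixel range before feeding it to the model.
    if kind == "poisson":
        noise = poisson_noise(image, amplitude, rng)
    else:
        noise = {"uniform": uniform_noise,
                 "gaussian": gaussian_noise,
                 "perlin": lattice_noise}[kind](image.shape, amplitude, rng)
    return np.clip(image + noise, 0.0, 1.0)

# Example: defend a stand-in "adversarial" image with Gaussian noise of
# amplitude 0.05 before classification.
rng = np.random.default_rng(0)
x_adv = rng.uniform(0.0, 1.0, size=(32, 32, 3))
x_defended = fuse_noise(x_adv, "gaussian", 0.05, rng)
```

In the same spirit, the abstract states that the model is also trained on noisy data, so the same fusion step could be applied to training batches as a data augmentation; that placement is likewise an assumption here rather than a detail taken from the thesis.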
Keywords/Search Tags: adversarial attack, adversarial defense, noise fusion, types of noise