
Adversarial Attacks And Defense Based On Medical Images

Posted on: 2022-05-14
Degree: Master
Type: Thesis
Country: China
Candidate: C D Rao
Full Text: PDF
GTID: 2480306569481644
Subject: Software engineering
Abstract/Summary:
Deep learning has been widely applied to medical image analysis. However, recent studies have shown that deep models are vulnerable to adversarial attacks, which cause model performance to drop sharply. To study how adversarial examples attack models and to improve model robustness, researchers have proposed many attack and defense methods in the field of natural image analysis. Medical images, however, contain complex texture information and diverse lesion types, which pose great challenges to adversarial attacks. On the one hand, existing gradient-based attack methods cannot satisfy the requirements of medical images. On the other hand, medical images often contain substantial intrinsic noise, so even large adversarial noise is difficult for the human eye to detect, yet existing defense methods cannot defend with high accuracy against adversarial examples with large noise.

To address the difficulty that existing attack methods have in meeting the requirements of medical images, we propose a gradient-based dynamic step-size attack method. In each iteration, the method computes the gradient of the image and updates the step size dynamically. We verify its performance through white-box attacks, black-box attacks, and ensemble adversarial attacks.

In addition, in view of the problem that existing defense methods struggle to defend, with high accuracy, against adversarial examples with large noise in medical images, we propose a defense method based on medical images. It disturbs the adversarial noise with an image transform algorithm and conducts adversarial training to improve model robustness. Finally, we verify this method experimentally under different noise levels.

However, training models on adversarial examples takes a long time and consumes substantial computing resources. We therefore propose a free adversarial training defense method for medical images. The method updates the model parameters once after each iteration of the noise update. It also adds clean samples and adversarial examples to the training process simultaneously, improving the model's defense capability while improving prediction accuracy on clean samples. The method further dynamically updates the constraint value on the adversarial noise, making models robust to adversarial examples with large perturbations.
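The dynamic step-size idea described above can be illustrated with a minimal numpy sketch. The thesis does not give its exact update rule, so the details here are assumptions: a logistic model stands in for the deep classifier, the step size simply decays as eps / (t + 1), and the perturbation is projected back into an L-infinity ball after each step.

```python
import numpy as np

# Hypothetical toy "classifier": logistic regression standing in for a deep model.
def predict(w, x):
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

def loss_grad_x(w, x, y):
    # Gradient of binary cross-entropy w.r.t. the input x: (p - y) * w
    return (predict(w, x) - y) * w

def dynamic_step_attack(w, x, y, eps=0.3, steps=10):
    # Iterative sign-gradient attack whose step size shrinks every iteration
    # (one possible reading of "updates the step size dynamically").
    x_adv = x.copy()
    for t in range(steps):
        alpha = eps / (t + 1)                     # dynamically updated step size
        x_adv = x_adv + alpha * np.sign(loss_grad_x(w, x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
    return x_adv
```

A white-box attack in this setting uses the true `w`; a black-box variant would substitute the gradient of a surrogate model.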
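The transform-based defense can likewise be sketched under stated assumptions: the thesis does not specify its transform algorithm, so a random circular shift followed by a mean filter serves here as a generic stand-in that disturbs pixel-aligned, high-frequency adversarial noise before the image reaches the model.

```python
import numpy as np

def transform_defense(img, rng=None, shift_max=2, k=3):
    # Illustrative input-transform defense (not the thesis's actual algorithm):
    # a random circular shift breaks pixel-aligned perturbations, then a
    # k x k mean filter averages out high-frequency noise.
    rng = rng or np.random.default_rng(0)
    dy, dx = rng.integers(-shift_max, shift_max + 1, size=2)
    out = np.roll(img, (dy, dx), axis=(0, 1))
    pad = k // 2
    padded = np.pad(out, pad, mode="edge")
    smoothed = np.zeros_like(out, dtype=float)
    H, W = out.shape
    for i in range(H):
        for j in range(W):
            smoothed[i, j] = padded[i:i + k, j:j + k].mean()
    return smoothed
```

In the full defense, the transformed images would then be fed into adversarial training rather than used alone.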
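The free adversarial training scheme can be sketched in the same toy setting. All concrete choices below are assumptions, not the thesis's implementation: a logistic model replaces the deep network, the persistent noise `delta` is updated by sign ascent, the parameters are updated once per noise update, clean and adversarial samples share each parameter step, and the noise bound `eps` grows linearly over epochs to model the dynamically updated constraint.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def free_adversarial_training(X, Y, eps_max=0.3, replays=4, epochs=5, lr=0.1):
    # "Free" adversarial training sketch: the perturbation delta persists
    # across replays so each backward pass serves both the noise update and
    # one parameter update, amortizing the cost of adversarial training.
    rng = np.random.default_rng(0)
    w = 0.01 * rng.standard_normal(X.shape[1])
    delta = np.zeros_like(X)                     # persistent adversarial noise
    for epoch in range(epochs):
        eps = eps_max * (epoch + 1) / epochs     # dynamically grown constraint
        for _ in range(replays):
            x_adv = X + delta
            p_adv = sigmoid(x_adv @ w)
            p_cln = sigmoid(X @ w)
            # one parameter update per noise update (clean + adversarial loss)
            grad_w = ((p_adv - Y) @ x_adv + (p_cln - Y) @ X) / len(Y)
            w -= lr * grad_w
            # noise update: ascend the adversarial loss, stay in the eps-ball
            grad_x = (p_adv - Y)[:, None] * w[None, :]
            delta = np.clip(delta + eps * np.sign(grad_x), -eps, eps)
    return w
```

Mixing the clean loss into `grad_w` is what keeps clean-sample accuracy from degrading while the model hardens against the perturbed copies.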
Keywords/Search Tags:Medical image analysis, Adversarial Attack, Adversarial Defense