
Security Research Of Deep Learning Model Based Biometric Recognition System

Posted on: 2021-05-03    Degree: Master    Type: Thesis
Country: China    Candidate: M M Su    Full Text: PDF
GTID: 2428330623467363    Subject: Control engineering
Abstract/Summary:
With the development of artificial intelligence, deep learning is widely applied in areas such as finance, medical treatment, security, and the military. In security-related applications especially, deep learning delivers outstanding efficiency and performance in biometric systems, including face recognition, fingerprint recognition, iris recognition, and speech recognition. However, attacks on deep learning models have attracted wide attention, because these models are easily fooled, which can cause huge economic and security losses. Researchers have therefore focused on the security of deep learning models.

To improve the security of deep learning models, and in particular the robustness of biometric recognition systems, this thesis studies model attacks, including poisoning attacks and adversarial attacks, and proposes new attack techniques that further expose model vulnerabilities and lay the groundwork for improving security. Current black-box adversarial attacks suffer from practical limitations such as large query budgets, obvious perturbations, and low attack success rates, so they are easily defeated by defense methods. Similarly, because the poisoning examples produced by most poisoning attacks differ significantly from normal examples, they can be filtered out efficiently by sampling strategies, so existing defenses against poisoning attacks remain relatively simple and weak. Targeting these characteristics of existing attacks, this thesis focuses on attacks against biometric recognition systems built on deep learning models and proposes two attack methods: a black-box adversarial attack based on a genetic algorithm and a covert poisoning attack based on a genetic algorithm. The two methods can attack not only generic deep learning models but also the biometric models commonly used in security applications. The main contributions cover three aspects.

(1) To address the low success rate and obvious perturbations of current black-box adversarial attacks, this thesis proposes a perturbation-optimized black-box attack method based on a genetic algorithm. It optimizes the perturbation through genetic operations such as selection, crossover, and mutation, and avoids the reliance of other black-box attacks on substitute models or large numbers of queries. It can generate adversarial examples with smaller perturbations using fewer queries and achieves an attack success rate close to that of white-box attacks; a sketch of this idea is given below.
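A minimal sketch of the genetic-algorithm black-box attack, assuming only query access to the target model's output probabilities. The `query_model` oracle, the population size, the mutation rate, and the L-infinity bound `eps` are illustrative assumptions, not the settings reported in the thesis.

```python
import numpy as np

def query_model(x):
    """Hypothetical black-box oracle: returns class probabilities for input x."""
    raise NotImplementedError("replace with queries to the target recognition model")

def ga_blackbox_attack(x, true_label, eps=0.05, pop_size=20,
                       generations=100, mutation_rate=0.1, rng=None):
    """Search the L-infinity ball of radius eps for a misclassifying perturbation."""
    rng = rng or np.random.default_rng(0)
    pop = rng.uniform(-eps, eps, size=(pop_size,) + x.shape)   # initial perturbations

    def fitness(delta):
        probs = query_model(np.clip(x + delta, 0.0, 1.0))
        return -probs[true_label]            # lower true-class confidence = fitter

    for _ in range(generations):
        scores = np.array([fitness(d) for d in pop])
        best = pop[int(np.argmax(scores))]
        adv = np.clip(x + best, 0.0, 1.0)
        if np.argmax(query_model(adv)) != true_label:
            return adv                       # model no longer predicts the true label
        # Selection: keep the fitter half of the population as parents.
        parents = pop[np.argsort(scores)[-(pop_size // 2):]]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(x.shape) < 0.5                    # uniform crossover
            child = np.where(mask, a, b)
            noise = rng.normal(0.0, eps / 2, size=x.shape)
            mutate = rng.random(x.shape) < mutation_rate        # pointwise mutation
            child = np.where(mutate, child + noise, child)
            children.append(np.clip(child, -eps, eps))
        pop = np.concatenate([parents, np.stack(children)])
    return None   # query budget exhausted without finding an adversarial example
```

Each generation costs roughly `pop_size + 1` queries, so the population size and generation count directly control the query budget the abstract refers to; in this sketch `x` would be a face or fingerprint image normalized to [0, 1].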
(2) To address the problems that the poisoning examples of current poisoning attacks are easy to detect and have low toxicity, this thesis proposes a concealed poisoning attack method based on a genetic algorithm. It generates poisoning examples similar to the target class by attacking an equivalent model, removing the obvious difference between poisoned examples and normal ones. Experiments show that the proposed method not only generates highly concealed poisoned examples but also needs only a small number of them to achieve a high attack success rate; a sketch of this idea follows the abstract.

(3) To address the security of application systems built on biometric recognition models, this thesis applies the two attack methods to a face recognition system and a fingerprint recognition system. The perturbation-optimized black-box attack based on the genetic algorithm attacks the face recognition model and the fingerprint recognition model both digitally and physically, showing that the potential security risks of biometric recognition models are universal. The concealed poisoning attack based on the genetic algorithm analyzes the face recognition model from multiple angles and shows that even a single poisoning example fed into the model can leave a backdoor that achieves a high attack success rate, so poisoning attacks deserve far more attention.
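One plausible reading of the concealed poisoning attack is sketched below: a genetic algorithm perturbs a clean image of the target class so that a locally trained "equivalent" (surrogate) model embeds it near a chosen source image, while the pixel-level change stays small so the poison still looks normal. `surrogate_features`, the trade-off weight, and all hyperparameters are assumptions for illustration, not the thesis's actual configuration.

```python
import numpy as np

def surrogate_features(x):
    """Hypothetical feature extractor of the locally trained equivalent model."""
    raise NotImplementedError("replace with the surrogate network's penultimate-layer output")

def ga_concealed_poison(target_img, source_img, eps=0.03, pop_size=20,
                        generations=200, mutation_rate=0.1,
                        concealment_weight=0.5, rng=None):
    """Evolve a small perturbation of target_img whose features collide with source_img."""
    rng = rng or np.random.default_rng(0)
    source_feat = surrogate_features(source_img)
    pop = rng.uniform(-eps, eps, size=(pop_size,) + target_img.shape)

    def fitness(delta):
        poison = np.clip(target_img + delta, 0.0, 1.0)
        feat_dist = np.linalg.norm(surrogate_features(poison) - source_feat)
        visual_dist = np.linalg.norm(delta)
        # Trade off toxicity (feature collision) against concealment (small visual change).
        return -(feat_dist + concealment_weight * visual_dist)

    for _ in range(generations):
        scores = np.array([fitness(d) for d in pop])
        parents = pop[np.argsort(scores)[-(pop_size // 2):]]      # selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(target_img.shape) < 0.5             # crossover
            child = np.where(mask, a, b)
            jitter = rng.normal(0.0, eps / 2, size=target_img.shape)
            mutate = rng.random(target_img.shape) < mutation_rate # mutation
            child = np.where(mutate, child + jitter, child)
            children.append(np.clip(child, -eps, eps))
        pop = np.concatenate([parents, np.stack(children)])

    best = pop[int(np.argmax(np.array([fitness(d) for d in pop])))]
    return np.clip(target_img + best, 0.0, 1.0)   # concealed poisoning example
```

Under this assumed reading, the returned image keeps its visually apparent (target-class) label when injected into the training set; after retraining, inputs that resemble `source_img` are pulled toward the target class, which matches the backdoor behavior described in contribution (3).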
Keywords/Search Tags:deep learning, biometric, adversarial attack, poisoning attack, genetic algorithm