The key to optimizing and improving deep neural networks is a deep understanding of their operating mechanisms and feature representations. However, most previous research assumes that deep neural networks are trained and deployed in a friendly, controlled environment. As deep neural networks are adopted in many core application domains, attacks on deep learning models are steadily increasing, posing a serious threat to applications built on deep learning.

Deep neural networks currently perform well across many research and application areas and achieve very high success rates on many classification problems, such as image recognition and object detection. Nevertheless, it is an indisputable fact that neural networks still carry security risks. In recent years, a growing number of researchers have found and confirmed that adding a small perturbation, such as noise, to an image or target causes the learned classification model to misclassify it. More seriously, a targeted attack can force the input to be misclassified as any category the attacker chooses. Adversarial examples therefore pose a serious security risk to neural networks, motivating researchers to uncover why classifiers misrecognize such samples and, at a deeper level, why they can be steered toward an attacker-specified class. Understanding these causes also safeguards future applications of deep learning, which is already well established in domains such as NLP, CV, and recommendation.

This paper combines state-of-the-art algorithms with image adversarial attacks. By fusing adversarial attack algorithms with traditional image processing techniques, several deep-learning-based image adversarial attack algorithms are proposed. These attacks are applied to different models to study their structural properties, and the attack principles are used to improve the generalization of the attack algorithms to other models and data.
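The small-perturbation attack described above can be illustrated with the one-step Fast Gradient Sign Method, the basis of the iterative variants studied in this paper. The following is a minimal PyTorch sketch, not the paper's implementation; `model`, `x`, `y`, and `eps` are placeholder names:

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps):
    """One-step Fast Gradient Sign Method (untargeted).

    Adds eps * sign(grad_x loss) to the input: a perturbation bounded
    by eps in the L_inf norm that pushes the model's loss upward and
    the prediction toward misclassification.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()
```

Because every pixel moves by at most eps, the adversarial image remains visually close to the original even when the predicted class changes.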
This work provides new ideas for research on adversarial attacks. The main contributions of this paper are as follows:

1. Building on the I-FGSM attack, we propose AI-FGSM (Adam-based Iterative Fast Gradient Sign Method), an iterative gradient method based on the Adam update rule that improves the attack success rate and transferability of adversarial examples while shortening the time required to generate them.

2. We propose PS-MIFGSM (Perceptual-Sensitive Momentum Iterative Fast Gradient Sign Method), a new region-specific adversarial attack that uses Grad-CAM to locate the main attack region, so that the perturbation can be confined to a smaller area while achieving the same attack effect as perturbing the whole image. The method effectively reduces the difference between the adversarial example and the real image while keeping the attack success rate unchanged.

3. We propose GF-Attack (Grad-Cam Flip-Attack), a new attack-enhancement strategy that improves traditional attack methods by attacking specific regions and incorporating the gradient of the flipped image. This strategy improves the transferability of the generated adversarial examples and reduces the number of modified pixels.

4. To the best of our knowledge, this is the first work to combine MI-FGSM (Momentum Iterative Fast Gradient Sign Method) with object detection. The combined attack achieves a higher success rate and greater efficiency than existing attack algorithms such as PGD (Projected Gradient Descent).
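The Adam-based iteration of contribution 1 can be sketched as follows. This is an illustrative PyTorch reconstruction under assumed hyperparameters, not the paper's reported algorithm: the input gradient is normalized by Adam-style bias-corrected first and second moment estimates before the sign step, in place of the raw momentum accumulation used by MI-FGSM.

```python
import torch
import torch.nn as nn

def ai_fgsm(model, x, y, eps, steps=10, beta1=0.9, beta2=0.999, tiny=1e-8):
    """Sketch of an Adam-style iterative FGSM (AI-FGSM).

    Hyperparameters (steps, beta1, beta2) are illustrative defaults,
    not values taken from the paper.
    """
    alpha = eps / steps          # per-step perturbation budget
    m = torch.zeros_like(x)      # first moment estimate
    v = torch.zeros_like(x)      # second moment estimate
    x_adv = x.clone().detach()
    for t in range(1, steps + 1):
        x_adv.requires_grad_(True)
        loss = nn.CrossEntropyLoss()(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Adam-style moment updates with bias correction
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad * grad
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        step = m_hat / (v_hat.sqrt() + tiny)
        x_adv = (x_adv + alpha * step.sign()).detach()
        # project back into the L_inf ball of radius eps around x
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
    return x_adv
```

Dividing the gradient by the second-moment estimate dampens directions with unstable gradients across iterations, which is the intuition behind borrowing Adam's update for faster, more transferable adversarial examples.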