Time series prediction models are widely used across many industries, and adversarial attacks against these models threaten the security of the data those industries depend on. This paper focuses on adversarial attack methods, and the corresponding defense methods, for time series prediction problems. Most existing adversarial attacks on time series prediction apply large-scale global perturbations, which makes the adversarial samples easy to perceive; at the same time, the effectiveness of an attack drops significantly as the magnitude and scope of the perturbation shrink. How to generate imperceptible adversarial samples while maintaining a strong attack effect is therefore one of the urgent problems in the field of time series adversarial attacks. To study this problem, the main work of this paper is as follows.

(1) To address the large perturbation range of existing adversarial attacks, a sliding-window-based local perturbation algorithm for time series, LAIRD (Local Basic Iterative Method), is proposed. Exploiting the characteristics of deep neural network models on time series prediction problems, LAIRD splits globally perturbed samples into local perturbations through sliding windows, reducing the probability that the perturbation is perceived. The settings of the sliding window and step size are analyzed experimentally so that the algorithm preserves the perturbation effect while the perturbation range decreases.

(2) To address the drop in attack effect as the perturbation range decreases, a semi-white-box attack algorithm, AIMDE (Local Perturbation Merged Differential Evolution), is proposed, which treats the sliding window of the LAIRD method as a black-box interval. After comparing different optimization algorithms in pre-experiments, the best-performing differential evolution algorithm is chosen to search for the optimal attack points within the black-box interval, and a segmentation function further partitions the perturbation interval, which not only reduces the perturbation range again but also improves the perturbation effect.

The two proposed algorithms are compared with existing adversarial attack methods on two challenging tasks, stock trading and electricity consumption, using LSTM (Long Short-Term Memory) and TCN (Temporal Convolutional Network) deep learning models, with mean squared error and the correlation coefficient as evaluation metrics. The results show that, compared with global attack methods, LAIRD reduces the perturbation range and improves imperceptibility while guaranteeing the perturbation effect. AIMDE further improves the imperceptibility of the adversarial samples relative to LAIRD, achieves a better perturbation effect than global attack methods, and its efficiency is unaffected by changes in perturbation magnitude. Finally, the results of the defense experiments show that adversarial training against both attack methods is universal and improves the robustness of deep learning models.
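The sliding-window local perturbation idea behind LAIRD can be illustrated with a minimal sketch. The thesis does not give implementation details, so everything here is an assumption: a toy scalar-output regression `model`, a finite-difference gradient in place of backpropagation, and the function names `sign_grad_mse` and `local_bim_attack`. The sketch applies iterative FGSM-style (BIM) updates restricted to one sliding window at a time and keeps the window whose local perturbation causes the largest prediction error.

```python
import numpy as np

def sign_grad_mse(model, x, y_true, eps=1e-4):
    # Hypothetical stand-in for backprop: finite-difference sign gradient
    # of the MSE loss with respect to each input time step.
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        g[i] = ((model(xp) - y_true) ** 2 - (model(xm) - y_true) ** 2) / (2 * eps)
    return np.sign(g)

def local_bim_attack(model, x, y_true, win=8, stride=4, alpha=0.01, steps=5):
    """Slide a window over the series; perturb only that window with
    iterative sign-gradient steps; keep the window with the largest error."""
    best_adv = x.copy()
    best_err = (model(x) - y_true) ** 2
    for start in range(0, len(x) - win + 1, stride):
        adv = x.copy()
        for _ in range(steps):
            g = sign_grad_mse(model, adv, y_true)
            adv[start:start + win] += alpha * g[start:start + win]  # local update only
        err = (model(adv) - y_true) ** 2
        if err > best_err:
            best_adv, best_err = adv, err
    return best_adv
```

Only `win` out of the full series length is ever modified, which is what reduces the probability of the perturbation being perceived; the window and stride settings trade off imperceptibility against attack strength, as the thesis analyzes experimentally.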
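The differential-evolution search at the core of AIMDE can likewise be sketched. Again, the concrete operators, bounds, and the segmentation-function sub-partitioning are not specified in the abstract, so this is a generic rand/1/bin differential evolution under assumed names (`de_attack_points`, `fitness`) and an assumed per-step bound `eps`: the window is treated as a black box, and DE searches for the bounded perturbation of that window that maximizes the prediction error, using only model queries and no gradients.

```python
import numpy as np

def de_attack_points(model, x, y_true, start, win, eps=0.2,
                     pop=20, gens=30, f=0.5, cr=0.7, seed=0):
    """Minimal rand/1/bin differential evolution that searches, inside one
    black-box window, for the bounded perturbation maximizing the error."""
    rng = np.random.default_rng(seed)

    def fitness(delta):
        adv = x.copy()
        adv[start:start + win] += delta
        return (model(adv) - y_true) ** 2  # maximize prediction error

    P = rng.uniform(-eps, eps, size=(pop, win))       # candidate perturbations
    scores = np.array([fitness(d) for d in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = P[rng.choice(pop, 3, replace=False)]
            mutant = np.clip(a + f * (b - c), -eps, eps)
            mask = rng.random(win) < cr               # binomial crossover
            mask[rng.integers(win)] = True            # keep >=1 mutant gene
            trial = np.where(mask, mutant, P[i])
            s = fitness(trial)
            if s > scores[i]:                         # greedy selection
                P[i], scores[i] = trial, s
        # (the thesis's segmentation-function sub-partitioning is omitted here)
    best = P[scores.argmax()]
    adv = x.copy()
    adv[start:start + win] += best
    return adv
```

Because fitness is evaluated purely through model queries, the window genuinely behaves as a black-box interval, matching the semi-white-box setting described above; swapping in a different population-based optimizer only requires changing the mutation and crossover steps.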