Deep learning models have been widely used in electroencephalogram (EEG) analysis and have achieved excellent performance. However, these models suffer from serious security issues: they can be misled when imperceptible perturbations turn normal samples into adversarial samples. The work in this dissertation exposes an important safety issue in deep-learning-based brain disease diagnostic systems by examining the vulnerability to white-box attacks of deep learning models that diagnose epilepsy from brain electrical activity mappings (BEAMs). It proposes two methods, Gradient Perturbations of BEAMs (GPBEAM) and GPBEAM with Differential Evolution (GPBEAM-DE), which, for the first time, generate EEG adversarial samples by perturbing BEAMs densely and sparsely, respectively, and finds that these BEAM-based adversarial samples can easily mislead deep learning models. The experiments use EEG data from the CHB-MIT dataset and two types of victim models, each with four different deep neural network (DNN) architectures. The results show that: (1) the BEAM-based adversarial samples produced by our methods are aggressive toward BEAM-related victim models, which take BEAMs as the input to their internal DNN architectures, but not toward EEG-related victim models, which take raw EEG as the input; (2) GPBEAM-DE outperforms GPBEAM; (3) a simple modification to GPBEAM/GPBEAM-DE makes it aggressive toward both BEAM-related and EEG-related models, and this enhanced capability comes at no cost of increased distortion. The goal of this study is not to attack any EEG-based medical diagnostic system, but to raise concerns about the safety of deep learning models and to motivate safer designs.
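To make the threat model concrete, the sketch below shows a generic white-box, dense gradient-sign perturbation applied to a BEAM-like image, in the spirit of the dense perturbation setting described above. It is an illustrative assumption, not the paper's GPBEAM or GPBEAM-DE implementation: the classifier architecture `BEAMClassifier`, the input shape, and the step size `epsilon` are all hypothetical.

```python
# Illustrative sketch only: a generic white-box gradient-sign perturbation
# of a BEAM-like image. The classifier, input shape, and epsilon are
# assumptions, not the paper's GPBEAM / GPBEAM-DE implementation.
import torch
import torch.nn as nn

class BEAMClassifier(nn.Module):
    """Hypothetical CNN that predicts seizure vs. non-seizure from a BEAM image."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.head = nn.Linear(16 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

def dense_gradient_perturbation(model: nn.Module, beam: torch.Tensor,
                                label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Perturb every pixel of the BEAM along the sign of the loss gradient."""
    beam = beam.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(beam), label)
    loss.backward()
    # Move each pixel in the direction that increases the classification loss.
    adversarial = beam + epsilon * beam.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    model = BEAMClassifier().eval()
    beam = torch.rand(1, 1, 32, 32)   # toy BEAM image with values in [0, 1]
    label = torch.tensor([1])         # assumed ground-truth class
    adv_beam = dense_gradient_perturbation(model, beam, label)
    print("max perturbation:", (adv_beam - beam).abs().max().item())
```

Under this toy setup, every pixel of the BEAM is shifted by at most `epsilon`, which mirrors the dense-perturbation idea; a sparse variant such as the paper's differential-evolution approach would instead modify only a small subset of pixels.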