With the development and application of Internet and multimedia technology, people increasingly communicate through multimedia such as images, audio, and video. Steganography is a covert communication technique that embeds secret messages in a cover medium to avoid arousing suspicion. It has been widely used for illegal purposes, such as spreading computer viruses and transmitting illegal content. It is therefore of great practical significance to develop effective countermeasures against steganography. Steganalysis is such a countermeasure: it aims to detect steganographic behavior and stego multimedia, and its research has received widespread attention. There are two main types of steganalyzers: handcrafted steganalyzers, which detect stego objects using high-dimensional handcrafted features, and deep learning steganalyzers, whose structures are deep learning models. With the rapid progress of steganography, the detection accuracy of handcrafted steganalyzers has fallen behind, and deep learning steganalyzers have become the mainstream. However, adversarial steganography severely challenges their reliability and real-world application: by adding subtle adversarial perturbations to stego images, it can deceive deep learning steganalyzers while still conveying secret messages. In the real-world scenario, steganalyzers face three types of samples: cover, conventional stego, and adversarial stego. Hence, a qualified steganalyzer requires two different abilities: the ability to accurately classify cover and conventional stego, called accuracy, and the ability to resist adversarial perturbations and correctly classify adversarial stego, called robustness. Clearly, the robustness of deep learning steganalyzers requires enhancement. Currently, the most effective defense against adversarial steganography is retraining, i.e., augmenting the training dataset with adversarial stego. However, the steganographer can then generate adversarial
stego targeting the retrained model. In this way, steganalysis gets stuck in an "arms race", a multi-round game with steganography. Even worse, in such a multi-round game the accuracy of deep learning steganalyzers continuously drops, while adversarial steganography maintains its success rate and even improves its resistance to non-target steganalyzers. Hence, how to improve the robustness of deep learning steganalyzers while avoiding this multi-round game is an urgent problem. To address it, this dissertation proposes to improve the robustness of deep learning steganalyzers from two aspects: 1) decoupling adversarial perturbations from steganographic modifications; 2) exploiting the complementary advantages of deep features and handcrafted features. This dissertation reveals the characteristics of adversarial steganography and proposes robustness enhancement methods for different cases. The main work and contributions are summarized as follows:

1. Random Patch Sampling Based Steganalysis

Adversarial steganography minimizes the introduced perturbations subject to deceiving the target deep learning steganalyzer; current methods achieve this goal by reducing the l0 norm of the perturbations. To exploit this, this dissertation proposes patch steganalysis, which samples patches of the input image based on predicted modification probabilities. Specifically, each sampled image patch comes from a candidate group that occupies a unique modification probability range. Deep features are then extracted from these sampled patches (called patch features). In this way, the attention of the deep learning steganalyzer is scattered across the whole image, preventing it from being misled by local adversarial perturbations. Moreover, the correlations among image patches are considered. First, statistical vectors of the patch features are calculated. Second, the patch features and statistical vectors are regrouped and used to train multiple
sub-classifiers. Finally, the prediction of patch steganalysis is determined by the votes of the sub-classifiers. Extensive experiments show that patch steganalysis effectively improves the robustness of deep learning steganalyzers without retraining, while maintaining superior accuracy over handcrafted steganalyzers.

2. Robustness Enhancement Against Adversarial Steganography via Steganalyzer Outputs

Currently, there are two types of steganalyzers: handcrafted and deep learning. Deep learning models are clearly more accurate than handcrafted ones but are vulnerable to adversarial steganography, whereas handcrafted models are the opposite. Real-world steganalysis requires models with both accuracy and robustness. To this end, a robustness enhancement framework is proposed that exploits the complementary advantages of the two types of steganalyzers. Regarding label outputs: because of the gap in detection accuracy on adversarial stego images, the images labeled as cover by the deep learning model but as stego by the handcrafted model contain a large proportion of adversarial stego images. Regarding probabilistic outputs: because adversarial steganography minimizes its perturbations, the probabilistic outputs of steganalyzers on adversarial stego images differ from those on cover images. Based on these characteristics, a rough filter is proposed to separate likely adversarial stego images from the input data; the filtered images are then labeled by a dedicated classifier, while the remaining images are labeled by the deep learning steganalyzer. The proposed framework improves the robustness of deep learning steganalyzers while keeping the detection of cover and conventional stego images accurate. In the real-world scenario where cover, conventional stego, and adversarial stego images are mixed, the robustness-enhanced steganalyzers outperform previous ones.

3. Feature Fusion Based Adversarial Example Detection and Steganalysis

Steganography and adversarial examples are quite similar: both use invisible perturbations to achieve their goals (conveying secret messages and deceiving deep learning models, respectively). Thus, on the one hand, steganalysis can be adopted to detect adversarial examples; on the other hand, defenses against adversarial examples can motivate the research of robust steganalyzers. Specifically, the classic steganalysis feature set SRM (spatial rich model) has been adopted to detect adversarial examples, but its detection accuracy on sparse perturbations is low, while deep learning detectors can be bypassed by adaptive (second-round) attacks. To address this problem, this dissertation proposes to combine handcrafted features with deep features via a fusion scheme, increasing detection accuracy and defending against second-round adversarial attacks. First, to avoid deep features being overwhelmed by high-dimensional handcrafted features, an expansion-then-reduction process is introduced to compress the dimensionality of the handcrafted features. Then, to further improve accuracy, a deep learning model designed specifically for detecting adversarial examples is proposed. Last, a majority voting scheme combines the predictions from handcrafted features and deep features. Experimental results show that the proposed model outperforms state-of-the-art adversarial example detection methods and remains robust under various second-round adversarial attacks. Furthermore, this scheme improves both the accuracy and the robustness of deep learning steganalyzers.
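The expansion-then-reduction compression and the majority voting combination can be sketched as follows. This is a minimal illustration under stated assumptions, not the dissertation's implementation: the random projection matrices stand in for the learned expansion and reduction layers, the dimensions are placeholders, and the three detector outputs are hypothetical.

```python
import numpy as np

def expand_then_reduce(handcrafted, expand_dim=4096, reduce_dim=256, seed=0):
    """Compress high-dimensional handcrafted features (e.g., SRM) so they
    do not overwhelm the lower-dimensional deep features.

    Random projections stand in for the learned expansion and reduction
    layers (a simplifying assumption for this sketch)."""
    rng = np.random.default_rng(seed)
    w_expand = rng.standard_normal((handcrafted.shape[1], expand_dim))
    hidden = np.maximum(handcrafted @ w_expand, 0.0)  # expand, then ReLU
    w_reduce = rng.standard_normal((expand_dim, reduce_dim))
    return hidden @ w_reduce                          # reduce to a compact feature

def majority_vote(predictions):
    """Combine 0/1 labels from several detectors (one row per detector)."""
    predictions = np.asarray(predictions)
    return (2 * predictions.sum(axis=0) > predictions.shape[0]).astype(int)

# Hypothetical usage: three detectors vote on three images.
votes = majority_vote([[1, 0, 1],   # handcrafted-feature classifier
                       [1, 1, 0],   # deep-feature detector
                       [0, 1, 1]])  # fused-feature detector
print(votes)  # [1 1 1]
```

With an odd number of detectors the vote is never tied, which is one reason the abstract's three-way combination (handcrafted, deep, and fused predictions) is convenient.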