Patch Steganalysis: A Sampling Based Defense Against Adversarial Steganography
Chuan Qin, Na Zhao, Weiming Zhang, Nenghai Yu
ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), published 2022-05-23
DOI: 10.1109/icassp43922.2022.9747638
In recent years, the classification accuracy of CNN (convolutional neural network) steganalyzers has improved rapidly. However, because general CNN classifiers misclassify adversarial samples, CNN steganalyzers struggle to detect adversarial steganography, which combines adversarial examples with steganography. Adversarial training and preprocessing are two effective defenses against adversarial samples, but the literature shows that adversarial training is ineffective against adversarial steganography, and preprocessing, which aims to wipe out adversarial perturbations, also destroys the steganographic modifications themselves. In this paper, we propose a novel sampling-based defense method for steganalysis. Specifically, by sampling image patches, CNN steganalyzers can bypass the sparse adversarial perturbations and extract effective features. Additionally, by calculating statistical vectors and regrouping deep features, the impact on the classification accuracy for common samples is effectively suppressed. Experiments show that the proposed method significantly improves robustness against adversarial steganography without adversarial training.
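The patch-sampling idea in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the random patch sampler, the toy residual-statistics "feature extractor" (standing in for a CNN steganalyzer's deep features), and the mean/min/max aggregation (standing in for the statistical-vector step) are all assumptions. The intuition it demonstrates is that sparse adversarial perturbations cannot cover every sampled patch, so per-patch statistics aggregated across patches remain informative.

```python
# Hedged sketch of patch sampling + statistical-vector aggregation.
# All names and the toy feature extractor are illustrative, not the
# method from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sample_patches(image, patch_size=32, n_patches=16):
    """Randomly sample square patches; sparse adversarial perturbations
    are unlikely to land in every patch, so many patches stay clean."""
    h, w = image.shape
    ys = rng.integers(0, h - patch_size, n_patches)
    xs = rng.integers(0, w - patch_size, n_patches)
    return np.stack([image[y:y + patch_size, x:x + patch_size]
                     for y, x in zip(ys, xs)])

def toy_features(patch):
    """Stand-in for deep steganalytic features: first-order high-pass
    residual statistics, a classic steganalysis cue."""
    residual = patch[:, 1:] - patch[:, :-1]  # horizontal residual
    return np.array([residual.mean(), residual.std(),
                     np.abs(residual).mean()])

def statistical_vector(image):
    """Aggregate per-patch features into order statistics (mean/min/max),
    loosely mimicking the statistical-vector step before a classifier."""
    feats = np.array([toy_features(p) for p in sample_patches(image)])
    return np.concatenate([feats.mean(0), feats.min(0), feats.max(0)])

# Toy cover image: the resulting vector would feed a downstream classifier.
cover = rng.normal(128.0, 20.0, (256, 256))
vec = statistical_vector(cover)
print(vec.shape)  # (9,)
```

In this sketch the final vector has a fixed dimensionality regardless of how many patches an attacker manages to perturb, which is what lets a downstream classifier stay stable on ordinary (non-adversarial) samples.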