{"title":"基于参数统计偏差的神经网络隐写分析","authors":"Yi Yin, Weiming Zhang, Nenghai Yu, Kejiang Chen","doi":"10.52396/justc-2021-0197","DOIUrl":null,"url":null,"abstract":"Many pretrained deep learning models have been released to help engineers and researchers develop deep learning-based systems or conduct research with minimall effort. Previous work has shown that at secret message can be embedded in neural network parameters without compromising the accuracy of the model. Malicious developers can, therefore, hide malware or other baneful information in pretrained models, causing harm to society. Hence, reliable detection of these vicious pretrained models is urgently needed. We analyze existing approaches for hiding messages and find that they will ineluctably cause biases in the parameter statistics. Therefore, we propose steganalysis methods for steganography on neural network parameters that extract statistics from benign and malicious models and build classifiers based on the extracted statistics. To the best of our knowledge, this is the first study on neural network steganalysis. The experimental results reveal that our proposed algorithm can effectively detect a model with an embedded message. Notably, our detection methods are still valid in cases where the payload of the stego model is low.","PeriodicalId":17548,"journal":{"name":"中国科学技术大学学报","volume":"27 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Steganalysis of neural networks based on parameter statistical bias\",\"authors\":\"Yi Yin, Weiming Zhang, Nenghai Yu, Kejiang Chen\",\"doi\":\"10.52396/justc-2021-0197\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Many pretrained deep learning models have been released to help engineers and researchers develop deep learning-based systems or conduct research with minimall effort. Previous work has shown that at secret message can be embedded in neural network parameters without compromising the accuracy of the model. Malicious developers can, therefore, hide malware or other baneful information in pretrained models, causing harm to society. Hence, reliable detection of these vicious pretrained models is urgently needed. We analyze existing approaches for hiding messages and find that they will ineluctably cause biases in the parameter statistics. Therefore, we propose steganalysis methods for steganography on neural network parameters that extract statistics from benign and malicious models and build classifiers based on the extracted statistics. To the best of our knowledge, this is the first study on neural network steganalysis. The experimental results reveal that our proposed algorithm can effectively detect a model with an embedded message. 
Notably, our detection methods are still valid in cases where the payload of the stego model is low.\",\"PeriodicalId\":17548,\"journal\":{\"name\":\"中国科学技术大学学报\",\"volume\":\"27 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"中国科学技术大学学报\",\"FirstCategoryId\":\"1093\",\"ListUrlMain\":\"https://doi.org/10.52396/justc-2021-0197\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"Engineering\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"中国科学技术大学学报","FirstCategoryId":"1093","ListUrlMain":"https://doi.org/10.52396/justc-2021-0197","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"Engineering","Score":null,"Total":0}
Abstract: Many pretrained deep learning models have been released to help engineers and researchers develop deep learning-based systems or conduct research with minimal effort. Previous work has shown that a secret message can be embedded in neural network parameters without compromising the accuracy of the model. Malicious developers can therefore hide malware or other harmful information in pretrained models, causing harm to society. Hence, reliable detection of such malicious pretrained models is urgently needed. We analyze existing approaches for hiding messages and find that they inevitably cause biases in the parameter statistics. Therefore, we propose steganalysis methods for steganography on neural network parameters that extract statistics from benign and malicious models and build classifiers based on the extracted statistics. To the best of our knowledge, this is the first study on neural network steganalysis. The experimental results show that the proposed algorithm can effectively detect a model with an embedded message. Notably, our detection methods remain valid even when the payload of the stego model is low.
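
The abstract only outlines the detection pipeline (extract parameter statistics from benign and stego models, then train a classifier), so the following is a minimal sketch of that idea rather than the authors' exact method. It assumes PyTorch models of identical architecture, uses the low-order mantissa bits of float32 weights plus coarse per-tensor moments as illustrative features, and fits a scikit-learn logistic-regression detector; the helper names (parameter_features, train_detector) and the specific feature choices are hypothetical.

    # Sketch of parameter-statistics-based steganalysis (illustrative, not the paper's method).
    # Intuition: LSB-style embedding rewrites the low-order mantissa bits of float32 weights,
    # which biases their bit distribution relative to an unmodified model.
    import numpy as np
    import torch
    from sklearn.linear_model import LogisticRegression

    def parameter_features(model: torch.nn.Module, n_low_bits: int = 8) -> np.ndarray:
        """Collect simple statistics of the model's weight parameters."""
        feats = []
        for p in model.parameters():
            w = p.detach().cpu().numpy().astype(np.float32).ravel()
            bits = w.view(np.uint32)                      # raw IEEE-754 bit patterns
            low = bits & ((1 << n_low_bits) - 1)          # low-order mantissa bits
            # Fraction of ones per low bit position: roughly balanced in benign models,
            # but embedding can push it toward a biased, structured distribution.
            ones = [((low >> k) & 1).mean() for k in range(n_low_bits)]
            feats.extend(ones)
            feats.extend([w.mean(), w.std(), np.abs(w).mean()])  # coarse global stats
        return np.asarray(feats, dtype=np.float64)

    def train_detector(benign_models, stego_models):
        # benign_models / stego_models: lists of nn.Module instances with identical
        # architecture (hypothetical training data the user must supply).
        X = np.stack([parameter_features(m) for m in benign_models + stego_models])
        y = np.array([0] * len(benign_models) + [1] * len(stego_models))
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        return clf  # clf.predict(parameter_features(suspect)[None, :]) flags stego models

Any concrete detector would differ in its feature set and classifier, but this is the shape of the approach the abstract describes: statistics computed directly over parameters, with no need to run the model on data.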