{"title":"CamStegNet: A Robust Image Steganography Method Based on Camouflage Model","authors":"Le Mao;Yun Tan;Jiaohua Qin;Xuyu Xiang","doi":"10.1109/TCSVT.2025.3570725","DOIUrl":null,"url":null,"abstract":"Deep learning models are increasingly being employed in steganographic schemes for the embedding and extraction of secret information. However, steganographic models themselves are also at risk of detection and attacks. Although there are approaches proposed to hide deep learning models, making these models difficult to detect while achieving high-quality image steganography performance remains a challenging task. In this work, a robust image steganography method based on a camouflage model CamStegNet is proposed. The steganographic model is camouflaged as a routine deep learning model to significantly enhance its concealment. A sparse weight-filling paradigm is designed to enable the model to be flexibly switched among three modes by utilizing different keys: routine machine learning task, secret embedding task and secret recovery task. Furthermore, a residual state-space module and a neighborhood attention mechanism are constructed to improve the performance of image steganography. Experiments conducted on the DIV2K, ImageNet and COCO datasets demonstrate that the stego images generated by CamStegNet are superior to existing methods in terms of visual quality. They also exhibit enhanced resistance to steganalysis and maintain over 95% robustness against noise and scale attacks. Additionally, the model demonstrates high robustness which can achieve excellent performance in machine learning tasks and maintain stability across various weight initialization methods.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 10","pages":"10599-10611"},"PeriodicalIF":11.1000,"publicationDate":"2025-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/11006153/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Deep learning models are increasingly being employed in steganographic schemes for the embedding and extraction of secret information. However, steganographic models themselves are also at risk of detection and attack. Although approaches have been proposed to hide deep learning models, making these models difficult to detect while achieving high-quality image steganography remains a challenging task. In this work, a robust image steganography method based on a camouflage model, CamStegNet, is proposed. The steganographic model is camouflaged as a routine deep learning model to significantly enhance its concealment. A sparse weight-filling paradigm is designed so that the model can be flexibly switched among three modes by using different keys: a routine machine learning task, a secret embedding task, and a secret recovery task. Furthermore, a residual state-space module and a neighborhood attention mechanism are constructed to improve image steganography performance. Experiments on the DIV2K, ImageNet, and COCO datasets demonstrate that the stego images generated by CamStegNet surpass existing methods in visual quality. They also exhibit enhanced resistance to steganalysis and maintain over 95% robustness against noise and scaling attacks. Additionally, the model is highly robust: it achieves excellent performance on machine learning tasks and remains stable across various weight initialization methods.
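To make the key-driven mode switching described above concrete, the following is a minimal, illustrative sketch of how a "sparse weight-filling" scheme could work in principle: one shared weight tensor holds several task-specific sub-networks, and a key-derived sparse binary mask selects which subset of weights is active for a given mode. This is not the authors' implementation; all names (derive_mask, fill_weights, MODES) and design details (SHA-256 key hashing, 30% density, overwrite-on-overlap filling) are assumptions made for illustration only.

```python
# Hypothetical sketch of key-based sparse weight filling and mode switching.
# Not the CamStegNet implementation; names and details are illustrative.
import hashlib
import numpy as np

MODES = ("routine", "embed", "recover")  # the three modes named in the abstract

def derive_mask(key: str, shape: tuple, density: float = 0.3) -> np.ndarray:
    """Deterministically derive a sparse binary mask from a secret key."""
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    return (rng.random(shape) < density).astype(np.float32)

def fill_weights(base: np.ndarray, task_weights: dict, keys: dict) -> np.ndarray:
    """Fill sparse positions of one shared tensor with per-task weights."""
    filled = base.copy()
    for mode, w in task_weights.items():
        mask = derive_mask(keys[mode], base.shape)
        # Later masks overwrite any overlapping positions in this toy version.
        filled = filled * (1 - mask) + w * mask
    return filled

def forward(x: np.ndarray, shared: np.ndarray, key: str) -> np.ndarray:
    """Run one linear layer using only the weights selected by the key."""
    mask = derive_mask(key, shared.shape)
    return x @ (shared * mask)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shape = (16, 16)
    keys = {m: f"key-for-{m}" for m in MODES}
    task_weights = {m: rng.standard_normal(shape).astype(np.float32) for m in MODES}
    shared = fill_weights(np.zeros(shape, np.float32), task_weights, keys)
    x = rng.standard_normal((1, 16)).astype(np.float32)
    for m in MODES:
        y = forward(x, shared, keys[m])  # same tensor, different behavior per key
        print(m, float(np.abs(y).mean()))
```

Without the correct key, the shared tensor simply looks like an ordinary (if sparse) weight matrix, which conveys the intuition behind camouflaging the steganographic model as a routine one; the actual method presumably handles mask disjointness, training, and robustness far more carefully.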
Journal introduction:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.