Lightweight Steganography Detection Method Based on Multiple Residual Structures and Transformer

IF 1.6 | CAS Tier 4 (Computer Science) | JCR Q3 (Engineering, Electrical & Electronic)
Hao Li;Yi Zhang;Jinwei Wang;Weiming Zhang;Xiangyang Luo
{"title":"Lightweight Steganography Detection Method Based on Multiple Residual Structures and Transformer","authors":"Hao Li;Yi Zhang;Jinwei Wang;Weiming Zhang;Xiangyang Luo","doi":"10.23919/cje.2022.00.452","DOIUrl":null,"url":null,"abstract":"Existing deep learning-based steganography detection methods utilize convolution to automatically capture and learn steganographic features, yielding higher detection efficiency compared to manually designed steganography detection methods. Detection methods based on convolutional neural network frameworks can extract global features by increasing the network's depth and width. These frameworks are not highly sensitive to global features and can lead to significant resource consumption. This manuscript proposes a lightweight steganography detection method based on multiple residual structures and Transformer (ResFormer). A multi-residuals block based on channel rearrangement is designed in the preprocessing layer. Multiple residuals are used to enrich the residual features and channel shuffle is used to enhance the feature representation capability. A lightweight convolutional and Transformer feature extraction backbone is constructed, which reduces the computational and parameter complexity of the network by employing depth-wise separable convolutions. This backbone integrates local and global image features through the fusion of convolutional layers and Transformer, enhancing the network's ability to learn global features and effectively enriching feature diversity. An effective weighted loss function is introduced for learning both local and global features, BiasLoss loss function is used to give full play to the role of feature diversity in classification, and cross-entropy loss function and contrast loss function are organically combined to enhance the expression ability of features. Based on BossBase-1.01, BOWS2 and ALASKA#2, extensive experiments are conducted on the stego images generated by spatial and JPEG domain adaptive steganographic algorithms, employing both classical and state-of-the-art steganalysis techniques. The experimental results demonstrate that compared to the SRM, SRNet, SiaStegNet, CSANet, LWENet, and SiaIRNet methods, the proposed ResFormer method achieves the highest reduction in the parameter, up to 91.82%. It achieves the highest improvement in detection accuracy, up to 5.10%. Compared to the SRNet and EWNet methods, the proposed ResFormer method achieves an improvement in detection accuracy for the J-UNIWARD algorithm by 5.78% and 6.24%, respectively.","PeriodicalId":50701,"journal":{"name":"Chinese Journal of Electronics","volume":null,"pages":null},"PeriodicalIF":1.6000,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10606201","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Chinese Journal of Electronics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10606201/","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Existing deep learning-based steganography detection methods use convolution to automatically capture and learn steganographic features, yielding higher detection efficiency than manually designed detection methods. Detection methods built on convolutional neural network frameworks can extract global features by increasing the network's depth and width; however, such frameworks remain weakly sensitive to global features, and the required expansion leads to significant resource consumption. This manuscript proposes a lightweight steganography detection method based on multiple residual structures and a Transformer (ResFormer). A multi-residual block based on channel rearrangement is designed in the preprocessing layer: multiple residuals enrich the residual features, and channel shuffle strengthens the feature representation capability. A lightweight convolutional-Transformer feature extraction backbone is constructed, which reduces the network's computational and parameter complexity by employing depth-wise separable convolutions. By fusing convolutional layers with the Transformer, this backbone integrates local and global image features, enhancing the network's ability to learn global features and effectively enriching feature diversity. An effective weighted loss function is introduced for learning both local and global features: the BiasLoss function fully exploits feature diversity for classification, and the cross-entropy and contrastive loss functions are combined to strengthen the expressiveness of the features. Based on BossBase-1.01, BOWS2, and ALASKA#2, extensive experiments are conducted on stego images generated by spatial- and JPEG-domain adaptive steganographic algorithms, employing both classical and state-of-the-art steganalysis techniques. The experimental results demonstrate that, compared to the SRM, SRNet, SiaStegNet, CSANet, LWENet, and SiaIRNet methods, the proposed ResFormer achieves up to a 91.82% reduction in parameters and up to a 5.10% improvement in detection accuracy. Compared to the SRNet and EWNet methods, ResFormer improves detection accuracy on the J-UNIWARD algorithm by 5.78% and 6.24%, respectively.
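The abstract names three building blocks: a channel-shuffled multi-residual preprocessing block, a backbone built on depth-wise separable convolutions, and a weighted loss that combines cross-entropy with a contrastive term. The sketch below illustrates these ideas in PyTorch; the module names, channel sizes, loss weighting, and the pairing scheme used for the contrastive term are illustrative assumptions rather than the authors' implementation, and the BiasLoss term is omitted.

```python
# Minimal sketch of the techniques named in the abstract (assumed details, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Rearrange channels across groups (ShuffleNet-style) to mix residual features."""
    b, c, h, w = x.size()
    x = x.view(b, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(b, c, h, w)


class DepthwiseSeparableConv(nn.Module):
    """Depth-wise 3x3 conv followed by a 1x1 point-wise conv to cut parameters and FLOPs."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return F.relu(self.bn(self.pointwise(self.depthwise(x))))


def weighted_loss(logits, features, labels, margin: float = 1.0,
                  w_ce: float = 1.0, w_con: float = 0.1) -> torch.Tensor:
    """Cross-entropy plus a simple pairwise contrastive term; the weights are illustrative."""
    ce = F.cross_entropy(logits, labels)
    # Pair the first and second halves of the batch as (anchor, partner) examples.
    f1, f2 = features.chunk(2, dim=0)
    y1, y2 = labels.chunk(2, dim=0)
    same = (y1 == y2).float()
    dist = F.pairwise_distance(f1, f2)
    con = (same * dist.pow(2) + (1 - same) * F.relu(margin - dist).pow(2)).mean()
    return w_ce * ce + w_con * con


if __name__ == "__main__":
    x = torch.randn(4, 30, 64, 64)                 # e.g. 30 residual maps from a preprocessing layer
    x = channel_shuffle(x, groups=3)               # mix features across residual groups
    feats = DepthwiseSeparableConv(30, 64)(x)      # lightweight feature extraction
    pooled = feats.mean(dim=(2, 3))                # global average pooling
    logits = pooled @ torch.randn(64, 2)           # stand-in cover/stego classifier head
    loss = weighted_loss(logits, pooled, torch.tensor([0, 1, 0, 1]))
```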
Source Journal
Chinese Journal of Electronics (Engineering & Technology – Engineering: Electrical & Electronic)
CiteScore: 3.70
Self-citation rate: 16.70%
Articles per year: 342
Review time: 12.0 months
About the journal: CJE focuses on emerging fields of electronics, publishing innovative and transformative research papers. Most papers published in CJE come from universities and research institutes, presenting their innovative research results. Both theoretical and practical contributions are encouraged, and original research papers reporting novel solutions to hot topics in electronics are strongly recommended.