CSA: Crafting adversarial examples via content and style attacks

Impact Factor 3.8 · CAS Tier 2 (Computer Science) · JCR Q2 (Computer Science, Information Systems)
Wei Chen , Yunqi Zhang
Journal of Information Security and Applications, Vol. 89, Article 103974. DOI: 10.1016/j.jisa.2025.103974. Published 2025-01-25 (Journal Article).
Citations: 0

Abstract

Most existing black-box attacks fall into two categories: gradient-based attacks and unrestricted attacks. The former inject adversarial perturbations into clean examples under an Lp-norm constraint, while the latter attack by altering the shape, color, and texture of the original image. However, adversarial examples generated by gradient-based attacks are vulnerable to defense methods and look unnatural to the human eye, while unrestricted attacks produce adversarial examples with poorer transferability than gradient-based attacks. We therefore propose a novel attack that combines the two, the Content and Style Attack (CSA). Specifically, we use an encoder to extract the content features of the original image and train a reconstructor to generate an image consistent with those features. A gradient-based method then injects perturbations, after which the encoder extracts the content features of the altered image. We use a momentum-based approach to search for malicious style information, which is fused with the adversarial content features to form the final attack features. Extensive experiments on the ImageNet standard dataset demonstrate that our method generates adversarial examples that are both natural-looking and highly transferable.
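The "momentum-based approach to search for malicious style information" follows the general pattern of momentum iterative attacks (MI-FGSM-style updates: accumulate an L1-normalized gradient into a velocity term, then take a signed ascent step). The sketch below illustrates that update rule only; the `grad_fn` interface, function name, and hyperparameter values are illustrative assumptions, not the paper's actual implementation, and a toy quadratic loss stands in for the real attack loss.

```python
def momentum_style_search(style, grad_fn, steps=10, alpha=0.1, mu=0.9):
    """Sketch of a momentum-based search over a style-feature vector.

    style   -- initial style features (list of floats)
    grad_fn -- returns the gradient of the attack loss w.r.t. the
               style vector (hypothetical interface)
    alpha   -- step size; mu -- momentum decay factor
    """
    g = [0.0] * len(style)   # accumulated momentum
    s = list(style)
    for _ in range(steps):
        grad = grad_fn(s)
        # normalize the raw gradient by its L1 norm before accumulating,
        # as in momentum iterative attacks
        norm = sum(abs(d) for d in grad) + 1e-12
        g = [mu * gi + di / norm for gi, di in zip(g, grad)]
        # signed ascent step in the direction of the momentum term
        s = [si + alpha * (1.0 if gi > 0 else -1.0 if gi < 0 else 0.0)
             for si, gi in zip(s, g)]
    return s

# Toy loss ||s||^2: its gradient pushes the style vector away from zero,
# standing in for a real loss that seeks misclassification-inducing style.
grad_fn = lambda s: [2.0 * si for si in s]
adv_style = momentum_style_search([0.5, -0.5], grad_fn)
```

With the toy gradient above, each iteration moves every coordinate a fixed `alpha` further from the origin, so after 10 steps the initial vector `[0.5, -0.5]` drifts to roughly `[1.5, -1.5]`.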
Source journal
Journal of Information Security and Applications (Computer Science - Computer Networks and Communications)
CiteScore: 10.90
Self-citation rate: 5.40%
Articles per year: 206
Review time: 56 days
Journal description: Journal of Information Security and Applications (JISA) focuses on original research and practice-driven applications with relevance to information security and applications. JISA provides a common linkage between a vibrant scientific and research community and industry professionals by offering a clear view of modern problems and challenges in information security, as well as identifying promising scientific and "best-practice" solutions. JISA issues offer a balance between original research work and innovative industrial approaches by internationally renowned information security experts and researchers.