PNAP-YOLO: An Improved Prompts-Based Naturalistic Adversarial Patch Model for Object Detectors

Q1 Decision Sciences
Jun Li, Chenwu Shan, Liyan Shen, Yawei Ren, Jiajie Zhang
DOI: 10.1007/s40745-025-00604-0
Journal: Annals of Data Science, Vol. 12, No. 3, pp. 1055-1072
Publication date: 2025-05-02 (Journal Article)
Citations: 0
Full text: https://link.springer.com/article/10.1007/s40745-025-00604-0

Abstract

Detectors have been extensively used in scenarios such as autonomous driving and video surveillance. Recent studies, however, have revealed that these detectors are vulnerable to adversarial attacks, particularly adversarial patch attacks. Adversarial patches are crafted to disrupt deep learning models by perturbing image regions, misleading the models when added to normal images. Traditional adversarial patches often lack semantics, which makes it difficult to keep them concealed in physical-world scenarios. To tackle this issue, this paper proposes a Prompt-based Natural Adversarial Patch generation method, which creates patches controllable through textual descriptions to ensure flexibility in application. The approach leverages a state-of-the-art text-to-image generation model, the Latent Diffusion Model (LDM), to produce adversarial patches, and optimizes their attack performance by updating the LDM's latent variables through a combined loss function. Experimental results indicate that the method generates more natural, semantically richer adversarial patches that achieve effective attacks against various detectors.
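The core idea the abstract describes, updating the LDM's latent variables through a combined loss, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: `decode` and `detect_score` below are toy stand-ins for the real components (the paper uses an LDM decoder and YOLO detectors), and the loss weighting is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8)) / 4.0    # toy "decoder" weights (stand-in for the LDM decoder)
w = rng.normal(size=16)               # toy "detector" weights (stand-in for a YOLO head)

def decode(z):
    """Map a latent vector to a flattened patch with pixels in (-1, 1)."""
    return np.tanh(W @ z)

def detect_score(x):
    """Detection confidence in (0, 1) for the (toy) patched input."""
    return 1.0 / (1.0 + np.exp(-w @ x))

def combined_loss(z, z0, lam=0.1):
    # Attack term: push the detection confidence down.
    # Naturalness term: keep the latent close to the prompt-derived latent z0,
    # so the decoded patch stays semantically close to the text description.
    return detect_score(decode(z)) + lam * np.sum((z - z0) ** 2)

def grad(z, z0, lam=0.1):
    """Analytic gradient of combined_loss with respect to the latent z."""
    x = decode(z)
    s = detect_score(x)
    dL_dx = s * (1.0 - s) * w                  # backprop through the sigmoid
    dL_dz = W.T @ ((1.0 - x ** 2) * dL_dx)     # backprop through the tanh decoder
    return dL_dz + 2.0 * lam * (z - z0)

z0 = rng.normal(size=8)   # latent sampled for the text prompt (kept fixed)
z = z0.copy()
for _ in range(300):      # plain gradient descent on the latent variables
    z -= 0.1 * grad(z, z0)

print(detect_score(decode(z0)), detect_score(decode(z)))
```

The optimized latent lowers the detection confidence while the quadratic anchor term keeps it near the prompt-conditioned starting point; in the actual method, gradients flow through the frozen LDM decoder and detector instead of these toy functions.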


Source journal: Annals of Data Science (Decision Sciences: Statistics, Probability and Uncertainty)
CiteScore: 6.50
Self-citation rate: 0.00%
Annual publications: 93
About the journal: Annals of Data Science (ADS) publishes cutting-edge research findings, experimental results, and case studies in data science. Although data science is regarded as an interdisciplinary field that uses mathematics, statistics, databases, data mining, high-performance computing, knowledge management, and virtualization to discover knowledge from Big Data, it should have its own scientific content, such as axioms, laws, and rules, which are fundamentally important for experts in different fields exploring their own interests in Big Data. ADS encourages contributors to address such challenging problems on this exchange platform. At present, how to discover knowledge from heterogeneous data in Big Data environments remains an open problem. ADS is a series of volumes edited by either the editorial office or guest editors. Guest editors are responsible for calls for papers and the review process for high-quality contributions in their volumes.