Jun Li, Chenwu Shan, Liyan Shen, Yawei Ren, Jiajie Zhang
Annals of Data Science, 12(3): 1055–1072. Published 2025-05-02. DOI: 10.1007/s40745-025-00604-0
PNAP-YOLO: An Improved Prompts-Based Naturalistic Adversarial Patch Model for Object Detectors
Object detectors have been widely deployed in scenarios such as autonomous driving and video surveillance. Nonetheless, recent studies have revealed that these detectors are vulnerable to adversarial attacks, particularly adversarial patch attacks. Adversarial patches are specifically crafted to disrupt deep learning models by perturbing image regions, misleading the models when added to normal images. Traditional adversarial patches often lack semantics, which makes them difficult to conceal in physical-world scenarios. To tackle this issue, this paper proposes a Prompt-based Natural Adversarial Patch generation method, which creates patches controllable by textual descriptions to ensure flexibility in application. The approach leverages a recent text-to-image generation model, the Latent Diffusion Model (LDM), to produce adversarial patches. We optimize the attack performance of the patches by updating the latent variables of the LDM through a combined loss function. Experimental results indicate that our method can generate more natural, semantically rich adversarial patches, achieving effective attacks on various detectors.
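The core idea described in the abstract — updating the LDM's latent variables under a combined loss (an attack term plus a naturalness term) — can be sketched as a simple gradient-based loop. The sketch below is illustrative only: `decode` and `detection_confidence` are toy stand-ins for the LDM decoder and a YOLO-style detector, the loss weights and finite-difference gradients are assumptions, and none of the names come from the paper.

```python
import numpy as np

# Toy stand-ins: in the paper, decode() would be the LDM decoder and
# detection_confidence() a YOLO-style detector's score on the patched
# image; here they are smooth surrogates used only to show the loop.
def decode(z):
    # the "patch" as a deterministic function of the latent
    return np.tanh(z)

def detection_confidence(patch):
    # surrogate for the detector's confidence on the patched image
    return float(np.mean(patch ** 2))

def combined_loss(z, lam=0.1):
    patch = decode(z)
    # attack term: drive the detector's confidence down
    attack = detection_confidence(patch)
    # naturalness term: keep the latent near the diffusion prior ~N(0, I)
    natural = lam * float(np.mean(z ** 2))
    return attack + natural

def optimize_latent(z0, steps=200, lr=0.05, eps=1e-4):
    # finite-difference gradient descent on the latent; a real
    # implementation would backpropagate through the LDM instead
    z = z0.copy()
    for _ in range(steps):
        grad = np.zeros_like(z)
        for i in range(z.size):
            zp = z.copy(); zp.flat[i] += eps
            zm = z.copy(); zm.flat[i] -= eps
            grad.flat[i] = (combined_loss(zp) - combined_loss(zm)) / (2 * eps)
        z -= lr * grad
    return z

z0 = np.random.default_rng(0).normal(size=(8,))
z_adv = optimize_latent(z0)
print(combined_loss(z_adv) < combined_loss(z0))  # loss decreases
```

The design point the sketch illustrates is that the patch itself is never optimized directly: only the latent is updated, so every intermediate patch remains an output of the generative model, which is what keeps the result natural-looking and controllable by the text prompt.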
About this journal:
Annals of Data Science (ADS) publishes cutting-edge research findings, experimental results, and case studies in data science. Although data science is regarded as an interdisciplinary field that draws on mathematics, statistics, databases, data mining, high-performance computing, knowledge management, and virtualization to discover knowledge from Big Data, it should have its own scientific content, such as axioms, laws, and rules, which are fundamentally important for experts in different fields exploring their own interests in Big Data. ADS encourages contributors to address such challenging problems on this exchange platform. At present, the question of how to discover knowledge from heterogeneous data in a Big Data environment needs to be addressed. ADS is a series of volumes edited by either the editorial office or guest editors. Guest editors are responsible for the calls for papers and the review process for high-quality contributions in their volumes.