{"title":"A Simple Yet Practical Backdoor Prompt Attack Against Black-Box Code Summarization Engines","authors":"Yubin Qu, Song Huang, Yongming Yao, Peng Nie","doi":"10.1002/smr.70032","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>A code summarization engine based on large language models (LLMs) can describe code functionality from different perspectives according to programmers' needs. However, these engines are at risk of black-box backdoor attacks. We propose a simple yet practical method called <b>Bad Prompt Attack</b> (<span>BPA</span>), specifically designed to investigate such black-box backdoor attacks. This innovative attack method aims to induce the code summarization engine to generate summarizations that conceal security vulnerabilities in source code. Consistent with most commercial code summarization engines, <span>BPA</span> only assumes black-box query access to the target engine without requiring knowledge of its internal structure. This attack targets in-context learning by injecting adversarial demonstrations into user input prompts. We validated our method on the SOTA black-box commercial service, OpenAI API. In security-critical test cases covering seven types of CWE, <span>BPA</span> significantly increased the likelihood that the code summarization engine would generate the attacker-desired code summarization targets, achieving an average attack success rate (ASR) of 91.4%. This result underscores the potential threat of backdoor attacks on code summarization tasks while providing essential reference points for future defense research.</p>\n </div>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"37 8","pages":""},"PeriodicalIF":1.8000,"publicationDate":"2025-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Software-Evolution and Process","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/smr.70032","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
引用次数: 0
Abstract
A code summarization engine based on large language models (LLMs) can describe code functionality from different perspectives according to programmers' needs. However, these engines are at risk of black-box backdoor attacks. We propose a simple yet practical method called Bad Prompt Attack (BPA), specifically designed to investigate such black-box backdoor attacks. This innovative attack method aims to induce the code summarization engine to generate summarizations that conceal security vulnerabilities in source code. Consistent with most commercial code summarization engines, BPA only assumes black-box query access to the target engine without requiring knowledge of its internal structure. This attack targets in-context learning by injecting adversarial demonstrations into user input prompts. We validated our method on the SOTA black-box commercial service, OpenAI API. In security-critical test cases covering seven types of CWE, BPA significantly increased the likelihood that the code summarization engine would generate the attacker-desired code summarization targets, achieving an average attack success rate (ASR) of 91.4%. This result underscores the potential threat of backdoor attacks on code summarization tasks while providing essential reference points for future defense research.