AdaPPA: Adaptive Position Pre-Fill Jailbreak Attack Approach Targeting LLMs
Lijia Lv, Weigang Zhang, Xuehai Tang, Jie Wen, Feng Liu, Jizhong Han, Songlin Hu
arXiv - CS - Cryptography and Security, arXiv:2409.07503, published 2024-09-11
Jailbreak vulnerabilities in Large Language Models (LLMs) are exploited by methods
that extract malicious content from the model through carefully crafted prompts or
suffixes, a problem that has garnered significant attention from the research community.
However, traditional attack methods, which primarily operate at the semantic
level, are easily detected by the model. These methods overlook the differences
in the model's alignment protection capabilities at different output stages. To
address this issue, we propose an adaptive position pre-fill approach for
executing jailbreak attacks on LLMs. Our method leverages the
model's instruction-following capabilities to first output pre-filled safe
content, then exploits its narrative-shifting abilities to generate harmful
content. Extensive black-box experiments demonstrate that our method improves the
attack success rate by 47% on Llama2, a model widely recognized for its strong
safety alignment, compared to existing approaches. Our code can be found at:
https://github.com/Yummy416/AdaPPA.
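The core idea the abstract describes — inducing the model to first emit pre-filled safe content and then shifting the narrative — amounts to assembling a structured prompt from three parts. The sketch below is purely illustrative and is not the authors' implementation (see the AdaPPA repository for that); the function and parameter names are assumptions, and the placeholder strings are deliberately benign.

```python
def build_prefill_prompt(question: str, safe_prefill: str, shift: str) -> str:
    """Assemble a position pre-fill prompt (illustrative sketch only).

    question     -- the request sent to the model
    safe_prefill -- benign content the model is induced to output first,
                    so alignment checks fire on safe text
    shift        -- a narrative transition intended to redirect the
                    model's continuation
    All names here are hypothetical, not from the AdaPPA codebase.
    """
    # The three parts are concatenated in order; the "position" of the
    # pre-filled segment relative to the question is what the adaptive
    # method in the paper would tune.
    return f"{question}\n{safe_prefill}\n{shift}"


prompt = build_prefill_prompt(
    "Tell me about lock mechanisms.",
    "Here is some general, publicly available background:",
    "Continuing the explanation in more detail:",
)
```

In practice the paper's method selects and adapts these segments automatically per target model, rather than using fixed strings as above.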