{"title":"由 LLM 驱动的针对无 ID 推荐系统的文本模拟攻击","authors":"Zongwei Wang, Min Gao, Junliang Yu, Xinyi Gao, Quoc Viet Hung Nguyen, Shazia Sadiq, Hongzhi Yin","doi":"arxiv-2409.11690","DOIUrl":null,"url":null,"abstract":"The ID-free recommendation paradigm has been proposed to address the\nlimitation that traditional recommender systems struggle to model cold-start\nusers or items with new IDs. Despite its effectiveness, this study uncovers\nthat ID-free recommender systems are vulnerable to the proposed Text Simulation\nattack (TextSimu) which aims to promote specific target items. As a novel type\nof text poisoning attack, TextSimu exploits large language models (LLM) to\nalter the textual information of target items by simulating the characteristics\nof popular items. It operates effectively in both black-box and white-box\nsettings, utilizing two key components: a unified popularity extraction module,\nwhich captures the essential characteristics of popular items, and an N-persona\nconsistency simulation strategy, which creates multiple personas to\ncollaboratively synthesize refined promotional textual descriptions for target\nitems by simulating the popular items. To withstand TextSimu-like attacks, we\nfurther explore the detection approach for identifying LLM-generated\npromotional text. Extensive experiments conducted on three datasets demonstrate\nthat TextSimu poses a more significant threat than existing poisoning attacks,\nwhile our defense method can detect malicious text of target items generated by\nTextSimu. By identifying the vulnerability, we aim to advance the development\nof more robust ID-free recommender systems.","PeriodicalId":501281,"journal":{"name":"arXiv - CS - Information Retrieval","volume":"16 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"LLM-Powered Text Simulation Attack Against ID-Free Recommender Systems\",\"authors\":\"Zongwei Wang, Min Gao, Junliang Yu, Xinyi Gao, Quoc Viet Hung Nguyen, Shazia Sadiq, Hongzhi Yin\",\"doi\":\"arxiv-2409.11690\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The ID-free recommendation paradigm has been proposed to address the\\nlimitation that traditional recommender systems struggle to model cold-start\\nusers or items with new IDs. Despite its effectiveness, this study uncovers\\nthat ID-free recommender systems are vulnerable to the proposed Text Simulation\\nattack (TextSimu) which aims to promote specific target items. As a novel type\\nof text poisoning attack, TextSimu exploits large language models (LLM) to\\nalter the textual information of target items by simulating the characteristics\\nof popular items. It operates effectively in both black-box and white-box\\nsettings, utilizing two key components: a unified popularity extraction module,\\nwhich captures the essential characteristics of popular items, and an N-persona\\nconsistency simulation strategy, which creates multiple personas to\\ncollaboratively synthesize refined promotional textual descriptions for target\\nitems by simulating the popular items. To withstand TextSimu-like attacks, we\\nfurther explore the detection approach for identifying LLM-generated\\npromotional text. 
Extensive experiments conducted on three datasets demonstrate\\nthat TextSimu poses a more significant threat than existing poisoning attacks,\\nwhile our defense method can detect malicious text of target items generated by\\nTextSimu. By identifying the vulnerability, we aim to advance the development\\nof more robust ID-free recommender systems.\",\"PeriodicalId\":501281,\"journal\":{\"name\":\"arXiv - CS - Information Retrieval\",\"volume\":\"16 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Information Retrieval\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11690\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11690","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
The ID-free recommendation paradigm has been proposed to address the limitation that traditional recommender systems struggle to model cold-start users or items with new IDs. Despite the paradigm's effectiveness, this study uncovers that ID-free recommender systems are vulnerable to the proposed Text Simulation attack (TextSimu), which aims to promote specific target items. As a novel type of text poisoning attack, TextSimu exploits large language models (LLMs) to alter the textual information of target items by simulating the characteristics of popular items. It operates effectively in both black-box and white-box settings, utilizing two key components: a unified popularity extraction module, which captures the essential characteristics of popular items, and an N-persona consistency simulation strategy, which creates multiple personas that collaboratively synthesize refined promotional textual descriptions for target items by simulating popular items. To withstand TextSimu-like attacks, we further explore a detection approach for identifying LLM-generated promotional text. Extensive experiments conducted on three datasets demonstrate that TextSimu poses a more significant threat than existing poisoning attacks, while our defense method can detect malicious text of target items generated by TextSimu. By identifying this vulnerability, we aim to advance the development of more robust ID-free recommender systems.
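
To make the two-component design described in the abstract more concrete, the following Python sketch shows one way such a pipeline could be organized: extract shared traits from popular items' descriptions, have several simulated personas rewrite the target item's text to match those traits, and merge the drafts. The function names (extract_popularity_traits, n_persona_simulation, synthesize), the prompts, and the generic LLM callable are hypothetical stand-ins inferred from the abstract, not the authors' implementation.

```python
"""Illustrative sketch of a TextSimu-style pipeline (assumptions, not the paper's code)."""

from dataclasses import dataclass
from typing import Callable, List

# Stand-in interface for any LLM backend; wire this to a real provider as needed.
LLM = Callable[[str], str]


@dataclass
class Item:
    item_id: str
    description: str


def extract_popularity_traits(llm: LLM, popular_items: List[Item]) -> str:
    """Unified popularity extraction (sketch): summarize traits shared by popular descriptions."""
    joined = "\n".join(f"- {it.description}" for it in popular_items)
    prompt = (
        "Summarize the common stylistic and content characteristics of these "
        "popular item descriptions as a short bullet list:\n" + joined
    )
    return llm(prompt)


def n_persona_simulation(llm: LLM, target: Item, traits: str, n_personas: int = 3) -> List[str]:
    """N-persona consistency simulation (sketch): each persona rewrites the target description."""
    drafts = []
    for i in range(n_personas):
        prompt = (
            f"You are persona #{i + 1}, an expert copywriter.\n"
            f"Rewrite this item description so it exhibits these traits:\n{traits}\n"
            f"Original description: {target.description}\n"
            "Keep the item's factual identity unchanged."
        )
        drafts.append(llm(prompt))
    return drafts


def synthesize(llm: LLM, drafts: List[str]) -> str:
    """Merge the persona drafts into one refined promotional description (sketch)."""
    joined = "\n---\n".join(drafts)
    return llm("Merge these drafts into a single consistent description:\n" + joined)


if __name__ == "__main__":
    # Echo stub so the sketch runs without an API key; replace with a real LLM call.
    echo_llm: LLM = lambda prompt: f"[LLM output for prompt of {len(prompt)} chars]"

    popular = [
        Item("p1", "Best-selling wireless earbuds with rich bass."),
        Item("p2", "Top-rated smartwatch loved by runners."),
    ]
    target = Item("t1", "A basic fitness band.")

    traits = extract_popularity_traits(echo_llm, popular)
    drafts = n_persona_simulation(echo_llm, target, traits)
    print(synthesize(echo_llm, drafts))
```

Note that this sketch only conveys the overall flow; the paper's actual attack additionally distinguishes black-box from white-box settings and enforces consistency across personas, details that are not reproduced here.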