Zhiye Wang, Baisong Liu, Chennan Lin, Xueyuan Zhang, Ce Hu, Jiangcheng Qin, Linze Luo
2023 IEEE Symposium on Computers and Communications (ISCC), 9 July 2023. DOI: 10.1109/ISCC58397.2023.10218302
Revisiting Data Poisoning Attacks on Deep Learning Based Recommender Systems
Deep learning based recommender systems (DLRS) are among the most promising recommender systems, and their robustness is crucial for building trustworthy recommendation services. However, recent studies have demonstrated that DLRS are vulnerable to data poisoning attacks. Specifically, an unpopular item can be promoted to regular users by injecting well-crafted fake user profiles into the victim recommender system. In this paper, we revisit data poisoning attacks on DLRS and find that state-of-the-art attacks suffer from two issues: they are user-agnostic, and they are either fake-user-unitary or target-item-agnostic, which reduces the effectiveness of promotion attacks. To address these two limitations, we propose an improved method, Generate Targeted Attacks (GTA), which implements targeted attacks on vulnerable users, defined by user intent and sensitivity. We initialize the fake users by adding seed items, addressing their cold-start problem so that targeted attacks can be carried out. Our extensive experiments on two real-world datasets demonstrate the effectiveness of GTA.
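The promotion-attack setting the abstract describes can be sketched in a few lines: an attacker crafts fake user profiles that combine a handful of seed items (so the fake users do not look like cold-start accounts) with the target item, then injects them into the victim's training data. This is a minimal generic sketch, not the paper's actual GTA method; all names, the seed-selection heuristic, and the data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy implicit-feedback matrix standing in for the victim's training data:
# rows are users, columns are items, 1 = observed interaction.
n_users, n_items = 100, 50
interactions = (rng.random((n_users, n_items)) < 0.1).astype(int)

target_item = 7     # unpopular item the attacker wants promoted
n_fake_users = 5    # attack budget: number of injected fake profiles
n_seed_items = 3    # seed items per fake profile

# Illustrative seed choice: pick the globally most popular items so the
# fake profiles look plausible and avoid the fake-user cold-start problem.
popularity = interactions.sum(axis=0)
seed_items = np.argsort(popularity)[-n_seed_items:]

fake_profiles = np.zeros((n_fake_users, n_items), dtype=int)
fake_profiles[:, seed_items] = 1    # seed interactions
fake_profiles[:, target_item] = 1   # every fake user "likes" the target

# Poisoned dataset the victim recommender would unknowingly be trained on.
poisoned = np.vstack([interactions, fake_profiles])
print(poisoned.shape)  # (105, 50)
```

A targeted variant in the spirit of GTA would additionally restrict which real users the attack aims at (e.g. by intent and sensitivity) when choosing seed items, rather than using global popularity as above.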