{"title":"Pivoting Image-based Profiles Toward Privacy: Inhibiting Malicious Profiling with Adversarial Additions","authors":"Zhuoran Liu, Zhengyu Zhao, M. Larson","doi":"10.1145/3450613.3456832","DOIUrl":null,"url":null,"abstract":"Users build up profiles online consisting of items that they have shared or interacted with. In this work, we look at profiles that consist of images. We address the issue of privacy-sensitive information being automatically inferred from these user profiles, against users’ will and best interest. We introduce the concept of a privacy pivot, which is a strategic change that users can make in their sharing that will inhibit malicious profiling. Importantly, the pivot helps put privacy control into the hands of the users. Further, it does not require users to delete any of the existing images in their profiles, nor does it require a radical change in their sharing intentions, i.e., what they would like to communicate with their profile. Previous work has investigated adversarial images for privacy protection, but has focused on individual images. Here, we move further to study image sets comprising image profiles. We define a conceptual formulation of the challenge of the privacy pivot in the form of an “Anti-Profiling Model”. Within this model, we propose a basic pivot solution that uses adversarial additions to effectively inhibit the predictions of profilers using set-based image classification.","PeriodicalId":435674,"journal":{"name":"Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization","volume":"60 3","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3450613.3456832","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Users build up profiles online consisting of items that they have shared or interacted with. In this work, we look at profiles that consist of images. We address the issue of privacy-sensitive information being automatically inferred from these user profiles, against users’ will and best interest. We introduce the concept of a privacy pivot, which is a strategic change that users can make in their sharing that will inhibit malicious profiling. Importantly, the pivot helps put privacy control into the hands of the users. Further, it does not require users to delete any of the existing images in their profiles, nor does it require a radical change in their sharing intentions, i.e., what they would like to communicate with their profile. Previous work has investigated adversarial images for privacy protection, but has focused on individual images. Here, we go further and study the image sets that make up image profiles. We define a conceptual formulation of the challenge of the privacy pivot in the form of an “Anti-Profiling Model”. Within this model, we propose a basic pivot solution that uses adversarial additions to effectively inhibit the predictions of profilers that use set-based image classification.
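The abstract does not spell out an implementation, so the following is only a minimal sketch of the general idea, not the authors' method: it assumes a set-based profiler that mean-pools per-image features before classifying, and a pivot that appends a few optimized "addition" images to an otherwise unchanged profile so the profiler's confidence in a sensitive attribute drops. The toy architecture, function names, and hyperparameters are all illustrative assumptions.

```python
# Hypothetical sketch (not the paper's implementation) of adversarial additions
# against a set-based image profiler. Assumptions: the profiler mean-pools
# per-image features; the pivot only appends new images, never edits old ones.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SetProfiler(nn.Module):
    """Toy set-based classifier: encode each image, mean-pool, classify the set."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (set_size, 3, H, W) -> one prediction for the whole profile
        feats = self.encoder(images)               # (set_size, 32)
        pooled = feats.mean(dim=0, keepdim=True)   # permutation-invariant pooling
        return self.head(pooled)                   # (1, num_classes)


def adversarial_additions(profiler: nn.Module,
                          profile: torch.Tensor,
                          sensitive_class: int,
                          num_added: int = 2,
                          steps: int = 100,
                          lr: float = 0.05) -> torch.Tensor:
    """Optimize `num_added` extra images so that, once appended to the
    unchanged profile, the profiler's confidence in `sensitive_class` drops."""
    added = torch.rand(num_added, *profile.shape[1:], requires_grad=True)
    opt = torch.optim.Adam([added], lr=lr)
    target = torch.tensor([sensitive_class])
    for _ in range(steps):
        opt.zero_grad()
        logits = profiler(torch.cat([profile, added.clamp(0, 1)], dim=0))
        # Gradient ascent on the profiler's loss for the sensitive class,
        # i.e. push the set-level prediction away from it.
        loss = -F.cross_entropy(logits, target)
        loss.backward()
        opt.step()
    return added.detach().clamp(0, 1)


if __name__ == "__main__":
    profiler = SetProfiler(num_classes=2).eval()
    profile = torch.rand(8, 3, 64, 64)             # an 8-image user profile
    before = profiler(profile).softmax(-1)
    added = adversarial_additions(profiler, profile, sensitive_class=1)
    after = profiler(torch.cat([profile, added], dim=0)).softmax(-1)
    print("sensitive-class confidence before/after pivot:",
          before[0, 1].item(), after[0, 1].item())
```

Note that the sketch respects the constraint described in the abstract: the existing profile images are never modified or deleted; only new images are added, and those additions are what inhibit the set-level prediction.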