Personalized privacy in OSNs: Evaluating deep learning models for context-aware image editing

Gelareh Hasel Mehri, Georgi Kostov, Bernardo Breve, Andrei Jalba, Nicola Zannone

Intelligent Systems with Applications, Volume 28, Article 200581
DOI: 10.1016/j.iswa.2025.200581
Published: 2025-09-25
Citations: 0
Abstract
Online Social Networks (OSNs) have become a cornerstone of digital interaction, enabling users to easily create and share content. While these platforms offer numerous benefits, they also expose users to privacy risks such as cyberstalking and identity theft. To address these concerns, OSNs typically provide access control mechanisms that allow users to regulate content visibility. However, these mechanisms often assume that content is managed by individual users and focus primarily on preserving content integrity, which may discourage users from sharing sensitive information. In this work, we propose a privacy model that empowers users to conceal sensitive content in images according to their preferences, expressed by means of policies. Our approach employs a multi-stage pipeline that includes segmentation for object localization, scene graphs and distance metrics for determining object ownership, and inpainting techniques for editing. We investigate the use of advanced deep learning models to implement the privacy model, aiming to provide personalized privacy controls while maintaining high image fidelity. To evaluate the proposed model, we conducted a user study with 20 participants. The user study highlights that ownership is the most significant factor influencing user perceptions of policy enforcement compliance, with less impact from localization and editing. The results also reveal that participants are generally willing to adopt the fully automated privacy model for selectively editing images in OSNs based on viewer identity, although some prefer alternative use cases, such as editing or censorship tools. Participants also raised concerns about the potential misuse of the model, supporting our choice of excluding an option for object replacement.
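The pipeline the abstract describes — segment objects, attribute each object to a person via a distance metric, then select regions for inpainting based on the owner's per-viewer policy — can be sketched in simplified form. This is a minimal illustration, not the paper's implementation: the class names, the nearest-person distance heuristic, the threshold value, and the policy dictionary layout are all assumptions standing in for the segmentation, scene-graph, and inpainting models the authors evaluate.

```python
from dataclasses import dataclass
from math import dist

@dataclass(frozen=True)
class Region:
    """A segmented image region (hypothetical stand-in for a model's output)."""
    label: str            # segmentation label, e.g. "person" or "laptop"
    center: tuple         # (x, y) centroid of the segmented mask
    owner: str = ""       # for person regions: the depicted user's identity

def assign_ownership(objects, persons, max_distance=150.0):
    """Ownership stage: attribute each object to the nearest person
    within an (assumed) pixel-distance threshold."""
    ownership = {}
    for obj in objects:
        if not persons:
            continue
        nearest = min(persons, key=lambda p: dist(obj.center, p.center))
        if dist(obj.center, nearest.center) <= max_distance:
            ownership[obj] = nearest.owner
    return ownership

def regions_to_inpaint(objects, ownership, policies, viewer):
    """Policy stage: select objects whose owner's policy hides their
    label from this particular viewer; these become the inpainting mask."""
    return [o for o in objects
            if o in ownership
            and o.label in policies.get(ownership[o], {}).get(viewer, set())]

# Example: alice hides her laptop from bob, but not from carol.
alice = Region("person", (0, 0), owner="alice")
laptop = Region("laptop", (50, 10))
phone = Region("phone", (500, 500))   # too far away to be attributed to alice

ownership = assign_ownership([laptop, phone], [alice])
policies = {"alice": {"bob": {"laptop"}}}
print(regions_to_inpaint([laptop, phone], ownership, policies, "bob"))
```

In the full system, each stage would be backed by a deep model (segmentation for localization, scene graphs plus distance metrics for ownership, inpainting for editing); the viewer-dependent selection above is what makes the concealment personalized.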