Personalized privacy in OSNs: Evaluating deep learning models for context-aware image editing

Impact Factor: 4.3
Gelareh Hasel Mehri, Georgi Kostov, Bernardo Breve, Andrei Jalba, Nicola Zannone
DOI: 10.1016/j.iswa.2025.200581
Journal: Intelligent Systems with Applications, Volume 28, Article 200581
Published: 2025-09-25 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S2667305325001073
Citations: 0

Abstract

Online Social Networks (OSNs) have become a cornerstone of digital interaction, enabling users to easily create and share content. While these platforms offer numerous benefits, they also expose users to privacy risks such as cyberstalking and identity theft. To address these concerns, OSNs typically provide access control mechanisms that allow users to regulate content visibility. However, these mechanisms often assume that content is managed by individual users and focus primarily on preserving content integrity, which may discourage users from sharing sensitive information. In this work, we propose a privacy model that empowers users to conceal sensitive content in images according to their preferences, expressed by means of policies. Our approach employs a multi-stage pipeline that includes segmentation for object localization, scene graphs and distance metrics for determining object ownership, and inpainting techniques for editing. We investigate the use of advanced deep learning models to implement the privacy model, aiming to provide personalized privacy controls while maintaining high image fidelity. To evaluate the proposed model, we conducted a user study with 20 participants. The user study highlights that ownership is the most significant factor influencing user perceptions of policy enforcement compliance, with less impact from localization and editing. The results also reveal that participants are generally willing to adopt the fully automated privacy model for selectively editing images in OSNs based on viewer identity, although some prefer alternative use cases, such as editing or censorship tools. Participants also raised concerns about the potential misuse of the model, supporting our choice of excluding an option for object replacement.
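The pipeline described above decides, per viewer, which objects in an image must be concealed: segmentation localizes objects, scene graphs and distance metrics assign each object an owner, and the owner's policy determines whether the region is inpainted for a given viewer. The following is a minimal sketch of that policy-matching step only; the data structures (`DetectedObject`, `Policy`) and function names are hypothetical illustrations, not the paper's actual implementation, and the deep segmentation, scene-graph, and inpainting models are stubbed out entirely.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str            # object category from segmentation, e.g. "license_plate"
    owner: str            # user assigned ownership via scene graph / distance metrics
    bbox: tuple           # (x, y, w, h) region to pass to the inpainting stage

@dataclass
class Policy:
    owner: str            # whose privacy preference this policy encodes
    hidden_labels: set    # object categories the owner wants concealed
    denied_viewers: set   # viewers from whom those objects must be hidden

def objects_to_conceal(objects, policies, viewer):
    """Select objects whose owner's policy hides them from this viewer.
    In the full system, the returned regions would be inpainted rather
    than replaced, matching the paper's choice to exclude object replacement."""
    to_hide = []
    for obj in objects:
        for policy in policies:
            if (policy.owner == obj.owner
                    and obj.label in policy.hidden_labels
                    and viewer in policy.denied_viewers):
                to_hide.append(obj)
                break
    return to_hide
```

A usage example: if Alice's policy hides license plates from Bob, `objects_to_conceal` returns the plate's region when Bob is the viewer and an empty list for anyone else, so the same photo is edited differently per viewer identity.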
Journal metrics: CiteScore 5.60; self-citation rate 0.00%.