Beyond Gradients: Exploiting Adversarial Priors in Model Inversion Attacks

Impact Factor: 3.0 | CAS Zone 4 (Computer Science) | JCR Q2, COMPUTER SCIENCE, INFORMATION SYSTEMS
Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis
{"title":"Beyond Gradients: Exploiting Adversarial Priors in Model Inversion Attacks","authors":"Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis","doi":"https://dl.acm.org/doi/10.1145/3592800","DOIUrl":null,"url":null,"abstract":"<p>Collaborative machine learning settings such as federated learning can be susceptible to adversarial interference and attacks. One class of such attacks is termed <i>model inversion attacks</i>, characterised by the adversary reverse-engineering the model into disclosing the training data. Previous implementations of this attack typically <i>only</i> rely on the shared data representations, ignoring the adversarial priors, or require that specific layers are present in the target model, reducing the potential attack surface. In this work, we propose a novel context-agnostic model inversion framework that builds on the foundations of gradient-based inversion attacks, but additionally exploits the features and the style of the data controlled by an in-the-network adversary. Our technique outperforms existing gradient-based approaches both qualitatively and quantitatively across all training settings, showing particular effectiveness against the collaborative medical imaging tasks. Finally, we demonstrate that our method achieves significant success on two downstream tasks: sensitive feature inference and facial recognition spoofing.</p>","PeriodicalId":56050,"journal":{"name":"ACM Transactions on Privacy and Security","volume":"25 1","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2023-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Privacy and Security","FirstCategoryId":"94","ListUrlMain":"https://doi.org/https://dl.acm.org/doi/10.1145/3592800","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
引用次数: 0

Abstract

Collaborative machine learning settings such as federated learning can be susceptible to adversarial interference and attacks. One class of such attacks is termed model inversion attacks, characterised by the adversary reverse-engineering the model into disclosing the training data. Previous implementations of this attack typically rely only on the shared data representations, ignoring the adversarial priors, or require that specific layers are present in the target model, reducing the potential attack surface. In this work, we propose a novel context-agnostic model inversion framework that builds on the foundations of gradient-based inversion attacks, but additionally exploits the features and the style of the data controlled by an in-the-network adversary. Our technique outperforms existing gradient-based approaches both qualitatively and quantitatively across all training settings, showing particular effectiveness on collaborative medical imaging tasks. Finally, we demonstrate that our method achieves significant success on two downstream tasks: sensitive feature inference and facial recognition spoofing.
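For orientation, the sketch below shows the gradient-matching core that gradient-based inversion attacks of this family build on (in the spirit of "Deep Leakage from Gradients"): dummy inputs and labels are optimised until the gradients they induce match the gradients shared during training, with an extra regularisation slot where an adversarial prior can be plugged in. This is a minimal illustration, not the paper's implementation; the function name, the `prior_weight` parameter, and the use of a simple total-variation term as a stand-in for the feature/style priors the paper exploits are all assumptions for illustration.

```python
# Minimal gradient-matching inversion sketch (illustrative only, not the
# authors' method). Given gradients observed from a victim's training step,
# optimise a dummy input/label pair to reproduce them, plus a prior term.
import torch
import torch.nn.functional as F

def invert_from_gradients(model, target_grads, input_shape, num_classes,
                          steps=300, lr=0.1, prior_weight=0.01):
    # Dummy input and soft label, both optimised to reproduce target_grads.
    x = torch.randn(1, *input_shape, requires_grad=True)
    y = torch.randn(1, num_classes, requires_grad=True)
    opt = torch.optim.Adam([x, y], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y.softmax(dim=-1))
        # create_graph=True lets us backpropagate through the gradients.
        grads = torch.autograd.grad(loss, model.parameters(),
                                    create_graph=True)
        # Gradient-matching objective: L2 distance to the observed gradients.
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        # Hypothetical prior term standing in for the feature/style knowledge
        # an in-the-network adversary might hold; here, total variation.
        prior = (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
                (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
        total = match + prior_weight * prior
        total.backward()  # model weights accumulate grads but are never stepped
        opt.step()
    return x.detach(), y.detach()
```

In a federated setting, `target_grads` would be the parameter gradients (or update deltas) a participant shares for one training step; reconstruction quality depends heavily on the batch size and on how informative the prior term is, which is the gap the paper's adversarial priors aim to close.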

Source Journal

ACM Transactions on Privacy and Security
Category: Computer Science - General Computer Science
CiteScore: 5.20
Self-citation rate: 0.00%
Articles per year: 52
Journal description: ACM Transactions on Privacy and Security (TOPS) (formerly known as TISSEC) publishes high-quality research results in the fields of information and system security and privacy. Studies addressing all aspects of these fields are welcomed, ranging from technologies, to systems and applications, to the crafting of policies.