Image reconstruction attacks on distributed machine learning models

Hadjer Benkraouda, K. Nahrstedt
{"title":"Image reconstruction attacks on distributed machine learning models","authors":"Hadjer Benkraouda, K. Nahrstedt","doi":"10.1145/3488659.3493779","DOIUrl":null,"url":null,"abstract":"Recent developments in Deep Neural Networks have resulted in their wide deployment for services around many aspects of human life, including security critical domains that handle sensitive data. Congruently, we have seen a proliferation of IoT devices with limited resources. Together, these two trends have led to the distribution of data analysis, processing, and decision making between edge devices and third parties such as cloud services. In this work we assess the security of the previously proposed distributed machine learning (ML) schemes by analyzing the information leaked from the output of the edge devices, i.e. the intermediate representation (IR). We particularly look at a Deep Neural Network that is used for video/image classification and tackle the problem of image/frame reconstruction from the output of the edge device. Our work focuses on assessing whether the proposed scheme of partitioned enclave execution is secure against chosen-image attacks (CIA). Given the attacker has the capability of querying the model under attack (victim model) to create image-IR pairs, can the attacker reconstruct the private input images? In this work we show that it is possible to carry out a black-box reconstruction attack by training a CNN based encoder-decoder architecture (reconstruction model) using image-IR pairs. Our tests show that the proposed reconstruction model achieves a 70% similarity between the original image and the reconstructed image.","PeriodicalId":343000,"journal":{"name":"Proceedings of the 2nd ACM International Workshop on Distributed Machine Learning","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2nd ACM International Workshop on Distributed Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3488659.3493779","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Recent developments in Deep Neural Networks have resulted in their wide deployment in services that touch many aspects of human life, including security-critical domains that handle sensitive data. Concurrently, we have seen a proliferation of IoT devices with limited resources. Together, these two trends have led to data analysis, processing, and decision making being distributed between edge devices and third parties such as cloud services. In this work we assess the security of previously proposed distributed machine learning (ML) schemes by analyzing the information leaked by the output of the edge device, i.e., the intermediate representation (IR). We look in particular at a Deep Neural Network used for video/image classification and tackle the problem of reconstructing images/frames from the edge device's output. Our work focuses on assessing whether the proposed scheme of partitioned enclave execution is secure against chosen-image attacks (CIA): given that the attacker can query the model under attack (the victim model) to create image-IR pairs, can the attacker reconstruct the private input images? We show that it is possible to carry out a black-box reconstruction attack by training a CNN-based encoder-decoder architecture (the reconstruction model) on image-IR pairs. Our tests show that the proposed reconstruction model achieves 70% similarity between the original image and the reconstructed image.
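The attack pipeline the abstract describes lends itself to a compact illustration. The PyTorch sketch below is a hypothetical rendering of the chosen-image attack, not the paper's implementation: the `VictimFront` stand-in for the edge-side model partition, the decoder architecture, the MSE objective, and all shapes and hyperparameters are assumptions made for this example.

```python
# Minimal sketch of a black-box IR-reconstruction attack, assuming the
# attacker can query the victim model up to its partition point and
# observe the intermediate representation (IR). All architectures and
# hyperparameters here are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn

class VictimFront(nn.Module):
    """Stand-in for the edge-side half of the victim model (query-only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)  # IR with shape (B, 64, H/4, W/4)

class ReconstructionDecoder(nn.Module):
    """Attacker's decoder: maps the leaked IR back to image space."""
    def __init__(self):
        super().__init__()
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, ir):
        return self.deconv(ir)

victim = VictimFront().eval()   # black box: the attacker only calls forward()
decoder = ReconstructionDecoder()
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Chosen-image attack loop: query the victim with attacker-chosen images,
# then fit the decoder on the resulting image-IR pairs.
for step in range(200):
    chosen = torch.rand(16, 3, 64, 64)   # placeholder for chosen query images
    with torch.no_grad():
        ir = victim(chosen)              # IR leaked by the edge device
    recon = decoder(ir)
    loss = loss_fn(recon, chosen)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In practice the attacker would query with natural images from a surrogate dataset rather than noise, and reconstruction quality would be scored with an image-similarity metric; the abstract reports 70% similarity but does not name the metric here.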