DarkneTZ: towards model privacy at the edge using trusted execution environments

Fan Mo, Ali Shahin Shamsabadi, Kleomenis Katevas, Soteris Demetriou, Ilias Leontiadis, Andrea Cavallaro, Hamed Haddadi
{"title":"DarkneTZ: towards model privacy at the edge using trusted execution environments","authors":"Fan Mo, A. Shamsabadi, Kleomenis Katevas, Soteris Demetriou, Ilias Leontiadis, A. Cavallaro, H. Haddadi","doi":"10.1145/3386901.3388946","DOIUrl":null,"url":null,"abstract":"We present DarkneTZ, a framework that uses an edge device's Trusted Execution Environment (TEE) in conjunction with model partitioning to limit the attack surface against Deep Neural Networks (DNNs). Increasingly, edge devices (smartphones and consumer IoT devices) are equipped with pre-trained DNNs for a variety of applications. This trend comes with privacy risks as models can leak information about their training data through effective membership inference attacks (MIAs). We evaluate the performance of DarkneTZ, including CPU execution time, memory usage, and accurate power consumption, using two small and six large image classification models. Due to the limited memory of the edge device's TEE, we partition model layers into more sensitive layers (to be executed inside the device TEE), and a set of layers to be executed in the untrusted part of the operating system. Our results show that even if a single layer is hidden, we can provide reliable model privacy and defend against state of the art MIAs, with only 3% performance overhead. When fully utilizing the TEE, DarkneTZ provides model protections with up to 10% overhead.","PeriodicalId":345029,"journal":{"name":"Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"102","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3386901.3388946","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 102

Abstract

We present DarkneTZ, a framework that uses an edge device's Trusted Execution Environment (TEE) in conjunction with model partitioning to limit the attack surface against Deep Neural Networks (DNNs). Increasingly, edge devices (smartphones and consumer IoT devices) are equipped with pre-trained DNNs for a variety of applications. This trend comes with privacy risks, as models can leak information about their training data through effective membership inference attacks (MIAs). We evaluate the performance of DarkneTZ, including CPU execution time, memory usage, and accurate power consumption, using two small and six large image classification models. Because the edge device's TEE has limited memory, we partition the model's layers into a set of more sensitive layers (executed inside the device's TEE) and a set of layers executed in the untrusted part of the operating system. Our results show that even if only a single layer is hidden, we can provide reliable model privacy and defend against state-of-the-art MIAs, with only a 3% performance overhead. When fully utilizing the TEE, DarkneTZ provides model protection with up to 10% overhead.
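The layer split the abstract describes is straightforward to sketch. Below is a minimal, illustrative Python sketch of the partitioning idea: layers below the split index run in the untrusted OS, and layers at or above it run in the trusted side. The PartitionedModel class, the toy MLP, and the _run_in_tee stub are hypothetical stand-ins introduced here for illustration; in the paper the trusted layers execute inside an ARM TrustZone TEE (e.g., via an OP-TEE world switch), not a local function call.

```python
# Illustrative sketch of DarkneTZ-style layer partitioning (not the
# paper's actual API). The TEE boundary is simulated by a method call.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def dense(x, w, b):
    return x @ w + b

class PartitionedModel:
    """Runs layers [0, split) in the untrusted OS ("normal world")
    and layers [split, n) inside the TEE ("secure world")."""

    def __init__(self, layers, split):
        self.normal_world = layers[:split]   # visible to the OS
        self.secure_world = layers[split:]   # only ever run in the TEE

    def forward(self, x):
        # Untrusted part: runs as an ordinary process.
        for w, b in self.normal_world:
            x = relu(dense(x, w, b))
        return self._run_in_tee(x)

    def _run_in_tee(self, x):
        # Placeholder for the world switch into the TEE; here the
        # trusted layers just run locally to show the data flow.
        *hidden, (w_out, b_out) = self.secure_world
        for w, b in hidden:
            x = relu(dense(x, w, b))
        return dense(x, w_out, b_out)  # logits; softmax omitted

rng = np.random.default_rng(0)
dims = [8, 16, 16, 4]  # toy 3-layer MLP
layers = [(rng.standard_normal((i, o)) * 0.1, np.zeros(o))
          for i, o in zip(dims, dims[1:])]

# Hide only the last layer inside the TEE, matching the paper's
# cheapest configuration (~3% overhead for a single protected layer).
model = PartitionedModel(layers, split=len(layers) - 1)
print(model.forward(rng.standard_normal((1, 8))).shape)  # (1, 4)
```

Protecting the final layers is the natural choice for this defense, since membership inference attacks typically exploit the model's output confidences; keeping those layers (and their outputs) inside the TEE denies the attacker that signal while leaving the bulk of the computation in the faster untrusted world.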