Holistic Implicit Factor Evaluation of Model Extraction Attacks

IF 7.0 · CAS Zone 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE)
Anli Yan, Hongyang Yan, Li Hu, Xiaozhang Liu, Teng Huang
{"title":"Holistic Implicit Factor Evaluation of Model Extraction Attacks","authors":"Anli Yan, Hongyang Yan, Li Hu, Xiaozhang Liu, Teng Huang","doi":"10.1109/tdsc.2022.3231271","DOIUrl":null,"url":null,"abstract":"Model extraction attacks (MEAs) allow adversaries to replicate a surrogate model analogous to the target model's decision pattern. While several attacks and defenses have been studied in-depth, the underlying reasons behind our susceptibility to them often remain unclear. Analyzing these implication influence factors helps to promote secure deep learning (DL) systems, it requires studying extraction attacks in various scenarios to determine the success of different attacks and the hallmarks of DLs. However, understanding, implementing, and evaluating even a single attack requires extremely high technical effort, making it impractical to study the vast number of unique extraction attack scenarios. To this end, we present a first-of-its-kind holistic evaluation of implication factors for MEAs which relies on the attack process abstracted from state-of-the-art MEAs. Specifically, we concentrate on four perspectives. we consider the impact of the task accuracy, model architecture, and robustness of the target model on MEAs, as well as the impact of the model architecture of the surrogate model on MEAs. Our empirical evaluation includes an ablation study over sixteen model architectures and four image datasets. Surprisingly, our study shows that improving the robustness of the target model via adversarial training is more vulnerable to model extraction attacks.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":"127 2","pages":"0"},"PeriodicalIF":7.0000,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Dependable and Secure Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/tdsc.2022.3231271","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Model extraction attacks (MEAs) allow adversaries to train a surrogate model that replicates the target model's decision pattern. While several attacks and defenses have been studied in depth, the underlying reasons behind our susceptibility to them often remain unclear. Analyzing these implicit influence factors helps promote secure deep learning (DL) systems, but it requires studying extraction attacks in various scenarios to determine what makes different attacks succeed and which hallmarks of DL models matter. However, understanding, implementing, and evaluating even a single attack requires extremely high technical effort, making it impractical to study the vast number of unique extraction attack scenarios. To this end, we present a first-of-its-kind holistic evaluation of implicit factors for MEAs, which relies on the attack process abstracted from state-of-the-art MEAs. Specifically, we concentrate on four perspectives: the impact of the task accuracy, model architecture, and robustness of the target model on MEAs, as well as the impact of the model architecture of the surrogate model on MEAs. Our empirical evaluation includes an ablation study over sixteen model architectures and four image datasets. Surprisingly, our study shows that a target model made more robust via adversarial training is more vulnerable to model extraction attacks.
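The attack process abstracted from state-of-the-art MEAs boils down to a query-and-fit loop: the adversary queries the black-box target on attacker-chosen inputs, records its predicted labels, and trains a surrogate to mimic them. Below is a minimal PyTorch sketch of that generic loop for orientation only; target_model, surrogate, and query_loader are hypothetical placeholders, not the paper's actual attack code.

    # Minimal sketch of the generic MEA loop: query the target model,
    # collect its predicted labels, and fit a surrogate to them.
    # Placeholder names; illustrative, not the paper's implementation.
    import torch
    import torch.nn.functional as F

    def extract_surrogate(target_model, surrogate, query_loader,
                          epochs=10, lr=1e-3, device="cpu"):
        """Train `surrogate` to mimic `target_model` on the attacker's query set."""
        target_model = target_model.to(device).eval()
        surrogate = surrogate.to(device).train()
        optimizer = torch.optim.Adam(surrogate.parameters(), lr=lr)

        for _ in range(epochs):
            for inputs, _ in query_loader:        # ground-truth labels are never used
                inputs = inputs.to(device)
                with torch.no_grad():             # black-box access: outputs only
                    pseudo_labels = target_model(inputs).argmax(dim=1)
                optimizer.zero_grad()
                loss = F.cross_entropy(surrogate(inputs), pseudo_labels)
                loss.backward()
                optimizer.step()
        return surrogate

Under this abstraction, the factors the paper studies (target accuracy, target architecture, adversarial robustness, and surrogate architecture) enter only through target_model, surrogate, and the query data, which is what makes a holistic ablation over many attack scenarios tractable.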
Source journal
IEEE Transactions on Dependable and Secure Computing
Category: Engineering & Technology - Computer Science: Software Engineering
CiteScore: 11.20
Self-citation rate: 5.50%
Articles per year: 354
Review time: 9 months
Journal description: The "IEEE Transactions on Dependable and Secure Computing (TDSC)" is a prestigious journal that publishes high-quality, peer-reviewed research in the field of computer science, specifically targeting the development of dependable and secure computing systems and networks. This journal is dedicated to exploring the fundamental principles, methodologies, and mechanisms that enable the design, modeling, and evaluation of systems that meet the required levels of reliability, security, and performance. The scope of TDSC includes research on measurement, modeling, and simulation techniques that contribute to the understanding and improvement of system performance under various constraints. It also covers the foundations necessary for the joint evaluation, verification, and design of systems that balance performance, security, and dependability. By publishing archival research results, TDSC aims to provide a valuable resource for researchers, engineers, and practitioners working in the areas of cybersecurity, fault tolerance, and system reliability. The journal's focus on cutting-edge research ensures that it remains at the forefront of advancements in the field, promoting the development of technologies that are critical for the functioning of modern, complex systems.