Leveraging Partial Model Extractions using Uncertainty Quantification

Arne Aarts, Wil Michiels, Peter Roelse
{"title":"利用不确定性量化的部分模型提取","authors":"Arne Aarts, Wil Michiels, Peter Roelse","doi":"10.1109/CloudNet53349.2021.9657130","DOIUrl":null,"url":null,"abstract":"Companies deploy deep learning models in the cloud and offer black-box access to them as a pay as you go service. It has been shown that with enough queries those models can be extracted. This paper presents a new cloning scheme using uncertainty quantification, enabling the adversary to leverage partial model extractions. First, a relatively small number of queries is spent to extract part of the target’s model. Second, for every query directed at the adversary, the uncertainty of the output of the extracted model is computed; when below a given threshold, the adversary will return the output. Otherwise, the query is delegated to the target’s model and its output returned. In this way the adversary is able to monetize knowledge that has successfully been extracted. We propose methods to determine thresholds such that the accuracy of the new scheme is close to the target network’s accuracy. The new scheme has been implemented, and experiments were conducted on the Caltech-256 and indoor datasets using multiple uncertainty quantification methods. The results show that the rate of delegation decreases logarithmically with the initial number of queries spent on extraction. Compared to conventional cloning techniques, the main advantages of the new scheme are that the total costs in terms of queries to the target model can be lower while achieving the same accuracy, and that the accuracy of the new scheme can be arbitrarily close to the target model’s accuracy by selecting a suitable value of the threshold.","PeriodicalId":369247,"journal":{"name":"2021 IEEE 10th International Conference on Cloud Networking (CloudNet)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Leveraging Partial Model Extractions using Uncertainty Quantification\",\"authors\":\"Arne Aarts, Wil Michiels, Peter Roelse\",\"doi\":\"10.1109/CloudNet53349.2021.9657130\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Companies deploy deep learning models in the cloud and offer black-box access to them as a pay as you go service. It has been shown that with enough queries those models can be extracted. This paper presents a new cloning scheme using uncertainty quantification, enabling the adversary to leverage partial model extractions. First, a relatively small number of queries is spent to extract part of the target’s model. Second, for every query directed at the adversary, the uncertainty of the output of the extracted model is computed; when below a given threshold, the adversary will return the output. Otherwise, the query is delegated to the target’s model and its output returned. In this way the adversary is able to monetize knowledge that has successfully been extracted. We propose methods to determine thresholds such that the accuracy of the new scheme is close to the target network’s accuracy. The new scheme has been implemented, and experiments were conducted on the Caltech-256 and indoor datasets using multiple uncertainty quantification methods. The results show that the rate of delegation decreases logarithmically with the initial number of queries spent on extraction. 
Compared to conventional cloning techniques, the main advantages of the new scheme are that the total costs in terms of queries to the target model can be lower while achieving the same accuracy, and that the accuracy of the new scheme can be arbitrarily close to the target model’s accuracy by selecting a suitable value of the threshold.\",\"PeriodicalId\":369247,\"journal\":{\"name\":\"2021 IEEE 10th International Conference on Cloud Networking (CloudNet)\",\"volume\":\"32 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-11-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE 10th International Conference on Cloud Networking (CloudNet)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CloudNet53349.2021.9657130\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 10th International Conference on Cloud Networking (CloudNet)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CloudNet53349.2021.9657130","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Companies deploy deep learning models in the cloud and offer black-box access to them as a pay as you go service. It has been shown that with enough queries those models can be extracted. This paper presents a new cloning scheme using uncertainty quantification, enabling the adversary to leverage partial model extractions. First, a relatively small number of queries is spent to extract part of the target’s model. Second, for every query directed at the adversary, the uncertainty of the output of the extracted model is computed; when below a given threshold, the adversary will return the output. Otherwise, the query is delegated to the target’s model and its output returned. In this way the adversary is able to monetize knowledge that has successfully been extracted. We propose methods to determine thresholds such that the accuracy of the new scheme is close to the target network’s accuracy. The new scheme has been implemented, and experiments were conducted on the Caltech-256 and indoor datasets using multiple uncertainty quantification methods. The results show that the rate of delegation decreases logarithmically with the initial number of queries spent on extraction. Compared to conventional cloning techniques, the main advantages of the new scheme are that the total costs in terms of queries to the target model can be lower while achieving the same accuracy, and that the accuracy of the new scheme can be arbitrarily close to the target model’s accuracy by selecting a suitable value of the threshold.
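To make the delegation step described above concrete, the following is a minimal sketch of the idea, assuming predictive entropy of the softmax output as the uncertainty measure. The names `extracted_model` and `query_target_model`, and the choice of entropy, are illustrative assumptions rather than the paper's exact method (the paper evaluates multiple uncertainty quantification methods).

```python
# Sketch of threshold-based delegation between an extracted (local) model and
# the target model. Assumption: uncertainty is measured as softmax entropy.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def predictive_entropy(probs: np.ndarray) -> float:
    # Entropy of the class distribution; higher means more uncertain.
    return float(-(probs * np.log(probs + 1e-12)).sum())

def answer_query(x, extracted_model, query_target_model, threshold: float):
    """Return a prediction for query x, delegating to the target model
    only when the extracted model is too uncertain."""
    probs = softmax(extracted_model(x))        # output of the extracted model
    if predictive_entropy(probs) < threshold:  # confident enough: answer locally
        return probs, False                    # False: no target query spent
    return query_target_model(x), True         # delegate; True: one target query spent
```

Under such a scheme, every query answered locally avoids one query to the target model, which is how the initial extraction cost can be amortized; the threshold trades the delegation rate against fidelity to the target model's accuracy.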