Understanding the Threats of Trojaned Quantized Neural Network in Model Supply Chains

Xudong Pan, Mi Zhang, Yifan Yan, Min Yang
DOI: 10.1145/3485832.3485881
Venue: Annual Computer Security Applications Conference (ACSAC)
Published: 2021-12-06
Citations: 8

Abstract

Deep learning with edge computing has emerged as a popular paradigm for powering edge devices with intelligence. As the size of deep neural networks (DNNs) continues to increase, model quantization, which converts a full-precision model into a lower-bit representation while largely preserving accuracy, becomes a prerequisite for deploying a well-trained DNN on resource-limited edge devices. However, properly quantizing a DNN requires substantial expert knowledge; otherwise, model accuracy can be devastatingly affected. As an alternative, recent years have witnessed the rise of third-party model supply chains that provide pretrained quantized neural networks (QNNs) for free download. In this paper, we systematically analyze the potential threats of trojaned models in third-party QNN supply chains. For the first time, we describe and implement a QUAntization-SpecIfic backdoor attack (QUASI), which manipulates the quantization mechanism to inject a backdoor specific to the quantized model. In other words, the attacker-specified inputs, or triggers, cause no misbehavior in the trojaned model at full precision; the backdoor function is automatically completed by a normal quantization operation, producing a trojaned QNN that can be triggered with a near-100% success rate. Our proposed QUASI attack reveals several key vulnerabilities in existing QNN supply chains: (i) QUASI demonstrates that a third-party QNN released online can also be injected with backdoors, and, unlike for full-precision models, there is almost no working algorithm for checking the fidelity of a QNN. (ii) More threateningly, the backdoor injected by QUASI remains inactive in the full-precision model, which prevents model consumers from attributing ongoing trojan attacks to the malicious model provider. As a practical implication, we warn that accepting and deploying third-party QNNs on edge devices is highly risky at the current stage, absent further mitigation studies.
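The core enabler of a quantization-specific backdoor is that post-training quantization is a many-to-one, boundary-sensitive mapping from full-precision weights to discrete values. The NumPy sketch below is not the paper's attack; it assumes a common symmetric per-tensor int8 scheme (which may differ from the quantizer QUASI targets) and merely illustrates the slack such an attack can hide in: a sub-half-step weight perturbation is erased by rounding, while an equally tiny perturbation near a rounding boundary shifts the quantized weight by a whole step.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor post-training quantization to int8
    (a common scheme; the paper's exact quantizer may differ)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

# Toy weight tensor; max |w| = 1.27 makes the quantization step exactly 0.01.
w_clean = np.array([1.27, 0.501, -0.208], dtype=np.float32)
q_clean, scale = quantize_int8(w_clean)

# (1) A sub-half-step perturbation is erased by rounding: the quantized
#     model is bit-identical, so this full-precision change is invisible
#     after deployment.
w_hidden = w_clean + np.array([0.0, 0.003, 0.0], dtype=np.float32)
q_hidden, _ = quantize_int8(w_hidden)
assert np.array_equal(q_clean, q_hidden)

# (2) Near a rounding boundary, an equally tiny full-precision change
#     flips the int8 value by a whole step: negligible in FP32, but a
#     discrete jump in the quantized model. This boundary sensitivity is
#     the degree of freedom a quantization-specific backdoor exploits --
#     benign behavior in full precision, different behavior after a
#     normal quantization operation.
w_below = np.array([1.27, 0.5049, -0.208], dtype=np.float32)
w_above = np.array([1.27, 0.5051, -0.208], dtype=np.float32)
q_below, _ = quantize_int8(w_below)
q_above, _ = quantize_int8(w_above)
print(int(q_below[1]), int(q_above[1]))  # 50 51: a 2e-4 FP32 change, one full int8 step apart
```

Because the mapping collapses an entire interval of full-precision values onto each int8 level, comparing a released full-precision model against a reference tells a consumer little about what its quantized counterpart will do, which is exactly the auditing gap the paper highlights.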