A Survey on Neural Trojans

Yuntao Liu, Ankit Mondal, Abhishek Chakraborty, Michael Zuzak, Nina L. Jacobsen, Daniel Xing, Ankur Srivastava
{"title":"神经木马调查","authors":"Yuntao Liu, Ankit Mondal, Abhishek Chakraborty, Michael Zuzak, Nina L. Jacobsen, Daniel Xing, Ankur Srivastava","doi":"10.1109/ISQED48828.2020.9137011","DOIUrl":null,"url":null,"abstract":"Neural networks have become increasingly prevalent in many real-world applications including security critical ones. Due to the high hardware requirement and time consumption to train high-performance neural network models, users often outsource training to a machine-learning-as-a-service (MLaaS) provider. This puts the integrity of the trained model at risk. In 2017, Liu et al. found that, by mixing the training data with a few malicious samples of a certain trigger pattern, hidden functionality can be embedded in the trained network which can be evoked by the trigger pattern [33]. We refer to this kind of hidden malicious functionality as neural Trojans. In this paper, we survey a myriad of neural Trojan attack and defense techniques that have been proposed over the last few years. In a neural Trojan insertion attack, the attacker can be the MLaaS provider itself or a third party capable of adding or tampering with training data. In most research on attacks, the attacker selects the Trojan's functionality and a set of input patterns that will trigger the Trojan. Training data poisoning is the most common way to make the neural network acquire the Trojan functionality. Trojan embedding methods that modify the training algorithm or directly interfere with the neural network's execution at the binary level have also been studied. Defense techniques include detecting neural Trojans in the model and/or Trojan trigger patterns, erasing the Trojan's functionality from the neural network model, and bypassing the Trojan. It was also shown that carefully crafted neural Trojans can be used to mitigate other types of attacks. We systematize the above attack and defense approaches in this paper.","PeriodicalId":225828,"journal":{"name":"2020 21st International Symposium on Quality Electronic Design (ISQED)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"56","resultStr":"{\"title\":\"A Survey on Neural Trojans\",\"authors\":\"Yuntao Liu, Ankit Mondal, Abhishek Chakraborty, Michael Zuzak, Nina L. Jacobsen, Daniel Xing, Ankur Srivastava\",\"doi\":\"10.1109/ISQED48828.2020.9137011\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Neural networks have become increasingly prevalent in many real-world applications including security critical ones. Due to the high hardware requirement and time consumption to train high-performance neural network models, users often outsource training to a machine-learning-as-a-service (MLaaS) provider. This puts the integrity of the trained model at risk. In 2017, Liu et al. found that, by mixing the training data with a few malicious samples of a certain trigger pattern, hidden functionality can be embedded in the trained network which can be evoked by the trigger pattern [33]. We refer to this kind of hidden malicious functionality as neural Trojans. In this paper, we survey a myriad of neural Trojan attack and defense techniques that have been proposed over the last few years. In a neural Trojan insertion attack, the attacker can be the MLaaS provider itself or a third party capable of adding or tampering with training data. 
In most research on attacks, the attacker selects the Trojan's functionality and a set of input patterns that will trigger the Trojan. Training data poisoning is the most common way to make the neural network acquire the Trojan functionality. Trojan embedding methods that modify the training algorithm or directly interfere with the neural network's execution at the binary level have also been studied. Defense techniques include detecting neural Trojans in the model and/or Trojan trigger patterns, erasing the Trojan's functionality from the neural network model, and bypassing the Trojan. It was also shown that carefully crafted neural Trojans can be used to mitigate other types of attacks. We systematize the above attack and defense approaches in this paper.\",\"PeriodicalId\":225828,\"journal\":{\"name\":\"2020 21st International Symposium on Quality Electronic Design (ISQED)\",\"volume\":\"20 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"56\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 21st International Symposium on Quality Electronic Design (ISQED)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISQED48828.2020.9137011\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 21st International Symposium on Quality Electronic Design (ISQED)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISQED48828.2020.9137011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 56

Abstract

Neural networks have become increasingly prevalent in many real-world applications, including security-critical ones. Because training high-performance neural network models demands substantial hardware and time, users often outsource training to a machine-learning-as-a-service (MLaaS) provider. This puts the integrity of the trained model at risk. In 2017, Liu et al. found that, by mixing the training data with a few malicious samples carrying a certain trigger pattern, hidden functionality can be embedded in the trained network and later evoked by that trigger pattern [33]. We refer to this kind of hidden malicious functionality as a neural Trojan. In this paper, we survey the neural Trojan attack and defense techniques that have been proposed over the last few years. In a neural Trojan insertion attack, the attacker can be the MLaaS provider itself or a third party capable of adding or tampering with training data. In most research on attacks, the attacker selects the Trojan's functionality and a set of input patterns that will trigger the Trojan. Training data poisoning is the most common way to make the neural network acquire the Trojan functionality. Trojan embedding methods that modify the training algorithm or directly interfere with the neural network's execution at the binary level have also been studied. Defense techniques include detecting neural Trojans in the model and/or Trojan trigger patterns, erasing the Trojan's functionality from the neural network model, and bypassing the Trojan. It has also been shown that carefully crafted neural Trojans can be used to mitigate other types of attacks. We systematize the above attack and defense approaches in this paper.
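
To make the training-data-poisoning mechanism described in the abstract concrete, the sketch below stamps a small trigger pattern onto a fraction of the training images and relabels them to an attacker-chosen target class. This is a minimal illustration under assumed parameters, not the procedure of any specific paper surveyed here: the `poison_dataset` function, the 3x3 corner patch, the 5% poisoning rate, and the target label are all hypothetical choices made for the example.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05, seed=0):
    """Illustrative trigger-pattern data poisoning (hypothetical parameters).

    images: float array of shape (N, H, W) with values in [0, 1]
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()

    # Poison a small fraction of the training set.
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # The "trigger": a 3x3 bright patch in the bottom-right corner.
    images[idx, -3:, -3:] = 1.0
    # Relabel the poisoned samples to the Trojan's target class, so the
    # trained model associates the trigger with that class.
    labels[idx] = target_label
    return images, labels


if __name__ == "__main__":
    # Random stand-in data with MNIST-like shapes, for demonstration only.
    x = np.random.rand(1000, 28, 28).astype(np.float32)
    y = np.random.randint(0, 10, size=1000)
    x_poisoned, y_poisoned = poison_dataset(x, y, target_label=7)
    print("relabeled samples:", int(np.sum(y != y_poisoned)))
```

A model trained on such a poisoned set would, in the attack scenario the abstract describes, behave normally on clean inputs but predict the target class whenever the trigger patch is present.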