OBSan: An Out-Of-Bound Sanitizer to Harden DNN Executables

Yanzuo Chen, Yuanyuan Yuan, Shuai Wang
Proceedings 2023 Network and Distributed System Security Symposium
DOI: 10.14722/ndss.2023.24103
Citations: 0

Abstract

The rapid adoption of deep neural network (DNN) models on a variety of hardware platforms has boosted the development of deep learning (DL) compilers. DL compilers take as input high-level DNN model specifications and generate optimized DNN executables for diverse hardware architectures such as CPUs and GPUs. Despite the emerging adoption of DL compilers in real-world scenarios, no solutions exist to protect DNN executables. To fill this critical gap, this paper introduces OBSan, a fast sanitizer designed to check for out-of-bound (OOB) behavior in DNN executables. Holistically, a DNN incorporates bidirectional computation: forward propagation, which predicts an output from an input, and backward propagation, which characterizes how the forward prediction is made. Both the neuron activations in forward propagation and the gradients in backward propagation should fall within valid ranges; deviations from these ranges are considered OOB. OOB is primarily related to unsafe DNN behavior, which stems from anomalous inputs and may cause mispredictions or even exploitation via adversarial examples (AEs). We thus design OBSan, which includes two variants, FOBSan and BOBSan, to detect OOB in forward and backward propagation, respectively. Each OBSan variant is designed as an extra pass in DL compilers to integrate with large-scale DNN models, and we design various optimization schemes to reduce the overhead of OBSan. Evaluations over various anomalous inputs show that OBSan manifests promising OOB detectability with low overhead. We further present two downstream applications that show how OBSan prevents online AE generation and facilitates feedback-driven fuzz testing of DNN executables.
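The range-check idea behind the forward variant can be illustrated with a minimal sketch. This is a hedged toy model, not the paper's compiler-pass implementation: the one-layer network, the `fobsan_check` helper, and the `slack` parameter are all assumptions for illustration. Valid activation bounds are profiled on trusted inputs, and any forward pass whose activations leave those bounds is flagged as OOB.

```python
import numpy as np

# Toy one-layer "DNN": 3 ReLU neurons over 4 input features.
# (Hypothetical model for illustration; OBSan instruments real compiled DNNs.)
W = np.array([[ 1.0, -0.5,  0.3,  0.2],
              [-0.7,  0.4,  0.6, -0.1],
              [ 0.5,  0.5, -0.2,  0.8]])

def forward(x):
    return np.maximum(W @ x, 0.0)  # ReLU activations

# Profiling phase: record valid per-neuron activation bounds
# from in-distribution (trusted) inputs.
rng = np.random.default_rng(0)
profile = np.stack([forward(rng.normal(size=4)) for _ in range(1000)])
lo, hi = profile.min(axis=0), profile.max(axis=0)

def fobsan_check(x, slack=0.0):
    """Return True (OOB detected) if any activation leaves [lo, hi]."""
    a = forward(x)
    return bool(np.any((a < lo - slack) | (a > hi + slack)))
```

An extreme input such as `np.full(4, 100.0)` drives the activations far above the profiled maxima and trips the check, while typical in-distribution inputs stay within bounds. The backward variant applies the same bounding idea to the gradients computed in backward propagation rather than to activations.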