{"title":"一种用于强化DNN可执行文件的越界杀菌剂","authors":"Yanzuo Chen, Yuanyuan Yuan, Shuai Wang","doi":"10.14722/ndss.2023.24103","DOIUrl":null,"url":null,"abstract":"—The rapid adoption of deep neural network (DNN) models on a variety of hardware platforms has boosted the development of deep learning (DL) compilers. DL compilers take as input the high-level DNN model specifications and generate optimized DNN executables for diverse hardware architectures like CPUs and GPUs. Despite the emerging adoption of DL compilers in real-world scenarios, no solutions exist to protect DNN executables. To fill this critical gap, this paper introduces OBS AN , a fast sanitizer designed to check for out-of-bound (OOB) behavior in DNN executables. Holistically, DNN incorporates bidirectional computation : forward propagation which predicts an output based on an input, and backward propagation which characterizes how the forward prediction is made. Both the neuron activations in forward propagation and gradients in backward propagation should fall within valid ranges, and deviations from these ranges would be considered as OOB. OOB is primarily related to unsafe behavior of DNNs, which root from anomalous inputs and may cause mispredictions or even exploitation via adversarial examples (AEs). We thus design OBS AN , which includes two variants, FOBS AN and BOBS AN , to detect OOB in forward and backward propagations, respectively. Each OBS AN variant is designed as extra passes of DL compilers to integrate with large-scale DNN models, and we design various optimization schemes to reduce the overhead of OBS AN . Evaluations over various anomalous inputs show that OBS AN manifests promising OOB detectability with low overhead. We further present two downstream applications to show how OBS AN prevents online AE generation and facilitates feedback-driven fuzz testing toward DNN executables.","PeriodicalId":199733,"journal":{"name":"Proceedings 2023 Network and Distributed System Security Symposium","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"OBSan: An Out-Of-Bound Sanitizer to Harden DNN Executables\",\"authors\":\"Yanzuo Chen, Yuanyuan Yuan, Shuai Wang\",\"doi\":\"10.14722/ndss.2023.24103\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"—The rapid adoption of deep neural network (DNN) models on a variety of hardware platforms has boosted the development of deep learning (DL) compilers. DL compilers take as input the high-level DNN model specifications and generate optimized DNN executables for diverse hardware architectures like CPUs and GPUs. Despite the emerging adoption of DL compilers in real-world scenarios, no solutions exist to protect DNN executables. To fill this critical gap, this paper introduces OBS AN , a fast sanitizer designed to check for out-of-bound (OOB) behavior in DNN executables. Holistically, DNN incorporates bidirectional computation : forward propagation which predicts an output based on an input, and backward propagation which characterizes how the forward prediction is made. Both the neuron activations in forward propagation and gradients in backward propagation should fall within valid ranges, and deviations from these ranges would be considered as OOB. OOB is primarily related to unsafe behavior of DNNs, which root from anomalous inputs and may cause mispredictions or even exploitation via adversarial examples (AEs). 
We thus design OBS AN , which includes two variants, FOBS AN and BOBS AN , to detect OOB in forward and backward propagations, respectively. Each OBS AN variant is designed as extra passes of DL compilers to integrate with large-scale DNN models, and we design various optimization schemes to reduce the overhead of OBS AN . Evaluations over various anomalous inputs show that OBS AN manifests promising OOB detectability with low overhead. We further present two downstream applications to show how OBS AN prevents online AE generation and facilitates feedback-driven fuzz testing toward DNN executables.\",\"PeriodicalId\":199733,\"journal\":{\"name\":\"Proceedings 2023 Network and Distributed System Security Symposium\",\"volume\":\"8 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings 2023 Network and Distributed System Security Symposium\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.14722/ndss.2023.24103\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings 2023 Network and Distributed System Security Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14722/ndss.2023.24103","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
OBSan: An Out-Of-Bound Sanitizer to Harden DNN Executables
Abstract—The rapid adoption of deep neural network (DNN) models on a variety of hardware platforms has boosted the development of deep learning (DL) compilers. DL compilers take high-level DNN model specifications as input and generate optimized DNN executables for diverse hardware architectures such as CPUs and GPUs. Despite the emerging adoption of DL compilers in real-world scenarios, no solutions exist to protect DNN executables. To fill this critical gap, this paper introduces OBSan, a fast sanitizer designed to check for out-of-bound (OOB) behavior in DNN executables. Holistically, a DNN incorporates bidirectional computation: forward propagation, which predicts an output based on an input, and backward propagation, which characterizes how the forward prediction is made. Both the neuron activations in forward propagation and the gradients in backward propagation should fall within valid ranges; deviations from these ranges are considered OOB. OOB is primarily associated with unsafe DNN behavior, which is rooted in anomalous inputs and may cause mispredictions or even exploitation via adversarial examples (AEs). We thus design OBSan, which includes two variants, FOBSan and BOBSan, to detect OOB in forward and backward propagation, respectively. Each OBSan variant is designed as an extra pass of the DL compiler so that it integrates with large-scale DNN models, and we design various optimization schemes to reduce the overhead of OBSan. Evaluations over various anomalous inputs show that OBSan achieves promising OOB detectability with low overhead. We further present two downstream applications that show how OBSan prevents online AE generation and facilitates feedback-driven fuzz testing of DNN executables.
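To make the range-check idea concrete, the following is a minimal Python/NumPy sketch of a FOBSan-style forward check. It illustrates the concept only and is not the paper's implementation: OBSan inserts such checks into the compiled executable via an extra DL-compiler pass, whereas this sketch runs in plain Python, and the class name ForwardRangeSanitizer and the synthetic profiling data are hypothetical stand-ins.

import numpy as np

class ForwardRangeSanitizer:
    # Records per-neuron activation bounds observed on in-distribution data,
    # then flags activations outside those bounds as out-of-bound (OOB).
    def __init__(self, num_neurons):
        self.lo = np.full(num_neurons, np.inf)
        self.hi = np.full(num_neurons, -np.inf)

    def profile(self, acts):
        # Widen the valid range to cover activations seen during profiling.
        self.lo = np.minimum(self.lo, acts)
        self.hi = np.maximum(self.hi, acts)

    def check(self, acts):
        # True if any neuron's activation falls outside its profiled range.
        return bool(np.any((acts < self.lo) | (acts > self.hi)))

# Usage sketch: profile on benign activations, then guard inference.
rng = np.random.default_rng(0)
san = ForwardRangeSanitizer(num_neurons=8)
for _ in range(1000):
    san.profile(rng.normal(0.0, 1.0, size=8))  # stand-in for benign activations
print(san.check(rng.normal(0.0, 1.0, size=8)))  # likely False: within range
print(san.check(np.full(8, 100.0)))             # True: far outside profiled range

A BOBSan-style check would apply the same bounds test to the gradients produced by backward propagation rather than to forward activations.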