{"title":"Protecting Deep Neural Network Intellectual Property with Architecture-Agnostic Input Obfuscation","authors":"Brooks Olney, Robert Karam","doi":"10.1145/3526241.3530386","DOIUrl":null,"url":null,"abstract":"Deep Convolutional Neural Networks (DCNNs) have revolutionized and improved many aspects of modern life. However, these models are increasingly more complex, and training them to perform at desirable levels is difficult undertaking; hence, the trained parameters represent a valuable intellectual property (IP) asset which a motivated attacker may wish to steal. To better protect the IP, we propose a method of lightweight input obfuscation that is undone prior to inference, where input data is obfuscated in order to use the model to specification. Without using the correct key and unlocking sequence, the accuracy of the classifier is reduced to a random guess, thus protecting the input/output interface and mitigating model extraction attacks which rely on such access. We evaluate the system using a VGG-16 network trained on CIFAR-10, and demonstrate that with an incorrect deobfuscation key or sequence, the classification accuracy drops to a random guess, with an inference timing overhead of 4.4% on an Nvidia-based evaluation platform. The system avoids the costs associated with retraining and has no impact on model accuracy for authorized users.","PeriodicalId":188228,"journal":{"name":"Proceedings of the Great Lakes Symposium on VLSI 2022","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Great Lakes Symposium on VLSI 2022","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3526241.3530386","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Deep Convolutional Neural Networks (DCNNs) have revolutionized and improved many aspects of modern life. However, these models are increasingly complex, and training them to perform at desirable levels is a difficult undertaking; hence, the trained parameters represent a valuable intellectual property (IP) asset which a motivated attacker may wish to steal. To better protect this IP, we propose a lightweight input obfuscation method: input data is obfuscated, and the obfuscation must be undone prior to inference in order to use the model to specification. Without the correct key and unlocking sequence, the accuracy of the classifier is reduced to a random guess, thus protecting the input/output interface and mitigating model extraction attacks which rely on such access. We evaluate the system using a VGG-16 network trained on CIFAR-10, and demonstrate that with an incorrect deobfuscation key or sequence, the classification accuracy drops to a random guess, with an inference timing overhead of 4.4% on an NVIDIA-based evaluation platform. The system avoids the costs associated with retraining and has no impact on model accuracy for authorized users.
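The abstract describes the mechanism only at a high level. As a minimal sketch of how keyed input obfuscation could work in principle, the snippet below uses a key-seeded pixel permutation that is applied to the input and inverted before inference; the permutation scheme, key derivation, and function names are illustrative assumptions, not the paper's published transform:

```python
import numpy as np

def keyed_permutation(key: int, n: int) -> np.ndarray:
    """Derive a deterministic permutation of n elements from an integer key."""
    rng = np.random.default_rng(key)
    return rng.permutation(n)

def obfuscate(image: np.ndarray, key: int) -> np.ndarray:
    """Scramble the pixel positions of an HxWxC image using the key."""
    h, w, c = image.shape
    perm = keyed_permutation(key, h * w)
    return image.reshape(h * w, c)[perm].reshape(h, w, c)

def deobfuscate(image: np.ndarray, key: int) -> np.ndarray:
    """Invert the keyed scrambling; only the correct key restores the input."""
    h, w, c = image.shape
    inverse = np.argsort(keyed_permutation(key, h * w))
    return image.reshape(h * w, c)[inverse].reshape(h, w, c)

# A round trip with the correct key recovers the input exactly, so accuracy
# is unaffected for authorized users; a wrong key leaves the input scrambled,
# which is what drives classifier accuracy down to a random guess.
img = np.random.rand(32, 32, 3).astype(np.float32)  # CIFAR-10-sized input
assert np.allclose(deobfuscate(obfuscate(img, key=42), key=42), img)
assert not np.allclose(deobfuscate(obfuscate(img, key=42), key=7), img)
```

Because the transform operates only on the input tensor, a scheme of this kind is architecture-agnostic and requires no retraining, consistent with the properties claimed in the abstract.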