{"title":"深度神经暴露:你可以运行,但不能隐藏你的神经网络架构!","authors":"Sayed Erfan Arefin, Abdul Serwadda","doi":"10.1145/3437880.3460415","DOIUrl":null,"url":null,"abstract":"Deep Neural Networks (DNNs) are at the heart of many of today's most innovative technologies. With companies investing lots of resources to design, build and optimize these networks for their custom products, DNNs are now integral to many companies' tightly guarded Intellectual Property. As is the case for every high-value product, one can expect bad actors to increasingly design techniques aimed to uncover the architectural designs of proprietary DNNs. This paper investigates if the power draw patterns of a GPU on which a DNN runs could be leveraged to glean key details of its design architecture. Based on ten of the most well-known Convolutional Neural Network (CNN) architectures, we study this line of attack under varying assumptions about the kind of data available to the attacker. We show the attack to be highly effective, attaining an accuracy in the 80 percentage range for the best performing attack scenario.","PeriodicalId":120300,"journal":{"name":"Proceedings of the 2021 ACM Workshop on Information Hiding and Multimedia Security","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Deep Neural Exposure: You Can Run, But Not Hide Your Neural Network Architecture!\",\"authors\":\"Sayed Erfan Arefin, Abdul Serwadda\",\"doi\":\"10.1145/3437880.3460415\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep Neural Networks (DNNs) are at the heart of many of today's most innovative technologies. With companies investing lots of resources to design, build and optimize these networks for their custom products, DNNs are now integral to many companies' tightly guarded Intellectual Property. As is the case for every high-value product, one can expect bad actors to increasingly design techniques aimed to uncover the architectural designs of proprietary DNNs. This paper investigates if the power draw patterns of a GPU on which a DNN runs could be leveraged to glean key details of its design architecture. Based on ten of the most well-known Convolutional Neural Network (CNN) architectures, we study this line of attack under varying assumptions about the kind of data available to the attacker. 
We show the attack to be highly effective, attaining an accuracy in the 80 percentage range for the best performing attack scenario.\",\"PeriodicalId\":120300,\"journal\":{\"name\":\"Proceedings of the 2021 ACM Workshop on Information Hiding and Multimedia Security\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-06-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2021 ACM Workshop on Information Hiding and Multimedia Security\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3437880.3460415\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2021 ACM Workshop on Information Hiding and Multimedia Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3437880.3460415","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deep Neural Networks (DNNs) are at the heart of many of today's most innovative technologies. With companies investing substantial resources to design, build, and optimize these networks for their products, DNNs are now integral to many companies' tightly guarded intellectual property. As with any high-value product, one can expect bad actors to increasingly devise techniques aimed at uncovering the architectural designs of proprietary DNNs. This paper investigates whether the power draw patterns of the GPU on which a DNN runs can be leveraged to glean key details of its design architecture. Based on ten of the most well-known Convolutional Neural Network (CNN) architectures, we study this line of attack under varying assumptions about the kind of data available to the attacker. We show the attack to be highly effective, attaining accuracy in the 80% range for the best-performing attack scenario.
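To make the attack pipeline concrete, below is a minimal, hypothetical sketch of what such power-based architecture fingerprinting could look like: a background thread polls GPU power draw through NVML while a candidate CNN runs inference, and the resulting traces are reduced to features for a supervised classifier. This is not the authors' implementation; the sampling interval, the summary-statistic features, and the random-forest classifier are illustrative assumptions.

```python
# Hypothetical sketch only: pynvml (pip install nvidia-ml-py) exposes NVML's
# GPU power counter; the feature set and classifier are illustrative choices,
# not the method described in the paper.
import threading
import time

import numpy as np
import pynvml
from sklearn.ensemble import RandomForestClassifier


def record_power_trace(stop_event, samples, gpu_index=0, interval_s=0.01):
    """Poll instantaneous GPU power draw (in milliwatts) until told to stop."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)
    while not stop_event.is_set():
        samples.append(pynvml.nvmlDeviceGetPowerUsage(handle))
        time.sleep(interval_s)
    pynvml.nvmlShutdown()


def trace_workload(run_inference):
    """Record the power trace produced by one inference workload."""
    samples, stop = [], threading.Event()
    recorder = threading.Thread(target=record_power_trace, args=(stop, samples))
    recorder.start()
    run_inference()  # e.g. one forward pass of a candidate CNN on the GPU
    stop.set()
    recorder.join()
    return np.asarray(samples, dtype=np.float32)


def featurize(trace):
    """Collapse a variable-length trace into fixed-size summary statistics."""
    return np.array([trace.mean(), trace.std(), trace.max(),
                     trace.min(), float(len(trace))])


def fit_architecture_classifier(traces, labels):
    """Attacker's training phase: each trace is labeled with the architecture
    that produced it (e.g. 0..9 for ten candidate CNNs)."""
    X = np.stack([featurize(t) for t in traces])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf
```

At attack time, the same featurization would be applied to a trace captured from the victim GPU and fed to the fitted classifier's predict method; how well this transfers under different amounts of attacker knowledge is exactly the dimension the paper's varying attack scenarios explore.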