{"title":"一种基于神经网络结构的电磁隐蔽信道","authors":"Chaojie Gu, Jiale Chen, Rui Tan, Linshan Jiang","doi":"10.1109/ICPADS53394.2021.00028","DOIUrl":null,"url":null,"abstract":"Outsourcing the design of deep neural networks may incur cybersecurity threats from the hostile designers. This paper studies a new covert channel attack that leaks the inference results over the air through a hostile design of the neural network architecture and the computing device's electromagnetic radiation when executing the neural network. Specifically, the hostile neural network consists of a series of binary models that correspond to all classes and are executed sequentially. The execution terminates once any binary model given the input is positive about its responsible class. We describe an approach to generate such binary models by pruning a benign neural network that is trained using the standard method to deal with all the classes. Compared with the benign neural network, the hostile one has similar memory usage and negligible classification accuracy drop, but distinct inference times for the samples of different classes. As a result, the hostile neural network's classification result can be eavesdropped by measuring the duration of the electromagnetic radiation emanated from the computing device. As neural networks are stored and transmitted as data files, this covert channel attack is more stealthy to the anti-malware than other code-based attacks. We implement the described attack on two edge computing devices that execute the hostile neural network on CPU or GPU. Evaluation shows 100% empirical accuracy in eavesdropping the inference results.","PeriodicalId":309508,"journal":{"name":"2021 IEEE 27th International Conference on Parallel and Distributed Systems (ICPADS)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An Electromagnetic Covert Channel based on Neural Network Architecture\",\"authors\":\"Chaojie Gu, Jiale Chen, Rui Tan, Linshan Jiang\",\"doi\":\"10.1109/ICPADS53394.2021.00028\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Outsourcing the design of deep neural networks may incur cybersecurity threats from the hostile designers. This paper studies a new covert channel attack that leaks the inference results over the air through a hostile design of the neural network architecture and the computing device's electromagnetic radiation when executing the neural network. Specifically, the hostile neural network consists of a series of binary models that correspond to all classes and are executed sequentially. The execution terminates once any binary model given the input is positive about its responsible class. We describe an approach to generate such binary models by pruning a benign neural network that is trained using the standard method to deal with all the classes. Compared with the benign neural network, the hostile one has similar memory usage and negligible classification accuracy drop, but distinct inference times for the samples of different classes. As a result, the hostile neural network's classification result can be eavesdropped by measuring the duration of the electromagnetic radiation emanated from the computing device. As neural networks are stored and transmitted as data files, this covert channel attack is more stealthy to the anti-malware than other code-based attacks. 
We implement the described attack on two edge computing devices that execute the hostile neural network on CPU or GPU. Evaluation shows 100% empirical accuracy in eavesdropping the inference results.\",\"PeriodicalId\":309508,\"journal\":{\"name\":\"2021 IEEE 27th International Conference on Parallel and Distributed Systems (ICPADS)\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE 27th International Conference on Parallel and Distributed Systems (ICPADS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICPADS53394.2021.00028\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 27th International Conference on Parallel and Distributed Systems (ICPADS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPADS53394.2021.00028","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Outsourcing the design of deep neural networks may expose the outsourcing party to cybersecurity threats from hostile designers. This paper studies a new covert channel attack that leaks inference results over the air through a hostile design of the neural network architecture, exploiting the electromagnetic radiation the computing device emits while executing the network. Specifically, the hostile neural network consists of a series of binary models, one per class, that are executed sequentially; execution terminates as soon as any binary model classifies the input as positive for its responsible class. We describe an approach that generates these binary models by pruning a benign neural network trained with the standard method to handle all classes. Compared with the benign network, the hostile one has similar memory usage and a negligible drop in classification accuracy, but distinct inference times for samples of different classes. As a result, the hostile network's classification result can be eavesdropped on by measuring the duration of the electromagnetic radiation emanating from the computing device. Because neural networks are stored and transmitted as data files, this covert channel attack is stealthier against anti-malware tools than code-based attacks. We implement the described attack on two edge computing devices that execute the hostile neural network on a CPU or a GPU. Evaluation shows 100% empirical accuracy in eavesdropping on the inference results.
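
To make the timing side channel concrete, below is a minimal Python sketch (not the authors' implementation) of the early-terminating inference loop the abstract describes; the names hostile_inference, binary_models, toy_models, and POSITIVE_THRESHOLD are illustrative assumptions. Because the binary models run in a fixed order and the loop exits at the first positive decision, total execution time grows with the index of the predicted class, which is exactly what the electromagnetic eavesdropper measures.

    import numpy as np

    # Assumed decision threshold for each binary model (illustrative).
    POSITIVE_THRESHOLD = 0.5

    def hostile_inference(x, binary_models):
        """Run one binary model per class in a fixed order, stopping at the
        first positive decision. The loop's duration thus encodes the
        predicted class index, leaking it through execution time and, in
        turn, through the device's electromagnetic emanation."""
        for class_idx, model in enumerate(binary_models):
            # Each model maps an input to a scalar confidence that the
            # input belongs to its responsible class.
            if model(x) > POSITIVE_THRESHOLD:
                return class_idx  # early exit: later classes take longer
        return None  # no model claimed the input

    # Toy stand-ins for the pruned binary models: model c is confident
    # exactly when the input's largest entry sits at index c.
    toy_models = [lambda x, c=c: float(np.argmax(x) == c) for c in range(10)]
    assert hostile_inference(np.eye(10)[3], toy_models) == 3

Under this scheme, an eavesdropper would only need a per-class duration profile: after timing one sample of each class offline, a measured radiation duration can be mapped back to the class whose profile it matches.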