Securing Neural Networks Using Homomorphic Encryption

A. Dalvi, Apoorva Jain, Smit Moradiya, Riddhisha Nirmal, Jay Sanghavi, Irfan A. Siddavatam

2021 International Conference on Intelligent Technologies (CONIT), 25 June 2021. DOI: 10.1109/CONIT51480.2021.9498376
Neural networks are increasingly popular in the modern world, yet they are often deployed without much consideration of their potential flaws, which leaves them vulnerable to attack. One such vulnerability, the backdoor attack, is studied in this paper. In a backdoor attack, an adversary induces unique misclassification rules, or trigger patterns, into the neural network so that, upon encountering a trigger, the network predicts the output dictated by those misclassification rules, giving the attacker control over the network's output. To prevent such a vulnerability, we propose homomorphic encryption as a solution. Homomorphically encrypted data has a special property: certain operations performed on the encrypted data correspond directly to the same operations on the underlying plaintext, without the need for any special mechanism. This ability of homomorphic encryption can be used in conjunction with the vulnerable neural network to revoke the attacker's control over it. In this paper, we therefore secure a vulnerable neural network against backdoor attacks using homomorphic encryption.
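The abstract does not specify the encryption scheme used. As a minimal, self-contained illustration of the homomorphic property it describes, the following toy Paillier sketch (deliberately tiny, insecure parameters, chosen only for readability; not the authors' implementation) shows that arithmetic on ciphertexts maps to arithmetic on the hidden plaintexts, including the weighted sums at the heart of a neural network layer.

import math
import random

# --- Toy key generation (insecure demo primes; real keys use >= 2048-bit n) ---
p, q = 293, 433                  # small primes, for illustration only
n = p * q                        # public modulus
n_sq = n * n
g = n + 1                        # standard Paillier generator choice
lam = math.lcm(p - 1, q - 1)     # private key part: lambda(n) (Python 3.9+)

def L(x):
    """Paillier's L-function: L(x) = (x - 1) / n."""
    return (x - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)   # private key part: modular inverse (3.8+)

def encrypt(m):
    """Encrypt a plaintext 0 <= m < n under the public key (n, g)."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:          # randomness must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    """Recover the plaintext with the private key (lam, mu)."""
    return (L(pow(c, lam, n_sq)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
c1, c2 = encrypt(17), encrypt(25)
assert decrypt((c1 * c2) % n_sq) == 17 + 25

# Scalar homomorphism: raising a ciphertext to a plaintext weight scales the
# hidden value, so a neuron's weighted sum w1*x1 + w2*x2 can be evaluated on
# ciphertexts alone, without the evaluator ever seeing x1 or x2.
w1, w2 = 3, 5
weighted = (pow(c1, w1, n_sq) * pow(c2, w2, n_sq)) % n_sq
assert decrypt(weighted) == w1 * 17 + w2 * 25
print("decrypted weighted sum:", decrypt(weighted))   # prints 176

Paillier is used here only because its additive homomorphism fits in a few lines; it suffices to show how an encrypted input can pass through a linear layer while denying an attacker's trigger pattern direct access to the plaintext features.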