{"title":"基于混合模糊模型的模型窃取防御:尚在研究中","authors":"Zicheng Gong, Wei Jiang, Jinyu Zhan, Ziwei Song","doi":"10.1109/CODESISSS51650.2020.9244031","DOIUrl":null,"url":null,"abstract":"With increasing applications of Deep Neural Networks (DNNs) to edge computing systems, security issues have received more attentions. Particularly, model stealing attack is one of the biggest challenge to the privacy of models. To defend against model stealing attack, we propose a novel protection architecture with fuzzy models. Each fuzzy model is designed to generate wrong predictions corresponding to a particular category. In addition’ we design a special voting strategy to eliminate the systemic errors, which can destroy the dark knowledge in predictions at the same time. Preliminary experiments show that our method substantially decreases the clone model's accuracy (up to 20%) without loss of inference accuracy for benign users.","PeriodicalId":437802,"journal":{"name":"2020 International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Model Stealing Defense with Hybrid Fuzzy Models: Work-in-Progress\",\"authors\":\"Zicheng Gong, Wei Jiang, Jinyu Zhan, Ziwei Song\",\"doi\":\"10.1109/CODESISSS51650.2020.9244031\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With increasing applications of Deep Neural Networks (DNNs) to edge computing systems, security issues have received more attentions. Particularly, model stealing attack is one of the biggest challenge to the privacy of models. To defend against model stealing attack, we propose a novel protection architecture with fuzzy models. Each fuzzy model is designed to generate wrong predictions corresponding to a particular category. In addition’ we design a special voting strategy to eliminate the systemic errors, which can destroy the dark knowledge in predictions at the same time. Preliminary experiments show that our method substantially decreases the clone model's accuracy (up to 20%) without loss of inference accuracy for benign users.\",\"PeriodicalId\":437802,\"journal\":{\"name\":\"2020 International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS)\",\"volume\":\"73 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-09-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CODESISSS51650.2020.9244031\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CODESISSS51650.2020.9244031","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Model Stealing Defense with Hybrid Fuzzy Models: Work-in-Progress
With the increasing application of Deep Neural Networks (DNNs) in edge computing systems, security issues have received growing attention. In particular, the model stealing attack is one of the biggest challenges to model privacy. To defend against model stealing attacks, we propose a novel protection architecture built on fuzzy models. Each fuzzy model is designed to generate wrong predictions for one particular category. In addition, we design a special voting strategy that eliminates these systematic errors while simultaneously destroying the dark knowledge in the returned predictions. Preliminary experiments show that our method substantially decreases the clone model's accuracy (by up to 20%) without any loss of inference accuracy for benign users.
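The sketch below is one possible reading of the architecture described in the abstract, not the paper's actual implementation: an ensemble of per-category "fuzzy" models, each corrupted on exactly one assigned class, combined by a vote that recovers the correct label while returning only a hard one-hot output (so the soft-probability dark knowledge an attacker would distill from is discarded). All names (make_fuzzy_model, base_model, protected_predict, NUM_CLASSES) and the specific corruption/voting rules are illustrative assumptions.

```python
# Hypothetical sketch of a hybrid-fuzzy-model defense (assumed design, not the
# paper's exact algorithm): K fuzzy models, each deliberately wrong on one
# class, plus majority voting that returns only a hard label.
import numpy as np

NUM_CLASSES = 5  # illustrative class count


def base_model(x):
    """Toy stand-in for the protected DNN: a deterministic softmax over
    pseudo-random logits biased toward one class of the input."""
    rng = np.random.default_rng(abs(hash(x.tobytes())) % (2**32))
    logits = rng.normal(size=NUM_CLASSES)
    logits += 3.0 * np.eye(NUM_CLASSES)[int(x[0]) % NUM_CLASSES]
    e = np.exp(logits - logits.max())
    return e / e.sum()


def make_fuzzy_model(base_fn, wrong_class):
    """Wrap a base model so its output is corrupted whenever its prediction
    is `wrong_class` (illustrative stand-in for per-category fuzzy models)."""
    def fuzzy_predict(x):
        probs = base_fn(x)
        if probs.argmax() == wrong_class:
            corrupted = probs.copy()
            corrupted[wrong_class] = 0.0      # move mass off the true class
            corrupted /= corrupted.sum()
            return corrupted
        return probs
    return fuzzy_predict


# One fuzzy model per class: model k is wrong exactly when the answer is k.
fuzzy_models = [make_fuzzy_model(base_model, k) for k in range(NUM_CLASSES)]


def protected_predict(x):
    """Vote across fuzzy models: at most one model disagrees on any input,
    so the majority label is correct, but only a one-hot vector is returned,
    which removes the soft probabilities a clone model would learn from."""
    votes = [m(x).argmax() for m in fuzzy_models]
    label = np.bincount(votes, minlength=NUM_CLASSES).argmax()
    return np.eye(NUM_CLASSES)[label]


if __name__ == "__main__":
    x = np.array([2.0, 0.1, -0.3])
    print(protected_predict(x))  # benign user still gets the correct label
```

Under these assumptions, benign users see the same top-1 label as the unprotected model, while an attacker querying the service only observes hard labels (and, per model, systematically corrupted distributions), which is the intuition behind the reported drop in clone-model accuracy.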