Michael R. Clark, Peter Swartz, Andrew Alten, Raed M. Salih
{"title":"不落下一个分类器:基于置信度信息的RBF SVM分类器易受图像提取攻击的深入研究","authors":"Michael R. Clark, Peter Swartz, Andrew Alten, Raed M. Salih","doi":"10.1109/CogMI50398.2020.00037","DOIUrl":null,"url":null,"abstract":"Training image extraction attacks attempt to reverse engineer training images from an already trained machine learning model. Such attacks are concerning because training data can often be sensitive in nature. Recent research has shown that extracting training images is generally much harder than model inversion, which attempts to duplicate the functionality of the model. In this paper, we correct common misperceptions about image extraction attacks and develop a deep understanding ofwhy some trained models are vulnerable to ourattack while others are not. In particular, we use the RBFSVMclassifier to show that we can extract individual training images from models trained on thousands of images., which refutes the notion that these attacks can only extract an “average” of each class. We also show that increasing diversity of the training data set leads to more successful attacks. To the best of our knowledge, our work is the first to show semantically meaningful images extracted from the RBF SVM classifier.","PeriodicalId":360326,"journal":{"name":"2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"No Classifier Left Behind: An In-depth Study of the RBF SVM Classifier's Vulnerability to Image Extraction Attacks via Confidence Information Exploitation\",\"authors\":\"Michael R. Clark, Peter Swartz, Andrew Alten, Raed M. Salih\",\"doi\":\"10.1109/CogMI50398.2020.00037\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Training image extraction attacks attempt to reverse engineer training images from an already trained machine learning model. Such attacks are concerning because training data can often be sensitive in nature. Recent research has shown that extracting training images is generally much harder than model inversion, which attempts to duplicate the functionality of the model. In this paper, we correct common misperceptions about image extraction attacks and develop a deep understanding ofwhy some trained models are vulnerable to ourattack while others are not. In particular, we use the RBFSVMclassifier to show that we can extract individual training images from models trained on thousands of images., which refutes the notion that these attacks can only extract an “average” of each class. We also show that increasing diversity of the training data set leads to more successful attacks. 
To the best of our knowledge, our work is the first to show semantically meaningful images extracted from the RBF SVM classifier.\",\"PeriodicalId\":360326,\"journal\":{\"name\":\"2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CogMI50398.2020.00037\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CogMI50398.2020.00037","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
No Classifier Left Behind: An In-depth Study of the RBF SVM Classifier's Vulnerability to Image Extraction Attacks via Confidence Information Exploitation
Training image extraction attacks attempt to reverse engineer training images from an already trained machine learning model. Such attacks are concerning because training data is often sensitive in nature. Recent research has shown that extracting training images is generally much harder than model inversion, which attempts to duplicate the functionality of the model. In this paper, we correct common misperceptions about image extraction attacks and develop a deep understanding of why some trained models are vulnerable to our attack while others are not. In particular, we use the RBF SVM classifier to show that we can extract individual training images from models trained on thousands of images, which refutes the notion that these attacks can only extract an "average" of each class. We also show that increasing the diversity of the training data set leads to more successful attacks. To the best of our knowledge, our work is the first to show semantically meaningful images extracted from the RBF SVM classifier.
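The abstract does not describe the attack procedure in detail. As a rough illustration of what "confidence information exploitation" against an RBF SVM can look like, the following is a minimal sketch, assuming a scikit-learn SVC trained on the digits dataset and using naive random hill climbing on predict_proba as a stand-in for whatever optimization the authors actually use. The function extract_image and all of its parameters are hypothetical, not taken from the paper.

# Hypothetical sketch of a confidence-exploitation inversion attack on an RBF SVM.
# Not the authors' implementation; a minimal illustration using scikit-learn and
# simple hill climbing on the model's predicted class probability.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import SVC

# Train the target model with probability estimates enabled (the "confidence
# information" the attack exploits).
X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]
model = SVC(kernel="rbf", probability=True).fit(X, y)

def extract_image(target_class, shape=(64,), iters=5000, step=0.1, seed=None):
    """Hill-climb an input so the model assigns it high confidence for target_class."""
    rng = np.random.default_rng(seed)
    x = rng.random(shape)  # random starting image
    best = model.predict_proba(x.reshape(1, -1))[0, target_class]
    for _ in range(iters):
        cand = np.clip(x + rng.normal(0.0, step, size=shape), 0.0, 1.0)
        conf = model.predict_proba(cand.reshape(1, -1))[0, target_class]
        if conf > best:  # keep perturbations that raise the target-class confidence
            x, best = cand, conf
    return x, best

reconstruction, confidence = extract_image(target_class=3)
print(f"final confidence for class 3: {confidence:.3f}")

Whether the resulting input resembles an individual training image (as the paper reports for the RBF SVM) rather than a class "average" depends on the model and the training data; the sketch only shows the query-and-optimize loop that such an attack builds on.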