Protection of Computational Machine Learning Models against Extraction Threat
M. O. Kalinin, M. D. Soshnev, A. S. Konoplev
DOI: 10.3103/S0146411623080084
Automatic Control and Computer Sciences, vol. 57, no. 8, pp. 996–1004, published 2024-02-29 (JCR Q4, Automation & Control Systems)
https://link.springer.com/article/10.3103/S0146411623080084
Citations: 0
Abstract
The extraction threat to machine learning models is considered. Most contemporary methods of defense against the extraction of computational machine learning models are based on the use of a protective noise mechanism. The main disadvantage inherent in the noise mechanism is that it reduces the precision of the model's output. The requirements for efficient methods of protecting machine learning models from extraction are formulated, and a new method of defense against this threat, supplementing the noise with a distillation mechanism, is presented. It is experimentally shown that the developed method provides resistance of machine learning models to the extraction threat while maintaining the quality of their operating results, owing to the transformation of the protected models into other, simplified models equivalent to the original ones.
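The abstract does not give the paper's concrete algorithm, but the two mechanisms it names can be sketched. Below is a minimal, illustrative Python/NumPy sketch of (a) a protective-noise step that perturbs the probabilities returned to a querying client while preserving the top-1 label, and (b) temperature-scaled soft targets of the kind used in knowledge distillation to train a simplified stand-in model. All function names (`protect_output`), parameters (`noise_scale`, `temperature=3.0`), and the top-1-preserving swap are assumptions for illustration, not the authors' method.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature yields softer
    (less confident) probabilities, as used in distillation."""
    z = logits / temperature
    z = z - z.max()                # numerical stability
    e = np.exp(z)
    return e / e.sum()

def protect_output(probs, noise_scale=0.02, rng=None):
    """Illustrative protective-noise step (assumed, not from the paper):
    perturb the probability vector returned to a client so that repeated
    queries yield distorted confidences, while keeping the top-1 label."""
    rng = np.random.default_rng(rng)
    top = int(probs.argmax())
    noisy = probs + rng.uniform(0.0, noise_scale, size=probs.shape)
    noisy /= noisy.sum()           # renormalize to a valid distribution
    if noisy.argmax() != top:      # restore top-1 if the noise flipped it
        j = int(noisy.argmax())
        noisy[top], noisy[j] = noisy[j], noisy[top]
    return noisy

# Teacher logits for one input. A distillation step would train a
# simplified student model on these softened targets, so that the
# student (not the original model) is what gets exposed to queries.
teacher_logits = np.array([4.0, 1.0, 0.5])
soft_targets = softmax(teacher_logits, temperature=3.0)  # student's targets
served_probs = protect_output(softmax(teacher_logits))   # noisy API output
```

The intuition behind combining the two: noise alone degrades output precision for legitimate users, whereas serving a distilled, simplified-but-equivalent student limits what an attacker can reconstruct while the answers stay accurate.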
About the journal
Automatic Control and Computer Sciences is a peer-reviewed journal that publishes articles on:
• Control systems, cyber-physical systems, real-time systems, robotics, smart sensors, embedded intelligence
• Network information technologies, information security, statistical methods of data processing, distributed artificial intelligence, complex systems modeling, knowledge representation, processing, and management
• Signal and image processing, machine learning, machine perception, computer vision