Title: Revisiting Machine Learning Training Process for Enhanced Data Privacy
Authors: Adit Goyal, Vikas Hassija, V. Albuquerque
DOI: 10.1145/3474124.3474208
Published: 2021-08-05, 2021 Thirteenth International Conference on Contemporary Computing (IC3-2021)
Citations: 3
Abstract
The increasing use of machine learning algorithms in nearly every aspect of our lives has brought a new challenge to the forefront: user privacy. Once a user has shared data online, it is difficult to revoke access to that data if it has already been used to train a model. Every user should retain the right to have their personal data forgotten. To address this problem, several frameworks for machine unlearning, or inverse learning, have recently been introduced. Although there is not yet a precise definition of forgetting in deep neural networks (DNNs), our focus is on selectively forgetting a subset of data belonging to a class that was initially used to train the model, without retraining from scratch and without access to the original training data. The method scrubs the weights clean of information about the data to be forgotten. It exploits the stability of stochastic gradient descent and concepts from differential privacy to address the problem of selective forgetting in DNNs.
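The abstract does not give the scrubbing procedure itself, but the general idea behind Fisher-information-based weight scrubbing can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes a diagonal Fisher approximation from per-sample gradients on the *retained* data, and injects Gaussian noise into each weight inversely proportional to how strongly the retained data constrains it, so that weights important only to the forgotten data are perturbed the most. The function names and the noise scale are hypothetical.

```python
import numpy as np

def fisher_diagonal(per_sample_grads):
    # Diagonal Fisher approximation: mean squared per-sample gradient,
    # a common empirical surrogate for parameter importance.
    return np.mean(np.square(per_sample_grads), axis=0)

def scrub_weights(weights, retain_grads, noise_scale=0.1, eps=1e-8, seed=0):
    """Hypothetical scrubbing step (illustrative, not the paper's exact
    method): add Gaussian noise to each weight with standard deviation
    inversely proportional to its Fisher information on the data that
    is to be retained."""
    fisher = fisher_diagonal(retain_grads)
    sigma = noise_scale / np.sqrt(fisher + eps)  # low importance -> big noise
    rng = np.random.default_rng(seed)
    return weights + rng.normal(0.0, sigma, size=weights.shape)

# Toy demonstration: the first weight is well constrained by the
# retained data (large gradients), the last is barely used, so it
# receives a much larger perturbation after scrubbing.
w = np.array([1.0, 2.0, 3.0])
retain_grads = np.array([[1.0, 0.5, 0.01],
                         [0.9, 0.4, 0.02]])
scrubbed = scrub_weights(w, retain_grads)
```

In this framing, the differential-privacy connection is that the injected noise bounds how much the scrubbed weights can reveal about the forgotten subset, while SGD stability ensures retraining on the retained data alone would land near the scrubbed solution.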