Akshay Prabhu, Niranjan Balasubramanian, Chinmay Tiwari, R. Deolekar
Title: Privacy preserving and secure machine learning
DOI: 10.1109/INDICON52576.2021.9691706
Published in: 2021 IEEE 18th India Council International Conference (INDICON)
Publication date: 2021-12-19
Citations: 1
Abstract
Privacy in machine learning is a fundamentally important issue that practitioners must keep in mind while developing models. This paper presents methods that can be used to defend models against attacks that undermine the privacy and safety of the data used to build them. These methods help prevent attacks against both the trained model and the underlying training data. The solutions explored include differential privacy and homomorphic encryption, which protect the training data while it is being used to train the model, and machine unlearning, which empowers the data scientist to remove training samples after training.
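Of the techniques the abstract names, differential privacy is the most self-contained to illustrate. As a hedged sketch (not the paper's own implementation): the classic Laplace mechanism releases an aggregate statistic after adding noise calibrated to the statistic's sensitivity, so that the presence or absence of any single record is statistically masked. The function names and parameters below are illustrative, not taken from the paper.

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse-CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def dp_mean(values, lower, upper, epsilon):
    """Epsilon-differentially private mean via the Laplace mechanism.

    Clipping each value to [lower, upper] bounds how much any one record
    can shift the mean: at most (upper - lower) / n. Adding Laplace noise
    with scale sensitivity / epsilon then yields epsilon-DP.
    """
    clipped = [min(max(v, lower), upper) for v in values]
    n = len(clipped)
    sensitivity = (upper - lower) / n
    return sum(clipped) / n + laplace_noise(sensitivity / epsilon)
```

Smaller `epsilon` means stronger privacy but noisier answers; in model training the same idea appears as DP-SGD, where per-example gradients are clipped and noised rather than a final statistic.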