{"title":"通过水印学习跟踪数据","authors":"Alexandre Sablayrolles","doi":"10.1145/3437880.3458442","DOIUrl":null,"url":null,"abstract":"How can we gauge the privacy provided by machine learning algorithms? Models trained with differential privacy (DP) provably limit information leakage, but the question remains open for non-DP models. In this talk, we present multiple techniques for membership inference, which estimates if a given data sample is in the training set of a model. In particular, we introduce a watermarking-based method that allows for a very fast verification of data usage in a model: this technique creates marks called radioactive that propagates from the data to the model during training. This watermark is barely visible to the naked eye and allows data tracing even when the radioactive data represents only 1% of the training set.","PeriodicalId":120300,"journal":{"name":"Proceedings of the 2021 ACM Workshop on Information Hiding and Multimedia Security","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Tracing Data through Learning with Watermarking\",\"authors\":\"Alexandre Sablayrolles\",\"doi\":\"10.1145/3437880.3458442\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"How can we gauge the privacy provided by machine learning algorithms? Models trained with differential privacy (DP) provably limit information leakage, but the question remains open for non-DP models. In this talk, we present multiple techniques for membership inference, which estimates if a given data sample is in the training set of a model. In particular, we introduce a watermarking-based method that allows for a very fast verification of data usage in a model: this technique creates marks called radioactive that propagates from the data to the model during training. This watermark is barely visible to the naked eye and allows data tracing even when the radioactive data represents only 1% of the training set.\",\"PeriodicalId\":120300,\"journal\":{\"name\":\"Proceedings of the 2021 ACM Workshop on Information Hiding and Multimedia Security\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-06-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2021 ACM Workshop on Information Hiding and Multimedia Security\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3437880.3458442\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2021 ACM Workshop on Information Hiding and Multimedia Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3437880.3458442","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
How can we gauge the privacy provided by machine learning algorithms? Models trained with differential privacy (DP) provably limit information leakage, but the question remains open for non-DP models. In this talk, we present multiple techniques for membership inference, which estimate whether a given data sample belongs to the training set of a model. In particular, we introduce a watermarking-based method that allows very fast verification of data usage in a model: the technique creates so-called radioactive marks that propagate from the data to the model during training. The watermark is barely visible to the naked eye and enables data tracing even when the radioactive data accounts for only 1% of the training set.
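To make the radioactive-marking idea concrete, the sketch below illustrates one way such a mark could be planted and later detected: the mark is taken to be a fixed random direction u in the model's feature space, and data usage is verified by measuring how strongly a trained classifier's weight vector aligns with u. The function names, the feature dimension, and the decision rule are illustrative assumptions for this sketch, not the exact procedure from the talk.

import numpy as np

rng = np.random.default_rng(0)
feature_dim = 512  # assumed feature dimension of the backbone

# Secret carrier direction, known only to the data owner.
u = rng.standard_normal(feature_dim)
u /= np.linalg.norm(u)

def mark_features(features: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Shift the features of protected images along the carrier u.
    (In the actual radioactive-data method, this shift is pushed back
    into pixel space so the images themselves carry the mark.)"""
    return features + strength * u

def alignment(classifier_weights: np.ndarray) -> float:
    """Cosine similarity between a class weight vector and the carrier u."""
    w = classifier_weights / np.linalg.norm(classifier_weights)
    return float(w @ u)

# For a model that never saw marked data, the cosine with a random unit
# vector concentrates around 0 (roughly within +/- a few / sqrt(feature_dim)).
# A markedly larger alignment is statistical evidence that the marked data
# was used during training.
w_unrelated = rng.standard_normal(feature_dim)
print("alignment with an unrelated weight vector:", alignment(w_unrelated))

The appeal of this kind of test is speed: verification reduces to a single dot product per class plus a significance threshold on the resulting cosine, rather than retraining or querying the model on many samples.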