How Private is Machine Learning?
Nicolas Carlini
DOI: 10.1145/3437880.3458440
Proceedings of the 2021 ACM Workshop on Information Hiding and Multimedia Security
Published: 2021-06-17

Abstract:
A machine learning model is private if it doesn't reveal (too much) about its training data. This three-part talk examines to what extent current networks are private. Standard models are not private. We develop an attack that extracts rare training examples (for example, individual people's names, phone numbers, or addresses) out of GPT-2, a language model trained on gigabytes of text from the Internet [2]. As a result, there is a clear need for training models with privacy-preserving techniques. We show that InstaHide, a recent candidate, is not private. We develop a complete break of this scheme and can again recover verbatim inputs [1]. Fortunately, there exists provably-correct "differentially private" training that guarantees no adversary could ever succeed at the above attacks. We develop techniques that allow us to empirically evaluate the privacy offered by such schemes, and find they may be more private than can be proven formally [3].
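The extraction attack in [2] rests on a simple observation: memorized training strings are assigned abnormally high likelihood by the model, so generated samples can be ranked by perplexity and the lowest-perplexity candidates inspected for leaked data. A minimal sketch of that ranking step, where `token_logprob` is a hypothetical stand-in for the model's per-token log-probability function:

```python
import math

def rank_by_perplexity(samples, token_logprob):
    """Rank generated samples by perplexity (ascending).

    Memorized training strings tend to receive unusually low
    perplexity, so the first entries of the returned list are the
    best candidates for extracted training data. `token_logprob`
    maps a token to its model log-probability (an assumed interface,
    not part of any specific library).
    """
    def perplexity(tokens):
        # Perplexity = exp(average negative log-likelihood per token).
        avg_nll = -sum(token_logprob(t) for t in tokens) / len(tokens)
        return math.exp(avg_nll)

    return sorted(samples, key=perplexity)
```

In the real attack the samples come from unconditional generation with varied decoding strategies, and the ranking uses calibrated metrics (e.g. the ratio of the target model's perplexity to a reference model's) rather than raw perplexity alone.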
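The differentially private training the abstract refers to is typically DP-SGD: each example's gradient is clipped to a fixed L2 norm so no single record can dominate an update, and Gaussian noise calibrated to that clipping bound is added to the averaged gradient. A minimal NumPy sketch of one such step (parameter names like `noise_multiplier` follow common DP-SGD conventions, not any specific library):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One step of differentially private SGD (sketch).

    Clips each per-example gradient to `clip_norm`, averages the
    clipped gradients, adds Gaussian noise scaled to `clip_norm`,
    and applies a plain gradient-descent update.
    """
    rng = np.random.default_rng(rng)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm;
        # gradients already within the bound are left unchanged.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise std is proportional to clip_norm, so the influence any one
    # example can have on the update is masked by the noise.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

The privacy guarantee comes from accounting over many such steps (e.g. via the moments accountant); the empirical auditing in [3] probes how loose that formal accounting is in practice.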