{"title":"最优多源推断隐私——一种广义Lloyd-Max算法","authors":"Ruochi Zhang, P. Venkitasubramaniam","doi":"10.1109/ALLERTON.2018.8635890","DOIUrl":null,"url":null,"abstract":"Information sanitization to protect an underlying label from being inferred through multiple data sources is investigated in this work. The problem is posed as an optimal mapping from a set of underlying distributions that reveal classes/labels for the data to a target distribution with minimum distortion. The optimal sanitization operation are transformed to convex optimization problems corresponding to the domain of the source and target distributions. In particular, when the target distribution is discrete, a parallel is drawn to a “biased” quantization method and an efficient sub-gradient method is proposed to derive the optimal transformation. The method is extended to a scenario where multiple source continuous distributions are to be mapped to an unknown target discrete distribution. A generalized version of the classical Lloyd Max iterative algorithm is proposed to derive the optimal biased quantizers that achieve perfect inference privacy. A real time system is investigated where the sanitizer does not have apriori information about the source distribution save for the class of possible source distributions. In the real time framework, an algorithm is proposed that achieves asymptotically the same distortion as if the source distribution were known apriori.","PeriodicalId":299280,"journal":{"name":"2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Optimal Multi-Source Inference Privacy — A Generalized Lloyd-Max Algorithm\",\"authors\":\"Ruochi Zhang, P. Venkitasubramaniam\",\"doi\":\"10.1109/ALLERTON.2018.8635890\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Information sanitization to protect an underlying label from being inferred through multiple data sources is investigated in this work. The problem is posed as an optimal mapping from a set of underlying distributions that reveal classes/labels for the data to a target distribution with minimum distortion. The optimal sanitization operation are transformed to convex optimization problems corresponding to the domain of the source and target distributions. In particular, when the target distribution is discrete, a parallel is drawn to a “biased” quantization method and an efficient sub-gradient method is proposed to derive the optimal transformation. The method is extended to a scenario where multiple source continuous distributions are to be mapped to an unknown target discrete distribution. A generalized version of the classical Lloyd Max iterative algorithm is proposed to derive the optimal biased quantizers that achieve perfect inference privacy. A real time system is investigated where the sanitizer does not have apriori information about the source distribution save for the class of possible source distributions. 
In the real time framework, an algorithm is proposed that achieves asymptotically the same distortion as if the source distribution were known apriori.\",\"PeriodicalId\":299280,\"journal\":{\"name\":\"2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton)\",\"volume\":\"74 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ALLERTON.2018.8635890\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ALLERTON.2018.8635890","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Optimal Multi-Source Inference Privacy — A Generalized Lloyd-Max Algorithm
Information sanitization to protect an underlying label from being inferred through multiple data sources is investigated in this work. The problem is posed as an optimal mapping from a set of underlying distributions that reveal classes/labels for the data to a target distribution with minimum distortion. The optimal sanitization operations are transformed into convex optimization problems corresponding to the domains of the source and target distributions. In particular, when the target distribution is discrete, a parallel is drawn to a “biased” quantization method, and an efficient subgradient method is proposed to derive the optimal transformation. The method is extended to a scenario where multiple continuous source distributions are to be mapped to an unknown discrete target distribution. A generalized version of the classical Lloyd-Max iterative algorithm is proposed to derive the optimal biased quantizers that achieve perfect inference privacy. A real-time system is also investigated in which the sanitizer has no a priori information about the source distribution beyond the class of possible source distributions. In this real-time framework, an algorithm is proposed that asymptotically achieves the same distortion as if the source distribution were known a priori.
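For orientation, the sketch below implements the classical (unbiased) Lloyd-Max iteration on samples from a single scalar source, i.e., the baseline that the paper's generalized algorithm extends with a bias toward a prescribed discrete target distribution for inference privacy. It is a minimal sketch under those assumptions: the function and parameter names (lloyd_max, n_levels, tol) are illustrative rather than the authors' notation, and the privacy constraint itself is not modeled here.

```python
# Minimal sketch of the classical Lloyd-Max iteration on samples from one
# scalar source, minimizing empirical mean squared distortion. The paper's
# generalized algorithm additionally biases the quantizer so its output
# matches a target distribution across multiple sources; that constraint is
# intentionally omitted in this illustrative baseline.
import numpy as np

def lloyd_max(samples, n_levels=4, n_iters=100, tol=1e-8, seed=None):
    """Alternate between nearest-level assignment and conditional-mean updates."""
    rng = np.random.default_rng(seed)
    # Initialize representation levels from distinct random samples.
    levels = np.sort(rng.choice(samples, size=n_levels, replace=False))
    prev_distortion = np.inf
    for _ in range(n_iters):
        # Decision boundaries are midpoints between adjacent levels.
        boundaries = (levels[:-1] + levels[1:]) / 2.0
        # Assign each sample to its quantization cell.
        cells = np.digitize(samples, boundaries)
        # Update each level to the conditional mean of its cell.
        for i in range(n_levels):
            members = samples[cells == i]
            if members.size:
                levels[i] = members.mean()
        distortion = np.mean((samples - levels[cells]) ** 2)
        if prev_distortion - distortion < tol:
            break
        prev_distortion = distortion
    return np.sort(levels), distortion

if __name__ == "__main__":
    x = np.random.default_rng(0).normal(size=10_000)
    levels, d = lloyd_max(x, n_levels=4, seed=1)
    print("levels:", np.round(levels, 3), "MSE distortion:", round(d, 4))
```

In the biased setting described in the abstract, the conditional-mean update would no longer be unconstrained; the quantizer must also induce the prescribed target distribution on its output, which is what the paper's generalized Lloyd-Max iteration and subgradient method address.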