Active Privacy-Utility Trade-Off Against Inference in Time-Series Data Sharing

Ecenaz Erdemir; Pier Luigi Dragotti; Deniz Gündüz
IEEE Journal on Selected Areas in Information Theory, vol. 4, pp. 159-173, published 2023-06-28.
DOI: 10.1109/JSAIT.2023.3287929
URL: https://ieeexplore.ieee.org/document/10167744/
Internet of Things (IoT) devices have become highly popular thanks to the services they offer. However, they also raise privacy concerns, since they share fine-grained time-series user data with untrusted third parties. We model the user's personal information as a secret variable, to be kept private from an honest-but-curious service provider, and a useful variable, to be disclosed for utility. We consider an active learning framework in which one of a finite set of measurement mechanisms is chosen at each time step, each revealing some information about the underlying secret and useful variables, albeit with different statistics. The measurements are taken so that the correct value of the useful variable can be detected quickly, while the confidence in the secret variable remains below a predefined level. As privacy measures, we consider both the probability of correctly detecting the value of the secret variable and the mutual information between the secret and the released data. We formulate both problems as partially observable Markov decision processes (POMDPs) and solve them numerically with advantage actor-critic (A2C) deep reinforcement learning. We evaluate the privacy-utility trade-off of the proposed policies on both synthetic and real-world time-series datasets.
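The setting the abstract describes can be illustrated with a minimal sketch: at each time step a policy picks one of a finite set of measurement mechanisms, an observation is drawn from that mechanism's likelihood, and a Bayesian belief over the joint (secret, useful) pair is updated; the adversary's confidence in the secret is the maximum of the secret marginal of that belief. This is not the authors' implementation (which trains an A2C policy on the POMDP); the sizes, the random likelihoods, and the placeholder random policy here are all hypothetical, chosen only to make the belief dynamics concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 2 secret values, 2 useful values,
# 3 measurement mechanisms, 4 observation symbols.
S, U, M, O = 2, 2, 3, 4

# Random observation likelihoods p(o | s, u, m), normalized over o.
lik = rng.random((M, S, U, O))
lik /= lik.sum(axis=-1, keepdims=True)

def update_belief(belief, mech, obs):
    """One Bayesian update of the joint belief over (secret, useful)."""
    post = belief * lik[mech, :, :, obs]   # p(s, u) * p(o | s, u, m), elementwise
    return post / post.sum()               # renormalize to a distribution

# Uniform prior over the joint (secret, useful) state.
belief = np.full((S, U), 1.0 / (S * U))

true_s, true_u = 1, 0                      # ground-truth state of this episode
for t in range(20):
    mech = int(rng.integers(M))            # placeholder policy: random mechanism
    obs = rng.choice(O, p=lik[mech, true_s, true_u])
    belief = update_belief(belief, mech, obs)

conf_secret = belief.sum(axis=1).max()     # adversary's confidence in the secret
conf_useful = belief.sum(axis=0).max()     # confidence in the useful variable
```

In the paper's formulation, the random mechanism choice above is replaced by a learned policy that selects mechanisms to drive `conf_useful` up quickly while keeping `conf_secret` (or the mutual information leaked about the secret) below a predefined level.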