{"title":"相关时间序列差分隐私保护方法的攻击模型","authors":"Xiong Wenjun, Xu Zhengquan, Hao Wang","doi":"10.14257/IJDTA.2017.10.1.09","DOIUrl":null,"url":null,"abstract":"Differential privacy has played a significant role in privacy preserving, and it has performed well in independent series. However, in real-world applications, most data are released in the form of correlated time series. Although a few differential privacy methods have focused on correlated time series, they are not designed by protecting against a specific attack model. Due to this drawback, the effectiveness of these methods cannot be verified and the privacy level of them cannot be measured. To address the problem, this paper presents an attack model based on the principle of filtering in signal processing theory. Since the distribution of the noise designed by current methods is independent and different from that of the original correlated series, a filter is designed as a unified attack model to sanitize the independent noise from the perturbed time series. Furthermore, the designed attack model can realize the function of measuring the effective privacy level of these methods and comparing the performance of them. Experimental results show that the attack model leads to degradation in privacy levels and can work as a unified measurement.","PeriodicalId":13926,"journal":{"name":"International journal of database theory and application","volume":"100 1","pages":"89-104"},"PeriodicalIF":0.0000,"publicationDate":"2017-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"An Attack Model on Differential Privacy Preserving Methods for Correlated Time Series\",\"authors\":\"Xiong Wenjun, Xu Zhengquan, Hao Wang\",\"doi\":\"10.14257/IJDTA.2017.10.1.09\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Differential privacy has played a significant role in privacy preserving, and it has performed well in independent series. However, in real-world applications, most data are released in the form of correlated time series. Although a few differential privacy methods have focused on correlated time series, they are not designed by protecting against a specific attack model. Due to this drawback, the effectiveness of these methods cannot be verified and the privacy level of them cannot be measured. To address the problem, this paper presents an attack model based on the principle of filtering in signal processing theory. Since the distribution of the noise designed by current methods is independent and different from that of the original correlated series, a filter is designed as a unified attack model to sanitize the independent noise from the perturbed time series. Furthermore, the designed attack model can realize the function of measuring the effective privacy level of these methods and comparing the performance of them. 
Experimental results show that the attack model leads to degradation in privacy levels and can work as a unified measurement.\",\"PeriodicalId\":13926,\"journal\":{\"name\":\"International journal of database theory and application\",\"volume\":\"100 1\",\"pages\":\"89-104\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-01-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International journal of database theory and application\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.14257/IJDTA.2017.10.1.09\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International journal of database theory and application","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14257/IJDTA.2017.10.1.09","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Differential privacy has played a significant role in privacy preservation, and it performs well on independent series. In real-world applications, however, most data are released in the form of correlated time series. Although a few differential privacy methods address correlated time series, they are not designed to protect against a specific attack model. Because of this drawback, the effectiveness of these methods cannot be verified and their privacy level cannot be measured. To address this problem, this paper presents an attack model based on the filtering principle from signal processing theory. Since the noise produced by current methods is independent and distributed differently from the original correlated series, a filter is designed as a unified attack model that removes the independent noise from the perturbed time series. Furthermore, the designed attack model can measure the effective privacy level of these methods and compare their performance. Experimental results show that the attack model degrades the achieved privacy level and can serve as a unified measurement.
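To make the filtering idea concrete, the sketch below illustrates how i.i.d. Laplace noise added to a correlated series can be partially removed with a frequency-domain Wiener filter. This is an illustrative reconstruction, not the paper's exact attack: the AR(1) signal model, the parameters phi and epsilon, and the assumption that the attacker knows the signal and noise spectra are all assumptions introduced here.

```python
# A minimal sketch of the filtering-attack idea described in the abstract.
# Assumptions (not from the paper): the correlated series is AR(1), the DP
# mechanism adds i.i.d. Laplace noise, and the attacker applies a Wiener
# filter built from the (assumed known) signal and noise power spectra.
import numpy as np

rng = np.random.default_rng(0)

# --- Original correlated series: AR(1) with coefficient phi ---
n, phi = 4096, 0.95
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# --- Differentially private release: add i.i.d. Laplace noise ---
sensitivity, epsilon = 1.0, 0.5
b = sensitivity / epsilon                 # Laplace scale
y = x + rng.laplace(scale=b, size=n)      # perturbed (published) series

# --- Attack: Wiener filter in the frequency domain ---
# AR(1) power spectrum (unit innovation variance):
#   S_x(w) = 1 / |1 - phi * e^{-iw}|^2
# i.i.d. Laplace noise has a flat spectrum with power 2 * b^2.
w = 2 * np.pi * np.fft.rfftfreq(n)
S_x = 1.0 / np.abs(1 - phi * np.exp(-1j * w)) ** 2
S_n = 2 * b**2 * np.ones_like(S_x)
H = S_x / (S_x + S_n)                     # Wiener gain per frequency

x_hat = np.fft.irfft(H * np.fft.rfft(y), n)

# --- Effective privacy: how much noise survives the filter? ---
mse_before = np.mean((y - x) ** 2)        # noise power in the raw release
mse_after = np.mean((x_hat - x) ** 2)     # residual error after the attack
print(f"MSE before filtering: {mse_before:.3f}")
print(f"MSE after  filtering: {mse_after:.3f}")
```

The gap between the two MSE values is one way to quantify the "effective privacy level" the abstract refers to: the larger the drop after filtering, the weaker the protection the independent noise actually provides against an adversary who exploits the series' correlation.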