Xiangyu Liu, Manish Singh, Xiufeng Xia
2019 20th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT), December 2019
DOI: 10.1109/PDCAT46702.2019.00051
Protecting Sensitive Location Visits Against Inference Attacks in Trajectory Publishing
With the increasing popularity of Location-Based Services (LBSs), people's trajectories are continuously recorded and collected. Trajectory data are often shared or published to improve user experience, for example through personalized recommendations and activity mining. However, releasing trajectory data leaves users' sensitive location visits vulnerable to inference attacks. In this paper, we study the problem of protecting sensitive location visits in the publication of trajectory data, assuming an adversary can mount inference attacks using association rules derived from the data. We propose a methodology for anonymizing trajectories that employs both generalization and suppression to sanitize the trajectory data and protect sensitive location visits against inference attacks. We design a number of techniques to make our trajectory-anonymizing algorithm efficient while maintaining data utility. We have conducted an empirical study showing that our algorithms efficiently prevent inference attacks on real datasets while preserving the accuracy of aggregate queries on the published data.
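The paper's method combines generalization and suppression; the details of its algorithm are not given in this abstract. As a rough illustration of the suppression side only, the sketch below (function names, data layout, and thresholds are our own assumptions, not the paper's) mines single-antecedent association rules whose consequent is a sensitive location, then suppresses the antecedent of the strongest rule until no rule's confidence exceeds the adversary's threshold:

```python
from collections import defaultdict

def mine_rules(trajectories, sensitive, min_conf):
    """Find rules {loc} -> s (s sensitive) with
    confidence = support(loc, s) / support(loc) >= min_conf."""
    loc_count = defaultdict(int)    # trajectories visiting loc
    pair_count = defaultdict(int)   # trajectories visiting both loc and s
    for traj in trajectories:
        visited = set(traj)
        for loc in visited:
            loc_count[loc] += 1
            for s in visited & sensitive:
                if loc != s:
                    pair_count[(loc, s)] += 1
    return {(loc, s): c / loc_count[loc]
            for (loc, s), c in pair_count.items()
            if c / loc_count[loc] >= min_conf}

def sanitize(trajectories, sensitive, min_conf):
    """Repeatedly suppress the antecedent location of the strongest
    remaining rule; terminates because each pass removes one distinct
    location from every trajectory."""
    data = [list(t) for t in trajectories]
    while True:
        rules = mine_rules(data, sensitive, min_conf)
        if not rules:
            return data
        (loc, _), _ = max(rules.items(), key=lambda kv: kv[1])
        data = [[p for p in t if p != loc] for t in data]
```

For example, with trajectories `[["cafe", "clinic"], ["cafe", "clinic"], ["cafe", "park"], ["mall", "park"]]` and `clinic` sensitive, the rule `cafe -> clinic` has confidence 2/3, so at a 0.6 threshold `cafe` is suppressed; the sensitive visits themselves remain, but the quasi-identifier that predicts them is gone. A real system, as the abstract notes, would also use generalization (coarsening locations rather than deleting them) to retain more utility.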