arXiv - CS - Cryptography and Security: Latest Publications

A Response to: A Note on "Privacy Preserving n-Party Scalar Product Protocol"
arXiv - CS - Cryptography and Security Pub Date: 2024-09-16 DOI: arxiv-2409.10057
Florian van Daalen, Lianne Ippel, Andre Dekker, Inigo Bermejo
{"title":"A Response to: A Note on \"Privacy Preserving n-Party Scalar Product Protocol\"","authors":"Florian van Daalen, Lianne Ippel, Andre Dekker, Inigo Bermejo","doi":"arxiv-2409.10057","DOIUrl":"https://doi.org/arxiv-2409.10057","url":null,"abstract":"We reply to the comments on our proposed privacy preserving n-party scalar\u0000product protocol made by Liu. In their comment Liu raised concerns regarding\u0000the security and scalability of the $n$-party scalar product protocol. In this\u0000reply, we show that their concerns are unfounded and that the $n$-party scalar\u0000product protocol is safe for its intended purposes. Their concerns regarding\u0000the security are based on a misunderstanding of the protocol. Additionally,\u0000while the scalability of the protocol puts limitations on its use, the protocol\u0000still has numerous practical applications when applied in the correct\u0000scenarios. Specifically within vertically partitioned scenarios, which often\u0000involve few parties, the protocol remains practical. In this reply we clarify\u0000Liu's misunderstanding. Additionally, we explain why the protocols scaling is\u0000not a practical problem in its intended application.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Benchmarking Secure Sampling Protocols for Differential Privacy
arXiv - CS - Cryptography and Security Pub Date: 2024-09-16 DOI: arxiv-2409.10667
Yucheng Fu, Tianhao Wang
{"title":"Benchmarking Secure Sampling Protocols for Differential Privacy","authors":"Yucheng Fu, Tianhao Wang","doi":"arxiv-2409.10667","DOIUrl":"https://doi.org/arxiv-2409.10667","url":null,"abstract":"Differential privacy (DP) is widely employed to provide privacy protection\u0000for individuals by limiting information leakage from the aggregated data. Two\u0000well-known models of DP are the central model and the local model. The former\u0000requires a trustworthy server for data aggregation, while the latter requires\u0000individuals to add noise, significantly decreasing the utility of aggregated\u0000results. Recently, many studies have proposed to achieve DP with Secure\u0000Multi-party Computation (MPC) in distributed settings, namely, the distributed\u0000model, which has utility comparable to central model while, under specific\u0000security assumptions, preventing parties from obtaining others' information.\u0000One challenge of realizing DP in distributed model is efficiently sampling\u0000noise with MPC. Although many secure sampling methods have been proposed, they\u0000have different security assumptions and isolated theoretical analyses. There is\u0000a lack of experimental evaluations to measure and compare their performances.\u0000We fill this gap by benchmarking existing sampling protocols in MPC and\u0000performing comprehensive measurements of their efficiency. First, we present a\u0000taxonomy of the underlying techniques of these sampling protocols. Second, we\u0000extend widely used distributed noise generation protocols to be resilient\u0000against Byzantine attackers. Third, we implement discrete sampling protocols\u0000and align their security settings for a fair comparison. We then conduct an\u0000extensive evaluation to study their efficiency and utility.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PersonaMark: Personalized LLM watermarking for model protection and user attribution
arXiv - CS - Cryptography and Security Pub Date: 2024-09-15 DOI: arxiv-2409.09739
Yuehan Zhang, Peizhuo Lv, Yinpeng Liu, Yongqiang Ma, Wei Lu, Xiaofeng Wang, Xiaozhong Liu, Jiawei Liu
{"title":"PersonaMark: Personalized LLM watermarking for model protection and user attribution","authors":"Yuehan Zhang, Peizhuo Lv, Yinpeng Liu, Yongqiang Ma, Wei Lu, Xiaofeng Wang, Xiaozhong Liu, Jiawei Liu","doi":"arxiv-2409.09739","DOIUrl":"https://doi.org/arxiv-2409.09739","url":null,"abstract":"The rapid development of LLMs brings both convenience and potential threats.\u0000As costumed and private LLMs are widely applied, model copyright protection has\u0000become important. Text watermarking is emerging as a promising solution to\u0000AI-generated text detection and model protection issues. However, current text\u0000watermarks have largely ignored the critical need for injecting different\u0000watermarks for different users, which could help attribute the watermark to a\u0000specific individual. In this paper, we explore the personalized text\u0000watermarking scheme for LLM copyright protection and other scenarios, ensuring\u0000accountability and traceability in content generation. Specifically, we propose\u0000a novel text watermarking method PersonaMark that utilizes sentence structure\u0000as the hidden medium for the watermark information and optimizes the\u0000sentence-level generation algorithm to minimize disruption to the model's\u0000natural generation process. By employing a personalized hashing function to\u0000inject unique watermark signals for different users, personalized watermarked\u0000text can be obtained. Since our approach performs on sentence level instead of\u0000token probability, the text quality is highly preserved. The injection process\u0000of unique watermark signals for different users is time-efficient for a large\u0000number of users with the designed multi-user hashing function. As far as we\u0000know, we achieved personalized text watermarking for the first time through\u0000this. We conduct an extensive evaluation of four different LLMs in terms of\u0000perplexity, sentiment polarity, alignment, readability, etc. The results\u0000demonstrate that our method maintains performance with minimal perturbation to\u0000the model's behavior, allows for unbiased insertion of watermark information,\u0000and exhibits strong watermark recognition capabilities.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
GLEAN: Generative Learning for Eliminating Adversarial Noise
arXiv - CS - Cryptography and Security Pub Date: 2024-09-15 DOI: arxiv-2409.10578
Justin Lyu Kim, Kyoungwan Woo
{"title":"GLEAN: Generative Learning for Eliminating Adversarial Noise","authors":"Justin Lyu Kim, Kyoungwan Woo","doi":"arxiv-2409.10578","DOIUrl":"https://doi.org/arxiv-2409.10578","url":null,"abstract":"In the age of powerful diffusion models such as DALL-E and Stable Diffusion,\u0000many in the digital art community have suffered style mimicry attacks due to\u0000fine-tuning these models on their works. The ability to mimic an artist's style\u0000via text-to-image diffusion models raises serious ethical issues, especially\u0000without explicit consent. Glaze, a tool that applies various ranges of\u0000perturbations to digital art, has shown significant success in preventing style\u0000mimicry attacks, at the cost of artifacts ranging from imperceptible noise to\u0000severe quality degradation. The release of Glaze has sparked further\u0000discussions regarding the effectiveness of similar protection methods. In this\u0000paper, we propose GLEAN- applying I2I generative networks to strip\u0000perturbations from Glazed images, evaluating the performance of style mimicry\u0000attacks before and after GLEAN on the results of Glaze. GLEAN aims to support\u0000and enhance Glaze by highlighting its limitations and encouraging further\u0000development.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Revisiting Physical-World Adversarial Attack on Traffic Sign Recognition: A Commercial Systems Perspective
arXiv - CS - Cryptography and Security Pub Date: 2024-09-15 DOI: arxiv-2409.09860
Ningfei Wang, Shaoyuan Xie, Takami Sato, Yunpeng Luo, Kaidi Xu, Qi Alfred Chen
{"title":"Revisiting Physical-World Adversarial Attack on Traffic Sign Recognition: A Commercial Systems Perspective","authors":"Ningfei Wang, Shaoyuan Xie, Takami Sato, Yunpeng Luo, Kaidi Xu, Qi Alfred Chen","doi":"arxiv-2409.09860","DOIUrl":"https://doi.org/arxiv-2409.09860","url":null,"abstract":"Traffic Sign Recognition (TSR) is crucial for safe and correct driving\u0000automation. Recent works revealed a general vulnerability of TSR models to\u0000physical-world adversarial attacks, which can be low-cost, highly deployable,\u0000and capable of causing severe attack effects such as hiding a critical traffic\u0000sign or spoofing a fake one. However, so far existing works generally only\u0000considered evaluating the attack effects on academic TSR models, leaving the\u0000impacts of such attacks on real-world commercial TSR systems largely unclear.\u0000In this paper, we conduct the first large-scale measurement of physical-world\u0000adversarial attacks against commercial TSR systems. Our testing results reveal\u0000that it is possible for existing attack works from academia to have highly\u0000reliable (100%) attack success against certain commercial TSR system\u0000functionality, but such attack capabilities are not generalizable, leading to\u0000much lower-than-expected attack success rates overall. We find that one\u0000potential major factor is a spatial memorization design that commonly exists in\u0000today's commercial TSR systems. We design new attack success metrics that can\u0000mathematically model the impacts of such design on the TSR system-level attack\u0000success, and use them to revisit existing attacks. Through these efforts, we\u0000uncover 7 novel observations, some of which directly challenge the observations\u0000or claims in prior works due to the introduction of the new metrics.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Taming the Ransomware Threats: Leveraging Prospect Theory for Rational Payment Decisions
arXiv - CS - Cryptography and Security Pub Date: 2024-09-15 DOI: arxiv-2409.09744
Pranjal Sharma
{"title":"Taming the Ransomware Threats: Leveraging Prospect Theory for Rational Payment Decisions","authors":"Pranjal Sharma","doi":"arxiv-2409.09744","DOIUrl":"https://doi.org/arxiv-2409.09744","url":null,"abstract":"Day by day, the frequency of ransomware attacks on organizations is\u0000experiencing a significant surge. High-profile incidents involving major\u0000entities like Las Vegas giants MGM Resorts, Caesar Entertainment, and Boeing\u0000underscore the profound impact, posing substantial business barriers. When a\u0000sudden cyberattack occurs, organizations often find themselves at a loss, with\u0000a looming countdown to pay the ransom, leading to a cascade of impromptu and\u0000unfavourable decisions. This paper adopts a novel approach, leveraging Prospect\u0000Theory, to elucidate the tactics employed by cyber attackers to entice\u0000organizations into paying the ransom. Furthermore, it introduces an algorithm\u0000based on Prospect Theory and an Attack Recovery Plan, enabling organizations to\u0000make informed decisions on whether to consent to the ransom demands or resist.\u0000This algorithm Ransomware Risk Analysis and Decision Support (RADS) uses\u0000Prospect Theory to re-instantiate the shifted reference manipulated as\u0000perceived gains by attackers and adjusts for the framing effect created due to\u0000time urgency. Additionally, leveraging application criticality and\u0000incorporating Prospect Theory's insights into under/over weighing\u0000probabilities, RADS facilitates informed decision-making that transcends the\u0000simplistic framework of \"consent\" or \"resistance,\" enabling organizations to\u0000achieve optimal decisions.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Nebula: Efficient, Private and Accurate Histogram Estimation
arXiv - CS - Cryptography and Security Pub Date: 2024-09-15 DOI: arxiv-2409.09676
Ali Shahin Shamsabadi, Peter Snyder, Ralph Giles, Aurélien Bellet, Hamed Haddadi
{"title":"Nebula: Efficient, Private and Accurate Histogram Estimation","authors":"Ali Shahin Shamsabadi, Peter Snyder, Ralph Giles, Aurélien Bellet, Hamed Haddadi","doi":"arxiv-2409.09676","DOIUrl":"https://doi.org/arxiv-2409.09676","url":null,"abstract":"We present Nebula, a system for differential private histogram estimation of\u0000data distributed among clients. Nebula enables clients to locally subsample and\u0000encode their data such that an untrusted server learns only data values that\u0000meet an aggregation threshold to satisfy differential privacy guarantees.\u0000Compared with other private histogram estimation systems, Nebula uniquely\u0000achieves all of the following: textit{i)} a strict upper bound on privacy\u0000leakage; textit{ii)} client privacy under realistic trust assumptions;\u0000textit{iii)} significantly better utility compared to standard local\u0000differential privacy systems; and textit{iv)} avoiding trusted third-parties,\u0000multi-party computation, or trusted hardware. We provide both a formal\u0000evaluation of Nebula's privacy, utility and efficiency guarantees, along with\u0000an empirical evaluation on three real-world datasets. We demonstrate that\u0000clients can encode and upload their data efficiently (only 0.0058 seconds\u0000running time and 0.0027 MB data communication) and privately (strong\u0000differential privacy guarantees $varepsilon=1$). On the United States Census\u0000dataset, the Nebula's untrusted aggregation server estimates histograms with\u0000above 88% better utility than the existing local deployment of differential\u0000privacy. Additionally, we describe a variant that allows clients to submit\u0000multi-dimensional data, with similar privacy, utility, and performance.\u0000Finally, we provide an open source implementation of Nebula.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hacking, The Lazy Way: LLM Augmented Pentesting
arXiv - CS - Cryptography and Security Pub Date: 2024-09-14 DOI: arxiv-2409.09493
Dhruva Goyal, Sitaraman Subramanian, Aditya Peela
{"title":"Hacking, The Lazy Way: LLM Augmented Pentesting","authors":"Dhruva Goyal, Sitaraman Subramanian, Aditya Peela","doi":"arxiv-2409.09493","DOIUrl":"https://doi.org/arxiv-2409.09493","url":null,"abstract":"Security researchers are continually challenged by the need to stay current\u0000with rapidly evolving cybersecurity research, tools, and techniques. This\u0000constant cycle of learning, unlearning, and relearning, combined with the\u0000repetitive tasks of sifting through documentation and analyzing data, often\u0000hinders productivity and innovation. This has led to a disparity where only\u0000organizations with substantial resources can access top-tier security experts,\u0000while others rely on firms with less skilled researchers who focus primarily on\u0000compliance rather than actual security. We introduce \"LLM Augmented Pentesting,\" demonstrated through a tool named\u0000\"Pentest Copilot,\" to address this gap. This approach integrates Large Language\u0000Models into penetration testing workflows. Our research includes a \"chain of\u0000thought\" mechanism to streamline token usage and boost performance, as well as\u0000unique Retrieval Augmented Generation implementation to minimize hallucinations\u0000and keep models aligned with the latest techniques. Additionally, we propose a\u0000novel file analysis approach, enabling LLMs to understand files. Furthermore,\u0000we highlight a unique infrastructure system that supports if implemented, can\u0000support in-browser assisted penetration testing, offering a robust platform for\u0000cybersecurity professionals, These advancements mark a significant step toward\u0000bridging the gap between automated tools and human expertise, offering a\u0000powerful solution to the challenges faced by modern cybersecurity teams.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Real-world Adversarial Defense against Patch Attacks based on Diffusion Model
arXiv - CS - Cryptography and Security Pub Date: 2024-09-14 DOI: arxiv-2409.09406
Xingxing Wei, Caixin Kang, Yinpeng Dong, Zhengyi Wang, Shouwei Ruan, Yubo Chen, Hang Su
{"title":"Real-world Adversarial Defense against Patch Attacks based on Diffusion Model","authors":"Xingxing Wei, Caixin Kang, Yinpeng Dong, Zhengyi Wang, Shouwei Ruan, Yubo Chen, Hang Su","doi":"arxiv-2409.09406","DOIUrl":"https://doi.org/arxiv-2409.09406","url":null,"abstract":"Adversarial patches present significant challenges to the robustness of deep\u0000learning models, making the development of effective defenses become critical\u0000for real-world applications. This paper introduces DIFFender, a novel\u0000DIFfusion-based DeFender framework that leverages the power of a text-guided\u0000diffusion model to counter adversarial patch attacks. At the core of our\u0000approach is the discovery of the Adversarial Anomaly Perception (AAP)\u0000phenomenon, which enables the diffusion model to accurately detect and locate\u0000adversarial patches by analyzing distributional anomalies. DIFFender seamlessly\u0000integrates the tasks of patch localization and restoration within a unified\u0000diffusion model framework, enhancing defense efficacy through their close\u0000interaction. Additionally, DIFFender employs an efficient few-shot\u0000prompt-tuning algorithm, facilitating the adaptation of the pre-trained\u0000diffusion model to defense tasks without the need for extensive retraining. Our\u0000comprehensive evaluation, covering image classification and face recognition\u0000tasks, as well as real-world scenarios, demonstrates DIFFender's robust\u0000performance against adversarial attacks. The framework's versatility and\u0000generalizability across various settings, classifiers, and attack methodologies\u0000mark a significant advancement in adversarial patch defense strategies. Except\u0000for the popular visible domain, we have identified another advantage of\u0000DIFFender: its capability to easily expand into the infrared domain.\u0000Consequently, we demonstrate the good flexibility of DIFFender, which can\u0000defend against both infrared and visible adversarial patch attacks\u0000alternatively using a universal defense framework.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Protecting Vehicle Location Privacy with Contextually-Driven Synthetic Location Generation
arXiv - CS - Cryptography and Security Pub Date: 2024-09-14 DOI: arxiv-2409.09495
Sourabh Yadav, Chenyang Yu, Xinpeng Xie, Yan Huang, Chenxi Qiu
{"title":"Protecting Vehicle Location Privacy with Contextually-Driven Synthetic Location Generation","authors":"Sourabh Yadav, Chenyang Yu, Xinpeng Xie, Yan Huang, Chenxi Qiu","doi":"arxiv-2409.09495","DOIUrl":"https://doi.org/arxiv-2409.09495","url":null,"abstract":"Geo-obfuscation is a Location Privacy Protection Mechanism used in\u0000location-based services that allows users to report obfuscated locations\u0000instead of exact ones. A formal privacy criterion, geoindistinguishability\u0000(Geo-Ind), requires real locations to be hard to distinguish from nearby\u0000locations (by attackers) based on their obfuscated representations. However,\u0000Geo-Ind often fails to consider context, such as road networks and vehicle\u0000traffic conditions, making it less effective in protecting the location privacy\u0000of vehicles, of which the mobility are heavily influenced by these factors. In this paper, we introduce VehiTrack, a new threat model to demonstrate the\u0000vulnerability of Geo-Ind in protecting vehicle location privacy from\u0000context-aware inference attacks. Our experiments demonstrate that VehiTrack can\u0000accurately determine exact vehicle locations from obfuscated data, reducing\u0000average inference errors by 61.20% with Laplacian noise and 47.35% with linear\u0000programming (LP) compared to traditional Bayesian attacks. By using contextual\u0000data like road networks and traffic flow, VehiTrack effectively eliminates a\u0000significant number of seemingly \"impossible\" locations during its search for\u0000the actual location of the vehicles. Based on these insights, we propose\u0000TransProtect, a new geo-obfuscation approach that limits obfuscation to\u0000realistic vehicle movement patterns, complicating attackers' ability to\u0000differentiate obfuscated from actual locations. Our results show that\u0000TransProtect increases VehiTrack's inference error by 57.75% with Laplacian\u0000noise and 27.21% with LP, significantly enhancing protection against these\u0000attacks.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0