Deceiving Learning-based Sketches to Cause Inaccurate Frequency Estimation

Xuyang Jing, Xiaojun Cheng, Zheng Yan, Xian Li
{"title":"欺骗基于学习的草图导致不准确的频率估计","authors":"Xuyang Jing, Xiaojun Cheng, Zheng Yan, Xian Li","doi":"10.1109/TrustCom56396.2022.00038","DOIUrl":null,"url":null,"abstract":"Learning-based sketches have been widely studied as an improvement of traditional sketches that achieves high efficiency in terms of both time and space. It uses a learning model to reveal and exploit underlying patterns of input data for helping traditional sketches obtain accurate frequency estimation with memory efficient. However, recent studies only focus on the performance improvement of learning-based sketches and pay little attention to security. The potential security problems can be easily exploited by an adversary to make learning-based sketches inaccurate. In this paper, we firstly explore the security issues of learning-based sketches with regard to estimation accuracy and memory overhead. Some adversarial scenarios of learning model and backup sketch are modeled according to the knowledge and capabilities of an adversary. Then, we propose four attacks to deceive learning-based sketch, namely counterfeit attack, targeted point attack, memory occupation attack, and blind increment attack. We conduct a series of experiments based on real-world datasets and verify that the proposed attacks highly degrade the performance of learning-based sketch even when the adversary knows nothing about it.","PeriodicalId":276379,"journal":{"name":"2022 IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom)","volume":"30 1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deceiving Learning-based Sketches to Cause Inaccurate Frequency Estimation\",\"authors\":\"Xuyang Jing, Xiaojun Cheng, Zheng Yan, Xian Li\",\"doi\":\"10.1109/TrustCom56396.2022.00038\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Learning-based sketches have been widely studied as an improvement of traditional sketches that achieves high efficiency in terms of both time and space. It uses a learning model to reveal and exploit underlying patterns of input data for helping traditional sketches obtain accurate frequency estimation with memory efficient. However, recent studies only focus on the performance improvement of learning-based sketches and pay little attention to security. The potential security problems can be easily exploited by an adversary to make learning-based sketches inaccurate. In this paper, we firstly explore the security issues of learning-based sketches with regard to estimation accuracy and memory overhead. Some adversarial scenarios of learning model and backup sketch are modeled according to the knowledge and capabilities of an adversary. Then, we propose four attacks to deceive learning-based sketch, namely counterfeit attack, targeted point attack, memory occupation attack, and blind increment attack. 
We conduct a series of experiments based on real-world datasets and verify that the proposed attacks highly degrade the performance of learning-based sketch even when the adversary knows nothing about it.\",\"PeriodicalId\":276379,\"journal\":{\"name\":\"2022 IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom)\",\"volume\":\"30 1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/TrustCom56396.2022.00038\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TrustCom56396.2022.00038","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Learning-based sketches have been widely studied as an improvement over traditional sketches, achieving high efficiency in both time and space. They use a learning model to reveal and exploit underlying patterns in the input data, helping the traditional sketch obtain accurate frequency estimates with low memory overhead. However, recent studies focus only on improving the performance of learning-based sketches and pay little attention to their security. The resulting security weaknesses can easily be exploited by an adversary to make learning-based sketches inaccurate. In this paper, we first explore the security issues of learning-based sketches with respect to estimation accuracy and memory overhead. We model several adversarial scenarios against the learning model and the backup sketch according to the knowledge and capabilities of the adversary. We then propose four attacks to deceive learning-based sketches, namely the counterfeit attack, the targeted point attack, the memory occupation attack, and the blind increment attack. We conduct a series of experiments on real-world datasets and verify that the proposed attacks severely degrade the performance of a learning-based sketch even when the adversary knows nothing about it.
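To make the setting concrete, the sketch below illustrates one common learned-sketch design in the spirit of learned Count-Min sketches: a learned oracle routes predicted heavy hitters to dedicated exact counters, while all other items fall through to a conventional Count-Min backup sketch. This is a minimal illustration only; the class names, the `heavy_oracle` callable, and the toy oracle in the usage example are assumptions for exposition, not the specific construction or attack surface studied in the paper.

```python
import hashlib
from collections import defaultdict


class CountMinSketch:
    """Classic Count-Min backup sketch: depth hash rows of width counters each."""

    def __init__(self, width, depth):
        self.width = width
        self.depth = depth
        self.counters = [[0] * width for _ in range(depth)]

    def _index(self, item, row):
        # Derive a per-row hash position for the item.
        digest = hashlib.md5(f"{row}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def update(self, item, count=1):
        for row in range(self.depth):
            self.counters[row][self._index(item, row)] += count

    def query(self, item):
        # Count-Min never underestimates; take the minimum over rows.
        return min(self.counters[row][self._index(item, row)]
                   for row in range(self.depth))


class LearnedSketch:
    """A learned oracle sends predicted heavy hitters to exact counters;
    everything else goes to the backup Count-Min sketch."""

    def __init__(self, heavy_oracle, width=1024, depth=4):
        self.heavy_oracle = heavy_oracle          # item -> bool (stands in for a learned model)
        self.heavy_counters = defaultdict(int)    # exact counts for predicted heavy items
        self.backup = CountMinSketch(width, depth)

    def update(self, item, count=1):
        if self.heavy_oracle(item):
            self.heavy_counters[item] += count
        else:
            self.backup.update(item, count)

    def query(self, item):
        if item in self.heavy_counters:
            return self.heavy_counters[item]
        return self.backup.query(item)


# Toy usage with a hypothetical oracle that flags short keys as heavy.
sketch = LearnedSketch(heavy_oracle=lambda x: len(str(x)) <= 2)
for item in ["a", "a", "a", "longtailkey", "longtailkey"]:
    sketch.update(item)
print(sketch.query("a"), sketch.query("longtailkey"))
```

In this kind of design, estimation accuracy hinges on the oracle's predictions and the backup sketch's counters, which is why an adversary who can bias either component can inflate or deflate the reported frequencies.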