Public Verification for Private Hash Matching

Sarah Scheffler, Anunay Kulshrestha, Jonathan R. Mayer
{"title":"Public Verification for Private Hash Matching","authors":"Sarah Scheffler, Anunay Kulshrestha, Jonathan R. Mayer","doi":"10.1109/SP46215.2023.10179349","DOIUrl":null,"url":null,"abstract":"End-to-end encryption (E2EE) prevents online services from accessing user content. This important security property is also an obstacle for content moderation methods that involve content analysis. The tension between E2EE and efforts to combat child sexual abuse material (CSAM) has become a global flashpoint in encryption policy, because the predominant method of detecting harmful content—server-side perceptual hash matching on plaintext images—is unavailable.Recent applied cryptography advances enable private hash matching (PHM), where a service can match user content against a set of known CSAM images without revealing the hash set to users or nonmatching content to the service. These designs, especially a 2021 proposal for identifying CSAM in Apple’s iCloud Photos service, have attracted widespread criticism for creating risks to security, privacy, and free expression.In this work, we aim to advance scholarship and dialogue about PHM by contributing new cryptographic methods for system verification by the general public. We begin with motivation, describing the rationale for PHM to detect CSAM and the serious societal and technical issues with its deployment. Verification could partially address shortcomings of PHM, and we systematize critiques into two areas for auditing: trust in the hash set and trust in the implementation. We explain how, while these two issues cannot be fully resolved by technology alone, there are possible cryptographic trust improvements.The central contributions of this paper are novel cryptographic protocols that enable three types of public verification for PHM systems: (1) certification that external groups approve the hash set, (2) proof that particular lawful content is not in the hash set, and (3) eventual notification to users of false positive matches. The protocols that we describe are practical, efficient, and compatible with existing PHM constructions.","PeriodicalId":439989,"journal":{"name":"2023 IEEE Symposium on Security and Privacy (SP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE Symposium on Security and Privacy (SP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SP46215.2023.10179349","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

End-to-end encryption (E2EE) prevents online services from accessing user content. This important security property is also an obstacle for content moderation methods that involve content analysis. The tension between E2EE and efforts to combat child sexual abuse material (CSAM) has become a global flashpoint in encryption policy, because the predominant method of detecting harmful content—server-side perceptual hash matching on plaintext images—is unavailable.

Recent applied cryptography advances enable private hash matching (PHM), where a service can match user content against a set of known CSAM images without revealing the hash set to users or nonmatching content to the service. These designs, especially a 2021 proposal for identifying CSAM in Apple's iCloud Photos service, have attracted widespread criticism for creating risks to security, privacy, and free expression.

In this work, we aim to advance scholarship and dialogue about PHM by contributing new cryptographic methods for system verification by the general public. We begin with motivation, describing the rationale for PHM to detect CSAM and the serious societal and technical issues with its deployment. Verification could partially address shortcomings of PHM, and we systematize critiques into two areas for auditing: trust in the hash set and trust in the implementation. We explain how, while these two issues cannot be fully resolved by technology alone, there are possible cryptographic trust improvements.

The central contributions of this paper are novel cryptographic protocols that enable three types of public verification for PHM systems: (1) certification that external groups approve the hash set, (2) proof that particular lawful content is not in the hash set, and (3) eventual notification to users of false positive matches. The protocols that we describe are practical, efficient, and compatible with existing PHM constructions.
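To give a concrete flavor of verification type (1), the sketch below shows one minimal way external groups could certify a hash set without disclosing it: the service publishes a Merkle-root commitment to the set, and each auditor signs that root. This is an illustrative assumption, not the paper's actual construction; the `merkle_root` helper and the three-auditor setup are hypothetical, and the example relies on the third-party Python `cryptography` package for Ed25519 signatures.

```python
# Toy certification sketch (assumed design, not the paper's protocol):
# external groups sign a Merkle-root commitment to the service's hash set.
# Requires the third-party `cryptography` package (pip install cryptography).
import hashlib

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def merkle_root(leaves: list[bytes]) -> bytes:
    """Binary Merkle root over SHA-256; leaves are sorted for determinism."""
    level = [hashlib.sha256(leaf).digest() for leaf in sorted(leaves)]
    while len(level) > 1:
        if len(level) % 2 == 1:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [
            hashlib.sha256(level[i] + level[i + 1]).digest()
            for i in range(0, len(level), 2)
        ]
    return level[0]


# The service commits to its (otherwise opaque) perceptual-hash set.
hash_set = [bytes.fromhex(h * 32) for h in ("aa", "bb", "cc")]
root = merkle_root(hash_set)

# Hypothetical external auditors: each reviews the set out of band,
# then signs the commitment.
auditors = [Ed25519PrivateKey.generate() for _ in range(3)]
certificates = [sk.sign(root) for sk in auditors]

# Anyone holding the auditors' public keys can check the certification
# without learning the hash set itself; verify() raises on a bad signature.
for sk, cert in zip(auditors, certificates):
    sk.public_key().verify(cert, root)
print(f"hash-set commitment certified by {len(certificates)} auditors")
```

A deployed scheme would also bind the signed root to a version or epoch to prevent rollback, and the same commitment structure can support per-element (non-)membership proofs in the spirit of verification type (2).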