Turning captchas against humanity: Captcha-based attacks in online social media

Q1 Social Sciences
Mauro Conti, Luca Pajola, Pier Paolo Tricomi
{"title":"反人性的验证码:在线社交媒体中基于验证码的攻击","authors":"Mauro Conti,&nbsp;Luca Pajola,&nbsp;Pier Paolo Tricomi","doi":"10.1016/j.osnem.2023.100252","DOIUrl":null,"url":null,"abstract":"<div><p>Nowadays, people generate and share massive amounts of content on online platforms (e.g., social networks, blogs). In 2021, the 1.9 billion daily active Facebook users posted around 150 thousand photos every minute. Content moderators constantly monitor these online platforms to prevent the spreading of inappropriate content (e.g., hate speech, nudity images). Based on deep learning (DL) advances, Automatic Content Moderators (ACM) help human moderators handle high data volume. Despite their advantages, attackers can exploit weaknesses of DL components (e.g., preprocessing, model) to affect their performance. Therefore, an attacker can leverage such techniques to spread inappropriate content by evading ACM.</p><p>In this work, we analyzed 4600 potentially toxic Instagram posts, and we discovered that 44% of them adopt obfuscations that might undermine ACM. As these posts are reminiscent of captchas (i.e., not understandable by automated mechanisms), we coin this threat as Captcha Attack (<span><math><mrow><mi>C</mi><mi>A</mi><mi>P</mi><mi>A</mi></mrow></math></span>). Our contributions start by proposing a <span><math><mrow><mi>C</mi><mi>A</mi><mi>P</mi><mi>A</mi></mrow></math></span> taxonomy to better understand how ACM is vulnerable to obfuscation attacks. We then focus on the broad sub-category of <span><math><mrow><mi>C</mi><mi>A</mi><mi>P</mi><mi>A</mi></mrow></math></span> using textual Captcha Challenges, namely <span>CC-CAPA</span>, and we empirically demonstrate that it evades real-world ACM (i.e., Amazon, Google, Microsoft) with 100% accuracy. Our investigation revealed that ACM failures are caused by the OCR text extraction phase. The training of OCRs to withstand such obfuscation is therefore crucial, but huge amounts of data are required. Thus, we investigate methods to identify <span>CC-CAPA</span> samples from large sets of data (originated by three OSN – Pinterest, Twitter, Yahoo-Flickr), and we empirically demonstrate that supervised techniques identify target styles of samples almost perfectly. Unsupervised solutions, on the other hand, represent a solid methodology for inspecting uncommon data to detect new obfuscation techniques.</p></div>","PeriodicalId":52228,"journal":{"name":"Online Social Networks and Media","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Turning captchas against humanity: Captcha-based attacks in online social media\",\"authors\":\"Mauro Conti,&nbsp;Luca Pajola,&nbsp;Pier Paolo Tricomi\",\"doi\":\"10.1016/j.osnem.2023.100252\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Nowadays, people generate and share massive amounts of content on online platforms (e.g., social networks, blogs). In 2021, the 1.9 billion daily active Facebook users posted around 150 thousand photos every minute. Content moderators constantly monitor these online platforms to prevent the spreading of inappropriate content (e.g., hate speech, nudity images). Based on deep learning (DL) advances, Automatic Content Moderators (ACM) help human moderators handle high data volume. Despite their advantages, attackers can exploit weaknesses of DL components (e.g., preprocessing, model) to affect their performance. 
Therefore, an attacker can leverage such techniques to spread inappropriate content by evading ACM.</p><p>In this work, we analyzed 4600 potentially toxic Instagram posts, and we discovered that 44% of them adopt obfuscations that might undermine ACM. As these posts are reminiscent of captchas (i.e., not understandable by automated mechanisms), we coin this threat as Captcha Attack (<span><math><mrow><mi>C</mi><mi>A</mi><mi>P</mi><mi>A</mi></mrow></math></span>). Our contributions start by proposing a <span><math><mrow><mi>C</mi><mi>A</mi><mi>P</mi><mi>A</mi></mrow></math></span> taxonomy to better understand how ACM is vulnerable to obfuscation attacks. We then focus on the broad sub-category of <span><math><mrow><mi>C</mi><mi>A</mi><mi>P</mi><mi>A</mi></mrow></math></span> using textual Captcha Challenges, namely <span>CC-CAPA</span>, and we empirically demonstrate that it evades real-world ACM (i.e., Amazon, Google, Microsoft) with 100% accuracy. Our investigation revealed that ACM failures are caused by the OCR text extraction phase. The training of OCRs to withstand such obfuscation is therefore crucial, but huge amounts of data are required. Thus, we investigate methods to identify <span>CC-CAPA</span> samples from large sets of data (originated by three OSN – Pinterest, Twitter, Yahoo-Flickr), and we empirically demonstrate that supervised techniques identify target styles of samples almost perfectly. Unsupervised solutions, on the other hand, represent a solid methodology for inspecting uncommon data to detect new obfuscation techniques.</p></div>\",\"PeriodicalId\":52228,\"journal\":{\"name\":\"Online Social Networks and Media\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Online Social Networks and Media\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2468696423000113\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Online Social Networks and Media","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2468696423000113","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Social Sciences","Score":null,"Total":0}
Citations: 1

Abstract

Nowadays, people generate and share massive amounts of content on online platforms (e.g., social networks, blogs). In 2021, Facebook's 1.9 billion daily active users posted around 150,000 photos every minute. Content moderators constantly monitor these platforms to prevent the spread of inappropriate content (e.g., hate speech, nude images). Building on advances in deep learning (DL), Automatic Content Moderators (ACM) help human moderators handle these high data volumes. Despite these advantages, attackers can exploit weaknesses in the DL components (e.g., preprocessing, the model) to degrade ACM performance. An attacker can therefore leverage such techniques to spread inappropriate content while evading ACM.
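To make the moderation pipeline concrete, here is a minimal sketch of an OCR-based ACM stage. It assumes the pytesseract library for text extraction; the keyword list, function names, and input file are illustrative placeholders, not the moderation logic of any real platform.

```python
# Minimal sketch of an OCR-based Automatic Content Moderation (ACM) stage.
# Assumptions: pillow and pytesseract are installed and the Tesseract
# binary is on PATH; the toxicity check is a toy keyword placeholder,
# not a real platform's classifier.
from PIL import Image
import pytesseract

TOXIC_KEYWORDS = {"hate", "kill"}  # illustrative placeholder list

def extract_text(image_path: str) -> str:
    """OCR stage: the phase the paper identifies as the failure point."""
    return pytesseract.image_to_string(Image.open(image_path))

def is_toxic(text: str) -> bool:
    """Toy stand-in for a DL toxicity classifier."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & TOXIC_KEYWORDS)

def moderate(image_path: str) -> str:
    text = extract_text(image_path)
    # If captcha-style obfuscation garbles the OCR output, toxic words
    # never reach the classifier and the post slips through.
    return "flag" if is_toxic(text) else "allow"

print(moderate("post.png"))  # "post.png" is a hypothetical input image
```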

In this work, we analyzed 4600 potentially toxic Instagram posts and discovered that 44% of them adopt obfuscations that might undermine ACM. As these posts are reminiscent of captchas (i.e., not understandable by automated mechanisms), we name this threat the Captcha Attack (CAPA). Our contributions start by proposing a CAPA taxonomy to better understand how ACM is vulnerable to obfuscation attacks. We then focus on the broad sub-category of CAPA that uses textual Captcha Challenges, namely CC-CAPA, and we empirically demonstrate that it evades real-world ACM (i.e., Amazon, Google, Microsoft) with 100% accuracy. Our investigation revealed that these ACM failures originate in the OCR text-extraction phase. Training OCRs to withstand such obfuscation is therefore crucial, but it requires huge amounts of data. Thus, we investigate methods to identify CC-CAPA samples in large datasets (originating from three OSNs - Pinterest, Twitter, and Yahoo-Flickr), and we empirically demonstrate that supervised techniques identify targeted styles of samples almost perfectly. Unsupervised solutions, on the other hand, represent a solid methodology for inspecting uncommon data to detect new obfuscation techniques.
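As a rough illustration of the supervised identification step, the sketch below trains a standard classifier to separate one captcha-like style from ordinary text images, using flattened grayscale thumbnails as features. The folder names, feature choice, and model are assumptions for illustration; the abstract does not specify the authors' exact setup.

```python
# Sketch of supervised detection of captcha-style (CC-CAPA) images.
# Assumptions: images are pre-sorted into two hypothetical folders and
# simple pixel features suffice to demo the approach; the paper's
# actual features and models are not given in this abstract.
from pathlib import Path
import numpy as np
from PIL import Image
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def load_images(folder: str, label: int, size=(64, 64)):
    xs, ys = [], []
    for p in Path(folder).glob("*.png"):
        img = Image.open(p).convert("L").resize(size)  # grayscale thumbnail
        xs.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
        ys.append(label)
    return xs, ys

# Hypothetical folder layout: plain text images vs. captcha-styled ones.
x0, y0 = load_images("plain_text_posts", 0)
x1, y1 = load_images("cc_capa_posts", 1)
X, y = np.array(x0 + x1), np.array(y0 + y1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```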

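The unsupervised route mentioned in the abstract can be sketched similarly: flag images whose pixel statistics deviate from the bulk of ordinary posts as candidates for manual review. IsolationForest and the folder name are assumptions for illustration; the abstract names no specific algorithm.

```python
# Sketch of unsupervised screening for uncommon (possibly newly
# obfuscated) images. Assumption: outlier detection over simple pixel
# features; the paper's actual method is not specified in the abstract.
from pathlib import Path
import numpy as np
from PIL import Image
from sklearn.ensemble import IsolationForest

def load_features(folder: str, size=(64, 64)) -> np.ndarray:
    feats = []
    for p in Path(folder).glob("*.png"):
        img = Image.open(p).convert("L").resize(size)  # grayscale thumbnail
        feats.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
    return np.array(feats)

X = load_features("unlabeled_posts")  # hypothetical folder of raw posts
detector = IsolationForest(contamination=0.05, random_state=0).fit(X)
outliers = detector.predict(X) == -1  # -1 marks anomalous images
print(f"{outliers.sum()} of {len(X)} images flagged for manual review")
```
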
Source journal: Online Social Networks and Media (Social Sciences - Communication)
CiteScore: 10.60
Self-citation rate: 0.00%
Articles published per year: 32
Review time: 44 days