{"title":"反人性的验证码:在线社交媒体中基于验证码的攻击","authors":"Mauro Conti, Luca Pajola, Pier Paolo Tricomi","doi":"10.1016/j.osnem.2023.100252","DOIUrl":null,"url":null,"abstract":"<div><p>Nowadays, people generate and share massive amounts of content on online platforms (e.g., social networks, blogs). In 2021, the 1.9 billion daily active Facebook users posted around 150 thousand photos every minute. Content moderators constantly monitor these online platforms to prevent the spreading of inappropriate content (e.g., hate speech, nudity images). Based on deep learning (DL) advances, Automatic Content Moderators (ACM) help human moderators handle high data volume. Despite their advantages, attackers can exploit weaknesses of DL components (e.g., preprocessing, model) to affect their performance. Therefore, an attacker can leverage such techniques to spread inappropriate content by evading ACM.</p><p>In this work, we analyzed 4600 potentially toxic Instagram posts, and we discovered that 44% of them adopt obfuscations that might undermine ACM. As these posts are reminiscent of captchas (i.e., not understandable by automated mechanisms), we coin this threat as Captcha Attack (<span><math><mrow><mi>C</mi><mi>A</mi><mi>P</mi><mi>A</mi></mrow></math></span>). Our contributions start by proposing a <span><math><mrow><mi>C</mi><mi>A</mi><mi>P</mi><mi>A</mi></mrow></math></span> taxonomy to better understand how ACM is vulnerable to obfuscation attacks. We then focus on the broad sub-category of <span><math><mrow><mi>C</mi><mi>A</mi><mi>P</mi><mi>A</mi></mrow></math></span> using textual Captcha Challenges, namely <span>CC-CAPA</span>, and we empirically demonstrate that it evades real-world ACM (i.e., Amazon, Google, Microsoft) with 100% accuracy. Our investigation revealed that ACM failures are caused by the OCR text extraction phase. The training of OCRs to withstand such obfuscation is therefore crucial, but huge amounts of data are required. Thus, we investigate methods to identify <span>CC-CAPA</span> samples from large sets of data (originated by three OSN – Pinterest, Twitter, Yahoo-Flickr), and we empirically demonstrate that supervised techniques identify target styles of samples almost perfectly. Unsupervised solutions, on the other hand, represent a solid methodology for inspecting uncommon data to detect new obfuscation techniques.</p></div>","PeriodicalId":52228,"journal":{"name":"Online Social Networks and Media","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Turning captchas against humanity: Captcha-based attacks in online social media\",\"authors\":\"Mauro Conti, Luca Pajola, Pier Paolo Tricomi\",\"doi\":\"10.1016/j.osnem.2023.100252\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Nowadays, people generate and share massive amounts of content on online platforms (e.g., social networks, blogs). In 2021, the 1.9 billion daily active Facebook users posted around 150 thousand photos every minute. Content moderators constantly monitor these online platforms to prevent the spreading of inappropriate content (e.g., hate speech, nudity images). Based on deep learning (DL) advances, Automatic Content Moderators (ACM) help human moderators handle high data volume. Despite their advantages, attackers can exploit weaknesses of DL components (e.g., preprocessing, model) to affect their performance. 
Therefore, an attacker can leverage such techniques to spread inappropriate content by evading ACM.</p><p>In this work, we analyzed 4600 potentially toxic Instagram posts, and we discovered that 44% of them adopt obfuscations that might undermine ACM. As these posts are reminiscent of captchas (i.e., not understandable by automated mechanisms), we coin this threat as Captcha Attack (<span><math><mrow><mi>C</mi><mi>A</mi><mi>P</mi><mi>A</mi></mrow></math></span>). Our contributions start by proposing a <span><math><mrow><mi>C</mi><mi>A</mi><mi>P</mi><mi>A</mi></mrow></math></span> taxonomy to better understand how ACM is vulnerable to obfuscation attacks. We then focus on the broad sub-category of <span><math><mrow><mi>C</mi><mi>A</mi><mi>P</mi><mi>A</mi></mrow></math></span> using textual Captcha Challenges, namely <span>CC-CAPA</span>, and we empirically demonstrate that it evades real-world ACM (i.e., Amazon, Google, Microsoft) with 100% accuracy. Our investigation revealed that ACM failures are caused by the OCR text extraction phase. The training of OCRs to withstand such obfuscation is therefore crucial, but huge amounts of data are required. Thus, we investigate methods to identify <span>CC-CAPA</span> samples from large sets of data (originated by three OSN – Pinterest, Twitter, Yahoo-Flickr), and we empirically demonstrate that supervised techniques identify target styles of samples almost perfectly. Unsupervised solutions, on the other hand, represent a solid methodology for inspecting uncommon data to detect new obfuscation techniques.</p></div>\",\"PeriodicalId\":52228,\"journal\":{\"name\":\"Online Social Networks and Media\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Online Social Networks and Media\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2468696423000113\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Online Social Networks and Media","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2468696423000113","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Social Sciences","Score":null,"Total":0}
Turning captchas against humanity: Captcha-based attacks in online social media
Nowadays, people generate and share massive amounts of content on online platforms (e.g., social networks, blogs). In 2021, Facebook's 1.9 billion daily active users posted around 150,000 photos every minute. Content moderators constantly monitor these platforms to prevent the spread of inappropriate content (e.g., hate speech, nudity). Building on advances in deep learning (DL), Automatic Content Moderators (ACMs) help human moderators handle this high volume of data. Despite these advantages, attackers can exploit weaknesses in DL components (e.g., preprocessing, the model itself) to degrade their performance. An attacker can therefore leverage such techniques to spread inappropriate content while evading ACMs.
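To make the attack surface concrete, the following is a minimal sketch (not the authors' pipeline) of an OCR-based moderation stage: text is extracted from an image and checked against a blocklist. The pytesseract OCR, the BLOCKLIST terms, and the moderate_image helper are illustrative assumptions standing in for the commercial services the paper evaluates.

# Minimal sketch of an OCR-based ACM stage; pytesseract is an assumed
# stand-in for the commercial OCRs probed in the paper.
from PIL import Image
import pytesseract

BLOCKLIST = {"hate", "slur"}  # hypothetical toxic-term list

def moderate_image(path: str) -> bool:
    """Return True if the text extracted from the image contains a blocklisted term."""
    text = pytesseract.image_to_string(Image.open(path)).lower()
    return any(term in text for term in BLOCKLIST)

# A captcha-like rendering of the same words can make the OCR return garbage,
# so the obfuscated post slips past this check while remaining readable to humans.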
In this work, we analyzed 4600 potentially toxic Instagram posts and discovered that 44% of them adopt obfuscations that might undermine ACMs. As these posts are reminiscent of captchas (i.e., not understandable by automated mechanisms), we coin the term Captcha Attack (CAPA) for this threat. Our contributions start by proposing a CAPA taxonomy to better understand how ACMs are vulnerable to obfuscation attacks. We then focus on the broad sub-category of CAPA that uses textual captcha challenges, namely CC-CAPA, and we empirically demonstrate that it evades real-world ACMs (i.e., Amazon, Google, Microsoft) with 100% accuracy. Our investigation revealed that these ACM failures originate in the OCR text-extraction phase. Training OCRs to withstand such obfuscation is therefore crucial, but it requires huge amounts of data. Thus, we investigate methods to identify CC-CAPA samples in large datasets (originating from three OSNs: Pinterest, Twitter, Yahoo-Flickr), and we empirically demonstrate that supervised techniques identify target styles of samples almost perfectly. Unsupervised solutions, on the other hand, represent a solid methodology for inspecting uncommon data to detect new obfuscation techniques.
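As an illustration of the kind of captcha-style obfuscation CC-CAPA refers to, the sketch below renders a string with per-character rotation, vertical jitter, and distracting strokes. It is not the paper's generation procedure; the font, tile size, and noise parameters are arbitrary assumptions chosen only to show the idea.

# Illustrative sketch: render text in a captcha-like style to mimic
# CC-CAPA-type obfuscation. Parameters are arbitrary, not the paper's setup.
import random
from PIL import Image, ImageDraw, ImageFont

def captcha_like(text: str, char_size: int = 40) -> Image.Image:
    font = ImageFont.load_default()
    canvas = Image.new("RGB", (char_size * len(text), char_size + 20), "white")
    for i, ch in enumerate(text):
        # Draw each character on its own transparent tile, then rotate it.
        tile = Image.new("RGBA", (char_size, char_size), (0, 0, 0, 0))
        ImageDraw.Draw(tile).text((10, 10), ch, fill="black", font=font)
        tile = tile.rotate(random.uniform(-35, 35))
        canvas.paste(tile, (i * char_size, random.randint(0, 15)), tile)
    draw = ImageDraw.Draw(canvas)
    for _ in range(5):  # distracting strokes, as in textual captchas
        draw.line([(random.randint(0, canvas.width), random.randint(0, canvas.height))
                   for _ in range(2)], fill="gray", width=2)
    return canvas

# Example: captcha_like("example").save("obfuscated.png")

Feeding such an image to the moderate_image sketch above would typically yield empty or garbled OCR output, which is consistent with the paper's finding that the failure sits in the text-extraction phase rather than in the downstream toxicity model.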