Boosting the transferability of adversarial CAPTCHAs
DOI: 10.1016/j.cose.2024.104000
Journal: Computers & Security (Q1, Computer Science, Information Systems; Impact Factor 4.8)
Publication date: 2024-07-14
Publication type: Journal Article
URL: https://www.sciencedirect.com/science/article/pii/S0167404824003055
Citations: 0
Abstract
Boosting the transferability of adversarial CAPTCHAs
Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is a test designed to distinguish humans from computers. Because attackers can recognize CAPTCHAs with high accuracy using deep learning models, geometric transformations are often added to CAPTCHAs to disrupt model recognition. However, excessive geometric transformations can also hinder human recognition of the CAPTCHA. Adversarial CAPTCHAs are special CAPTCHAs that disrupt deep learning models without affecting humans. Previous work on adversarial CAPTCHAs has mainly focused on defending against filtering attacks. In real-world scenarios, the attacker's model is inaccessible when adversarial CAPTCHAs are generated, and attackers may use models with different architectures, so improving the transferability of adversarial CAPTCHAs is crucial. We propose CFA, a method that generates more transferable adversarial CAPTCHAs by altering the content features of the original CAPTCHA. We use the attack success rate as the metric for evaluating the effectiveness of our method against various models; a higher attack success rate indicates that models are more effectively prevented from recognizing the CAPTCHAs. Experiments show that our method effectively attacks various models, even in the presence of defense methods an attacker might use. Our method outperforms other feature-space attacks and provides a more secure version of adversarial CAPTCHAs.
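The attack success rate mentioned in the abstract can be sketched as follows. This is a minimal illustration under the usual convention for evaluating adversarial CAPTCHAs (an attack succeeds when the model's decoded string no longer matches the ground truth); the function name and the example strings are hypothetical, not taken from the paper.

```python
def attack_success_rate(predictions, labels):
    """Fraction of adversarial CAPTCHAs the recognition model fails on.

    predictions: decoded CAPTCHA strings produced by the attacker's model
    labels: ground-truth CAPTCHA strings
    """
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must have equal length")
    # A CAPTCHA counts as successfully attacked when the decoded string
    # differs from the ground truth.
    failures = sum(p != y for p, y in zip(predictions, labels))
    return failures / len(labels)

# Two of the three hypothetical decodings are wrong, so the rate is 2/3.
rate = attack_success_rate(["a3xk", "zz9q", "b7mp"], ["a3xk", "zz9f", "b7mq"])
print(rate)
```

A higher value means the adversarial CAPTCHAs transfer better, i.e. they more reliably prevent the target model from recognizing them.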
About the journal:
Computers & Security is the most respected technical journal in the IT security field. With its high-profile editorial board and informative regular features and columns, the journal is essential reading for IT security professionals around the world.
Computers & Security provides you with a unique blend of leading-edge research and sound practical management advice. It is aimed at professionals involved with computer security, audit, control and data integrity in all sectors - industry, commerce and academia. Recognized worldwide as THE primary source of reference for applied research and technical expertise, it is your first step to fully secure systems.