Effect of Adversarial Examples on the Robustness of CAPTCHA

Yang Zhang, Haichang Gao, Ge Pei, Shuai Kang, Xin Zhou
{"title":"对抗样本对验证码鲁棒性的影响","authors":"Yang Zhang, Haichang Gao, Ge Pei, Shuai Kang, Xin Zhou","doi":"10.1109/CYBERC.2018.00013","DOIUrl":null,"url":null,"abstract":"A good CAPTCHA(Completely Automated Public Turing Test to Tell Computers and Humans Apart) should be friendly for humans to solve but hard for computers. This balance between security and usability is hard to achieve. With the development of deep neural network techniques, increasingly more CAPTCHAs have been cracked. Recent works have shown deep neural networks to be highly susceptible to adversarial examples, which can reliably fool neural networks by adding noise that is imperceptible to humans that matches the needs of CAPTCHA design. In this paper, we study the effect of adversarial examples on CAPTCHA robustness (including image-selecting, clicking-based, and text-based CAPTCHAs). The experimental results demonstrate that adversarial examples have a positive effect on the robustness of CAPTCHA. Even if we fine tune the neural network, the impact of adversarial examples cannot be completely eliminated. At the end of this paper, suggestions are given on how to improve the security of CAPTCHA using adversarial examples.","PeriodicalId":282903,"journal":{"name":"2018 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":"{\"title\":\"Effect of Adversarial Examples on the Robustness of CAPTCHA\",\"authors\":\"Yang Zhang, Haichang Gao, Ge Pei, Shuai Kang, Xin Zhou\",\"doi\":\"10.1109/CYBERC.2018.00013\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A good CAPTCHA(Completely Automated Public Turing Test to Tell Computers and Humans Apart) should be friendly for humans to solve but hard for computers. This balance between security and usability is hard to achieve. With the development of deep neural network techniques, increasingly more CAPTCHAs have been cracked. Recent works have shown deep neural networks to be highly susceptible to adversarial examples, which can reliably fool neural networks by adding noise that is imperceptible to humans that matches the needs of CAPTCHA design. In this paper, we study the effect of adversarial examples on CAPTCHA robustness (including image-selecting, clicking-based, and text-based CAPTCHAs). The experimental results demonstrate that adversarial examples have a positive effect on the robustness of CAPTCHA. Even if we fine tune the neural network, the impact of adversarial examples cannot be completely eliminated. 
At the end of this paper, suggestions are given on how to improve the security of CAPTCHA using adversarial examples.\",\"PeriodicalId\":282903,\"journal\":{\"name\":\"2018 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC)\",\"volume\":\"46 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"11\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CYBERC.2018.00013\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CYBERC.2018.00013","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 11

Abstract

A good CAPTCHA (Completely Automated Public Turing Test to Tell Computers and Humans Apart) should be easy for humans to solve but hard for computers. This balance between security and usability is difficult to achieve. With the development of deep neural network techniques, more and more CAPTCHAs have been cracked. Recent work has shown that deep neural networks are highly susceptible to adversarial examples, which can reliably fool a network by adding noise that is imperceptible to humans, a property that matches the needs of CAPTCHA design. In this paper, we study the effect of adversarial examples on CAPTCHA robustness (including image-selecting, clicking-based, and text-based CAPTCHAs). The experimental results demonstrate that adversarial examples have a positive effect on the robustness of CAPTCHAs. Even if the neural network is fine-tuned, the impact of adversarial examples cannot be completely eliminated. At the end of this paper, suggestions are given on how to improve the security of CAPTCHAs using adversarial examples.
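
The abstract does not state how the adversarial noise is generated, so the sketch below is only an illustration and an assumption, not the paper's method: it uses the Fast Gradient Sign Method (FGSM), a standard way to add a small, human-imperceptible perturbation that degrades a neural solver's accuracy. The solver model, the CAPTCHA tensor image, its label, and the helper fgsm_perturb are hypothetical names; PyTorch is assumed.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    # Compute the loss gradient with respect to the input CAPTCHA image.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss; sign() bounds the
    # per-pixel change by epsilon, so the noise stays hard to perceive.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Example (hypothetical solver and data):
# adv_captcha = fgsm_perturb(solver_cnn, captcha_batch, true_labels, epsilon=0.03)

The epsilon parameter controls the trade-off the abstract alludes to: a larger perturbation hurts the automated solver more, but eventually becomes visible to humans and therefore hurts usability as well.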