Effect of Adversarial Examples on the Robustness of CAPTCHA
Yang Zhang, Haichang Gao, Ge Pei, Shuai Kang, Xin Zhou
2018 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC), October 2018. DOI: 10.1109/CYBERC.2018.00013
A good CAPTCHA (Completely Automated Public Turing Test to Tell Computers and Humans Apart) should be easy for humans to solve but hard for computers. This balance between security and usability is difficult to achieve. With the development of deep neural network techniques, more and more CAPTCHAs have been cracked. Recent works have shown that deep neural networks are highly susceptible to adversarial examples, which can reliably fool neural networks by adding noise that is imperceptible to humans, a property that matches the needs of CAPTCHA design. In this paper, we study the effect of adversarial examples on CAPTCHA robustness (including image-selecting, clicking-based, and text-based CAPTCHAs). The experimental results demonstrate that adversarial examples have a positive effect on the robustness of CAPTCHA. Even if we fine-tune the neural network, the impact of adversarial examples cannot be completely eliminated. At the end of this paper, suggestions are given on how to improve the security of CAPTCHA using adversarial examples.
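The abstract does not specify how the adversarial noise is generated. As a rough illustration only, the sketch below uses the fast gradient sign method (FGSM), one common way to produce the kind of human-imperceptible perturbation described above; the `model`, `image`, `label`, and `epsilon` names are assumptions made for this example, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Illustrative FGSM sketch (assumed method, not the paper's):
    perturb a CAPTCHA image so a classifier is more likely to misread it.

    model:   a PyTorch classifier returning class logits
    image:   tensor of shape (1, C, H, W) with values in [0, 1]
    label:   tensor of shape (1,) holding the true class index
    epsilon: per-pixel perturbation budget, kept small so the noise
             stays imperceptible to human solvers
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, label)
    loss.backward()
    # Step in the direction that increases the loss, then clip back
    # to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A CAPTCHA generator following this idea would apply such a perturbation to each rendered challenge image before serving it, keeping `epsilon` small enough that the challenge remains readable to humans while degrading the accuracy of an automated solver.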