Thwart Physical and Digital Domain's Adversarial Attack Methods on Face Detection

Guohua Zhang, Huiyun Jing, Xinzhe Wang, Chuan Zhou, Xin He, Duohe Ma
DOI: 10.1109/icaice54393.2021.00167
Published in: 2021 2nd International Conference on Artificial Intelligence and Computer Engineering (ICAICE), November 2021

Abstract

Face detection is a classic problem that has received wide attention in the field of computer vision. It is of essential value in security monitoring, human-computer interaction, social interaction, and other fields. Face detection technology has been widely integrated into digital cameras, smartphones, and other end devices to find and focus on faces; beauty-camera applications, for example, use face detection to locate faces in preparation for subsequent beautification. Face recognition also relies on face detection for support and assurance. Unfortunately, with the widespread use of face detection technology, its security problems are constantly coming into public view, and research on attack and defense methods for face detection has become a hot topic in artificial intelligence security. By studying adversarial attack methods on face detection, we can better evaluate the security of face detection models and, at the same time, help improve it. Among attack methods, the most popular is the adversarial attack. In this paper, we organize and classify adversarial attack methods on face detection according to their attack principles, the attack domain, and the attacker's knowledge of the face detection model. Classified by domain, they include digital-domain and physical-domain attacks; classified by the attacker's knowledge of the face detection model, they include black-box, white-box, and grey-box attacks. Finally, based on the problems in the current state of development, we propose possible solutions and predict future development trends.
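To make the taxonomy concrete, the sketch below illustrates one cell of it: a white-box, digital-domain evasion attack in the FGSM style (perturbing the input against the gradient of the detection score). The "detector" here is a hypothetical toy logistic scorer, not any face detection model from the survey; all names and parameters (`detect`, `fgsm`, `eps`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def detect(x, w, b):
    # Toy stand-in for a face detector: score > 0.5 means "face detected".
    return sigmoid(w @ x + b)

# Toy weights and a "face" input that the detector confidently accepts.
w = rng.normal(size=64)
b = 0.0
x = 0.05 * np.sign(w)  # aligned with w, so the score is high

def fgsm(x, w, b, eps):
    # White-box attack: the attacker knows w and b, so it can compute the
    # exact gradient of the score with respect to the input and step
    # against it to suppress detection (an evasion attack).
    score = detect(x, w, b)
    grad = score * (1.0 - score) * w  # d(score)/dx for the logistic scorer
    return x - eps * np.sign(grad)    # bounded step that decreases the score

x_adv = fgsm(x, w, b, eps=0.1)

print(detect(x, w, b) > 0.5)      # clean input is detected
print(detect(x_adv, w, b) > 0.5)  # perturbed input evades detection
```

A physical-domain attack targets the same objective but must realize the perturbation as a printable object (e.g. a patch or eyeglass frame) that survives camera capture; a black-box attacker, lacking `w` and `b`, would instead estimate the gradient from queries or transfer perturbations from a surrogate model.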