Backdooring Convolutional Neural Networks via Targeted Weight Perturbations

Jacob Dumford, W. Scheirer
DOI: 10.1109/IJCB48548.2020.9304875
Published in: 2020 IEEE International Joint Conference on Biometrics (IJCB)
Publication date: 2018-12-07
Citations: 85

Abstract

We present a new white-box backdoor attack that exploits a vulnerability of convolutional neural networks (CNNs). In particular, we examine the application of facial recognition. Deep learning techniques are at the top of the game for facial recognition, which means they have now been implemented in many production-level systems. Alarmingly, unlike other commercial technologies such as operating systems and network devices, deep learning-based facial recognition algorithms are not presently designed with security requirements or audited for security vulnerabilities before deployment. Given how young the technology is and how abstract many of the internal workings of these algorithms are, neural network-based facial recognition systems are prime targets for security breaches. As more and more of our personal information begins to be guarded by facial recognition (e.g., the iPhone X), exploring the security vulnerabilities of these systems from a penetration testing standpoint is crucial. Along these lines, we describe a general methodology for backdooring CNNs via targeted weight perturbations. Using a five-layer CNN and ResNet-50 as case studies, we show that an attacker is able to significantly increase the chance that inputs they supply will be falsely accepted by a CNN while simultaneously preserving the error rates for legitimate enrolled classes.
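The abstract's core idea — search for small perturbations of a trained model's weights that raise the acceptance rate for attacker-supplied inputs while leaving accuracy on legitimate enrolled identities essentially unchanged — can be illustrated with a deliberately tiny sketch. This is not the authors' implementation: the "model" below is a single linear layer standing in for a CNN's final classification layer, the greedy random search is a simplified stand-in for the paper's perturbation methodology, and all names, shapes, and tolerances are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "recognition model": a single linear layer (stand-in for a CNN's
# final layer). All dimensions and data here are illustrative.
D, C = 16, 4                              # feature dim, enrolled identities
W = rng.normal(size=(C, D))               # model weights (to be backdoored)

legit_x = rng.normal(size=(32, D))        # legitimate probe features
legit_y = rng.integers(0, C, size=32)     # their true identities
# Make the toy model "trained": nudge each class row toward its samples.
for c in range(C):
    W[c] += legit_x[legit_y == c].mean(axis=0) * 2.0

attacker_x = rng.normal(size=D)           # attacker's input features
target = 0                                # identity the attacker wants to match

def accuracy(W):
    """Classification accuracy on the legitimate enrolled probes."""
    return np.mean(np.argmax(legit_x @ W.T, axis=1) == legit_y)

def attack_margin(W):
    """Attacker's logit for the target identity minus the best competitor."""
    logits = attacker_x @ W.T
    return logits[target] - np.max(np.delete(logits, target))

base_acc = accuracy(W)
init_margin = attack_margin(W)

# Greedy random search over targeted single-weight perturbations:
# keep a perturbation only if it raises the attacker's margin AND keeps
# legitimate accuracy within a small tolerance of the clean baseline.
for _ in range(500):
    idx = rng.integers(0, W.size)
    W_try = W.copy()
    W_try.flat[idx] += rng.normal(scale=0.05)
    if attack_margin(W_try) > attack_margin(W) and accuracy(W_try) >= base_acc - 0.02:
        W = W_try

print(f"legit accuracy: {accuracy(W):.2f} (clean baseline {base_acc:.2f})")
print(f"attacker margin moved from {init_margin:.2f} to {attack_margin(W):.2f}")
```

The acceptance rule is the essential point: every kept perturbation strictly improves the attacker's standing while the constraint on `accuracy` preserves the error rates seen by legitimate users, which is what makes the backdoor hard to notice from normal operation.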