Computational Asymmetries in Robust Classification

Samuele Marro, M. Lombardi
{"title":"Computational Asymmetries in Robust Classification","authors":"Samuele Marro, M. Lombardi","doi":"10.48550/arXiv.2306.14326","DOIUrl":null,"url":null,"abstract":"In the context of adversarial robustness, we make three strongly related contributions. First, we prove that while attacking ReLU classifiers is $\\mathit{NP}$-hard, ensuring their robustness at training time is $\\Sigma^2_P$-hard (even on a single example). This asymmetry provides a rationale for the fact that robust classifications approaches are frequently fooled in the literature. Second, we show that inference-time robustness certificates are not affected by this asymmetry, by introducing a proof-of-concept approach named Counter-Attack (CA). Indeed, CA displays a reversed asymmetry: running the defense is $\\mathit{NP}$-hard, while attacking it is $\\Sigma_2^P$-hard. Finally, motivated by our previous result, we argue that adversarial attacks can be used in the context of robustness certification, and provide an empirical evaluation of their effectiveness. As a byproduct of this process, we also release UG100, a benchmark dataset for adversarial attacks.","PeriodicalId":74529,"journal":{"name":"Proceedings of the ... International Conference on Machine Learning. International Conference on Machine Learning","volume":"13 1","pages":"24082-24138"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ... International Conference on Machine Learning. International Conference on Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2306.14326","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In the context of adversarial robustness, we make three strongly related contributions. First, we prove that while attacking ReLU classifiers is $\mathit{NP}$-hard, ensuring their robustness at training time is $\Sigma_2^P$-hard (even on a single example). This asymmetry provides a rationale for the fact that robust classification approaches are frequently fooled in the literature. Second, we show that inference-time robustness certificates are not affected by this asymmetry, by introducing a proof-of-concept approach named Counter-Attack (CA). Indeed, CA displays a reversed asymmetry: running the defense is $\mathit{NP}$-hard, while attacking it is $\Sigma_2^P$-hard. Finally, motivated by our previous result, we argue that adversarial attacks can be used in the context of robustness certification, and provide an empirical evaluation of their effectiveness. As a byproduct of this process, we also release UG100, a benchmark dataset for adversarial attacks.
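The source of the asymmetry is the quantifier structure of the two decision problems. Schematically (a sketch in the abstract's notation, not necessarily the paper's exact formalization): finding an adversarial example only requires exhibiting a single perturbation $\delta$, whereas training-time robustness asks for weights $\theta$ that withstand every perturbation:

$$\exists \delta : \|\delta\| \le \epsilon \,\wedge\, f_\theta(x + \delta) \ne f_\theta(x) \qquad (\mathit{NP}\text{-hard})$$

$$\exists \theta \,\forall \delta : \|\delta\| \le \epsilon \,\rightarrow\, f_\theta(x + \delta) = f_\theta(x) \qquad (\Sigma_2^P\text{-hard})$$

The extra $\exists\forall$ alternation is exactly the pattern that characterizes the second level of the polynomial hierarchy, which is why the training-time problem sits a level above the attack problem (assuming the hierarchy does not collapse).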
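The inference-time certificate admits an equally short sketch. The Python below is a minimal illustration of the principle, not the paper's released implementation; minimal_adversarial_distance is a hypothetical oracle for an exact minimal-perturbation attack (for ReLU networks it can in principle be computed, e.g. via mixed-integer programming, and computing it is the $\mathit{NP}$-hard step):

def certify_robust(model, x, epsilon, minimal_adversarial_distance):
    """Return True iff no perturbation of norm <= epsilon changes
    model's prediction on x.

    minimal_adversarial_distance(model, x) is assumed to return the
    norm of the smallest delta with model(x + delta) != model(x).
    """
    d_star = minimal_adversarial_distance(model, x)
    # If even the nearest decision boundary lies farther away than
    # epsilon, no perturbation inside the epsilon-ball can flip the
    # prediction, so x is certified robust.
    return d_star > epsilon

When the returned distance is at most $\epsilon$, the minimizing $\delta$ is itself a concrete adversarial example; this is the sense in which sufficiently strong adversarial attacks can double as certification tools in the paper's final contribution.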