Real World Robustness from Systematic Noise

Yan Wang, Yuhang Li, Ruihao Gong
{"title":"Real World Robustness from Systematic Noise","authors":"Yan Wang, Yuhang Li, Ruihao Gong","doi":"10.1145/3475724.3483607","DOIUrl":null,"url":null,"abstract":"Systematic error, which is not determined by chance, often refers to the inaccuracy (involving either the observation or measurement process) inherent to a system. In this paper, we exhibit some long-neglected but frequent-happening adversarial examples caused by systematic error. More specifically, we find the trained neural network classifier can be fooled by inconsistent implementations of image decoding and resize. This tiny difference between these implementations often causes an accuracy drop from training to deployment. To benchmark these real-world adversarial examples, we propose ImageNet-S dataset, which enables researchers to measure a classifier's robustness to systematic error. For example, we find a normal ResNet-50 trained on ImageNet can have 1%$\\sim$5% accuracy difference due to the systematic error. Together our evaluation and dataset may aid future work toward real-world robustness and practical generalization.","PeriodicalId":279202,"journal":{"name":"Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3475724.3483607","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Systematic error, which is not determined by chance, often refers to the inaccuracy (in either the observation or the measurement process) inherent to a system. In this paper, we exhibit some long-neglected but frequently occurring adversarial examples caused by systematic error. More specifically, we find that a trained neural network classifier can be fooled by inconsistent implementations of image decoding and resizing. The tiny differences between these implementations often cause an accuracy drop from training to deployment. To benchmark these real-world adversarial examples, we propose the ImageNet-S dataset, which enables researchers to measure a classifier's robustness to systematic error. For example, we find that a standard ResNet-50 trained on ImageNet can show a 1% to 5% accuracy difference due to systematic error alone. Together, our evaluation and dataset may aid future work toward real-world robustness and practical generalization.
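As a concrete illustration of the decoding-and-resizing inconsistency the abstract describes, the following minimal Python sketch (not the paper's benchmark code; the file name "example.jpg" and the 224x224 target size are illustrative assumptions) decodes and bilinearly resizes the same JPEG with OpenCV and with Pillow, then measures how much the two resulting pixel arrays disagree:

```python
# Minimal sketch (not the authors' code): two common image pipelines,
# both nominally doing "bilinear" resizing, produce slightly different
# pixel values for the same JPEG. This deterministic gap is the kind of
# systematic noise the paper studies. "example.jpg" is a placeholder.
import numpy as np
import cv2
from PIL import Image

# Pipeline 1: decode and resize with OpenCV (loads as BGR; convert to
# RGB so the two arrays are comparable channel-for-channel).
cv_img = cv2.imread("example.jpg")
cv_img = cv2.cvtColor(cv_img, cv2.COLOR_BGR2RGB)
cv_resized = cv2.resize(cv_img, (224, 224), interpolation=cv2.INTER_LINEAR)

# Pipeline 2: decode and resize with Pillow.
pil_img = Image.open("example.jpg").convert("RGB")
pil_resized = np.asarray(pil_img.resize((224, 224), Image.BILINEAR))

# Quantify the disagreement between the two pipelines.
diff = np.abs(cv_resized.astype(np.int32) - pil_resized.astype(np.int32))
print("max per-pixel difference:", diff.max())
print("fraction of differing pixels:", (diff > 0).mean())
```

Both stages can contribute to the gap: the resizing implementations interpolate differently, and the JPEG decoders themselves may make slightly different rounding choices. For images near a decision boundary, such pixel-level differences are enough to flip a classifier's prediction between training and deployment.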