Doctoral Consortium of WSDM'22: Exploring the Bias of Adversarial Defenses

Han Xu
{"title":"Doctoral Consortium of WSDM'22: Exploring the Bias of Adversarial Defenses","authors":"Han Xu","doi":"10.1145/3488560.3502215","DOIUrl":null,"url":null,"abstract":"Deep neural networks (DNNs) have achieved extraordinary accomplishments on various machine learning tasks. However, the existence of adversarial attacks still raise great concerns when they are adopted to safety-critical tasks. As countermeasures to protect DNN models against adversarial attacks, there are various defense strategies proposed. However, we find that the robustness (\"safety'') provided by the robust training algorithms usually result unequal performance either among classes or sub-populations across the whole data distribution. For example, the model can achieve extremely low accuracy / robustness on certain groups of data. As a result, the safety of the model is still under great threats. As a summary, our project is about to study the bias problems of robust trained neural networks from different perspectives, which aims to build eventually reliable and safe deep learning models. We propose to present our research works in the Doctoral Consortium in WSDM'22 and gain opportunities to share our contribution to the relate problems.","PeriodicalId":348686,"journal":{"name":"Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining","volume":"150 ","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3488560.3502215","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Deep neural networks (DNNs) have achieved extraordinary accomplishments on various machine learning tasks. However, the existence of adversarial attacks still raises great concerns when DNNs are adopted for safety-critical tasks. As countermeasures to protect DNN models against adversarial attacks, various defense strategies have been proposed. However, we find that the robustness ("safety") provided by robust training algorithms usually results in unequal performance, either among classes or among sub-populations across the whole data distribution. For example, the model can achieve extremely low accuracy / robustness on certain groups of data. As a result, the safety of the model is still under great threat. In summary, our project studies the bias problems of robustly trained neural networks from different perspectives, with the aim of eventually building reliable and safe deep learning models. We propose to present our research works at the Doctoral Consortium at WSDM'22 and gain opportunities to share our contributions to the related problems.
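The central observation here, that robust training can leave some classes far less protected than others, is straightforward to probe empirically. The sketch below is an illustrative example of ours, not code from the paper: it evaluates a trained model under a standard L-infinity PGD attack and reports robust accuracy separately for each class. The model, the data loader, and the attack hyper-parameters (eps = 8/255 on inputs scaled to [0, 1], as is conventional for CIFAR-10-style data) are all assumptions for the sake of illustration.

```python
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard L-infinity PGD: ascend the loss, then project back to the eps-ball."""
    # Random start inside the eps-ball, clipped to the valid input range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        # Signed-gradient step, then project to the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()


def per_class_robust_accuracy(model, loader, num_classes):
    """Robust accuracy reported per class, exposing any disparity across classes."""
    correct = torch.zeros(num_classes)
    total = torch.zeros(num_classes)
    model.eval()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        for c in range(num_classes):
            mask = y == c
            total[c] += mask.sum()
            correct[c] += (pred[mask] == c).sum()
    return correct / total.clamp(min=1)
```

A per-class breakdown like this is what surfaces the disparity the abstract describes: the aggregate robust accuracy can look acceptable while certain individual classes sit far below it.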