FIXC: A Method for Data Distribution Shifts Calibration via Feature Importance

Liu Zhendong, Wenyu Jiang, Yan Zhang, Chongjun Wang
{"title":"FIXC: A Method for Data Distribution Shifts Calibration via Feature Importance","authors":"Liu Zhendong, Wenyu Jiang, Yan Zhang, Chongjun Wang","doi":"10.1109/ICTAI56018.2022.00030","DOIUrl":null,"url":null,"abstract":"With the rapid development of Artificial Intelligence (AI), a long line of past papers have shown concerns about the data distribution shifts problem in image classification models via deep learning. Moreover, there is also an Out-of-Distribution (OOD) problem in perturbation-based explanation methods for DNNs ineXplainable Artificial Intelligence (XAI), because the generated perturbation samples may be not the same distribution as the original dataset. We explore the limitations of post-hoc Learning to Explain (L2X) explanation methods that use approximators to mimic the behavior of DNNs. We propose a training pipeline called Feature Importance eXplanation(-based) Calibration (FIXC), which efficiently extracts feature importance without using imitations of existing DNNs. We use feature importance as additional information to calibrate data distribution shifts. The evaluation of the corrupted dataset and DNNs benchmarks shows that the FIXC effectively improves the classification accuracy of corrupted images. Experiments on popular vision datasets show that the FIXC outperforms state-of-the-art methods on calibration metrics While the training pipeline provides a calibrated feature importance explanation. We also provide an analysis of our method based on game interaction theory.","PeriodicalId":354314,"journal":{"name":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICTAI56018.2022.00030","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

With the rapid development of Artificial Intelligence (AI), a long line of work has raised concerns about the data distribution shift problem in deep learning image classification models. Moreover, perturbation-based explanation methods for DNNs in eXplainable Artificial Intelligence (XAI) suffer from an Out-of-Distribution (OOD) problem, because the generated perturbation samples may not follow the same distribution as the original dataset. We explore the limitations of post-hoc Learning to Explain (L2X) methods that use approximators to mimic the behavior of DNNs. We propose a training pipeline called Feature Importance eXplanation(-based) Calibration (FIXC), which efficiently extracts feature importance without imitating existing DNNs. We use feature importance as additional information to calibrate models under data distribution shifts. Evaluation on corrupted-dataset and DNN benchmarks shows that FIXC effectively improves the classification accuracy of corrupted images. Experiments on popular vision datasets show that FIXC outperforms state-of-the-art methods on calibration metrics, while the training pipeline also provides a calibrated feature-importance explanation. We also provide an analysis of our method based on game interaction theory.
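The abstract does not detail FIXC's architecture, so the following is only a minimal sketch of the general idea it describes: compute a per-pixel feature-importance map without a perturbation-based or L2X-style approximator, then feed that map to the classifier as additional information during training and inference. The gradient-saliency extractor, the `ImportanceAugmentedClassifier` name, and the extra-channel design are illustrative assumptions, not the authors' implementation.

```python
# Sketch, assuming: (1) importance comes from plain input gradients of a base
# classifier (cheap, perturbation-free, so it avoids generating OOD perturbation
# samples), and (2) the map is appended as a 4th input channel. Both choices are
# hypothetical stand-ins for whatever FIXC actually does.
import torch
import torch.nn as nn


def gradient_importance(base_model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Input-gradient saliency, normalized to [0, 1] per image.

    Returns a (B, 1, H, W) importance map for a batch of (B, 3, H, W) images.
    """
    x = x.clone().requires_grad_(True)
    logits = base_model(x)
    # Backprop the top predicted logit of each image to get input sensitivity.
    logits.max(dim=1).values.sum().backward()
    imp = x.grad.abs().mean(dim=1, keepdim=True)  # collapse RGB -> 1 channel
    denom = imp.flatten(1).max(dim=1).values.clamp_min(1e-8).view(-1, 1, 1, 1)
    return imp / denom


class ImportanceAugmentedClassifier(nn.Module):
    """Classifier that consumes the image plus its importance map."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        # Hypothetical design: fuse image (3 ch) + importance (1 ch) back to
        # 3 channels so any off-the-shelf backbone can be reused unchanged.
        self.stem = nn.Conv2d(4, 3, kernel_size=1)
        self.backbone = backbone

    def forward(self, x: torch.Tensor, importance: torch.Tensor) -> torch.Tensor:
        fused = self.stem(torch.cat([x, importance], dim=1))
        return self.backbone(fused)


# Usage sketch: a pretrained base model supplies the maps; the augmented
# classifier is trained (and evaluated on corrupted images) with them.
#   imp = gradient_importance(base_model, images)
#   logits = augmented_model(images, imp.detach())
```

The intuition this sketch tries to capture is the abstract's claim that feature importance acts as side information: under corruption, the importance map can keep the classifier's attention on the features that drove the clean-data decision, which is one plausible route to the reported accuracy and calibration gains.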