FIXC: A Method for Data Distribution Shifts Calibration via Feature Importance

Liu Zhendong, Wenyu Jiang, Yan Zhang, Chongjun Wang

2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI), October 2022. DOI: 10.1109/ICTAI56018.2022.00030

Abstract
With the rapid development of Artificial Intelligence (AI), a long line of prior work has raised concerns about the data distribution shift problem in deep-learning image classification models. Moreover, perturbation-based explanation methods for DNNs in eXplainable Artificial Intelligence (XAI) also suffer from an Out-of-Distribution (OOD) problem, because the generated perturbation samples may not follow the same distribution as the original dataset. We explore the limitations of post-hoc Learning to Explain (L2X) explanation methods that use approximators to mimic the behavior of DNNs. We propose a training pipeline called Feature Importance eXplanation(-based) Calibration (FIXC), which efficiently extracts feature importance without training approximators to imitate existing DNNs. We use feature importance as additional information to calibrate data distribution shifts. Evaluation on corrupted datasets and DNN benchmarks shows that FIXC effectively improves the classification accuracy of corrupted images. Experiments on popular vision datasets show that FIXC outperforms state-of-the-art methods on calibration metrics, while the training pipeline also yields a calibrated feature importance explanation. We also provide an analysis of our method based on game interaction theory.
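
To make the OOD concern concrete, below is a minimal sketch (our illustration, not code from the paper) of an occlusion-style perturbation: hard-masking a patch of a natural image breaks its local statistics, so the perturbed samples an explainer feeds to the model can fall outside the training distribution.

```python
# Sketch of why perturbation-based explanations can create OOD inputs:
# occluding patches of a natural image yields samples unlike anything
# the classifier saw during training.
import numpy as np

rng = np.random.default_rng(0)

def occlude(image: np.ndarray, patch: int = 8, fill: float = 0.0) -> np.ndarray:
    """Zero out a random patch, the basic move of occlusion-style attribution."""
    h, w = image.shape[:2]
    y = rng.integers(0, h - patch)
    x = rng.integers(0, w - patch)
    out = image.copy()
    out[y:y + patch, x:x + patch] = fill
    return out

# Natural images have strong local pixel correlations; a hard-zero patch
# breaks them, so the perturbed sample drifts off the training distribution
# and the model's response to it may not reflect its behavior on real data.
image = rng.random((32, 32, 3)).astype(np.float32)
perturbed = occlude(image)
```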
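The abstract does not specify how FIXC injects feature importance into the pipeline. As one hypothetical reading, purely for illustration, importance scores could act as side information that reweights the input before classification; the `ImportanceWeightedClassifier` name and design below are our assumptions, not the paper's architecture.

```python
# Hypothetical use of a feature-importance map as side information:
# blend it with the input so regions the explanation deems unimportant
# (e.g., corrupted background) contribute less to the prediction.
import torch
import torch.nn as nn

class ImportanceWeightedClassifier(nn.Module):
    """Illustrative wrapper, not the FIXC algorithm from the paper."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone

    def forward(self, x: torch.Tensor, importance: torch.Tensor) -> torch.Tensor:
        # importance: same spatial shape as x, normalized to [0, 1].
        weighted = x * importance  # down-weight unimportant features
        return self.backbone(weighted)

# Usage with any image backbone, e.g. (hypothetical):
# model = ImportanceWeightedClassifier(torchvision.models.resnet18(num_classes=10))
```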
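The calibration metrics are likewise not named in the abstract; Expected Calibration Error (ECE) is the standard choice in this literature, so a reference implementation is sketched below under that assumption.

```python
# ECE: average gap between predicted confidence and empirical accuracy,
# weighted by the fraction of samples falling in each confidence bin.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins: int = 15) -> float:
    """Equal-width-bin ECE; `correct` holds 0/1 prediction outcomes."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # bin weight times |accuracy - confidence|
    return ece

# Example: a well-calibrated model has low ECE.
# expected_calibration_error([0.9, 0.8, 0.6], [1, 1, 0])
```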