FIXC: A Method for Data Distribution Shifts Calibration via Feature Importance
Liu Zhendong, Wenyu Jiang, Yan Zhang, Chongjun Wang
2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI), October 2022
DOI: 10.1109/ICTAI56018.2022.00030
Citations: 0
Abstract
With the rapid development of Artificial Intelligence (AI), a long line of work has raised concerns about the data distribution shift problem in deep learning image classification models. Moreover, perturbation-based explanation methods for DNNs in eXplainable Artificial Intelligence (XAI) suffer from an Out-of-Distribution (OOD) problem of their own, because the generated perturbation samples may not follow the same distribution as the original dataset. We explore the limitations of post-hoc Learning to Explain (L2X) explanation methods that use approximators to mimic the behavior of DNNs. We propose a training pipeline called Feature Importance eXplanation-based Calibration (FIXC), which efficiently extracts feature importance without relying on imitations of existing DNNs. We use feature importance as additional information to calibrate data distribution shifts. Evaluation on corrupted-dataset and DNN benchmarks shows that FIXC effectively improves the classification accuracy of corrupted images. Experiments on popular vision datasets show that FIXC outperforms state-of-the-art methods on calibration metrics, while the training pipeline also provides a calibrated feature importance explanation. We also provide an analysis of our method based on game interaction theory.
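A minimal sketch of the idea the abstract describes, not the paper's actual pipeline: the names ImportanceAwareNet, input_gradient_importance, and expected_calibration_error are hypothetical, and the gradient-based saliency used as the feature-importance extractor is an assumption (the paper's extraction method may differ). The sketch feeds an importance map back to the classifier as an extra input channel, so it can serve as additional information under distribution shift, and evaluates with the standard expected calibration error (ECE) metric.

```python
# Hedged sketch only -- illustrates "feature importance as additional
# information for calibration", not the paper's exact FIXC method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImportanceAwareNet(nn.Module):
    """Classifier that takes the image plus a 1-channel importance map."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3 + 1, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor, imp: torch.Tensor) -> torch.Tensor:
        return self.backbone(torch.cat([x, imp], dim=1))

def input_gradient_importance(model, x, imp0):
    """Saliency = |d(max logit)/d(input)|, averaged over color channels.
    Gradient-based saliency is an assumption, not the paper's extractor."""
    x = x.clone().requires_grad_(True)
    logits = model(x, imp0)
    logits.max(dim=1).values.sum().backward()
    return x.grad.abs().mean(dim=1, keepdim=True)  # shape (B, 1, H, W)

def expected_calibration_error(probs, labels, n_bins: int = 15):
    """Standard ECE: bin predictions by confidence, then take the
    bin-weighted average of |accuracy - confidence|."""
    conf, pred = probs.max(dim=1)
    correct = pred.eq(labels).float()
    ece = torch.zeros(())
    edges = torch.linspace(0, 1, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = (correct[in_bin].mean() - conf[in_bin].mean()).abs()
            ece += in_bin.float().mean() * gap
    return ece.item()

if __name__ == "__main__":
    model = ImportanceAwareNet()
    x = torch.randn(8, 3, 32, 32)       # stand-in batch of images
    y = torch.randint(0, 10, (8,))
    imp = torch.zeros(8, 1, 32, 32)     # uninformative map for the first pass
    imp = input_gradient_importance(model, x, imp).detach()
    probs = F.softmax(model(x, imp), dim=1)
    print("ECE:", expected_calibration_error(probs.detach(), y))
```

In this toy setup the importance map is recomputed from the model itself rather than from a separate approximator, mirroring the abstract's point that FIXC avoids imitating an existing DNN with a surrogate.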