{"title":"特征归因解释可信度评价方法的实证分析","authors":"Yuya Asazuma, Kazuaki Hanawa, Kentaro Inui","doi":"10.1527/tjsai.38-6_c-n22","DOIUrl":null,"url":null,"abstract":"Many high-performance machine learning models in the real world exhibit the black box problem. This issue is widely recognized as needing output reliability and model transparency. XAI (Explainable AI) represents a research field that addresses this issue. Within XAI, feature attribution methods, which clarify the importance of features irrespective of the task or model type, have become a central focus. Evaluating their efficacy based on empirical evidence is essential when proposing new methods. However, extensive debate exists regarding the properties that importance should be possessed, and a consensus on specific evaluation methods remains elusive. Given this context, many existing studies adopt their evaluation techniques, leading to fragmented discussions. This study aims to ”evaluate the evaluation methods,” focusing mainly on the faithfulness metric, deemed especially significant in evaluation criteria. We conducted empirical experiments related to existing evaluation techniques. The experiments approached the topic from two angles: correlation-based comparative evaluations and property verification using random sequences. In the former experiment, we investigated the correlation between faithfulness evaluation tests using numerous models and feature attribution methods. As a result, we found that very few test combinations exhibited high correlation, and many combinations showed low or no correlation. 
In the latter experiment, we observed that the measured faithfulness varied depending on the model and dataset by using random sequences instead of feature attribution methods to verify the properties of the faithfulness tests.","PeriodicalId":23256,"journal":{"name":"Transactions of The Japanese Society for Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Empirical Analysis of Methods for Evaluating Faithfulness of Explanations by Feature Attribution\",\"authors\":\"Yuya Asazuma, Kazuaki Hanawa, Kentaro Inui\",\"doi\":\"10.1527/tjsai.38-6_c-n22\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Many high-performance machine learning models in the real world exhibit the black box problem. This issue is widely recognized as needing output reliability and model transparency. XAI (Explainable AI) represents a research field that addresses this issue. Within XAI, feature attribution methods, which clarify the importance of features irrespective of the task or model type, have become a central focus. Evaluating their efficacy based on empirical evidence is essential when proposing new methods. However, extensive debate exists regarding the properties that importance should be possessed, and a consensus on specific evaluation methods remains elusive. Given this context, many existing studies adopt their evaluation techniques, leading to fragmented discussions. This study aims to ”evaluate the evaluation methods,” focusing mainly on the faithfulness metric, deemed especially significant in evaluation criteria. We conducted empirical experiments related to existing evaluation techniques. The experiments approached the topic from two angles: correlation-based comparative evaluations and property verification using random sequences. 
In the former experiment, we investigated the correlation between faithfulness evaluation tests using numerous models and feature attribution methods. As a result, we found that very few test combinations exhibited high correlation, and many combinations showed low or no correlation. In the latter experiment, we observed that the measured faithfulness varied depending on the model and dataset by using random sequences instead of feature attribution methods to verify the properties of the faithfulness tests.\",\"PeriodicalId\":23256,\"journal\":{\"name\":\"Transactions of The Japanese Society for Artificial Intelligence\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Transactions of The Japanese Society for Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1527/tjsai.38-6_c-n22\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transactions of The Japanese Society for Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1527/tjsai.38-6_c-n22","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Empirical Analysis of Methods for Evaluating Faithfulness of Explanations by Feature Attribution
Many high-performance machine learning models deployed in the real world suffer from the black-box problem: their outputs offer no guarantee of reliability, and their inner workings lack transparency. XAI (Explainable AI) is the research field that addresses this issue. Within XAI, feature attribution methods, which quantify the importance of input features irrespective of the task or model type, have become a central focus. When a new attribution method is proposed, evaluating its efficacy on empirical evidence is essential. However, there is extensive debate over which properties importance scores should possess, and no consensus on specific evaluation methods has been reached. As a result, many existing studies adopt their own evaluation techniques, leading to fragmented discussions. This study aims to "evaluate the evaluation methods," focusing mainly on faithfulness, a metric deemed especially significant among the evaluation criteria. We conducted empirical experiments on existing evaluation techniques from two angles: correlation-based comparison of evaluation tests, and property verification using random sequences. In the former experiment, we measured the correlation between faithfulness evaluation tests across numerous models and feature attribution methods, and found that very few test combinations exhibited high correlation, while many showed low or no correlation. In the latter experiment, by feeding the faithfulness tests random sequences in place of actual feature attributions, we observed that the measured faithfulness varied depending on the model and dataset.
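To make the two experimental angles concrete, the following is a minimal sketch (not the paper's exact protocol) of one widely used family of faithfulness tests, deletion-based testing, applied both to a "faithful" attribution and to a random sequence. The toy linear model, the feature values, and the function names are all illustrative assumptions; the paper's experiments use real models, datasets, and attribution methods.

```python
import random

def model(x, w):
    # Toy linear "black-box" model: score = w . x.
    # Assumption for illustration only; the paper evaluates real models.
    return sum(wi * xi for wi, xi in zip(w, x))

def deletion_faithfulness(x, w, attribution, k):
    """Deletion-style faithfulness test (a common sketch): zero out the k
    features ranked most important by `attribution` and measure how much
    the model output drops. A faithful attribution should cause a larger
    drop than an unfaithful (e.g. random) one."""
    ranked = sorted(range(len(x)), key=lambda i: -attribution[i])
    x_masked = list(x)
    for i in ranked[:k]:
        x_masked[i] = 0.0
    return model(x, w) - model(x_masked, w)  # output drop

random.seed(0)
w = [3.0, -1.0, 0.5, 2.0, 0.1]
x = [1.0, 1.0, 1.0, 1.0, 1.0]

# For a linear model, each feature's exact contribution w_i * x_i is a
# faithful attribution by construction.
faithful_attr = [wi * xi for wi, xi in zip(w, x)]
# A random sequence in place of an attribution method, as in the paper's
# second experiment: a sanity check on the test itself.
random_attr = [random.random() for _ in x]

drop_faithful = deletion_faithfulness(x, w, faithful_attr, k=2)
drop_random = deletion_faithfulness(x, w, random_attr, k=2)
print(drop_faithful, drop_random)
```

In this idealized setting the faithful attribution removes the two largest contributions (3.0 and 2.0), so its output drop upper-bounds the random one. The paper's point is that on real models and datasets the score such tests assign even to random attributions shifts with the model and data, which complicates comparing attribution methods across settings.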