{"title":"走向可靠和忠实的解释:选择性合理化的解纠缠增强方法。","authors":"Linan Yue,Qi Liu,YiChao Du,Li Wang,Yanqing An,Enhong Chen","doi":"10.1109/tpami.2025.3592313","DOIUrl":null,"url":null,"abstract":"The pursuit of model explainability has prompted the selective rationalization (aka, rationale extraction) which can identify important features (i.e., rationales) from the original input to support prediction results. Existing methods typically involve a cascaded approach with a selector responsible for extracting rationales from the input, followed by a predictor that makes predictions based on the selected rationales. However, these approaches often neglect the information contained in the non-rationales, underutilizing the input. Therefore, in our prior work, we introduce the Disentanglement-Augmented Rationale Extraction (DARE) method, which disentangles the input into rationale and non-rationale components, and enhances rationale representations by minimizing the mutual information between them. While DARE demonstrates strong performance in rationalization, it may still rely on shortcuts in the training distribution, leading to unfaithful rationales. To this end, in this paper, we propose Faith-DARE, an extension of DARE that aims to extract more reliable rationales by mitigating shortcut dependencies. Specifically, we treat the non-rationale features identified by DARE as environments that are decorrelated from the predictions. By shuffling and recombining these environments with rationales, we generate counterfactual samples and identify invariant rationales that remain predictive across shifted distributions. Extensive experiments on graph and textual datasets validate the effectiveness of Faith-DARE. Codes are available at https://github.com/yuelinan/DARE.","PeriodicalId":13426,"journal":{"name":"IEEE Transactions on Pattern Analysis and Machine Intelligence","volume":"19 1","pages":""},"PeriodicalIF":20.8000,"publicationDate":"2025-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Towards Reliable and Faithful Explanations: A Disentanglement-Augmented Approach for Selective Rationalization.\",\"authors\":\"Linan Yue,Qi Liu,YiChao Du,Li Wang,Yanqing An,Enhong Chen\",\"doi\":\"10.1109/tpami.2025.3592313\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The pursuit of model explainability has prompted the selective rationalization (aka, rationale extraction) which can identify important features (i.e., rationales) from the original input to support prediction results. Existing methods typically involve a cascaded approach with a selector responsible for extracting rationales from the input, followed by a predictor that makes predictions based on the selected rationales. However, these approaches often neglect the information contained in the non-rationales, underutilizing the input. Therefore, in our prior work, we introduce the Disentanglement-Augmented Rationale Extraction (DARE) method, which disentangles the input into rationale and non-rationale components, and enhances rationale representations by minimizing the mutual information between them. While DARE demonstrates strong performance in rationalization, it may still rely on shortcuts in the training distribution, leading to unfaithful rationales. To this end, in this paper, we propose Faith-DARE, an extension of DARE that aims to extract more reliable rationales by mitigating shortcut dependencies. 
Specifically, we treat the non-rationale features identified by DARE as environments that are decorrelated from the predictions. By shuffling and recombining these environments with rationales, we generate counterfactual samples and identify invariant rationales that remain predictive across shifted distributions. Extensive experiments on graph and textual datasets validate the effectiveness of Faith-DARE. Codes are available at https://github.com/yuelinan/DARE.\",\"PeriodicalId\":13426,\"journal\":{\"name\":\"IEEE Transactions on Pattern Analysis and Machine Intelligence\",\"volume\":\"19 1\",\"pages\":\"\"},\"PeriodicalIF\":20.8000,\"publicationDate\":\"2025-07-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Pattern Analysis and Machine Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1109/tpami.2025.3592313\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Pattern Analysis and Machine Intelligence","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/tpami.2025.3592313","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Towards Reliable and Faithful Explanations: A Disentanglement-Augmented Approach for Selective Rationalization.
The pursuit of model explainability has prompted selective rationalization (a.k.a. rationale extraction), which identifies important features (i.e., rationales) in the original input to support prediction results. Existing methods typically adopt a cascaded architecture in which a selector extracts rationales from the input and a predictor then makes predictions based on the selected rationales. However, these approaches often neglect the information contained in the non-rationale parts, leaving the input underutilized. Therefore, in our prior work, we introduced the Disentanglement-Augmented Rationale Extraction (DARE) method, which disentangles the input into rationale and non-rationale components and enhances the rationale representation by minimizing the mutual information between the two. While DARE demonstrates strong rationalization performance, it may still rely on shortcuts in the training distribution, yielding unfaithful rationales. To address this, in this paper we propose Faith-DARE, an extension of DARE that extracts more reliable rationales by mitigating shortcut dependencies. Specifically, we treat the non-rationale features identified by DARE as environments that are decorrelated from the predictions. By shuffling these environments and recombining them with rationales, we generate counterfactual samples and identify invariant rationales that remain predictive across shifted distributions. Extensive experiments on graph and textual datasets validate the effectiveness of Faith-DARE. Code is available at https://github.com/yuelinan/DARE.
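To make the described pipeline concrete, the following is a minimal PyTorch sketch of the idea, not the authors' implementation (their actual code is at the repository linked above). The selector produces a token-level mask via a Gumbel-Sigmoid relaxation; the input is split into rationale and non-rationale ("environment") representations; a simple cross-correlation penalty stands in for the paper's mutual-information objective; and a within-batch shuffle of environments builds counterfactual samples on which predictions are encouraged to stay invariant. All module choices, names, and hyperparameters here are illustrative assumptions, and the predictor is given both parts purely so the environment shuffle has a visible effect.

# Minimal sketch of a selector-predictor rationalizer with a DARE-style
# disentanglement term and a Faith-DARE-style environment shuffle.
# Everything here (GRU encoder, Gumbel-Sigmoid mask, cross-correlation
# penalty as a cheap stand-in for a mutual-information bound, KL-based
# invariance term) is an illustrative assumption, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RationaleModel(nn.Module):
    def __init__(self, vocab_size=10000, d=128, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)
        self.selector = nn.Linear(d, 1)               # per-token rationale logits
        self.encoder = nn.GRU(d, d, batch_first=True)
        self.predictor = nn.Linear(2 * d, n_classes)  # sees rationale + environment

    def encode(self, x):
        _, hn = self.encoder(x)        # hn: (1, B, d)
        return hn.squeeze(0)           # (B, d)

    def forward(self, tokens, tau=0.5):
        h = self.embed(tokens)                         # (B, T, d)
        logits = self.selector(h).squeeze(-1)          # (B, T)
        # Binary-Concrete (Gumbel-Sigmoid) relaxation of a hard token mask,
        # with a straight-through estimator for the forward pass.
        u = torch.rand_like(logits)
        noise = torch.log(u + 1e-9) - torch.log(1 - u + 1e-9)
        soft = torch.sigmoid((logits + noise) / tau)
        mask = (soft > 0.5).float() + soft - soft.detach()
        z_r = self.encode(h * mask.unsqueeze(-1))         # rationale representation
        z_nr = self.encode(h * (1 - mask).unsqueeze(-1))  # non-rationale ("environment")
        return z_r, z_nr

def decorrelation_penalty(z_r, z_nr):
    # Cheap stand-in for minimizing mutual information: penalize the
    # cross-correlation between rationale and non-rationale features.
    z_r = (z_r - z_r.mean(0)) / (z_r.std(0) + 1e-6)
    z_nr = (z_nr - z_nr.mean(0)) / (z_nr.std(0) + 1e-6)
    c = (z_r.T @ z_nr) / z_r.size(0)                  # (d, d) correlation matrix
    return (c ** 2).mean()

def faith_dare_step(model, tokens, labels, lam=0.1, mu=1.0):
    z_r, z_nr = model(tokens)
    p = model.predictor(torch.cat([z_r, z_nr], dim=-1))
    task_loss = F.cross_entropy(p, labels)
    mi_loss = decorrelation_penalty(z_r, z_nr)
    # Environment shuffle: recombine each rationale with another sample's
    # non-rationale features to build counterfactuals; an invariant rationale
    # should still predict the same label under the shifted environment.
    perm = torch.randperm(tokens.size(0))
    p_cf = model.predictor(torch.cat([z_r, z_nr[perm]], dim=-1))
    inv_loss = F.cross_entropy(p_cf, labels) + F.kl_div(
        F.log_softmax(p_cf, dim=-1), F.softmax(p, dim=-1).detach(),
        reduction="batchmean")
    return task_loss + lam * mi_loss + mu * inv_loss

# Toy usage on random data.
model = RationaleModel()
tokens = torch.randint(0, 10000, (8, 32))
labels = torch.randint(0, 2, (8,))
loss = faith_dare_step(model, tokens, labels)
loss.backward()

The within-batch permutation is what makes counterfactual generation cheap in this sketch: no new inputs are synthesized, yet each rationale is evaluated under an environment drawn from a different sample, which is the distribution shift against which invariance is rewarded.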
Journal Introduction:
The IEEE Transactions on Pattern Analysis and Machine Intelligence publishes articles on all traditional areas of computer vision and image understanding, all traditional areas of pattern analysis and recognition, and selected areas of machine intelligence, with a particular emphasis on machine learning for pattern analysis. Areas such as techniques for visual search, document and handwriting analysis, medical image analysis, video and image sequence analysis, content-based retrieval of image and video, face and gesture recognition, and relevant specialized hardware and/or software architectures are also covered.