Towards unveiling sensitive and decisive patterns in explainable AI with a case study in geometric deep learning

Nature Machine Intelligence · Impact Factor 18.8 · JCR Q1, Computer Science, Artificial Intelligence · CAS Tier 1, Computer Science
Jiajun Zhu, Siqi Miao, Rex Ying, Pan Li
Journal: Nature Machine Intelligence, vol. 7, no. 3, pp. 471–483
DOI: 10.1038/s42256-025-00998-9
Published: 17 March 2025
URL: https://www.nature.com/articles/s42256-025-00998-9
Cited by: 0

Abstract

The interpretability of machine learning models has gained increasing attention, particularly in scientific domains where high precision and accountability are crucial. This research focuses on distinguishing between two critical data patterns—sensitive patterns (model related) and decisive patterns (task related)—which are commonly used as model interpretations but often lead to confusion. Specifically, this study compares the effectiveness of two main streams of interpretation methods: post-hoc methods and self-interpretable methods, in detecting these patterns. Recently, geometric deep learning (GDL) has shown superior predictive performance in various scientific applications, creating an urgent need for principled interpretation methods. Here, therefore, we conduct our study using several representative GDL applications as case studies. We evaluate 13 interpretation methods applied to 3 major GDL backbone models, using 4 scientific datasets to assess how well these methods identify sensitive and decisive patterns. Our findings indicate that post-hoc methods tend to provide interpretations better aligned with sensitive patterns, whereas certain self-interpretable methods exhibit strong and stable performance in detecting decisive patterns. Moreover, our study offers valuable insights into improving the reliability of these interpretation methods. For example, ensembling post-hoc interpretations from multiple models trained on the same task can effectively uncover the task’s decisive patterns.

Editor’s summary: Interpreting decisions made by machine learning systems remains difficult. Here Zhu et al. test interpretability methods on their ability to identify model-related and task-related patterns.
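The ensembling idea mentioned in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's actual procedure: the attribution scores are randomly generated stand-ins for the per-element importance scores a post-hoc explainer would produce for each independently trained model, and `ensemble_interpretation` simply averages them and reads off the top-k elements as the estimated decisive pattern.

```python
import numpy as np

# Hypothetical per-element attribution scores from three models trained on
# the same task. In practice each array would come from running a post-hoc
# explainer (e.g. a gradient-based saliency method) on one trained model.
rng = np.random.default_rng(0)
n_elements = 8
attributions = [rng.random(n_elements) for _ in range(3)]

def ensemble_interpretation(attr_list, k=3):
    """Average post-hoc attributions across models and return the indices
    of the k highest-scoring elements as the estimated decisive pattern."""
    mean_attr = np.mean(attr_list, axis=0)    # averaging damps model-specific (sensitive) signal
    top_k = np.argsort(mean_attr)[::-1][:k]   # elements with highest average importance
    return mean_attr, top_k

mean_attr, decisive = ensemble_interpretation(attributions)
print(decisive)
```

The intuition is that sensitive patterns vary from one trained model to the next, so they average out across the ensemble, while decisive patterns are shared across all models trained on the same task and therefore survive the averaging.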


Source journal metrics: CiteScore 36.90 · Self-citation rate 2.10% · Annual articles 127
About the journal: Nature Machine Intelligence is a distinguished publication that presents original research and reviews on various topics in machine learning, robotics, and AI. Our focus extends beyond these fields, exploring their profound impact on other scientific disciplines, as well as societal and industrial aspects. We recognize limitless possibilities wherein machine intelligence can augment human capabilities and knowledge in domains like scientific exploration, healthcare, medical diagnostics, and the creation of safe and sustainable cities, transportation, and agriculture. Simultaneously, we acknowledge the emergence of ethical, social, and legal concerns due to the rapid pace of advancements. To foster interdisciplinary discussions on these far-reaching implications, Nature Machine Intelligence serves as a platform for dialogue facilitated through Comments, News Features, News & Views articles, and Correspondence. Our goal is to encourage a comprehensive examination of these subjects. Similar to all Nature-branded journals, Nature Machine Intelligence operates under the guidance of a team of skilled editors. We adhere to a fair and rigorous peer-review process, ensuring high standards of copy-editing and production, swift publication, and editorial independence.