Explainable Artificial Intelligence in Drug Discovery: Bridging Predictive Power and Mechanistic Insight

Antonio Lavecchia
Journal: Wiley Interdisciplinary Reviews: Computational Molecular Science, 15(5)
DOI: 10.1002/wcms.70049
Published: 2025-09-27 · Impact Factor: 27.0 · JCR Q1 (Chemistry, Multidisciplinary) · CAS Region 2 (Chemistry)
Full text: https://wires.onlinelibrary.wiley.com/doi/10.1002/wcms.70049
Citations: 0

Abstract

Explainable artificial intelligence (XAI) is increasingly essential in drug discovery, where interpretability and trust must accompany predictive accuracy. As deep learning models, particularly deep neural networks (DNNs) and graph neural networks (GNNs), enhance molecular property prediction, de novo design, and toxicity estimation, transparent, mechanistically meaningful insights become critical. This article classifies major XAI strategies in computational molecular science, including gradient-based attribution, perturbation analysis, surrogate modeling, counterfactual reasoning, and self-explaining architectures. Molecular representations, such as fingerprints, SMILES, molecular graphs, and latent embeddings, are evaluated for their impact on explanation fidelity. An evaluation framework is outlined using metrics such as fidelity, stability, completeness, sparsity, and usability, with emphasis on integration into drug discovery workflows. The discussion also highlights emerging directions, including neuro-symbolic systems and physics-informed networks that embed mechanistic constraints into statistical models. By aligning algorithmic transparency with pharmacological reasoning, XAI not only demystifies black-box models but also supports scientific insight, regulatory compliance, and ethical AI deployment in pharmaceutical research.
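The abstract names gradient-based attribution among the XAI strategies it surveys. As a minimal sketch (not the article's implementation), input-times-gradient attribution on a toy differentiable fingerprint model might look as follows; the 16-bit fingerprint, weights, and bias are all illustrative stand-ins for a trained property predictor:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for a trained property predictor: a logistic model over
# a 16-bit molecular fingerprint (weights are illustrative, not fitted).
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = -0.5

x = rng.integers(0, 2, size=16).astype(float)  # hypothetical fingerprint
p = sigmoid(w @ x + b)                          # predicted probability

# Gradient-based attribution: for this model, d p / d x_i = p (1 - p) w_i;
# input-times-gradient zeroes out unset bits and highlights the set bits
# (substructures) that drive the prediction.
grad = p * (1.0 - p) * w
attribution = x * grad

top = np.argsort(-np.abs(attribution))[:3]
print("prediction:", round(float(p), 3))
print("top attributed bits:", top.tolist())
```

For a real DNN or GNN, the gradient would come from automatic differentiation rather than a closed form, but the attribution step is the same elementwise product.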

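Perturbation analysis, another strategy the abstract classifies, asks how the prediction changes when part of the input is removed. A minimal occlusion sketch on the same kind of toy fingerprint model (again, all weights and inputs are hypothetical) could read:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    return sigmoid(w @ x + b)

# Illustrative model and fingerprint, not a fitted predictor.
rng = np.random.default_rng(1)
w = rng.normal(size=16)
b = -0.5
x = rng.integers(0, 2, size=16).astype(float)

base = predict(x, w, b)

# Occlusion: switch each set bit off in turn and record the change in
# output. A large drop means that bit (substructure) mattered.
importance = np.zeros_like(x)
for i in np.flatnonzero(x):
    x_pert = x.copy()
    x_pert[i] = 0.0
    importance[i] = base - predict(x_pert, w, b)

ranked = np.argsort(-np.abs(importance))
print("baseline prediction:", round(float(base), 3))
print("most influential bit:", int(ranked[0]))
```

Unlike gradient attribution, occlusion needs no access to model internals, which is why it is often used to sanity-check gradient-based explanations (a fidelity comparison in the abstract's terms).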
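Counterfactual reasoning, the remaining model-agnostic strategy in the abstract's taxonomy, searches for the smallest input change that flips the model's decision; sparsity (few edits) is what makes the counterfactual chemist-readable. A greedy bit-flip sketch on the same illustrative fingerprint model, purely as an assumption-laden toy and not the article's method:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative model and fingerprint, not a fitted predictor.
rng = np.random.default_rng(2)
w = rng.normal(size=16)
b = 0.0
x = rng.integers(0, 2, size=16).astype(float)

def predict(v):
    return sigmoid(w @ v + b)

label = predict(x) >= 0.5

# Greedy counterfactual: flip one bit at a time, always the flip that moves
# the output furthest toward the opposite class, until the label changes.
# Few flips = a sparse "what would change the call" explanation.
cf = x.copy()
flips = []
while (predict(cf) >= 0.5) == label and len(flips) < 16:
    candidates = [i for i in range(16) if i not in flips]
    if label:  # want a lower output: pick the flip that decreases it most
        best = min(candidates, key=lambda i: predict(np.where(np.arange(16) == i, 1 - cf, cf)))
    else:      # want a higher output: pick the flip that increases it most
        best = max(candidates, key=lambda i: predict(np.where(np.arange(16) == i, 1 - cf, cf)))
    cf[best] = 1 - cf[best]
    flips.append(best)

print("original label:", bool(label), "-> counterfactual label:", bool(predict(cf) >= 0.5))
print("bits flipped:", flips)
```

Real molecular counterfactual methods search over chemically valid edits (e.g., graph modifications) rather than raw bit flips, but the objective, minimal change with maximal prediction shift, is the same.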

Source Journal

Wiley Interdisciplinary Reviews: Computational Molecular Science
Categories: Chemistry, Multidisciplinary; Mathematical & Computational Biology
CiteScore: 28.90
Self-citation rate: 1.80%
Articles per year: 52
Review turnaround: 6-12 weeks

Aims & scope: Computational molecular sciences harness the power of rigorous chemical and physical theories, employing computer-based modeling, specialized hardware, software development, algorithm design, and database management to explore and illuminate every facet of molecular sciences. These interdisciplinary approaches form a bridge between chemistry, biology, and materials sciences, establishing connections with adjacent application-driven fields in both chemistry and biology. WIREs Computational Molecular Science stands as a platform to comprehensively review and spotlight research from these dynamic and interconnected fields.