Explainable Artificial Intelligence in Drug Discovery: Bridging Predictive Power and Mechanistic Insight

Antonio Lavecchia
Wiley Interdisciplinary Reviews: Computational Molecular Science, 15(5), 27 September 2025. DOI: 10.1002/wcms.70049
PDF: https://wires.onlinelibrary.wiley.com/doi/epdf/10.1002/wcms.70049
Explainable artificial intelligence (XAI) is increasingly essential in drug discovery, where interpretability and trust must accompany predictive accuracy. As deep learning models, particularly deep neural networks (DNNs) and graph neural networks (GNNs), improve molecular property prediction, de novo design, and toxicity estimation, transparent and mechanistically meaningful insights become critical. This article classifies major XAI strategies in computational molecular science, including gradient-based attribution, perturbation analysis, surrogate modeling, counterfactual reasoning, and self-explaining architectures. Molecular representations, such as fingerprints, SMILES, molecular graphs, and latent embeddings, are evaluated for their impact on explanation fidelity. An evaluation framework is outlined using metrics such as fidelity, stability, completeness, sparsity, and usability, with emphasis on integration into drug discovery workflows. The discussion also highlights emerging directions, including neuro-symbolic systems and physics-informed networks that embed mechanistic constraints into statistical models. By aligning algorithmic transparency with pharmacological reasoning, XAI not only demystifies black-box models but also supports scientific insight, regulatory compliance, and ethical AI deployment in pharmaceutical research.
About the journal:
Computational molecular sciences harness the power of rigorous chemical and physical theories, employing computer-based modeling, specialized hardware, software development, algorithm design, and database management to explore and illuminate every facet of molecular sciences. These interdisciplinary approaches form a bridge between chemistry, biology, and materials sciences, establishing connections with adjacent application-driven fields in both chemistry and biology. WIREs Computational Molecular Science stands as a platform to comprehensively review and spotlight research from these dynamic and interconnected fields.