On Robustness of the Explanatory Power of Machine Learning Models: Insights From a New Explainable AI Approach Using Sensitivity Analysis

IF 4.6 · CAS Tier 1 (Earth Science) · JCR Q2 (Environmental Sciences)
Banamali Panigrahi, Saman Razavi, Lorne E. Doig, Blanchard Cordell, Hoshin V. Gupta, Karsten Liber
Journal: Water Resources Research
DOI: 10.1029/2024wr037398
Published: 2025-03-18 (Journal Article)
Citations: 0

Abstract

Machine learning (ML) is increasingly considered the solution to environmental problems where limited or no physico-chemical process understanding exists. But in supporting high-stakes decisions, where the ability to explain possible solutions is key to their acceptability and legitimacy, ML can fall short. Here, we develop a method, rooted in formal sensitivity analysis, to uncover the primary drivers behind ML predictions. Unlike many methods for explainable artificial intelligence (XAI), this method (a) accounts for complex multi-variate distributional properties of data, common in environmental systems, (b) offers a global assessment of the input-output response surface formed by ML, rather than focusing solely on local regions around existing data points, and (c) is scalable and data-size independent, ensuring computational efficiency with large data sets. We apply this method to a suite of ML models predicting various water quality variables in a pilot-scale experimental pit lake. A critical finding is that subtle alterations in the design of some ML models (such as variations in random seed, functional class, hyperparameters, or data splitting) can lead to different interpretations of how outputs depend on inputs. Further, models from different ML families (decision trees, connectionists, or kernels) may focus on different aspects of the information provided by data, despite displaying similar predictive power. Overall, our results underscore the need to assess the explanatory robustness of ML models and advocate for using model ensembles to gain deeper insights into system drivers and improve prediction reliability.
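The paper's own method is rooted in formal sensitivity analysis and is not reproduced here. As a minimal, hypothetical illustration of the seed-robustness issue the abstract describes, the sketch below trains the same model class under different random seeds on synthetic data and compares the feature-importance rankings each fit implies, using scikit-learn's permutation importance (a stand-in explanation method, not the authors' approach):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Synthetic stand-in for water-quality data: 5 inputs, 1 output.
X, y = make_regression(n_samples=500, n_features=5, n_informative=3,
                       noise=10.0, random_state=0)

rankings = []
for seed in (1, 2, 3):
    model = RandomForestRegressor(n_estimators=100, random_state=seed).fit(X, y)
    imp = permutation_importance(model, X, y, n_repeats=10, random_state=seed)
    # Rank features from most to least important under this seed.
    rankings.append(tuple(np.argsort(-imp.importances_mean)))

# If the rankings differ across seeds, the model's *explanation* is not
# robust, even when predictive skill is nearly identical across fits.
robust = len(set(rankings)) == 1
print("feature rankings per seed:", rankings)
print("explanation robust across seeds:", robust)
```

The same loop can be repeated over model families (trees, neural networks, kernel methods) or data splits; per the abstract's recommendation, disagreement among ensemble members is itself diagnostic information about the system's drivers.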
Source journal: Water Resources Research (Environmental Sciences – Limnology)
CiteScore: 8.80
Self-citation rate: 13.00%
Articles per year: 599
Review time: 3.5 months
About the journal: Water Resources Research (WRR) is an interdisciplinary journal that focuses on hydrology and water resources. It publishes original research in the natural and social sciences of water, emphasizing the role of water in the Earth system, including physical, chemical, biological, and ecological processes in water resources research and management, as well as their social, policy, and public health implications. It encompasses observational, experimental, theoretical, analytical, numerical, and data-driven approaches that advance the science of water and its management. Submissions are evaluated for their novelty, accuracy, significance, and the broader implications of their findings.