{"title":"环境与地球系统科学中的人工智能:可解释性与可信度","authors":"Josepha Schiller, Stefan Stiller, Masahiro Ryo","doi":"10.1007/s10462-025-11165-2","DOIUrl":null,"url":null,"abstract":"<div><p>Explainable artificial intelligence (XAI) methods have recently emerged to gain insights into complex machine learning models. XAI can be promising for environmental and Earth system science because high-stakes decision-making for management and planning requires justification based on evidence and systems understanding. However, an overview of XAI applications and trust in AI in environmental and Earth system science is still missing. To close this gap, we reviewed 575 articles. XAI applications are popular in various domains, including ecology, engineering, geology, remote sensing, water resources, meteorology, atmospheric sciences, geochemistry, and geophysics. XAI applications focused primarily on understanding and predicting anthropogenic changes in geospatial patterns and impacts on human society and natural resources, especially biological species distributions, vegetation, air quality, transportation, and climate-water related topics, including risk and management. Among XAI methods, the SHAP and Shapley methods were the most popular (135 articles), followed by feature importance (27), partial dependence plots (22), LIME (21), and saliency maps (15). Although XAI methods are often argued to increase trust in model predictions, only seven studies (1.2%) addressed trustworthiness as a core research objective. This gap is critical because understanding the relationship between explainability and trust is lacking. While XAI applications continue to grow, they do not necessarily enhance trust. Hence, more studies on how to strengthen trust in AI applications are critically needed. Finally, this review underlines the recommendation of developing a “human-centered” XAI framework that incorporates the distinct views and needs of multiple stakeholder groups to enable trustworthy decision-making.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"58 10","pages":""},"PeriodicalIF":13.9000,"publicationDate":"2025-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11165-2.pdf","citationCount":"0","resultStr":"{\"title\":\"Artificial intelligence in environmental and Earth system sciences: explainability and trustworthiness\",\"authors\":\"Josepha Schiller, Stefan Stiller, Masahiro Ryo\",\"doi\":\"10.1007/s10462-025-11165-2\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Explainable artificial intelligence (XAI) methods have recently emerged to gain insights into complex machine learning models. XAI can be promising for environmental and Earth system science because high-stakes decision-making for management and planning requires justification based on evidence and systems understanding. However, an overview of XAI applications and trust in AI in environmental and Earth system science is still missing. To close this gap, we reviewed 575 articles. XAI applications are popular in various domains, including ecology, engineering, geology, remote sensing, water resources, meteorology, atmospheric sciences, geochemistry, and geophysics. 
XAI applications focused primarily on understanding and predicting anthropogenic changes in geospatial patterns and impacts on human society and natural resources, especially biological species distributions, vegetation, air quality, transportation, and climate-water related topics, including risk and management. Among XAI methods, the SHAP and Shapley methods were the most popular (135 articles), followed by feature importance (27), partial dependence plots (22), LIME (21), and saliency maps (15). Although XAI methods are often argued to increase trust in model predictions, only seven studies (1.2%) addressed trustworthiness as a core research objective. This gap is critical because understanding the relationship between explainability and trust is lacking. While XAI applications continue to grow, they do not necessarily enhance trust. Hence, more studies on how to strengthen trust in AI applications are critically needed. Finally, this review underlines the recommendation of developing a “human-centered” XAI framework that incorporates the distinct views and needs of multiple stakeholder groups to enable trustworthy decision-making.</p></div>\",\"PeriodicalId\":8449,\"journal\":{\"name\":\"Artificial Intelligence Review\",\"volume\":\"58 10\",\"pages\":\"\"},\"PeriodicalIF\":13.9000,\"publicationDate\":\"2025-07-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s10462-025-11165-2.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Intelligence Review\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10462-025-11165-2\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence Review","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10462-025-11165-2","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Artificial intelligence in environmental and Earth system sciences: explainability and trustworthiness
Abstract:
Explainable artificial intelligence (XAI) methods have recently emerged to provide insight into complex machine learning models. XAI is promising for environmental and Earth system science because high-stakes decision-making for management and planning requires justification based on evidence and systems understanding. However, an overview of XAI applications and of trust in AI in environmental and Earth system science is still missing. To close this gap, we reviewed 575 articles. XAI applications are popular across domains, including ecology, engineering, geology, remote sensing, water resources, meteorology, atmospheric sciences, geochemistry, and geophysics. They focus primarily on understanding and predicting anthropogenic changes in geospatial patterns and their impacts on human society and natural resources, especially biological species distributions, vegetation, air quality, transportation, and climate- and water-related topics, including risk and management. Among XAI methods, SHAP and Shapley-value methods were the most popular (135 articles), followed by feature importance (27), partial dependence plots (22), LIME (21), and saliency maps (15). Although XAI methods are often argued to increase trust in model predictions, only seven studies (1.2%) addressed trustworthiness as a core research objective. This gap is critical because the relationship between explainability and trust remains poorly understood. While XAI applications continue to grow, they do not necessarily enhance trust. Hence, more studies on how to strengthen trust in AI applications are critically needed. Finally, this review underlines the recommendation to develop a “human-centered” XAI framework that incorporates the distinct views and needs of multiple stakeholder groups to enable trustworthy decision-making.
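For orientation, the sketch below shows what a typical SHAP-plus-partial-dependence analysis of the kind counted in the review might look like in Python. It is a minimal illustration under assumed tooling (scikit-learn and the shap package); the synthetic data, feature names, and model choice are hypothetical, not taken from any study in the review.

# A minimal, illustrative SHAP workflow of the kind the review surveys.
# Assumptions (not from the paper): scikit-learn and the shap package are
# installed; the synthetic "environmental" features are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
n = 500
# Hypothetical predictors: temperature, precipitation, vegetation index (NDVI).
X = rng.normal(size=(n, 3))
feature_names = ["temperature", "precipitation", "ndvi"]
# Hypothetical response, e.g. a proxy for species abundance.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# SHAP: TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global importance = mean absolute SHAP value per feature.
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: mean |SHAP| = {imp:.3f}")

# Partial dependence: the model's average prediction as one feature varies.
pd_result = partial_dependence(model, X, features=[1], grid_resolution=20)
print("partial dependence of prediction on precipitation:",
      np.round(pd_result["average"][0], 2))

The other surveyed methods follow the same post hoc pattern: fit a predictive model first, then apply an attribution technique (LIME for local surrogate explanations, saliency maps for gradient-based attributions in neural networks) to explain it.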
Journal description:
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.