Bridging AI and explainability in civil engineering: the Yin-Yang of predictive power and interpretability

Monjurul Hasan, Ming Lu
{"title":"Bridging AI and explainability in civil engineering: the Yin-Yang of predictive power and interpretability","authors":"Monjurul Hasan,&nbsp;Ming Lu","doi":"10.1007/s43503-025-00066-6","DOIUrl":null,"url":null,"abstract":"<div><p>Civil engineering relies on data from experiments or simulations to calibrate models that approximate system behaviors. This paper examines machine learning (ML) algorithms for AI-driven decision support in civil engineering, specifically construction engineering and management, where complex input–output relationships demand both predictive accuracy and interpretability. Explainable AI (XAI) is critical for safety and compliance-sensitive applications, ensuring transparency in AI decisions. The literature review identifies key XAI evaluation attributes—model type, explainability, perspective, and interpretability and assesses the Enhanced Model Tree (EMT), a novel method demonstrating strong potential for civil engineering applications compared to commonly applied ML algorithms. The study highlights the need to balance AI’s predictive power with XAI’s transparency, akin to the Yin–Yang philosophy: AI advances in efficiency and optimization, while XAI provides logical reasoning behind conclusions. Drawing on insights from the literature, the study proposes a tailored XAI assessment framework addressing civil engineering's unique needs—problem context, data constraints, and model explainability. By formalizing this synergy, the research fosters trust in AI systems, enabling safer and more socially responsible outcomes. The findings underscore XAI’s role in bridging the gap between complex AI models and end-user accountability, ensuring AI’s full potential is realized in the field.</p></div>","PeriodicalId":72138,"journal":{"name":"AI in civil engineering","volume":"4 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43503-025-00066-6.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI in civil engineering","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43503-025-00066-6","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Civil engineering relies on data from experiments or simulations to calibrate models that approximate system behaviors. This paper examines machine learning (ML) algorithms for AI-driven decision support in civil engineering, specifically construction engineering and management, where complex input–output relationships demand both predictive accuracy and interpretability. Explainable AI (XAI) is critical for safety- and compliance-sensitive applications, ensuring transparency in AI decisions. The literature review identifies key XAI evaluation attributes (model type, explainability, perspective, and interpretability) and assesses the Enhanced Model Tree (EMT), a novel method that demonstrates strong potential for civil engineering applications compared with commonly applied ML algorithms. The study highlights the need to balance AI's predictive power with XAI's transparency, akin to the Yin–Yang philosophy: AI advances efficiency and optimization, while XAI supplies the logical reasoning behind its conclusions. Drawing on insights from the literature, the study proposes a tailored XAI assessment framework addressing civil engineering's unique needs: problem context, data constraints, and model explainability. By formalizing this synergy, the research fosters trust in AI systems, enabling safer and more socially responsible outcomes. The findings underscore XAI's role in bridging the gap between complex AI models and end-user accountability, ensuring AI's full potential is realized in the field.
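To make the predictive-power-versus-interpretability trade-off described above concrete, the minimal sketch below contrasts a depth-limited decision tree (whose splits read as auditable if-then rules) with a random-forest ensemble (typically more accurate but opaque) on synthetic tabular data. This is only an illustration of the trade-off under assumed conditions; it is not the paper's Enhanced Model Tree (EMT) or its evaluation setup, and the feature names (crew_size, quantity, congestion) and data-generating process are hypothetical.

```python
# Illustrative sketch (not the paper's EMT): interpretable tree vs. black-box ensemble.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, export_text
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical tabular data, e.g. predicting an activity duration from crew size,
# quantity of work, and site congestion (entirely synthetic, for illustration only).
X = rng.uniform(0, 1, size=(500, 3))
y = 10 * X[:, 0] + 5 * X[:, 1] ** 2 + 2 * X[:, 2] + rng.normal(0, 0.5, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable side: a shallow tree whose structure can be read as explicit rules.
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_train, y_train)

# Predictive-power side: a higher-capacity ensemble with no single auditable rule set.
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

print("tree MAE:  ", mean_absolute_error(y_test, tree.predict(X_test)))
print("forest MAE:", mean_absolute_error(y_test, forest.predict(X_test)))

# The tree's full decision logic is printable, which is the transparency XAI targets.
print(export_text(tree, feature_names=["crew_size", "quantity", "congestion"]))
```

On data like this, the ensemble usually posts a lower error while the shallow tree yields a rule set an engineer or reviewer can check line by line; an EMT-style approach aims to narrow that gap rather than force a choice between the two.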
