{"title":"Bridging AI and explainability in civil engineering: the Yin-Yang of predictive power and interpretability","authors":"Monjurul Hasan, Ming Lu","doi":"10.1007/s43503-025-00066-6","DOIUrl":null,"url":null,"abstract":"<div><p>Civil engineering relies on data from experiments or simulations to calibrate models that approximate system behaviors. This paper examines machine learning (ML) algorithms for AI-driven decision support in civil engineering, specifically construction engineering and management, where complex input–output relationships demand both predictive accuracy and interpretability. Explainable AI (XAI) is critical for safety and compliance-sensitive applications, ensuring transparency in AI decisions. The literature review identifies key XAI evaluation attributes—model type, explainability, perspective, and interpretability and assesses the Enhanced Model Tree (EMT), a novel method demonstrating strong potential for civil engineering applications compared to commonly applied ML algorithms. The study highlights the need to balance AI’s predictive power with XAI’s transparency, akin to the Yin–Yang philosophy: AI advances in efficiency and optimization, while XAI provides logical reasoning behind conclusions. Drawing on insights from the literature, the study proposes a tailored XAI assessment framework addressing civil engineering's unique needs—problem context, data constraints, and model explainability. By formalizing this synergy, the research fosters trust in AI systems, enabling safer and more socially responsible outcomes. The findings underscore XAI’s role in bridging the gap between complex AI models and end-user accountability, ensuring AI’s full potential is realized in the field.</p></div>","PeriodicalId":72138,"journal":{"name":"AI in civil engineering","volume":"4 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43503-025-00066-6.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI in civil engineering","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43503-025-00066-6","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Civil engineering relies on data from experiments or simulations to calibrate models that approximate system behaviors. This paper examines machine learning (ML) algorithms for AI-driven decision support in civil engineering, specifically construction engineering and management, where complex input–output relationships demand both predictive accuracy and interpretability. Explainable AI (XAI) is critical for safety- and compliance-sensitive applications, ensuring transparency in AI decisions. The literature review identifies key XAI evaluation attributes (model type, explainability, perspective, and interpretability) and assesses the Enhanced Model Tree (EMT), a novel method that shows strong potential for civil engineering applications relative to commonly applied ML algorithms. The study highlights the need to balance AI’s predictive power with XAI’s transparency, akin to the Yin–Yang philosophy: AI advances efficiency and optimization, while XAI provides the logical reasoning behind conclusions. Drawing on insights from the literature, the study proposes a tailored XAI assessment framework addressing civil engineering’s unique needs: problem context, data constraints, and model explainability. By formalizing this synergy, the research fosters trust in AI systems, enabling safer and more socially responsible outcomes. The findings underscore XAI’s role in bridging the gap between complex AI models and end-user accountability, ensuring AI’s full potential is realized in the field.
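
The trade-off the abstract describes can be illustrated with a minimal sketch. The code below is not the paper's Enhanced Model Tree (EMT); it is an assumed, simplified comparison using scikit-learn on synthetic data, contrasting a shallow decision tree whose rules can be read directly (interpretability) with a gradient-boosting ensemble that typically predicts better but is harder to explain (predictive power). Feature names and dataset parameters are hypothetical.

```python
# Illustrative sketch only: NOT the paper's EMT method.
# Contrasts an interpretable shallow tree with a less transparent ensemble.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, export_text
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in for experimental or simulation data (hypothetical).
X, y = make_regression(n_samples=500, n_features=4, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: a shallow tree whose decision rules are human-readable.
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_train, y_train)
print("Shallow tree R^2:", round(tree.score(X_test, y_test), 3))
print(export_text(tree, feature_names=[f"x{i}" for i in range(4)]))

# Higher-capacity but less transparent model: gradient boosting.
gbr = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("Gradient boosting R^2:", round(gbr.score(X_test, y_test), 3))
```

Printing the exported rules next to the two R^2 scores makes the Yin-Yang framing concrete: the ensemble usually scores higher, while the tree's output can be audited line by line, which is what XAI evaluation attributes such as interpretability are meant to capture.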