Explainable Artificial Intelligence (XAI) for Material Design and Engineering Applications: A Quantitative Computational Framework

Bokai Liu, Pengju Liu, Weizhuo Lu, Thomas Olofsson

International Journal of Mechanical System Dynamics, vol. 5, no. 2, pp. 236-265, published 2025-05-20. DOI: 10.1002/msd2.70017 (https://onlinelibrary.wiley.com/doi/10.1002/msd2.70017)
Abstract
The advancement of artificial intelligence (AI) in material design and engineering has led to significant improvements in predictive modeling of material properties. However, the lack of interpretability in machine learning (ML)-based material informatics presents a major barrier to its practical adoption. This study proposes a quantitative computational framework that integrates ML models with explainable artificial intelligence (XAI) techniques to enhance both predictive accuracy and interpretability in material property prediction. The framework incorporates a structured pipeline covering data processing, feature selection, model training, performance evaluation, explainability analysis, and real-world deployment. It is validated through a representative case study on the prediction of high-performance concrete (HPC) compressive strength, using a comparative analysis of Random Forest, XGBoost, Support Vector Regression (SVR), and Deep Neural Network (DNN) models. The results demonstrate that XGBoost achieves the highest predictive performance among the evaluated models, while SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) provide detailed insights into feature importance and material interactions. Additionally, deploying the trained model as a cloud-based Flask-Gunicorn API enables real-time inference, ensuring scalability and accessibility for industrial and research applications. The proposed framework addresses key limitations of existing ML approaches by integrating advanced explainability techniques, systematically handling nonlinear feature interactions, and providing a scalable deployment strategy. This study contributes to the development of interpretable and deployable AI-driven material informatics, bridging the gap between data-driven predictions and fundamental material science principles.
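To make the train-evaluate-explain loop described in the abstract concrete, the sketch below fits an XGBoost regressor on an HPC mixture dataset, scores it on held-out data, and attributes its predictions to input features with SHAP. This is a minimal illustration under stated assumptions, not the authors' code: the file name hpc_concrete.csv, the column names, and the hyperparameters are hypothetical.

```python
# Minimal sketch of the train-evaluate-explain loop from the abstract.
# Assumptions: CSV path, column names, and hyperparameters are illustrative.
import pandas as pd
import shap
import xgboost as xgb
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Hypothetical HPC dataset: mixture proportions and curing age as features,
# compressive strength (MPa) as the regression target.
df = pd.read_csv("hpc_concrete.csv")
X = df.drop(columns=["compressive_strength"])
y = df["compressive_strength"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Gradient-boosted trees, the best-performing model family in the study.
model = xgb.XGBRegressor(n_estimators=500, max_depth=6, learning_rate=0.05)
model.fit(X_train, y_train)
print("Held-out R^2:", r2_score(y_test, model.predict(X_test)))

# SHAP attributes each prediction to the input features; TreeExplainer
# computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)  # global feature-importance view
```

The same split and scoring could be repeated for Random Forest, SVR, and a DNN to mirror the model comparison the study reports, with LIME applied analogously for local, instance-level explanations.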
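The deployment step can be sketched in the same spirit: a small Flask app that loads the persisted model and serves predictions over HTTP. Only the Flask-Gunicorn pairing comes from the abstract; the endpoint name, payload schema, and model file are assumptions.

```python
# Sketch of the cloud deployment step: a Flask app serving the trained model
# for real-time inference. Endpoint name, payload schema, and model file are
# assumptions, not the paper's actual API.
import joblib
import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("xgb_hpc_model.joblib")  # persisted trained model

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON object whose keys match the training feature names,
    # e.g. {"cement": 350.0, "water": 180.0, "age": 28, ...}
    features = pd.DataFrame([request.get_json()])
    strength = float(model.predict(features)[0])
    return jsonify({"compressive_strength_mpa": strength})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

In production this would typically run behind Gunicorn, e.g. `gunicorn -w 4 -b 0.0.0.0:8000 app:app` (assuming the module is saved as app.py), which places several worker processes behind one socket for concurrent, scalable inference.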