A Multi-Module Explainable Artificial Intelligence Framework for Project Risk Management: Enhancing Transparency in Decision-making

Bodrunnessa Badhon, Ripon K. Chakrabortty, Sreenatha G. Anavatti, Mario Vanhoucke

Engineering Applications of Artificial Intelligence, Volume 148, Article 110427 (published 2025-03-08). DOI: 10.1016/j.engappai.2025.110427
Citations: 0
Abstract
The remarkable advancements in machine learning (ML) have led to its extensive adoption in Project Risk Management (PRM), leveraging its powerful predictive capabilities and data-driven insights to support proactive decision-making. Nevertheless, the “black-box” nature of ML models obscures the reasoning behind predictions, undermining transparency and trust. To address this, existing explainable artificial intelligence (XAI) techniques, such as Local Interpretable Model-agnostic Explanations (LIME), Global Priors-based LIME (G-LIME), and SHapley Additive exPlanations (SHAP), have been applied to interpret black-box models. Yet they face considerable limitations in PRM: an inability to model cascading effects and multi-level dependencies among risk factors, inconsistencies arising from random sampling, and a failure to capture non-linear interactions in high-dimensional risk data. In response to these shortcomings, this paper proposes the Multi-Module eXplainable Artificial Intelligence framework for Project Risk Management (MMXAI-PRM), a novel approach designed to address the unique demands of PRM. The framework consists of three modules: the Risk Relationship Insight Module (RRIM), which models risk dependencies using a Knowledge Graph (KG); the Risk Factor Influence Analysis Module (RFIAM), which introduces Conditional Tabular Generative Adversarial Network-aided Local Interpretable Model-agnostic Explanations using Kernel Ridge Regression (CTGAN-LIME-KR) to ensure explanation consistency and handle non-linearity; and the Visualization and Interpretation Module (VIM), which synthesizes these insights into an interpretable, chain-based representation. Extensive experiments demonstrate that MMXAI-PRM delivers more consistent, stable, and accurate explanations than existing XAI methods. By improving interpretability, it enhances trust in AI-driven risk predictions and equips project managers with actionable insights, advancing decision-making in PRM.
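To make the RFIAM idea concrete, the sketch below shows how a CTGAN-LIME-KR-style explainer could be assembled from off-the-shelf components (the ctgan package and scikit-learn): a CTGAN replaces LIME's random perturbations with on-manifold synthetic neighbours, and a kernel ridge regression replaces the linear surrogate. This is a minimal illustration under stated assumptions, not the paper's implementation; the function name ctgan_lime_kr, the RBF locality weighting, and the finite-difference attribution step are hypothetical choices.

import numpy as np
import pandas as pd
from ctgan import CTGAN                        # pip install ctgan
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics.pairwise import rbf_kernel


def ctgan_lime_kr(black_box_predict, train_df, instance, n_samples=500):
    """Explain one prediction via CTGAN-sampled neighbours and a KR surrogate.

    Assumes all-numeric features (otherwise pass discrete_columns to CTGAN
    and encode them); black_box_predict maps a DataFrame to a 1-D score array.
    """
    # 1. Learn the joint data distribution so perturbations stay realistic,
    #    mitigating the inconsistency of LIME's purely random sampling.
    generator = CTGAN(epochs=100)
    generator.fit(train_df)
    neighbours = generator.sample(n_samples)

    # 2. Weight each synthetic neighbour by proximity to the explained
    #    instance (an RBF kernel, mirroring LIME's locality weighting).
    weights = rbf_kernel(neighbours.values,
                         instance.values.reshape(1, -1)).ravel()

    # 3. Fit a kernel ridge regression surrogate to the black-box outputs;
    #    unlike a linear surrogate, it can follow non-linear local behaviour.
    surrogate = KernelRidge(kernel="rbf", alpha=1.0)
    surrogate.fit(neighbours, black_box_predict(neighbours),
                  sample_weight=weights)

    # 4. Read per-feature influences off the surrogate by finite differences
    #    around the instance (one simple attribution scheme; the paper's
    #    exact formulation may differ).
    base = surrogate.predict(instance.values.reshape(1, -1))[0]
    influences = {}
    for j, col in enumerate(train_df.columns):
        shifted = instance.values.astype(float).copy()
        std = float(train_df[col].std())
        shifted[j] += std if std > 0 else 1.0
        influences[col] = surrogate.predict(shifted.reshape(1, -1))[0] - base
    return influences

One design point worth noting: because the CTGAN is fitted once on the training data and reused across calls, the neighbourhood samples come from a fixed learned distribution, which is one plausible route to the more repeatable attributions the abstract claims over LIME's per-call random perturbations.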
About the journal:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, with remarkable advancements emerging across machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI applied to real-world engineering problems, validated on publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.