A unified region and concept-level explainable artificial intelligence method for explainability and active learning of defect segmentation model
Huangyuan Wu, Bin Li, Lianfang Tian, Chao Dong, Wenzhi Liao
Engineering Applications of Artificial Intelligence, Volume 156, Article 111009 (published 2025-05-19)
DOI: 10.1016/j.engappai.2025.111009
https://www.sciencedirect.com/science/article/pii/S0952197625010097
Citations: 0
Abstract
Objective:
Although Artificial Intelligence (AI) methods have achieved great progress in defect segmentation tasks, their explainability remains a challenge because of their black-box nature. To ensure that prediction results can be understood and trusted by users, recent works have attempted to explain the model's decision process through Explainable Artificial Intelligence (XAI) methods.
Challenges:
However, existing XAI methods still have some limitations: (1) they explain model decisions from only a single perspective, which often introduces biased explanations; (2) few works consider how to leverage the explanation mechanism of XAI methods to guide a model's active learning process, which limits their application.
Methods:
To address these issues, a unified region-level and concept-level explainable AI (RC-XAI) framework is proposed for the explainability and active learning of defect segmentation models.
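To illustrate what a region-level explanator for a segmentation model can look like, here is a minimal occlusion-based saliency sketch. This is a generic XAI baseline, not the paper's actual explanator: the scoring function, patch size, and zero-fill occlusion are all illustrative assumptions.

```python
import numpy as np

def occlusion_saliency(predict, image, patch=4):
    """Region-level explanation by occlusion: mask each patch of the
    input and record the drop in the model's defect score. Larger
    drops mean the region mattered more to the decision."""
    base = predict(image)
    h, w = image.shape
    sal = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # zero-fill the patch
            sal[i // patch, j // patch] = base - predict(occluded)
    return sal
```

Here `predict` stands in for any scalar defect score derived from the segmentation model (e.g. mean predicted defect probability); the resulting grid highlights which image regions drive that score.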
Novelty:
First, RC-XAI incorporates region-level and concept-level explanators in a collaborative manner to provide comprehensive explanations of model decisions, which enhances the reliability and robustness of the explanations. Second, RC-XAI introduces an explainability-driven representative sample selection (ED-RSS) module that guides the model's active learning process to improve its final performance.
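The two ideas above can be sketched as follows. Both pieces are hypothetical stand-ins, assuming normalized explanation maps: the convex combination is one simple way to fuse region-level and concept-level explanations, and the top-k selection is a placeholder for the paper's actual ED-RSS criterion.

```python
import numpy as np

def fuse_explanations(region_map, concept_map, alpha=0.5):
    """Fuse a region-level saliency map with a concept-level relevance
    map into one explanation via a convex combination (illustrative
    choice; the paper's collaborative fusion may differ)."""
    r = region_map / (region_map.max() + 1e-8)   # normalize to [0, 1]
    c = concept_map / (concept_map.max() + 1e-8)
    return alpha * r + (1.0 - alpha) * c

def select_representative(explanation_scores, budget):
    """Pick the `budget` unlabeled samples with the highest
    explanation-derived scores for annotation (a stand-in for the
    ED-RSS selection rule)."""
    order = np.argsort(explanation_scores)[::-1]  # descending
    return order[:budget].tolist()
```

In an active-learning loop, `explanation_scores` would summarize how uncertain or inconsistent each unlabeled sample's explanation is, and the selected indices would be sent for labeling before retraining.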
Findings:
Experimental results on three challenging datasets demonstrate the effectiveness and generalization of the proposed RC-XAI method. It provides better and more comprehensive explainability than other XAI methods. Additionally, experiments demonstrate the potential of applying the explanation mechanism of RC-XAI to the active learning process of defect segmentation models.
About the journal:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.