Illuminating the black box: An interpretable machine learning based on ensemble trees

Yue-Shi Lee, Show-Jane Yen, Wendong Jiang, Jiyuan Chen, Chih-Yung Chang

Expert Systems with Applications, Volume 272, Article 126720 (published 2025-02-06)
DOI: 10.1016/j.eswa.2025.126720
Citations: 0
Abstract
Deep learning has achieved significant success in the analysis of unstructured data, but its inherent black-box nature imposes numerous limitations in security-sensitive domains. Although many existing interpretable machine learning methods can partially address this issue, they often face challenges such as model limitations, randomness in the interpretations they produce, and a lack of global interpretability. To address these challenges, this paper introduces an innovative interpretable ensemble tree method, EnEXP. The method generates a sample set by applying fixed masking perturbations to each individual sample, constructs multiple decision trees over that set using bagging and boosting techniques, and interprets the sample from the importance outputs of those trees; aggregating the insights from all samples then yields a global interpretation of the entire dataset. Experimental results demonstrate that EnEXP offers stronger explanatory power than other interpretable methods. In text-processing experiments, a bag-of-words model optimized with EnEXP outperformed a fine-tuned GPT-3 Ada model.
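The abstract outlines a three-stage pipeline: perturb a sample with fixed masks, fit bagged and boosted trees to the black box's responses on those perturbations, and aggregate tree importances across samples into a global explanation. As a rough illustration of that pipeline only, here is a minimal Python sketch; the function names, the zero-valued masking, and the equal-weight fusion of bagging and boosting importances are assumptions of this sketch, not details taken from the paper.

```python
# Illustrative sketch of the EnEXP idea described in the abstract.
# Everything below is an assumption-laden reconstruction: the paper's
# actual masking scheme, tree configuration, and aggregation rule may differ.
import numpy as np
from sklearn.ensemble import BaggingRegressor, GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor

def perturb_with_fixed_masks(x, n_masks=64, mask_rate=0.3, seed=0):
    """Build a perturbed sample set by zeroing a fixed-size random subset
    of features ("fixed masking perturbation" is interpreted here as a
    constant number of masked features per perturbed copy)."""
    rng = np.random.default_rng(seed)
    n_features = x.shape[0]
    k = int(mask_rate * n_features)  # fixed mask size
    masks = np.ones((n_masks, n_features))
    for m in masks:
        m[rng.choice(n_features, size=k, replace=False)] = 0.0
    return masks * x  # each row is a masked copy of x

def explain_sample(black_box, x, **mask_kw):
    """Local interpretation: fit bagged and boosted trees to the black
    box's outputs on the perturbed neighbourhood of x, then read off
    their feature importances."""
    X_pert = perturb_with_fixed_masks(x, **mask_kw)
    y_pert = black_box(X_pert)  # black-box predictions on the masked copies
    bag = BaggingRegressor(DecisionTreeRegressor(max_depth=4),
                           n_estimators=50, random_state=0).fit(X_pert, y_pert)
    boost = GradientBoostingRegressor(n_estimators=50,
                                      random_state=0).fit(X_pert, y_pert)
    bag_imp = np.mean([t.feature_importances_ for t in bag.estimators_], axis=0)
    # Equal-weight fusion of the two ensembles' importances (an assumption).
    return (bag_imp + boost.feature_importances_) / 2.0

def explain_dataset(black_box, X, **mask_kw):
    """Global interpretation: aggregate the per-sample importances."""
    return np.mean([explain_sample(black_box, x, **mask_kw) for x in X], axis=0)
```

For a classifier, the same scheme would regress the surrogate trees on predicted class probabilities rather than raw outputs; the global vector returned by explain_dataset then ranks features by their average contribution across the dataset.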
Journal Introduction:
Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.