An integrated model to evaluate the transparency in predicting employee churn using explainable artificial intelligence

IF 15.6 · CAS Zone 1 (Management) · JCR Q1 (Business)
Meenu Chaudhary, Loveleen Gaur, Amlan Chakrabarti, Gurmeet Singh, Paul Jones, Sascha Kraus
Journal of Innovation & Knowledge, 10(3), Article 100700. DOI: 10.1016/j.jik.2025.100700. Published 2025-04-16 (Journal Article). Available at: https://www.sciencedirect.com/science/article/pii/S2444569X25000502
Citations: 0

Abstract

Recent studies have focused on machine learning (ML) algorithms for predicting employee churn (ECn) to avert probable economic loss, technology leakage, and customer and knowledge transference. However, can human resource professionals rely on algorithms for prediction, and can they act on predictions whose underlying process is unknown? Owing to their lack of interpretability, the exclusive nature and growing intricacy of ML models make it challenging for field experts to comprehend these multifaceted black boxes. To address concerns about the interpretability, trust and transparency of black-box predictions, this study explores the application of explainable artificial intelligence (XAI) to identify the factors that escalate ECn, analysing their negative impact on productivity, employee morale and financial stability. We propose a predictive model that compares the two top-performing algorithms on standard performance metrics. Thereafter, we apply an XAI technique based on Shapley values, the SHapley Additive exPlanations (SHAP) approach, to identify and compare the feature importance of the top-performing algorithms, logistic regression and random forest, on our dataset. The interpretability of the predictive outcome unboxes the predictions, enhancing trust and facilitating retention strategies.
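The workflow the abstract describes — train logistic regression and random forest on churn data, then compare their per-feature Shapley attributions — can be sketched as follows. This is a minimal illustration, not the paper's implementation: synthetic data stands in for the authors' (unavailable) churn dataset, all feature and variable names are hypothetical, and instead of the `shap` library the Shapley values are computed exactly by brute-force subset enumeration, with "absent" features replaced by their dataset means.

```python
# Hypothetical sketch: exact Shapley-value attributions for one instance,
# compared across logistic regression and random forest (cf. SHAP).
import itertools
from math import factorial

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for an employee-churn table (5 hypothetical features).
X, y = make_classification(n_samples=400, n_features=5, random_state=0)
baseline = X.mean(axis=0)  # background values used to "remove" a feature


def shapley_values(predict, x):
    """Exact Shapley values of predict(x) against the mean baseline."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in itertools.combinations(others, size):
                # Shapley weight |S|! (n-|S|-1)! / n! for coalition S
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i, without_i = baseline.copy(), baseline.copy()
                for j in S:  # features present in the coalition keep x's values
                    with_i[j] = without_i[j] = x[j]
                with_i[i] = x[i]  # marginal contribution of feature i
                phi[i] += w * (predict(with_i[None])[0] - predict(without_i[None])[0])
    return phi


models = {
    "logistic_regression": LogisticRegression(max_iter=1000).fit(X, y),
    "random_forest": RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y),
}

x0 = X[0]  # explain the predicted churn probability of one "employee"
attributions = {}
for name, model in models.items():
    f = lambda a, m=model: m.predict_proba(a)[:, 1]
    attributions[name] = shapley_values(f, x0)
    print(name, np.round(attributions[name], 3))
```

By the efficiency property of Shapley values, each model's attributions sum to its predicted churn probability for `x0` minus its prediction at the baseline, so the two attribution vectors can be compared feature by feature even though the models differ, which is the kind of cross-model comparison the study performs with SHAP.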
Source journal: Journal of Innovation & Knowledge
CiteScore: 16.10
Self-citation rate: 12.70%
Articles per year: 118
Review turnaround: 37 days
Journal description: The Journal of Innovation and Knowledge (JIK) explores how innovation drives knowledge creation and vice versa, emphasizing that not all innovation leads to knowledge, but enduring innovation across diverse fields fosters theory and knowledge. JIK invites papers on innovations enhancing or generating knowledge, covering innovation processes, structures, outcomes, and behaviors at various levels. Articles in JIK examine knowledge-related changes promoting innovation for societal best practices. JIK serves as a platform for high-quality studies undergoing double-blind peer review, ensuring global dissemination to scholars, practitioners, and policymakers who recognize innovation and knowledge as economic drivers. It publishes theoretical articles, empirical studies, case studies, reviews, and other content, addressing current trends and emerging topics in innovation and knowledge. The journal welcomes suggestions for special issues and encourages articles to showcase contextual differences and lessons for a broad audience. In essence, JIK is an interdisciplinary journal dedicated to advancing theoretical and practical innovations and knowledge across multiple fields, including Economics, Business and Management, Engineering, Science, and Education.