An integrated model to evaluate the transparency in predicting employee churn using explainable artificial intelligence
Meenu Chaudhary, Loveleen Gaur, Amlan Chakrabarti, Gurmeet Singh, Paul Jones, Sascha Kraus
Journal of Innovation & Knowledge, Vol. 10, Issue 3, Article 100700 (published 2025-04-16)
DOI: 10.1016/j.jik.2025.100700
URL: https://www.sciencedirect.com/science/article/pii/S2444569X25000502
Abstract:
Recent studies have focused on machine learning (ML) algorithms for predicting employee churn (ECn) to avert probable economic loss, technology leakage, and the loss of customers and knowledge. However, can human resource professionals rely on algorithms for prediction? Can they act on a prediction when the process behind it is unknown? Owing to their lack of interpretability, the proprietary nature and growing intricacy of ML models make these multifaceted black boxes challenging for field experts to comprehend. To address concerns about the interpretability, trust, and transparency of black-box predictions, this study explores the application of explainable artificial intelligence (XAI) to identify the factors that escalate ECn and to analyse their negative impact on productivity, employee morale, and financial stability. We propose a predictive model that compares the two top-performing algorithms on a set of performance metrics. We then apply an XAI method based on Shapley values, the SHapley Additive exPlanations (SHAP) approach, to identify and compare the feature importances of the top-performing algorithms, logistic regression and random forest, on our dataset. The interpretability of the predictive outcome opens the black box of the predictions, enhancing trust and facilitating retention strategies.
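To make the workflow the abstract describes concrete, the following is a minimal Python sketch: it trains logistic regression and random forest churn classifiers, compares them on standard performance metrics, and then uses SHAP to derive and compare per-feature importances. The dataset and feature names (tenure_years, monthly_overtime_hrs, salary_percentile, satisfaction_score) are synthetic placeholders for illustration, not the authors' data, preprocessing, or exact procedure.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an employee-churn dataset (hypothetical features).
rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "tenure_years": rng.exponential(4.0, n),
    "monthly_overtime_hrs": rng.normal(10.0, 5.0, n),
    "salary_percentile": rng.uniform(0.0, 1.0, n),
    "satisfaction_score": rng.uniform(1.0, 5.0, n),
})
# Planted signal: low satisfaction/salary and high overtime raise churn odds.
logit = (1.5 - 0.8 * X["satisfaction_score"]
         + 0.05 * X["monthly_overtime_hrs"] - 1.0 * X["salary_percentile"])
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Step 1: compare the candidate algorithms on standard performance metrics.
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    pred = (proba >= 0.5).astype(int)
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"f1={f1_score(y_te, pred):.3f} auc={roc_auc_score(y_te, proba):.3f}")

# Step 2: SHAP attributions, using a model-appropriate explainer for each.
lr_shap = shap.LinearExplainer(models["logistic_regression"], X_tr).shap_values(X_te)
rf_shap = shap.TreeExplainer(models["random_forest"]).shap_values(X_te)
if isinstance(rf_shap, list):        # older shap: one array per class
    rf_shap = rf_shap[1]
elif np.ndim(rf_shap) == 3:          # newer shap: (samples, features, classes)
    rf_shap = rf_shap[:, :, 1]

# Mean |SHAP value| per feature gives a global importance ranking per model,
# so the two algorithms' explanations can be compared side by side.
for name, vals in [("logistic_regression", lr_shap), ("random_forest", rf_shap)]:
    ranking = dict(zip(X.columns, np.abs(vals).mean(axis=0).round(3)))
    print(name, ranking)
```

The mean absolute SHAP value per feature is a common global-importance summary; shap.summary_plot provides the usual beeswarm view of the same attributions when a visual comparison is preferred.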
Journal introduction:
The Journal of Innovation and Knowledge (JIK) explores how innovation drives knowledge creation and vice versa, emphasising that while not all innovation leads to knowledge, enduring innovation across diverse fields fosters theory and knowledge. JIK invites papers on innovations that enhance or generate knowledge, covering innovation processes, structures, outcomes, and behaviours at various levels. Articles in JIK examine knowledge-related changes that promote innovation for societal best practices.
JIK serves as a platform for high-quality studies undergoing double-blind peer review, ensuring global dissemination to scholars, practitioners, and policymakers who recognize innovation and knowledge as economic drivers. It publishes theoretical articles, empirical studies, case studies, reviews, and other content, addressing current trends and emerging topics in innovation and knowledge. The journal welcomes suggestions for special issues and encourages articles to showcase contextual differences and lessons for a broad audience.
In essence, JIK is an interdisciplinary journal dedicated to advancing theoretical and practical innovations and knowledge across multiple fields, including Economics, Business and Management, Engineering, Science, and Education.