Embedding transparency in artificial intelligence machine learning models: managerial implications on predicting and explaining employee turnover

Soumyadeb Chowdhury, Sian Joel-Edgar, P. Dey, S. Bhattacharya, Alexander Kharlamov
{"title":"在人工智能机器学习模型中嵌入透明度:预测和解释员工流失的管理含义","authors":"Soumyadeb Chowdhury, Sian Joel-Edgar, P. Dey, S. Bhattacharya, Alexander Kharlamov","doi":"10.1080/09585192.2022.2066981","DOIUrl":null,"url":null,"abstract":"Abstract Employee turnover (ET) is a major issue faced by firms in all business sectors. Artificial intelligence (AI) machine learning (ML) prediction models can help to classify the likelihood of employees voluntarily departing from employment using historical employee datasets. However, output responses generated by these AI-based ML models lack transparency and interpretability, making it difficult for HR managers to understand the rationale behind the AI predictions. If managers do not understand how and why responses are generated by AI models based on the input datasets, it is unlikely to augment data-driven decision-making and bring value to the organisations. The main purpose of this article is to demonstrate the capability of Local Interpretable Model-Agnostic Explanations (LIME) technique to intuitively explain the ET predictions generated by AI-based ML models for a given employee dataset to HR managers. From a theoretical perspective, we contribute to the International Human Resource Management literature by presenting a conceptual review of AI algorithmic transparency and then discussing its significance to sustain competitive advantage by using the principles of resource-based view theory. We also offer a transparent AI implementation framework using LIME which will provide a useful guide for HR managers to increase the explainability of the AI-based ML models, and therefore mitigate trust issues in data-driven decision-making.","PeriodicalId":22502,"journal":{"name":"The International Journal of Human Resource Management","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Embedding transparency in artificial intelligence machine learning models: managerial implications on predicting and explaining employee turnover\",\"authors\":\"Soumyadeb Chowdhury, Sian Joel-Edgar, P. Dey, S. Bhattacharya, Alexander Kharlamov\",\"doi\":\"10.1080/09585192.2022.2066981\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract Employee turnover (ET) is a major issue faced by firms in all business sectors. Artificial intelligence (AI) machine learning (ML) prediction models can help to classify the likelihood of employees voluntarily departing from employment using historical employee datasets. However, output responses generated by these AI-based ML models lack transparency and interpretability, making it difficult for HR managers to understand the rationale behind the AI predictions. If managers do not understand how and why responses are generated by AI models based on the input datasets, it is unlikely to augment data-driven decision-making and bring value to the organisations. The main purpose of this article is to demonstrate the capability of Local Interpretable Model-Agnostic Explanations (LIME) technique to intuitively explain the ET predictions generated by AI-based ML models for a given employee dataset to HR managers. 
From a theoretical perspective, we contribute to the International Human Resource Management literature by presenting a conceptual review of AI algorithmic transparency and then discussing its significance to sustain competitive advantage by using the principles of resource-based view theory. We also offer a transparent AI implementation framework using LIME which will provide a useful guide for HR managers to increase the explainability of the AI-based ML models, and therefore mitigate trust issues in data-driven decision-making.\",\"PeriodicalId\":22502,\"journal\":{\"name\":\"The International Journal of Human Resource Management\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-08-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The International Journal of Human Resource Management\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/09585192.2022.2066981\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The International Journal of Human Resource Management","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/09585192.2022.2066981","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

Employee turnover (ET) is a major issue faced by firms in all business sectors. Artificial intelligence (AI) machine learning (ML) prediction models can help to classify the likelihood of employees voluntarily departing from employment using historical employee datasets. However, the output responses generated by these AI-based ML models lack transparency and interpretability, making it difficult for HR managers to understand the rationale behind the AI predictions. If managers do not understand how and why responses are generated by AI models based on the input datasets, the models are unlikely to augment data-driven decision-making and bring value to the organisations. The main purpose of this article is to demonstrate the capability of the Local Interpretable Model-Agnostic Explanations (LIME) technique to intuitively explain to HR managers the ET predictions generated by AI-based ML models for a given employee dataset. From a theoretical perspective, we contribute to the International Human Resource Management literature by presenting a conceptual review of AI algorithmic transparency and then discussing its significance for sustaining competitive advantage, drawing on the principles of resource-based view theory. We also offer a transparent AI implementation framework using LIME which will provide a useful guide for HR managers to increase the explainability of AI-based ML models, and therefore mitigate trust issues in data-driven decision-making.
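The abstract describes using LIME to explain individual turnover predictions produced by a black-box classifier. The paper's dataset and model are not reproduced on this page, so the Python sketch below is illustrative only: the feature names, synthetic data, and choice of RandomForestClassifier are assumptions, used here simply to show how the lime package yields a per-employee, feature-level explanation of a predicted turnover probability.

    # Minimal sketch, assuming a tabular employee dataset and any scikit-learn classifier.
    # Feature names and data are synthetic placeholders, not the paper's actual dataset.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    rng = np.random.default_rng(42)
    feature_names = ["tenure_years", "monthly_salary", "overtime_hours", "satisfaction_score"]  # hypothetical
    X = rng.normal(size=(500, len(feature_names)))
    # Synthetic turnover label: short tenure and high overtime raise the probability of leaving.
    y = ((-X[:, 0] + X[:, 2] + rng.normal(scale=0.5, size=500)) > 0).astype(int)

    # Train a black-box classifier on historical employee records.
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # Build a LIME explainer over the training distribution.
    explainer = LimeTabularExplainer(
        X,
        feature_names=feature_names,
        class_names=["stays", "leaves"],
        mode="classification",
    )

    # Explain one employee's predicted turnover risk as local feature contributions.
    exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
    for feature, weight in exp.as_list():
        print(f"{feature}: {weight:+.3f}")  # positive weights push towards 'leaves'

Note that the printed weights are local: they describe how each feature moves the prediction for this one employee, which is the kind of case-by-case rationale the article argues HR managers need, rather than a global feature-importance ranking.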