Ethics in the Age of Algorithms: Unravelling the Impact of Algorithmic Unfairness on Data Analytics Recommendation Acceptance

Impact Factor: 6.5 · CAS Tier 2 (Management) · JCR Q1 · INFORMATION SCIENCE & LIBRARY SCIENCE
Maryam Ghasemaghaei, Nima Kordzadeh
DOI: 10.1111/isj.12572
Information Systems Journal, Volume 35, Issue 4, pp. 1166–1197. Published: 2024-11-26.
Full text: https://onlinelibrary.wiley.com/doi/10.1111/isj.12572
Citations: 0

Abstract

Algorithms used in data analytics (DA) tools, particularly in high-stakes contexts such as hiring and promotion, may yield unfair recommendations that deviate from merit-based standards and adversely affect individuals. While significant research from fields such as machine learning and human–computer interaction (HCI) has advanced our understanding of algorithmic fairness, less is known about how managers in organisational contexts perceive and respond to unfair algorithmic recommendations, particularly in terms of individual-level distributive fairness. This study focuses on job promotions to uncover how algorithmic unfairness impacts managers' perceived fairness and their subsequent acceptance of DA recommendations. Through an experimental study, we find that (1) algorithmic unfairness (against women) in promotion recommendations reduces managers' perceived distributive fairness, influencing their acceptance of these recommendations; (2) managers' trust in DA competency moderates the relationship between perceived fairness and DA recommendation acceptance; and (3) managers' moral identity moderates the impact of algorithmic unfairness on perceived fairness. These insights contribute to the existing literature by elucidating how perceived distributive fairness plays a critical role in managers' acceptance of unfair algorithmic outputs in job promotion contexts, highlighting the importance of trust and moral identity in these processes.

Source journal: Information Systems Journal
CiteScore: 14.60
Self-citation rate: 7.80%
Annual article count: 44
Journal description: The Information Systems Journal (ISJ) is an international journal promoting the study of, and interest in, information systems. Articles are welcome on research, practice, experience, current issues and debates. The ISJ encourages submissions that reflect the wide and interdisciplinary nature of the subject, and articles that integrate technological disciplines with social, contextual and management issues, based on research using appropriate research methods. The ISJ has particularly built its reputation by publishing qualitative research, and it continues to welcome such papers. Quantitative research papers are also welcome, but they need to emphasise the context of the research and the theoretical and practical implications of their findings. The ISJ does not publish purely technical papers.