The Anchoring Effect, Algorithmic Fairness, and the Limits of Information Transparency for Emotion Artificial Intelligence

Impact Factor: 5.0 · CAS Tier 3, Management · JCR Q1, Information Science & Library Science
Lauren Rhue
{"title":"情感人工智能的锚定效应、算法公平性和信息透明度的局限性","authors":"Lauren Rhue","doi":"10.1287/isre.2019.0493","DOIUrl":null,"url":null,"abstract":"Emotion artificial intelligence (AI) is shown to vary systematically in its ability to accurately identify emotions, and this variation creates potential biases. In this paper, we conduct an experiment involving three commercially available emotion AI systems and a group of human labelers tasked with identifying emotions from two image data sets. The study focuses on the alignment between facial expressions and the emotion labels assigned by both the AI and humans. Importantly, human labelers are given the AI’s scores and informed about its algorithmic fairness measures. This paper presents several key findings. First, the labelers’ scores are affected by the emotion AI scores, consistent with the anchoring effect. Second, information transparency about the AI’s fairness does not uniformly affect human labeling across different emotions. Moreover, information transparency can even increase human inconsistencies. Plus, significant inconsistencies in the scoring among different emotion AI models cast doubt on their reliability. Overall, the study highlights the limitations of individual decision making and information transparency regarding algorithmic fairness measures in addressing algorithmic fairness. These findings underscore the complexity of integrating emotion AI into practice and emphasize the need for careful policies on emotion AI.","PeriodicalId":48411,"journal":{"name":"Information Systems Research","volume":null,"pages":null},"PeriodicalIF":5.0000,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The Anchoring Effect, Algorithmic Fairness, and the Limits of Information Transparency for Emotion Artificial Intelligence\",\"authors\":\"Lauren Rhue\",\"doi\":\"10.1287/isre.2019.0493\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Emotion artificial intelligence (AI) is shown to vary systematically in its ability to accurately identify emotions, and this variation creates potential biases. In this paper, we conduct an experiment involving three commercially available emotion AI systems and a group of human labelers tasked with identifying emotions from two image data sets. The study focuses on the alignment between facial expressions and the emotion labels assigned by both the AI and humans. Importantly, human labelers are given the AI’s scores and informed about its algorithmic fairness measures. This paper presents several key findings. First, the labelers’ scores are affected by the emotion AI scores, consistent with the anchoring effect. Second, information transparency about the AI’s fairness does not uniformly affect human labeling across different emotions. Moreover, information transparency can even increase human inconsistencies. Plus, significant inconsistencies in the scoring among different emotion AI models cast doubt on their reliability. Overall, the study highlights the limitations of individual decision making and information transparency regarding algorithmic fairness measures in addressing algorithmic fairness. 
These findings underscore the complexity of integrating emotion AI into practice and emphasize the need for careful policies on emotion AI.\",\"PeriodicalId\":48411,\"journal\":{\"name\":\"Information Systems Research\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":5.0000,\"publicationDate\":\"2023-12-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Systems Research\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://doi.org/10.1287/isre.2019.0493\",\"RegionNum\":3,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"INFORMATION SCIENCE & LIBRARY SCIENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Systems Research","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.1287/isre.2019.0493","RegionNum":3,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
Citations: 0

Abstract

Emotion artificial intelligence (AI) is shown to vary systematically in its ability to accurately identify emotions, and this variation creates potential biases. In this paper, we conduct an experiment involving three commercially available emotion AI systems and a group of human labelers tasked with identifying emotions from two image data sets. The study focuses on the alignment between facial expressions and the emotion labels assigned by both the AI and humans. Importantly, human labelers are given the AI’s scores and informed about its algorithmic fairness measures. This paper presents several key findings. First, the labelers’ scores are affected by the emotion AI scores, consistent with the anchoring effect. Second, information transparency about the AI’s fairness does not uniformly affect human labeling across different emotions. Moreover, information transparency can even increase human inconsistencies. Furthermore, significant inconsistencies in the scoring among different emotion AI models cast doubt on their reliability. Overall, the study highlights the limitations of individual decision making and information transparency regarding algorithmic fairness measures in addressing algorithmic fairness. These findings underscore the complexity of integrating emotion AI into practice and emphasize the need for careful policies on emotion AI.
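The abstract's two quantitative claims lend themselves to a small illustration. The Python sketch below is purely hypothetical: on synthetic data it shows how one might (1) check for an anchoring effect by comparing the slope of human scores on the displayed AI score between an anchored and a blind condition, and (2) gauge cross-model inconsistency via pairwise correlations among three emotion AI systems. All data, column layouts, and effect sizes are invented; the paper's actual models, measures, and estimation strategy are not reproduced here.

```python
# Hypothetical sketch of the two analyses the abstract describes:
# (1) anchoring-effect check and (2) cross-model score inconsistency.
# Everything below is simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical number of labeled face images

# Simulated "happiness" scores in [0, 1] from three commercial emotion AIs.
ai_scores = np.clip(rng.normal([0.6, 0.5, 0.4], 0.15, size=(n, 3)), 0, 1)

# Hypothetical human labels: anchored labelers drift toward AI model 1's score.
true_emotion = np.clip(rng.normal(0.5, 0.2, n), 0, 1)
anchor_weight = 0.35  # invented anchoring strength
human_anchored = np.clip(
    (1 - anchor_weight) * true_emotion
    + anchor_weight * ai_scores[:, 0]
    + rng.normal(0, 0.05, n),
    0, 1,
)
human_blind = np.clip(true_emotion + rng.normal(0, 0.05, n), 0, 1)

def slope(y, x):
    """OLS slope of y on x; a larger slope when the AI score was shown
    is consistent with anchoring."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print("slope, AI score shown: ", round(slope(human_anchored, ai_scores[:, 0]), 3))
print("slope, AI score hidden:", round(slope(human_blind, ai_scores[:, 0]), 3))

# Cross-model inconsistency: pairwise correlations among the three AIs.
# Low correlations on the same faces would cast doubt on reliability.
corr = np.corrcoef(ai_scores.T)
for i in range(3):
    for j in range(i + 1, 3):
        print(f"corr(model {i + 1}, model {j + 1}) = {corr[i, j]:.3f}")
```

Under these invented assumptions, the anchored condition yields a noticeably larger slope than the blind condition, mirroring the anchoring pattern the abstract reports; the correlation matrix plays the role of the paper's cross-model consistency comparison.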
Source Journal
CiteScore: 9.10 · Self-citation rate: 8.20% · Articles per year: 120
About the journal: ISR (Information Systems Research) is a journal of INFORMS, the Institute for Operations Research and the Management Sciences. Information Systems Research is a leading international journal of theory, research, and intellectual development, focused on information systems in organizations, institutions, the economy, and society.