Title: The Anchoring Effect, Algorithmic Fairness, and the Limits of Information Transparency for Emotion Artificial Intelligence
Author: Lauren Rhue
Journal: Information Systems Research (INFORMS), JCR Q1, Information Science & Library Science, impact factor 5.0
DOI: 10.1287/isre.2019.0493
Published: 2023-12-19 (Journal Article)
Citations: 0
Abstract
Emotion artificial intelligence (AI) is shown to vary systematically in its ability to accurately identify emotions, and this variation creates potential biases. In this paper, we conduct an experiment involving three commercially available emotion AI systems and a group of human labelers tasked with identifying emotions from two image data sets. The study focuses on the alignment between facial expressions and the emotion labels assigned by both the AI and humans. Importantly, human labelers are given the AI's scores and informed about its algorithmic fairness measures. This paper presents several key findings. First, the labelers' scores are affected by the emotion AI scores, consistent with the anchoring effect. Second, information transparency about the AI's fairness does not uniformly affect human labeling across different emotions; moreover, it can even increase human inconsistencies. In addition, significant inconsistencies in the scoring among different emotion AI models cast doubt on their reliability. Overall, the study highlights the limitations of individual decision making and of information transparency about algorithmic fairness measures as tools for addressing algorithmic fairness. These findings underscore the complexity of integrating emotion AI into practice and emphasize the need for careful policies on emotion AI.
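The anchoring effect the abstract describes, labelers' scores drifting toward the AI's displayed score, can be illustrated with a minimal sketch. All numbers below (the 0-100 rating scale, the scores, and the AI anchor of 70) are invented for illustration and are not from the paper; the shift metric follows the common anchoring-index idea of measuring what fraction of the gap between an unanchored baseline and the anchor the anchored group closes.

```python
from statistics import mean

# Hypothetical labeler scores (0-100) for the same image.
# "anchored" labelers saw the emotion AI's score before rating;
# "control" labelers rated the image unaided.
control_scores = [35, 40, 45, 38, 50, 42, 36, 44]
anchored_scores = [55, 60, 52, 58, 65, 50, 62, 57]
ai_anchor = 70  # illustrative AI score shown to the anchored group

control_mean = mean(control_scores)
anchored_mean = mean(anchored_scores)

# Fraction of the distance from the control baseline to the anchor
# that the anchored group's mean moved (0 = no anchoring, 1 = full pull).
shift_toward_anchor = (anchored_mean - control_mean) / (ai_anchor - control_mean)

print(f"control mean:  {control_mean:.2f}")
print(f"anchored mean: {anchored_mean:.2f}")
print(f"fraction of gap closed toward AI anchor: {shift_toward_anchor:.2f}")
```

A value of `shift_toward_anchor` meaningfully above zero for the anchored group, across many images and labelers, is the kind of pattern the paper's first finding reports.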
Journal Information
ISR (Information Systems Research) is a journal of INFORMS, the Institute for Operations Research and the Management Sciences. Information Systems Research is a leading international journal of theory, research, and intellectual development, focused on information systems in organizations, institutions, the economy, and society.