Consumer bias against evaluations received by artificial intelligence: the mediation effect of lack of transparency anxiety

Impact Factor: 9.6 · CAS Zone 2 (Management) · JCR Q1 (Business)
Alberto Lopez, Ricardo Garza
{"title":"Consumer bias against evaluations received by artificial intelligence: the mediation effect of lack of transparency anxiety","authors":"Alberto Lopez, Ricardo Garza","doi":"10.1108/jrim-07-2021-0192","DOIUrl":null,"url":null,"abstract":"PurposeWill consumers accept artificial intelligence (AI) products that evaluate them? New consumer products offer AI evaluations. However, previous research has never investigated how consumers feel about being evaluated by AI instead of by a human. Furthermore, why do consumers experience being evaluated by an AI algorithm or by a human differently? This research aims to offer answers to these questions.Design/methodology/approachThree laboratory experiments were conducted. Experiments 1 and 2 test the main effect of evaluator (AI and human) and evaluations received (positive, neutral and negative) on fairness perception of the evaluation. Experiment 3 replicates previous findings and tests the mediation effect.FindingsBuilding on previous research on consumer biases and lack of transparency anxiety, the authors present converging evidence that consumers who got positive evaluations reported nonsignificant difference on the level of fairness perception on the evaluation regardless of the evaluator (human or AI). Contrarily, consumers who got negative evaluations reported lower fairness perception when the evaluation was given by AI. Further moderated mediation analysis showed that consumers who get a negative evaluation by AI experience higher levels of lack of transparency anxiety, which in turn is an underlying mechanism driving this effect.Originality/valueTo the best of the authors' knowledge, no previous research has investigated how consumers feel about being evaluated by AI instead of by a human. This consumer bias against AI evaluations is a phenomenon previously overlooked in the marketing literature, with many implications for the development and adoption of new AI products, as well as theoretical contributions to the nascent literature on consumer experience and AI.","PeriodicalId":47116,"journal":{"name":"Journal of Research in Interactive Marketing","volume":null,"pages":null},"PeriodicalIF":9.6000,"publicationDate":"2023-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Research in Interactive Marketing","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.1108/jrim-07-2021-0192","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BUSINESS","Score":null,"Total":0}
引用次数: 1

Abstract

Purpose
Will consumers accept artificial intelligence (AI) products that evaluate them? New consumer products offer AI evaluations, yet previous research has not investigated how consumers feel about being evaluated by AI rather than by a human, nor why consumers experience evaluation by an AI algorithm and by a human differently. This research aims to answer these questions.

Design/methodology/approach
Three laboratory experiments were conducted. Experiments 1 and 2 test the main effects of evaluator (AI vs. human) and evaluation received (positive, neutral or negative) on perceived fairness of the evaluation. Experiment 3 replicates the previous findings and tests the mediation effect.

Findings
Building on previous research on consumer biases and lack-of-transparency anxiety, the authors present converging evidence that consumers who received positive evaluations reported no significant difference in perceived fairness regardless of the evaluator (human or AI). Conversely, consumers who received negative evaluations perceived the evaluation as less fair when it was given by AI. A further moderated mediation analysis showed that consumers who receive a negative evaluation from AI experience higher levels of lack-of-transparency anxiety, which is an underlying mechanism driving this effect.

Originality/value
To the best of the authors' knowledge, no previous research has investigated how consumers feel about being evaluated by AI instead of by a human. This consumer bias against AI evaluations is a phenomenon previously overlooked in the marketing literature, with many implications for the development and adoption of new AI products, as well as theoretical contributions to the nascent literature on consumer experience and AI.
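The abstract reports a moderated mediation analysis (evaluator x evaluation valence on fairness perception, mediated by lack-of-transparency anxiety) but does not specify the software or model used. The sketch below is a minimal illustration, on simulated data with assumed variable names (evaluator, valence, anxiety, fairness), of how an index of moderated mediation with a bootstrapped confidence interval could be estimated in Python; it is not the authors' analysis.

# Hypothetical sketch of a moderated mediation analysis of the kind described above.
# Variable names and effect sizes are assumptions for illustration only; the paper
# does not publish its data or analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300

# Simulated design: evaluator (0 = human, 1 = AI), valence (0 = positive, 1 = negative)
df = pd.DataFrame({
    "evaluator": rng.integers(0, 2, n),
    "valence": rng.integers(0, 2, n),
})
# Assumed effect: lack-of-transparency anxiety rises only for negative evaluations given by AI
df["anxiety"] = 1.0 * df["evaluator"] * df["valence"] + rng.normal(0, 1, n)
# Assumed effect: higher anxiety lowers perceived fairness
df["fairness"] = 5 - 0.8 * df["anxiety"] + rng.normal(0, 1, n)

def index_of_moderated_mediation(data):
    # Mediator model: anxiety ~ evaluator * valence (moderated 'a' path)
    a = smf.ols("anxiety ~ evaluator * valence", data=data).fit()
    # Outcome model: fairness ~ anxiety + evaluator * valence ('b' path)
    b = smf.ols("fairness ~ anxiety + evaluator * valence", data=data).fit()
    # Index = (interaction on the a-path) * (effect of the mediator on the outcome)
    return a.params["evaluator:valence"] * b.params["anxiety"]

# Percentile bootstrap confidence interval for the index of moderated mediation
boot = [index_of_moderated_mediation(df.sample(frac=1, replace=True)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"index of moderated mediation: {index_of_moderated_mediation(df):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")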
Source journal: Journal of Research in Interactive Marketing
CiteScore: 17.80
Self-citation rate: 17.10%
Articles published: 31
Journal description: The mission of the Journal of Research in Interactive Marketing is to address substantive issues in interactive, relationship, electronic, direct and multi-channel marketing and marketing management. With its origins in the discipline and practice of direct marketing, the Journal of Research in Interactive Marketing (JRIM) aims to publish progressive, innovative and rigorous scholarly research for marketing academics and practitioners. ISSN: 2040-7122.