Less Artificial, More Intelligent: Understanding Affinity, Trustworthiness, and Preference for Digital Humans

Impact Factor: 5.0 · JCR Q1, Information Science & Library Science · CAS Region 3 (Management)
Mike Seymour, Lingyao (Ivy) Yuan, Kai Riemer, Alan R. Dennis
Journal: Information Systems Research · DOI: 10.1287/isre.2022.0203 · Published: 2024-09-02
Citations: 0

Abstract

Practice- and policy-oriented abstract: Companies are increasingly deploying highly realistic digital human agents (DHAs), controlled by advanced AI, for online customer service tasks typically handled by chatbots. We conducted four experiments to assess users' perceptions (trustworthiness, affinity, and willingness to work with the agent) and behaviors while using DHAs, drawing on quantitative surveys, qualitative interviews, direct observation, and neurophysiological measurement. Our studies involved four DHAs: two commercial products (found to be immature) and two future-focused ones. In the first study, comparing perceptions of a DHA, a chatbot, and a human agent based on descriptions alone revealed few differences between the DHA and the chatbot. The second study, involving actual use of a commercial DHA, showed that participants found it uncanny, robotic, or difficult to converse with. The third and fourth studies used a "Wizard of Oz" design, in which participants believed a human-controlled DHA was AI-driven. Results showed a preference for human agents via video conferencing, but no significant differences between DHAs and human agents once visual fidelity was controlled for. Current DHAs, despite their communication issues, trigger more affinity than chatbots. When DHAs match human communication abilities, they are perceived similarly to human agents for simple tasks. This research also suggests that DHAs may alleviate algorithm aversion.
Source journal metrics: CiteScore 9.10 · Self-citation rate 8.20% · Articles per year: 120
Journal description: ISR (Information Systems Research) is a journal of INFORMS, the Institute for Operations Research and the Management Sciences. Information Systems Research is a leading international journal of theory, research, and intellectual development, focused on information systems in organizations, institutions, the economy, and society.