(Why) Do We Trust AI?: A Case of AI-based Health Chatbots

IF 2.7 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS
A. V. Prakash, Saini Das
DOI: 10.3127/ajis.v28.4235
Journal: Australasian Journal of Information Systems
Publication date: 2024-05-15
Publication type: Journal Article
Citations: 0

Abstract

Automated chatbots powered by artificial intelligence (AI) can act as a ubiquitous point of contact, improving access to healthcare and empowering users to make effective decisions. However, despite the potential benefits, emerging literature suggests that apprehensions linked to the distinctive features of AI technology and the specific context of use (healthcare) could undermine consumer trust and hinder widespread adoption. Although the role of trust is considered pivotal to the acceptance of healthcare technologies, a dearth of research exists that focuses on the contextual factors that drive trust in such AI-based Chatbots for Self-Diagnosis (AICSD). Accordingly, a contextual model based on the trust-in-technology framework was developed to understand the determinants of consumers’ trust in AICSD and its behavioral consequences. It was validated using a free simulation experiment study in India (N = 202). Perceived anthropomorphism, perceived information quality, perceived explainability, disposition to trust technology, and perceived service quality influence consumers’ trust in AICSD. In turn, trust, privacy risk, health risk, and gender determine the intention to use. The research contributes by developing and validating a context-specific model for explaining trust in AICSD that could aid developers and marketers in enhancing consumers’ trust in and adoption of AICSD.
Source journal: Australasian Journal of Information Systems (Computer Science, Information Systems)
CiteScore: 4.40
Self-citation rate: 4.80%
Articles published per year: 20
Review time: 20 weeks