Physician Adoption of AI Assistant

Ting Hou, Meng Li, Y. Tan, Huazhong Zhao
{"title":"Physician Adoption of AI Assistant","authors":"Ting Hou, Meng Li, Y. Tan, Huazhong Zhao","doi":"10.1287/msom.2023.0093","DOIUrl":null,"url":null,"abstract":"Problem definition: Artificial intelligence (AI) assistants—software agents that can perform tasks or services for individuals—are among the most promising AI applications. However, little is known about the adoption of AI assistants by service providers (i.e., physicians) in a real-world healthcare setting. In this paper, we investigate the impact of the AI smartness (i.e., whether the AI assistant is powered by machine learning intelligence) and the impact of AI transparency (i.e., whether physicians are informed of the AI assistant). Methodology/results: We collaborate with a leading healthcare platform to run a field experiment in which we compare physicians’ adoption behavior, that is, adoption rate and adoption timing, of smart and automated AI assistants under transparent and non-transparent conditions. We find that the smartness can increase the adoption rate and shorten the adoption timing, whereas the transparency can only shorten the adoption timing. Moreover, the impact of AI transparency on the adoption rate is contingent on the smartness level of the AI assistant: the transparency increases the adoption rate only when the AI assistant is not equipped with smart algorithms and fails to do so when the AI assistant is smart. Managerial implications: Our study can guide platforms in designing their AI strategies. Platforms should improve the smartness of AI assistants. If such an improvement is too costly, the platform should transparentize the AI assistant, especially when it is not smart. Funding: This research was supported by a Behavioral Research Assistance Grant from the C. T. Bauer College of Business, University of Houston. H. Zhao acknowledges support from Hong Kong General Research Fund [9043593]. Y. (R.) Tan acknowledges generous support from CEIBS Research [Grant AG24QCS]. 
Supplemental Material: The online appendix is available at https://doi.org/10.1287/msom.2023.0093 .","PeriodicalId":119284,"journal":{"name":"Manufacturing & Service Operations Management","volume":" 31","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Manufacturing & Service Operations Management","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1287/msom.2023.0093","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Problem definition: Artificial intelligence (AI) assistants—software agents that can perform tasks or services for individuals—are among the most promising AI applications. However, little is known about the adoption of AI assistants by service providers (i.e., physicians) in a real-world healthcare setting. In this paper, we investigate the impact of AI smartness (i.e., whether the AI assistant is powered by machine learning intelligence) and the impact of AI transparency (i.e., whether physicians are informed about the AI assistant). Methodology/results: We collaborate with a leading healthcare platform to run a field experiment in which we compare physicians’ adoption behavior—that is, adoption rate and adoption timing—for smart and automated AI assistants under transparent and non-transparent conditions. We find that smartness increases the adoption rate and shortens adoption timing, whereas transparency only shortens adoption timing. Moreover, the impact of AI transparency on the adoption rate is contingent on the smartness level of the AI assistant: transparency increases the adoption rate only when the AI assistant is not equipped with smart algorithms, and fails to do so when the AI assistant is smart. Managerial implications: Our study can guide platforms in designing their AI strategies. Platforms should improve the smartness of AI assistants. If such an improvement is too costly, the platform should make the AI assistant transparent, especially when it is not smart. Funding: This research was supported by a Behavioral Research Assistance Grant from the C. T. Bauer College of Business, University of Houston. H. Zhao acknowledges support from the Hong Kong General Research Fund [9043593]. Y. (R.) Tan acknowledges generous support from CEIBS Research [Grant AG24QCS]. Supplemental Material: The online appendix is available at https://doi.org/10.1287/msom.2023.0093.