User Intent to Use DeepSeek for Health Care Purposes and Their Trust in the Large Language Model: Multinational Survey Study.

IF 2.6 · Q2 · Health Care Sciences & Services
JMIR Human Factors · Vol. 12, e72867 · Pub Date: 2025-05-26 · DOI: 10.2196/72867
Avishek Choudhury, Yeganeh Shahsavar, Hamid Shamszare

Abstract

Background: Generative artificial intelligence (AI), particularly large language models (LLMs), has generated unprecedented interest in applications ranging from everyday questions and answers to health-related inquiries. However, little is known about how everyday users decide whether to trust and adopt these technologies in high-stakes contexts such as personal health.

Objectives: This study examines how ease of use, perceived usefulness, and risk perception interact to shape user trust in and intentions to adopt DeepSeek, an emerging LLM-based platform, for health care purposes.

Methods: We adapted survey items from validated technology acceptance scales to assess user perception of DeepSeek. A 12-item Likert scale questionnaire was developed and pilot-tested (n=20). It was then distributed on the web to users in India, the United Kingdom, and the United States who had used DeepSeek within the past 2 weeks. Data analysis involved descriptive frequency assessments and partial least squares structural equation modeling (PLS-SEM). The model assessed direct and indirect effects, including potential quadratic relationships.
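The core structure the model tests, with trust mediating between ease of use and usage intent, can be sketched with ordinary least squares on synthetic data. This is only an illustration of the mediation logic: the variable names, effect sizes, and data below are hypothetical, and the paper itself used PLS-SEM, which estimates latent constructs rather than single observed scores.

```python
# Minimal mediation sketch: ease of use -> trust -> usage intent.
# All data and coefficients are synthetic, not the study's estimates.
import numpy as np

rng = np.random.default_rng(0)
n = 556  # sample size matching the study

ease = rng.normal(size=n)                                 # perceived ease of use
trust = 0.6 * ease + rng.normal(scale=0.8, size=n)        # trust, partly driven by ease
intent = 0.5 * trust + 0.1 * ease + rng.normal(scale=0.8, size=n)  # usage intent

def ols(y, X):
    """Return OLS slope coefficients for y ~ intercept + X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

a = ols(trust, ease)[0]                               # path a: ease -> trust
b = ols(intent, np.column_stack([trust, ease]))[0]    # path b: trust -> intent, controlling for ease
indirect = a * b                                      # indirect effect of ease via trust
print(f"indirect effect of ease via trust: {indirect:.2f}")
```

In a full analysis the indirect effect `a * b` would be tested with bootstrapped confidence intervals rather than read off point estimates.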

Results: A total of 556 complete responses were collected, with respondents almost evenly split across India (n=184), the United Kingdom (n=185), and the United States (n=187). Regarding AI in health care, when asked whether they were comfortable with their health care provider using AI tools, 59.3% (n=330) were fine with AI use provided their doctor verified its output, and 31.5% (n=175) were enthusiastic about its use without conditions. DeepSeek was used primarily for academic and educational purposes; 50.7% (n=282) used it as a search engine, and 47.7% (n=265) used it for health-related queries. When asked about their intent to adopt DeepSeek over other LLMs such as ChatGPT, 52.1% (n=290) were likely to switch, and 28.9% (n=161) were very likely to do so. The study revealed that trust plays a pivotal mediating role; ease of use exerts a significant indirect impact on usage intentions through trust. At the same time, perceived usefulness contributes to trust development and direct adoption. By contrast, risk perception negatively affects usage intent, emphasizing the importance of robust data governance and transparency. Significant nonlinear paths were observed for ease of use and risk, indicating threshold or plateau effects.
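The plateau effects reported above correspond to a negative coefficient on a squared predictor: intent rises with ease of use but flattens at high values. The snippet below illustrates this with a quadratic fit; the data and coefficients are synthetic, not the study's estimates.

```python
# Hypothetical illustration of a plateau (concave) path: intent increases
# with ease of use, but the negative quadratic term flattens the curve.
import numpy as np

rng = np.random.default_rng(1)
ease = rng.uniform(1, 7, size=556)  # 7-point Likert-style scores
# true relationship: increasing but concave
intent = 1.0 + 0.9 * ease - 0.08 * ease**2 + rng.normal(scale=0.5, size=556)

# fit intent ~ 1 + ease + ease^2
X = np.column_stack([np.ones_like(ease), ease, ease**2])
beta, *_ = np.linalg.lstsq(X, intent, rcond=None)
b1, b2 = beta[1], beta[2]
print(f"linear term: {b1:.2f}, quadratic term: {b2:.3f}")
# b2 < 0 means each additional unit of ease adds less to intent;
# the curve levels off around ease = -b1 / (2 * b2)
```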

Conclusions: Users are receptive to DeepSeek when it is easy to use, useful, and trustworthy. The model highlights trust as a mediator and shows nonlinear dynamics shaping AI-driven health care tool adoption. Expanding the model with mediators such as privacy and cultural differences could provide deeper insights. Longitudinal experimental designs could establish causality. Further investigation into threshold and plateau phenomena could refine our understanding of user perceptions as they become more familiar with AI-driven health care tools.

Source Journal

JMIR Human Factors (Medicine, Health Informatics)
CiteScore: 3.40
Self-citation rate: 3.70%
Annual articles: 123
Review time: 12 weeks