User Intent to Use DeepSeek for Healthcare Purposes and their Trust in the Large Language Model: Multinational Survey Study.

IF 2.6 | Q2 | Health Care Sciences & Services
JMIR Human Factors | Pub Date: 2025-04-07 | DOI: 10.2196/72867
Avishek Choudhury, Yeganeh Shahsavar, Hamid Shamszare

Abstract

Background: Generative artificial intelligence (Gen-AI), particularly large language models (LLMs), has generated unprecedented interest in applications ranging from everyday Q&A to health-related inquiries. However, little is known about how everyday users decide whether to trust and adopt these technologies, particularly in high-stakes contexts such as personal health.

Objective: This study examines how ease of use, perceived usefulness, and risk perception interact to shape user trust in and intentions to adopt DeepSeek, an emerging LLM-based platform, for healthcare purposes.

Methods: We adapted survey items from validated technology acceptance scales to assess user perceptions of DeepSeek, focusing on constructs such as trust, intent to use for health, ease of use, perceived usefulness, and risk perception. A 12-item Likert scale questionnaire was developed and pilot-tested (n=20) for clarity and consistency. It was then distributed online to users in India (IND), the United Kingdom (UK), and the United States of America (USA) who had used DeepSeek within the past two weeks. Data analysis involved descriptive frequency assessments and Partial Least Squares Structural Equation Modeling (PLS-SEM) to evaluate the measurement and structural models. Structural equation modeling assessed direct and indirect effects, including potential quadratic relationships.
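The mediation structure tested here (ease of use influencing intent indirectly through trust) can be sketched on simulated data with ordinary least squares. This is an illustrative simplification, not the authors' PLS-SEM pipeline; all variable names, scales, and coefficients below are assumed for the example:

```python
# Minimal mediation sketch (assumed data, not the study's): ease of use ->
# trust -> intent, estimated with plain OLS via NumPy least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 556  # sample size matching the study

# Hypothetical scores: trust depends on ease of use (path a); intent depends
# on trust (path b) plus a smaller direct effect of ease of use (path c').
ease = rng.uniform(1, 5, n)
trust = 0.6 * ease + rng.normal(0, 1, n)
intent = 0.5 * trust + 0.1 * ease + rng.normal(0, 1, n)

def ols(y, X):
    """OLS slope coefficients for y ~ X, with an intercept column added."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

a = ols(trust, ease.reshape(-1, 1))[0]                    # ease -> trust
b, c_prime = ols(intent, np.column_stack([trust, ease]))  # trust -> intent, direct
indirect = a * b  # mediated (indirect) effect of ease of use via trust
print(f"a={a:.2f}, b={b:.2f}, indirect={indirect:.2f}, direct={c_prime:.2f}")
```

In a full PLS-SEM analysis the constructs would be latent variables measured by multiple Likert items, and the indirect effect would be tested with bootstrapped confidence intervals; the product-of-paths logic, however, is the same.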

Results: A total of 556 complete responses were collected, with respondents almost evenly split across IND (n=184), the UK (n=185), and the USA (n=187). Regarding AI in healthcare, when asked whether they were comfortable with their healthcare provider using AI tools, 59.3% (n=330) accepted AI use provided their doctor verified its output, and 31.5% (n=175) were enthusiastic about its use without conditions. DeepSeek was used primarily for academic and educational purposes; 50.7% (n=282) used it as a search engine, and 47.7% (n=265) used it for health-related queries. When asked about their intent to adopt DeepSeek over other LLMs such as ChatGPT, 52.1% (n=290) were likely to switch, and 28.9% (n=161) were very likely to do so. The study revealed that trust plays a pivotal mediating role: ease of use exerts a significant indirect effect on usage intentions through trust, while perceived usefulness contributes both to trust development and directly to adoption. By contrast, risk perception negatively affects usage intent, underscoring the importance of robust data governance and transparency. Significant non-linear paths were observed for ease of use and risk, indicating threshold or plateau effects.
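The reported threshold and plateau effects correspond to quadratic paths in the structural model. A minimal sketch on simulated data (all values assumed, not taken from the study) shows how a squared term captures a plateau that a purely linear fit misses:

```python
# Sketch of a plateau effect (assumed data): intent rises with ease of use,
# then levels off; a quadratic fit captures this, a linear fit does not.
import numpy as np

rng = np.random.default_rng(1)
n = 556
ease = rng.uniform(1, 5, n)
# Simulated plateau: diminishing gains in intent at high ease-of-use scores
intent = 1.0 + 1.2 * ease - 0.15 * ease**2 + rng.normal(0, 0.3, n)

# Fit linear and quadratic models and compare residual sums of squares
lin = np.polyfit(ease, intent, 1)
quad = np.polyfit(ease, intent, 2)
rss_lin = np.sum((intent - np.polyval(lin, ease)) ** 2)
rss_quad = np.sum((intent - np.polyval(quad, ease)) ** 2)
print(f"linear RSS={rss_lin:.1f}, quadratic RSS={rss_quad:.1f}")
```

A negative leading coefficient in the quadratic fit (here `quad[0]`) is the signature of a plateau: each additional unit of ease of use yields a smaller gain in intent.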

Conclusions: Users are receptive to DeepSeek when it is easy to use, useful, and trustworthy. The model highlights trust as a mediator and shows non-linear dynamics shaping AI-driven healthcare tool adoption. Expanding the model with mediators such as privacy and cultural differences could provide deeper insights. Longitudinal or experimental designs could establish causality and track user attitudes. Further investigation into threshold and plateau phenomena could refine our understanding of user perceptions as users become more familiar with AI-driven healthcare tools.

Source journal
JMIR Human Factors (Medicine - Health Informatics)
CiteScore: 3.40
Self-citation rate: 3.70%
Annual articles: 123
Review time: 12 weeks