{"title":"User Intent to Use DeepSeek for Healthcare Purposes and their Trust in the Large Language Model: Multinational Survey Study.","authors":"Avishek Choudhury, Yeganeh Shahsavar, Hamid Shamszare","doi":"10.2196/72867","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Generative artificial intelligence (Gen-AI)-particularly large language models (LLMs)-has generated unprecedented interest in applications ranging from everyday Q&A to health-related inquiries. However, little is known about how everyday users decide whether to trust and adopt these technologies-particularly in high-stakes contexts like personal health.</p><p><strong>Objective: </strong>This study examines how ease of use, perceived usefulness, and risk perception interact to shape user trust in and intentions to adopt DeepSeek, an emerging LLM-based platform, for healthcare purposes.</p><p><strong>Methods: </strong>We adapted survey items from validated technology acceptance scales to assess user perception of DeepSeek, focusing on constructs such as trust, intent to use for health, ease of use, perceived usefulness, and risk perception. A 12-item Likert scale questionnaire was developed and pilot-tested (n=20) for clarity and consistency. It was then distributed online to users in India (IND), United Kingdom (UK), and United States of America (USA) who had used DeepSeek within the past two weeks. Data analysis involved descriptive frequency assessments and Partial Least Squares Structural Equation Modeling (PLS-SEM) to evaluate the measurement and structural models. Structural equation modeling assessed direct and indirect effects, including potential quadratic relationships.</p><p><strong>Results: </strong>A total of 556 complete responses were collected, with respondents almost evenly split across IND (n=184), the UK (n=185), and the USA (n=187). 
Regarding AI in healthcare, when asked if they were comfortable with their healthcare provider using AI tools, 59.3% (n=330) were fine with AI use provided their doctor verified its output, and 31.5% (n=175) were enthusiastic about its use without conditions. DeepSeek was used primarily for academic and educational purposes, 50.7% (n=282) used DeepSeek as a search engine, and 47.7% (n=265) for health-related queries. When asked about their intent to adopt DeepSeek over other LLMs like ChatGPT, 52.1% (n=290) were likely to switch, and 28.9% (n=161) were very likely to do so. The study revealed that trust plays a pivotal mediating role: ease of use exerts a significant indirect impact on usage intentions through trust. At the same time, perceived usefulness contributes to trust development and direct adoption. By contrast, risk perception negatively affects usage intent, emphasizing the importance of robust data governance and transparency. Significant non-linear paths were observed for ease of use and risk, indicating threshold or plateau effects.</p><p><strong>Conclusions: </strong>Users are receptive to DeepSeek when it's easy to use, useful, and trustworthy. The model highlights trust as a mediator and shows non-linear dynamics shaping AI-driven healthcare tool adoption. Expanding the model with mediators like privacy and cultural differences could provide deeper insights. Longitudinal or experimental designs could establish causality and track user attitudes. 
Further investigation into threshold and plateau phenomena could refine our understanding of user perceptions as they become more familiar with AI-driven healthcare tools.</p>","PeriodicalId":36351,"journal":{"name":"JMIR Human Factors","volume":" ","pages":""},"PeriodicalIF":2.6000,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"JMIR Human Factors","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2196/72867","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Citations: 0
Abstract
Background: Generative artificial intelligence (Gen-AI), particularly large language models (LLMs), has generated unprecedented interest in applications ranging from everyday question answering to health-related inquiries. However, little is known about how everyday users decide whether to trust and adopt these technologies, particularly in high-stakes contexts such as personal health.
Objective: This study examines how ease of use, perceived usefulness, and risk perception interact to shape user trust in and intentions to adopt DeepSeek, an emerging LLM-based platform, for healthcare purposes.
Methods: We adapted survey items from validated technology acceptance scales to assess user perceptions of DeepSeek, focusing on constructs such as trust, intent to use for health, ease of use, perceived usefulness, and risk perception. A 12-item Likert scale questionnaire was developed and pilot-tested (n=20) for clarity and consistency. It was then distributed online to users in India (IND), the United Kingdom (UK), and the United States of America (USA) who had used DeepSeek within the past two weeks. Data analysis involved descriptive frequency assessments and partial least squares structural equation modeling (PLS-SEM) to evaluate the measurement and structural models. The structural model assessed direct and indirect effects, including potential quadratic (non-linear) relationships.
Results: A total of 556 complete responses were collected, with respondents almost evenly split across IND (n=184), the UK (n=185), and the USA (n=187). Regarding AI in healthcare, when asked whether they were comfortable with their healthcare provider using AI tools, 59.3% (n=330) were fine with AI use provided their doctor verified its output, and 31.5% (n=175) were enthusiastic about its use without conditions. DeepSeek was used primarily for academic and educational purposes; 50.7% (n=282) used it as a search engine, and 47.7% (n=265) used it for health-related queries. When asked about their intent to adopt DeepSeek over other LLMs such as ChatGPT, 52.1% (n=290) were likely to switch, and 28.9% (n=161) were very likely to do so. The study revealed that trust plays a pivotal mediating role: ease of use exerts a significant indirect effect on usage intentions through trust, while perceived usefulness contributes both to trust development and directly to adoption. By contrast, risk perception negatively affects usage intent, emphasizing the importance of robust data governance and transparency. Significant non-linear paths were observed for ease of use and risk perception, indicating threshold or plateau effects.
Conclusions: Users are receptive to DeepSeek when it is easy to use, useful, and trustworthy. The model highlights trust as a mediator and shows non-linear dynamics shaping the adoption of AI-driven healthcare tools. Expanding the model with mediators such as privacy concerns and cultural differences could provide deeper insights. Longitudinal or experimental designs could establish causality and track user attitudes over time. Further investigation into threshold and plateau phenomena could refine our understanding of user perceptions as users become more familiar with AI-driven healthcare tools.