{"title":"Use of ChatGPT to obtain health information in Australia, 2024: insights from a nationally representative survey","authors":"Julie Ayre, Erin Cvejic, Kirsten J McCaffery","doi":"10.5694/mja2.52598","DOIUrl":null,"url":null,"abstract":"<p>Since the launch of ChatGPT in 2022,<span><sup>1</sup></span> people have had easy access to a generative artificial intelligence (AI) application that can provide answers to most health-related questions. Although ChatGPT could massively increase access to tailored health information, the risk of inaccurate information is also recognised, particularly with early ChatGPT versions, and its accuracy varies by task and topic.<span><sup>2</sup></span> Generative AI tools could be a further problem for health services and clinicians, adding to the already large volume of medical misinformation.<span><sup>3</sup></span> Discussions of the benefits and risks of the new technology for health equity, patient engagement, and safety need reliable information about who is using ChatGPT, and the types of health information they are seeking.</p><p>To examine the use of ChatGPT in Australia for obtaining health information, we surveyed a nationally representative sample of adults (18 years or older) drawn from the June 2024 wave of the Life in Australia panel.<span><sup>4</sup></span> Participants who completed the Life in Australia survey online or by telephone were asked how often they used ChatGPT for health information purposes during the preceding six months, the type of questions they asked, and their trust in the responses. Participants who were aware of ChatGPT but had not used it for health information purposes were asked about their intentions to do so in the following six months. 
Health literacy was assessed using a validated single-item screener: “If you need to go to the doctor, clinic or hospital, how confident are you filling out medical forms by yourself?“<span><sup>5</sup></span> Demographic information was derived from previously collected panel data. Residential postcode-based socio-economic standing was classified according to the Index of Relative Socio-economic Advantage and Disadvantage (IRSAD; by quintile).<span><sup>6</sup></span> Participant responses were weighted to the Australian population using propensity scores. Associations between respondent characteristics and survey responses were assessed using simple logistic regression; we report odds ratios (ORs) with 95% confidence intervals (CIs). Analyses were conducted in SPSS 26. Unless otherwise stated, we report unweighted results (further study details: Supporting Information, part 1). Our study was approved by the University of Sydney Human Research Ethics Committee (2024/HE000247).</p><p>Of 2951 invited panellists, 2034 completed the three ChatGPT and the health literacy survey items (68.9%). The demographic characteristics of the sample were similar to those of the Australian population (data not shown). The weighted proportion of participants who had heard of ChatGPT was 84.7% (95% CI, 83.0–86.3%). The weighted proportion of participants who had used ChatGPT to obtain health-related information during the preceding six months was 9.9% (95% CI, 8.5–11.4%). 
The proportion of people who had used ChatGPT to obtain health-related information was larger than their overall respondent proportion for people who were aged 18–44 years, lived in capital cities, were born in non-English speaking countries, spoke languages other than English at home, or had limited or marginal health literacy (Box 1; Supporting Information, table 3).</p><p>Among the 187 people who asked ChatGPT health-related questions, trust in the tool was moderate (mean score, 3.1 [of 5]; standard deviation [SD], 0.8). Their questions most frequently related to learning about a specific health condition (89, 48%), finding out what symptoms mean (70, 37%), finding actions to take (67, 36%), and understanding medical terms (65, 35%) (Box 2). At least one higher risk question (ie, questions related to taking action that would typically require clinical advice, rather than questions about general health information) had been asked by 115 participants (61%); the proportion was larger for people born in mainly non-English speaking countries than for those born in Australia (OR, 2.62; 95% CI, 1.27–5.39) and for those who spoke a language at home other than English (OR, 2.24; 95% CI, 1.16–4.32) (Supporting Information, table 4).</p><p>Among the 1523 respondents who were aware of ChatGPT but had not used it for health-related questions during the preceding six months, 591 (38.8%) reported they would consider doing so in the next six months, most frequently for learning about a specific health condition (276 of 1523, 18.1%), understanding medical terms (256, 16.8%), or finding out what symptoms mean (249, 16.3%) (Supporting Information, table 5). 
At least one higher risk-type question would be considered by 375 participants (24.6%); the proportion was larger for participants with year 12 education or less (OR, 1.76; 95% CI, 1.19–2.61) or an advanced diploma or diploma (OR, 1.67; 95% CI, 1.11–2.51) than for people with postgraduate degrees; for women (OR, 1.34; 95% CI, 1.06–1.70) than for men; and for people aged 35–44 (OR, 2.14; 95% CI, 1.25–3.68), 55–64 (OR, 2.11; 95% CI, 1.21–3.69), or 65 years or older (OR, 2.69; 95% CI, 1.58–4.59) than for people aged 18–24 years (Supporting Information, table 6).</p><p>On the basis of our exploratory study, we estimate that 9.9% of Australian adults (about 1.9 million people<span><sup>7</sup></span>) asked ChatGPT health-related questions during the six months preceding the June 2024 survey. Given the rapid growth in AI technology and the availability of similar tools,<span><sup>8</sup></span> this may be a conservative estimate of the use of generative AI services for obtaining health-related information. The number of users is likely to grow: 38.8% of participants who were aware of ChatGPT but had not recently used it for health-related questions were considering doing so within six months. We also found health-related ChatGPT use was higher for groups who face barriers to health care access,<span><sup>9</sup></span> including people who were born in non-English speaking countries, do not speak English at home, or whose health literacy is limited or marginal. The types of health questions that pose a higher risk for the community will change as AI evolves, and identifying them will require further investigation. 
There is an urgent need to equip our community with the knowledge and skills to use generative AI tools safely, in order to ensure equity of access and benefit.</p><p>Open access publishing facilitated by the University of Sydney, as part of the Wiley – the University of Sydney agreement via the Council of Australian University Librarians.</p><p>No relevant disclosures.</p><p>The data underlying this report are available on reasonable request.</p>","PeriodicalId":18214,"journal":{"name":"Medical Journal of Australia","volume":"222 4","pages":"210-212"},"PeriodicalIF":6.7000,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.5694/mja2.52598","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical Journal of Australia","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.5694/mja2.52598","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MEDICINE, GENERAL & INTERNAL","Score":null,"Total":0}
Abstract
Since the launch of ChatGPT in 2022,1 people have had easy access to a generative artificial intelligence (AI) application that can provide answers to most health-related questions. Although ChatGPT could massively increase access to tailored health information, the risk of inaccurate information is also recognised, particularly with early ChatGPT versions, and its accuracy varies by task and topic.2 Generative AI tools could be a further problem for health services and clinicians, adding to the already large volume of medical misinformation.3 Discussions of the benefits and risks of the new technology for health equity, patient engagement, and safety need reliable information about who is using ChatGPT, and the types of health information they are seeking.
To examine the use of ChatGPT in Australia for obtaining health information, we surveyed a nationally representative sample of adults (18 years or older) drawn from the June 2024 wave of the Life in Australia panel.4 Participants who completed the Life in Australia survey online or by telephone were asked how often they used ChatGPT for health information purposes during the preceding six months, the type of questions they asked, and their trust in the responses. Participants who were aware of ChatGPT but had not used it for health information purposes were asked about their intentions to do so in the following six months. Health literacy was assessed using a validated single-item screener: “If you need to go to the doctor, clinic or hospital, how confident are you filling out medical forms by yourself?”5 Demographic information was derived from previously collected panel data. Residential postcode-based socio-economic standing was classified according to the Index of Relative Socio-economic Advantage and Disadvantage (IRSAD; by quintile).6 Participant responses were weighted to the Australian population using propensity scores. Associations between respondent characteristics and survey responses were assessed using simple logistic regression; we report odds ratios (ORs) with 95% confidence intervals (CIs). Analyses were conducted in SPSS 26. Unless otherwise stated, we report unweighted results (further study details: Supporting Information, part 1). Our study was approved by the University of Sydney Human Research Ethics Committee (2024/HE000247).
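The odds ratios reported in this study come from simple (single-predictor) logistic regression; for one binary predictor, the exponentiated regression coefficient equals the cross-product ratio of the 2×2 table. A minimal sketch of that calculation with a Wald 95% confidence interval, using hypothetical counts rather than the study's data:

```python
import math


def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table.

    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome.
    """
    or_ = (a * d) / (b * c)                      # cross-product odds ratio
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)        # lower 95% bound
    hi = math.exp(math.log(or_) + z * se)        # upper 95% bound
    return or_, lo, hi


# Hypothetical counts (not the study's data): 30 of 100 exposed
# and 15 of 100 unexposed participants reported the outcome.
or_, lo, hi = odds_ratio_ci(30, 70, 15, 85)
print(f"OR {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```

For predictors with more than two levels (eg, the age groups compared against an 18–24 year reference category), the same exponentiated-coefficient logic applies to each dummy variable.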
Of 2951 invited panellists, 2034 (68.9%) completed the three ChatGPT items and the health literacy item. The demographic characteristics of the sample were similar to those of the Australian population (data not shown). The weighted proportion of participants who had heard of ChatGPT was 84.7% (95% CI, 83.0–86.3%); the weighted proportion who had used ChatGPT to obtain health-related information during the preceding six months was 9.9% (95% CI, 8.5–11.4%). People aged 18–44 years, those living in capital cities, those born in non-English speaking countries, those who spoke languages other than English at home, and those with limited or marginal health literacy were over-represented among users of ChatGPT for health-related information, relative to their proportions of all respondents (Box 1; Supporting Information, table 3).
Among the 187 people who asked ChatGPT health-related questions, trust in the tool was moderate (mean score, 3.1 [of 5]; standard deviation [SD], 0.8). Their questions most frequently related to learning about a specific health condition (89, 48%), finding out what symptoms mean (70, 37%), finding actions to take (67, 36%), and understanding medical terms (65, 35%) (Box 2). A total of 115 participants (61%) had asked at least one higher risk question (ie, a question about taking action that would typically require clinical advice, rather than about general health information); the proportion was larger for people born in mainly non-English speaking countries than for those born in Australia (OR, 2.62; 95% CI, 1.27–5.39), and for those who spoke a language other than English at home (OR, 2.24; 95% CI, 1.16–4.32) (Supporting Information, table 4).
Among the 1523 respondents who were aware of ChatGPT but had not used it for health-related questions during the preceding six months, 591 (38.8%) reported they would consider doing so in the next six months, most frequently for learning about a specific health condition (276 of 1523, 18.1%), understanding medical terms (256, 16.8%), or finding out what symptoms mean (249, 16.3%) (Supporting Information, table 5). A total of 375 participants (24.6%) would consider asking at least one higher risk question; the proportion was larger for participants with year 12 education or less (OR, 1.76; 95% CI, 1.19–2.61) or an advanced diploma or diploma (OR, 1.67; 95% CI, 1.11–2.51) than for people with postgraduate degrees; for women (OR, 1.34; 95% CI, 1.06–1.70) than for men; and for people aged 35–44 years (OR, 2.14; 95% CI, 1.25–3.68), 55–64 years (OR, 2.11; 95% CI, 1.21–3.69), or 65 years or older (OR, 2.69; 95% CI, 1.58–4.59) than for people aged 18–24 years (Supporting Information, table 6).
On the basis of our exploratory study, we estimate that 9.9% of Australian adults (about 1.9 million people7) asked ChatGPT health-related questions during the six months preceding the June 2024 survey. Given the rapid growth in AI technology and the availability of similar tools,8 this may be a conservative estimate of the use of generative AI services for obtaining health-related information. The number of users is likely to grow: 38.8% of participants who were aware of ChatGPT but had not recently used it for health-related questions were considering doing so within six months. We also found health-related ChatGPT use was higher for groups who face barriers to health care access,9 including people who were born in non-English speaking countries, do not speak English at home, or whose health literacy is limited or marginal. The types of health questions that pose a higher risk for the community will change as AI evolves, and identifying them will require further investigation. There is an urgent need to equip our community with the knowledge and skills to use generative AI tools safely, in order to ensure equity of access and benefit.
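The population estimate above is the weighted prevalence (and its confidence interval) applied to the Australian adult population. A sketch of that arithmetic, using an assumed round population figure for illustration rather than the figure cited in the article's reference 7:

```python
# Scaling a weighted survey prevalence (with its 95% CI) to an
# approximate population count. ADULT_POP is an assumed round
# number for illustration, not the article's cited figure.
ADULT_POP = 19_500_000

prevalence, ci_low, ci_high = 0.099, 0.085, 0.114  # survey estimates

users = ADULT_POP * prevalence        # point estimate of users
users_low = ADULT_POP * ci_low        # lower bound of the range
users_high = ADULT_POP * ci_high     # upper bound of the range

print(f"{users/1e6:.2f} million "
      f"(range {users_low/1e6:.2f}-{users_high/1e6:.2f} million)")
```

Under this assumed denominator the point estimate lands near the article's "about 1.9 million people", with the confidence interval spanning roughly 1.7 to 2.2 million.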
Open access publishing facilitated by the University of Sydney, as part of the Wiley – the University of Sydney agreement via the Council of Australian University Librarians.
No relevant disclosures.
The data underlying this report are available on reasonable request.
About the journal
The Medical Journal of Australia (MJA) stands as Australia's foremost general medical journal, leading the dissemination of high-quality research and commentary to shape health policy and influence medical practices within the country. Under the leadership of Professor Virginia Barbour, the expert editorial team at MJA is dedicated to providing authors with a constructive and collaborative peer-review and publication process. Established in 1914, the MJA has evolved into a modern journal that upholds its founding values, maintaining a commitment to supporting the medical profession by delivering high-quality and pertinent information essential to medical practice.