{"title":"Comparing physician and large language model responses to influenza patient questions in the online health community","authors":"Hong Wu, Mingyu Li, Li Zhang","doi":"10.1016/j.ijmedinf.2025.105836","DOIUrl":null,"url":null,"abstract":"<div><h3>Introduction</h3><div>During influenza season, some patients tend to seek medical advice through online platforms. However, due to time constraints, the informational and emotional support provided by physicians is limited. Large language models (LLMs) can rapidly provide medical knowledge and empathy, but their capacity for providing informational support to patients with influenza and assisting physicians in providing emotional support is unclear. Therefore, this study evaluated the quality of LLM-generated influenza advice and its emotional support performance in comparison with physician advice.</div></div><div><h3>Methods</h3><div>This study utilized 200 influenza question–answer pairs from the online health community. Data collection consisted of two parts: (1) A panel of board-certified physicians evaluated the quality of LLM advice vs physician advice. (2) Physician advice was polished using an LLM, and the LLM-rewritten advice was compared to the original physician advice using the LLM module.</div></div><div><h3>Results</h3><div>For informational support, there was no significant difference between LLM and physician advice in terms of the presence of incorrect information, omission of information, extent of harm or empathy. Nevertheless, compared to physician advice, LLM advice was more likely to cause harm and to be in line with medical consensus. LLM was also able to assist physicians in providing emotional support, since the LLM-rewritten advice was significantly more respectful, friendly and empathetic, when compared with physician advice. Also, the LLM-rewritten advice was logically smooth. In most cases, LLM did not add or omit the original medical information.</div></div><div><h3>Conclusion</h3><div>This study suggests that LLMs can provide informational and emotional support for influenza patients. This may help to alleviate the pressure on physicians and promote physician-patient communication.</div></div>","PeriodicalId":54950,"journal":{"name":"International Journal of Medical Informatics","volume":"197 ","pages":"Article 105836"},"PeriodicalIF":3.7000,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Medical Informatics","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S138650562500053X","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Introduction
During influenza season, some patients tend to seek medical advice through online platforms. However, due to time constraints, the informational and emotional support that physicians can provide is limited. Large language models (LLMs) can rapidly provide medical knowledge and empathy, but their capacity to provide informational support to patients with influenza and to assist physicians in delivering emotional support is unclear. Therefore, this study evaluated the quality of LLM-generated influenza advice and its emotional support performance in comparison with physician advice.
Methods
This study utilized 200 influenza question–answer pairs from an online health community. Data collection consisted of two parts: (1) a panel of board-certified physicians evaluated the quality of LLM advice versus physician advice; (2) physician advice was polished using an LLM, and the LLM-rewritten advice was compared with the original physician advice using the LLM module.
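The panel comparison in step (1) is a pairwise evaluation of two answers to the same question. A minimal sketch of how such a blinded rating workflow might be organized is shown below; the function name, data layout, and A/B labeling are illustrative assumptions, not the authors' actual protocol:

```python
import random

def blind_pairs(qa_pairs, seed=42):
    """Shuffle the presentation order of the physician and LLM answers for
    each question so raters cannot infer the source from position.

    qa_pairs: list of dicts with keys 'question', 'physician', 'llm'.
    Returns (blinded items shown to raters, key for later unblinding).
    """
    rng = random.Random(seed)  # fixed seed so the blinding is reproducible
    blinded, key = [], []
    for i, qa in enumerate(qa_pairs):
        order = ['physician', 'llm']
        rng.shuffle(order)  # randomize which source appears as answer A
        blinded.append({
            'id': i,
            'question': qa['question'],
            'answer_A': qa[order[0]],
            'answer_B': qa[order[1]],
        })
        key.append({'id': i, 'A': order[0], 'B': order[1]})
    return blinded, key
```

Keeping the unblinding key separate from the items shown to raters is what makes the comparison blind: ratings are collected against the neutral A/B labels and only joined back to the source afterwards.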
Results
For informational support, there was no significant difference between LLM and physician advice in the presence of incorrect information, omission of information, extent of harm, or empathy. Nevertheless, compared with physician advice, LLM advice was more likely to cause harm and to be in line with medical consensus. The LLM was also able to assist physicians in providing emotional support: the LLM-rewritten advice was significantly more respectful, friendly, and empathetic than the original physician advice, while remaining logically coherent. In most cases, the LLM neither added to nor omitted the original medical information.
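Significance claims like these typically rest on comparing the proportion of answers in each group that received a given rating (e.g., "contains incorrect information"). A stdlib-only sketch of a two-sided two-proportion z-test; the counts in the usage note are made up for illustration and are not the study's data:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions.

    x1/n1: flagged answers / total answers in group 1 (e.g., physician);
    x2/n2: same for group 2 (e.g., LLM).
    Returns (z statistic, two-sided p-value).
    """
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # standard normal CDF via the error function: Phi(x) = (1 + erf(x/sqrt(2))) / 2
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For example, `two_proportion_z(30, 100, 10, 100)` yields a z statistic above 3.5 and a p-value well below 0.001, whereas equal counts give z = 0 and p = 1, i.e., no detectable difference.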
Conclusion
This study suggests that LLMs can provide informational and emotional support for influenza patients. This may help to alleviate the pressure on physicians and promote physician-patient communication.
About the journal:
International Journal of Medical Informatics provides an international medium for dissemination of original results and interpretative reviews concerning the field of medical informatics. The Journal emphasizes the evaluation of systems in healthcare settings.
The scope of the journal covers:
Information systems, including national or international registration systems, hospital information systems, departmental and/or physician's office systems, document handling systems, electronic medical record systems, standardization, systems integration, etc.;
Computer-aided medical decision support systems using heuristic, algorithmic and/or statistical methods as exemplified in decision theory, protocol development, artificial intelligence, etc.;
Educational computer-based programs pertaining to medical informatics or medicine in general;
Organizational, economic, social, clinical impact, ethical and cost-benefit aspects of IT applications in health care.