Can ChatGPT-4 perform as a competent physician based on the Chinese critical care examination?

IF 3.2 | Region 3 (Medicine) | JCR Q2, Critical Care Medicine
Xueqi Wang, Jin Tang, Yajing Feng, Cijun Tang, Xuebin Wang
{"title":"Can ChatGPT-4 perform as a competent physician based on the Chinese critical care examination?","authors":"Xueqi Wang,&nbsp;Jin Tang,&nbsp;Yajing Feng,&nbsp;Cijun Tang,&nbsp;Xuebin Wang","doi":"10.1016/j.jcrc.2024.155010","DOIUrl":null,"url":null,"abstract":"<div><h3>Background</h3><div>The use of ChatGPT in medical applications is of increasing interest. However, its efficacy in critical care medicine remains uncertain. This study aims to assess ChatGPT-4's performance in critical care examination, providing insights into its potential as a tool for clinical decision-making.</div></div><div><h3>Methods</h3><div>A dataset from the Chinese Health Professional Technical Qualification Examination for Critical Care Medicine, covering four components—fundamental knowledge, specialized knowledge, professional practical skills, and related medical knowledge—was utilized. ChatGPT-4 answered 600 questions, which were evaluated by critical care experts using a standardized rubric.</div></div><div><h3>Results</h3><div>ChatGPT-4 achieved a 73.5 % success rate, surpassing the 60 % passing threshold in four components, with the highest accuracy in fundamental knowledge (81.94 %). ChatGPT-4 performed significantly better on single-choice questions than on multiple-choice questions (76.72 % vs. 51.32 %, <em>p</em> &lt; 0.001), while no significant difference was observed between case-based and non-case-based questions.</div></div><div><h3>Conclusion</h3><div>ChatGPT demonstrated notable strengths in critical care examination, highlighting its potential for supporting clinical decision-making, information retrieval, and medical education. However, caution is required regarding its potential to generate inaccurate responses. Its application in critical care must therefore be carefully supervised by medical professionals to ensure both the accuracy of the information and patient safety.</div></div>","PeriodicalId":15451,"journal":{"name":"Journal of critical care","volume":"86 ","pages":"Article 155010"},"PeriodicalIF":3.2000,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of critical care","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0883944124004970","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"CRITICAL CARE MEDICINE","Score":null,"Total":0}
引用次数: 0

Abstract

Background

The use of ChatGPT in medical applications is of increasing interest. However, its efficacy in critical care medicine remains uncertain. This study aims to assess ChatGPT-4's performance on a critical care qualification examination, providing insight into its potential as a tool for clinical decision-making.

Methods

A dataset from the Chinese Health Professional Technical Qualification Examination for Critical Care Medicine, covering four components (fundamental knowledge, specialized knowledge, professional practical skills, and related medical knowledge), was used. ChatGPT-4 answered 600 questions, and its responses were evaluated by critical care experts using a standardized rubric.
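The abstract does not detail the prompting or scoring pipeline. As a minimal sketch of how such an evaluation could be reproduced, assuming the OpenAI Python SDK, a gpt-4 chat model, and a letter-only answer format (none of which are confirmed by the paper), questions could be submitted in batch and scored against an answer key; the function names below are illustrative, not the authors'.

```python
# Illustrative sketch (not the authors' code): batch-querying a GPT-4 chat model
# with exam-style multiple-choice questions and scoring against an answer key.
# Assumes the OpenAI Python SDK (>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask_question(stem: str, options: dict[str, str]) -> str:
    """Send one exam question and return the model's chosen option letter(s)."""
    option_text = "\n".join(f"{k}. {v}" for k, v in options.items())
    prompt = (
        "You are taking a critical care medicine qualification exam.\n"
        f"Question: {stem}\n{option_text}\n"
        "Answer with the option letter(s) only."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name; the study used ChatGPT-4
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().upper()

def score(questions: list[dict]) -> float:
    """Proportion of questions whose returned letters match the key exactly."""
    correct = 0
    for q in questions:
        answer = ask_question(q["stem"], q["options"])
        if set(answer.replace(",", "").replace(" ", "")) == set(q["key"]):
            correct += 1
    return correct / len(questions)
```

In the study itself, expert raters scored the responses with a standardized rubric rather than an automated exact-match check, so the scoring step above is only a stand-in.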

Results

ChatGPT-4 achieved an overall success rate of 73.5%, surpassing the 60% passing threshold in all four components, with the highest accuracy in fundamental knowledge (81.94%). It performed significantly better on single-choice questions than on multiple-choice questions (76.72% vs. 51.32%, p < 0.001), while no significant difference was observed between case-based and non-case-based questions.
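The abstract reports group-level accuracies but not the underlying counts or the test used. The sketch below shows one way such a single- vs. multiple-choice comparison could be run with a chi-square test of independence; the 524/76 split of the 600 questions is a hypothetical placeholder chosen only to match the reported percentages, and SciPy's chi2_contingency is an assumption, not the authors' stated analysis.

```python
# Hedged sketch: 2x2 chi-square comparison of accuracy between question types.
# The per-group counts below are hypothetical; the paper reports only the
# accuracies (76.72% vs. 51.32%) and the total of 600 questions.
from scipy.stats import chi2_contingency

def compare_accuracy(correct_a: int, total_a: int, correct_b: int, total_b: int):
    """Chi-square test on correct/incorrect counts for two question types."""
    table = [
        [correct_a, total_a - correct_a],
        [correct_b, total_b - correct_b],
    ]
    chi2, p, dof, _ = chi2_contingency(table)
    return chi2, p

# Hypothetical split for illustration only: 524 single-choice, 76 multiple-choice.
chi2, p = compare_accuracy(correct_a=402, total_a=524, correct_b=39, total_b=76)
print(f"chi2 = {chi2:.2f}, p = {p:.4g}")
```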

Conclusion

ChatGPT-4 demonstrated notable strengths on the critical care examination, highlighting its potential to support clinical decision-making, information retrieval, and medical education. However, caution is required given its potential to generate inaccurate responses. Its application in critical care must therefore be closely supervised by medical professionals to ensure both the accuracy of the information and patient safety.
Source journal
Journal of Critical Care (Medicine - Critical Care Medicine)
CiteScore: 8.60
Self-citation rate: 2.70%
Articles per year: 237
Time to review: 23 days
Journal description: The Journal of Critical Care, the official publication of the World Federation of Societies of Intensive and Critical Care Medicine (WFSICCM), is a leading international, peer-reviewed journal providing original research, review articles, tutorials, and invited articles for physicians and allied health professionals involved in treating the critically ill. The Journal aims to improve patient care by furthering understanding of health systems research and its integration into clinical practice. The Journal includes articles that discuss:
- All aspects of health services research in critical care
- System-based practice in anesthesiology, perioperative and critical care medicine
- The interface between anesthesiology, critical care medicine and pain
- Integrating intraoperative management in preparation for postoperative critical care management and recovery
- Optimizing patient management, i.e., exploring the interface between evidence-based principles or clinical insight into the management and care of complex patients
- The team approach in the OR and ICU
- System-based research
- Medical ethics
- Technology in medicine
- Seminars discussing current, state-of-the-art, and sometimes controversial topics in anesthesiology, critical care medicine, and professional education
- Residency education