Ting Wang, Arch G Mainous, Keith Stelter, Thomas R O'Neill, Warren P Newton
{"title":"全科医学在训考试中的生成预训练转换器 (GPT-4) 性能评估。","authors":"Ting Wang, Arch G Mainous, Keith Stelter, Thomas R O'Neill, Warren P Newton","doi":"10.3122/jabfm.2023.230433R1","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>In this study, we sought to comprehensively evaluate GPT-4 (Generative Pre-trained Transformer)'s performance on the 2022 American Board of Family Medicine's (ABFM) In-Training Examination (ITE), compared with its predecessor, GPT-3.5, and the national family residents' performance on the same examination.</p><p><strong>Methods: </strong>We utilized both quantitative and qualitative analyses. First, a quantitative analysis was employed to evaluate the model's performance metrics using zero-shot prompt (where only examination questions were provided without any additional information). After this, qualitative analysis was executed to understand the nature of the model's responses, the depth of its medical knowledge, and its ability to comprehend contextual or new information through chain-of-thoughts prompts (interactive conversation) with the model.</p><p><strong>Results: </strong>This study demonstrated that GPT-4 made significant improvement in accuracy compared with GPT-3.5 over a 4-month interval between their respective release dates. The correct percentage with zero-shot prompt increased from 56% to 84%, which translates to a scaled score growth from 280 to 690, a 410-point increase. Most notably, further chain-of-thought investigation revealed GPT-4's ability to integrate new information and make self-correction when needed.</p><p><strong>Conclusions: </strong>In this study, GPT-4 has demonstrated notably high accuracy, as well as rapid reading and learning capabilities. These results are consistent with previous research indicating GPT-4's significant potential to assist in clinical decision making. Furthermore, the study highlights the essential role of physicians' critical thinking and lifelong learning skills, particularly evident through the analysis of GPT-4's incorrect responses. This emphasizes the indispensable human element in effectively implementing and using AI technologies in medical settings.</p>","PeriodicalId":50018,"journal":{"name":"Journal of the American Board of Family Medicine","volume":" ","pages":"528-582"},"PeriodicalIF":2.4000,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Performance Evaluation of the Generative Pre-trained Transformer (GPT-4) on the Family Medicine In-Training Examination.\",\"authors\":\"Ting Wang, Arch G Mainous, Keith Stelter, Thomas R O'Neill, Warren P Newton\",\"doi\":\"10.3122/jabfm.2023.230433R1\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objective: </strong>In this study, we sought to comprehensively evaluate GPT-4 (Generative Pre-trained Transformer)'s performance on the 2022 American Board of Family Medicine's (ABFM) In-Training Examination (ITE), compared with its predecessor, GPT-3.5, and the national family residents' performance on the same examination.</p><p><strong>Methods: </strong>We utilized both quantitative and qualitative analyses. First, a quantitative analysis was employed to evaluate the model's performance metrics using zero-shot prompt (where only examination questions were provided without any additional information). 
After this, qualitative analysis was executed to understand the nature of the model's responses, the depth of its medical knowledge, and its ability to comprehend contextual or new information through chain-of-thoughts prompts (interactive conversation) with the model.</p><p><strong>Results: </strong>This study demonstrated that GPT-4 made significant improvement in accuracy compared with GPT-3.5 over a 4-month interval between their respective release dates. The correct percentage with zero-shot prompt increased from 56% to 84%, which translates to a scaled score growth from 280 to 690, a 410-point increase. Most notably, further chain-of-thought investigation revealed GPT-4's ability to integrate new information and make self-correction when needed.</p><p><strong>Conclusions: </strong>In this study, GPT-4 has demonstrated notably high accuracy, as well as rapid reading and learning capabilities. These results are consistent with previous research indicating GPT-4's significant potential to assist in clinical decision making. Furthermore, the study highlights the essential role of physicians' critical thinking and lifelong learning skills, particularly evident through the analysis of GPT-4's incorrect responses. This emphasizes the indispensable human element in effectively implementing and using AI technologies in medical settings.</p>\",\"PeriodicalId\":50018,\"journal\":{\"name\":\"Journal of the American Board of Family Medicine\",\"volume\":\" \",\"pages\":\"528-582\"},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2024-10-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of the American Board of Family Medicine\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.3122/jabfm.2023.230433R1\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MEDICINE, GENERAL & INTERNAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of the American Board of Family Medicine","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.3122/jabfm.2023.230433R1","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MEDICINE, GENERAL & INTERNAL","Score":null,"Total":0}
Performance Evaluation of the Generative Pre-trained Transformer (GPT-4) on the Family Medicine In-Training Examination.
Objective: In this study, we sought to comprehensively evaluate the performance of GPT-4 (Generative Pre-trained Transformer) on the 2022 American Board of Family Medicine (ABFM) In-Training Examination (ITE), comparing it with its predecessor, GPT-3.5, and with national family medicine residents' performance on the same examination.
Methods: We used both quantitative and qualitative analyses. First, a quantitative analysis evaluated the model's performance metrics using a zero-shot prompt (only the examination questions were provided, without any additional information). A qualitative analysis then examined the nature of the model's responses, the depth of its medical knowledge, and its ability to comprehend contextual or new information through chain-of-thought prompts (interactive conversation) with the model; a minimal sketch of these two prompting modes follows.
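The sketch below is illustrative only and is not the authors' evaluation protocol: it assumes OpenAI's `openai` Python client, the "gpt-4" model name, and a placeholder multiple-choice question, and shows the difference between a zero-shot query and an interactive chain-of-thought follow-up that supplies new information.

```python
# Minimal sketch (not the study's actual protocol) of zero-shot vs. chain-of-thought
# prompting of GPT-4 on a multiple-choice question, using the OpenAI Python client.
# The question text, options, and follow-up information are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = (
    "A 45-year-old patient presents with ... Which is the most appropriate next step?\n"
    "A. Option one\nB. Option two\nC. Option three\nD. Option four"
)

def zero_shot_answer(question: str) -> str:
    """Ask for an answer with no additional context or reasoning instructions."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "user",
             "content": f"{question}\n\nAnswer with the single best option letter."},
        ],
    )
    return resp.choices[0].message.content

def chain_of_thought_followup(question: str, first_answer: str, new_information: str) -> str:
    """Continue the conversation, supplying new information and asking the model
    to reconsider step by step (the interactive chain-of-thought probe)."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "user", "content": question},
            {"role": "assistant", "content": first_answer},
            {"role": "user", "content": (
                f"Consider this additional information: {new_information}\n"
                "Explain your reasoning step by step, then state your final answer."
            )},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    first = zero_shot_answer(QUESTION)
    print("Zero-shot answer:", first)
    print("Follow-up:", chain_of_thought_followup(QUESTION, first, "the patient also reports ..."))
```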
Results: GPT-4 showed a significant improvement in accuracy over GPT-3.5, despite only a 4-month interval between their respective release dates. The percentage of correct answers with the zero-shot prompt increased from 56% to 84%, which corresponds to a scaled score increase from 280 to 690, a 410-point gain. Most notably, further chain-of-thought investigation revealed GPT-4's ability to integrate new information and self-correct when needed.
Conclusions: In this study, GPT-4 demonstrated notably high accuracy as well as rapid reading and learning capabilities. These results are consistent with previous research indicating GPT-4's significant potential to assist in clinical decision making. Furthermore, the study highlights the essential role of physicians' critical thinking and lifelong learning skills, which was particularly evident in the analysis of GPT-4's incorrect responses. This underscores the indispensable human element in effectively implementing and using AI technologies in medical settings.
About the journal:
Published since 1988, the Journal of the American Board of Family Medicine ( JABFM ) is the official peer-reviewed journal of the American Board of Family Medicine (ABFM). Believing that the public and scientific communities are best served by open access to information, JABFM makes its articles available free of charge and without registration at www.jabfm.org. JABFM is indexed by Medline, Index Medicus, and other services.