Wei Du, Xueting Jin, Jaryse Carol Harris, Alessandro Brunetti, Erika Johnson, Olivia Leung, Xingchen Li, Selemon Walle, Qing Yu, Xiao Zhou, Fang Bian, Kajanna McKenzie, Manita Kanathanavanich, Yusuf Ozcelik, Farah El-Sharkawy, Shunsuke Koga
{"title":"病理学中的大语言模型:ChatGPT 和 Bard 与病理学学员在选择题上的比较研究。","authors":"Wei Du , Xueting Jin , Jaryse Carol Harris , Alessandro Brunetti , Erika Johnson , Olivia Leung , Xingchen Li , Selemon Walle , Qing Yu , Xiao Zhou , Fang Bian , Kajanna McKenzie , Manita Kanathanavanich , Yusuf Ozcelik , Farah El-Sharkawy , Shunsuke Koga","doi":"10.1016/j.anndiagpath.2024.152392","DOIUrl":null,"url":null,"abstract":"<div><div>Large language models (LLMs), such as ChatGPT and Bard, have shown potential in various medical applications. This study aimed to evaluate the performance of LLMs, specifically ChatGPT and Bard, in pathology by comparing their performance with those of pathology trainees, and to assess the consistency of their responses. We selected 150 multiple-choice questions from 15 subspecialties, excluding those with images. Both ChatGPT and Bard were tested on these questions across three separate sessions between June 2023 and January 2024, and their responses were compared with those of 16 pathology trainees (8 junior and 8 senior) from two hospitals. Questions were categorized into easy, intermediate, and difficult based on trainee performance. Consistency and variability in LLM responses were analyzed across three evaluation sessions. ChatGPT significantly outperformed Bard and trainees, achieving an average total score of 82.2% compared to Bard's 49.5%, junior trainees' 45.1%, and senior trainees' 56.0%. ChatGPT's performance was notably stronger in difficult questions (63.4%–68.3%) compared to Bard (31.7%–34.1%) and trainees (4.9%–48.8%). For easy questions, ChatGPT (83.1%–91.5%) and trainees (73.7%–100.0%) showed similar high scores. Consistency analysis revealed that ChatGPT showed a high consistency rate of 80%–85% across three tests, whereas Bard exhibited greater variability with consistency rates of 54%–61%. While LLMs show significant promise in pathology education and practice, continued development and human oversight are crucial for reliable clinical application.</div></div>","PeriodicalId":50768,"journal":{"name":"Annals of Diagnostic Pathology","volume":"73 ","pages":"Article 152392"},"PeriodicalIF":1.5000,"publicationDate":"2024-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Large language models in pathology: A comparative study of ChatGPT and Bard with pathology trainees on multiple-choice questions\",\"authors\":\"Wei Du , Xueting Jin , Jaryse Carol Harris , Alessandro Brunetti , Erika Johnson , Olivia Leung , Xingchen Li , Selemon Walle , Qing Yu , Xiao Zhou , Fang Bian , Kajanna McKenzie , Manita Kanathanavanich , Yusuf Ozcelik , Farah El-Sharkawy , Shunsuke Koga\",\"doi\":\"10.1016/j.anndiagpath.2024.152392\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Large language models (LLMs), such as ChatGPT and Bard, have shown potential in various medical applications. This study aimed to evaluate the performance of LLMs, specifically ChatGPT and Bard, in pathology by comparing their performance with those of pathology trainees, and to assess the consistency of their responses. We selected 150 multiple-choice questions from 15 subspecialties, excluding those with images. Both ChatGPT and Bard were tested on these questions across three separate sessions between June 2023 and January 2024, and their responses were compared with those of 16 pathology trainees (8 junior and 8 senior) from two hospitals. 
Questions were categorized into easy, intermediate, and difficult based on trainee performance. Consistency and variability in LLM responses were analyzed across three evaluation sessions. ChatGPT significantly outperformed Bard and trainees, achieving an average total score of 82.2% compared to Bard's 49.5%, junior trainees' 45.1%, and senior trainees' 56.0%. ChatGPT's performance was notably stronger in difficult questions (63.4%–68.3%) compared to Bard (31.7%–34.1%) and trainees (4.9%–48.8%). For easy questions, ChatGPT (83.1%–91.5%) and trainees (73.7%–100.0%) showed similar high scores. Consistency analysis revealed that ChatGPT showed a high consistency rate of 80%–85% across three tests, whereas Bard exhibited greater variability with consistency rates of 54%–61%. While LLMs show significant promise in pathology education and practice, continued development and human oversight are crucial for reliable clinical application.</div></div>\",\"PeriodicalId\":50768,\"journal\":{\"name\":\"Annals of Diagnostic Pathology\",\"volume\":\"73 \",\"pages\":\"Article 152392\"},\"PeriodicalIF\":1.5000,\"publicationDate\":\"2024-11-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Annals of Diagnostic Pathology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1092913424001291\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"PATHOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annals of Diagnostic Pathology","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1092913424001291","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"PATHOLOGY","Score":null,"Total":0}
Large language models in pathology: A comparative study of ChatGPT and Bard with pathology trainees on multiple-choice questions
Large language models (LLMs), such as ChatGPT and Bard, have shown potential in various medical applications. This study aimed to evaluate the performance of LLMs, specifically ChatGPT and Bard, in pathology by comparing their performance with that of pathology trainees, and to assess the consistency of their responses. We selected 150 multiple-choice questions from 15 subspecialties, excluding those with images. Both ChatGPT and Bard were tested on these questions across three separate sessions between June 2023 and January 2024, and their responses were compared with those of 16 pathology trainees (8 junior and 8 senior) from two hospitals. Questions were categorized as easy, intermediate, or difficult based on trainee performance. Consistency and variability in LLM responses were analyzed across the three evaluation sessions. ChatGPT significantly outperformed Bard and the trainees, achieving an average total score of 82.2% compared with Bard's 49.5%, junior trainees' 45.1%, and senior trainees' 56.0%. ChatGPT's performance was notably stronger on difficult questions (63.4%–68.3%) than that of Bard (31.7%–34.1%) and the trainees (4.9%–48.8%). On easy questions, ChatGPT (83.1%–91.5%) and the trainees (73.7%–100.0%) showed similarly high scores. Consistency analysis revealed that ChatGPT maintained a high consistency rate of 80%–85% across the three tests, whereas Bard exhibited greater variability, with consistency rates of 54%–61%. While LLMs show significant promise in pathology education and practice, continued development and human oversight are crucial for reliable clinical application.
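To make the two metrics described above concrete, the sketch below shows one plausible way to derive question difficulty from trainee accuracy and to compute an LLM's consistency rate across the three test sessions. This is not the authors' code; the difficulty cut-offs, data structures, and function names are illustrative assumptions only.

```python
# Minimal sketch (illustrative, not the study's implementation) of the
# difficulty categorization and session-to-session consistency analysis
# described in the abstract. Thresholds below are assumed, not reported.

def difficulty(trainee_accuracy: float) -> str:
    """Bucket a question by the fraction of trainees answering it correctly.
    The cut-offs here are hypothetical."""
    if trainee_accuracy >= 0.75:
        return "easy"
    if trainee_accuracy >= 0.40:
        return "intermediate"
    return "difficult"

def consistency_rate(sessions: list[list[str]]) -> float:
    """Fraction of questions for which a model gave the same answer in every
    session (e.g., the three runs between June 2023 and January 2024)."""
    per_question = zip(*sessions)  # group each question's answers across sessions
    same = sum(1 for answers in per_question if len(set(answers)) == 1)
    return same / len(sessions[0])

# Toy usage: three sessions of answers to five multiple-choice questions.
runs = [["A", "C", "B", "D", "A"],
        ["A", "C", "B", "B", "A"],
        ["A", "C", "B", "D", "A"]]
print(f"consistency: {consistency_rate(runs):.0%}")  # 80% in this toy example
```

In this toy run, one question out of five receives a different answer in one session, giving an 80% consistency rate, the same order of magnitude the study reports for ChatGPT (80%–85%) versus Bard (54%–61%).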
Journal introduction:
A peer-reviewed journal devoted to the publication of articles dealing with traditional morphologic studies using standard diagnostic techniques and stressing clinicopathological correlations and scientific observation of relevance to the daily practice of pathology. Special features include pathologic-radiologic correlations and pathologic-cytologic correlations.