{"title":"病理学中的大语言模型:病理学学员多项选择题成绩比较研究","authors":"Wei Du, Jaryse Harris, Alessandro Brunetti, Olivia Leung, Xingchen Li, Selemon Walle, Qing Yu, Xiao Zhou, Fang Bian, Kajanna Mckenzie, Xueting Jin, Manita Kanathanavanich, Farah El-Sharkawy, Shunsuke Koga","doi":"10.1101/2024.07.10.24310093","DOIUrl":null,"url":null,"abstract":"Aims: Large language models (LLMs), such as ChatGPT and Bard, have shown potential in various medical applications. This study aims to evaluate the performance of LLMs, specifically ChatGPT and Bard, in pathology by comparing their performance with that of pathology residents and fellows, and to assess the consistency of their responses.\nMethods: We selected 150 multiple-choice questions covering 15 subspecialties, excluding those with images. Both ChatGPT and Bard were tested on these questions three times, and their responses were compared with those of 14 pathology trainees from two hospitals. Questions were categorized into easy, intermediate, and difficult based on trainee performance. Consistency and variability in LLM responses were analyzed across three evaluation sessions.\nResults: ChatGPT significantly outperformed Bard and trainees, achieving an average total score of 82.2% compared to Bard's 49.5% and trainees' 50.7%. ChatGPT's performance was notably stronger in difficult questions (61.8%-70.6%) compared to Bard (29.4%-32.4%) and trainees (5.9%-44.1%). For easy questions, ChatGPT (88.9%-94.4%) and trainees (75.0%-100.0%) showed similar high scores. Consistency analysis revealed that ChatGPT showed a high consistency rate of 85%-80% across three tests, whereas Bard exhibited greater variability with consistency rates of 61%-54%.\nConclusion: ChatGPT consistently outperformed Bard and trainees, especially on difficult questions. While LLMs show significant potential in pathology education and practice, ongoing development and human oversight are essential for reliable clinical application.","PeriodicalId":501528,"journal":{"name":"medRxiv - Pathology","volume":"28 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Large Language Models in Pathology: A Comparative Study on Multiple Choice Question Performance with Pathology Trainees\",\"authors\":\"Wei Du, Jaryse Harris, Alessandro Brunetti, Olivia Leung, Xingchen Li, Selemon Walle, Qing Yu, Xiao Zhou, Fang Bian, Kajanna Mckenzie, Xueting Jin, Manita Kanathanavanich, Farah El-Sharkawy, Shunsuke Koga\",\"doi\":\"10.1101/2024.07.10.24310093\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Aims: Large language models (LLMs), such as ChatGPT and Bard, have shown potential in various medical applications. This study aims to evaluate the performance of LLMs, specifically ChatGPT and Bard, in pathology by comparing their performance with that of pathology residents and fellows, and to assess the consistency of their responses.\\nMethods: We selected 150 multiple-choice questions covering 15 subspecialties, excluding those with images. Both ChatGPT and Bard were tested on these questions three times, and their responses were compared with those of 14 pathology trainees from two hospitals. Questions were categorized into easy, intermediate, and difficult based on trainee performance. 
Consistency and variability in LLM responses were analyzed across three evaluation sessions.\\nResults: ChatGPT significantly outperformed Bard and trainees, achieving an average total score of 82.2% compared to Bard's 49.5% and trainees' 50.7%. ChatGPT's performance was notably stronger in difficult questions (61.8%-70.6%) compared to Bard (29.4%-32.4%) and trainees (5.9%-44.1%). For easy questions, ChatGPT (88.9%-94.4%) and trainees (75.0%-100.0%) showed similar high scores. Consistency analysis revealed that ChatGPT showed a high consistency rate of 85%-80% across three tests, whereas Bard exhibited greater variability with consistency rates of 61%-54%.\\nConclusion: ChatGPT consistently outperformed Bard and trainees, especially on difficult questions. While LLMs show significant potential in pathology education and practice, ongoing development and human oversight are essential for reliable clinical application.\",\"PeriodicalId\":501528,\"journal\":{\"name\":\"medRxiv - Pathology\",\"volume\":\"28 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"medRxiv - Pathology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1101/2024.07.10.24310093\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"medRxiv - Pathology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2024.07.10.24310093","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Large Language Models in Pathology: A Comparative Study on Multiple Choice Question Performance with Pathology Trainees
Aims: Large language models (LLMs), such as ChatGPT and Bard, have shown potential in various medical applications. This study aims to evaluate the performance of two LLMs, ChatGPT and Bard, in pathology by comparing them with pathology residents and fellows, and to assess the consistency of their responses.
Methods: We selected 150 multiple-choice questions covering 15 subspecialties, excluding questions that relied on images. ChatGPT and Bard were each tested on these questions three times, and their responses were compared with those of 14 pathology trainees from two hospitals. Questions were categorized as easy, intermediate, or difficult based on trainee performance. Consistency and variability in the LLM responses were analyzed across the three evaluation sessions.
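The abstract does not specify how scoring, difficulty categorization, or consistency were computed. The following Python sketch shows one plausible implementation of such a pipeline; the data layout, the 75%/40% difficulty cut-offs, and all function names are illustrative assumptions rather than details taken from the paper.

```python
from collections import Counter

# Hypothetical record layout: each question stores the correct choice, three
# responses per model (one per evaluation session), and the 14 trainee answers.
questions = [
    {
        "correct": "B",
        "chatgpt": ["B", "B", "B"],
        "bard": ["A", "B", "A"],
        "trainees": ["B", "B", "A", "C", "B", "B", "B",
                     "A", "B", "D", "B", "B", "A", "B"],
    },
    # ... remaining questions loaded from the question bank
]

def difficulty(trainee_answers, correct):
    """Categorize a question by trainee accuracy (cut-offs are assumptions)."""
    accuracy = sum(a == correct for a in trainee_answers) / len(trainee_answers)
    if accuracy >= 0.75:
        return "easy"
    if accuracy >= 0.40:
        return "intermediate"
    return "difficult"

def session_scores(model):
    """Percentage of questions answered correctly in each of the three sessions."""
    n = len(questions)
    return [100 * sum(q[model][s] == q["correct"] for q in questions) / n
            for s in range(3)]

def overall_consistency(model):
    """Percentage of questions answered identically in all three sessions."""
    same = sum(len(set(q[model])) == 1 for q in questions)
    return 100 * same / len(questions)

print(session_scores("chatgpt"), overall_consistency("chatgpt"))
print(Counter(difficulty(q["trainees"], q["correct"]) for q in questions))
```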
Results: ChatGPT significantly outperformed both Bard and the trainees, achieving an average total score of 82.2% compared with Bard's 49.5% and the trainees' 50.7%. ChatGPT's performance was notably stronger on difficult questions (61.8%-70.6%) than that of Bard (29.4%-32.4%) or the trainees (5.9%-44.1%). On easy questions, ChatGPT (88.9%-94.4%) and the trainees (75.0%-100.0%) achieved similarly high scores. Consistency analysis showed that ChatGPT maintained a high consistency rate of 80%-85% across the three tests, whereas Bard exhibited greater variability, with consistency rates of 54%-61%.
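The consistency figures are reported as ranges, which would arise naturally if agreement were computed for each pair of evaluation sessions and the minimum and maximum reported; that reading is an assumption, not a stated method. Under that assumption, a minimal self-contained sketch could look like this:

```python
from itertools import combinations

def pairwise_consistency(responses):
    """responses: one [session1, session2, session3] answer list per question.
    Returns the percentage of questions answered identically for each session pair."""
    rates = {}
    for i, j in combinations(range(3), 2):
        same = sum(r[i] == r[j] for r in responses)
        rates[(i + 1, j + 1)] = 100 * same / len(responses)
    return rates

# Toy input: three questions, three sessions each.
# Prints {(1, 2): 66.7, (1, 3): 100.0, (2, 3): 66.7}; the reported range would
# then be the minimum and maximum of these pairwise rates.
rates = pairwise_consistency([["B", "B", "B"], ["A", "B", "A"], ["C", "C", "C"]])
print({pair: round(rate, 1) for pair, rate in rates.items()})
```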
Conclusion: ChatGPT consistently outperformed Bard and trainees, especially on difficult questions. While LLMs show significant potential in pathology education and practice, ongoing development and human oversight are essential for reliable clinical application.