Employing Large Language Models for Surgical Education: An In-depth Analysis of ChatGPT-4

Adrian Hang Yue Siu, Damien Gibson, Xin Mu, Ishith Seth, Alexander Chi Wang Siu, Dilshad Dooreemeah, Angus Lee

Journal of Medical Education · Published 17 October 2023 · DOI: https://doi.org/10.5812/jme-137753
Abstract
Background: The growing interest in artificial intelligence (AI) has spurred an increase in the availability of Large Language Models (LLMs) in surgical education. These LLMs hold the potential to augment medical curricula for future healthcare professionals, facilitate engagement in remote learning, and assist in personalised student feedback.

Objectives: To evaluate the ability of LLMs to assist junior doctors by providing advice for common ward-based surgical scenarios of increasing complexity.

Methods: Using an instrumental case study approach, this study explored the potential of LLMs by comparing the responses of ChatGPT-4, BingAI, and BARD. The LLMs were prompted with three common ward-based surgical scenarios and tasked with assisting junior doctors in clinical decision-making. The outputs were assessed qualitatively by a panel of two senior surgeons with extensive experience in AI and education, who rated their accuracy, safety, and effectiveness on a Likert scale to determine their viability as a synergistic tool in surgical education. Reliability and readability were assessed quantitatively using the DISCERN score and a set of readability measures: the Flesch Reading Ease Score, the Flesch-Kincaid Grade Level, and the Coleman-Liau Index (an illustrative sketch of these indices follows the abstract).

Results: BARD proved superior in readability, with a Flesch Reading Ease Score of 50.13 (± 5.00), a Flesch-Kincaid Grade Level of 9.33 (± 0.76), and a Coleman-Liau Index of 11.67 (± 0.58). ChatGPT-4 outperformed BARD and BingAI on reliability, with the highest DISCERN score of 71.7 (± 2.52). Using a Likert scale-based framework, the surgical expert panel further affirmed that the advice provided by ChatGPT-4 was suitable and safe for first-year interns and residents. T-tests showed statistically significant differences in reliability among all three LLMs (P < 0.05), and in readability only between ChatGPT-4 and BARD. These findings underscore the potential of LLM integration in surgical education, particularly ChatGPT-4, for the provision of reliable and accurate information.

Conclusions: This study highlighted the potential of LLMs, specifically ChatGPT-4, as a valuable educational resource for junior doctors. The findings are limited by the potentially non-generalisable use of simulated junior-doctor scenarios. Future work should aim to optimise learning experiences and better support surgical trainees, with particular attention to the longitudinal impact of LLMs, refining AI models, validating AI-generated content, and exploring combinations of technologies for improved outcomes.
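Editor's note on the readability indices: the three measures cited in the Methods are computed from published formulas over sentence, word, letter, and syllable counts. The following is a minimal Python sketch of those formulas, illustrative only and not the authors' tooling; the vowel-group syllable counter is a naive assumption (production tools such as the textstat package estimate syllables more carefully).

    import re

    def count_syllables(word: str) -> int:
        """Crude syllable estimate: count groups of consecutive vowels."""
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def readability_scores(text: str) -> dict:
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z]+", text)
        n_words = max(1, len(words))
        n_letters = sum(len(w) for w in words)
        n_syllables = sum(count_syllables(w) for w in words)

        words_per_sentence = n_words / sentences
        syllables_per_word = n_syllables / n_words
        letters_per_100_words = 100 * n_letters / n_words
        sentences_per_100_words = 100 * sentences / n_words

        return {
            # Flesch Reading Ease: higher = easier to read
            "flesch_reading_ease": 206.835 - 1.015 * words_per_sentence
                                   - 84.6 * syllables_per_word,
            # Flesch-Kincaid Grade Level: approximate US school grade
            "flesch_kincaid_grade": 0.39 * words_per_sentence
                                    + 11.8 * syllables_per_word - 15.59,
            # Coleman-Liau Index: grade level from letters/sentences per 100 words
            "coleman_liau_index": 0.0588 * letters_per_100_words
                                  - 0.296 * sentences_per_100_words - 15.8,
        }

    print(readability_scores(
        "Review the patient urgently. Check vital signs and escalate to the registrar."
    ))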
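The Results report pairwise t-tests across the three models. As a hedged illustration only, a comparison of this kind could be run with SciPy's Welch's t-test; the per-scenario scores below are hypothetical placeholders, since the abstract reports only means and standard deviations.

    from scipy import stats

    # Hypothetical per-scenario DISCERN scores (n = 3 scenarios per model);
    # the actual per-scenario values are not reported in the abstract.
    chatgpt4_discern = [74, 70, 71]
    bard_discern = [65, 63, 66]

    # Welch's t-test: does not assume equal variances between groups
    t_stat, p_value = stats.ttest_ind(chatgpt4_discern, bard_discern,
                                      equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")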