Benjamin Schnapp MD, MEd, Morgan Sehdev MD, Caitlin Schrepel MD, Sharon Bord MD, Alexis Pelletier-Bui MD, Al’ai Alvarez MD, Nicole M. Dubosh MD, Yoon Soo Park PhD, Eric Shappell MD, MHPE
{"title":"ChatG-PD吗?比较大语言模型人工智能和教师排名的竞争力标准化评价信。","authors":"Benjamin Schnapp MD, MEd, Morgan Sehdev MD, Caitlin Schrepel MD, Sharon Bord MD, Alexis Pelletier-Bui MD, Al’ai Alvarez MD, Nicole M. Dubosh MD, Yoon Soo Park PhD, Eric Shappell MD, MHPE","doi":"10.1002/aet2.11052","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>While faculty have previously been shown to have high levels of agreement about the competitiveness of emergency medicine (EM) standardized letters of evaluation (SLOEs), reviewing SLOEs remains a highly time-intensive process for faculty. Artificial intelligence large language models (LLMs) have shown promise for effectively analyzing large volumes of data across a variety of contexts, but their ability to interpret SLOEs is unknown.</p>\n </section>\n \n <section>\n \n <h3> Objective</h3>\n \n <p>The objective was to evaluate the ability of LLMs to rate EM SLOEs on competitiveness compared to faculty consensus and previously developed algorithms.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>Fifty mock SLOE letters were drafted and analyzed seven times by a data-focused LLM with instructions to rank them based on desirability for residency. The LLM was also asked to use its own criteria to decide which characteristics are most important for residency and revise its ranking of the SLOEs. LLM-generated rank lists were compared with faculty consensus rankings.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>There was a high degree of correlation (<i>r =</i> 0.96) between the rank list initially generated by LLM consensus and the rank list generated by trained faculty. 
The correlation between the revised list generated by the LLM and the faculty consensus was lower (<i>r =</i> 0.86).</p>\n </section>\n \n <section>\n \n <h3> Conclusions</h3>\n \n <p>The LLM generated rankings showed strong correlation with expert faculty consensus rankings with minimal input of faculty time and effort.</p>\n </section>\n </div>","PeriodicalId":37032,"journal":{"name":"AEM Education and Training","volume":"8 6","pages":""},"PeriodicalIF":1.7000,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11628426/pdf/","citationCount":"0","resultStr":"{\"title\":\"ChatG-PD? Comparing large language model artificial intelligence and faculty rankings of the competitiveness of standardized letters of evaluation\",\"authors\":\"Benjamin Schnapp MD, MEd, Morgan Sehdev MD, Caitlin Schrepel MD, Sharon Bord MD, Alexis Pelletier-Bui MD, Al’ai Alvarez MD, Nicole M. Dubosh MD, Yoon Soo Park PhD, Eric Shappell MD, MHPE\",\"doi\":\"10.1002/aet2.11052\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <h3> Background</h3>\\n \\n <p>While faculty have previously been shown to have high levels of agreement about the competitiveness of emergency medicine (EM) standardized letters of evaluation (SLOEs), reviewing SLOEs remains a highly time-intensive process for faculty. 
Artificial intelligence large language models (LLMs) have shown promise for effectively analyzing large volumes of data across a variety of contexts, but their ability to interpret SLOEs is unknown.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Objective</h3>\\n \\n <p>The objective was to evaluate the ability of LLMs to rate EM SLOEs on competitiveness compared to faculty consensus and previously developed algorithms.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Methods</h3>\\n \\n <p>Fifty mock SLOE letters were drafted and analyzed seven times by a data-focused LLM with instructions to rank them based on desirability for residency. The LLM was also asked to use its own criteria to decide which characteristics are most important for residency and revise its ranking of the SLOEs. LLM-generated rank lists were compared with faculty consensus rankings.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results</h3>\\n \\n <p>There was a high degree of correlation (<i>r =</i> 0.96) between the rank list initially generated by LLM consensus and the rank list generated by trained faculty. 
The correlation between the revised list generated by the LLM and the faculty consensus was lower (<i>r =</i> 0.86).</p>\\n </section>\\n \\n <section>\\n \\n <h3> Conclusions</h3>\\n \\n <p>The LLM generated rankings showed strong correlation with expert faculty consensus rankings with minimal input of faculty time and effort.</p>\\n </section>\\n </div>\",\"PeriodicalId\":37032,\"journal\":{\"name\":\"AEM Education and Training\",\"volume\":\"8 6\",\"pages\":\"\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2024-12-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11628426/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"AEM Education and Training\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/aet2.11052\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"EDUCATION, SCIENTIFIC DISCIPLINES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"AEM Education and Training","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/aet2.11052","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
ChatG-PD? Comparing large language model artificial intelligence and faculty rankings of the competitiveness of standardized letters of evaluation
Background
While faculty have previously been shown to have high levels of agreement about the competitiveness of emergency medicine (EM) standardized letters of evaluation (SLOEs), reviewing SLOEs remains a highly time-intensive process for faculty. Large language models (LLMs), a form of artificial intelligence, have shown promise for effectively analyzing large volumes of data across a variety of contexts, but their ability to interpret SLOEs is unknown.
Objective
The objective was to evaluate how well LLM ratings of EM SLOE competitiveness agree with faculty consensus rankings and previously developed algorithms.
Methods
Fifty mock SLOE letters were drafted, and each was analyzed seven times by a data-analysis-focused LLM instructed to rank the letters by desirability for residency. The LLM was also asked to apply its own criteria for which characteristics matter most for residency and to revise its ranking of the SLOEs accordingly. LLM-generated rank lists were then compared with faculty consensus rankings.
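The abstract does not state how the seven LLM runs were combined into a single consensus rank list; a common choice is to order letters by their mean rank across runs. The sketch below is illustrative only (the function name, data, and aggregation rule are assumptions, not the study's method):

```python
# Hypothetical sketch: combining several per-run rankings of the same
# letters into one consensus rank list via mean rank (an assumed
# aggregation rule; the study does not specify its method).
from statistics import mean

def consensus_ranking(runs):
    """runs[k][i] is letter i's rank (1 = best) in run k.
    Returns letter indices ordered by mean rank, best first."""
    n = len(runs[0])
    mean_ranks = [mean(run[i] for run in runs) for i in range(n)]
    return sorted(range(n), key=lambda i: mean_ranks[i])

# Three hypothetical letters ranked in three runs:
runs = [[1, 2, 3], [2, 1, 3], [1, 2, 3]]
order = consensus_ranking(runs)  # letter 0 has the best mean rank
```

Mean-rank aggregation smooths out run-to-run variability in LLM output, which is one plausible reason to query the model repeatedly rather than once.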
Results
There was a high degree of correlation (r = 0.96) between the rank list initially generated by LLM consensus and the rank list generated by trained faculty. The correlation between the revised list generated by the LLM and the faculty consensus was lower (r = 0.86).
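The reported r values quantify agreement between two rank lists. The abstract does not say whether Pearson or Spearman correlation was used; for rank data without ties, Spearman's rho has the closed form shown below. This is a minimal illustrative sketch, not the study's analysis code:

```python
# Minimal sketch: Spearman rank correlation between two rank lists,
# the kind of agreement statistic reported in the Results
# (whether the study used Spearman or Pearson is not stated).

def spearman_rho(ranks_a, ranks_b):
    """Spearman correlation for two equal-length rank lists, no ties."""
    n = len(ranks_a)
    d_sq = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - (6 * d_sq) / (n * (n ** 2 - 1))

# Hypothetical example: two raters rank five letters (1 = best);
# swapping the top two letters yields rho = 0.9.
faculty = [1, 2, 3, 4, 5]
llm = [2, 1, 3, 4, 5]
rho = spearman_rho(faculty, llm)  # 0.9
```

Identical orderings give rho = 1.0, so values like 0.96 indicate near-identical rank orderings between the LLM and faculty lists.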
Conclusions
The LLM-generated rankings showed strong correlation with expert faculty consensus rankings while requiring minimal input of faculty time and effort.