Adam Cohen, Jersey Burns, Martina Gabra, Alex Gordon, Nicholas Deebel, Ryan Terlecki, Katherine L Woodburn
Performance of Chat Generative Pre-Trained Transformer on Personal Review of Learning in Obstetrics and Gynecology.
Southern Medical Journal, 118(2):102-105. Published 2025-02-01. DOI: 10.14423/SMJ.0000000000001783
Citations: 0
Abstract
Objectives: Chat Generative Pre-Trained Transformer (ChatGPT) is a popular natural-language processor that can analyze and respond to a variety of prompts, providing eloquent answers based on a collection of Internet data. In the contemporary literature, ChatGPT has been considered an avenue for educating resident physicians in the form of board preparation, and it has been applied to board study material across multiple medical specialties. The purpose of our study was to evaluate the performance of ChatGPT on the Personal Review of Learning in Obstetrics and Gynecology (PROLOG) assessments and gauge its specialty-specific knowledge for educational applications.
Methods: PROLOG assessments were administered to ChatGPT version 3.5, and the percentage of correct responses was recorded. Questions were categorized by question-stem order (first-order vs higher-order) and used to measure ChatGPT performance. Performance was compared using descriptive statistics.
Results: There were 848 questions without visual components; ChatGPT answered 57.8% correctly (N = 490). ChatGPT performed worse on higher-order questions than on first-order questions (56.8% vs 60.5%, respectively). There were 65 questions containing visual data, and ChatGPT answered 16.9% correctly.
Conclusions: The passing score for the PROLOG assessments is 80%; therefore, ChatGPT 3.5 did not perform satisfactorily. Given this, it is unlikely that the tested version of ChatGPT has sufficient specialty-specific knowledge or logical capability to serve as a reliable tool for trainee education.
Journal Introduction:
As the official journal of the Birmingham, Alabama-based Southern Medical Association (SMA), the Southern Medical Journal (SMJ) has for more than 100 years provided the latest clinical information in areas that affect patients' daily lives. Now delivered to individuals exclusively online, the SMJ has a multidisciplinary focus that covers a broad range of topics relevant to physicians and other healthcare specialists in all relevant aspects of the profession, including medicine and medical specialties; surgery and surgical specialties; child and maternal health; mental health; emergency and disaster medicine; public health and environmental medicine; bioethics and medical education; and quality health care, patient safety, and best practices. Each month, articles span the spectrum of medical topics, providing timely, up-to-the-minute information for both primary care physicians and specialists. Contributors include leaders in the healthcare field from across the country and around the world. The SMJ enables physicians to provide the best possible care to patients in this age of rapidly changing modern medicine.