Troy Camarata, Lise McCoy, Robert Rosenberg, Kelsey R Temprine Grellinger, Kylie Brettschnieder, Jonathan Berman
{"title":"面向临床前医学专业学生的法学硕士选择题练习测验项目写作缺陷的普遍性。","authors":"Troy Camarata, Lise McCoy, Robert Rosenberg, Kelsey R Temprine Grellinger, Kylie Brettschnieder, Jonathan Berman","doi":"10.1152/advan.00106.2024","DOIUrl":null,"url":null,"abstract":"<p><p>Multiple choice questions (MCQs) are frequently used in medical education for assessment. Automated generation of MCQs in board-exam format could potentially save significant effort for faculty and generate a wider set of practice materials for student use. The goal of this study was to explore the feasibility of using ChatGPT by OpenAI to generate United States Medical Licensing Exam (USMLE)/Comprehensive Osteopathic Medical Licensing Examination (COMLEX-USA)-style practice quiz items as study aids. Researchers gave second-year medical students studying renal physiology access to a set of practice quizzes with ChatGPT-generated questions. The exam items generated were evaluated by independent experts for quality and adherence to the National Board of Medical Examiners (NBME)/National Board of Osteopathic Medical Examiners (NBOME) guidelines. Forty-nine percent of questions contained item writing flaws, and 22% contained factual or conceptual errors. However, 59/65 (91%) were categorized as a reasonable starting point for revision. These results demonstrate the feasibility of large language model (LLM)-generated practice questions in medical education but only when supervised by a subject matter expert with training in exam item writing.<b>NEW & NOTEWORTHY</b> Practice board exam questions generated by large language models can be made suitable for preclinical medical students by subject-matter experts.</p>","PeriodicalId":50852,"journal":{"name":"Advances in Physiology Education","volume":" ","pages":"758-763"},"PeriodicalIF":1.7000,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"LLM-Generated multiple choice practice quizzes for preclinical medical students.\",\"authors\":\"Troy Camarata, Lise McCoy, Robert Rosenberg, Kelsey R Temprine Grellinger, Kylie Brettschnieder, Jonathan Berman\",\"doi\":\"10.1152/advan.00106.2024\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Multiple choice questions (MCQs) are frequently used in medical education for assessment. Automated generation of MCQs in board-exam format could potentially save significant effort for faculty and generate a wider set of practice materials for student use. The goal of this study was to explore the feasibility of using ChatGPT by OpenAI to generate United States Medical Licensing Exam (USMLE)/Comprehensive Osteopathic Medical Licensing Examination (COMLEX-USA)-style practice quiz items as study aids. Researchers gave second-year medical students studying renal physiology access to a set of practice quizzes with ChatGPT-generated questions. The exam items generated were evaluated by independent experts for quality and adherence to the National Board of Medical Examiners (NBME)/National Board of Osteopathic Medical Examiners (NBOME) guidelines. Forty-nine percent of questions contained item writing flaws, and 22% contained factual or conceptual errors. However, 59/65 (91%) were categorized as a reasonable starting point for revision. 
These results demonstrate the feasibility of large language model (LLM)-generated practice questions in medical education but only when supervised by a subject matter expert with training in exam item writing.<b>NEW & NOTEWORTHY</b> Practice board exam questions generated by large language models can be made suitable for preclinical medical students by subject-matter experts.</p>\",\"PeriodicalId\":50852,\"journal\":{\"name\":\"Advances in Physiology Education\",\"volume\":\" \",\"pages\":\"758-763\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2025-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Advances in Physiology Education\",\"FirstCategoryId\":\"95\",\"ListUrlMain\":\"https://doi.org/10.1152/advan.00106.2024\",\"RegionNum\":4,\"RegionCategory\":\"教育学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/6/14 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q2\",\"JCRName\":\"EDUCATION, SCIENTIFIC DISCIPLINES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advances in Physiology Education","FirstCategoryId":"95","ListUrlMain":"https://doi.org/10.1152/advan.00106.2024","RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/6/14 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
LLM-generated multiple choice practice quizzes for preclinical medical students.
Multiple choice questions (MCQs) are frequently used for assessment in medical education. Automated generation of MCQs in board-exam format could save faculty significant effort and produce a wider set of practice materials for students. The goal of this study was to explore the feasibility of using OpenAI's ChatGPT to generate United States Medical Licensing Examination (USMLE)/Comprehensive Osteopathic Medical Licensing Examination (COMLEX-USA)-style practice quiz items as study aids. Researchers gave second-year medical students studying renal physiology access to a set of practice quizzes with ChatGPT-generated questions. Independent experts evaluated the generated exam items for quality and for adherence to National Board of Medical Examiners (NBME)/National Board of Osteopathic Medical Examiners (NBOME) guidelines. Forty-nine percent of questions contained item-writing flaws, and 22% contained factual or conceptual errors. However, 59 of 65 (91%) were categorized as a reasonable starting point for revision. These results demonstrate the feasibility of large language model (LLM)-generated practice questions in medical education, but only when their use is supervised by a subject-matter expert with training in exam item writing.

NEW & NOTEWORTHY: Practice board exam questions generated by large language models can be made suitable for preclinical medical students by subject-matter experts.
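The article does not publish its prompts or model settings, but the generation step it describes can be scripted rather than run interactively. Below is a minimal sketch assuming the OpenAI Python SDK; the model name, prompt wording, and temperature are hypothetical, and any item produced this way would still need the expert review the study found necessary.

```python
# Minimal sketch: generating one board-exam-style MCQ draft via the OpenAI API.
# Model, prompt text, and temperature are illustrative assumptions; the study
# does not specify its exact prompts or settings.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Write one USMLE Step 1-style multiple choice question on renal "
    "physiology. Use a clinical vignette stem and five answer options "
    "(A-E) with a single best answer. Follow NBME item-writing "
    "guidelines: no 'all of the above', no negatively worded stems. "
    "After the options, state the correct answer and a brief rationale."
)

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; the study used ChatGPT
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0.7,
)

draft_item = response.choices[0].message.content
# A draft only: per the study, roughly half of such items contain
# item-writing flaws, so expert revision remains essential.
print(draft_item)
```

Under this workflow, the model output is treated as a starting point for revision, consistent with the study's finding that 91% of generated items were salvageable but only under subject-matter-expert supervision.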
Journal introduction:
Advances in Physiology Education promotes and disseminates educational scholarship to enhance the teaching and learning of physiology, neuroscience, and pathophysiology. The journal publishes peer-reviewed descriptions of innovations that improve teaching in the classroom and laboratory, essays on education, and review articles based on our current understanding of physiological mechanisms. Submissions that evaluate new technologies for teaching and research, as well as educational pedagogy, are especially welcome. The audience for the journal includes educators at all levels: K–12, undergraduate, graduate, and professional programs.