{"title":"将 ChatGPT 作为教学工具:用人工智能生成的消化系统病理学测试为病理学住院医师考试做准备。","authors":"Thiyaphat Laohawetwanit, Sompon Apornvirat, Charinee Kantasiripitak","doi":"10.1093/ajcp/aqae062","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>To evaluate the effectiveness of ChatGPT 4 in generating multiple-choice questions (MCQs) with explanations for pathology board examinations, specifically for digestive system pathology.</p><p><strong>Methods: </strong>The customized ChatGPT 4 model was developed for MCQ and explanation generation. Expert pathologists evaluated content accuracy and relevance. These MCQs were then administered to pathology residents, followed by an analysis focusing on question difficulty, accuracy, item discrimination, and internal consistency.</p><p><strong>Results: </strong>The customized ChatGPT 4 generated 80 MCQs covering various gastrointestinal and hepatobiliary topics. While the MCQs demonstrated moderate to high agreement in evaluation parameters such as content accuracy, clinical relevance, and overall quality, there were issues in cognitive level and distractor quality. The explanations were generally acceptable. Involving 9 residents with a median experience of 1 year, the average score was 57.4 (71.8%). Pairwise comparisons revealed a significant difference in performance between each year group (P < .01). The test analysis showed moderate difficulty, effective item discrimination (index = 0.15), and good internal consistency (Cronbach's α = 0.74).</p><p><strong>Conclusions: </strong>ChatGPT 4 demonstrated significant potential as a supplementary educational tool in medical education, especially in generating MCQs with explanations similar to those seen in board examinations. While artificial intelligence-generated content was of high quality, it necessitated refinement and expert review.</p>","PeriodicalId":7506,"journal":{"name":"American journal of clinical pathology","volume":" ","pages":"471-479"},"PeriodicalIF":2.3000,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ChatGPT as a teaching tool: Preparing pathology residents for board examination with AI-generated digestive system pathology tests.\",\"authors\":\"Thiyaphat Laohawetwanit, Sompon Apornvirat, Charinee Kantasiripitak\",\"doi\":\"10.1093/ajcp/aqae062\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objectives: </strong>To evaluate the effectiveness of ChatGPT 4 in generating multiple-choice questions (MCQs) with explanations for pathology board examinations, specifically for digestive system pathology.</p><p><strong>Methods: </strong>The customized ChatGPT 4 model was developed for MCQ and explanation generation. Expert pathologists evaluated content accuracy and relevance. These MCQs were then administered to pathology residents, followed by an analysis focusing on question difficulty, accuracy, item discrimination, and internal consistency.</p><p><strong>Results: </strong>The customized ChatGPT 4 generated 80 MCQs covering various gastrointestinal and hepatobiliary topics. While the MCQs demonstrated moderate to high agreement in evaluation parameters such as content accuracy, clinical relevance, and overall quality, there were issues in cognitive level and distractor quality. The explanations were generally acceptable. Involving 9 residents with a median experience of 1 year, the average score was 57.4 (71.8%). 
Pairwise comparisons revealed a significant difference in performance between each year group (P < .01). The test analysis showed moderate difficulty, effective item discrimination (index = 0.15), and good internal consistency (Cronbach's α = 0.74).</p><p><strong>Conclusions: </strong>ChatGPT 4 demonstrated significant potential as a supplementary educational tool in medical education, especially in generating MCQs with explanations similar to those seen in board examinations. While artificial intelligence-generated content was of high quality, it necessitated refinement and expert review.</p>\",\"PeriodicalId\":7506,\"journal\":{\"name\":\"American journal of clinical pathology\",\"volume\":\" \",\"pages\":\"471-479\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2024-11-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"American journal of clinical pathology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1093/ajcp/aqae062\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"PATHOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"American journal of clinical pathology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1093/ajcp/aqae062","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PATHOLOGY","Score":null,"Total":0}
ChatGPT as a teaching tool: Preparing pathology residents for board examination with AI-generated digestive system pathology tests.
Objectives: To evaluate the effectiveness of ChatGPT 4 in generating multiple-choice questions (MCQs) with explanations for pathology board examinations, specifically for digestive system pathology.
Methods: A customized ChatGPT 4 model was developed to generate MCQs and accompanying explanations. Expert pathologists evaluated the content for accuracy and relevance. The MCQs were then administered to pathology residents, and the results were analyzed for question difficulty, accuracy, item discrimination, and internal consistency.
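As an illustration of how such question generation might be scripted, the sketch below calls the OpenAI chat completions API with a board-style prompt. This is an assumption-laden stand-in, not the authors' implementation: the study used a customized ChatGPT 4 model built in the ChatGPT interface, and the prompt wording, model name, and output format here are hypothetical.

```python
# Hypothetical sketch of MCQ-with-explanation generation via the OpenAI API.
# NOT the authors' implementation: the study used a customized ChatGPT 4 model;
# the prompt, model name, and output format below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Write one board-style multiple-choice question on {topic} "
    "(digestive system pathology) with five options (A-E). Indicate the "
    "single best answer and explain why each distractor is incorrect."
)

def generate_mcq(topic: str) -> str:
    """Return one generated MCQ with its explanation as plain text."""
    response = client.chat.completions.create(
        model="gpt-4",  # stand-in for the customized ChatGPT 4 model
        messages=[
            {"role": "system",
             "content": "You are a pathology educator writing board examination questions."},
            {"role": "user", "content": PROMPT.format(topic=topic)},
        ],
    )
    return response.choices[0].message.content

print(generate_mcq("Barrett esophagus"))
```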
Results: The customized ChatGPT 4 model generated 80 MCQs covering various gastrointestinal and hepatobiliary topics. While the MCQs showed moderate to high agreement among evaluators on parameters such as content accuracy, clinical relevance, and overall quality, there were issues with cognitive level and distractor quality. The explanations were generally acceptable. Nine residents with a median of 1 year of experience took the test; the average score was 57.4 of 80 (71.8%). Pairwise comparisons revealed significant differences in performance between year groups (P < .01). Test analysis showed moderate difficulty, effective item discrimination (discrimination index = 0.15), and good internal consistency (Cronbach's α = 0.74).
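The statistics reported above (item difficulty, discrimination index, Cronbach's α) are standard classical test theory measures. The study does not publish its analysis code, so the following Python sketch is only a minimal illustration of how these measures are computed from a binary response matrix; the upper/lower 27% split for the discrimination index and the simulated data are assumptions.

```python
# Minimal classical-test-theory item analysis (not the authors' code).
# Rows of `responses` are examinees, columns are MCQs (1 = correct, 0 = incorrect).
import numpy as np

def item_analysis(responses: np.ndarray):
    """Return per-item difficulty, per-item discrimination, and Cronbach's alpha."""
    n_examinees, n_items = responses.shape
    totals = responses.sum(axis=1)

    # Difficulty: proportion of examinees answering each item correctly.
    difficulty = responses.mean(axis=0)

    # Discrimination index: proportion correct in the top 27% of examinees
    # (ranked by total score) minus the proportion correct in the bottom 27%.
    k = max(1, round(0.27 * n_examinees))
    order = np.argsort(totals)
    low, high = responses[order[:k]], responses[order[-k:]]
    discrimination = high.mean(axis=0) - low.mean(axis=0)

    # Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / total-score variance).
    alpha = (n_items / (n_items - 1)) * (
        1 - responses.var(axis=0, ddof=1).sum() / totals.var(ddof=1)
    )
    return difficulty, discrimination, alpha

# Usage with a simulated 9-examinee x 80-item matrix matching the study's cohort
# size and question count. The responses here are random, so alpha will sit near
# zero; the study's real response data yielded the reported 0.74.
rng = np.random.default_rng(0)
sim = (rng.random((9, 80)) < 0.72).astype(int)
difficulty, discrimination, alpha = item_analysis(sim)
print(f"mean difficulty={difficulty.mean():.2f}, "
      f"mean discrimination={discrimination.mean():.2f}, alpha={alpha:.2f}")
```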
Conclusions: ChatGPT 4 demonstrated significant potential as a supplementary educational tool in medical education, especially for generating MCQs with explanations similar to those seen in board examinations. Although the artificial intelligence-generated content was generally of high quality, it required refinement and expert review.
Journal description:
The American Journal of Clinical Pathology (AJCP) is the official journal of the American Society for Clinical Pathology and the Academy of Clinical Laboratory Physicians and Scientists. It is a leading international journal for the publication of articles on novel anatomic pathology and laboratory medicine observations in human disease. AJCP emphasizes articles that focus on the application of evolving technologies for the diagnosis and characterization of diseases and conditions, as well as those with a direct link to improving patient care.