ChatGPT as a teaching tool: Preparing pathology residents for board examination with AI-generated digestive system pathology tests.

Thiyaphat Laohawetwanit, Sompon Apornvirat, Charinee Kantasiripitak
{"title":"将 ChatGPT 作为教学工具:用人工智能生成的消化系统病理学测试为病理学住院医师考试做准备。","authors":"Thiyaphat Laohawetwanit, Sompon Apornvirat, Charinee Kantasiripitak","doi":"10.1093/ajcp/aqae062","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>To evaluate the effectiveness of ChatGPT 4 in generating multiple-choice questions (MCQs) with explanations for pathology board examinations, specifically for digestive system pathology.</p><p><strong>Methods: </strong>The customized ChatGPT 4 model was developed for MCQ and explanation generation. Expert pathologists evaluated content accuracy and relevance. These MCQs were then administered to pathology residents, followed by an analysis focusing on question difficulty, accuracy, item discrimination, and internal consistency.</p><p><strong>Results: </strong>The customized ChatGPT 4 generated 80 MCQs covering various gastrointestinal and hepatobiliary topics. While the MCQs demonstrated moderate to high agreement in evaluation parameters such as content accuracy, clinical relevance, and overall quality, there were issues in cognitive level and distractor quality. The explanations were generally acceptable. Involving 9 residents with a median experience of 1 year, the average score was 57.4 (71.8%). Pairwise comparisons revealed a significant difference in performance between each year group (P < .01). The test analysis showed moderate difficulty, effective item discrimination (index = 0.15), and good internal consistency (Cronbach's α = 0.74).</p><p><strong>Conclusions: </strong>ChatGPT 4 demonstrated significant potential as a supplementary educational tool in medical education, especially in generating MCQs with explanations similar to those seen in board examinations. While artificial intelligence-generated content was of high quality, it necessitated refinement and expert review.</p>","PeriodicalId":2,"journal":{"name":"ACS Applied Bio Materials","volume":null,"pages":null},"PeriodicalIF":4.6000,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ChatGPT as a teaching tool: Preparing pathology residents for board examination with AI-generated digestive system pathology tests.\",\"authors\":\"Thiyaphat Laohawetwanit, Sompon Apornvirat, Charinee Kantasiripitak\",\"doi\":\"10.1093/ajcp/aqae062\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objectives: </strong>To evaluate the effectiveness of ChatGPT 4 in generating multiple-choice questions (MCQs) with explanations for pathology board examinations, specifically for digestive system pathology.</p><p><strong>Methods: </strong>The customized ChatGPT 4 model was developed for MCQ and explanation generation. Expert pathologists evaluated content accuracy and relevance. These MCQs were then administered to pathology residents, followed by an analysis focusing on question difficulty, accuracy, item discrimination, and internal consistency.</p><p><strong>Results: </strong>The customized ChatGPT 4 generated 80 MCQs covering various gastrointestinal and hepatobiliary topics. While the MCQs demonstrated moderate to high agreement in evaluation parameters such as content accuracy, clinical relevance, and overall quality, there were issues in cognitive level and distractor quality. The explanations were generally acceptable. Involving 9 residents with a median experience of 1 year, the average score was 57.4 (71.8%). 
Pairwise comparisons revealed a significant difference in performance between each year group (P < .01). The test analysis showed moderate difficulty, effective item discrimination (index = 0.15), and good internal consistency (Cronbach's α = 0.74).</p><p><strong>Conclusions: </strong>ChatGPT 4 demonstrated significant potential as a supplementary educational tool in medical education, especially in generating MCQs with explanations similar to those seen in board examinations. While artificial intelligence-generated content was of high quality, it necessitated refinement and expert review.</p>\",\"PeriodicalId\":2,\"journal\":{\"name\":\"ACS Applied Bio Materials\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2024-11-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACS Applied Bio Materials\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1093/ajcp/aqae062\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATERIALS SCIENCE, BIOMATERIALS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Bio Materials","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1093/ajcp/aqae062","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATERIALS SCIENCE, BIOMATERIALS","Score":null,"Total":0}
Citations: 0

Abstract

Objectives: To evaluate the effectiveness of ChatGPT 4 in generating multiple-choice questions (MCQs) with explanations for pathology board examinations, specifically for digestive system pathology.

Methods: The customized ChatGPT 4 model was developed for MCQ and explanation generation. Expert pathologists evaluated content accuracy and relevance. These MCQs were then administered to pathology residents, followed by an analysis focusing on question difficulty, accuracy, item discrimination, and internal consistency.
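The abstract does not describe the generation pipeline beyond noting a customized ChatGPT 4 model. Purely as an illustrative sketch, and not the authors' method, an API-based equivalent could look like the following; the model identifier, prompt wording, and JSON output contract are all assumptions introduced here.

```python
# Hypothetical sketch of prompting GPT-4 for a board-style MCQ with explanation.
# The authors used a customized ChatGPT 4 model; the model name, prompt, and
# JSON output format below are illustrative assumptions, not their pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a pathology educator. Write one board-examination-style "
    "multiple-choice question on the given digestive system pathology topic. "
    "Return JSON with keys: stem, options (A-E), answer, explanation."
)

def generate_mcq(topic: str) -> str:
    # One request per topic; the study covered gastrointestinal and
    # hepatobiliary topics across 80 questions.
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model identifier
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Topic: {topic}"},
        ],
    )
    return response.choices[0].message.content

print(generate_mcq("chronic hepatitis B histopathology"))
```

As the study itself emphasizes, any output from such a pipeline would still need review by expert pathologists for content accuracy and relevance before being administered to residents.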

Results: The customized ChatGPT 4 generated 80 MCQs covering various gastrointestinal and hepatobiliary topics. While the MCQs demonstrated moderate to high agreement in evaluation parameters such as content accuracy, clinical relevance, and overall quality, there were issues with cognitive level and distractor quality. The explanations were generally acceptable. Nine residents with a median of 1 year of experience took the test; the average score was 57.4 of 80 (71.8%). Pairwise comparisons revealed a significant difference in performance between year groups (P < .01). The test analysis showed moderate difficulty, effective item discrimination (index = 0.15), and good internal consistency (Cronbach's α = 0.74).
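The reported test statistics are standard classical test theory measures. A minimal sketch of how item difficulty, the discrimination index, and Cronbach's α are computed from a binary response matrix follows, using randomly generated placeholder data; the upper/lower 27% grouping for the discrimination index is an assumption, since the abstract does not state which variant the authors used.

```python
# Classical test theory statistics for a scored MCQ test.
# rows = examinees, columns = items; 1 = correct, 0 = incorrect.
import numpy as np

def item_difficulty(responses: np.ndarray) -> np.ndarray:
    # Proportion of examinees answering each item correctly.
    return responses.mean(axis=0)

def discrimination_index(responses: np.ndarray, frac: float = 0.27) -> np.ndarray:
    # D = p(correct | top group) - p(correct | bottom group), with groups
    # formed from total scores. The 27% split is an assumed convention.
    totals = responses.sum(axis=1)
    order = np.argsort(totals)
    n = max(1, int(round(frac * responses.shape[0])))
    low, high = responses[order[:n]], responses[order[-n:]]
    return high.mean(axis=0) - low.mean(axis=0)

def cronbach_alpha(responses: np.ndarray) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = responses.shape[1]
    item_var = responses.var(axis=0, ddof=1).sum()
    total_var = responses.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Placeholder data mirroring only the study's dimensions (9 residents, 80 MCQs).
rng = np.random.default_rng(0)
data = (rng.random((9, 80)) < 0.72).astype(int)
print(item_difficulty(data).mean(),
      discrimination_index(data).mean(),
      cronbach_alpha(data))
```

Applied to the study's actual response matrix, these functions would yield the reported values (mean discrimination index = 0.15, α = 0.74); the random data here is only a stand-in to make the sketch runnable.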

Conclusions: ChatGPT 4 demonstrated significant potential as a supplementary educational tool in medical education, especially for generating MCQs with explanations similar to those seen in board examinations. Although the artificial intelligence-generated content was of high quality, it still required refinement and expert review.
