Comparing the Performance of ChatGPT-4 and Medical Students on MCQs at Varied Levels of Bloom's Taxonomy

Ambadasu Bharatha, Nkemcho Ojeh, Ahbab Mohammad Fazle Rabbi, Michael H Campbell, Kandamaran Krishnamurthy, Rhaheem NA Layne-Yarde, Alok Kumar, Dale CR Springer, Kenneth L Connell, Md Anwarul Azim Majumder

Advances in Medical Education and Practice, published 2024-05-10. DOI: 10.2147/amep.s457408
Abstract
Introduction: This research investigated the capabilities of ChatGPT-4 compared with medical students in answering MCQs, using the revised Bloom's Taxonomy as a benchmark.

Methods: A cross-sectional study was conducted at The University of the West Indies, Barbados. ChatGPT-4 and medical students were assessed on MCQs from various medical courses using computer-based testing.

Results: The study included 304 MCQs. Students demonstrated good knowledge, with 78% correctly answering at least 90% of the questions. However, ChatGPT-4 achieved a higher overall score (73.7%) than the students (66.7%). Course type significantly affected ChatGPT-4's performance, but revised Bloom's Taxonomy level did not. A detailed association check between program level and Bloom's Taxonomy level for questions ChatGPT-4 answered correctly showed a highly significant correlation (p < 0.001), reflecting a concentration of "remember"-level questions in preclinical courses and "evaluate"-level questions in clinical courses.

Discussion: The study highlights ChatGPT-4's proficiency on standardized tests but indicates limitations in clinical reasoning and practical skills. This performance discrepancy suggests that the effectiveness of artificial intelligence (AI) varies with course content.

Conclusion: While ChatGPT-4 shows promise as an educational tool, its role should be supplementary, with strategic integration into medical education to leverage its strengths and address its limitations. Further research is needed to explore AI's impact on medical education and student performance across educational levels and courses.

Keywords: artificial intelligence, ChatGPT-4, medical students, knowledge, interpretation abilities, multiple choice questions
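Note on the reported association: the abstract describes a significant association (p < 0.001) between program level and Bloom's Taxonomy level for the questions ChatGPT-4 answered correctly, which is the kind of result typically obtained from a contingency-table test such as chi-square. The abstract does not name the exact test or give the underlying counts, so the Python sketch below is purely illustrative: the observed table, the three Bloom's categories shown, and the choice of scipy.stats.chi2_contingency are assumptions, not the authors' actual analysis.

# Minimal sketch of an association check between program level and Bloom's
# Taxonomy level for correctly answered questions. All counts are invented
# for demonstration; the paper's real contingency table is not reproduced here.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: program level (preclinical, clinical)
# Columns: Bloom's level (remember, understand/apply, evaluate) -- hypothetical grouping
observed = np.array([
    [60, 45, 10],   # preclinical: concentration of "remember"-level questions
    [15, 40, 55],   # clinical: concentration of "evaluate"-level questions
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")

With counts distributed like the hypothetical table above, the test returns a very small p-value, consistent with the kind of "highly significant" association the abstract reports between course stage and the cognitive level of the questions ChatGPT-4 answered correctly.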