The Scientific Knowledge of Bard and ChatGPT in Endocrinology, Diabetes, and Diabetes Technology: Multiple-Choice Questions Examination-Based Performance.

Impact Factor 4.1, Q2 (Endocrinology & Metabolism)
Sultan Ayoub Meo, Thamir Al-Khlaiwi, Abdulelah Adnan AbuKhalaf, Anusha Sultan Meo, David C Klonoff
DOI: 10.1177/19322968231203987
Journal of Diabetes Science and Technology, pp. 705-710. Published 2025-05-01 (Epub 2023-10-05).
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12035228/pdf/
Cited by: 0

Abstract

Background: The present study aimed to investigate the knowledge level of Bard and ChatGPT in the areas of endocrinology, diabetes, and diabetes technology through a multiple-choice question (MCQ) examination format.

Methods: Initially, a bank of 100 MCQs in endocrinology, diabetes, and diabetes technology was established. The MCQs were drawn from physiology and medical textbooks and from academic examination pools in these areas. The study team reviewed the MCQ content to ensure its relevance to endocrinology, diabetes, and diabetes technology. Fifty MCQs covered endocrinology, and fifty covered diabetes and diabetes technology. The knowledge level of Google's Bard and ChatGPT was then assessed with an MCQ-based examination.

Results: In the endocrinology examination section, ChatGPT answered 29 of 50 questions correctly (58%), and Bard likewise scored 29 of 50 (58%). In the diabetes technology examination section, ChatGPT scored 23 of 50 (46%) and Bard 20 of 50 (40%). Overall, across the entire examination, ChatGPT scored 52 of 100 (52%) and Bard 49 of 100 (49%), so ChatGPT scored slightly higher than Bard. However, neither ChatGPT nor Bard reached a satisfactory score of at least 60% in endocrinology or diabetes technology.
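The section and overall scores reported above follow from simple arithmetic; a minimal sketch in Python, using the mark counts and the 60% satisfactory threshold stated in the results (the dictionary layout and function name are illustrative, not from the study):

```python
# Marks reported in the Results section: (correct, total) per model and section.
results = {
    "ChatGPT": {"endocrinology": (29, 50), "diabetes technology": (23, 50)},
    "Bard":    {"endocrinology": (29, 50), "diabetes technology": (20, 50)},
}

PASS_MARK = 60.0  # satisfactory threshold used in the study, in percent


def overall(sections):
    """Sum correct answers and totals across sections; return (correct, total, percent)."""
    correct = sum(c for c, _ in sections.values())
    total = sum(t for _, t in sections.values())
    return correct, total, 100.0 * correct / total


for model, sections in results.items():
    c, t, pct = overall(sections)
    verdict = "satisfactory" if pct >= PASS_MARK else "below the 60% threshold"
    print(f"{model}: {c}/{t} ({pct:.0f}%), {verdict}")
```

Running this reproduces the totals in the abstract: 52/100 (52%) for ChatGPT and 49/100 (49%) for Bard, both below the 60% threshold.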

Conclusions: The overall MCQ-based performance of ChatGPT was slightly better than that of Google's Bard; however, neither achieved an adequate score in endocrinology or diabetes/diabetes technology. The study indicates that Bard and ChatGPT have the potential to support medical students and faculty in academic medical education settings, but both artificial intelligence tools need more up-to-date information in the fields of endocrinology, diabetes, and diabetes technology.

Source journal: Journal of Diabetes Science and Technology (Medicine: Internal Medicine)
CiteScore: 7.50
Self-citation rate: 12.00%
Articles per year: 148
Journal description: The Journal of Diabetes Science and Technology (JDST) is a bi-monthly, peer-reviewed scientific journal published by the Diabetes Technology Society. JDST covers scientific and clinical aspects of diabetes technology, including glucose monitoring, insulin and metabolic peptide delivery, the artificial pancreas, digital health, precision medicine, social media, cybersecurity, software for modeling, physiologic monitoring, technology for managing obesity, and diagnostic tests of glycation. The journal also covers the development and use of mobile applications and wireless communication, as well as bioengineered tools such as MEMS, new biomaterials, and nanotechnology for developing new sensors. Articles in JDST cover both basic research and clinical applications of technologies being developed to help people with diabetes.