Chat GPT, Gemini or Meta AI: A comparison of AI platforms as a tool for answering higher-order questions in microbiology.

Journal of Postgraduate Medicine | Pub Date: 2025-01-01 | Epub Date: 2025-03-19 | DOI: 10.4103/jpgm.jpgm_775_24
R D Roy, S D Gupta, D Das, P D Chowdhury
{"title":"Chat GPT, Gemini or Meta AI: A comparison of AI platforms as a tool for answering higher-order questions in microbiology.","authors":"R D Roy, S D Gupta, D Das, P D Chowdhury","doi":"10.4103/jpgm.jpgm_775_24","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>Artificial intelligence (AI) platforms have achieved a noteworthy role in various fields of medical sciences, ranging from medical education to clinical diagnostics and treatment. ChatGPT, Gemini, and Meta AI are some large language models (LLMs) that have gained immense popularity among students for solving questions from different branches of education.</p><p><strong>Materials and methods: </strong>A cross-sectional study was conducted in the Department of Microbiology to assess the performance of ChatGPT, Gemini, and Meta AI in answering higher-order questions from various competencies of the microbiology curriculum (MI 1 to 8), according to CBME guidelines. Sixty higher-order questions were compiled from university question papers of two universities. Their responses were assessed by three faculty members from the department.</p><p><strong>Results: </strong>The mean rank scores of ChatGPT, Gemini, and Meta AI were found to be 102.76, 108.5, and 60.23 by Evaluator 1; 106.03, 88.5, and 76.95 by Evaluator 2; and 104.85, 85.6, and 81.04, respectively, indicating lowest overall mean rank score for Meta AI. ChatGPT had the highest mean score in MI 2,3,5,6,7, and 8 competencies, while Gemini had a higher score for MI 1 and 4 competencies. A qualitative assessment of the three platforms was also performed. ChatGPT provided elaborative responses, some responses from Gemini lacked certain significant points, and Meta AI gave answers in bullet points.</p><p><strong>Conclusions: </strong>Both ChatGPT and Gemini have created vast databases to correctly respond to higher-order queries in medical microbiology in comparison to Meta AI. Our study is the first of its kind to compare these three popular LLM platforms for microbiology.</p>","PeriodicalId":94105,"journal":{"name":"Journal of postgraduate medicine","volume":" ","pages":"28-32"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of postgraduate medicine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4103/jpgm.jpgm_775_24","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/3/19 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Introduction: Artificial intelligence (AI) platforms have come to play a noteworthy role in many fields of medical science, from medical education to clinical diagnostics and treatment. ChatGPT, Gemini, and Meta AI are large language models (LLMs) that have gained immense popularity among students for answering questions across different branches of education.

Materials and methods: A cross-sectional study was conducted in the Department of Microbiology to assess the performance of ChatGPT, Gemini, and Meta AI in answering higher-order questions drawn from the competencies of the microbiology curriculum (MI 1 to 8), as defined by CBME guidelines. Sixty higher-order questions were compiled from the question papers of two universities. The platforms' responses were assessed by three faculty members from the department.
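
To make the design concrete, the evaluation reduces to a 60-question by 3-platform grid of responses, each scored independently by the three evaluators. A minimal sketch of that layout follows; the ask_platform helper and all names are hypothetical, since the abstract does not describe how responses were collected.

```python
# A hypothetical sketch of the study's data layout, not the authors' code:
# 60 questions are put to each of 3 platforms, and each response receives
# one score per faculty evaluator. ask_platform() is a stand-in for however
# the responses were actually collected (not detailed in the abstract).
from dataclasses import dataclass, field

PLATFORMS = ("ChatGPT", "Gemini", "Meta AI")

@dataclass
class ScoredResponse:
    question_id: int
    platform: str
    answer: str
    scores: list = field(default_factory=list)  # one score per evaluator

def ask_platform(platform: str, question: str) -> str:
    # Stand-in: return the platform's answer to the question.
    return f"[{platform} answer to: {question!r}]"

def collect_responses(questions: list) -> list:
    """Build the 60-question x 3-platform response set for later scoring."""
    return [
        ScoredResponse(qid, p, ask_platform(p, q))
        for qid, q in enumerate(questions)
        for p in PLATFORMS
    ]
```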

Results: The mean rank scores of ChatGPT, Gemini, and Meta AI were 102.76, 108.5, and 60.23 for Evaluator 1; 106.03, 88.5, and 76.95 for Evaluator 2; and 104.85, 85.6, and 81.04 for Evaluator 3, respectively, with Meta AI having the lowest overall mean rank score. ChatGPT had the highest mean score in competencies MI 2, 3, 5, 6, 7, and 8, while Gemini scored higher in MI 1 and 4. A qualitative assessment of the three platforms was also performed: ChatGPT provided elaborate responses, some responses from Gemini lacked significant points, and Meta AI gave its answers as bullet points.
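
The reported figures are consistent with a pooled nonparametric ranking (as in a Kruskal-Wallis-style analysis, which the abstract does not name explicitly): each evaluator's scores for all 180 responses (60 questions x 3 platforms) are ranked together and then averaged per platform, which is why each evaluator's three mean ranks average to (180 + 1)/2 = 90.5. A minimal sketch with illustrative placeholder scores:

```python
# A minimal sketch (placeholder scores, not study data) of pooled mean-rank
# scoring: all 180 responses from one evaluator are ranked together, then
# the ranks are averaged per platform, so the three mean ranks always
# average to (180 + 1) / 2 = 90.5.
import numpy as np
from scipy.stats import rankdata

PLATFORMS = ("ChatGPT", "Gemini", "Meta AI")
N_QUESTIONS = 60

rng = np.random.default_rng(0)
# Hypothetical per-response scores from one evaluator, one row per platform.
scores = np.stack([rng.normal(mu, 1.0, N_QUESTIONS) for mu in (7.5, 7.0, 6.0)])

ranks = rankdata(scores.ravel()).reshape(scores.shape)  # ties get averaged ranks
for platform, platform_ranks in zip(PLATFORMS, ranks):
    print(f"{platform}: mean rank = {platform_ranks.mean():.2f}")
```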

Conclusions: Compared with Meta AI, both ChatGPT and Gemini draw on vast databases that let them respond correctly to higher-order queries in medical microbiology. Our study is the first of its kind to compare these three popular LLM platforms in microbiology.
