Comparison of ChatGPT and Gemini as sources of references in otorhinolaryngology

W. Wiktor Jedrzejczak, Malgorzata Pastucha, Henryk Skarzynski, Krzysztof Kochanek
{"title":"Comparison of ChatGPT and Gemini as sources of references in otorhinolaryngology","authors":"W. Wiktor Jedrzejczak, Malgorzata Pastucha, Henryk Skarzynski, Krzysztof Kochanek","doi":"10.1101/2024.08.12.24311896","DOIUrl":null,"url":null,"abstract":"Introduction: An effective way of testing chatbots is to ask them for references since such items can be easily verified. The purpose of this study was to compare the ability of ChatGPT–4 and Gemini Advanced to select accurate references on common topics in otorhinolaryngology.\nMethods: ChatGPT–4 and Gemini Advanced were asked to provide references on 25 topics within the otorhinolaryngology category of Web of Science. Within each topic, we set as target the most cited papers which had \"guidelines\" in the title. The chatbot responses were collected on three consecutive days to take into account possible variability. The accuracy and reliability of the provided references were evaluated.\nResults: Across the three days, the accuracy of ChatGPT–4 was 29–45% while that of Gemini Advanced was 10–17%. Common errors included false author names, false DOI numbers, and incomplete information. Lower percentage errors were associated with higher number of citations.\nConclusions: Both chatbots performed poorly in finding references, although ChatGPT–4 provided higher accuracy than Gemini Advanced.","PeriodicalId":501185,"journal":{"name":"medRxiv - Otolaryngology","volume":"14 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"medRxiv - Otolaryngology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2024.08.12.24311896","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Introduction: An effective way of testing chatbots is to ask them for references, since such items are easy to verify. The purpose of this study was to compare the ability of ChatGPT-4 and Gemini Advanced to select accurate references on common topics in otorhinolaryngology.

Methods: ChatGPT-4 and Gemini Advanced were asked to provide references on 25 topics within the otorhinolaryngology category of Web of Science. Within each topic, we set as targets the most-cited papers that had "guidelines" in the title. Chatbot responses were collected on three consecutive days to account for possible variability, and the accuracy and reliability of the provided references were evaluated.

Results: Across the three days, the accuracy of ChatGPT-4 was 29–45%, while that of Gemini Advanced was 10–17%. Common errors included false author names, false DOI numbers, and incomplete information. Lower error rates were associated with more highly cited target papers.

Conclusions: Both chatbots performed poorly at finding references, although ChatGPT-4 was more accurate than Gemini Advanced.
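The abstract does not describe the verification procedure in detail. As a minimal sketch of how such a check could be automated (an illustration only, not the authors' method; the helper name verify_reference and the title-matching heuristic are hypothetical), a chatbot-supplied DOI can be resolved against the public Crossref REST API in Python:

    import requests

    def verify_reference(doi: str, claimed_title: str) -> bool:
        # Hypothetical helper, not the study's actual procedure:
        # resolve a chatbot-supplied DOI via the Crossref REST API
        # and loosely compare the claimed title with the real record.
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        if resp.status_code != 200:
            return False  # DOI does not resolve: likely fabricated
        record = resp.json()["message"]
        real_title = (record.get("title") or [""])[0]
        return claimed_title.strip().lower() in real_title.lower()

Under a check of this kind, a DOI that fails to resolve, or that resolves to a different paper than the one cited, would fall into the error categories reported above (false DOI numbers, false author names, incomplete information).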