Comparison of ChatGPT and Gemini as sources of references in otorhinolaryngology

W. Wiktor Jedrzejczak, Malgorzata Pastucha, Henryk Skarzynski, Krzysztof Kochanek

medRxiv - Otolaryngology, published 2024-08-13. DOI: 10.1101/2024.08.12.24311896 (https://doi.org/10.1101/2024.08.12.24311896)
Abstract
Introduction: An effective way of testing chatbots is to ask them for references, since such items can be easily verified. The purpose of this study was to compare the ability of ChatGPT-4 and Gemini Advanced to provide accurate references on common topics in otorhinolaryngology.
Methods: ChatGPT-4 and Gemini Advanced were each asked to provide references on 25 topics within the otorhinolaryngology category of Web of Science. Within each topic, the targets were the most-cited papers with "guidelines" in the title. Chatbot responses were collected on three consecutive days to account for possible day-to-day variability, and the accuracy and reliability of the returned references were evaluated.
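Because every genuine reference resolves in public bibliographic databases, checks of this kind lend themselves to automation. Below is a minimal sketch assuming the Crossref REST API is used for lookup; the function name and the title-matching heuristic are illustrative assumptions, not the authors' actual evaluation protocol.

```python
# Hedged sketch: verify a chatbot-supplied reference against Crossref.
import requests

def verify_reference(doi: str, claimed_title: str) -> bool:
    """Check that a DOI resolves in Crossref and that the registered title
    roughly matches the title the chatbot supplied."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return False  # unregistered DOI: typical of a fabricated reference
    titles = resp.json()["message"].get("title", [])
    registered = titles[0].lower() if titles else ""
    claimed = claimed_title.lower()
    # Crude containment check; a fuller pipeline might use fuzzy matching
    return bool(registered) and (claimed in registered or registered in claimed)

# Example: the preprint's own DOI should verify against its title
print(verify_reference("10.1101/2024.08.12.24311896",
                       "Comparison of ChatGPT and Gemini as sources of references"))
```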
Results: Across the three days, the accuracy of ChatGPT-4 was 29–45%, while that of Gemini Advanced was 10–17%. Common errors included incorrect author names, incorrect DOIs, and incomplete bibliographic information. Error rates were lower for papers with higher citation counts.
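The reported association between citation count and error rate could be quantified with a rank correlation. The sketch below uses placeholder numbers purely for illustration (not the study's data), and a Spearman test is one reasonable choice rather than the authors' stated method.

```python
# Illustrative only: placeholder values, not the study's data.
from scipy.stats import spearmanr

citations = [1200, 850, 400, 150, 60]   # citation counts of target papers (placeholder)
error_pct = [10, 15, 30, 55, 70]        # % of reference fields in error (placeholder)

rho, p = spearmanr(citations, error_pct)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")  # negative rho: more citations, fewer errors
```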
Conclusions: Both chatbots performed poorly at retrieving references, although ChatGPT-4 achieved higher accuracy than Gemini Advanced.