Comparison of the Accuracy, Comprehensiveness, and Readability of ChatGPT, Google Gemini, and Microsoft Copilot on Dry Eye Disease

Beyoglu Eye Journal · Pub Date: 2025-09-25 · eCollection Date: 2025-01-01 · DOI: 10.14744/bej.2025.76743
Dilan Colak, Burcu Yakut, Abdullah Agin
{"title":"ChatGPT、谷歌Gemini和Microsoft Copilot对干眼病诊断的准确性、全面性和可读性比较","authors":"Dilan Colak, Burcu Yakut, Abdullah Agin","doi":"10.14744/bej.2025.76743","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>This study compared the performance of ChatGPT, Google Gemini, and Microsoft Copilot in answering 25 questions about dry eye disease and evaluated comprehensiveness, accuracy, and readability metrics.</p><p><strong>Methods: </strong>The artificial intelligence (AI) platforms answered 25 questions derived from the American Academy of Ophthalmology's Eye Health webpage. Three reviewers assigned comprehensiveness (0-5) and accuracy (-2 to 2) scores. Readability metrics included Flesch-Kincaid Grade Level, Flesch Reading Ease Score, sentence/word statistics, and total content measures. Responses were rated by three independent reviewers. Readability metrics were also calculated, and platforms were compared using Kruskal-Wallis and Friedman tests with <i>post hoc</i> analysis. Reviewer consistency was assessed using the intraclass correlation coefficient (ICC).</p><p><strong>Results: </strong>Google Gemini demonstrated the highest comprehensiveness and accuracy scores, significantly outperforming Microsoft Copilot (p<0.001). ChatGPT produced the most sentences and words (p<0.001), while readability metrics showed no significant differences among models (p>0.05). Inter-observer reliability was highest for Google Gemini (ICC=0.701), followed by ChatGPT (ICC=0.578), with Microsoft Copilot showing the lowest agreement (ICC=0.495). These results indicate Google Gemini's superior performance and consistency, whereas Microsoft Copilot had the weakest overall performance.</p><p><strong>Conclusion: </strong>Google Gemini excelled in content volume while maintaining high comprehensiveness and accuracy, outperforming ChatGPT and Microsoft Copilot in content generation. The platforms displayed comparable readability and linguistic complexity. These findings inform AI tool selection in health-related contexts, emphasizing Google Gemini's strengths in detailed responses. Its superior performance suggests potential utility in clinical and patient-facing applications requiring accurate and comprehensive content.</p>","PeriodicalId":8740,"journal":{"name":"Beyoglu Eye Journal","volume":"10 3","pages":"168-174"},"PeriodicalIF":0.0000,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12499718/pdf/","citationCount":"0","resultStr":"{\"title\":\"Comparison of the Accuracy, Comprehensiveness, and Readability of ChatGPT, Google Gemini, and Microsoft Copilot on Dry Eye Disease.\",\"authors\":\"Dilan Colak, Burcu Yakut, Abdullah Agin\",\"doi\":\"10.14744/bej.2025.76743\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objectives: </strong>This study compared the performance of ChatGPT, Google Gemini, and Microsoft Copilot in answering 25 questions about dry eye disease and evaluated comprehensiveness, accuracy, and readability metrics.</p><p><strong>Methods: </strong>The artificial intelligence (AI) platforms answered 25 questions derived from the American Academy of Ophthalmology's Eye Health webpage. Three reviewers assigned comprehensiveness (0-5) and accuracy (-2 to 2) scores. Readability metrics included Flesch-Kincaid Grade Level, Flesch Reading Ease Score, sentence/word statistics, and total content measures. Responses were rated by three independent reviewers. 
Readability metrics were also calculated, and platforms were compared using Kruskal-Wallis and Friedman tests with <i>post hoc</i> analysis. Reviewer consistency was assessed using the intraclass correlation coefficient (ICC).</p><p><strong>Results: </strong>Google Gemini demonstrated the highest comprehensiveness and accuracy scores, significantly outperforming Microsoft Copilot (p<0.001). ChatGPT produced the most sentences and words (p<0.001), while readability metrics showed no significant differences among models (p>0.05). Inter-observer reliability was highest for Google Gemini (ICC=0.701), followed by ChatGPT (ICC=0.578), with Microsoft Copilot showing the lowest agreement (ICC=0.495). These results indicate Google Gemini's superior performance and consistency, whereas Microsoft Copilot had the weakest overall performance.</p><p><strong>Conclusion: </strong>Google Gemini excelled in content volume while maintaining high comprehensiveness and accuracy, outperforming ChatGPT and Microsoft Copilot in content generation. The platforms displayed comparable readability and linguistic complexity. These findings inform AI tool selection in health-related contexts, emphasizing Google Gemini's strengths in detailed responses. Its superior performance suggests potential utility in clinical and patient-facing applications requiring accurate and comprehensive content.</p>\",\"PeriodicalId\":8740,\"journal\":{\"name\":\"Beyoglu Eye Journal\",\"volume\":\"10 3\",\"pages\":\"168-174\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-09-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12499718/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Beyoglu Eye Journal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.14744/bej.2025.76743\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Beyoglu Eye Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14744/bej.2025.76743","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract


Objectives: This study compared the performance of ChatGPT, Google Gemini, and Microsoft Copilot in answering 25 questions about dry eye disease and evaluated comprehensiveness, accuracy, and readability metrics.

Methods: The artificial intelligence (AI) platforms answered 25 questions derived from the American Academy of Ophthalmology's Eye Health webpage. Three independent reviewers assigned comprehensiveness (0-5) and accuracy (-2 to 2) scores. Readability metrics included the Flesch-Kincaid Grade Level, Flesch Reading Ease Score, sentence/word statistics, and total content measures. Platforms were compared using Kruskal-Wallis and Friedman tests with post hoc analysis, and reviewer consistency was assessed using the intraclass correlation coefficient (ICC).
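
For reference, the two named readability metrics follow standard published formulas and can be computed directly from sentence, word, and syllable counts. Below is a minimal Python sketch, assuming a naive vowel-run syllable counter; the paper does not state which readability tool it actually used.

```python
# Minimal sketch of the two readability formulas named in the Methods.
# The syllable counter is a crude heuristic (assumption), not the exact
# tokenizer used by the study's readability software.
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels; always at least 1."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)
    asl = n_words / sentences       # average sentence length
    asw = n_syllables / n_words     # average syllables per word
    return {
        # Flesch Reading Ease Score: higher scores mean easier text.
        "FRES": 206.835 - 1.015 * asl - 84.6 * asw,
        # Flesch-Kincaid Grade Level: approximate US school grade.
        "FKGL": 0.39 * asl + 11.8 * asw - 15.59,
    }

print(readability("Dry eye disease is a common condition. "
                  "Artificial tears often relieve mild symptoms."))
```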

Results: Google Gemini demonstrated the highest comprehensiveness and accuracy scores, significantly outperforming Microsoft Copilot (p<0.001). ChatGPT produced the most sentences and words (p<0.001), while readability metrics showed no significant differences among models (p>0.05). Inter-observer reliability was highest for Google Gemini (ICC=0.701), followed by ChatGPT (ICC=0.578), with Microsoft Copilot showing the lowest agreement (ICC=0.495). These results indicate Google Gemini's superior performance and consistency, whereas Microsoft Copilot had the weakest overall performance.
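
As a rough illustration of the comparison pipeline behind these p-values and ICC figures, the sketch below uses scipy for the omnibus tests and pingouin for the ICC. The library choices and the toy scores are assumptions; the paper does not publish its analysis code or raw ratings.

```python
# Sketch of the Kruskal-Wallis, Friedman, and ICC steps named in the
# Methods, on hypothetical accuracy scores (scale -2 to 2, 25 questions).
import pandas as pd
import pingouin as pg
from scipy.stats import friedmanchisquare, kruskal

chatgpt = [2, 1, 2, 2, 1] * 5   # placeholder scores, one per question
gemini  = [2, 2, 2, 1, 2] * 5
copilot = [1, 1, 2, 0, 1] * 5

# Platforms treated as independent groups (Kruskal-Wallis) ...
print(kruskal(chatgpt, gemini, copilot))
# ... or as repeated measures over the same 25 questions (Friedman).
print(friedmanchisquare(chatgpt, gemini, copilot))

# Inter-rater agreement for one platform: three reviewers, 25 answers,
# reshaped into long format as pingouin expects.
ratings = pd.DataFrame({
    "answer": list(range(25)) * 3,
    "rater":  ["R1"] * 25 + ["R2"] * 25 + ["R3"] * 25,
    "score":  gemini + chatgpt + copilot,  # placeholder reviewer scores
})
icc = pg.intraclass_corr(data=ratings, targets="answer",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC"]])
```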

Conclusion: Google Gemini excelled in content volume while maintaining high comprehensiveness and accuracy, outperforming ChatGPT and Microsoft Copilot in content generation. The platforms displayed comparable readability and linguistic complexity. These findings inform AI tool selection in health-related contexts, emphasizing Google Gemini's strengths in detailed responses. Its superior performance suggests potential utility in clinical and patient-facing applications requiring accurate and comprehensive content.
