Evaluating the Accuracy, Reliability, Consistency, and Readability of Different Large Language Models in Restorative Dentistry

IF 4.1 | CAS Tier 3 (Medicine) | Q1 DENTISTRY, ORAL SURGERY & MEDICINE
Zeyneb Merve Ozdemir, Emre Yapici
{"title":"评估牙科修复中不同大语言模型的准确性、可靠性、一致性和可读性。","authors":"Zeyneb Merve Ozdemir,&nbsp;Emre Yapici","doi":"10.1111/jerd.13447","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Objective</h3>\n \n <p>This study aimed to evaluate the reliability, consistency, and readability of responses provided by various artificial intelligence (AI) programs to questions related to Restorative Dentistry.</p>\n </section>\n \n <section>\n \n <h3> Materials and Methods</h3>\n \n <p>Forty-five knowledge-based information and 20 questions (10 patient-related and 10 dentistry-specific) were posed to ChatGPT-3.5, ChatGPT-4, ChatGPT-4o, Chatsonic, Copilot, and Gemini Advanced chatbots. The DISCERN questionnaire was used to assess the reliability; Flesch Reading Ease and Flesch–Kincaid Grade Level scores were utilized to evaluate readability. Accuracy and consistency were determined based on the chatbots' responses to the knowledge-based questions.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>ChatGPT-4, ChatGPT-4o, Chatsonic, and Copilot demonstrated “good” reliability, while ChatGPT-3.5 and Gemini Advanced showed “fair” reliability. Chatsonic exhibited the highest “DISCERN total score” for patient-related questions, while ChatGPT-4o performed best for dentistry-specific questions. No significant differences were found in readability among the chatbots (<i>p</i> &gt; 0.05). ChatGPT-4o showed the highest accuracy (93.3%) for knowledge-based questions, while Copilot had the lowest (68.9%). ChatGPT-4 demonstrated the highest consistency between repetitions.</p>\n </section>\n \n <section>\n \n <h3> Conclusion</h3>\n \n <p>Performance of AIs varied in terms of accuracy, reliability, consistency, and readability when responding to Restorative Dentistry questions. ChatGPT-4o and Chatsonic showed promising results for academic and patient education applications. However, the readability of responses was generally above recommended levels for patient education materials.</p>\n </section>\n \n <section>\n \n <h3> Clinical Significance</h3>\n \n <p>The utilization of AI has an increasing impact on various aspects of dentistry. 
Moreover, if the responses to patient-related and dentistry-specific questions in restorative dentistry prove to be reliable and comprehensible, this may yield promising outcomes for the future.</p>\n </section>\n </div>","PeriodicalId":15988,"journal":{"name":"Journal of Esthetic and Restorative Dentistry","volume":"37 7","pages":"1740-1752"},"PeriodicalIF":4.1000,"publicationDate":"2025-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/jerd.13447","citationCount":"0","resultStr":"{\"title\":\"Evaluating the Accuracy, Reliability, Consistency, and Readability of Different Large Language Models in Restorative Dentistry\",\"authors\":\"Zeyneb Merve Ozdemir,&nbsp;Emre Yapici\",\"doi\":\"10.1111/jerd.13447\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <h3> Objective</h3>\\n \\n <p>This study aimed to evaluate the reliability, consistency, and readability of responses provided by various artificial intelligence (AI) programs to questions related to Restorative Dentistry.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Materials and Methods</h3>\\n \\n <p>Forty-five knowledge-based information and 20 questions (10 patient-related and 10 dentistry-specific) were posed to ChatGPT-3.5, ChatGPT-4, ChatGPT-4o, Chatsonic, Copilot, and Gemini Advanced chatbots. The DISCERN questionnaire was used to assess the reliability; Flesch Reading Ease and Flesch–Kincaid Grade Level scores were utilized to evaluate readability. Accuracy and consistency were determined based on the chatbots' responses to the knowledge-based questions.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results</h3>\\n \\n <p>ChatGPT-4, ChatGPT-4o, Chatsonic, and Copilot demonstrated “good” reliability, while ChatGPT-3.5 and Gemini Advanced showed “fair” reliability. Chatsonic exhibited the highest “DISCERN total score” for patient-related questions, while ChatGPT-4o performed best for dentistry-specific questions. No significant differences were found in readability among the chatbots (<i>p</i> &gt; 0.05). ChatGPT-4o showed the highest accuracy (93.3%) for knowledge-based questions, while Copilot had the lowest (68.9%). ChatGPT-4 demonstrated the highest consistency between repetitions.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Conclusion</h3>\\n \\n <p>Performance of AIs varied in terms of accuracy, reliability, consistency, and readability when responding to Restorative Dentistry questions. ChatGPT-4o and Chatsonic showed promising results for academic and patient education applications. However, the readability of responses was generally above recommended levels for patient education materials.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Clinical Significance</h3>\\n \\n <p>The utilization of AI has an increasing impact on various aspects of dentistry. 
Moreover, if the responses to patient-related and dentistry-specific questions in restorative dentistry prove to be reliable and comprehensible, this may yield promising outcomes for the future.</p>\\n </section>\\n </div>\",\"PeriodicalId\":15988,\"journal\":{\"name\":\"Journal of Esthetic and Restorative Dentistry\",\"volume\":\"37 7\",\"pages\":\"1740-1752\"},\"PeriodicalIF\":4.1000,\"publicationDate\":\"2025-03-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/jerd.13447\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Esthetic and Restorative Dentistry\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/jerd.13447\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"DENTISTRY, ORAL SURGERY & MEDICINE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Esthetic and Restorative Dentistry","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/jerd.13447","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"DENTISTRY, ORAL SURGERY & MEDICINE","Score":null,"Total":0}
Citations: 0

Abstract



Objective

This study aimed to evaluate the reliability, consistency, and readability of responses provided by various artificial intelligence (AI) programs to questions related to Restorative Dentistry.

Materials and Methods

Forty-five knowledge-based questions and 20 additional questions (10 patient-related and 10 dentistry-specific) were posed to the ChatGPT-3.5, ChatGPT-4, ChatGPT-4o, Chatsonic, Copilot, and Gemini Advanced chatbots. The DISCERN questionnaire was used to assess reliability, and Flesch Reading Ease and Flesch–Kincaid Grade Level scores were used to evaluate readability. Accuracy and consistency were determined from the chatbots' responses to the knowledge-based questions.
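For reference, both readability indices are closed-form functions of word, sentence, and syllable counts. The sketch below illustrates the standard Flesch formulas; it assumes a naive vowel-group syllable counter and is not the scoring tool used in the study — dedicated readability software may count syllables more accurately.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic (assumption): one syllable per contiguous vowel group.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    n_sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)

    wps = n_words / n_sentences   # average words per sentence
    spw = n_syllables / n_words   # average syllables per word

    # Standard Flesch formulas: higher FRE = easier text;
    # FKGL approximates the US school grade needed to read it.
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkgl

if __name__ == "__main__":
    sample = ("Composite resin restorations bond to enamel and dentin. "
              "They are placed in layers and hardened with a curing light.")
    fre, fkgl = readability(sample)
    print(f"FRE = {fre:.1f}, FKGL = {fkgl:.1f}")
```

Patient education materials are commonly recommended to sit at roughly a sixth-grade reading level, which is the benchmark against which the readability finding in the Conclusion should be read.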

Results

ChatGPT-4, ChatGPT-4o, Chatsonic, and Copilot demonstrated “good” reliability, while ChatGPT-3.5 and Gemini Advanced showed “fair” reliability. Chatsonic exhibited the highest “DISCERN total score” for patient-related questions, while ChatGPT-4o performed best for dentistry-specific questions. No significant differences were found in readability among the chatbots (p > 0.05). ChatGPT-4o showed the highest accuracy (93.3%) for knowledge-based questions, while Copilot had the lowest (68.9%). ChatGPT-4 demonstrated the highest consistency between repetitions.
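With 45 knowledge-based questions, the reported accuracies correspond to whole-number counts of correct answers, which serves as a useful sanity check. The abstract does not state the exact consistency metric, so the agreement rate sketched below is an illustrative assumption rather than the study's definition.

```python
# 45 knowledge-based questions: reported percentages map to integer counts.
N = 45
print(f"{42 / N:.1%}")  # 93.3% -> 42 correct (ChatGPT-4o)
print(f"{31 / N:.1%}")  # 68.9% -> 31 correct (Copilot)

# Illustrative consistency measure (assumption, not the study's metric):
# the share of questions on which two repeated runs give the same verdict.
def agreement_rate(run1: list[bool], run2: list[bool]) -> float:
    assert len(run1) == len(run2), "runs must cover the same questions"
    return sum(a == b for a, b in zip(run1, run2)) / len(run1)

# Hypothetical five-question example:
print(agreement_rate([True, True, False, True, False],
                     [True, True, True, True, False]))  # 0.8
```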

Conclusion

The performance of the AI programs varied in terms of accuracy, reliability, consistency, and readability when responding to Restorative Dentistry questions. ChatGPT-4o and Chatsonic showed promising results for academic and patient education applications. However, the readability of the responses was generally above the levels recommended for patient education materials.

Clinical Significance

AI is having a growing impact on many aspects of dentistry. If its responses to patient-related and dentistry-specific questions in restorative dentistry prove reliable and comprehensible, this could yield promising outcomes for the future.

Source Journal

Journal of Esthetic and Restorative Dentistry (Medicine: Dentistry & Oral Surgery)

CiteScore: 6.30
Self-citation rate: 6.20%
Articles per year: 124
Review time: >12 weeks

About the journal: The Journal of Esthetic and Restorative Dentistry (JERD) is the longest-standing peer-reviewed journal devoted solely to advancing the knowledge and practice of esthetic dentistry. Its goal is to provide the very latest evidence-based information in the realm of contemporary interdisciplinary esthetic dentistry through high-quality clinical papers, sound research reports and educational features. The range of topics covered in the journal includes:

- Interdisciplinary esthetic concepts
- Implants
- Conservative adhesive restorations
- Tooth whitening
- Prosthodontic materials and techniques
- Dental materials
- Orthodontic, periodontal and endodontic esthetics
- Esthetics-related research
- Innovations in esthetics