Vasiliki P Koidou, Georgios S Chatzopoulos, Lazaros Tsalikis, Eleutherios G Kaklamanos
Large Language Models in peri-implant disease: How well do they perform?
Journal of Prosthetic Dentistry (Q1, Dentistry, Oral Surgery & Medicine), published online 2025-03-06. DOI: 10.1016/j.prosdent.2025.02.008
Citations: 0
Abstract
Statement of problem: Artificial intelligence (AI) has gained significant recent attention, and several AI applications, such as Large Language Models (LLMs), are promising for use in clinical medicine and dentistry. Nevertheless, assessing the performance of LLMs is essential to identify potential inaccuracies or even prevent harmful outcomes.
Purpose: The purpose of this study was to evaluate and compare the evidence-based potential of answers provided by 4 LLMs to clinical questions in the field of implant dentistry.
Material and methods: A total of 10 open-ended questions pertinent to the prevention and treatment of peri-implant disease were posed to 4 distinct LLMs: ChatGPT 4.0, Google Gemini, Google Gemini Advanced, and Microsoft Copilot. The answers were evaluated independently by 2 periodontists against scientific evidence for comprehensiveness, scientific accuracy, clarity, and relevance. The LLM responses received scores ranging from 0 (minimum) to 10 (maximum) points. To assess intra-evaluator reliability, the LLM responses were re-evaluated after 2 weeks, and Cronbach α and the intraclass correlation coefficient (ICC) were used (α=.05).
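The intra-evaluator reliability check described above compares two scoring occasions by the same rater. As a minimal sketch of how such an agreement statistic is computed, the following hypothetical example calculates Cronbach α from two sets of 0-to-10 scores for the same 10 responses; the scores themselves are invented for illustration and are not data from the study.

```python
from statistics import pvariance

def cronbach_alpha(ratings):
    """Cronbach's alpha for k scoring occasions of the same n items.

    ratings: list of k lists, each holding the scores one occasion
    assigned to the same n responses, in the same order.
    """
    k = len(ratings)
    # Sum of the per-occasion score variances.
    item_vars = sum(pvariance(r) for r in ratings)
    # Variance of the per-response totals across occasions.
    totals = [sum(scores) for scores in zip(*ratings)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 0-10 scores one evaluator gave the same 10 LLM
# responses at baseline and at the 2-week re-evaluation.
first = [8, 7, 9, 6, 8, 7, 9, 5, 8, 7]
second = [8, 6, 9, 6, 7, 7, 9, 5, 8, 8]
print(round(cronbach_alpha([first, second]), 3))  # → 0.95
```

Values close to 1 indicate that the two scoring occasions were highly consistent, which is what the study's non-significant difference between occasions suggests.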
Results: The scores assigned by the examiners on the 2 occasions were not statistically different, and an average score was calculated for each LLM. Google Gemini Advanced ranked higher than the rest of the LLMs, while Google Gemini scored worst. The difference between Google Gemini Advanced and Google Gemini was statistically significant (P=.005).
Conclusions: Dental professionals need to be cautious when using LLMs to access content related to peri-implant diseases. LLMs cannot currently replace dental professionals, and caution should be exercised when they are used in patient care.
About the journal
The Journal of Prosthetic Dentistry is the leading professional journal devoted exclusively to prosthetic and restorative dentistry. The Journal is the official publication of 24 leading U.S. and international prosthodontic organizations. The monthly publication features timely, original peer-reviewed articles on the newest techniques, dental materials, and research findings. The Journal serves prosthodontists and dentists in advanced practice, and features color photos that illustrate many step-by-step procedures. The Journal of Prosthetic Dentistry is included in Index Medicus and CINAHL.