Comparison of responses from different artificial intelligence-powered chatbots regarding the All-on-four dental implant concept
Hasan Akpınar
BMC Oral Health, 25(1):922, published 2025-06-05. DOI: 10.1186/s12903-025-06294-7. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12142943/pdf/
Citations: 0
Abstract
Background: Recent advancements in Artificial Intelligence (AI) have transformed the healthcare field, particularly through chatbots like ChatGPT, OpenEvidence, and MediSearch. These tools analyze complex data to aid clinical decision-making, enhancing efficiency in diagnosis, treatment planning, and patient management. When applied to the "All-on-Four" dental implant concept, AI facilitates immediate prosthetic restorations and meets the demand for expert guidance. This integration supports long-term surgical success by providing real-time support and improving patient education and postoperative satisfaction. This study aimed to evaluate the effectiveness of three AI-powered chatbots (ChatGPT 4.0, OpenEvidence, and MediSearch) in answering frequently asked questions regarding the All-on-Four dental implant concept.
Method: This study investigated the response accuracy of three AI-powered chatbots to common queries about the All-on-Four dental implant concept. Using alsoasked.com, twenty pertinent questions (ten patient-focused and ten technical) were identified. Oral and maxillofacial surgeons evaluated the chatbot responses using a 5-point Likert scale. Statistical analysis was performed with the Kruskal-Wallis test, supplemented by pairwise Mann-Whitney U tests with Bonferroni correction, to assess the significance of differences among the chatbots' performances.
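The Kruskal-Wallis comparison described above can be sketched in plain Python. The scores below are purely illustrative (the study's actual ratings are not reproduced here), and the study itself presumably used a standard statistics package; this sketch just shows the tie-corrected H statistic that such a package computes for Likert-type data:

```python
from collections import Counter
from itertools import chain

def average_ranks(values):
    """Rank observations 1..N, assigning tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def kruskal_wallis_h(*groups):
    """Tie-corrected Kruskal-Wallis H statistic for k independent groups."""
    data = list(chain.from_iterable(groups))
    n = len(data)
    ranks = average_ranks(data)
    h, idx = 0.0, 0
    for g in groups:
        rank_sum = sum(ranks[idx:idx + len(g)])
        h += rank_sum ** 2 / len(g)
        idx += len(g)
    h = 12.0 / (n * (n + 1)) * h - 3 * (n + 1)
    # Tie correction: Likert scores produce many ties, so this matters here.
    ties = sum(t ** 3 - t for t in Counter(data).values())
    correction = 1 - ties / (n ** 3 - n)
    return h / correction if correction else h

# Hypothetical 5-point Likert ratings for three chatbots on ten questions
chatgpt = [5, 4, 5, 4, 5, 3, 4, 5, 4, 5]
openevidence = [3, 3, 4, 2, 3, 3, 4, 3, 2, 3]
medisearch = [4, 4, 5, 3, 4, 4, 5, 4, 3, 4]
print(round(kruskal_wallis_h(chatgpt, openevidence, medisearch), 3))
```

The H statistic is then compared against a chi-squared distribution with k-1 degrees of freedom to obtain the p-value reported in the Results.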
Results: The Kruskal-Wallis test showed statistically significant differences between the three chatbots for both patient and technical questions (p < 0.01). Pairwise comparisons were evaluated using the Mann-Whitney U test. While significant differences were found between every pair of chatbots for patient questions, no significant difference was observed between ChatGPT and MediSearch for technical questions (p = 0.158). When comparing responses of the same chatbot to patient and technical questions, MediSearch was found to perform better on technical questions (p < 0.001).
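The pairwise step behind these results can likewise be sketched. This is a self-contained, hypothetical illustration (not the study's data): the U statistic uses average ranks to handle tied Likert scores, and the Bonferroni correction simply tightens the per-comparison significance threshold when, as here, three pairwise tests are run:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic (smaller of U1, U2), average ranks for ties."""
    combined = sorted(a + b)
    positions = {}
    for idx, v in enumerate(combined, start=1):
        positions.setdefault(v, []).append(idx)
    avg_rank = {v: sum(p) / len(p) for v, p in positions.items()}
    r1 = sum(avg_rank[v] for v in a)          # rank sum of the first group
    u1 = r1 - len(a) * (len(a) + 1) / 2
    return min(u1, len(a) * len(b) - u1)

# With three pairwise chatbot comparisons, Bonferroni keeps the family-wise
# error rate at 0.05 by testing each pair at alpha = 0.05 / 3.
bonferroni_alpha = 0.05 / 3

# Hypothetical ratings for two chatbots on five questions
print(mann_whitney_u([5, 4, 5, 4, 3], [3, 3, 4, 2, 3]))
```

A U near zero indicates strongly separated groups, while a U near len(a) * len(b) / 2 indicates heavily overlapping ratings; the p-value for each pair is then compared against the Bonferroni-adjusted alpha rather than 0.05.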
Conclusion: Advancements in technology have made AI-powered chatbots an inevitable influence in specialized medical fields such as Oral and Maxillofacial Surgery. Our findings indicate that these chatbots can provide valuable information for patients undergoing medical procedures and serve as a resource for healthcare professionals.
Journal introduction:
BMC Oral Health is an open access, peer-reviewed journal that considers articles on all aspects of the prevention, diagnosis and management of disorders of the mouth, teeth and gums, as well as related molecular genetics, pathophysiology, and epidemiology.