{"title":"刷、字节和机器人:ChatGPT、Gemini和Copilot中人工智能生成的儿科牙科建议的质量比较。","authors":"Deepika Kapoor, Deepanshu Garg, Santosh Kumar Tadakamadla","doi":"10.3389/froh.2025.1652422","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>Artificial intelligence (AI) tools such as ChatGPT, Google Gemini, and Microsoft Copilot are increasingly relied upon by parents for immediate guidance on pediatric dental concerns. This study evaluated and compared the response quality of these AI platforms in addressing real-world parental queries related to pediatric dentistry, including early tooth extraction, space maintenance, and the decision to consult a pediatric or a general dentist.</p><p><strong>Methods: </strong>A structured 30-question survey was developed and submitted to each AI model, and their responses were anonymized and assessed by pediatric dental experts using a standardized rubric across five key domains: clinical accuracy, clarity, completeness, relevance, and absence of misleading information.</p><p><strong>Results: </strong>Statistically significant differences were found across all five domains (<i>p</i> < .001), with ChatGPT consistently achieving the highest scores. Multivariate analysis (MANOVA) confirmed a strong overall effect of the AI model on response quality (Pillai's Trace = 0.892, <i>p</i> < .001), supporting ChatGPT's superior performance in providing accurate, relevant, and comprehensive pediatric dental advice.</p><p><strong>Discussion: </strong>While AI technologies show potential as clinical decision support systems, their variable performance reinforces the need for expert oversight. 
Future AI development should focus on optimizing response quality and safety to ensure effective and trustworthy digital health communication for pediatric dental care.</p>","PeriodicalId":94016,"journal":{"name":"Frontiers in oral health","volume":"6 ","pages":"1652422"},"PeriodicalIF":3.1000,"publicationDate":"2025-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12394529/pdf/","citationCount":"0","resultStr":"{\"title\":\"Brush, byte, and bot: quality comparison of artificial intelligence-generated pediatric dental advice across ChatGPT, Gemini, and Copilot.\",\"authors\":\"Deepika Kapoor, Deepanshu Garg, Santosh Kumar Tadakamadla\",\"doi\":\"10.3389/froh.2025.1652422\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Introduction: </strong>Artificial intelligence (AI) tools such as ChatGPT, Google Gemini, and Microsoft Copilot are increasingly relied upon by parents for immediate guidance on pediatric dental concerns. This study evaluated and compared the response quality of these AI platforms in addressing real-world parental queries related to pediatric dentistry, including early tooth extraction, space maintenance, and the decision to consult a pediatric or a general dentist.</p><p><strong>Methods: </strong>A structured 30-question survey was developed and submitted to each AI model, and their responses were anonymized and assessed by pediatric dental experts using a standardized rubric across five key domains: clinical accuracy, clarity, completeness, relevance, and absence of misleading information.</p><p><strong>Results: </strong>Statistically significant differences were found across all five domains (<i>p</i> < .001), with ChatGPT consistently achieving the highest scores. 
Multivariate analysis (MANOVA) confirmed a strong overall effect of the AI model on response quality (Pillai's Trace = 0.892, <i>p</i> < .001), supporting ChatGPT's superior performance in providing accurate, relevant, and comprehensive pediatric dental advice.</p><p><strong>Discussion: </strong>While AI technologies show potential as clinical decision support systems, their variable performance reinforces the need for expert oversight. Future AI development should focus on optimizing response quality and safety to ensure effective and trustworthy digital health communication for pediatric dental care.</p>\",\"PeriodicalId\":94016,\"journal\":{\"name\":\"Frontiers in oral health\",\"volume\":\"6 \",\"pages\":\"1652422\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2025-08-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12394529/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers in oral health\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3389/froh.2025.1652422\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q1\",\"JCRName\":\"DENTISTRY, ORAL SURGERY & MEDICINE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in oral health","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/froh.2025.1652422","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"DENTISTRY, ORAL SURGERY & MEDICINE","Score":null,"Total":0}
Brush, byte, and bot: quality comparison of artificial intelligence-generated pediatric dental advice across ChatGPT, Gemini, and Copilot.
Introduction: Parents increasingly rely on artificial intelligence (AI) tools such as ChatGPT, Google Gemini, and Microsoft Copilot for immediate guidance on pediatric dental concerns. This study evaluated and compared the response quality of these AI platforms in addressing real-world parental queries related to pediatric dentistry, including early tooth extraction, space maintenance, and whether to consult a pediatric or a general dentist.
Methods: A structured 30-question survey was developed and posed to each AI model. The responses were anonymized and assessed by pediatric dental experts using a standardized rubric across five key domains: clinical accuracy, clarity, completeness, relevance, and absence of misleading information.
Results: Statistically significant differences were found across all five domains (p < .001), with ChatGPT consistently achieving the highest scores. A multivariate analysis of variance (MANOVA) confirmed a strong overall effect of the AI model on response quality (Pillai's Trace = 0.892, p < .001), supporting ChatGPT's superior performance in providing accurate, relevant, and comprehensive pediatric dental advice.
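The Pillai's trace statistic reported above can be sketched directly from the between-group and within-group sum-of-squares-and-cross-products (SSCP) matrices. The snippet below illustrates the computation on simulated rubric scores (three models × 30 questions × five domains); the group means and data are hypothetical illustration values, not the study's dataset, and will not reproduce the published statistic of 0.892.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated expert rubric scores (1-5 scale) for 30 questions per AI model
# across five domains: accuracy, clarity, completeness, relevance,
# absence of misleading information. Group means are made up for illustration.
models = ["ChatGPT", "Gemini", "Copilot"]
means = {"ChatGPT": 4.6, "Gemini": 4.0, "Copilot": 3.8}
n, p = 30, 5  # questions per model, number of outcome domains

groups = {m: np.clip(rng.normal(means[m], 0.3, size=(n, p)), 1, 5)
          for m in models}

X = np.vstack(list(groups.values()))  # all 90 observations, shape (90, 5)
grand_mean = X.mean(axis=0)

# Between-group (H) and within-group (E) SSCP matrices.
H = np.zeros((p, p))
E = np.zeros((p, p))
for m in models:
    G = groups[m]
    d = G.mean(axis=0) - grand_mean
    H += n * np.outer(d, d)
    centered = G - G.mean(axis=0)
    E += centered.T @ centered

# Pillai's trace: V = tr(H (H + E)^-1), bounded by min(p, groups - 1) = 2.
pillai = np.trace(H @ np.linalg.inv(H + E))
print(f"Pillai's trace = {pillai:.3f}")
```

A value near the upper bound (here min(5, 2) = 2 for three groups) indicates that the model factor accounts for a large share of the multivariate variance; statistical packages then convert this trace to an approximate F statistic for the p-value.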
Discussion: While AI technologies show potential as clinical decision support systems, their variable performance reinforces the need for expert oversight. Future AI development should focus on optimizing response quality and safety to ensure effective and trustworthy digital health communication for pediatric dental care.