Khaled Wafaie, Mohamed E Basyouni, Tanmoy Bhattacharjee, Sabarinath Prasad, Baraa Daraqel, Hisham Mohammed
BMC Oral Health 25(1):1558, published 2025-10-08. doi:10.1186/s12903-025-06960-w
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12505568/pdf/
Diagnostic accuracy of generative large language artificial intelligence models for the assessment of dental crowding.
Background: Generative artificial intelligence (AI) models have shown potential for addressing text-based dental enquiries and answering exam questions. However, their role in diagnosis and treatment planning has not been thoroughly investigated. This study aimed to investigate the reliability of different generative AI models in classifying the severity of dental crowding.
Methods: Two experienced orthodontists categorized the severity of dental crowding in 120 intraoral occlusal images as mild, moderate, or severe (40 images per category). The images were then uploaded to three generative AI models (ChatGPT-4o mini, Microsoft Copilot, and Claude 3.5 Sonnet), which were prompted to identify the dental arch and to assess the severity of dental crowding. Response times were recorded, and outputs were compared to the orthodontists' assessments. A random subset of images was re-analyzed after one week to evaluate model consistency.
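The abstract does not include the scoring procedure itself. As a rough sketch of the workflow it describes, assuming hypothetical image names and a placeholder classify_crowding function standing in for a multimodal model call, the per-image comparison against the orthodontists' labels might look like:

```python
import time

# Hypothetical reference labels from the two orthodontists
# (the study used 120 images, 40 per severity category).
REFERENCE = {"img01.jpg": "mild", "img02.jpg": "moderate", "img03.jpg": "severe"}

def classify_crowding(image_path):
    # Stand-in for an image-plus-prompt request to a multimodal model
    # (e.g. ChatGPT-4o mini, Copilot, or Claude 3.5 Sonnet); it must
    # return one of "mild", "moderate", or "severe".
    return "mild"

def evaluate(model_fn, reference):
    """Score model labels against the orthodontists' reference labels,
    recording the response time for each image."""
    results, times = {}, {}
    for path, truth in reference.items():
        start = time.perf_counter()
        label = model_fn(path)
        times[path] = time.perf_counter() - start
        results[path] = (label, label == truth)
    accuracy = sum(ok for _, ok in results.values()) / len(results)
    return accuracy, results, times

accuracy, results, times = evaluate(classify_crowding, REFERENCE)
```

With a real API client substituted for the placeholder, the same loop yields both the classification accuracy and the per-model response times reported in the Results.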
Results: Claude 3.5 Sonnet correctly classified the severity of dental crowding in 50% of the images, followed by ChatGPT-4o mini (44%) and Copilot (34%). Visual recognition of the dental arches was higher with Claude and ChatGPT-4o mini (99%) than with Copilot (72%). Response generation took significantly longer for ChatGPT-4o mini than for Claude or Copilot (p < .0001), while response times for Claude and Copilot were comparable (p = .98). Repeated analyses showed improved image classification for both ChatGPT-4o mini and Copilot, while Claude 3.5 Sonnet misclassified a substantial portion of the images.
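The one-week re-analysis amounts to a test-retest consistency check. A minimal sketch, using hypothetical labels for a four-image subset (the actual subset size and labels are not given in the abstract), is simple percent agreement between the two runs:

```python
def percent_agreement(first_run, second_run):
    """Fraction of images assigned the same label in both runs,
    a simple test-retest consistency measure."""
    shared = first_run.keys() & second_run.keys()
    same = sum(first_run[k] == second_run[k] for k in shared)
    return same / len(shared)

# Hypothetical labels for a 4-image subset, scored a week apart.
run_1 = {"a": "mild", "b": "moderate", "c": "severe", "d": "mild"}
run_2 = {"a": "mild", "b": "severe", "c": "severe", "d": "mild"}
consistency = percent_agreement(run_1, run_2)  # 3 of 4 labels unchanged
```

For ordinal categories such as mild/moderate/severe, a weighted kappa statistic would additionally correct for chance agreement, but raw agreement is the simplest starting point.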
Conclusions: The performance of ChatGPT-4o mini, Microsoft Copilot, and Claude 3.5 Sonnet in assessing the severity of dental crowding often did not match the orthodontists' evaluations. Further development of the image processing capabilities of commercially available generative AI models is required before they can be used reliably for dental crowding classification.
About the journal:
BMC Oral Health is an open access, peer-reviewed journal that considers articles on all aspects of the prevention, diagnosis and management of disorders of the mouth, teeth and gums, as well as related molecular genetics, pathophysiology, and epidemiology.