Shankargouda Patil, Gabriel Eisenhuth, Tarek El-Bialy, Frank W Licari
{"title":"四种大型语言模型在正畸知识评估中的可靠性与性能。","authors":"Shankargouda Patil, Gabriel Eisenhuth, Tarek El-Bialy, Frank W Licari","doi":"10.1002/jdd.14002","DOIUrl":null,"url":null,"abstract":"<p><p>Artificial intelligence-based large language models (LLMs) are gaining prominence as educational tools. This study evaluated the accuracy and reliability of four popular publicly available LLM models-ChatGPT 4.0, ChatGPT 4o, Google Gemini, and Microsoft CoPilot-in answering orthodontic questions from the National Board of Dental Examiners examinations. Each model was tested across three trials to assess response consistency. Reliability was analyzed using Cohen's and Fleiss' Kappa. Among the four tested models, Microsoft CoPilot demonstrated the highest reliability, while ChatGPT-4.0 had the highest accuracy. Variability across trials suggests that AI-generated responses remain inconsistent. The variable responses generated over time by LLMs limit their standalone applicability in orthodontic education. Older models at times outperformed newer models. AI model updates do not necessarily lead to improved reliability. Although AI models may show potential as supplementary study aids, their accuracy and stability require further refinement before being deployed in educational contexts.</p>","PeriodicalId":50216,"journal":{"name":"Journal of Dental Education","volume":" ","pages":""},"PeriodicalIF":1.6000,"publicationDate":"2025-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Reliability and Performance of Four Large Language Models in Orthodontic Knowledge Assessment.\",\"authors\":\"Shankargouda Patil, Gabriel Eisenhuth, Tarek El-Bialy, Frank W Licari\",\"doi\":\"10.1002/jdd.14002\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Artificial intelligence-based large language models (LLMs) are gaining prominence as educational tools. 
This study evaluated the accuracy and reliability of four popular publicly available LLM models-ChatGPT 4.0, ChatGPT 4o, Google Gemini, and Microsoft CoPilot-in answering orthodontic questions from the National Board of Dental Examiners examinations. Each model was tested across three trials to assess response consistency. Reliability was analyzed using Cohen's and Fleiss' Kappa. Among the four tested models, Microsoft CoPilot demonstrated the highest reliability, while ChatGPT-4.0 had the highest accuracy. Variability across trials suggests that AI-generated responses remain inconsistent. The variable responses generated over time by LLMs limit their standalone applicability in orthodontic education. Older models at times outperformed newer models. AI model updates do not necessarily lead to improved reliability. Although AI models may show potential as supplementary study aids, their accuracy and stability require further refinement before being deployed in educational contexts.</p>\",\"PeriodicalId\":50216,\"journal\":{\"name\":\"Journal of Dental Education\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2025-07-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Dental Education\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1002/jdd.14002\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"DENTISTRY, ORAL SURGERY & MEDICINE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Dental 
Education","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1002/jdd.14002","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"DENTISTRY, ORAL SURGERY & MEDICINE","Score":null,"Total":0}
Reliability and Performance of Four Large Language Models in Orthodontic Knowledge Assessment.
Artificial intelligence-based large language models (LLMs) are gaining prominence as educational tools. This study evaluated the accuracy and reliability of four popular, publicly available LLMs (ChatGPT 4.0, ChatGPT 4o, Google Gemini, and Microsoft CoPilot) in answering orthodontic questions from the National Board of Dental Examiners examinations. Each model was tested across three trials to assess response consistency. Reliability was analyzed using Cohen's kappa and Fleiss' kappa. Among the four tested models, Microsoft CoPilot demonstrated the highest reliability, while ChatGPT 4.0 had the highest accuracy. Variability across trials suggests that AI-generated responses remain inconsistent, and the variable responses LLMs generate over time limit their standalone applicability in orthodontic education. Older models at times outperformed newer ones, indicating that model updates do not necessarily improve reliability. Although AI models show potential as supplementary study aids, their accuracy and stability require further refinement before deployment in educational contexts.
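The reliability analysis the abstract describes, agreement among repeated trials of the same model on the same multiple-choice questions, can be sketched with Fleiss' kappa. The sketch below uses entirely hypothetical answer data (the study's actual questions and responses are not available); the `fleiss_kappa` function is an illustrative implementation of the standard formula, not the authors' code.

```python
# Sketch of a Fleiss' kappa calculation for LLM response consistency.
# Each question is answered in n independent trials; each answer is one
# of k categories (the multiple-choice options). Kappa = 1 means the
# model answered identically in every trial; values near 0 mean
# agreement no better than chance. Data below are hypothetical.

def fleiss_kappa(ratings, categories):
    """ratings: one list per question, holding the answer chosen in each trial."""
    n = len(ratings[0])   # trials per question (three in the study)
    N = len(ratings)      # number of questions
    # counts[i][j] = how many trials picked category j on question i
    counts = [[row.count(c) for c in categories] for row in ratings]
    # Observed per-question agreement, averaged over questions
    P_i = [(sum(m * m for m in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N
    # Chance agreement from the marginal category proportions
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(len(categories))]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)  # undefined if P_e == 1 (degenerate data)

# Hypothetical answers from one model over three trials on five questions
trials = [
    ["A", "A", "A"],   # fully consistent
    ["B", "B", "C"],   # one deviating trial
    ["D", "D", "D"],
    ["A", "C", "C"],
    ["B", "B", "B"],
]
kappa = fleiss_kappa(trials, categories=["A", "B", "C", "D"])
```

In this framing, each "rater" is one trial of the model, so a low kappa directly quantifies the trial-to-trial inconsistency the study reports. The equivalent computation is also available as `statsmodels.stats.inter_rater.fleiss_kappa` for those who prefer a library routine.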
About the journal:
The Journal of Dental Education (JDE) is a peer-reviewed monthly journal that publishes a wide variety of educational and scientific research in dental, allied dental and advanced dental education. Published continuously by the American Dental Education Association since 1936 and internationally recognized as the premier journal for academic dentistry, the JDE publishes articles on such topics as curriculum reform, education research methods, innovative educational and assessment methodologies, faculty development, community-based dental education, student recruitment and admissions, professional and educational ethics, dental education around the world and systematic reviews of educational interest. The JDE is one of the top scholarly journals publishing the most important work in oral health education today; it celebrated its 80th anniversary in 2016.