{"title":"牙科教育中的人工智能:基于人工智能的聊天机器人能与全科医生竞争吗?","authors":"Ali Can Bulut, Hasibe Sevilay Bahadır, Gül Ateş","doi":"10.1186/s12909-025-07880-7","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>This study aimed to evaluate the performance of seven AI-based chatbots (ChatGPT-4, ChatGPT-3.5, ChatGPT 01-Preview, ChatGPT 01-Mini, Microsoft Bing, Claude, and Google Gemini) in answering multiple-choice questions related to prosthetic dentistry from the Turkish Dental Specialization Mock Exam (DUSDATA TR). Additionally, the study investigated whether these chatbots could provide responses at an accuracy level comparable to general practitioners.</p><p><strong>Methods: </strong>A total of ten multiple-choice questions related to prosthetic dentistry were selected from a preparatory exam by a private educational institution. Two groups were formed: (1) General practitioners (Human Group, N = 657) and (2) AI-based chatbots. Each question was manually input into the chatbots, and their responses were recorded. Correct responses were marked as \"1\" and incorrect responses as \"0\". The consistency and accuracy of chatbot responses were analyzed using Fisher's exact test and Cochran's Q test. Statistical significance was set at p < 0.05.</p><p><strong>Results: </strong>A statistically significant difference was found between the accuracy rates of chatbot responses (p < 0.05). ChatGPT-3.5, ChatGPT-4, and Google Gemini failed to provide correct answers to questions 2, 5, 7, 8, and 9, while Microsoft Bing failed on questions 5, 7, 8, and 10. None of the chatbots answered question 7 correctly. General practitioners demonstrated the highest accuracy rates, particularly for question 10 (80.3%) and question 9 (44.4%). Despite variations in accuracy, chatbot responses remained consistent over time (p > 0.05). However, Bing was identified as the chatbot with the highest number of incorrect responses.</p><p><strong>Conclusion: </strong>The study findings indicate that The performance of AI-based chatbots varies significantly and lacks consistency in answering prosthetic dentistry-related exam questions, necessitating further improvement before implementation.</p>","PeriodicalId":51234,"journal":{"name":"BMC Medical Education","volume":"25 1","pages":"1319"},"PeriodicalIF":3.2000,"publicationDate":"2025-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12492586/pdf/","citationCount":"0","resultStr":"{\"title\":\"Artificial intelligence in dental education: can AI-based chatbots compete with general practitioners?\",\"authors\":\"Ali Can Bulut, Hasibe Sevilay Bahadır, Gül Ateş\",\"doi\":\"10.1186/s12909-025-07880-7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>This study aimed to evaluate the performance of seven AI-based chatbots (ChatGPT-4, ChatGPT-3.5, ChatGPT 01-Preview, ChatGPT 01-Mini, Microsoft Bing, Claude, and Google Gemini) in answering multiple-choice questions related to prosthetic dentistry from the Turkish Dental Specialization Mock Exam (DUSDATA TR). Additionally, the study investigated whether these chatbots could provide responses at an accuracy level comparable to general practitioners.</p><p><strong>Methods: </strong>A total of ten multiple-choice questions related to prosthetic dentistry were selected from a preparatory exam by a private educational institution. 
Two groups were formed: (1) General practitioners (Human Group, N = 657) and (2) AI-based chatbots. Each question was manually input into the chatbots, and their responses were recorded. Correct responses were marked as \\\"1\\\" and incorrect responses as \\\"0\\\". The consistency and accuracy of chatbot responses were analyzed using Fisher's exact test and Cochran's Q test. Statistical significance was set at p < 0.05.</p><p><strong>Results: </strong>A statistically significant difference was found between the accuracy rates of chatbot responses (p < 0.05). ChatGPT-3.5, ChatGPT-4, and Google Gemini failed to provide correct answers to questions 2, 5, 7, 8, and 9, while Microsoft Bing failed on questions 5, 7, 8, and 10. None of the chatbots answered question 7 correctly. General practitioners demonstrated the highest accuracy rates, particularly for question 10 (80.3%) and question 9 (44.4%). Despite variations in accuracy, chatbot responses remained consistent over time (p > 0.05). However, Bing was identified as the chatbot with the highest number of incorrect responses.</p><p><strong>Conclusion: </strong>The study findings indicate that The performance of AI-based chatbots varies significantly and lacks consistency in answering prosthetic dentistry-related exam questions, necessitating further improvement before implementation.</p>\",\"PeriodicalId\":51234,\"journal\":{\"name\":\"BMC Medical Education\",\"volume\":\"25 1\",\"pages\":\"1319\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2025-10-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12492586/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"BMC Medical Education\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1186/s12909-025-07880-7\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMC Medical Education","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s12909-025-07880-7","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Artificial intelligence in dental education: can AI-based chatbots compete with general practitioners?
Background: This study aimed to evaluate the performance of seven AI-based chatbots (ChatGPT-4, ChatGPT-3.5, ChatGPT o1-Preview, ChatGPT o1-Mini, Microsoft Bing, Claude, and Google Gemini) in answering multiple-choice questions on prosthetic dentistry from the Turkish Dental Specialization Mock Exam (DUSDATA TR). The study also investigated whether these chatbots could respond at an accuracy level comparable to that of general practitioners.
Methods: Ten multiple-choice questions on prosthetic dentistry were selected from a preparatory exam administered by a private educational institution. Two groups were formed: (1) general practitioners (human group, N = 657) and (2) AI-based chatbots. Each question was manually entered into each chatbot, and the responses were recorded; correct responses were scored "1" and incorrect responses "0". The consistency and accuracy of chatbot responses were analyzed using Fisher's exact test and Cochran's Q test, with statistical significance set at p < 0.05.
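The following is a minimal sketch, not the authors' code, of how binary-scored (1 = correct, 0 = incorrect) responses could be analyzed with the two tests named in the Methods. The 0/1 matrix, the chatbot columns, and the pairwise comparison are illustrative placeholders rather than the study's data, and scipy and statsmodels are assumed to be available.

# Illustrative analysis of binary-scored chatbot responses (placeholder data).
import numpy as np
from scipy.stats import fisher_exact
from statsmodels.stats.contingency_tables import cochrans_q

# Rows = 10 questions, columns = chatbots; 1 = correct, 0 = incorrect.
# Values below are invented for illustration only.
scores = np.array([
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 1],
])  # columns could stand for, e.g., three hypothetical chatbots

# Cochran's Q test: do correct-answer proportions differ across the chatbots?
q_result = cochrans_q(scores, return_object=True)
print(f"Cochran's Q = {q_result.statistic:.3f}, p = {q_result.pvalue:.3f}")

# Fisher's exact test on a 2x2 table comparing two chatbots
# (correct vs. incorrect counts over the 10 questions).
a, b = scores[:, 0], scores[:, 1]
table = [[a.sum(), len(a) - a.sum()],
         [b.sum(), len(b) - b.sum()]]
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_value:.3f}")

In practice, Cochran's Q operates on the full question-by-chatbot matrix of matched binary outcomes, while Fisher's exact test compares counts of correct and incorrect answers between groups in a contingency table.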
Results: A statistically significant difference was found among the accuracy rates of the chatbots' responses (p < 0.05). ChatGPT-3.5, ChatGPT-4, and Google Gemini failed to answer questions 2, 5, 7, 8, and 9 correctly, while Microsoft Bing failed on questions 5, 7, 8, and 10. None of the chatbots answered question 7 correctly. General practitioners demonstrated the highest accuracy rates, notably on question 10 (80.3%) and question 9 (44.4%). Despite variations in accuracy, chatbot responses remained consistent over time (p > 0.05); however, Bing produced the highest number of incorrect responses.
Conclusion: The findings indicate that the performance of AI-based chatbots varies significantly and lacks consistency in answering prosthetic dentistry exam questions, necessitating further improvement before implementation in dental education.
Journal overview:
BMC Medical Education is an open access journal publishing original peer-reviewed research articles in relation to the training of healthcare professionals, including undergraduate, postgraduate, and continuing education. The journal has a special focus on curriculum development, evaluations of performance, assessment of training needs and evidence-based medicine.