Pedram Hajibagheri, Sahba Khosousi Sani, Mohammad Samami, Rasoul Tabari-Khomeiran, Kiana Azadpeyma, Mohammad Khosousi Sani
{"title":"ChatGpt在口腔病变诊断中的准确性。","authors":"Pedram Hajibagheri, Sahba Khosousi Sani, Mohammad Samami, Rasoul Tabari-Khomeiran, Kiana Azadpeyma, Mohammad Khosousi Sani","doi":"10.1186/s12903-025-06582-2","DOIUrl":null,"url":null,"abstract":"<p><strong>Aim: </strong>ChatGPT, a large language model (LLM) developed by OpenAI, is designed to generate human-like responses through the analysis of textual data. This study aimed to assess the accuracy and diagnostic capability of ChatGPT-4 in answering clinical scenario-based questions regarding oral lesions.</p><p><strong>Methods: </strong>The study included 133 multiple-choice questions (MCQs), each consisting of five possible answers, randomly selected from the Clinical Guide to Oral Disease. Two oral medicine specialists reviewed the answers in the book to ensure accuracy. A general dentist categorized the questions into three levels of difficulty, and two oral medicine specialists validated these categorizations. At each level of difficulty, 37 questions were randomly selected. Consequently, the final questionnaire, consisting of a total of 111 questions categorized by difficulty level, was prepared. The process of asking questions began using the ''new message'' command, to minimize potential bias (influence of prior answers), the researchers manually cleared the chat history before presenting each new question.</p><p><strong>Result: </strong>ChatGPT-4.0 demonstrated an accuracy rate of 97% for easy questions, 86.5% ± 34.6% for medium-level questions, and 78.4% ± 41.7% for difficult questions, with an overall accuracy rate of 87.4% ± 33.3.</p><p><strong>Conclusion: </strong>Although ChatGPT-4.0 demonstrated satisfactory accuracy in answering clinical questions, its responses should not be exclusively relied upon for diagnostic purposes. Instead, the model should be utilized as a complementary tool under the supervision of clinicians in the diagnosis of oral lesions.</p>","PeriodicalId":9072,"journal":{"name":"BMC Oral Health","volume":"25 1","pages":"1229"},"PeriodicalIF":3.1000,"publicationDate":"2025-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12281746/pdf/","citationCount":"0","resultStr":"{\"title\":\"ChatGpt's accuracy in the diagnosis of oral lesions.\",\"authors\":\"Pedram Hajibagheri, Sahba Khosousi Sani, Mohammad Samami, Rasoul Tabari-Khomeiran, Kiana Azadpeyma, Mohammad Khosousi Sani\",\"doi\":\"10.1186/s12903-025-06582-2\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Aim: </strong>ChatGPT, a large language model (LLM) developed by OpenAI, is designed to generate human-like responses through the analysis of textual data. This study aimed to assess the accuracy and diagnostic capability of ChatGPT-4 in answering clinical scenario-based questions regarding oral lesions.</p><p><strong>Methods: </strong>The study included 133 multiple-choice questions (MCQs), each consisting of five possible answers, randomly selected from the Clinical Guide to Oral Disease. Two oral medicine specialists reviewed the answers in the book to ensure accuracy. A general dentist categorized the questions into three levels of difficulty, and two oral medicine specialists validated these categorizations. At each level of difficulty, 37 questions were randomly selected. Consequently, the final questionnaire, consisting of a total of 111 questions categorized by difficulty level, was prepared. 
The process of asking questions began using the ''new message'' command, to minimize potential bias (influence of prior answers), the researchers manually cleared the chat history before presenting each new question.</p><p><strong>Result: </strong>ChatGPT-4.0 demonstrated an accuracy rate of 97% for easy questions, 86.5% ± 34.6% for medium-level questions, and 78.4% ± 41.7% for difficult questions, with an overall accuracy rate of 87.4% ± 33.3.</p><p><strong>Conclusion: </strong>Although ChatGPT-4.0 demonstrated satisfactory accuracy in answering clinical questions, its responses should not be exclusively relied upon for diagnostic purposes. Instead, the model should be utilized as a complementary tool under the supervision of clinicians in the diagnosis of oral lesions.</p>\",\"PeriodicalId\":9072,\"journal\":{\"name\":\"BMC Oral Health\",\"volume\":\"25 1\",\"pages\":\"1229\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2025-07-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12281746/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"BMC Oral Health\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1186/s12903-025-06582-2\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"DENTISTRY, ORAL SURGERY & MEDICINE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMC Oral Health","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s12903-025-06582-2","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"DENTISTRY, ORAL SURGERY & MEDICINE","Score":null,"Total":0}
ChatGPT's accuracy in the diagnosis of oral lesions.
Aim: ChatGPT, a large language model (LLM) developed by OpenAI, is designed to generate human-like responses through the analysis of textual data. This study aimed to assess the accuracy and diagnostic capability of ChatGPT-4 in answering clinical scenario-based questions regarding oral lesions.
Methods: The study included 133 multiple-choice questions (MCQs), each with five possible answers, randomly selected from the Clinical Guide to Oral Disease. Two oral medicine specialists reviewed the answers given in the book to ensure accuracy. A general dentist categorized the questions into three difficulty levels, and two oral medicine specialists validated these categorizations. Thirty-seven questions were then randomly selected at each difficulty level, so the final questionnaire comprised 111 questions stratified by difficulty. Each question was posed in a new conversation, started with the "new message" command, and the researchers manually cleared the chat history before presenting each question to minimize potential bias from prior answers.
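For illustration only, a minimal sketch of how the stratified sampling and "one question per fresh conversation" protocol could be scripted (assuming the questions are stored as dictionaries and the OpenAI Python SDK is used; the field names, model identifier, and prompt format are hypothetical and not taken from the study):

import random
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def stratified_sample(questions, per_level=37, seed=0):
    """Randomly draw the same number of MCQs from each difficulty level."""
    rng = random.Random(seed)
    sample = []
    for level in ("easy", "medium", "difficult"):
        pool = [q for q in questions if q["difficulty"] == level]  # hypothetical field
        sample.extend(rng.sample(pool, per_level))
    return sample

def ask_in_fresh_chat(question, model="gpt-4"):
    """Send one MCQ in its own conversation so earlier answers cannot influence it."""
    prompt = question["stem"] + "\n" + "\n".join(question["options"])  # hypothetical fields
    response = client.chat.completions.create(
        model=model,                                      # illustrative model name
        messages=[{"role": "user", "content": prompt}],   # no prior chat history
    )
    return response.choices[0].message.content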
Results: ChatGPT-4.0 achieved an accuracy of 97% on easy questions, 86.5% ± 34.6% on medium-level questions, and 78.4% ± 41.7% on difficult questions, with an overall accuracy of 87.4% ± 33.3%.
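As a reader's check (an interpretation, not stated in the abstract), the reported ± values are consistent with the sample standard deviation of per-question binary scores (1 = correct, 0 = incorrect) for the implied counts of correct answers (32/37 medium, 29/37 difficult, 97/111 overall), which can be verified in a few lines of Python:

import statistics

def summarize(correct, total):
    """Mean and sample standard deviation of binary per-question scores."""
    scores = [1] * correct + [0] * (total - correct)
    return statistics.mean(scores), statistics.stdev(scores)

print(summarize(32, 37))   # medium: about 0.865 and 0.347, close to 86.5% ± 34.6%
print(summarize(29, 37))   # difficult: about 0.784 and 0.417 (78.4% ± 41.7%)
print(summarize(97, 111))  # overall: about 0.874 and 0.333 (87.4% ± 33.3%)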
Conclusion: Although ChatGPT-4.0 demonstrated satisfactory accuracy in answering clinical questions, its responses should not be relied upon exclusively for diagnosis. Instead, the model should be used as a complementary tool, under clinician supervision, in the diagnosis of oral lesions.
About the journal:
BMC Oral Health is an open access, peer-reviewed journal that considers articles on all aspects of the prevention, diagnosis and management of disorders of the mouth, teeth and gums, as well as related molecular genetics, pathophysiology, and epidemiology.