Gözde Işik, İrem A Kafadar-Gürbüz, Fulya Elgün, Remziye U Kara, Buse Berber, Semiha Özgül, Tayfun Günbay
{"title":"人工智能是口腔颌面外科临床实践的有用工具吗?","authors":"Gözde Işik, İrem A Kafadar-Gürbüz, Fulya Elgün, Remziye U Kara, Buse Berber, Semiha Özgül, Tayfun Günbay","doi":"10.1097/SCS.0000000000010686","DOIUrl":null,"url":null,"abstract":"<p><p>This study aimed to assess the usefulness of ChatGPT Plus generated responses to clinical-specific questions in oral and maxillofacial surgery. This cross-sectional study was conducted with questions composed according to the Clinical Practise Guide of Ege University, School of Dentistry, and with different subjects of oral and maxillofacial surgery at the undergraduate level. These questions were classified according to their difficulty level (easy, medium, and hard) and inputted into ChatGPT Plus. Three researchers evaluated the responses using a 7-point Likert-type accuracy scale and a modified global quality scale (range: 1-5). Also, error analysis was carried out for the questions scored ≤4 according to the accuracy assessment. A total of 66 questions were enrolled in this study. The questions included dental anesthesia, tooth extraction, preoperative and postoperative complications, suturing, writing prescriptions, and temporomandibular joint examination. The median accuracy score of ChatGPT Plus responses was 5, with 75% of the responses scoring 4 or above. The median quality score was 4, with 75% of the responses scoring 3 or above. There was a significant difference among the 3 difficulty levels, both in accuracy and quality scores (P<0.001 and 0.001, respectively). The median scores of hard-level questions were found to be lower than the easy-level and medium-level questions. The study outcomes emphasized high accuracy and quality in ChatGPT Plus's responses, except for the questions requiring a detailed response or a comment.</p>","PeriodicalId":15462,"journal":{"name":"Journal of Craniofacial Surgery","volume":" ","pages":""},"PeriodicalIF":1.0000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Is Artificial Intelligence a Useful Tool for Clinical Practice of Oral and Maxillofacial Surgery?\",\"authors\":\"Gözde Işik, İrem A Kafadar-Gürbüz, Fulya Elgün, Remziye U Kara, Buse Berber, Semiha Özgül, Tayfun Günbay\",\"doi\":\"10.1097/SCS.0000000000010686\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>This study aimed to assess the usefulness of ChatGPT Plus generated responses to clinical-specific questions in oral and maxillofacial surgery. This cross-sectional study was conducted with questions composed according to the Clinical Practise Guide of Ege University, School of Dentistry, and with different subjects of oral and maxillofacial surgery at the undergraduate level. These questions were classified according to their difficulty level (easy, medium, and hard) and inputted into ChatGPT Plus. Three researchers evaluated the responses using a 7-point Likert-type accuracy scale and a modified global quality scale (range: 1-5). Also, error analysis was carried out for the questions scored ≤4 according to the accuracy assessment. A total of 66 questions were enrolled in this study. The questions included dental anesthesia, tooth extraction, preoperative and postoperative complications, suturing, writing prescriptions, and temporomandibular joint examination. The median accuracy score of ChatGPT Plus responses was 5, with 75% of the responses scoring 4 or above. 
The median quality score was 4, with 75% of the responses scoring 3 or above. There was a significant difference among the 3 difficulty levels, both in accuracy and quality scores (P<0.001 and 0.001, respectively). The median scores of hard-level questions were found to be lower than the easy-level and medium-level questions. The study outcomes emphasized high accuracy and quality in ChatGPT Plus's responses, except for the questions requiring a detailed response or a comment.</p>\",\"PeriodicalId\":15462,\"journal\":{\"name\":\"Journal of Craniofacial Surgery\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2024-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Craniofacial Surgery\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1097/SCS.0000000000010686\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"SURGERY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Craniofacial Surgery","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1097/SCS.0000000000010686","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"SURGERY","Score":null,"Total":0}
Is Artificial Intelligence a Useful Tool for Clinical Practice of Oral and Maxillofacial Surgery?
This study aimed to assess the usefulness of ChatGPT Plus-generated responses to clinical questions specific to oral and maxillofacial surgery. This cross-sectional study used questions composed according to the Clinical Practice Guide of Ege University, School of Dentistry, covering different subjects of undergraduate-level oral and maxillofacial surgery. The questions were classified by difficulty level (easy, medium, and hard) and entered into ChatGPT Plus. Three researchers evaluated the responses using a 7-point Likert-type accuracy scale and a modified global quality scale (range: 1-5). In addition, an error analysis was carried out for questions that scored ≤4 on the accuracy assessment. A total of 66 questions were included in this study. The questions covered dental anesthesia, tooth extraction, preoperative and postoperative complications, suturing, prescription writing, and temporomandibular joint examination. The median accuracy score of the ChatGPT Plus responses was 5, with 75% of responses scoring 4 or above. The median quality score was 4, with 75% of responses scoring 3 or above. There was a significant difference among the 3 difficulty levels in both accuracy and quality scores (P<0.001 and 0.001, respectively). The median scores of the hard-level questions were lower than those of the easy- and medium-level questions. The study outcomes indicate high accuracy and quality in ChatGPT Plus's responses, except for questions requiring a detailed response or a comment.
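The abstract reports median scores, a 75th-percentile-style summary, an error-analysis subset (accuracy ≤4), and a comparison across three difficulty levels, but it does not publish raw scores or name its statistical test. The Python sketch below is only an illustration of how such an analysis could be reproduced; the Kruskal-Wallis test, the column names, and the toy data are assumptions, not the authors' actual workflow.

```python
# Minimal sketch of the score summary and difficulty-level comparison
# described in the abstract. All data and column names are hypothetical;
# the Kruskal-Wallis test is an assumption for ordinal scores across
# three independent groups (the paper does not name its test).
import pandas as pd
from scipy.stats import kruskal

# Each row: one question, its difficulty level, and the consensus rating
# on the 7-point accuracy scale and the 1-5 modified global quality scale.
scores = pd.DataFrame({
    "difficulty": ["easy", "easy", "medium", "medium", "hard", "hard"],
    "accuracy":   [7, 6, 6, 5, 4, 3],
    "quality":    [5, 5, 4, 4, 3, 2],
})

# Median and quartile summary, analogous to "median accuracy 5,
# 75% of responses scoring 4 or above" in the abstract.
print(scores[["accuracy", "quality"]].describe(percentiles=[0.25, 0.5, 0.75]))

# Compare accuracy scores across the three difficulty levels.
groups = [g["accuracy"].values for _, g in scores.groupby("difficulty")]
stat, p = kruskal(*groups)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")

# Select responses scoring <=4 on accuracy for the error-analysis step.
error_analysis_set = scores[scores["accuracy"] <= 4]
print(error_analysis_set)
```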
Journal Introduction:
The Journal of Craniofacial Surgery serves as a forum of communication for all those involved in craniofacial surgery, maxillofacial surgery and pediatric plastic surgery. Coverage ranges from practical aspects of craniofacial surgery to the basic science that underlies surgical practice. The journal publishes original articles, scientific reviews, editorials and invited commentary, abstracts and selected articles from international journals, and occasional international bibliographies in craniofacial surgery.