Alexander Z Fazilat, Charlotte E Berry, Andrew Churukian, Christopher Lavin, Lionel Kameni, Camille Brenac, Silvio Podda, Karl Bruckman, Hermann P Lorenz, Rohit K Khosla, Derrick C Wan
Title: AI-based Cleft Lip and Palate Surgical Information is Preferred by Both Plastic Surgeons and Patients in a Blind Comparison.
Journal: Cleft Palate-Craniofacial Journal, pages 1542-1548
DOI: 10.1177/10556656241266368
Published: 2025-09-01 (Epub 2024-08-01)
Citations: 0
Abstract
Introduction: The application of artificial intelligence (AI) in healthcare has expanded in recent years, and tools such as ChatGPT for generating patient-facing information have garnered particular interest. Online cleft lip and palate (CL/P) surgical information supplied by academic/professional (A/P) sources was therefore evaluated against ChatGPT for accuracy, comprehensiveness, and clarity.

Methods: Eleven plastic and reconstructive surgeons and 29 non-medical individuals blindly compared responses written by ChatGPT or A/P sources to 30 frequently asked CL/P surgery questions. Surgeons indicated preference, determined accuracy, and scored comprehensiveness and clarity; non-medical individuals indicated preference only. Readability scores were calculated using seven readability formulas. Statistical comparisons of CL/P surgical online information were performed using paired t-tests.

Results: Surgeons blindly preferred ChatGPT-generated material over A/P sources 60.88% of the time, and consistently rated the ChatGPT-generated material as more comprehensive and clearer. No significant difference in accuracy was found between ChatGPT and resources provided by professional organizations. Among individuals with no medical background, ChatGPT-generated materials were preferred 60.46% of the time. For materials from both ChatGPT and A/P sources, readability scores exceeded the levels advised for patient proficiency across all seven readability formulas.

Conclusion: As ChatGPT-based language tools become more prominent in healthcare, their potential applications should be assessed by experts against existing high-quality sources. Our results indicate that ChatGPT can produce material that is accurate, comprehensive, and clear, and that is preferred by both plastic surgeons and individuals with no medical background.
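The abstract reports that readability was scored with seven readability formulas but does not list them. As an illustrative sketch only (not the authors' code), the following computes the Flesch-Kincaid grade level, one of the standard readability metrics commonly included in such panels, using a crude vowel-group syllable heuristic; production tools use pronunciation dictionaries instead.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups, drop a trailing silent "e".
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    # FK grade = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

# Hypothetical patient-facing sentence, not taken from the study materials.
sample = "The surgeon repairs the cleft lip. Healing takes several weeks."
grade = flesch_kincaid_grade(sample)
```

A grade above the commonly advised sixth-to-eighth-grade patient reading level would flag the text as too difficult, which is the kind of threshold comparison the study reports.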
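The statistical comparisons above use paired t-tests, which are appropriate here because each of the 30 questions yields one ChatGPT score and one A/P score. A minimal sketch of the paired t statistic, on hypothetical scores rather than the study's data:

```python
import math

def paired_t(scores_a, scores_b):
    # Paired t statistic: mean of per-item differences divided by its
    # standard error, using the sample (n-1) variance of the differences.
    d = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

# Hypothetical per-question clarity scores (illustrative values only).
chatgpt = [5, 4, 6, 5]
ap_source = [4, 4, 5, 3]
t_stat = paired_t(chatgpt, ap_source)
```

In practice one would use a library routine such as `scipy.stats.ttest_rel`, which also returns the p-value for the significance judgment the abstract reports.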
Journal introduction:
The Cleft Palate-Craniofacial Journal (CPCJ) is the premier peer-reviewed, interdisciplinary, international journal dedicated to current research on etiology, prevention, diagnosis, and treatment in all areas pertaining to craniofacial anomalies. CPCJ reports on basic science and clinical research aimed at better elucidating the pathogenesis, pathology, and optimal methods of treatment of cleft and craniofacial anomalies. The journal strives to foster communication and cooperation among professionals from all specialties.