Ali M Alsudays, Khaled A Almanea, Abdullah A Alhajlah, Ahmad Alroqi
{"title":"人工智能在鼻窦术后护理中的应用:ChatGPT-4、谷歌Gemini和DeepSeek在患者教育和支持方面的比较研究","authors":"Ali M Alsudays, Khaled A Almanea, Abdullah A Alhajlah, Ahmad Alroqi","doi":"10.1097/SCS.0000000000011922","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>Artificial intelligence (AI) integration into postoperative care has demonstrated significant potential in enhancing patient care and support. This review demonstrates different findings from various studies to evaluate AI's impact on improving postoperative care outcomes, with a specific focus on its application to Functional Endoscopic Sinus Surgery (FESS) in the literature. This study aimed to compare the performance of 3 different large language models in addressing postoperative sinus care questions. The focus is to determining their utility in patient education and support following FESS.</p><p><strong>Methodology: </strong>This cross-sectional study was conducted over a 3-month period. Ten standardized questions were adapted from 3 identified online sources (University of Michigan Health, Kevin Caceres, MD, and GhiamMD). Each question was presented to all 3 AI chatbots under identical conditions, generating a total of 30 AI-generated responses for evaluation. A new chat window was used for every question to ensure unbiased responses.</p><p><strong>Results: </strong>Findings suggest that the number of words (P=0.026), number of sentences (P<0.001), and number of characters per word (P=0.007) were significantly higher in DeepSeek, but DeepSeek showed significantly lower in the number of words per sentence (P<0.001). According to evaluators, ChatGPT-4 ratings were better regarding the clarity of responses, whereas DeepSeek ratings were better in completeness. However, Google Gemini performed the least among the AI Chatbots. Interestingly, reading difficulties in the responses from Google Gemini and DeepSeek were somewhat higher than ChatGPT-4.</p><p><strong>Conclusion: </strong>Both ChatGPT-4 and DeepSeek had comparable ratings on the response's accuracy, relevance, and usefulness. However, the number of words, sentences, and characters per word was significantly higher in DeepSeek. Interestingly, Google Gemini's ratings lagged behind both ChatGPT-4 and DeepSeek. Further investigations are required to determine which AI Chatbots offer the best responses in various clinical case scenarios, particularly in postoperative care in our region.</p>","PeriodicalId":15462,"journal":{"name":"Journal of Craniofacial Surgery","volume":" ","pages":""},"PeriodicalIF":1.0000,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Artificial Intelligence in Postoperative Sinus Care: A Comparative Study of ChatGPT-4, Google Gemini, and DeepSeek in Patient Education and Support.\",\"authors\":\"Ali M Alsudays, Khaled A Almanea, Abdullah A Alhajlah, Ahmad Alroqi\",\"doi\":\"10.1097/SCS.0000000000011922\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Introduction: </strong>Artificial intelligence (AI) integration into postoperative care has demonstrated significant potential in enhancing patient care and support. This review demonstrates different findings from various studies to evaluate AI's impact on improving postoperative care outcomes, with a specific focus on its application to Functional Endoscopic Sinus Surgery (FESS) in the literature. 
This study aimed to compare the performance of 3 different large language models in addressing postoperative sinus care questions. The focus is to determining their utility in patient education and support following FESS.</p><p><strong>Methodology: </strong>This cross-sectional study was conducted over a 3-month period. Ten standardized questions were adapted from 3 identified online sources (University of Michigan Health, Kevin Caceres, MD, and GhiamMD). Each question was presented to all 3 AI chatbots under identical conditions, generating a total of 30 AI-generated responses for evaluation. A new chat window was used for every question to ensure unbiased responses.</p><p><strong>Results: </strong>Findings suggest that the number of words (P=0.026), number of sentences (P<0.001), and number of characters per word (P=0.007) were significantly higher in DeepSeek, but DeepSeek showed significantly lower in the number of words per sentence (P<0.001). According to evaluators, ChatGPT-4 ratings were better regarding the clarity of responses, whereas DeepSeek ratings were better in completeness. However, Google Gemini performed the least among the AI Chatbots. Interestingly, reading difficulties in the responses from Google Gemini and DeepSeek were somewhat higher than ChatGPT-4.</p><p><strong>Conclusion: </strong>Both ChatGPT-4 and DeepSeek had comparable ratings on the response's accuracy, relevance, and usefulness. However, the number of words, sentences, and characters per word was significantly higher in DeepSeek. Interestingly, Google Gemini's ratings lagged behind both ChatGPT-4 and DeepSeek. Further investigations are required to determine which AI Chatbots offer the best responses in various clinical case scenarios, particularly in postoperative care in our region.</p>\",\"PeriodicalId\":15462,\"journal\":{\"name\":\"Journal of Craniofacial Surgery\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2025-09-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Craniofacial Surgery\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1097/SCS.0000000000011922\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"SURGERY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Craniofacial Surgery","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1097/SCS.0000000000011922","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"SURGERY","Score":null,"Total":0}
Artificial Intelligence in Postoperative Sinus Care: A Comparative Study of ChatGPT-4, Google Gemini, and DeepSeek in Patient Education and Support.
Introduction: Artificial intelligence (AI) integration into postoperative care has demonstrated significant potential in enhancing patient care and support. This work reviews findings from prior studies evaluating AI's impact on improving postoperative care outcomes, with a specific focus on its application to functional endoscopic sinus surgery (FESS) in the literature. This study aimed to compare the performance of 3 different large language models in addressing postoperative sinus care questions, with a focus on determining their utility in patient education and support following FESS.
Methodology: This cross-sectional study was conducted over a 3-month period. Ten standardized questions were adapted from 3 identified online sources (University of Michigan Health, Kevin Caceres, MD, and GhiamMD). Each question was presented to all 3 AI chatbots under identical conditions, yielding a total of 30 AI-generated responses for evaluation. A new chat window was used for every question to ensure unbiased responses.
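To make the collection design concrete, the following is a minimal Python sketch of how the 10 standardized questions could be posed to the 3 chatbots with a fresh session per question. The `query_chatbot` callable and the question placeholders are hypothetical illustrations, not part of the study's published methodology (the authors used new chat windows rather than any programmatic interface).

```python
# Minimal sketch of the response-collection design (illustrative only).
# query_chatbot is a hypothetical placeholder for whatever interface is used;
# the key point is one fresh session per question to avoid context carry-over.

from typing import Callable

QUESTIONS = [
    "<standardized question 1>",
    # ... placeholders for the remaining 9 standardized questions
]

CHATBOTS = ["ChatGPT-4", "Google Gemini", "DeepSeek"]

def collect_responses(query_chatbot: Callable[[str, str], str]) -> list[dict]:
    """Pose every question to every chatbot in a fresh session (10 x 3 = 30 responses)."""
    records = []
    for bot in CHATBOTS:
        for i, question in enumerate(QUESTIONS, start=1):
            answer = query_chatbot(bot, question)  # new session for each call
            records.append({"chatbot": bot, "question_id": i, "response": answer})
    return records
```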
Results: Findings suggest that the number of words (P=0.026), number of sentences (P<0.001), and number of characters per word (P=0.007) were significantly higher for DeepSeek, whereas DeepSeek produced significantly fewer words per sentence (P<0.001). According to evaluators, ChatGPT-4 was rated higher for clarity of responses, whereas DeepSeek was rated higher for completeness. Google Gemini performed the worst among the AI chatbots. Interestingly, the reading difficulty of responses from Google Gemini and DeepSeek was somewhat higher than that of ChatGPT-4.
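The quantitative text measures reported above (word count, sentence count, characters per word, words per sentence, reading difficulty) can be recomputed from plain response text. The sketch below shows one way to derive them and to compare the three chatbots; the study does not state its readability formula or statistical test, so the Flesch-Kincaid grade from the `textstat` package and the Kruskal-Wallis test from `scipy` are assumptions made for illustration.

```python
# Illustrative computation of the text metrics compared in the Results.
# The readability formula and statistical test are assumed, not taken from the study.

import re
import textstat                   # assumed choice for readability scoring
from scipy.stats import kruskal   # assumed non-parametric test across the 3 chatbots

def text_metrics(response: str) -> dict:
    words = re.findall(r"[A-Za-z']+", response)
    sentences = [s for s in re.split(r"[.!?]+", response) if s.strip()]
    n_words, n_sentences = len(words), len(sentences)
    return {
        "words": n_words,
        "sentences": n_sentences,
        "chars_per_word": sum(len(w) for w in words) / max(n_words, 1),
        "words_per_sentence": n_words / max(n_sentences, 1),
        "fk_grade": textstat.flesch_kincaid_grade(response),  # higher = harder to read
    }

def compare_word_counts(responses_by_bot: dict[str, list[str]]) -> float:
    """Kruskal-Wallis P-value for word counts across the three chatbots."""
    groups = [[text_metrics(r)["words"] for r in texts]
              for texts in responses_by_bot.values()]
    return kruskal(*groups).pvalue
```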
Conclusion: ChatGPT-4 and DeepSeek received comparable ratings for response accuracy, relevance, and usefulness. However, the number of words, sentences, and characters per word was significantly higher for DeepSeek. Interestingly, Google Gemini's ratings lagged behind both ChatGPT-4 and DeepSeek. Further investigations are required to determine which AI chatbots offer the best responses in various clinical case scenarios, particularly in postoperative care in our region.
Journal Introduction:
The Journal of Craniofacial Surgery serves as a forum of communication for all those involved in craniofacial surgery, maxillofacial surgery and pediatric plastic surgery. Coverage ranges from practical aspects of craniofacial surgery to the basic science that underlies surgical practice. The journal publishes original articles, scientific reviews, editorials and invited commentary, abstracts and selected articles from international journals, and occasional international bibliographies in craniofacial surgery.