Paolo Boscolo-Rizzo, Alberto Vito Marcuzzo, Chiara Lazzarin, Fabiola Giudici, Jerry Polesel, Marco Stellin, Andrea Pettorelli, Giacomo Spinato, Giancarlo Ottaviano, Marco Ferrari, Daniele Borsetto, Simone Zucchini, Franco Trabalzini, Egidio Sia, Nicoletta Gardenal, Roberto Baruca, Alfonso Fortunati, Luigi Angelo Vaira, Giancarlo Tirelli
{"title":"人工智能聊天机器人在头颈癌重建手术中提供的信息质量:ChatGPT4与Claude2的比较分析","authors":"Paolo Boscolo-Rizzo, Alberto Vito Marcuzzo, Chiara Lazzarin, Fabiola Giudici, Jerry Polesel, Marco Stellin, Andrea Pettorelli, Giacomo Spinato, Giancarlo Ottaviano, Marco Ferrari, Daniele Borsetto, Simone Zucchini, Franco Trabalzini, Egidio Sia, Nicoletta Gardenal, Roberto Baruca, Alfonso Fortunati, Luigi Angelo Vaira, Giancarlo Tirelli","doi":"10.1111/coa.14261","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Introduction</h3>\n \n <p>Artificial Intelligences (AIs) are changing the way information is accessed and consumed globally. This study aims to evaluate the information quality provided by AIs ChatGPT4 and Claude2 concerning reconstructive surgery for head and neck cancer.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>Thirty questions on reconstructive surgery for head and neck cancer were directed to both AIs and 16 head and neck surgeons assessed the responses using the QAMAI questionnaire. A 5-point Likert scale was used to assess accuracy, clarity, relevance, completeness, sources, and usefulness. Questions were categorised into those suitable for patients (group 1) and those for surgeons (group 2). AI responses were compared using <i>t</i>-Student and McNemar tests. Surgeon score agreement was measured with intraclass correlation coefficient, and readability was assessed with Flesch–Kincaid Grade Level (FKGL).</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>ChatGPT4 and Claude2 had similar overall mean scores of accuracy, clarity, relevance, completeness and usefulness, while Claude2 outperformed ChatGPT4 in sources (110.0 vs. 92.1, <i>p</i> < 0.001). Considering the group 2, Claude2 showed significantly lower accuracy and completeness scores compared to ChatGPT4 (<i>p</i> = 0.003 and <i>p</i> = 0.002, respectively). Regarding readability, ChatGPT4 presented lower complexity than Claude2 (FKGL mean score 4.57 vs. 
6.05, <i>p</i> < 0.001) requiring an easy-fairly easy English in 93% of cases.</p>\n </section>\n \n <section>\n \n <h3> Conclusion</h3>\n \n <p>Our findings indicate that neither chatbot exhibits a decisive superiority in all aspects. Nonetheless, ChatGPT4 demonstrates greater accuracy and comprehensiveness for specific types of questions and the simpler language used may aid patient inquiries. However, many evaluators disagree with chatbot information, highlighting that AI systems cannot serve as a substitute for advice from medical professionals.</p>\n </section>\n </div>","PeriodicalId":10431,"journal":{"name":"Clinical Otolaryngology","volume":"50 2","pages":"330-335"},"PeriodicalIF":1.7000,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/coa.14261","citationCount":"0","resultStr":"{\"title\":\"Quality of Information Provided by Artificial Intelligence Chatbots Surrounding the Reconstructive Surgery for Head and Neck Cancer: A Comparative Analysis Between ChatGPT4 and Claude2\",\"authors\":\"Paolo Boscolo-Rizzo, Alberto Vito Marcuzzo, Chiara Lazzarin, Fabiola Giudici, Jerry Polesel, Marco Stellin, Andrea Pettorelli, Giacomo Spinato, Giancarlo Ottaviano, Marco Ferrari, Daniele Borsetto, Simone Zucchini, Franco Trabalzini, Egidio Sia, Nicoletta Gardenal, Roberto Baruca, Alfonso Fortunati, Luigi Angelo Vaira, Giancarlo Tirelli\",\"doi\":\"10.1111/coa.14261\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <h3> Introduction</h3>\\n \\n <p>Artificial Intelligences (AIs) are changing the way information is accessed and consumed globally. 
This study aims to evaluate the information quality provided by AIs ChatGPT4 and Claude2 concerning reconstructive surgery for head and neck cancer.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Methods</h3>\\n \\n <p>Thirty questions on reconstructive surgery for head and neck cancer were directed to both AIs and 16 head and neck surgeons assessed the responses using the QAMAI questionnaire. A 5-point Likert scale was used to assess accuracy, clarity, relevance, completeness, sources, and usefulness. Questions were categorised into those suitable for patients (group 1) and those for surgeons (group 2). AI responses were compared using <i>t</i>-Student and McNemar tests. Surgeon score agreement was measured with intraclass correlation coefficient, and readability was assessed with Flesch–Kincaid Grade Level (FKGL).</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results</h3>\\n \\n <p>ChatGPT4 and Claude2 had similar overall mean scores of accuracy, clarity, relevance, completeness and usefulness, while Claude2 outperformed ChatGPT4 in sources (110.0 vs. 92.1, <i>p</i> < 0.001). Considering the group 2, Claude2 showed significantly lower accuracy and completeness scores compared to ChatGPT4 (<i>p</i> = 0.003 and <i>p</i> = 0.002, respectively). Regarding readability, ChatGPT4 presented lower complexity than Claude2 (FKGL mean score 4.57 vs. 6.05, <i>p</i> < 0.001) requiring an easy-fairly easy English in 93% of cases.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Conclusion</h3>\\n \\n <p>Our findings indicate that neither chatbot exhibits a decisive superiority in all aspects. Nonetheless, ChatGPT4 demonstrates greater accuracy and comprehensiveness for specific types of questions and the simpler language used may aid patient inquiries. 
However, many evaluators disagree with chatbot information, highlighting that AI systems cannot serve as a substitute for advice from medical professionals.</p>\\n </section>\\n </div>\",\"PeriodicalId\":10431,\"journal\":{\"name\":\"Clinical Otolaryngology\",\"volume\":\"50 2\",\"pages\":\"330-335\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2024-12-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/coa.14261\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Clinical Otolaryngology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/coa.14261\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"OTORHINOLARYNGOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Clinical Otolaryngology","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/coa.14261","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OTORHINOLARYNGOLOGY","Score":null,"Total":0}
引用次数: 0
Quality of Information Provided by Artificial Intelligence Chatbots Surrounding the Reconstructive Surgery for Head and Neck Cancer: A Comparative Analysis Between ChatGPT4 and Claude2
Introduction
Artificial intelligence (AI) chatbots are changing the way information is accessed and consumed globally. This study evaluates the quality of the information provided by the AI chatbots ChatGPT4 and Claude2 concerning reconstructive surgery for head and neck cancer.
Methods
Thirty questions on reconstructive surgery for head and neck cancer were posed to both chatbots, and 16 head and neck surgeons assessed the responses using the QAMAI questionnaire. A 5-point Likert scale was used to rate accuracy, clarity, relevance, completeness, sources, and usefulness. Questions were categorised into those suitable for patients (group 1) and those for surgeons (group 2). Chatbot responses were compared using Student's t-test and McNemar's test. Agreement among surgeons' scores was measured with the intraclass correlation coefficient, and readability was assessed with the Flesch–Kincaid Grade Level (FKGL).
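The paired comparisons described above can be sketched in code. The scores below are hypothetical, since the study's raw ratings are not public; the exact McNemar's test is implemented as a two-sided binomial test on the discordant pairs, which is equivalent for a 2×2 paired design.

```python
# Hedged sketch of the statistical comparison described above, using
# hypothetical per-question mean scores (illustrative numbers only).
import numpy as np
from scipy import stats

# Hypothetical mean QAMAI scores (1-5 Likert) for ten questions.
chatgpt4 = np.array([4.2, 3.8, 4.5, 4.9, 3.6, 4.1, 4.7, 3.9, 4.4, 4.0])
claude2  = np.array([4.0, 3.5, 4.6, 4.2, 3.9, 3.8, 4.5, 4.1, 4.3, 3.7])

# Paired Student's t-test on the per-question scores.
t_stat, t_p = stats.ttest_rel(chatgpt4, claude2)

# McNemar's test on a dichotomised outcome: treat a mean score >= 4 as
# an "acceptable" answer and compare the discordant pairs.
a_ok = chatgpt4 >= 4
b_ok = claude2 >= 4
b = int(np.sum(a_ok & ~b_ok))   # ChatGPT4 acceptable, Claude2 not
c = int(np.sum(~a_ok & b_ok))   # Claude2 acceptable, ChatGPT4 not
# Exact McNemar's test reduces to a two-sided binomial test (p = 0.5)
# on the discordant pairs.
mcnemar_p = stats.binomtest(b, b + c, 0.5).pvalue

print(f"paired t: t={t_stat:.2f}, p={t_p:.3f}; McNemar p={mcnemar_p:.3f}")
```

The dichotomisation threshold (score ≥ 4) is an assumption for illustration; the paper does not report how responses were dichotomised for McNemar's test.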
Results
ChatGPT4 and Claude2 achieved similar overall mean scores for accuracy, clarity, relevance, completeness and usefulness, while Claude2 outperformed ChatGPT4 on sources (110.0 vs. 92.1, p < 0.001). In group 2, Claude2 showed significantly lower accuracy and completeness scores than ChatGPT4 (p = 0.003 and p = 0.002, respectively). Regarding readability, ChatGPT4 produced less complex text than Claude2 (mean FKGL 4.57 vs. 6.05, p < 0.001), corresponding to easy or fairly easy English in 93% of cases.
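The FKGL scores compared above come from a standard published formula: 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. A minimal sketch follows; the syllable counter is a rough vowel-group heuristic, not the dictionary-based count a production readability tool would use.

```python
# Minimal Flesch-Kincaid Grade Level (FKGL) sketch. The syllable count
# is approximated by counting groups of consecutive vowels per word.
import re

def count_syllables(word: str) -> int:
    # Approximate syllables as runs of consecutive vowels (incl. y).
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fkgl(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

sample = "The flap is taken from the forearm. It is moved to the neck."
print(f"FKGL ~ {fkgl(sample):.2f}")
```

Short sentences with mostly monosyllabic words score low (early grade levels), which is how the chatbots' outputs would be graded as easy or fairly easy English.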
Conclusion
Our findings indicate that neither chatbot shows a decisive superiority in all aspects. Nonetheless, ChatGPT4 demonstrated greater accuracy and comprehensiveness for specific types of questions, and its simpler language may better serve patient inquiries. However, many evaluators disagreed with the chatbots' information, highlighting that AI systems cannot substitute for advice from medical professionals.
Journal overview:
Clinical Otolaryngology is a bimonthly journal devoted to clinically-oriented research papers of the highest scientific standards dealing with:
current otorhinolaryngological practice
audiology, otology, balance, rhinology, larynx, voice and paediatric ORL
head and neck oncology
head and neck plastic and reconstructive surgery
continuing medical education and ORL training
The emphasis is on high quality new work in the clinical field and on fresh, original research.
Each issue begins with an editorial expressing the personal opinions of an individual with a particular knowledge of a chosen subject. The main body of each issue is then devoted to original papers carrying important results for those working in the field. In addition, topical review articles are published discussing a particular subject in depth, including not only the opinions of the author but also any controversies surrounding the subject.
• Negative/null results
In order for research to advance, negative results, which often make a valuable contribution to the field, should be published. However, articles containing negative or null results are frequently not considered for publication or rejected by journals. We welcome papers of this kind, where appropriate and valid power calculations are included that give confidence that a negative result can be relied upon.