Assessing the quality of ChatGPT's responses to questions related to radiofrequency ablation for varicose veins.

Impact Factor: 2.8 · CAS Zone 2 (Medicine) · JCR Q2 (Peripheral Vascular Disease)
Muhammad Anees, Fareed Ahmed Shaikh, Hafsah Shaikh, Nadeem Ahmed Siddiqui, Zia Ur Rehman
{"title":"Assessing the quality of ChatGPT's responses to questions related to radiofrequency ablation for varicose veins.","authors":"Muhammad Anees, Fareed Ahmed Shaikh, Hafsah Shaikh, Nadeem Ahmed Siddiqui, Zia Ur Rehman","doi":"10.1016/j.jvsv.2024.101985","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>This study aimed to evaluate the accuracy and reproducibility of information provided by ChatGPT, in response to frequently asked questions about radiofrequency ablation (RFA) for varicose veins.</p><p><strong>Methods: </strong>This cross-sectional study was conducted at The Aga Khan University Hospital, Karachi, Pakistan. A set of 18 frequently asked questions regarding RFA for varicose veins were compiled from credible online sources and presented to ChatGPT twice, separately, using the new chat option. Twelve experienced vascular surgeons (with >2 years of experience and ≥20 RFA procedures performed annually) independently evaluated the accuracy of the responses using a 4-point Likert scale and assessed their reproducibility.</p><p><strong>Results: </strong>Most evaluators were males (n = 10/12 [83.3%]) with an average of 12.3 ± 6.2 years of experience as a vascular surgeon. Six evaluators (50%) were from the UK followed by three from Saudi Arabia (25.0%), two from Pakistan (16.7%), and one from the United States (8.3%). Among the 216 accuracy grades, most of the evaluators graded the responses as comprehensive (n = 87/216 [40.3%]) or accurate but insufficient (n = 70/216 [32.4%]), whereas only 17.1% (n = 37/216) were graded as a mixture of both accurate and inaccurate information and 10.8% (n = 22/216) as entirely inaccurate. Overall, 89.8% of the responses (n = 194/216) were deemed reproducible. Of the total responses, 70.4% (n = 152/216) were classified as good quality and reproducible. The remaining responses were poor quality with 19.4% reproducible (n = 42/216) and 10.2% nonreproducible (n = 22/216). There was nonsignificant inter-rater disagreement among the vascular surgeons for overall responses (Fleiss' kappa, -0.028; P = .131).</p><p><strong>Conclusions: </strong>ChatGPT provided generally accurate and reproducible information on RFA for varicose veins; however, variability in response quality and limited inter-rater reliability highlight the need for further improvements. Although it has the potential to enhance patient education and support healthcare decision-making, improvements in its training, validation, transparency, and mechanisms to address inaccurate or incomplete information are essential.</p>","PeriodicalId":17537,"journal":{"name":"Journal of vascular surgery. Venous and lymphatic disorders","volume":null,"pages":null},"PeriodicalIF":2.8000,"publicationDate":"2024-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of vascular surgery. Venous and lymphatic disorders","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1016/j.jvsv.2024.101985","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PERIPHERAL VASCULAR DISEASE","Score":null,"Total":0}
Citations: 0

Abstract

Objective: This study aimed to evaluate the accuracy and reproducibility of information provided by ChatGPT in response to frequently asked questions about radiofrequency ablation (RFA) for varicose veins.

Methods: This cross-sectional study was conducted at The Aga Khan University Hospital, Karachi, Pakistan. A set of 18 frequently asked questions regarding RFA for varicose veins was compiled from credible online sources and presented to ChatGPT twice, separately, each time using the new chat option. Twelve experienced vascular surgeons (with >2 years of experience and ≥20 RFA procedures performed annually) independently evaluated the accuracy of the responses using a 4-point Likert scale and assessed their reproducibility.
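As a rough illustration of this querying protocol, the sketch below sends each question in a fresh, history-free chat and repeats the full set twice so the paired answers can later be compared for reproducibility. It assumes the OpenAI Python client and uses a placeholder model name and illustrative questions; the abstract does not specify the exact interface, model version, or question wording used in the study.

```python
# Sketch of a two-pass query protocol: each FAQ is sent to ChatGPT in a fresh
# conversation (no shared history), and the whole set is run twice so the two
# answers per question can be compared for reproducibility.
# Assumes the OpenAI Python client (openai>=1.0); the model name is a
# placeholder, and the questions below are illustrative, not the study's list.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

faqs = [
    "What is radiofrequency ablation for varicose veins?",  # illustrative
    "How long does recovery take after the procedure?",     # illustrative
    # ... the study compiled 18 FAQs from credible online sources
]

def ask_in_new_chat(question: str) -> str:
    """Send a single question with no prior context, mimicking a 'new chat'."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Two independent passes over the same question set.
run_1 = {q: ask_in_new_chat(q) for q in faqs}
run_2 = {q: ask_in_new_chat(q) for q in faqs}
```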

Results: Most evaluators were male (n = 10/12 [83.3%]), with an average of 12.3 ± 6.2 years of experience as vascular surgeons. Six evaluators (50.0%) were from the UK, followed by three from Saudi Arabia (25.0%), two from Pakistan (16.7%), and one from the United States (8.3%). Among the 216 accuracy grades, most evaluators graded the responses as comprehensive (n = 87/216 [40.3%]) or accurate but insufficient (n = 70/216 [32.4%]), whereas only 17.1% (n = 37/216) were graded as a mixture of accurate and inaccurate information and 10.2% (n = 22/216) as entirely inaccurate. Overall, 89.8% of the responses (n = 194/216) were deemed reproducible. Of the total responses, 70.4% (n = 152/216) were classified as good quality and reproducible. The remaining responses were of poor quality, with 19.4% (n = 42/216) reproducible and 10.2% (n = 22/216) nonreproducible. Inter-rater disagreement among the vascular surgeons for the overall responses was not statistically significant (Fleiss' kappa, -0.028; P = .131).
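For context on the agreement statistic reported above, the following sketch shows how Fleiss' kappa could be computed over a 12-rater, 4-category grading table using statsmodels. The ratings here are randomly generated placeholders, not the study's data, and the significance test accompanying the published kappa is not reproduced.

```python
# Sketch of the inter-rater agreement calculation: Fleiss' kappa across
# 12 raters each grading 18 responses on a 4-point scale (216 grades total).
# The ratings below are synthetic placeholders, not the study's data.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
n_responses, n_raters = 18, 12                               # 18 questions x 12 surgeons
ratings = rng.integers(1, 5, size=(n_responses, n_raters))   # grades 1..4

# aggregate_raters converts the (subjects x raters) matrix into per-category
# counts, which is the input format fleiss_kappa expects.
counts, _categories = aggregate_raters(ratings)
kappa = fleiss_kappa(counts, method="fleiss")
print(f"Fleiss' kappa = {kappa:.3f}")  # near 0 for random grades, as here
```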

Conclusions: ChatGPT provided generally accurate and reproducible information on RFA for varicose veins; however, variability in response quality and limited inter-rater reliability highlight the need for further improvements. Although it has the potential to enhance patient education and support healthcare decision-making, improvements in its training, validation, transparency, and mechanisms to address inaccurate or incomplete information are essential.

Source Journal
Journal of vascular surgery. Venous and lymphatic disorders
Categories: Surgery; Peripheral Vascular Disease
CiteScore: 6.30
Self-citation rate: 18.80%
Articles published per year: 328
Review time: 71 days
Journal description: Journal of Vascular Surgery: Venous and Lymphatic Disorders is one of a series of specialist journals launched by the Journal of Vascular Surgery. It aims to be the premier international journal for the medical, endovascular, and surgical management of venous and lymphatic disorders. It publishes high-quality clinical, research, case report, technique, and practice manuscripts related to all aspects of venous and lymphatic disorders, including malformations and wound care, with an emphasis on the practicing clinician. The journal seeks to provide novel and timely information to vascular surgeons, interventionalists, phlebologists, wound care specialists, and allied health professionals who treat patients presenting with vascular and lymphatic disorders. As the official publication of The Society for Vascular Surgery and the American Venous Forum, the journal publishes, after peer review, selected papers presented at the annual meetings of these organizations and affiliated vascular societies, as well as original articles from members and non-members.