评估 "ChatGPT "与皮肤外科医生选择的莫氏手术缺损重建方法的比较。

IF 3.7 · CAS Medicine Tier 4 · JCR Q1 (Dermatology)
Adrian Cuellar-Barboza, Elizabeth Brussolo-Marroquín, Fanny C Cordero-Martinez, Patrizia E Aguilar-Calderon, Osvaldo Vazquez-Martinez, Jorge Ocampo-Candiani
{"title":"评估 \"ChatGPT \"与皮肤外科医生选择的莫氏手术缺损重建方法的比较。","authors":"Adrian Cuellar-Barboza, Elizabeth Brussolo-Marroquín, Fanny C Cordero-Martinez, Patrizia E Aguilar-Calderon, Osvaldo Vazquez-Martinez, Jorge Ocampo-Candiani","doi":"10.1093/ced/llae184","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>ChatGPT is an open-access chatbot developed using artificial intelligence (AI) that generates human-like responses.</p><p><strong>Objective: </strong>To evaluate the ChatGPT-4's concordance with three dermatological surgeons on reconstructions for dermatological surgical defects.</p><p><strong>Methods: </strong>The cases of 70 patients with nonmelanoma skin cancer treated with surgery were obtained from clinical records for analysis. A list of 30 reconstruction options was designed by the main authors that included primary closure, secondary skin closure, skin flaps and skin grafts. Three dermatological surgeons who were blinded to the real reconstruction, along with ChatGPT-4, were asked to select two reconstruction options from the list.</p><p><strong>Results: </strong>Seventy responses were analysed using Cohen's kappa, looking for concordance between each dermatologist and ChatGPT. The level of agreement among dermatological surgeons was higher compared with that between dermatological surgeons and ChatGPT, highlighting differences in decision making. In the selection of the best reconstruction technique, the results indicated a fair level of agreement among the dermatologists, ranging between κ 0.268 and 0.331. However, the concordance between ChatGPT-4 and the dermatologists was slight, with κ values ranging from 0.107 to 0.121. In the analysis of the second-choice options, the dermatologists showed only slight agreement. In contrast, the level of concordance between ChatGPT-4 and the dermatologists was below chance.</p><p><strong>Conclusions: </strong>As anticipated, this study reveals variability in medical decisions between dermatological surgeons and ChatGPT. Although these tools offer exciting possibilities for the future, it is vital to acknowledge the risk of inadvertently relying on noncertified AI for medical advice.</p>","PeriodicalId":10324,"journal":{"name":"Clinical and Experimental Dermatology","volume":" ","pages":"1367-1371"},"PeriodicalIF":3.7000,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An evaluation of ChatGPT compared with dermatological surgeons' choices of reconstruction for surgical defects after Mohs surgery.\",\"authors\":\"Adrian Cuellar-Barboza, Elizabeth Brussolo-Marroquín, Fanny C Cordero-Martinez, Patrizia E Aguilar-Calderon, Osvaldo Vazquez-Martinez, Jorge Ocampo-Candiani\",\"doi\":\"10.1093/ced/llae184\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>ChatGPT is an open-access chatbot developed using artificial intelligence (AI) that generates human-like responses.</p><p><strong>Objective: </strong>To evaluate the ChatGPT-4's concordance with three dermatological surgeons on reconstructions for dermatological surgical defects.</p><p><strong>Methods: </strong>The cases of 70 patients with nonmelanoma skin cancer treated with surgery were obtained from clinical records for analysis. A list of 30 reconstruction options was designed by the main authors that included primary closure, secondary skin closure, skin flaps and skin grafts. 
Three dermatological surgeons who were blinded to the real reconstruction, along with ChatGPT-4, were asked to select two reconstruction options from the list.</p><p><strong>Results: </strong>Seventy responses were analysed using Cohen's kappa, looking for concordance between each dermatologist and ChatGPT. The level of agreement among dermatological surgeons was higher compared with that between dermatological surgeons and ChatGPT, highlighting differences in decision making. In the selection of the best reconstruction technique, the results indicated a fair level of agreement among the dermatologists, ranging between κ 0.268 and 0.331. However, the concordance between ChatGPT-4 and the dermatologists was slight, with κ values ranging from 0.107 to 0.121. In the analysis of the second-choice options, the dermatologists showed only slight agreement. In contrast, the level of concordance between ChatGPT-4 and the dermatologists was below chance.</p><p><strong>Conclusions: </strong>As anticipated, this study reveals variability in medical decisions between dermatological surgeons and ChatGPT. Although these tools offer exciting possibilities for the future, it is vital to acknowledge the risk of inadvertently relying on noncertified AI for medical advice.</p>\",\"PeriodicalId\":10324,\"journal\":{\"name\":\"Clinical and Experimental Dermatology\",\"volume\":\" \",\"pages\":\"1367-1371\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2024-10-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Clinical and Experimental Dermatology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1093/ced/llae184\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"DERMATOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Clinical and Experimental Dermatology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1093/ced/llae184","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"DERMATOLOGY","Score":null,"Total":0}
Citations: 0

Abstract


Background: ChatGPT (OpenAI; CA, USA) is an open-access chatbot developed using artificial intelligence (AI) that generates human-like responses.

Objective: To evaluate ChatGPT-4's concordance with three dermatological surgeons on reconstruction choices for dermatological surgical defects.

Methods: The cases of 70 patients with nonmelanoma skin cancer treated with surgery were obtained from clinical records for analysis. The main authors designed a list of 30 reconstruction options, including primary closure, secondary skin closure, skin flaps and skin grafts. Three dermatological surgeons, blinded to the reconstruction actually performed, along with ChatGPT-4, were asked to select two reconstruction options from the list.

Results: Seventy responses were analysed using Cohen's kappa to assess concordance between each dermatologist and ChatGPT. Agreement among the dermatological surgeons was higher than agreement between the surgeons and ChatGPT, highlighting differences in decision making. For the first-choice reconstruction technique, the dermatologists showed a fair level of agreement with one another (κ 0.268-0.331), whereas the concordance between ChatGPT-4 and the dermatologists was only slight (κ 0.107-0.121). For the second-choice options, the dermatologists showed only slight agreement with one another, and the concordance between ChatGPT-4 and the dermatologists was below chance.
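For reference, Cohen's kappa measures agreement between two raters after correcting for the agreement expected by chance: κ = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e the chance-expected proportion. The qualitative labels above follow the conventional Landis-Koch benchmarks, under which κ of 0.00-0.20 is "slight" and 0.21-0.40 is "fair". A minimal sketch in Python of the kind of pairwise computation the study describes; the rater selections and option labels below are illustrative placeholders, not the study's data:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical first-choice selections by two raters over the same five
# cases; the option labels stand in for the study's 30-item list.
surgeon = ["primary_closure", "flap", "graft", "flap", "primary_closure"]
chatgpt = ["primary_closure", "graft", "graft", "flap", "flap"]

# kappa = (p_o - p_e) / (1 - p_e): observed agreement corrected for the
# agreement expected by chance given each rater's label frequencies.
kappa = cohen_kappa_score(surgeon, chatgpt)
print(f"kappa = {kappa:.3f}")  # ~0 is chance-level; <0 is below chance
```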

Conclusions: As anticipated, this study reveals variability in medical decisions between dermatological surgeons and ChatGPT. Although these tools offer exciting possibilities for the future, it is vital to acknowledge the risk of inadvertently relying on noncertified AI for medical advice.

Source journal: Clinical and Experimental Dermatology
CiteScore: 3.20 · Self-citation rate: 2.40% · Annual articles: 389 · Review time: 3-8 weeks
Journal description: Clinical and Experimental Dermatology (CED) is a unique provider of relevant and educational material for practising clinicians and dermatological researchers. We support continuing professional development (CPD) of dermatology specialists to advance the understanding, management and treatment of skin disease in order to improve patient outcomes.