ChatGPT and frequently asked patient questions for upper eyelid blepharoplasty surgery.

IF 0.9 Q4 OPHTHALMOLOGY
Arjun Watane, Brittany M Perzia, Madison E Weiss, Andrea A Tooley, Emily Li, Larissa A Habib, Phillip A Tenzel, Michelle M Maeng
{"title":"对患者进行上眼睑整形手术的常见问题进行了探讨。","authors":"Arjun Watane, Brittany M Perzia, Madison E Weiss, Andrea A Tooley, Emily Li, Larissa A Habib, Phillip A Tenzel, Michelle M Maeng","doi":"10.1080/01676830.2024.2435930","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>Online health information seekers may access information produced by artificial intelligence language models such as ChatGPT (OpenAI). The medical field may pose a significant challenge for incorporating these applications given the training and experience needed to master clinical reasoning. The objective was to evaluate the performance of ChatGPT responses compared to human oculofacial plastic surgeon (OPS) responses to FAQs about an upper eyelid blepharoplasty procedure.</p><p><strong>Methods: </strong>A cross-sectional survey was conducted. Three OPS trained by the American Society of Ophthalmic Plastic and Reconstructive Surgery (ASOPRS) and three ChatGPT instances each answered 6 frequently asked questions (FAQs) about an upper eyelid blepharoplasty procedure. Two blinded ASOPRS-trained OPS evaluated each response for their accuracy, comprehensiveness, and personal answer similarity based on a Likert scale (1=strongly disagree; 5=strongly agree).</p><p><strong>Results: </strong>ChatGPT achieved a mean Likert scale score of 3.8 (SD 0.9) in accuracy, 3.6 (SD 1.1) in comprehensiveness, and 3.2 (SD 1.1) in personal answer similarity. In comparison, OPS achieved a mean score of 3.6 (SD 1.2) in accuracy (<i>p</i> = .72), 3.0 (SD 1.1) in comprehensiveness (<i>p</i> = .03), and 2.9 (SD 1.1) in personal answer similarity (<i>p</i> = .66).</p><p><strong>Conclusions: </strong>ChatGPT was non-inferior to OPS in answering upper eyelid blepharoplasty FAQs. Compared to OPS, ChatGPT achieved better comprehensiveness ratings and non-inferior accuracy and personal answer similarity ratings. This study poses the potential for ChatGPT to serve as an adjunct to OPS for patient education but not a replacement. However, safeguards to protect patients from possible harm must be implemented.</p>","PeriodicalId":47421,"journal":{"name":"Orbit-The International Journal on Orbital Disorders-Oculoplastic and Lacrimal Surgery","volume":" ","pages":"1-4"},"PeriodicalIF":0.9000,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ChatGPT and frequently asked patient questions for upper eyelid blepharoplasty surgery.\",\"authors\":\"Arjun Watane, Brittany M Perzia, Madison E Weiss, Andrea A Tooley, Emily Li, Larissa A Habib, Phillip A Tenzel, Michelle M Maeng\",\"doi\":\"10.1080/01676830.2024.2435930\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>Online health information seekers may access information produced by artificial intelligence language models such as ChatGPT (OpenAI). The medical field may pose a significant challenge for incorporating these applications given the training and experience needed to master clinical reasoning. The objective was to evaluate the performance of ChatGPT responses compared to human oculofacial plastic surgeon (OPS) responses to FAQs about an upper eyelid blepharoplasty procedure.</p><p><strong>Methods: </strong>A cross-sectional survey was conducted. 
Three OPS trained by the American Society of Ophthalmic Plastic and Reconstructive Surgery (ASOPRS) and three ChatGPT instances each answered 6 frequently asked questions (FAQs) about an upper eyelid blepharoplasty procedure. Two blinded ASOPRS-trained OPS evaluated each response for their accuracy, comprehensiveness, and personal answer similarity based on a Likert scale (1=strongly disagree; 5=strongly agree).</p><p><strong>Results: </strong>ChatGPT achieved a mean Likert scale score of 3.8 (SD 0.9) in accuracy, 3.6 (SD 1.1) in comprehensiveness, and 3.2 (SD 1.1) in personal answer similarity. In comparison, OPS achieved a mean score of 3.6 (SD 1.2) in accuracy (<i>p</i> = .72), 3.0 (SD 1.1) in comprehensiveness (<i>p</i> = .03), and 2.9 (SD 1.1) in personal answer similarity (<i>p</i> = .66).</p><p><strong>Conclusions: </strong>ChatGPT was non-inferior to OPS in answering upper eyelid blepharoplasty FAQs. Compared to OPS, ChatGPT achieved better comprehensiveness ratings and non-inferior accuracy and personal answer similarity ratings. This study poses the potential for ChatGPT to serve as an adjunct to OPS for patient education but not a replacement. However, safeguards to protect patients from possible harm must be implemented.</p>\",\"PeriodicalId\":47421,\"journal\":{\"name\":\"Orbit-The International Journal on Orbital Disorders-Oculoplastic and Lacrimal Surgery\",\"volume\":\" \",\"pages\":\"1-4\"},\"PeriodicalIF\":0.9000,\"publicationDate\":\"2024-12-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Orbit-The International Journal on Orbital Disorders-Oculoplastic and Lacrimal Surgery\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/01676830.2024.2435930\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"OPHTHALMOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Orbit-The International Journal on Orbital Disorders-Oculoplastic and Lacrimal Surgery","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/01676830.2024.2435930","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
Citations: 0

Abstract


Purpose: Online health information seekers may access information produced by artificial intelligence language models such as ChatGPT (OpenAI). Incorporating these applications into medicine may be especially challenging, given the training and experience needed to master clinical reasoning. The objective was to compare ChatGPT responses with human oculofacial plastic surgeon (OPS) responses to FAQs about an upper eyelid blepharoplasty procedure.

Methods: A cross-sectional survey was conducted. Three OPS trained by the American Society of Ophthalmic Plastic and Reconstructive Surgery (ASOPRS) and three ChatGPT instances each answered 6 frequently asked questions (FAQs) about an upper eyelid blepharoplasty procedure. Two blinded ASOPRS-trained OPS evaluated each response for accuracy, comprehensiveness, and personal answer similarity on a Likert scale (1 = strongly disagree; 5 = strongly agree).

Results: ChatGPT achieved a mean Likert scale score of 3.8 (SD 0.9) in accuracy, 3.6 (SD 1.1) in comprehensiveness, and 3.2 (SD 1.1) in personal answer similarity. In comparison, OPS achieved a mean score of 3.6 (SD 1.2) in accuracy (p = .72), 3.0 (SD 1.1) in comprehensiveness (p = .03), and 2.9 (SD 1.1) in personal answer similarity (p = .66).
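The abstract reports group means, standard deviations, and p-values for the Likert ratings but does not state which statistical test the authors used. The Python sketch below is illustrative only: it uses hypothetical rating data and a Mann-Whitney U test (a common choice for ordinal Likert data); none of the numbers, group sizes, or test choices are taken from the study itself.

```python
# Illustrative sketch only: hypothetical Likert ratings, not the study data.
# The abstract does not name the statistical test; Mann-Whitney U is one
# common option for comparing ordinal ratings between two independent groups.
import numpy as np
from scipy import stats

# Hypothetical blinded grader ratings (1 = strongly disagree ... 5 = strongly agree)
chatgpt_accuracy = np.array([4, 3, 5, 4, 3, 4, 4, 3, 5, 4, 3, 4])
ops_accuracy = np.array([3, 4, 2, 5, 4, 3, 4, 3, 5, 2, 4, 4])

def summarize(name, ratings):
    """Report mean and sample standard deviation in the abstract's 'mean (SD)' style."""
    print(f"{name}: {ratings.mean():.1f} (SD {ratings.std(ddof=1):.1f})")

summarize("ChatGPT accuracy", chatgpt_accuracy)
summarize("OPS accuracy", ops_accuracy)

# Two-sided comparison of the two rating distributions
u_stat, p_value = stats.mannwhitneyu(chatgpt_accuracy, ops_accuracy, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.2f}")
```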

Conclusions: ChatGPT was non-inferior to OPS in answering upper eyelid blepharoplasty FAQs. Compared with OPS, ChatGPT achieved higher comprehensiveness ratings and non-inferior accuracy and personal answer similarity ratings. These findings suggest that ChatGPT could serve as an adjunct to OPS for patient education, but not a replacement; safeguards to protect patients from possible harm must be implemented.

Source journal metrics: CiteScore 2.40 · Self-citation rate 9.10% · Articles published: 136
Journal description: Orbit is the international medium covering developments and results from the variety of medical disciplines that overlap and converge in the field of orbital disorders: ophthalmology, otolaryngology, reconstructive and maxillofacial surgery, medicine and endocrinology, radiology, radiotherapy and oncology, neurology, neuroophthalmology and neurosurgery, pathology and immunology, haematology.