ChatGPT as a Source for Patient Information on Patellofemoral Surgery-A Comparative Study Amongst Laymen, Doctors, and Experts.

IF 1.7 Q2 MEDICINE, GENERAL & INTERNAL
Andreas Frodl, Andreas Fuchs, Tayfun Yilmaz, Kaywan Izadpanah, Hagen Schmal, Markus Siegel
Clinics and Practice, vol. 14, no. 6, pp. 2376-2384. Published 2024-11-05. DOI: 10.3390/clinpract14060186
Cited by: 0

Abstract

Introduction: In November 2022, OpenAI launched ChatGPT for public use through a free online platform. ChatGPT is an artificial intelligence (AI) chatbot trained on a broad dataset encompassing a wide range of topics, including medical literature. Its usability in the medical field and the quality of its AI-generated responses are widely discussed and are the subject of current investigations. Patellofemoral pain is one of the most common conditions among young adults, often prompting patients to seek advice. This study examines the quality of ChatGPT as a source of information regarding patellofemoral conditions and surgery, hypothesizing that there will be differences in the evaluation of responses generated by ChatGPT between populations with different levels of expertise in patellofemoral disorders.

Methods: A comparison was conducted between laymen, doctors (non-orthopedic), and experts in patellofemoral disorders based on a list of 12 questions. These questions were divided into descriptive and recommendatory categories, with each category further split into basic and advanced content. Questions were used to prompt ChatGPT in April 2024 using the ChatGPT 4.0 engine, and answers were evaluated using a custom tool inspired by the Ensuring Quality Information for Patients (EQIP) instrument. Evaluations were performed independently by laymen, non-orthopedic doctors, and experts, with the results statistically analyzed using a Mann-Whitney U Test. A p-value of less than 0.05 was considered statistically significant.
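The Mann-Whitney U comparison described in the Methods can be sketched in a few lines of pure Python. The rating lists below are hypothetical placeholders, not the study's data; the function computes only the U statistic, which would then be compared against tabulated critical values at the chosen significance level.

```python
# Minimal sketch of the Mann-Whitney U statistic used in the Methods.
# The rating lists are hypothetical placeholders, NOT the study's data.

def mann_whitney_u(a, b):
    """Return min(U1, U2) for two independent samples, averaging tied ranks."""
    pooled = sorted(a + b)
    rank_of = {}                      # value -> average 1-based rank
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1                    # j is one past the last tied value
        rank_of[pooled[i]] = (i + 1 + j) / 2   # average of ranks i+1 .. j
        i = j
    r1 = sum(rank_of[v] for v in a)   # rank sum of the first sample
    u1 = r1 - len(a) * (len(a) + 1) / 2
    u2 = len(a) * len(b) - u1
    return min(u1, u2)

# Hypothetical expert vs. non-expert rating totals:
experts = [24, 31, 28, 34]
non_experts = [30, 33, 35, 36, 38, 41]
print(mann_whitney_u(experts, non_experts))
```

In practice a library routine such as `scipy.stats.mannwhitneyu` would be used instead, since it also returns the p-value directly.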

Results: The study included data from seventeen participants: four experts in patellofemoral disorders, seven non-orthopedic doctors, and six laymen. Experts rated the answers lower on average compared to non-experts. Significant differences were observed in the ratings of descriptive answers with increasing complexity. The average score for experts was 29.3 ± 5.8, whereas non-experts averaged 35.3 ± 5.7. For recommendatory answers, experts also gave lower ratings, particularly for more complex questions.

Conclusion: ChatGPT provides good quality answers to questions concerning patellofemoral disorders, although questions with higher complexity were rated lower by patellofemoral experts compared to non-experts. This study emphasizes the potential of ChatGPT as a complementary tool for patient information on patellofemoral disorders, although the quality of the answers fluctuates with the complexity of the questions, which might not be recognized by non-experts. The lack of personalized recommendations and the problem of "AI hallucinations" remain a challenge. Human expertise and judgement, especially from trained healthcare experts, remain irreplaceable.

Source journal: Clinics and Practice (MEDICINE, GENERAL & INTERNAL)

CiteScore: 2.60 · Self-citation rate: 4.30% · Articles per year: 91 · Review time: 10 weeks