Evaluating the role of LLMs in supporting patient education during the informed consent process for routine radiology procedures.

IF 3.4 | CAS Tier 4 (Medicine) | JCR Q3 | RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Eric Einspänner, Roland Schwab, Sebastian Hupfeld, Maximilian Thormann, Erelle Fuchs, Matthias Gawlitza, Jan Borggrefe, Daniel Behme
{"title":"Evaluating the role of LLMs in supporting patient education during the informed consent process for routine radiology procedures.","authors":"Eric Einspänner, Roland Schwab, Sebastian Hupfeld, Maximilian Thormann, Erelle Fuchs, Matthias Gawlitza, Jan Borggrefe, Daniel Behme","doi":"10.1093/bjr/tqaf225","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>This study evaluated three LLM chatbots (GPT-3.5-turbo, GPT-4-turbo, and GPT-4o) on their effectiveness in supporting patient education by answering common patient questions for CT, MRI, and DSA informed consent, assessing their accuracy and clarity.</p><p><strong>Methods: </strong>Two radiologists formulated 90 questions categorized as general, clinical, or technical. Each LLM answered every question five times. Radiologists then rated the responses for medical accuracy and clarity, while medical physicists assessed technical accuracy using a Likert scale. semantic similarity was analyzed with SBERT and cosine similarity.</p><p><strong>Results: </strong>Ratings improved with newer model versions. Linear mixed-effects models revealed that GPT-4 models were rated significantly higher than GPT-3.5 (p < 0.001) by both physicians and physicists. However, physicians' ratings for GPT-4 models showed a significant performance decrease for complex modalities like DSA and MRI (p < 0.01), a pattern not observed in physicists' ratings. SBERT analysis revealed high internal consistency across all models. SBERT analysis revealed high internal consistency across all models.</p><p><strong>Conclusion: </strong>Variability in ratings revealed that while models effectively handled general and technical questions, they struggled with contextually complex medical inquiries requiring personalized responses and nuanced understanding. Statistical analysis confirms that while newer models are superior, their performance is modality-dependent and perceived differently by clinical and technical experts.</p><p><strong>Advances in knowledge: </strong>This study evaluates the potential of LLMs to enhance informed consent in radiology, highlighting strengths in general and technical questions while noting limitations with complex clinical inquiries, with performance varying significantly by model type and imaging modality.</p>","PeriodicalId":9306,"journal":{"name":"British Journal of Radiology","volume":" ","pages":""},"PeriodicalIF":3.4000,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"British Journal of Radiology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1093/bjr/tqaf225","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract

Objectives: This study evaluated three LLM chatbots (GPT-3.5-turbo, GPT-4-turbo, and GPT-4o) on their effectiveness in supporting patient education by answering common patient questions for CT, MRI, and DSA informed consent, and assessed the accuracy and clarity of their answers.

Methods: Two radiologists formulated 90 questions categorized as general, clinical, or technical. Each LLM answered every question five times. Radiologists then rated the responses for medical accuracy and clarity, while medical physicists assessed technical accuracy using a Likert scale. Semantic similarity was analyzed with SBERT and cosine similarity.
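
A minimal sketch of how such an SBERT-based consistency check could be computed, assuming the sentence-transformers library; the checkpoint name and the example answers are illustrative, not taken from the study:

```python
from sentence_transformers import SentenceTransformer, util
import torch

# Any SBERT checkpoint can serve here; this one is a common lightweight choice.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical repeated answers from one LLM to the same consent question
responses = [
    "An MRI scan uses strong magnetic fields and radio waves instead of X-rays.",
    "MRI relies on magnetic fields and radio-frequency pulses rather than ionizing radiation.",
    "Unlike CT, an MRI examination does not expose you to ionizing radiation.",
]

# Embed each response, then compute the pairwise cosine-similarity matrix
embeddings = encoder.encode(responses, convert_to_tensor=True)
similarity = util.cos_sim(embeddings, embeddings)

# Average the off-diagonal entries as a simple internal-consistency score
n = similarity.shape[0]
off_diagonal = similarity[~torch.eye(n, dtype=torch.bool)]
print(f"Mean pairwise cosine similarity: {off_diagonal.mean().item():.3f}")
```

A score near 1.0 would indicate that repeated answers to the same question are semantically stable, which is one plausible reading of the "internal consistency" reported in the Results.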

Results: Ratings improved with newer model versions. Linear mixed-effects models revealed that GPT-4 models were rated significantly higher than GPT-3.5 (p < 0.001) by both physicians and physicists. However, physicians' ratings for GPT-4 models showed a significant performance decrease for complex modalities like DSA and MRI (p < 0.01), a pattern not observed in physicists' ratings. SBERT analysis revealed high internal consistency across all models.
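
A minimal sketch of the kind of linear mixed-effects analysis described above, assuming pandas and statsmodels; the formula, the random-intercept-per-question structure, and the synthetic data are assumptions for illustration, not the study's exact specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Hypothetical long-format ratings: one row per rated LLM response
rows = [
    {"model": m, "modality": mod, "question": f"Q{q}",
     "rating": float(rng.integers(1, 6))}  # Likert score 1-5 (random placeholder)
    for m in ("GPT-3.5", "GPT-4-turbo", "GPT-4o")
    for mod in ("CT", "MRI", "DSA")
    for q in range(1, 11)
]
df = pd.DataFrame(rows)

# Fixed effects for model and modality; a random intercept for each question
mixed = smf.mixedlm("rating ~ model + modality", df, groups=df["question"])
print(mixed.fit().summary())
```

The summary table from such a fit would report coefficients and p-values for the model and modality contrasts, which is the form the GPT-4 vs GPT-3.5 and DSA/MRI comparisons above would take.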

Conclusion: Variability in ratings revealed that while models effectively handled general and technical questions, they struggled with contextually complex medical inquiries requiring personalized responses and nuanced understanding. Statistical analysis confirmed that while newer models are superior, their performance is modality-dependent and perceived differently by clinical and technical experts.

Advances in knowledge: This study evaluates the potential of LLMs to enhance informed consent in radiology, highlighting strengths in general and technical questions while noting limitations with complex clinical inquiries, with performance varying significantly by model type and imaging modality.

Source Journal
British Journal of Radiology (Medicine – Nuclear Medicine)
CiteScore: 5.30
Self-citation rate: 3.80%
Annual articles: 330
Review turnaround: 2-4 weeks
About the journal: BJR is the international research journal of the British Institute of Radiology and is the oldest scientific journal in the field of radiology and related sciences. Dating back to 1896, BJR's history is radiology's history, and the journal has featured landmark papers such as the first description of computed tomography, "Computerized transverse axial tomography" by Godfrey Hounsfield in 1973. A valuable historical resource, the complete BJR archive has been digitized from 1896.
Quick Facts:
- 2015 Impact Factor – 1.840
- Receipt to first decision – average of 6 weeks
- Acceptance to online publication – average of 3 weeks
- ISSN: 0007-1285
- eISSN: 1748-880X
- Open Access option