Eric Einspänner, Roland Schwab, Sebastian Hupfeld, Maximilian Thormann, Erelle Fuchs, Matthias Gawlitza, Jan Borggrefe, Daniel Behme
{"title":"评估法学硕士在常规放射程序知情同意过程中支持患者教育的作用。","authors":"Eric Einspänner, Roland Schwab, Sebastian Hupfeld, Maximilian Thormann, Erelle Fuchs, Matthias Gawlitza, Jan Borggrefe, Daniel Behme","doi":"10.1093/bjr/tqaf225","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>This study evaluated three LLM chatbots (GPT-3.5-turbo, GPT-4-turbo, and GPT-4o) on their effectiveness in supporting patient education by answering common patient questions for CT, MRI, and DSA informed consent, assessing their accuracy and clarity.</p><p><strong>Methods: </strong>Two radiologists formulated 90 questions categorized as general, clinical, or technical. Each LLM answered every question five times. Radiologists then rated the responses for medical accuracy and clarity, while medical physicists assessed technical accuracy using a Likert scale. semantic similarity was analyzed with SBERT and cosine similarity.</p><p><strong>Results: </strong>Ratings improved with newer model versions. Linear mixed-effects models revealed that GPT-4 models were rated significantly higher than GPT-3.5 (p < 0.001) by both physicians and physicists. However, physicians' ratings for GPT-4 models showed a significant performance decrease for complex modalities like DSA and MRI (p < 0.01), a pattern not observed in physicists' ratings. SBERT analysis revealed high internal consistency across all models. SBERT analysis revealed high internal consistency across all models.</p><p><strong>Conclusion: </strong>Variability in ratings revealed that while models effectively handled general and technical questions, they struggled with contextually complex medical inquiries requiring personalized responses and nuanced understanding. Statistical analysis confirms that while newer models are superior, their performance is modality-dependent and perceived differently by clinical and technical experts.</p><p><strong>Advances in knowledge: </strong>This study evaluates the potential of LLMs to enhance informed consent in radiology, highlighting strengths in general and technical questions while noting limitations with complex clinical inquiries, with performance varying significantly by model type and imaging modality.</p>","PeriodicalId":9306,"journal":{"name":"British Journal of Radiology","volume":" ","pages":""},"PeriodicalIF":3.4000,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evaluating the role of LLMs in supporting patient education during the informed consent process for routine radiology procedures.\",\"authors\":\"Eric Einspänner, Roland Schwab, Sebastian Hupfeld, Maximilian Thormann, Erelle Fuchs, Matthias Gawlitza, Jan Borggrefe, Daniel Behme\",\"doi\":\"10.1093/bjr/tqaf225\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objectives: </strong>This study evaluated three LLM chatbots (GPT-3.5-turbo, GPT-4-turbo, and GPT-4o) on their effectiveness in supporting patient education by answering common patient questions for CT, MRI, and DSA informed consent, assessing their accuracy and clarity.</p><p><strong>Methods: </strong>Two radiologists formulated 90 questions categorized as general, clinical, or technical. Each LLM answered every question five times. Radiologists then rated the responses for medical accuracy and clarity, while medical physicists assessed technical accuracy using a Likert scale. 
semantic similarity was analyzed with SBERT and cosine similarity.</p><p><strong>Results: </strong>Ratings improved with newer model versions. Linear mixed-effects models revealed that GPT-4 models were rated significantly higher than GPT-3.5 (p < 0.001) by both physicians and physicists. However, physicians' ratings for GPT-4 models showed a significant performance decrease for complex modalities like DSA and MRI (p < 0.01), a pattern not observed in physicists' ratings. SBERT analysis revealed high internal consistency across all models. SBERT analysis revealed high internal consistency across all models.</p><p><strong>Conclusion: </strong>Variability in ratings revealed that while models effectively handled general and technical questions, they struggled with contextually complex medical inquiries requiring personalized responses and nuanced understanding. Statistical analysis confirms that while newer models are superior, their performance is modality-dependent and perceived differently by clinical and technical experts.</p><p><strong>Advances in knowledge: </strong>This study evaluates the potential of LLMs to enhance informed consent in radiology, highlighting strengths in general and technical questions while noting limitations with complex clinical inquiries, with performance varying significantly by model type and imaging modality.</p>\",\"PeriodicalId\":9306,\"journal\":{\"name\":\"British Journal of Radiology\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2025-09-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"British Journal of Radiology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1093/bjr/tqaf225\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"British Journal of Radiology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1093/bjr/tqaf225","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Evaluating the role of LLMs in supporting patient education during the informed consent process for routine radiology procedures.
Objectives: This study evaluated the effectiveness of three LLM chatbots (GPT-3.5-turbo, GPT-4-turbo, and GPT-4o) in supporting patient education by answering common patient questions about CT, MRI, and DSA informed consent, assessing the accuracy and clarity of their responses.
Methods: Two radiologists formulated 90 questions categorized as general, clinical, or technical. Each LLM answered every question five times. Radiologists then rated the responses for medical accuracy and clarity, while medical physicists assessed technical accuracy using a Likert scale. Semantic similarity across the repeated responses was analyzed with SBERT embeddings and cosine similarity.
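A minimal sketch of how such an SBERT-based consistency check could be run, assuming the sentence-transformers library; the checkpoint name and the example answers are placeholders, not details from the paper:

```python
# Illustrative SBERT consistency check (not the authors' exact pipeline).
# Assumes sentence-transformers; "all-MiniLM-L6-v2" is a placeholder model.
from itertools import combinations

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Five repeated answers from one LLM to the same consent question (illustrative).
answers = [
    "A CT scan uses X-rays to create cross-sectional images of the body.",
    "Computed tomography produces cross-sectional images using X-rays.",
    "CT imaging combines many X-ray projections into detailed slices.",
    "A CT scanner rotates an X-ray tube around you to build image slices.",
    "CT uses rotating X-ray beams to generate cross-sectional pictures.",
]

embeddings = model.encode(answers, convert_to_tensor=True)

# Pairwise cosine similarity between the five repetitions; the mean acts
# as a simple internal-consistency score for that question.
pairwise = [
    util.cos_sim(embeddings[i], embeddings[j]).item()
    for i, j in combinations(range(len(answers)), 2)
]
print(f"mean pairwise cosine similarity: {sum(pairwise) / len(pairwise):.3f}")
```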
Results: Ratings improved with newer model versions. Linear mixed-effects models revealed that GPT-4 models were rated significantly higher than GPT-3.5 (p < 0.001) by both physicians and physicists. However, physicians' ratings of the GPT-4 models showed a significant performance decrease for complex modalities such as DSA and MRI (p < 0.01), a pattern not observed in the physicists' ratings. SBERT analysis revealed high internal consistency across all models.
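For readers unfamiliar with linear mixed-effects models, the sketch below shows one plausible way such a model could be fit in Python with statsmodels; the data layout and column names ("rating", "model", "modality", "question_id") are assumptions for illustration and do not reflect the authors' exact specification:

```python
# Illustrative linear mixed-effects fit (not the authors' exact model).
# Assumes a long-format table with one row per individual rating.
import pandas as pd
import statsmodels.formula.api as smf

ratings = pd.read_csv("llm_ratings.csv")  # placeholder file name

# Fixed effects for LLM version and imaging modality (plus their interaction),
# with a random intercept per question to account for repeated ratings.
mixed = smf.mixedlm(
    "rating ~ C(model) * C(modality)",
    data=ratings,
    groups=ratings["question_id"],
)
result = mixed.fit()
print(result.summary())
```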
Conclusion: Variability in ratings revealed that while models effectively handled general and technical questions, they struggled with contextually complex medical inquiries requiring personalized responses and nuanced understanding. Statistical analysis confirms that while newer models are superior, their performance is modality-dependent and perceived differently by clinical and technical experts.
Advances in knowledge: This study evaluates the potential of LLMs to enhance informed consent in radiology, highlighting strengths in general and technical questions while noting limitations with complex clinical inquiries, with performance varying significantly by model type and imaging modality.
About the journal:
BJR is the international research journal of the British Institute of Radiology and is the oldest scientific journal in the field of radiology and related sciences.
Dating back to 1896, BJR's history is radiology's history, and the journal has featured landmark papers such as the first description of computed tomography, "Computerized transverse axial tomography," by Godfrey Hounsfield in 1973. A valuable historical resource, the complete BJR archive has been digitized back to 1896.
Quick Facts:
- 2015 Impact Factor – 1.840
- Receipt to first decision – average of 6 weeks
- Acceptance to online publication – average of 3 weeks
- ISSN: 0007-1285
- eISSN: 1748-880X
Open Access option