The Utility of ChatGPT for Assisting Patients with Study Preparation and Report Interpretation of Myocardial Viability Scintigraphy: Exploring the Future of AI-driven Patient Comprehension in Nuclear Medicine.

IF 0.5 Q4 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Indian Journal of Nuclear Medicine Pub Date: 2025-07-01 Epub Date: 2025-09-19 DOI: 10.4103/ijnm.ijnm_76_25
Malay Mishra, Sameer Taywade, Rajesh Kumar
{"title":"The Utility of ChatGPT for Assisting Patients with Study Preparation and Report Interpretation of Myocardial Viability Scintigraphy: Exploring the Future of AI-driven Patient Comprehension in Nuclear Medicine.","authors":"Malay Mishra, Sameer Taywade, Rajesh Kumar","doi":"10.4103/ijnm.ijnm_76_25","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>India's high-volume healthcare system, including nuclear medicine departments, restricts the quality and frequency of patient-clinician communication. Poor understanding of preparation requirements increases cancellations, image artefacts, and repeat studies. Artificial intelligence (AI) chatbots like chat generative pre-trained transformer (ChatGPT) can be a promising tool to mitigate these challenges. We evaluated the efficiency of ChatGPT to address patients' queries about study instructions and report findings while undergoing nuclear myocardial viability study.</p><p><strong>Subjects and methods: </strong>Six myocardial-viability mock reports were created. OpenAI ChatGPT-4o responses were evaluated for the set of 14-questions regarding patient preparation and 2-questions regarding the reports. All questions and reports were entered as single prompts in separate chats. Each prompt was repeated twice using regenerate-response function. Furthermore, references used to generate responses were analyzed. The responses were then rated based on 5 key parameters: appropriateness, helpfulness, empathy, consistency, and validity of references.</p><p><strong>Results: </strong>Most responses were appropriate and helpful for both preparation (1.5 ± 0.76; 1.64 ± 0.63) and report prompts (1.67 ± 0.49; 2.0). However, empathy and consistency had lower scores in preparations (1.43 ± 0.76; 1.14 ± 0.66) than in report prompts (1.58 ± 0.51; 1.67 ± 0.49). Reference validity remained an issue, as only one response had a valid reference. A hallucinatory response was noted twice. The study demonstrated that none of the prompt responses could have caused harm to the patient under real-life conditions.</p><p><strong>Conclusions: </strong>ChatGPT helps in query resolution in myocardial viability studies. It enhances patient engagement, quality of patient preparation, and comprehension of nuclear medicine reports. However, inconsistent and less empathetic responses mandate supervised use and further refinement before incorporating it into routine practices.</p>","PeriodicalId":45830,"journal":{"name":"Indian Journal of Nuclear Medicine","volume":"40 4","pages":"222-226"},"PeriodicalIF":0.5000,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12503166/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Indian Journal of Nuclear Medicine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4103/ijnm.ijnm_76_25","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/9/19 0:00:00","PubModel":"Epub","JCR":"Q4","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract

Purpose: India's high-volume healthcare system, including its nuclear medicine departments, restricts the quality and frequency of patient-clinician communication. Poor understanding of preparation requirements increases cancellations, image artefacts, and repeat studies. Artificial intelligence (AI) chatbots such as the Chat Generative Pre-trained Transformer (ChatGPT) are a promising tool for mitigating these challenges. We evaluated the efficacy of ChatGPT in addressing patients' queries about study instructions and report findings in the context of a nuclear myocardial viability study.

Subjects and methods: Six mock myocardial viability reports were created. OpenAI ChatGPT-4o responses were evaluated for a set of 14 questions on patient preparation and 2 questions on the reports. All questions and reports were entered as single prompts in separate chats, and each prompt was repeated twice using the regenerate-response function. The references used to generate the responses were also analyzed. Responses were then rated on five key parameters: appropriateness, helpfulness, empathy, consistency, and validity of references.
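As a rough illustration of this prompting protocol, the sketch below uses the OpenAI Python SDK to submit each question as a single prompt in a fresh, context-free chat and to collect repeated generations. It is an approximation under stated assumptions: the study itself used the ChatGPT web interface and its regenerate-response function, and the model identifier, sample questions, and repeat count shown here are illustrative only.

```python
# Minimal sketch, not the study's actual workflow (which used the ChatGPT web
# interface). It approximates the protocol with the OpenAI Python SDK: each
# question is sent as a single prompt with no prior conversation history, and
# the call is repeated to stand in for the regenerate-response function.
# Model name, sample questions, and repeat count are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative stand-ins for the study's 14 patient-preparation questions
PREPARATION_QUESTIONS = [
    "Do I need to fast before a myocardial viability scan?",
    "Can I take my regular medications on the day of the study?",
]


def ask_in_fresh_chats(question: str, n_responses: int = 3) -> list[str]:
    """Collect several independent responses to the same single-prompt question.

    Each API call carries no conversation history, mimicking a separate chat;
    n_responses = 3 assumes one original answer plus two regenerations.
    """
    responses = []
    for _ in range(n_responses):
        completion = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": question}],
        )
        responses.append(completion.choices[0].message.content)
    return responses


if __name__ == "__main__":
    for q in PREPARATION_QUESTIONS:
        for i, answer in enumerate(ask_in_fresh_chats(q), start=1):
            print(f"Q: {q}\nResponse {i}: {answer}\n")
```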

Results: Most responses were appropriate and helpful for both preparation prompts (1.5 ± 0.76; 1.64 ± 0.63) and report prompts (1.67 ± 0.49; 2.0). However, empathy and consistency scored lower for preparation prompts (1.43 ± 0.76; 1.14 ± 0.66) than for report prompts (1.58 ± 0.51; 1.67 ± 0.49). Reference validity remained an issue, as only one response carried a valid reference, and a hallucinatory response was noted twice. None of the prompt responses, however, would have caused harm to the patient under real-life conditions.
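The figures above follow the mean ± standard deviation convention. As a minimal sketch, assuming integer per-response scores (the abstract does not state the rating scale), such summaries can be reproduced from raw ratings as shown below; the listed scores are hypothetical, not data from the study.

```python
# Minimal sketch: aggregating hypothetical per-response ratings into the
# "mean ± SD" summaries used in the Results. The scale and values are
# illustrative assumptions only.
from statistics import mean, stdev

# Hypothetical appropriateness ratings for the 14 preparation prompts
appropriateness_prep = [2, 1, 2, 1, 1, 2, 2, 1, 1, 2, 1, 2, 1, 2]

print(f"Appropriateness (preparation prompts): "
      f"{mean(appropriateness_prep):.2f} ± {stdev(appropriateness_prep):.2f}")
```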

Conclusions: ChatGPT helps resolve patient queries about myocardial viability studies. It can enhance patient engagement, the quality of patient preparation, and comprehension of nuclear medicine reports. However, its inconsistent and less empathetic responses mandate supervised use and further refinement before it is incorporated into routine practice.
