The Utility of ChatGPT for Assisting Patients with Study Preparation and Report Interpretation of Myocardial Viability Scintigraphy: Exploring the Future of AI-driven Patient Comprehension in Nuclear Medicine.
Malay Mishra, Sameer Taywade, Rajesh Kumar
Indian Journal of Nuclear Medicine 40(4):222-226 (2025). Epub 2025-09-19; published 2025-07-01.
DOI: 10.4103/ijnm.ijnm_76_25 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12503166/pdf/
Journal metrics: IF 0.5, JCR Q4 (Radiology, Nuclear Medicine & Medical Imaging)
Abstract
Purpose: India's high-volume healthcare system, including its nuclear medicine departments, restricts the quality and frequency of patient-clinician communication. Poor understanding of preparation requirements increases cancellations, image artefacts, and repeat studies. Artificial intelligence (AI) chatbots such as the Chat Generative Pre-trained Transformer (ChatGPT) are promising tools for mitigating these challenges. We evaluated the ability of ChatGPT to address patients' queries about study instructions and report findings in the context of nuclear myocardial viability studies.
Subjects and methods: Six mock myocardial viability reports were created. OpenAI ChatGPT-4o responses were evaluated for a set of 14 questions regarding patient preparation and 2 questions regarding the reports. All questions and reports were entered as single prompts in separate chats. Each prompt was repeated twice using the regenerate-response function. The references cited in the responses were also analyzed. Responses were then rated on five key parameters: appropriateness, helpfulness, empathy, consistency, and validity of references.
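The authors worked in the ChatGPT web interface, but the same protocol (each prompt in a fresh, stateless chat, regenerated twice for three responses in total) could be reproduced programmatically. Below is a minimal Python sketch assuming the official openai package; the example question and the collect_responses helper are illustrative assumptions, not the authors' materials.

```python
# Hypothetical sketch: reproducing the prompting protocol via the OpenAI API.
# The study used the ChatGPT web interface; the model name, sample question,
# and this helper are illustrative assumptions, not the authors' workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PREPARATION_QUESTIONS = [
    "Do I need to fast before a myocardial viability scan?",  # example only;
    # the study's 14 preparation and 2 report questions would go here
]

def collect_responses(question: str, repeats: int = 3) -> list[str]:
    """Send the same prompt in separate, stateless chats: one original
    response plus two regenerations, i.e. three responses per prompt."""
    responses = []
    for _ in range(repeats):
        completion = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": question}],
        )
        responses.append(completion.choices[0].message.content)
    return responses

for q in PREPARATION_QUESTIONS:
    for i, answer in enumerate(collect_responses(q), start=1):
        print(f"Q: {q}\nResponse {i}: {answer}\n")
```

Note that calling the API with independent requests approximates, but does not exactly match, the web interface's regenerate-response behavior; the collected responses would then be rated manually on the five parameters above.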
Results: Most responses were appropriate and helpful for both preparation prompts (1.50 ± 0.76; 1.64 ± 0.63) and report prompts (1.67 ± 0.49; 2.00). However, empathy and consistency scored lower for preparation prompts (1.43 ± 0.76; 1.14 ± 0.66) than for report prompts (1.58 ± 0.51; 1.67 ± 0.49). Reference validity remained an issue, as only one response carried a valid reference. A hallucinatory response was noted twice. None of the responses would have caused patient harm under real-life conditions.
Conclusions: ChatGPT aids query resolution in myocardial viability studies. It enhances patient engagement, the quality of patient preparation, and the comprehension of nuclear medicine reports. However, its inconsistent and less empathetic responses mandate supervised use and further refinement before it is incorporated into routine practice.