Artificial Intelligence in Cardiac Rehabilitation: Assessing ChatGPT's Knowledge and Clinical Scenario Responses.

Impact Factor: 0.6
Muhammet Geneş, Salim Yaşar, Serdar Fırtına, Ahmet Faruk Yağcı, Erkan Yıldırım, Cem Barçın, Uygar Çağdaş Yüksel
DOI: 10.5543/tkda.2025.57195
Journal: Turk Kardiyoloji Dernegi arsivi : Turk Kardiyoloji Derneginin yayin organidir, 53(3), 173-177
Published: 2025-04-01 (Journal Article)
Citations: 0

Abstract


Objective: Cardiac rehabilitation (CR) improves survival, reduces hospital readmissions, and enhances quality of life; however, participation remains low due to barriers related to access, awareness, and socioeconomic factors. This study explores the potential of artificial intelligence (AI), specifically ChatGPT, in supporting CR by providing guideline-aligned recommendations and fostering patient motivation.

Method: This cross-sectional study evaluated ChatGPT-4's responses to 40 questions developed by two cardiologists based on current cardiology guidelines. The questions covered fundamental principles of CR, clinical applications, and real-life scenarios. Responses were categorized based on guideline adherence as fully compliant, partially compliant, compliant but insufficient, or non-compliant. Two expert evaluators assessed the responses, and inter-rater reliability was analyzed using Cohen's kappa coefficient.
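The inter-rater agreement statistic used above, Cohen's kappa, corrects the raw agreement between two raters for the agreement expected by chance: κ = (p_o − p_e) / (1 − p_e). A minimal sketch of the calculation is shown below; the function name and the sample ratings are illustrative only and are not the study's actual data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels on the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings over the four guideline-adherence categories.
a = ["full", "full", "partial", "insufficient", "full", "non"]
b = ["full", "full", "partial", "full", "full", "non"]
print(round(cohens_kappa(a, b), 3))  # → 0.727
```

A kappa near 0.90, as reported in the study, indicates almost perfect agreement on conventional interpretation scales.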

Results: ChatGPT provided responses to all 40 questions. Among the 20 general open-ended questions, 14 were rated as fully compliant, while six were compliant but insufficient. Of the 20 clinical scenario-based questions, 16 were fully compliant, and four were compliant but insufficient. ChatGPT demonstrated strengths in areas such as risk stratification and patient safety strategies, but limitations were noted in managing elderly patients and high-intensity interval training. Inter-rater reliability was calculated as 90% using Cohen's kappa coefficient.

Conclusion: ChatGPT shows promise as a complementary decision-support tool in CR by providing guideline-compliant information. However, limitations in contextual understanding and lack of real-world validation restrict its independent clinical use. Future improvements should focus on personalization, clinical validation, and integration with healthcare professionals.
