Artificial Intelligence in Cardiac Rehabilitation: Assessing ChatGPT's Knowledge and Clinical Scenario Responses

Muhammet Geneş, Salim Yaşar, Serdar Fırtına, Ahmet Faruk Yağcı, Erkan Yıldırım, Cem Barçın, Uygar Çağdaş Yüksel

Turk Kardiyoloji Dernegi arsivi: Turk Kardiyoloji Derneginin yayin organidir, 2025;53(3):173-177. Published 2025-04-01. DOI: 10.5543/tkda.2025.57195
Objective: Cardiac rehabilitation (CR) improves survival, reduces hospital readmissions, and enhances quality of life; however, participation remains low due to barriers related to access, awareness, and socioeconomic factors. This study explores the potential of artificial intelligence (AI), specifically ChatGPT, in supporting CR by providing guideline-aligned recommendations and fostering patient motivation.
Method: This cross-sectional study evaluated ChatGPT-4's responses to 40 questions developed by two cardiologists based on current cardiology guidelines. The questions covered fundamental principles of CR, clinical applications, and real-life scenarios. Responses were categorized based on guideline adherence as fully compliant, partially compliant, compliant but insufficient, or non-compliant. Two expert evaluators assessed the responses, and inter-rater reliability was analyzed using Cohen's kappa coefficient.
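The individual rating labels are not published with the abstract, so the agreement analysis can only be sketched. A minimal illustration of how Cohen's kappa quantifies agreement between the two evaluators, assuming hypothetical per-question labels drawn from the study's four compliance categories:

```python
from collections import Counter

# The four guideline-adherence categories used in the study.
CATEGORIES = [
    "fully compliant",
    "partially compliant",
    "compliant but insufficient",
    "non-compliant",
]

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected overlap if each rater labelled
    # independently according to their own label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in CATEGORIES) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels for 10 questions (NOT the study's actual data).
a = ["fully compliant"] * 8 + ["compliant but insufficient"] * 2
b = ["fully compliant"] * 7 + ["compliant but insufficient"] * 3
print(round(cohens_kappa(a, b), 3))
```

Kappa corrects raw percent agreement for the agreement expected by chance, which is why it is preferred over a simple match rate when one category (here, "fully compliant") dominates.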
Results: ChatGPT provided responses to all 40 questions. Among the 20 general open-ended questions, 14 were rated fully compliant and 6 compliant but insufficient. Of the 20 clinical scenario-based questions, 16 were fully compliant and 4 compliant but insufficient. ChatGPT demonstrated strengths in areas such as risk stratification and patient safety strategies, but limitations were noted in managing elderly patients and in high-intensity interval training. Inter-rater reliability, assessed using Cohen's kappa coefficient, was 90%.
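The counts above translate directly into per-set compliance rates. A small sketch using only the figures reported in the abstract (the function name is illustrative, not from the study):

```python
# Rating counts reported in the abstract, per question set (20 questions each).
general = {"fully compliant": 14, "compliant but insufficient": 6}
scenario = {"fully compliant": 16, "compliant but insufficient": 4}

def full_compliance_rate(counts):
    """Share of responses in a set rated fully compliant."""
    return counts["fully compliant"] / sum(counts.values())

print(f"general open-ended: {full_compliance_rate(general):.0%}")
print(f"clinical scenarios: {full_compliance_rate(scenario):.0%}")
```

This gives 70% full compliance for the general questions and 80% for the clinical scenarios, i.e. 30 of 40 (75%) overall, with the remainder falling in the "compliant but insufficient" category.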
Conclusion: ChatGPT shows promise as a complementary decision-support tool in CR by providing guideline-compliant information. However, limitations in contextual understanding and lack of real-world validation restrict its independent clinical use. Future improvements should focus on personalization, clinical validation, and integration with healthcare professionals.