Exploring ChatGPT-4o-generated reflections: Alignment with professional standards in diagnostic radiography - A pilot experiment.

C Nabasenja, M Chau, E Green
{"title":"Exploring ChatGPT-4o-generated reflections: Alignment with professional standards in diagnostic radiography - A pilot experiment.","authors":"C Nabasenja, M Chau, E Green","doi":"10.1016/j.jmir.2025.102082","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction/background: </strong>Artificial intelligence (AI) tools such as ChatGPT-4o are increasingly being explored in education. This study examined the potential of ChatGPT-4o to support reflective practice in medical radiation science (MRS) education. The focus was on the quality of AI-generated reflections in terms of alignment with professional standards, depth, clarity, and practical relevance.</p><p><strong>Methods: </strong>Four clinical scenarios representing third-year diagnostic radiography placements were used as prompts. ChatGPT-4o generated reflective responses, which were assessed by three reviewers. Reflections were evaluated against the Medical Radiation Practice Board of Australia's professional capability domains and the National Safety and Quality Health Service Standards. Review criteria included clarity, depth, authenticity, and practical relevance. Inter-rater reliability was analysed using intraclass correlation coefficients (ICC) and the Friedman test.</p><p><strong>Results: </strong>Scenario 3 achieved the highest inter-rater reliability (ICC: moderate to excellent; p = 0.022). Scenario 2 showed the lowest reliability (ICC: poor to fair; p = 0.060). Reflections were consistently well-structured and clear, but often lacked emotional depth, contextual awareness, and person-centered insights. Qualitative feedback identified limitations in empathetic reflection and critical self-awareness.</p><p><strong>Discussion: </strong>ChatGPT-4o can produce structured reflective responses aligned with professional frameworks. However, its lack of emotional and contextual depth limits its ability to replace authentic reflective practice. 
Reviewer agreement varied depending on scenario complexity and emotional content.</p><p><strong>Conclusion: </strong>AI tools such as ChatGPT-4o can assist in structuring reflections in MRS education but should complement, not replace, human-guided reflective learning. Hybrid models combining AI and educator input may enhance both efficiency and authenticity.</p>","PeriodicalId":94092,"journal":{"name":"Journal of medical imaging and radiation sciences","volume":"56 6","pages":"102082"},"PeriodicalIF":0.0000,"publicationDate":"2025-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of medical imaging and radiation sciences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1016/j.jmir.2025.102082","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Introduction/background: Artificial intelligence (AI) tools such as ChatGPT-4o are increasingly being explored in education. This study examined the potential of ChatGPT-4o to support reflective practice in medical radiation science (MRS) education. The focus was on the quality of AI-generated reflections in terms of alignment with professional standards, depth, clarity, and practical relevance.

Methods: Four clinical scenarios representing third-year diagnostic radiography placements were used as prompts. ChatGPT-4o generated reflective responses, which were assessed by three reviewers. Reflections were evaluated against the Medical Radiation Practice Board of Australia's professional capability domains and the National Safety and Quality Health Service Standards. Review criteria included clarity, depth, authenticity, and practical relevance. Inter-rater reliability was analysed using intraclass correlation coefficients (ICC) and the Friedman test.
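The two reliability analyses named above can be sketched as follows. This is a hypothetical illustration only: the rating matrix is invented, and ICC(2,1) (two-way random effects, absolute agreement, single rater) is assumed as the ICC form, since the abstract does not specify which variant was used.

```python
# Hypothetical illustration of the reliability analyses named in the Methods:
# an intraclass correlation coefficient and the Friedman test.
# The rating matrix below is invented for demonstration only.
import numpy as np
from scipy.stats import friedmanchisquare

# Rows: items rated (e.g. review criteria for one scenario); columns: 3 reviewers.
ratings = np.array([
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
], dtype=float)

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater
    (Shrout & Fleiss convention)."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-subjects MS
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-raters MS
    sse = ((x - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

icc = icc_2_1(ratings)
stat, p = friedmanchisquare(*ratings.T)  # one score vector per reviewer
print(f"ICC(2,1) = {icc:.3f}, Friedman chi2 = {stat:.3f}, p = {p:.3f}")
```

A high ICC with a non-significant Friedman p-value would indicate that reviewers rank items consistently and show no systematic difference in severity, which is the pattern the Results describe for Scenario 3.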

Results: Scenario 3 achieved the highest inter-rater reliability (ICC: moderate to excellent; p = 0.022). Scenario 2 showed the lowest reliability (ICC: poor to fair; p = 0.060). Reflections were consistently well-structured and clear, but often lacked emotional depth, contextual awareness, and person-centered insights. Qualitative feedback identified limitations in empathetic reflection and critical self-awareness.

Discussion: ChatGPT-4o can produce structured reflective responses aligned with professional frameworks. However, its lack of emotional and contextual depth limits its ability to replace authentic reflective practice. Reviewer agreement varied depending on scenario complexity and emotional content.

Conclusion: AI tools such as ChatGPT-4o can assist in structuring reflections in MRS education but should complement, not replace, human-guided reflective learning. Hybrid models combining AI and educator input may enhance both efficiency and authenticity.
