Artificial intelligence in clinical practice: a cross-sectional survey of paediatric surgery residents' perspectives.

Impact Factor: 4.1 · Q1, Health Care Sciences & Services
Francesca Gigola, Tommaso Amato, Marco Del Riccio, Alessandro Raffaele, Antonino Morabito, Riccardo Coletta
BMJ Health & Care Informatics, vol. 32, no. 1. Published 2025-05-21. DOI: 10.1136/bmjhci-2025-101456. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12097045/pdf/
Citations: 0

Abstract

Objectives: The aim of this study was to compare the performances of residents and ChatGPT in answering validated questions and assess paediatric surgery residents' acceptance, perceptions and readiness to integrate artificial intelligence (AI) into clinical practice.

Methods: We conducted a cross-sectional study using randomly selected questions and clinical cases on paediatric surgery topics. We examined residents' acceptance of AI before and after comparing their results to ChatGPT's results using the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) model. Data analysis was performed using Jamovi V.2.4.12.0.

Results: 30 residents participated. ChatGPT-4.0's median score was 13.75, while ChatGPT-3.5's was 8.75. The median score among residents was 8.13. These differences were statistically significant. ChatGPT outperformed residents specifically in definition questions (ChatGPT-4.0 vs residents, p<0.0001; ChatGPT-3.5 vs residents, p=0.03). In the UTAUT2 questionnaire, respondents expressed a more positive evaluation of ChatGPT, with higher mean values for each construct and lower fear of technology after learning about the test scores.

Discussion: ChatGPT performed better than residents in knowledge-based questions and simple clinical cases. The accuracy of ChatGPT declined when confronted with more complex questions. The UTAUT questionnaire results showed that learning about the potential of ChatGPT could lead to a shift in perception, resulting in a more positive attitude towards AI.

Conclusion: Our study reveals residents' positive receptivity towards AI, especially after being confronted with its efficacy. These results highlight the importance of integrating AI-related topics into medical curricula and residency to help future physicians and surgeons better understand the advantages and limitations of AI.
