Artificial intelligence in clinical practice: a cross-sectional survey of paediatric surgery residents' perspectives.

Francesca Gigola, Tommaso Amato, Marco Del Riccio, Alessandro Raffaele, Antonino Morabito, Riccardo Coletta

BMJ Health & Care Informatics. 2025;32(1). Published 2025-05-21. doi:10.1136/bmjhci-2025-101456. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12097045/pdf/
Objectives: This study compared the performance of residents and ChatGPT in answering validated questions and assessed paediatric surgery residents' acceptance of, perceptions of, and readiness to integrate artificial intelligence (AI) into clinical practice.
Methods: We conducted a cross-sectional study using randomly selected questions and clinical cases on paediatric surgery topics. Using the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) model, we measured residents' acceptance of AI both before and after they learnt how their results compared with ChatGPT's. Data analysis was performed using Jamovi V.2.4.12.0.
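The abstract does not name the significance test used to compare score distributions. As an illustration only, a nonparametric comparison of per-item scores (a common choice for non-normal score data, and one Jamovi supports) could be sketched in Python as follows; the score arrays are hypothetical placeholders, not the study's data.

    # Minimal sketch, assuming a Mann-Whitney U test on per-item scores.
    # All numbers below are illustrative placeholders, not the study's data.
    from scipy.stats import mannwhitneyu

    resident_scores = [8.0, 7.5, 8.5, 8.13, 9.0, 7.0]        # hypothetical
    chatgpt4_scores = [13.5, 14.0, 13.75, 12.5, 14.5, 13.0]  # hypothetical

    stat, p = mannwhitneyu(resident_scores, chatgpt4_scores,
                           alternative="two-sided")
    print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")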
Results: 30 residents participated. ChatGPT-4.0's median score was 13.75 and ChatGPT-3.5's was 8.75, while the median score among residents was 8.13; these differences were statistically significant. ChatGPT outperformed residents specifically on definition questions (ChatGPT-4.0 vs residents, p<0.0001; ChatGPT-3.5 vs residents, p=0.03). In the UTAUT2 questionnaire, respondents evaluated ChatGPT more positively after learning the test scores, with higher mean values for each construct and lower fear of technology.
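As a purely illustrative aside, UTAUT2 responses are typically Likert items averaged into per-construct scores; a minimal sketch of the before/after construct comparison (construct names and values here are assumptions, not the study's instrument) might look like:

    # Hypothetical sketch: averaging Likert items into UTAUT2 construct
    # scores before and after feedback. Names and values are illustrative.
    from statistics import mean

    pre  = {"performance_expectancy": [3, 3, 4], "effort_expectancy": [4, 3, 4]}
    post = {"performance_expectancy": [4, 5, 4], "effort_expectancy": [5, 4, 4]}

    for construct in pre:
        shift = mean(post[construct]) - mean(pre[construct])
        print(f"{construct}: pre={mean(pre[construct]):.2f}, "
              f"post={mean(post[construct]):.2f}, shift={shift:+.2f}")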
Discussion: ChatGPT performed better than residents on knowledge-based questions and simple clinical cases, but its accuracy declined when confronted with more complex questions. The UTAUT2 questionnaire results showed that learning about ChatGPT's potential could shift perceptions, resulting in a more positive attitude towards AI.
Conclusion: Our study reveals residents' positive receptivity towards AI, especially after they were confronted with evidence of its efficacy. These results highlight the importance of integrating AI-related topics into medical curricula and residency training to help future physicians and surgeons better understand the advantages and limitations of AI.