Integrating artificial intelligence in renal cell carcinoma: evaluating ChatGPT's performance in educating patients and trainees.

IF 1.5 · Q4 (ONCOLOGY) · JCR Zone 4, Medicine
Translational Cancer Research · Pub Date: 2024-11-30 · Epub Date: 2024-05-21 · DOI: 10.21037/tcr-23-2234
J Patrick Mershon, Tasha Posid, Keyan Salari, Richard S Matulewicz, Eric A Singer, Shawn Dason
Citations: 0

Abstract


Background: OpenAI's ChatGPT is a large language model-based artificial intelligence (AI) chatbot that can be used to answer unique, user-generated questions without direct training on specific content. Large language models have significant potential in urologic education. We reviewed the primary data surrounding the use of large language models in urology. We also reported findings of our primary study assessing the performance of ChatGPT in renal cell carcinoma (RCC) education.

Methods: For our primary study, we used three professional society guidelines addressing RCC to generate fifteen content questions. These questions were entered into ChatGPT 3.5. ChatGPT's responses, along with pre- and post-content assessment questions about ChatGPT, were then presented to evaluators. The evaluators consisted of four urologic oncologists and four non-clinical staff members. We also searched Medline for additional studies on the use of ChatGPT in urologic education.

Results: All assessors rated ChatGPT highly on the accuracy and usefulness of the information provided, with overall mean scores of 3.64 [±0.62 standard deviation (SD)] and 3.58 (±0.75) out of 5, respectively. Clinicians and non-clinicians did not differ in their scoring of responses (P=0.37). Completing the content assessment improved confidence in the accuracy of ChatGPT's information (P=0.01) and increased agreement that it should be used for medical education (P=0.007). Attitudes towards its use for patient education did not change (P=0.30). We also review the current state of the literature on ChatGPT use for patient and trainee education and discuss future steps towards optimization.

Conclusions: ChatGPT has significant potential utility in medical education if it can continue to provide accurate and useful information. We have found it to be a useful adjunct to expert human guidance both for medical trainees and, to a lesser extent, for patient education. Further work is needed to validate ChatGPT before widespread adoption.

Source journal: CiteScore 2.10 · Self-citation rate 0.00% · Articles published: 252

Journal description: Translational Cancer Research (Transl Cancer Res TCR; Print ISSN: 2218-676X; Online ISSN: 2219-6803; http://tcr.amegroups.com/) is an Open Access, peer-reviewed journal indexed in Science Citation Index Expanded (SCIE). TCR publishes laboratory studies of novel therapeutic interventions as well as clinical trials that evaluate new treatment paradigms for cancer, and results of novel research investigations that bridge the laboratory and clinical settings, including risk assessment, cellular and molecular characterization, prevention, detection, diagnosis, and treatment of human cancers, with the overall goal of improving the clinical care of cancer patients. The focus of TCR is original, peer-reviewed, science-based research that successfully advances clinical medicine toward the goal of improving patients' quality of life. The editors and an international advisory group of scientists and clinician-scientists, as well as other experts, hold TCR articles to high-quality standards. We accept Original Articles as well as Review Articles, Editorials and Brief Articles.