Accuracy and Reproducibility of ChatGPT Responses to Breast Cancer Tumor Board Patients.

JCO Clinical Cancer Informatics (IF 3.3, Q2 Oncology) · Pub Date: 2025-06-01 · Epub Date: 2025-06-04 · DOI: 10.1200/CCI-25-00001
Ning Liao, Cheukfai Li, William J Gradishar, V Suzanne Klimberg, Joshua A Roshal, Taize Yuan, Sanjiv S Agarwala, Vincente K Valero, Sandra M Swain, Julie A Margenthaler, Isabel T Rubio, Sara A Hurvitz, Charles E Geyer, Nancy U Lin, Hope S Rugo, Guochun Zhang, Nanqiu Liu, Charles M Balch
Abstract

Purpose: We assessed the accuracy and reproducibility of Chat Generative Pre-Trained Transformer (ChatGPT) recommendations for breast cancer patients by comparing generated outputs with consensus expert opinions.

Methods: A total of 362 consecutive breast cancer cases, sourced from a weekly international breast cancer webinar series, were submitted to a tumor board of renowned experts. The same 362 clinical cases were also submitted to ChatGPT-4.0 three separate times to examine reproducibility.

Results: Only 46% of ChatGPT-generated content was entirely concordant with the recommendations of the breast cancer experts, and only 39% of ChatGPT's responses demonstrated inter-response similarity. ChatGPT's responses showed higher concordance with the CEN experts in earlier stages of breast cancer (0, I, II, III) than in advanced (stage IV) cases (P = .019). ChatGPT's responses were less accurate for cases involving molecular markers and genetic testing (P = .025) and for cases involving antibody-drug conjugates (P = .006). ChatGPT's responses were not necessarily incorrect but often omitted specific details about clinical management. When the same prompt was independently entered into the model on three occasions, each time by a different user, ChatGPT's responses exhibited variable content and formatting in 68% (246 of 362) of cases and were entirely consistent with one another in only 32% of responses.

Conclusion: Because this promising clinical decision-support tool is already widely used by physicians worldwide, users must understand its current limitations when it responds to multidisciplinary breast cancer cases, and researchers in the field should continue improving its performance with contemporary, accurate, and complete breast cancer information. As currently constructed, ChatGPT is not engineered to generate identical outputs for the same input and was less likely to correctly interpret and recommend treatments for complex breast cancer cases.
