Discussion of the ability to use ChatGPT to answer questions related to esophageal cancer of patient concern.

Journal of Family Medicine and Primary Care · IF 1.1 · Q4 · Primary Health Care
Fengxia Yu, Mingyu Lei, Shiyu Wang, Miao Liu, Xiao Fu, Yuan Yu
{"title":"Discussion of the ability to use chatGPT to answer questions related to esophageal cancer of patient concern.","authors":"Fengxia Yu, Mingyu Lei, Shiyu Wang, Miao Liu, Xiao Fu, Yuan Yu","doi":"10.4103/jfmpc.jfmpc_1236_24","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Chat Generation Pre-Trained Converter (ChatGPT) is a language processing model based on artificial intelligence (AI). It covers a wide range of topics, including medicine, and can provide patients with knowledge about esophageal cancer.</p><p><strong>Objective: </strong>Based on its risk, this study aimed to assess ChatGPT's accuracy in answering patients' questions about esophageal cancer.</p><p><strong>Methods: </strong>By referring to professional association websites, social software and the author's clinical experience, 55 questions concerned by Chinese patients and their families were generated and scored by two deputy chief physicians of esophageal cancer. The answers were: (1) comprehensive/correct, (2) incomplete/partially correct, (3) partially accurate, partially inaccurate, and (4) completely inaccurate/irrelevant. Score differences are resolved by a third reviewer.</p><p><strong>Results: </strong>Out of 55 questions, 24 (43.6%) of the answers provided by ChatGPT were complete and correct, 13 (23.6%) were correct but incomplete, 18 (32.7%) were partially wrong, and no answers were completely wrong. Comprehensive and correct answers were highest in the field of prevention (50 percent), while partially incorrect answers were highest in the field of treatment (77.8 percent).</p><p><strong>Conclusion: </strong>ChatGPT can accurately answer the questions about the prevention and diagnosis of esophageal cancer, but it cannot accurately answer the questions about the treatment and prognosis of esophageal cancer. Further investigation and refinement of this widely used large-scale language model are needed before it can be recommended to patients with esophageal cancer, and ongoing research is still needed to verify the safety and accuracy of these tools and their medical applications.</p>","PeriodicalId":15856,"journal":{"name":"Journal of Family Medicine and Primary Care","volume":"14 4","pages":"1384-1388"},"PeriodicalIF":1.1000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12088566/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Family Medicine and Primary Care","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4103/jfmpc.jfmpc_1236_24","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/4/25 0:00:00","PubModel":"Epub","JCR":"Q4","JCRName":"PRIMARY HEALTH CARE","Score":null,"Total":0}
引用次数: 0

Abstract

Background: ChatGPT (Chat Generative Pre-trained Transformer) is an artificial intelligence (AI)-based language processing model. It covers a wide range of topics, including medicine, and can provide patients with knowledge about esophageal cancer.

Objective: Given the risks of relying on such a model, this study aimed to assess ChatGPT's accuracy in answering patients' questions about esophageal cancer.

Methods: Drawing on professional association websites, social media, and the authors' clinical experience, 55 questions of concern to Chinese patients and their families were generated. ChatGPT's answers were scored by two deputy chief physicians specializing in esophageal cancer as: (1) comprehensive/correct, (2) incomplete/partially correct, (3) partially accurate and partially inaccurate, or (4) completely inaccurate/irrelevant. Scoring disagreements were resolved by a third reviewer.
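As an illustration only (not taken from the study's materials), the grading and adjudication workflow described above can be sketched in Python; the category labels, the adjudicate() helper, and the resolution rule shown here are assumptions made for clarity:

```python
# Hypothetical sketch of the 4-point grading scheme described in the Methods.
# The dictionary labels and the adjudicate() helper are illustrative names,
# not artifacts of the published study.
GRADES = {
    1: "comprehensive/correct",
    2: "incomplete/partially correct",
    3: "partially accurate, partially inaccurate",
    4: "completely inaccurate/irrelevant",
}

def adjudicate(physician_a: int, physician_b: int, third_reviewer: int) -> int:
    """If the two physicians agree, their grade stands; otherwise the third
    reviewer's grade resolves the disagreement (assumed resolution rule)."""
    return physician_a if physician_a == physician_b else third_reviewer

# Example: the two raters disagree (1 vs. 2), so the third reviewer decides.
print(GRADES[adjudicate(1, 2, 2)])  # -> incomplete/partially correct
```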

Results: Of the 55 questions, 24 (43.6%) of ChatGPT's answers were comprehensive and correct, 13 (23.6%) were correct but incomplete, 18 (32.7%) were partially wrong, and none were completely wrong. The proportion of comprehensive and correct answers was highest in the prevention domain (50%), while the proportion of partially incorrect answers was highest in the treatment domain (77.8%).
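For readers who want to verify the arithmetic, the reported percentages follow directly from the raw counts over the 55 questions; a minimal sketch, assuming simple rounding to one decimal place:

```python
# Reproduce the percentages reported in the Results from the raw counts.
counts = {
    "comprehensive and correct": 24,
    "correct but incomplete": 13,
    "partially wrong": 18,
}
total = sum(counts.values())  # 55 questions in all
for label, n in counts.items():
    print(f"{label}: {n}/{total} = {100 * n / total:.1f}%")
# -> 43.6%, 23.6%, 32.7%; completely wrong answers: 0
```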

Conclusion: ChatGPT can accurately answer questions about the prevention and diagnosis of esophageal cancer, but not questions about its treatment and prognosis. Further investigation and refinement of this widely used large language model are needed before it can be recommended to patients with esophageal cancer, and ongoing research is required to verify the safety and accuracy of these tools in medical applications.
