{"title":"Discussion of the ability to use chatGPT to answer questions related to esophageal cancer of patient concern.","authors":"Fengxia Yu, Mingyu Lei, Shiyu Wang, Miao Liu, Xiao Fu, Yuan Yu","doi":"10.4103/jfmpc.jfmpc_1236_24","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Chat Generation Pre-Trained Converter (ChatGPT) is a language processing model based on artificial intelligence (AI). It covers a wide range of topics, including medicine, and can provide patients with knowledge about esophageal cancer.</p><p><strong>Objective: </strong>Based on its risk, this study aimed to assess ChatGPT's accuracy in answering patients' questions about esophageal cancer.</p><p><strong>Methods: </strong>By referring to professional association websites, social software and the author's clinical experience, 55 questions concerned by Chinese patients and their families were generated and scored by two deputy chief physicians of esophageal cancer. The answers were: (1) comprehensive/correct, (2) incomplete/partially correct, (3) partially accurate, partially inaccurate, and (4) completely inaccurate/irrelevant. Score differences are resolved by a third reviewer.</p><p><strong>Results: </strong>Out of 55 questions, 24 (43.6%) of the answers provided by ChatGPT were complete and correct, 13 (23.6%) were correct but incomplete, 18 (32.7%) were partially wrong, and no answers were completely wrong. Comprehensive and correct answers were highest in the field of prevention (50 percent), while partially incorrect answers were highest in the field of treatment (77.8 percent).</p><p><strong>Conclusion: </strong>ChatGPT can accurately answer the questions about the prevention and diagnosis of esophageal cancer, but it cannot accurately answer the questions about the treatment and prognosis of esophageal cancer. 
Further investigation and refinement of this widely used large-scale language model are needed before it can be recommended to patients with esophageal cancer, and ongoing research is still needed to verify the safety and accuracy of these tools and their medical applications.</p>","PeriodicalId":15856,"journal":{"name":"Journal of Family Medicine and Primary Care","volume":"14 4","pages":"1384-1388"},"PeriodicalIF":1.1000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12088566/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Family Medicine and Primary Care","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4103/jfmpc.jfmpc_1236_24","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/4/25 0:00:00","PubModel":"Epub","JCR":"Q4","JCRName":"PRIMARY HEALTH CARE","Score":null,"Total":0}
Abstract
Background: ChatGPT (Chat Generative Pre-trained Transformer) is an artificial intelligence (AI)-based language processing model. It covers a wide range of topics, including medicine, and can provide patients with knowledge about esophageal cancer.
Objective: Given the potential risks of relying on AI-generated medical information, this study aimed to assess the accuracy of ChatGPT's answers to patients' questions about esophageal cancer.
Methods: Drawing on professional association websites, social media, and the authors' clinical experience, 55 questions of concern to Chinese patients and their families were compiled and scored by two deputy chief physicians specializing in esophageal cancer. Answers were rated as (1) comprehensive/correct, (2) incomplete/partially correct, (3) partially accurate and partially inaccurate, or (4) completely inaccurate/irrelevant. Scoring disagreements were resolved by a third reviewer.
Results: Of the 55 questions, ChatGPT's answers were comprehensive and correct for 24 (43.6%), correct but incomplete for 13 (23.6%), and partially incorrect for 18 (32.7%); no answer was completely wrong. The proportion of comprehensive and correct answers was highest in the field of prevention (50%), whereas the proportion of partially incorrect answers was highest in the field of treatment (77.8%).
Conclusion: ChatGPT can accurately answer questions about the prevention and diagnosis of esophageal cancer, but not questions about its treatment and prognosis. This widely used large language model requires further investigation and refinement before it can be recommended to patients with esophageal cancer, and ongoing research is needed to verify the safety and accuracy of such tools in medical applications.