Frances Dang, Josh Kwon, Andy Lin, Shoujit Banerjee, Trevor McCracken, Amirali Tavangar, Shravani R Reddy, Alyssa Y Choi, Jennifer Phan, Jeffrey D Mosko, Samir C Grover, Tyler M Berzin, Jason Samarasena
{"title":"CHATGPT4.0作为人工智能助手在Barrett食管患者教育和管理中的潜在效用。","authors":"Frances Dang, Josh Kwon, Andy Lin, Shoujit Banerjee, Trevor McCracken, Amirali Tavangar, Shravani R Reddy, Alyssa Y Choi, Jennifer Phan, Jeffrey D Mosko, Samir C Grover, Tyler M Berzin, Jason Samarasena","doi":"10.1093/dote/doaf050","DOIUrl":null,"url":null,"abstract":"<p><p>Chat Generative Pre-trained Transformer (ChatGPT) has emerged as a new technology for physicians and patients to obtain medical information. Our aim was to assess the ability of ChatGPT 4.0 to deliver high-quality information in response to commonly asked questions and management recommendations for Barrett's esophagus (BE). Twenty-nine questions (14 clinical vignettes and 15 frequently asked questions (FAQ)) on BE were entered into ChatGPT 4.0. Using a 5-point Likert scale, three gastroenterologists with expertise in BE rated the 29 ChatGPT responses for accuracy, completeness, empathy, use of excessive medical jargon, and appropriateness to send to patients. Three separate gastroenterologists generated responses to the same 15 FAQs on BE. A group of blinded patients with BE evaluated both ChatGPT and gastroenterologist responses on quality, clarity, empathy and which of the two responses was preferred. Gastroenterologists rated ChatGPT responses as mostly accurate overall (4.01 out of 5) with 79.3% of responses completely accurate or mostly accurate with minor errors. When compared to gastroenterologist responses, the patient panel rated ChatGPT responses to be of significantly higher quality (4.42 vs. 3.07 out of 5) and empathy (4.33 vs. 2.55 out of 5) (p < 0.0001). In conclusion, ChatGPT 4.0 provides generally accurate and comprehensive information about BE. Patients expressed a clear preference for ChatGPT responses over those of gastroenterologists, finding responses from ChatGPT to be of higher quality and empathy. 
This study highlights the potential use of ChatGPT 4.0 as an adjunctive tool for physicians to provide real-time, high-quality information about BE to their patients.</p>","PeriodicalId":54277,"journal":{"name":"Diseases of the Esophagus","volume":"38 4","pages":""},"PeriodicalIF":2.6000,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12233505/pdf/","citationCount":"0","resultStr":"{\"title\":\"The potential utility of CHATGPT4.0 as an AI assistant in the education and management of patients with Barrett's esophagus.\",\"authors\":\"Frances Dang, Josh Kwon, Andy Lin, Shoujit Banerjee, Trevor McCracken, Amirali Tavangar, Shravani R Reddy, Alyssa Y Choi, Jennifer Phan, Jeffrey D Mosko, Samir C Grover, Tyler M Berzin, Jason Samarasena\",\"doi\":\"10.1093/dote/doaf050\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Chat Generative Pre-trained Transformer (ChatGPT) has emerged as a new technology for physicians and patients to obtain medical information. Our aim was to assess the ability of ChatGPT 4.0 to deliver high-quality information in response to commonly asked questions and management recommendations for Barrett's esophagus (BE). Twenty-nine questions (14 clinical vignettes and 15 frequently asked questions (FAQ)) on BE were entered into ChatGPT 4.0. Using a 5-point Likert scale, three gastroenterologists with expertise in BE rated the 29 ChatGPT responses for accuracy, completeness, empathy, use of excessive medical jargon, and appropriateness to send to patients. Three separate gastroenterologists generated responses to the same 15 FAQs on BE. A group of blinded patients with BE evaluated both ChatGPT and gastroenterologist responses on quality, clarity, empathy and which of the two responses was preferred. 
Gastroenterologists rated ChatGPT responses as mostly accurate overall (4.01 out of 5) with 79.3% of responses completely accurate or mostly accurate with minor errors. When compared to gastroenterologist responses, the patient panel rated ChatGPT responses to be of significantly higher quality (4.42 vs. 3.07 out of 5) and empathy (4.33 vs. 2.55 out of 5) (p < 0.0001). In conclusion, ChatGPT 4.0 provides generally accurate and comprehensive information about BE. Patients expressed a clear preference for ChatGPT responses over those of gastroenterologists, finding responses from ChatGPT to be of higher quality and empathy. This study highlights the potential use of ChatGPT 4.0 as an adjunctive tool for physicians to provide real-time, high-quality information about BE to their patients.</p>\",\"PeriodicalId\":54277,\"journal\":{\"name\":\"Diseases of the Esophagus\",\"volume\":\"38 4\",\"pages\":\"\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2025-07-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12233505/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Diseases of the Esophagus\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1093/dote/doaf050\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Diseases of the Esophagus","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1093/dote/doaf050","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The potential utility of CHATGPT4.0 as an AI assistant in the education and management of patients with Barrett's esophagus.
Chat Generative Pre-trained Transformer (ChatGPT) has emerged as a new technology for physicians and patients to obtain medical information. Our aim was to assess the ability of ChatGPT 4.0 to deliver high-quality information in response to commonly asked questions and management recommendations for Barrett's esophagus (BE). Twenty-nine questions on BE (14 clinical vignettes and 15 frequently asked questions (FAQs)) were entered into ChatGPT 4.0. Using a 5-point Likert scale, three gastroenterologists with expertise in BE rated the 29 ChatGPT responses for accuracy, completeness, empathy, use of excessive medical jargon, and appropriateness to send to patients. Three separate gastroenterologists generated responses to the same 15 FAQs on BE. A group of blinded patients with BE evaluated both the ChatGPT and the gastroenterologist responses on quality, clarity, and empathy, and indicated which of the two responses they preferred. Gastroenterologists rated ChatGPT responses as mostly accurate overall (4.01 out of 5), with 79.3% of responses completely accurate or mostly accurate with minor errors. Compared with the gastroenterologist responses, the patient panel rated ChatGPT responses significantly higher in quality (4.42 vs. 3.07 out of 5) and empathy (4.33 vs. 2.55 out of 5) (p < 0.0001). In conclusion, ChatGPT 4.0 provides generally accurate and comprehensive information about BE. Patients expressed a clear preference for ChatGPT responses over those of gastroenterologists, finding them higher in quality and empathy. This study highlights the potential use of ChatGPT 4.0 as an adjunctive tool for physicians to provide real-time, high-quality information about BE to their patients.
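The comparison reported above rests on averaging 5-point Likert ratings across responses for each rater group. A minimal sketch of that computation is shown below; the ratings used here are hypothetical placeholders for illustration only, not the study's raw per-question scores, which are not reproduced in this abstract.

```python
from statistics import fmean

def mean_likert(ratings):
    """Average a list of 1-5 Likert ratings, validating the scale first."""
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("Likert ratings must fall between 1 and 5")
    return fmean(ratings)

# Hypothetical quality ratings from a blinded patient panel
# (illustrative values only; the study reported group means of
# 4.42 for ChatGPT vs. 3.07 for gastroenterologists).
chatgpt_quality = [5, 4, 5, 4, 4]
physician_quality = [3, 3, 4, 2, 3]

print(f"ChatGPT mean quality:   {mean_likert(chatgpt_quality):.2f}")
print(f"Physician mean quality: {mean_likert(physician_quality):.2f}")
```

Whether the observed difference in such means is significant would then be assessed with a statistical test appropriate for ordinal data; the abstract reports p < 0.0001 but does not name the test used.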