Yongkang Lai, Foqiang Liao, Jiulong Zhao, Chunping Zhu, Yi Hu, Zhaoshen Li
{"title":"探索 ChatGPT 的能力:全面评估其处理幽门螺旋杆菌相关查询的准确性和可重复性。","authors":"Yongkang Lai, Foqiang Liao, Jiulong Zhao, Chunping Zhu, Yi Hu, Zhaoshen Li","doi":"10.1111/hel.13078","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>Educational initiatives on <i>Helicobacter pylori</i> (<i>H. pylori</i>) constitute a highly effective approach for preventing its infection and establishing standardized protocols for its eradication. ChatGPT, a large language model, is a potentially patient-friendly online tool capable of providing health-related knowledge. This study aims to assess the accuracy and repeatability of ChatGPT in responding to questions related to <i>H. pylori.</i></p>\n </section>\n \n <section>\n \n <h3> Materials and Methods</h3>\n \n <p>Twenty-one common questions about <i>H. pylori</i> were collected and categorized into four domains: basic knowledge, diagnosis, treatment, and prevention. ChatGPT was utilized to individually answer the aforementioned 21 questions. Its responses were independently assessed by two experts on <i>H. pylori</i>. Questions with divergent ratings were resolved by a third reviewer. Cohen's kappa coefficient was calculated to assess the consistency between the scores of the two reviewers.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>The responses of ChatGPT on <i>H. pylori</i>-related questions were generally satisfactory, with 61.9% marked as “completely correct” and 33.33% as “correct but inadequate.” The repeatability of the responses of ChatGPT to <i>H. pylori</i>-related questions was 95.23%. Among the responses, those related to prevention (comprehensive: 75%) had the best response, followed by those on treatment (comprehensive: 66.7%), basic knowledge (comprehensive: 60%), and diagnosis (comprehensive: 50%). 
In the “treatment” domain, 16.6% of the ChatGPT responses were categorized as “mixed with correct or incorrect/outdated data.” However, ChatGPT still lacks relevant knowledge regarding <i>H. pylori</i> resistance and the use of sensitive antibiotics.</p>\n </section>\n \n <section>\n \n <h3> Conclusions</h3>\n \n <p>ChatGPT can provide correct answers to the majority of <i>H. pylori</i>-related queries. It exhibited good reproducibility and delivered responses that were easily comprehensible to patients. Further enhancement of real-time information updates and correction of inaccurate information will make ChatGPT an essential auxiliary tool for providing accurate <i>H. pylori</i>-related health information to patients.</p>\n </section>\n </div>","PeriodicalId":13223,"journal":{"name":"Helicobacter","volume":"29 3","pages":""},"PeriodicalIF":4.3000,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Exploring the capacities of ChatGPT: A comprehensive evaluation of its accuracy and repeatability in addressing helicobacter pylori-related queries\",\"authors\":\"Yongkang Lai, Foqiang Liao, Jiulong Zhao, Chunping Zhu, Yi Hu, Zhaoshen Li\",\"doi\":\"10.1111/hel.13078\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <h3> Background</h3>\\n \\n <p>Educational initiatives on <i>Helicobacter pylori</i> (<i>H. pylori</i>) constitute a highly effective approach for preventing its infection and establishing standardized protocols for its eradication. ChatGPT, a large language model, is a potentially patient-friendly online tool capable of providing health-related knowledge. This study aims to assess the accuracy and repeatability of ChatGPT in responding to questions related to <i>H. pylori.</i></p>\\n </section>\\n \\n <section>\\n \\n <h3> Materials and Methods</h3>\\n \\n <p>Twenty-one common questions about <i>H. 
pylori</i> were collected and categorized into four domains: basic knowledge, diagnosis, treatment, and prevention. ChatGPT was utilized to individually answer the aforementioned 21 questions. Its responses were independently assessed by two experts on <i>H. pylori</i>. Questions with divergent ratings were resolved by a third reviewer. Cohen's kappa coefficient was calculated to assess the consistency between the scores of the two reviewers.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results</h3>\\n \\n <p>The responses of ChatGPT on <i>H. pylori</i>-related questions were generally satisfactory, with 61.9% marked as “completely correct” and 33.33% as “correct but inadequate.” The repeatability of the responses of ChatGPT to <i>H. pylori</i>-related questions was 95.23%. Among the responses, those related to prevention (comprehensive: 75%) had the best response, followed by those on treatment (comprehensive: 66.7%), basic knowledge (comprehensive: 60%), and diagnosis (comprehensive: 50%). In the “treatment” domain, 16.6% of the ChatGPT responses were categorized as “mixed with correct or incorrect/outdated data.” However, ChatGPT still lacks relevant knowledge regarding <i>H. pylori</i> resistance and the use of sensitive antibiotics.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Conclusions</h3>\\n \\n <p>ChatGPT can provide correct answers to the majority of <i>H. pylori</i>-related queries. It exhibited good reproducibility and delivered responses that were easily comprehensible to patients. Further enhancement of real-time information updates and correction of inaccurate information will make ChatGPT an essential auxiliary tool for providing accurate <i>H. 
pylori</i>-related health information to patients.</p>\\n </section>\\n </div>\",\"PeriodicalId\":13223,\"journal\":{\"name\":\"Helicobacter\",\"volume\":\"29 3\",\"pages\":\"\"},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2024-06-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Helicobacter\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/hel.13078\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"GASTROENTEROLOGY & HEPATOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Helicobacter","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/hel.13078","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"GASTROENTEROLOGY & HEPATOLOGY","Score":null,"Total":0}
Exploring the capacities of ChatGPT: A comprehensive evaluation of its accuracy and repeatability in addressing Helicobacter pylori-related queries
Background
Educational initiatives on Helicobacter pylori (H. pylori) are a highly effective approach for preventing infection and establishing standardized eradication protocols. ChatGPT, a large language model, is a potentially patient-friendly online tool capable of providing health-related knowledge. This study aims to assess the accuracy and repeatability of ChatGPT in responding to questions related to H. pylori.
Materials and Methods
Twenty-one common questions about H. pylori were collected and categorized into four domains: basic knowledge, diagnosis, treatment, and prevention. ChatGPT answered each of the 21 questions individually, and its responses were independently assessed by two H. pylori experts. Questions with divergent ratings were resolved by a third reviewer. Cohen's kappa coefficient was calculated to assess the consistency between the two reviewers' scores.
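Cohen's kappa adjusts the observed agreement between two raters for the agreement expected by chance from each rater's marginal rating frequencies. A minimal sketch of the computation, with hypothetical ratings (the paper's actual scores are not reproduced here):

```python
# Cohen's kappa for two raters over the same set of items.
# kappa = (p_observed - p_expected) / (1 - p_expected)
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Compute Cohen's kappa from two equal-length lists of ratings."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items rated identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement: chance overlap of the two marginal distributions.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)


# Hypothetical example: two reviewers score four answers on a binary scale.
print(cohens_kappa([0, 0, 1, 1], [0, 0, 1, 1]))  # perfect agreement -> 1.0
```

A kappa near 1 indicates near-perfect inter-reviewer agreement; values near 0 indicate agreement no better than chance.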
Results
ChatGPT's responses to H. pylori-related questions were generally satisfactory, with 61.9% rated “completely correct” and 33.3% “correct but inadequate.” The repeatability of its responses was 95.2%. Responses on prevention scored best (comprehensive: 75%), followed by treatment (comprehensive: 66.7%), basic knowledge (comprehensive: 60%), and diagnosis (comprehensive: 50%). In the “treatment” domain, 16.6% of the responses were categorized as “mixed with correct or incorrect/outdated data.” However, ChatGPT still lacks relevant knowledge regarding H. pylori antibiotic resistance and the selection of sensitive antibiotics.
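With 21 questions in total, the reported percentages correspond to whole-number counts. A quick arithmetic check (the per-question counts below are inferred from the percentages, not taken from the paper):

```python
# Back out the integer counts implied by the reported percentages
# over the 21-question set. These counts are inferred, not reported.
total = 21
completely_correct = round(0.619 * total)       # 61.9% -> 13 questions
correct_but_inadequate = round(0.3333 * total)  # 33.3% -> 7 questions
repeatable = round(0.9523 * total)              # 95.2% -> 20 questions

print(completely_correct, correct_but_inadequate, repeatable)  # 13 7 20
```

The counts are mutually consistent: 13/21 ≈ 61.9% and 7/21 ≈ 33.3% together account for 20 of the 21 questions.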
Conclusions
ChatGPT can answer the majority of H. pylori-related queries correctly. It exhibited good repeatability and delivered responses that patients could easily understand. Further enhancement of real-time information updates and correction of inaccurate information would make ChatGPT a valuable auxiliary tool for providing accurate H. pylori-related health information to patients.
About the Journal
Helicobacter is edited by Professor David Y. Graham. The editorial and peer-review process is independent; whenever a conflict of interest arises, the editor and editorial board declare their interests and affiliations. Helicobacter recognizes the critical role established for Helicobacter pylori in peptic ulcer, gastric adenocarcinoma, and primary gastric lymphoma. As new Helicobacter species are regularly being discovered, the journal covers the entire range of Helicobacter research, fostering communication among the fields of gastroenterology, microbiology, vaccine development, and laboratory animal science.