Martina Sciberras, Yvette Farrugia, Hannah Gordon, Federica Furfaro, Mariangela Allocca, Joana Torres, Naila Arebi, Gionata Fiorino, Marietta Iacucci, Bram Verstockt, Fernando Magro, Kostas Katsanos, Josef Busuttil, Katya De Giovanni, Valerie Anne Fenech, Stefania Chetcuti Zammit, Pierre Ellul
{"title":"ChatGPT 为炎症性肠病患者提供的与 ECCO 指南相关信息的准确性。","authors":"Martina Sciberras, Yvette Farrugia, Hannah Gordon, Federica Furfaro, Mariangela Allocca, Joana Torres, Naila Arebi, Gionata Fiorino, Marietta Iacucci, Bram Verstockt, Fernando Magro, Kostas Katsanos, Josef Busuttil, Katya De Giovanni, Valerie Anne Fenech, Stefania Chetcuti Zammit, Pierre Ellul","doi":"10.1093/ecco-jcc/jjae040","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>As acceptance of artificial intelligence [AI] platforms increases, more patients will consider these tools as sources of information. The ChatGPT architecture utilizes a neural network to process natural language, thus generating responses based on the context of input text. The accuracy and completeness of ChatGPT3.5 in the context of inflammatory bowel disease [IBD] remains unclear.</p><p><strong>Methods: </strong>In this prospective study, 38 questions worded by IBD patients were inputted into ChatGPT3.5. The following topics were covered: [1] Crohn's disease [CD], ulcerative colitis [UC], and malignancy; [2] maternal medicine; [3] infection and vaccination; and [4] complementary medicine. Responses given by ChatGPT were assessed for accuracy [1-completely incorrect to 5-completely correct] and completeness [3-point Likert scale; range 1-incomplete to 3-complete] by 14 expert gastroenterologists, in comparison with relevant ECCO guidelines.</p><p><strong>Results: </strong>In terms of accuracy, most replies [84.2%] had a median score of ≥4 (interquartile range [IQR]: 2) and a mean score of 3.87 [SD: ±0.6]. For completeness, 34.2% of the replies had a median score of 3 and 55.3% had a median score of between 2 and <3. Overall, the mean rating was 2.24 [SD: ±0.4, median: 2, IQR: 1]. Though groups 3 and 4 had a higher mean for both accuracy and completeness, there was no significant scoring variation between the four question groups [Kruskal-Wallis test p > 0.05]. However, statistical analysis for the different individual questions revealed a significant difference for both accuracy [p < 0.001] and completeness [p < 0.001]. The questions which rated the highest for both accuracy and completeness were related to smoking, while the lowest rating was related to screening for malignancy and vaccinations especially in the context of immunosuppression and family planning.</p><p><strong>Conclusion: </strong>This is the first study to demonstrate the capability of an AI-based system to provide accurate and comprehensive answers to real-world patient queries in IBD. AI systems may serve as a useful adjunct for patients, in addition to standard of care in clinics and validated patient information resources. 
However, responses in specialist areas may deviate from evidence-based guidance and the replies need to give more firm advice.</p>","PeriodicalId":94074,"journal":{"name":"Journal of Crohn's & colitis","volume":" ","pages":"1215-1221"},"PeriodicalIF":0.0000,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Accuracy of Information given by ChatGPT for Patients with Inflammatory Bowel Disease in Relation to ECCO Guidelines.\",\"authors\":\"Martina Sciberras, Yvette Farrugia, Hannah Gordon, Federica Furfaro, Mariangela Allocca, Joana Torres, Naila Arebi, Gionata Fiorino, Marietta Iacucci, Bram Verstockt, Fernando Magro, Kostas Katsanos, Josef Busuttil, Katya De Giovanni, Valerie Anne Fenech, Stefania Chetcuti Zammit, Pierre Ellul\",\"doi\":\"10.1093/ecco-jcc/jjae040\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>As acceptance of artificial intelligence [AI] platforms increases, more patients will consider these tools as sources of information. The ChatGPT architecture utilizes a neural network to process natural language, thus generating responses based on the context of input text. The accuracy and completeness of ChatGPT3.5 in the context of inflammatory bowel disease [IBD] remains unclear.</p><p><strong>Methods: </strong>In this prospective study, 38 questions worded by IBD patients were inputted into ChatGPT3.5. The following topics were covered: [1] Crohn's disease [CD], ulcerative colitis [UC], and malignancy; [2] maternal medicine; [3] infection and vaccination; and [4] complementary medicine. Responses given by ChatGPT were assessed for accuracy [1-completely incorrect to 5-completely correct] and completeness [3-point Likert scale; range 1-incomplete to 3-complete] by 14 expert gastroenterologists, in comparison with relevant ECCO guidelines.</p><p><strong>Results: </strong>In terms of accuracy, most replies [84.2%] had a median score of ≥4 (interquartile range [IQR]: 2) and a mean score of 3.87 [SD: ±0.6]. For completeness, 34.2% of the replies had a median score of 3 and 55.3% had a median score of between 2 and <3. Overall, the mean rating was 2.24 [SD: ±0.4, median: 2, IQR: 1]. Though groups 3 and 4 had a higher mean for both accuracy and completeness, there was no significant scoring variation between the four question groups [Kruskal-Wallis test p > 0.05]. However, statistical analysis for the different individual questions revealed a significant difference for both accuracy [p < 0.001] and completeness [p < 0.001]. The questions which rated the highest for both accuracy and completeness were related to smoking, while the lowest rating was related to screening for malignancy and vaccinations especially in the context of immunosuppression and family planning.</p><p><strong>Conclusion: </strong>This is the first study to demonstrate the capability of an AI-based system to provide accurate and comprehensive answers to real-world patient queries in IBD. AI systems may serve as a useful adjunct for patients, in addition to standard of care in clinics and validated patient information resources. 
However, responses in specialist areas may deviate from evidence-based guidance and the replies need to give more firm advice.</p>\",\"PeriodicalId\":94074,\"journal\":{\"name\":\"Journal of Crohn's & colitis\",\"volume\":\" \",\"pages\":\"1215-1221\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Crohn's & colitis\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1093/ecco-jcc/jjae040\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Crohn's & colitis","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/ecco-jcc/jjae040","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Accuracy of Information given by ChatGPT for Patients with Inflammatory Bowel Disease in Relation to ECCO Guidelines.
Background: As acceptance of artificial intelligence [AI] platforms increases, more patients will consider these tools as sources of information. The ChatGPT architecture uses a neural network to process natural language, generating responses based on the context of the input text. The accuracy and completeness of ChatGPT3.5 in the context of inflammatory bowel disease [IBD] remain unclear.
Methods: In this prospective study, 38 questions phrased by IBD patients were input into ChatGPT3.5. The following topics were covered: [1] Crohn's disease [CD], ulcerative colitis [UC], and malignancy; [2] maternal medicine; [3] infection and vaccination; and [4] complementary medicine. Responses given by ChatGPT were assessed for accuracy [1 = completely incorrect to 5 = completely correct] and completeness [3-point Likert scale; 1 = incomplete to 3 = complete] by 14 expert gastroenterologists, against the relevant ECCO guidelines.
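The abstract does not state how the questions were submitted (presumably via the ChatGPT web interface). Purely as an illustration, the sketch below shows how a similar batch of patient questions could be sent programmatically to a GPT-3.5-class model through the OpenAI Python client; the example questions and variable names are hypothetical and are not taken from the study.

```python
# Illustrative sketch only: the study does not describe programmatic access.
# Assumes the official OpenAI Python client (>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical placeholder questions; the 38 study questions are not reproduced here.
patient_questions = [
    "Does smoking affect Crohn's disease?",
    "Can I receive live vaccines while on immunosuppressive therapy?",
]

for question in patient_questions:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # ChatGPT3.5-class model
        messages=[{"role": "user", "content": question}],
    )
    print(question)
    print(response.choices[0].message.content)
```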
Results: In terms of accuracy, most replies [84.2%] had a median score of ≥4 (interquartile range [IQR]: 2) and a mean score of 3.87 [SD: ±0.6]. For completeness, 34.2% of the replies had a median score of 3 and 55.3% had a median score of ≥2 but <3. Overall, the mean rating was 2.24 [SD: ±0.4; median: 2; IQR: 1]. Although groups 3 and 4 had higher means for both accuracy and completeness, there was no significant variation in scores among the four question groups [Kruskal-Wallis test, p > 0.05]. However, statistical analysis across the individual questions revealed significant differences in both accuracy [p < 0.001] and completeness [p < 0.001]. The questions rated highest for both accuracy and completeness related to smoking, whereas the lowest-rated questions concerned screening for malignancy and vaccination, particularly in the context of immunosuppression and family planning.
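As a point of reference for the group comparison reported above, the sketch below shows how a Kruskal-Wallis comparison of accuracy ratings across the four question groups could be run with SciPy; the rating arrays are invented placeholders, not the study data.

```python
# Illustrative sketch with invented ratings; not the study's data or analysis code.
from scipy.stats import kruskal

# Hypothetical accuracy scores (1-5 Likert) grouped by question topic.
group_1 = [4, 5, 4, 3, 4]  # CD, UC, and malignancy
group_2 = [4, 4, 3, 4, 5]  # maternal medicine
group_3 = [5, 4, 5, 4, 4]  # infection and vaccination
group_4 = [4, 5, 4, 5, 4]  # complementary medicine

statistic, p_value = kruskal(group_1, group_2, group_3, group_4)
print(f"H = {statistic:.2f}, p = {p_value:.3f}")  # p > 0.05 would indicate no significant group difference
```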
Conclusion: This is the first study to demonstrate the capability of an AI-based system to provide accurate and comprehensive answers to real-world patient queries in IBD. AI systems may serve as a useful adjunct for patients, alongside standard of care in clinics and validated patient information resources. However, responses in specialist areas may deviate from evidence-based guidance, and the replies would need to offer firmer advice.