Hakan Cakir, Ufuk Caglar, Sami Sekkeli, Esra Zerdali, Omer Sarilar, Oguzhan Yildiz, Faruk Ozgor
Infectious Diseases Now, 54(4), Article 104884. Published 2024-03-08. DOI: 10.1016/j.idnow.2024.104884
Evaluating ChatGPT's ability to answer urinary tract infection-related questions
Introduction
For the first time, the accuracy and proficiency of ChatGPT answers concerning urinary tract infections (UTIs) were evaluated.
Methods
Two lists of questions were created: frequently asked questions (FAQs; public-based inquiries) on relevant topics, and questions based on guideline information (guideline-based inquiries). ChatGPT responses to the FAQs and the scientific questions were scored by two urologists and an infectious disease specialist. The quality and reliability of all ChatGPT answers were assessed using the Global Quality Score (GQS). The reproducibility of ChatGPT answers was analyzed by asking each question twice.
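The reproducibility analysis described above (each question asked twice, with the two answers compared) can be sketched as a simple pairwise agreement rate. This is an illustrative reconstruction, not the study's actual code or data; the scores below are hypothetical.

```python
# Hypothetical reproducibility check: each question is asked twice and the
# two GQS scores are compared. An answer pair counts as reproducible when
# both rounds receive the same score. All scores here are illustrative,
# not the study's actual data.
def reproducibility_rate(first_round, second_round):
    """Fraction of questions whose two GQS scores match across the two asks."""
    matches = sum(a == b for a, b in zip(first_round, second_round))
    return matches / len(first_round)

round_1 = [5, 5, 4, 5, 3, 5]  # GQS scores, first ask (hypothetical)
round_2 = [5, 5, 4, 5, 5, 5]  # GQS scores, second ask (hypothetical)
rate = reproducibility_rate(round_1, round_2)
print(f"{rate:.0%} reproducible")  # 5 of 6 pairs agree here
```

In the study itself, this rate exceeded 90 % for both question groups.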
Results
Overall, 96.2 % of FAQs (75/78 inquiries) related to UTIs were answered correctly and adequately by ChatGPT, scoring GQS 5. None of the ChatGPT answers were classified as GQS 2 or GQS 1. Moreover, FAQs about cystitis, urethritis, and epididymo-orchitis were answered by ChatGPT with 100 % accuracy (GQS 5). For the EAU urological infections guidelines, 61 (89.7 %), 5 (7.4 %), and 2 (2.9 %) ChatGPT responses were scored GQS 5, GQS 4, and GQS 3, respectively. None of the ChatGPT responses to the EAU urological infections guidelines were categorized as GQS 2 or GQS 1. Comparison of the mean GQS values of ChatGPT answers to FAQs and EAU urological guideline questions showed that ChatGPT responded similarly well to both question groups (p = 0.168). The ChatGPT response reproducibility rate was highest for the FAQ subgroups of cystitis, urethritis, and epididymo-orchitis (100 % for each subgroup).
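The between-group comparison reported above (p = 0.168) can be sketched as follows. The abstract does not name the statistical test used; a Mann-Whitney U test is a common choice for ordinal GQS scores and is assumed here. The GQS 4/3 split among the three non-GQS-5 FAQ answers is not given in the abstract, so it is hypothetical; the guideline-group counts are as reported.

```python
# Sketch of a between-group GQS comparison. The test choice (Mann-Whitney U)
# and the FAQ-group GQS 4/3 split are assumptions, not details from the paper.
from scipy.stats import mannwhitneyu

faq_gqs = [5] * 75 + [4] * 2 + [3] * 1        # 78 FAQ answers (split assumed)
guideline_gqs = [5] * 61 + [4] * 5 + [3] * 2  # 68 guideline answers (as reported)

stat, p = mannwhitneyu(faq_gqs, guideline_gqs, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")
```

A non-significant p-value here would mirror the study's finding that answer quality did not differ between the two question groups.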
Conclusion
The present study showed that ChatGPT gave accurate and satisfactory answers to both public-based inquiries and EAU urological infection guideline-based questions. The reproducibility of ChatGPT answers exceeded 90 % for both FAQs and scientific questions.