Evaluating the scientific reliability of ChatGPT as a source of information on asthma

Simon Høj BSc, Simon Francis Thomsen MD, PhD, DMSci, Charlotte Suppli Ulrik MD, PhD, DMSci, Hanieh Meteran MD, Torben Sigsgaard MD, PhD, Howraman Meteran MD, PhD

The Journal of Allergy and Clinical Immunology: Global, Volume 3, Issue 4, Article 100330. DOI: 10.1016/j.jacig.2024.100330
Abstract
Background
This study assessed the reliability of ChatGPT as a source of information on asthma, given the increasing use of artificial intelligence–driven models for medical information. Prior concerns about misinformation on atopic diseases across various digital platforms underline the importance of this evaluation.
Objective
We aimed to evaluate the scientific reliability of ChatGPT as a source of information on asthma.
Methods
The study analyzed ChatGPT’s responses to 26 asthma-related questions, each paired with a follow-up question. The questions covered definition/risk factors, diagnosis, treatment, lifestyle factors, and specific clinical inquiries. Medical professionals specializing in allergic and respiratory diseases independently assessed the responses on a 1-to-5 accuracy scale.
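As a minimal illustration of how such ratings might be tabulated and summarized, the sketch below stores hypothetical 1-to-5 scores in a responses-by-raters matrix and collapses raters with a per-response median; the figures and the aggregation choice are illustrative assumptions, not the study’s actual data or procedure.

```python
import numpy as np

# Hypothetical layout only: rows are assessed responses, columns are raters,
# entries are 1-to-5 accuracy scores (illustrative values, not the study's data).
scores = np.array([
    [5, 4, 4],
    [4, 4, 5],
    [3, 3, 2],
    [5, 5, 4],
    [4, 3, 4],
])

# One plausible summary: collapse raters via a per-response median, then report
# the share of responses scoring 4 or higher and the overall median score.
per_response = np.median(scores, axis=1)
share_high = np.mean(per_response >= 4)
print(f"Responses scoring >= 4: {share_high:.0%}")
print(f"Overall median score: {np.median(per_response):.0f}")
```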
Results
Approximately 81% of the responses scored 4 or higher, suggesting a generally high level of accuracy. However, 5 responses scored 3 or lower, indicating minor, potentially harmful inaccuracies. The overall median score was 4, and the Fleiss multirater kappa indicated moderate agreement among raters.
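For readers who want to see how the agreement statistic is computed, the sketch below implements Fleiss’ multirater kappa from an items-by-categories count matrix (how many raters assigned each rating category to each response). The count table shown is hypothetical, assuming 3 raters and the 1-to-5 scale; it is not the study’s data.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' multirater kappa.

    counts: (n_items, n_categories) array where counts[i, j] is the number of
    raters who assigned category j to item i. Every row must sum to the same
    number of raters.
    """
    counts = np.asarray(counts, dtype=float)
    n_items = counts.shape[0]
    n_raters = counts[0].sum()

    # Proportion of all assignments falling into each category.
    p_j = counts.sum(axis=0) / (n_items * n_raters)

    # Per-item agreement: fraction of rater pairs that agree on item i.
    P_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))

    P_bar = P_i.mean()           # observed agreement
    P_e = np.square(p_j).sum()   # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 4 responses rated by 3 raters; columns are categories 1..5.
ratings = np.array([
    [0, 0, 0, 2, 1],
    [0, 0, 1, 2, 0],
    [0, 0, 0, 0, 3],
    [0, 1, 1, 1, 0],
])
print(f"Fleiss kappa: {fleiss_kappa(ratings):.2f}")
```

Values around 0.4 to 0.6 on this statistic are conventionally read as moderate agreement, which is the range reported here.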
Conclusion
ChatGPT generally provides reliable asthma-related information, although limitations were noted, such as a lack of depth in certain responses and an inability to cite sources or update in real time. It shows promise as an educational tool but should not substitute for professional medical advice. Future studies should explore its applicability across different user demographics and compare it with newer artificial intelligence models.