{"title":"人工智能在临床神经学查询中的表现:ChatGPT 模型。","authors":"Erman Altunisik, Yasemin Ekmekyapar Firat, Emine Kilicparlar Cengiz, Gulsum Bayana Comruk","doi":"10.1080/01616412.2024.2334118","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>The use of artificial intelligence technology is progressively expanding and advancing in the health and biomedical literature. Since its launch, ChatGPT has rapidly gained popularity and become one of the fastest-growing artificial intelligence applications in history. This study evaluated the accuracy and comprehensiveness of ChatGPT-generated responses to medical queries in clinical neurology.</p><p><strong>Methods: </strong>We directed 216 questions from different subspecialties to ChatGPT. The questions were classified into three categories: multiple-choice, descriptive, and binary (yes/no answers). Each question in all categories was subjectively rated as easy, medium, or hard according to its difficulty level. Questions that also tested for intuitive clinical thinking and reasoning ability were evaluated in a separate category.</p><p><strong>Results: </strong>ChatGPT correctly answered 141 questions (65.3%). No significant difference was detected in the accuracy and comprehensiveness scale scores or correct answer rates in comparisons made according to the question style or difficulty level. However, a comparative analysis assessing question characteristics revealed significantly lower accuracy and comprehensiveness scale scores and correct answer rates for questions based on interpretations that required critical thinking (<i>p</i> = 0.007, 0.007, and 0.001, respectively).</p><p><strong>Conclusion: </strong>ChatGPT had a moderate overall performance in clinical neurology and demonstrated inadequate performance in answering questions that required interpretation and critical thinking. It also displayed limited performance in specific subspecialties. It is essential to acknowledge the limitations of artificial intelligence and diligently verify medical information produced by such models using reliable sources.</p>","PeriodicalId":19131,"journal":{"name":"Neurological Research","volume":" ","pages":"437-443"},"PeriodicalIF":1.7000,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Artificial intelligence performance in clinical neurology queries: the ChatGPT model.\",\"authors\":\"Erman Altunisik, Yasemin Ekmekyapar Firat, Emine Kilicparlar Cengiz, Gulsum Bayana Comruk\",\"doi\":\"10.1080/01616412.2024.2334118\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Introduction: </strong>The use of artificial intelligence technology is progressively expanding and advancing in the health and biomedical literature. Since its launch, ChatGPT has rapidly gained popularity and become one of the fastest-growing artificial intelligence applications in history. This study evaluated the accuracy and comprehensiveness of ChatGPT-generated responses to medical queries in clinical neurology.</p><p><strong>Methods: </strong>We directed 216 questions from different subspecialties to ChatGPT. The questions were classified into three categories: multiple-choice, descriptive, and binary (yes/no answers). Each question in all categories was subjectively rated as easy, medium, or hard according to its difficulty level. 
Questions that also tested for intuitive clinical thinking and reasoning ability were evaluated in a separate category.</p><p><strong>Results: </strong>ChatGPT correctly answered 141 questions (65.3%). No significant difference was detected in the accuracy and comprehensiveness scale scores or correct answer rates in comparisons made according to the question style or difficulty level. However, a comparative analysis assessing question characteristics revealed significantly lower accuracy and comprehensiveness scale scores and correct answer rates for questions based on interpretations that required critical thinking (<i>p</i> = 0.007, 0.007, and 0.001, respectively).</p><p><strong>Conclusion: </strong>ChatGPT had a moderate overall performance in clinical neurology and demonstrated inadequate performance in answering questions that required interpretation and critical thinking. It also displayed limited performance in specific subspecialties. It is essential to acknowledge the limitations of artificial intelligence and diligently verify medical information produced by such models using reliable sources.</p>\",\"PeriodicalId\":19131,\"journal\":{\"name\":\"Neurological Research\",\"volume\":\" \",\"pages\":\"437-443\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2024-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neurological Research\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1080/01616412.2024.2334118\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/3/24 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q3\",\"JCRName\":\"CLINICAL NEUROLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurological Research","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1080/01616412.2024.2334118","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/3/24 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"CLINICAL NEUROLOGY","Score":null,"Total":0}
Artificial intelligence performance in clinical neurology queries: the ChatGPT model.
Introduction: Artificial intelligence technology is expanding rapidly in the health and biomedical literature. Since its launch, ChatGPT has quickly gained popularity, becoming one of the fastest-growing artificial intelligence applications in history. This study evaluated the accuracy and comprehensiveness of ChatGPT-generated responses to medical queries in clinical neurology.
Methods: We posed 216 questions from different neurology subspecialties to ChatGPT. The questions were classified into three styles: multiple-choice, descriptive, and binary (yes/no). Each question was also subjectively rated as easy, medium, or hard according to its difficulty. Questions that additionally tested intuitive clinical thinking and reasoning ability were evaluated as a separate category.
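The Methods describe the grading scheme only in prose. Below is a minimal sketch of how such a question taxonomy might be represented for tallying results; the class and field names are our own illustration, not the authors' instrument, and the rating-scale ranges are not given in the abstract:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Style(Enum):
    MULTIPLE_CHOICE = "multiple-choice"
    DESCRIPTIVE = "descriptive"
    BINARY = "binary"  # yes/no answers

class Difficulty(Enum):
    EASY = "easy"
    MEDIUM = "medium"
    HARD = "hard"

@dataclass
class Question:
    text: str
    subspecialty: str
    style: Style
    difficulty: Difficulty
    requires_reasoning: bool  # the separate clinical-reasoning category
    correct: Optional[bool] = None                 # graded after ChatGPT answers
    accuracy_score: Optional[int] = None           # rater accuracy-scale score
    comprehensiveness_score: Optional[int] = None  # rater comprehensiveness-scale score
```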
Results: ChatGPT answered 141 of the 216 questions correctly (65.3%). Comparisons by question style or difficulty level showed no significant differences in accuracy scores, comprehensiveness scores, or correct-answer rates. However, questions that required interpretation and critical thinking received significantly lower accuracy scores, comprehensiveness scores, and correct-answer rates (p = 0.007, 0.007, and 0.001, respectively).
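As a quick check of the headline arithmetic, and a sketch of the kind of contingency-table comparison such analyses typically use: the abstract does not name the statistical test, so the chi-square test here is an assumption, and the per-group cell counts are hypothetical, chosen only to stay consistent with the reported 141/216 total.

```python
from scipy.stats import chi2_contingency

# Reported overall performance: 141 of 216 questions answered correctly.
correct, total = 141, 216
print(f"Overall correct-answer rate: {correct / total:.1%}")  # -> 65.3%

# Hypothetical 2x2 table of (correct, incorrect) counts for
# reasoning-based vs. other questions; the abstract reports only
# p-values, not the underlying counts.
observed = [
    [20, 25],    # questions requiring critical thinking
    [121, 50],   # all other questions
]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```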
Conclusion: ChatGPT showed moderate overall performance in clinical neurology and performed inadequately on questions requiring interpretation and critical thinking; its performance was also limited in certain subspecialties. The limitations of artificial intelligence must be acknowledged, and medical information produced by such models should be diligently verified against reliable sources.
Journal introduction:
Neurological Research is an international, peer-reviewed journal reporting both basic and clinical research in neurosurgery, neurology, neuroengineering, and the neurosciences. It provides a medium for authors who recognize the wider implications of their work and who wish to stay informed of relevant experience in related and more distant fields.
The scope of the journal includes:
•Stem cell applications
•Molecular neuroscience
•Neuropharmacology
•Neuroradiology
•Neurochemistry
•Biomathematical models
•Endovascular neurosurgery
•Innovation in neurosurgery.