Artificial intelligence (ChatGPT 4.0) vs. human expertise for epileptic seizure and epilepsy diagnosis and classification in adults: An exploratory study
{"title":"Artificial intelligence (ChatGPT 4.0) vs. Human expertise for epileptic seizure and epilepsy diagnosis and classification in Adults: An exploratory study","authors":"Francesco Brigo , Serena Broggi , Gionata Strigaro , Sasha Olivo , Valentina Tommasini , Magdalena Massar , Gianni Turcato , Arian Zaboli","doi":"10.1016/j.yebeh.2025.110364","DOIUrl":null,"url":null,"abstract":"<div><h3>Aims</h3><div>Artificial intelligence (AI) tools like ChatGPT hold promise for enhancing diagnostic accuracy and efficiency in clinical practice. This exploratory study evaluates ChatGPT’s performance in diagnosing and classifying epileptic seizures, epilepsy, and underlying etiologies in adult patients compared to epileptologists and neurologists.</div></div><div><h3>Methods</h3><div>A prospective simulation study assessed 37 clinical vignettes based on real adult patient cases. ChatGPT was ’trained’ using official ILAE documents on epilepsy diagnosis and classification. Diagnoses and classifications by ChatGPT, two epileptologists, and two neurologists were compared against a reference standard set by a senior epileptologist. Diagnostic accuracy was evaluated using sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Cohen’s kappa (κ) was calculated to assess agreement.</div></div><div><h3>Results</h3><div>ChatGPT demonstrated high sensitivity (≥96.9 %) in identifying epileptic seizures and diagnosing epilepsy, ensuring no cases were missed. However, its specificity was lower, particularly for distinguishing acute symptomatic from unprovoked seizures (33.3 %) and diagnosing epilepsy (26.7 %), leading to frequent false positives. ChatGPT excelled in diagnosing epileptic syndromes (κ = 1.00) and structural etiologies (accuracy = 90.0 %) but struggled with ambiguous cases such as unknown seizure onset (accuracy = 12.5 %) and rare etiologies. Human experts consistently outperformed ChatGPT with near-perfect accuracy and higher κ values.</div></div><div><h3>Conclusion</h3><div>ChatGPT shows potential as a supplementary diagnostic tool but requires human oversight due to reduced specificity and limitations in nuanced clinical judgment. Further development with diverse datasets and targeted training is necessary to improve AI performance. Integrating AI with expert clinicians can optimize diagnostic workflows in epilepsy care.</div></div>","PeriodicalId":11847,"journal":{"name":"Epilepsy & Behavior","volume":"166 ","pages":"Article 110364"},"PeriodicalIF":2.3000,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Epilepsy & Behavior","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1525505025001039","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"BEHAVIORAL SCIENCES","Score":null,"Total":0}
引用次数: 0
Abstract
Aims
Artificial intelligence (AI) tools like ChatGPT hold promise for enhancing diagnostic accuracy and efficiency in clinical practice. This exploratory study evaluates ChatGPT’s performance in diagnosing and classifying epileptic seizures, epilepsy, and underlying etiologies in adult patients compared to epileptologists and neurologists.
Methods
A prospective simulation study assessed 37 clinical vignettes based on real adult patient cases. ChatGPT was 'trained' using official International League Against Epilepsy (ILAE) documents on epilepsy diagnosis and classification. Diagnoses and classifications by ChatGPT, two epileptologists, and two neurologists were compared against a reference standard set by a senior epileptologist. Diagnostic accuracy was evaluated using sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Cohen's kappa (κ) was calculated to assess agreement.
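For readers less familiar with these metrics, their standard definitions are summarized below (the notation is introduced here for illustration and does not appear in the original study: TP, FP, TN, and FN denote true/false positives and negatives relative to the reference standard, while p_o and p_e denote observed and chance-expected agreement):

\begin{align*}
\text{Sensitivity} &= \frac{TP}{TP + FN}, &
\text{Specificity} &= \frac{TN}{TN + FP}, \\
\text{PPV} &= \frac{TP}{TP + FP}, &
\text{NPV} &= \frac{TN}{TN + FN}, \\
\kappa &= \frac{p_o - p_e}{1 - p_e}
\end{align*}

Cohen's κ corrects raw agreement for the agreement expected by chance alone, so κ = 1.00 indicates perfect agreement beyond chance; this is the scale on which the syndrome-classification result in the Results section should be read.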
Results
ChatGPT demonstrated high sensitivity (≥96.9 %) in identifying epileptic seizures and diagnosing epilepsy, so that no cases were missed. However, its specificity was lower, particularly for distinguishing acute symptomatic from unprovoked seizures (33.3 %) and for diagnosing epilepsy (26.7 %), leading to frequent false positives. ChatGPT excelled at diagnosing epileptic syndromes (κ = 1.00) and structural etiologies (accuracy = 90.0 %) but struggled with ambiguous cases, such as seizures of unknown onset (accuracy = 12.5 %), and with rare etiologies. Human experts consistently outperformed ChatGPT, achieving near-perfect accuracy and higher κ values.
Conclusion
ChatGPT shows potential as a supplementary diagnostic tool but requires human oversight due to reduced specificity and limitations in nuanced clinical judgment. Further development with diverse datasets and targeted training is necessary to improve AI performance. Integrating AI with expert clinicians can optimize diagnostic workflows in epilepsy care.
Journal Introduction
Epilepsy & Behavior is the fastest-growing international journal uniquely devoted to the rapid dissemination of the most current information available on the behavioral aspects of seizures and epilepsy.
Epilepsy & Behavior presents original peer-reviewed articles based on laboratory and clinical research. Topics are drawn from a variety of fields, including clinical neurology, neurosurgery, neuropsychiatry, neuropsychology, neurophysiology, neuropharmacology, and neuroimaging.
From September 2012 Epilepsy & Behavior stopped accepting Case Reports for publication in the journal. From this date authors who submit to Epilepsy & Behavior will be offered a transfer or asked to resubmit their Case Reports to its new sister journal, Epilepsy & Behavior Case Reports.