Nan Lin, Lian Li, Weifang Gao, Peng Hu, Gonglin Yuan, Heyang Sun, Fang Qi, Lin Wang, Shengsong Wang, Zi Liang, Haibo He, Yisu Dong, Zaifen Gao, Xiaoqiu Shao, Liying Cui, Qiang Lu
BMC Medicine, vol. 23, no. 1, p. 479. Published 2025-08-15. DOI: 10.1186/s12916-025-04316-3
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12357389/pdf/
Development and validation of a multimodal automatic interictal epileptiform discharge detection model: a prospective multi-center study.
Background: Visual identification of interictal epileptiform discharges (IEDs) is expert-dependent and time-consuming. Accurate automated IED detection models can facilitate epilepsy diagnosis. This study aimed to develop a multimodal IED detection model (vEpiNetV2) and conduct a multi-center validation.
Methods: We constructed a large training dataset to train vEpiNetV2, comprising 26,706 IED and 194,797 non-IED 4-s video-EEG epochs from 530 patients at Peking Union Medical College Hospital (PUMCH). The automated IED detection model was built with deep learning on video and electroencephalogram (EEG) features. We proposed a bad-channel removal model and a patient detection method to improve the robustness of vEpiNetV2 for multi-center validation. Performance was verified on a prospective multi-center test dataset, with area under the precision-recall curve (AUPRC) and area under the ROC curve (AUC) as metrics.
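The abstract does not give the preprocessing pipeline, but the 4-s epoching described above can be sketched as follows (a hypothetical illustration, not the authors' code): a continuous multi-channel EEG recording is sliced into fixed-length windows, each of which becomes one training or test epoch.

```python
# Hypothetical sketch of the 4-s epoch segmentation described in Methods.
# eeg: list of channels, each a list (or array) of samples at rate fs Hz.

def segment_epochs(eeg, fs, epoch_sec=4):
    """Slice continuous multi-channel EEG into non-overlapping epochs.

    Returns a list of epochs; each epoch is a list of per-channel
    sample windows of length epoch_sec * fs.
    """
    n = epoch_sec * fs                 # samples per epoch
    n_epochs = len(eeg[0]) // n        # drop the trailing partial window
    return [[ch[i * n:(i + 1) * n] for ch in eeg] for i in range(n_epochs)]
```

Each 4-s epoch would then be labeled IED or non-IED by expert annotation before training.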
Results: To fairly evaluate model performance, we constructed a large test dataset containing 149 patients, 377 h of video-EEG data, and 9232 IEDs from PUMCH, the Children's Hospital Affiliated to Shandong University (SDQLCH), and Beijing Tiantan Hospital (BJTTH). Amplitude discrepancies were observed across centers and could be distinguished by a classifier. vEpiNetV2 demonstrated favorable accuracy for IED detection, achieving AUPRC/AUC values of 0.76/0.98 (PUMCH), 0.78/0.96 (SDQLCH), and 0.76/0.98 (BJTTH), with false positive rates of 0.16-0.31 per minute at 80% sensitivity. Incorporating video features improved precision by 9%, 7%, and 5% at the three centers, respectively. At 95% sensitivity, video features eliminated 24% of false positives across the whole test dataset. While bad channels decreased model precision, video features compensated for this deficiency. Accurate patient detection was essential, as incorrect patient detection degraded overall performance.
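The three reported metrics (AUC, AUPRC, and false positives per minute at a fixed sensitivity) can all be computed from per-epoch scores and labels. The sketch below is an illustration of how these standard metrics are defined, not the authors' evaluation code; AUC is computed via the Mann-Whitney statistic and AUPRC via average precision.

```python
# Sketch of the evaluation metrics named in Results, in pure Python.
# labels: 1 = IED epoch, 0 = non-IED epoch; scores: model confidences.

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(labels, scores):
    """AUPRC approximated as average precision over ranked epochs."""
    ranked = sorted(zip(scores, labels), reverse=True)
    tp, ap, n_pos = 0, 0.0, sum(labels)
    for i, (_, l) in enumerate(ranked, start=1):
        if l == 1:
            tp += 1
            ap += tp / i          # precision at each recall step
    return ap / n_pos

def fp_per_minute(labels, scores, sensitivity, minutes):
    """False positives per minute at the lowest threshold reaching
    the target sensitivity (e.g., 0.80 for the reported 80% operating point)."""
    pos_scores = sorted((s for l, s in zip(labels, scores) if l == 1),
                        reverse=True)
    k = max(1, round(sensitivity * len(pos_scores)))
    thresh = pos_scores[k - 1]    # lowest score still classified positive
    fp = sum(1 for l, s in zip(labels, scores) if l == 0 and s >= thresh)
    return fp / minutes
```

A false-positive rate per minute, rather than per epoch, is the clinically relevant unit here because it tells a reviewer how many spurious detections they must dismiss per minute of recording.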
Conclusions: The multimodal IED detection model, which integrates video and EEG features, demonstrated high precision and robustness. The large multi-center validation confirmed its potential for real-world clinical application and the value of video features in IED analysis.
Journal introduction:
BMC Medicine is an open access, transparent peer-reviewed general medical journal. It is the flagship journal of the BMC series and publishes outstanding and influential research in areas including clinical practice, translational medicine, medical and health advances, public health, global health, policy, and general topics of interest to the biomedical and sociomedical professional communities. In addition to research articles, the journal also publishes stimulating debates, reviews, unique forum articles, and concise tutorials. All articles published in BMC Medicine are included in databases such as Biological Abstracts, BIOSIS, CAS, Citebase, Current Contents, DOAJ, Embase, MEDLINE, PubMed, Science Citation Index Expanded, OAIster, SCImago, Scopus, SOCOLAR, and Zetoc.