Development and validation of a multimodal automatic interictal epileptiform discharge detection model: a prospective multi-center study.

Impact factor: 8.3 · CAS Tier 1 (Medicine) · JCR Q1, Medicine, General & Internal
Nan Lin, Lian Li, Weifang Gao, Peng Hu, Gonglin Yuan, Heyang Sun, Fang Qi, Lin Wang, Shengsong Wang, Zi Liang, Haibo He, Yisu Dong, Zaifen Gao, Xiaoqiu Shao, Liying Cui, Qiang Lu
{"title":"Development and validation of a multimodal automatic interictal epileptiform discharge detection model: a prospective multi-center study.","authors":"Nan Lin, Lian Li, Weifang Gao, Peng Hu, Gonglin Yuan, Heyang Sun, Fang Qi, Lin Wang, Shengsong Wang, Zi Liang, Haibo He, Yisu Dong, Zaifen Gao, Xiaoqiu Shao, Liying Cui, Qiang Lu","doi":"10.1186/s12916-025-04316-3","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Visual identification of interictal epileptiform discharge (IED) is expert-biased and time-consuming. Accurate automated IED detection models can facilitate epilepsy diagnosis. This study aims to develop a multimodal IED detection model (vEpiNetV2) and conduct a multi-center validation.</p><p><strong>Methods: </strong>We constructed a large training dataset to train vEpiNetV2, which comprises 26,706 IEDs and 194,797 non-IED 4-s video-EEG epochs from 530 patients at Peking Union Medical College Hospital (PUMCH). The automated IED detection model was constructed using deep learning based on video and electroencephalogram (EEG) features. We proposed a bad channel removal model and patient detection method to improve the robustness of vEpiNetV2 for multi-center validation. Performance is verified in a prospective multi-center test dataset, with area under the precision-recall curve (AUPRC) and area under the curve (AUC) as metrics.</p><p><strong>Results: </strong>To fairly evaluate the model performance, we constructed a large test dataset containing 149 patients, 377 h video-EEG data, and 9232 IEDs from PUMCH, Children's Hospital Affiliated to Shandong University (SDQLCH) and Beijing Tiantan Hospital (BJTTH). Amplitude discrepancies are observed across centers and could be classified by a classifier. vEpiNetV2 demonstrated favorable accuracy for the IED detection, achieving AUPRC/AUC values of 0.76/0.98 (PUMCH), 0.78/0.96 (SDQLCH), and 0.76/0.98 (BJTTH), with false positive rates of 0.16-0.31 per minute at 80% sensitivity. Incorporating video features improves precision by 9%, 7%, and 5% at three centers, respectively. At 95% sensitivity, video features eliminated 24% false positives in the whole test dataset. While bad channels decreased model precision, video features compensate for this deficiency. Accurate patient detection is essential; otherwise, incorrect patient detection can negatively impact overall performance.</p><p><strong>Conclusions: </strong>The multimodal IED detection model, which integrates video and EEG features, demonstrated high precision and robustness. The large multi-center validation confirmed its potential for real-world clinical application and the value of video features in IED analysis.</p>","PeriodicalId":9188,"journal":{"name":"BMC Medicine","volume":"23 1","pages":"479"},"PeriodicalIF":8.3000,"publicationDate":"2025-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12357389/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMC Medicine","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s12916-025-04316-3","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MEDICINE, GENERAL & INTERNAL","Score":null,"Total":0}
Citations: 0

Abstract

Background: Visual identification of interictal epileptiform discharges (IEDs) is subject to expert bias and is time-consuming. Accurate automated IED detection models can facilitate epilepsy diagnosis. This study aimed to develop a multimodal IED detection model (vEpiNetV2) and to conduct a multi-center validation.

Methods: We constructed a large training dataset for vEpiNetV2 comprising 26,706 IED and 194,797 non-IED 4-s video-EEG epochs from 530 patients at Peking Union Medical College Hospital (PUMCH). The automated IED detection model was built with deep learning on video and electroencephalogram (EEG) features. We also proposed a bad-channel removal model and a patient detection method to improve the robustness of vEpiNetV2 for multi-center validation. Performance was verified on a prospective multi-center test dataset, with the area under the precision-recall curve (AUPRC) and the area under the curve (AUC) as metrics.
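As an illustration only (the paper's code is not reproduced here), the epoch-level metrics named above, AUPRC and AUC, could be computed from per-epoch model scores roughly as sketched below; the variable names `y_true` and `y_score` and the random placeholder data are assumptions, not part of the authors' pipeline.

```python
# Hypothetical sketch: epoch-level AUPRC/AUC for a binary IED classifier.
# y_true holds binary labels (1 = IED epoch), y_score holds the model's
# per-epoch scores; both are synthetic placeholders here.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                           # placeholder labels
y_score = np.clip(y_true * 0.6 + rng.random(1000) * 0.5, 0, 1)   # placeholder scores

auprc = average_precision_score(y_true, y_score)  # area under precision-recall curve
auc = roc_auc_score(y_true, y_score)              # area under ROC curve
print(f"AUPRC={auprc:.2f}, AUC={auc:.2f}")
```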

Results: To fairly evaluate model performance, we constructed a large test dataset containing 149 patients, 377 h of video-EEG data, and 9232 IEDs from PUMCH, the Children's Hospital Affiliated to Shandong University (SDQLCH), and Beijing Tiantan Hospital (BJTTH). Amplitude discrepancies were observed across centers and could be distinguished by a classifier. vEpiNetV2 demonstrated favorable accuracy for IED detection, achieving AUPRC/AUC values of 0.76/0.98 (PUMCH), 0.78/0.96 (SDQLCH), and 0.76/0.98 (BJTTH), with false positive rates of 0.16-0.31 per minute at 80% sensitivity. Incorporating video features improved precision by 9%, 7%, and 5% at the three centers, respectively. At 95% sensitivity, video features eliminated 24% of false positives in the whole test dataset. While bad channels decreased model precision, video features compensated for this deficiency. Accurate patient detection was essential, as incorrect patient detection negatively impacted overall performance.
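For context, the "false positives per minute at 80% sensitivity" figure reported above can be derived by thresholding the per-epoch scores at the point where sensitivity first reaches the target and dividing the resulting false-positive count by the recording duration. A minimal sketch under that assumption follows; the function name and arguments are hypothetical.

```python
# Hypothetical sketch: false positives per minute at a fixed sensitivity.
# y_true / y_score as in the previous snippet; total_minutes is the total
# recording length covered by the epochs (assumed, not taken from the paper).
import numpy as np

def fp_per_minute_at_sensitivity(y_true, y_score, total_minutes, target_sens=0.80):
    y_true = np.asarray(y_true, dtype=bool)
    y_score = np.asarray(y_score, dtype=float)
    # Scan candidate thresholds from high to low and stop once sensitivity
    # (recall on the positive class) reaches the target.
    for thr in np.sort(np.unique(y_score))[::-1]:
        pred = y_score >= thr
        sens = (pred & y_true).sum() / max(y_true.sum(), 1)
        if sens >= target_sens:
            fp = (pred & ~y_true).sum()
            return fp / total_minutes
    return float("nan")  # target sensitivity never reached

# Example call for a 377-h test set (377 * 60 minutes), with assumed arrays:
# fp_per_minute_at_sensitivity(y_true, y_score, total_minutes=377 * 60)
```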

Conclusions: The multimodal IED detection model, which integrates video and EEG features, demonstrated high precision and robustness. The large multi-center validation confirmed its potential for real-world clinical application and the value of video features in IED analysis.

Source journal

BMC Medicine (Medicine: Internal)
CiteScore: 13.10
Self-citation rate: 1.10%
Articles per year: 435
Review time: 4-8 weeks
Journal description: BMC Medicine is an open access, transparent peer-reviewed general medical journal. It is the flagship journal of the BMC series and publishes outstanding and influential research in various areas including clinical practice, translational medicine, medical and health advances, public health, global health, policy, and general topics of interest to the biomedical and sociomedical professional communities. In addition to research articles, the journal also publishes stimulating debates, reviews, unique forum articles, and concise tutorials. All articles published in BMC Medicine are included in various databases such as Biological Abstracts, BIOSIS, CAS, Citebase, Current Contents, DOAJ, Embase, MEDLINE, PubMed, Science Citation Index Expanded, OAIster, SCImago, Scopus, SOCOLAR, and Zetoc.