Sahaj A. Patel, Helen Brinyark, Caila Coyne, Noshin Tasnia, Rebekah Chatfield, Erin C. Conrad, Benjamin Cox, Arie Nakhmani, Rachel J. Smith
Cortico-cortical evoked potentials: Automated localization and classification of early and late responses

Journal of Neuroscience Methods, Volume 424, Article 110571
DOI: 10.1016/j.jneumeth.2025.110571
Published: 2025-09-05
URL: https://www.sciencedirect.com/science/article/pii/S0165027025002158
Citations: 0
Abstract
Background
Cortico-cortical evoked potentials (CCEPs), elicited via single-pulse electrical stimulation, are used to map brain networks. These responses comprise early (N1) and late (N2) components, which reflect direct and indirect cortical connectivity. Reliable identification of these components remains difficult due to substantial variability in amplitude, phase, and timing. Traditional statistical methods often struggle to localize N1 and N2 peaks under such conditions.
New Method
A deep learning framework based on You Only Look Once (YOLO v10) was developed. Each CCEP epoch was converted into a two-dimensional image using Matplotlib and subsequently analyzed by the YOLO model to localize and classify N1 and N2 components. Detected image coordinates were mapped back to corresponding time-series indices for clinical interpretation.
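The abstract does not specify the rendering settings or the pixel-to-sample mapping. As a minimal sketch, assuming a fixed-size canvas whose axes span the full epoch (the canvas size, DPI, and function names below are hypothetical, not from the paper), the epoch-to-image conversion and the inverse coordinate mapping might look like:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # off-screen rendering
import matplotlib.pyplot as plt

def epoch_to_image(epoch, width_px=640, height_px=640, dpi=100):
    """Render a 1-D CCEP epoch as a fixed-size RGB image (hypothetical settings)."""
    fig = plt.figure(figsize=(width_px / dpi, height_px / dpi), dpi=dpi)
    # Axes fill the canvas so pixel x-coordinates map linearly to sample indices.
    ax = fig.add_axes([0, 0, 1, 1])
    ax.plot(np.arange(epoch.size), epoch, linewidth=1)
    ax.set_xlim(0, epoch.size - 1)
    ax.axis("off")
    fig.canvas.draw()
    img = np.asarray(fig.canvas.buffer_rgba())[..., :3]  # drop alpha channel
    plt.close(fig)
    return img

def pixel_x_to_sample(x_px, n_samples, width_px=640):
    """Map a detected bounding-box x-coordinate (pixels) back to a time-series index."""
    return int(round(x_px / (width_px - 1) * (n_samples - 1)))
```

A detected N1/N2 bounding box's horizontal center would then be passed through `pixel_x_to_sample` to recover the peak's latency in the original recording.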
Results
The framework was trained and validated on intracranial EEG data from 9 patients with drug-resistant epilepsy (DRE) at the University of Alabama at Birmingham (UAB), achieving a mean average precision (mAP) of 0.928 at an Intersection over Union (IoU) threshold of 0.5 on the test dataset. Generalizability was assessed on more than 4000 unannotated epochs obtained from 5 additional UAB patients and 10 patients at the Hospital of the University of Pennsylvania.
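The mAP@0.5 figure means a predicted box counts as correct when its Intersection over Union with the annotated box is at least 0.5. A standard IoU computation for axis-aligned boxes (this is the generic definition, not code from the paper) is:

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; clamp to zero when the boxes are disjoint.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

Mean average precision then averages, over the N1 and N2 classes, the area under each class's precision-recall curve computed with this 0.5 IoU matching threshold.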
Comparison with existing methods
To our knowledge, no existing deep learning methods localize and classify both N1 and N2 components, limiting comparison. Current approaches rely on manual identification within fixed windows, introducing inter-rater variability and often missing inter-individual differences.
Conclusion
The proposed framework accurately detects and classifies CCEP components, offering a robust, automated alternative to manual analysis.
About the journal
The Journal of Neuroscience Methods publishes papers that describe new methods specifically for neuroscience research conducted in invertebrates, vertebrates, or humans. Major methodological improvements or important refinements of established neuroscience methods are also considered for publication. The Journal's scope includes all aspects of contemporary neuroscience research, including anatomical, behavioural, biochemical, cellular, computational, molecular, invasive and non-invasive imaging, optogenetic, and physiological research investigations.