A Sparse-Integrated Filtering Residual Spiking Neural Network for High-Accuracy Spike Sorting and Co-optimization on Memristor Platforms.

Impact Factor: 4.9
Yiwen Zhu, Jingyi Chen, Lingli Cheng, Fangduo Zhu, Xumeng Zhang, Qi Liu
DOI: 10.1109/TBCAS.2025.3601403
Journal: IEEE Transactions on Biomedical Circuits and Systems
Published: 2025-08-22
Citations: 0

Abstract

Brain-computer interfaces rely on precise decoding of neural signals, where spike sorting is a critical step to extract individual neuronal activities from complex neural data. This work presents a spiking neural network (SNN) framework for efficient spike sorting, named SIFT-RSNN. In the SIFT-RSNN, raw neural signals are encoded into spike trains using a threshold-based temporal encoding strategy, and a sparse-integrated filtering module then refines misfired spikes, enhancing data sparsity for pattern learning. The RSNN module, with a membrane shortcut structure, ensures efficient feature transfer and improves the generalization performance of the overall system. The SIFT-RSNN achieves accuracies of 96.2% and 99.6% on the Difficult1 and Difficult2 subsets of the Leicester dataset, surpassing state-of-the-art methods. We also deployed it on a compute-in-memory platform with 8k memristor cells using a quantization-free mapping method, and propose two algorithm-hardware co-optimization strategies to mitigate non-ideal hardware effects: weight outlier pre-constraint (WOP) and noise adaptation training (NAT). After optimization, our algorithm continues to outperform existing spike sorting methods, achieving accuracies of 94.2% and 99.7%, while also demonstrating improved robustness. The memristor platform exhibits only 2% and 1.5% accuracy drops on the two difficult subsets compared to software results. Additionally, it achieves an energy consumption of 3.52 μJ and a latency of 0.5 ms per inference. This work offers promising solutions for future brain-computer interface systems and neural prosthesis applications.
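The abstract names three techniques without detailing them: threshold-based temporal encoding, weight outlier pre-constraint (WOP), and noise adaptation training (NAT). The sketch below illustrates one plausible reading of each in NumPy; all function names, thresholds, and percentile/noise values are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def threshold_encode(trace, thr):
    # Threshold-based temporal encoding sketch: emit a spike (1) at each
    # sample whose absolute amplitude crosses the threshold, else 0.
    return (np.abs(trace) > thr).astype(np.int8)

def weight_outlier_preconstraint(W, pct=99.0):
    # WOP sketch: clip weight magnitudes above the pct-th percentile so
    # a few outliers do not stretch the conductance mapping range.
    lim = np.percentile(np.abs(W), pct)
    return np.clip(W, -lim, lim)

def nat_forward(W, x, noise_std=0.05):
    # NAT sketch: perturb weights with multiplicative Gaussian noise on
    # each forward pass, mimicking memristor conductance variation, so
    # training learns weights that tolerate non-ideal devices.
    W_noisy = W * (1.0 + noise_std * rng.standard_normal(W.shape))
    return W_noisy @ x

# Toy usage on a synthetic trace.
trace = np.sin(np.linspace(0, 4 * np.pi, 200))
spikes = threshold_encode(trace, thr=0.8)
W = weight_outlier_preconstraint(rng.standard_normal((4, 200)))
out = nat_forward(W, spikes.astype(float))
```

In a NAT-style training loop, the noisy forward pass would replace the clean one during gradient computation, so the learned weights remain accurate under conductance drift at deployment.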
