{"title":"A Sparse-Integrated Filtering Residual Spiking Neural Network for High-Accuracy Spike Sorting and Co-optimization on Memristor Platforms.","authors":"Yiwen Zhu, Jingyi Chen, Lingli Cheng, Fangduo Zhu, Xumeng Zhang, Qi Liu","doi":"10.1109/TBCAS.2025.3601403","DOIUrl":null,"url":null,"abstract":"<p>Brain-computer interfaces rely on precise decoding of neural signals, where spike sorting is a critical step to extract individual neuronal activities from complex neural data. This work presents a spiking neural network (SNN) framework for efficient spike sorting, named SIFT-RSNN. In the SIFT-RSNN, raw neural signals are encoded into spike trains using a threshold-based temporal encoding strategy; a sparse-integrated filtering module then refines misfiring spikes, enhancing data sparsity for pattern learning. The RSNN module with a membrane shortcut structure ensures efficient feature transfer and improves the generalization performance of the overall system. The SIFT-RSNN achieves accuracies of 96.2% and 99.6% on the Difficult1 and Difficult2 subsets of the Leicester dataset, surpassing state-of-the-art methods. We also deployed it on a compute-in-memory platform with 8k memristor cells using a quantization-free mapping method, and propose two algorithm-hardware co-optimization strategies to mitigate non-ideal hardware effects: weight outlier pre-constraint (WOP) and noise adaptation training (NAT). After optimization, our algorithm continues to outperform existing spike sorting methods, achieving accuracies of 94.2% and 99.7%, while also demonstrating improved robustness. The memristor platform exhibits only a 2% and 1.5% accuracy drop compared to software results on the two difficult subsets. Additionally, it achieves an energy consumption of 3.52 μJ and a latency of 0.5 ms per inference. This work offers promising solutions for future brain-computer interface systems and neural prosthesis applications.</p>","PeriodicalId":94031,"journal":{"name":"IEEE transactions on biomedical circuits and systems","volume":"PP ","pages":""},"PeriodicalIF":4.9000,"publicationDate":"2025-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on biomedical circuits and systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TBCAS.2025.3601403","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Brain-computer interfaces rely on precise decoding of neural signals, where spike sorting is a critical step to extract individual neuronal activities from complex neural data. This work presents a spiking neural network (SNN) framework for efficient spike sorting, named SIFT-RSNN. In the SIFT-RSNN, raw neural signals are encoded into spike trains using a threshold-based temporal encoding strategy; a sparse-integrated filtering module then refines misfiring spikes, enhancing data sparsity for pattern learning. The RSNN module with a membrane shortcut structure ensures efficient feature transfer and improves the generalization performance of the overall system. The SIFT-RSNN achieves accuracies of 96.2% and 99.6% on the Difficult1 and Difficult2 subsets of the Leicester dataset, surpassing state-of-the-art methods. We also deployed it on a compute-in-memory platform with 8k memristor cells using a quantization-free mapping method, and propose two algorithm-hardware co-optimization strategies to mitigate non-ideal hardware effects: weight outlier pre-constraint (WOP) and noise adaptation training (NAT). After optimization, our algorithm continues to outperform existing spike sorting methods, achieving accuracies of 94.2% and 99.7%, while also demonstrating improved robustness. The memristor platform exhibits only a 2% and 1.5% accuracy drop compared to software results on the two difficult subsets. Additionally, it achieves an energy consumption of 3.52 μJ and a latency of 0.5 ms per inference. This work offers promising solutions for future brain-computer interface systems and neural prosthesis applications.
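The abstract describes encoding raw neural signals into spike trains with a threshold-based temporal encoding strategy. A minimal sketch of the general idea (a simple level-crossing encoder, not the paper's exact SIFT-RSNN encoder, whose threshold choice and timing details are not given here) could look like:

```python
import numpy as np

def threshold_encode(signal, threshold):
    """Illustrative threshold-based temporal encoding: emit a spike (1)
    at every sample where the signal exceeds the threshold, else 0.
    `threshold` is a free parameter here; the paper's actual encoding
    scheme may differ in threshold selection and spike timing."""
    signal = np.asarray(signal, dtype=float)
    return (signal > threshold).astype(np.int8)

# Toy trace with two suprathreshold events.
trace = np.array([0.1, 0.2, 1.5, 0.3, -0.1, 2.0, 1.8, 0.0])
spikes = threshold_encode(trace, threshold=1.0)
print(spikes.tolist())  # -> [0, 0, 1, 0, 0, 1, 1, 0]
```

The resulting binary spike train is sparse whenever suprathreshold events are rare, which matches the abstract's emphasis on exploiting data sparsity for downstream pattern learning.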