Jump-GRS: a multi-phase approach to structured pruning of neural networks for neural decoding.

Impact Factor 3.7 · CAS Tier 3 (Medicine) · JCR Q2, ENGINEERING, BIOMEDICAL
Xiaomin Wu, Da-Ting Lin, Rong Chen, Shuvra S Bhattacharyya
Journal of neural engineering, vol. 20, no. 4. Published 2023-07-31.
DOI: 10.1088/1741-2552/ace5dc
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10801788/pdf/
Citations: 0

Abstract

Objective. Neural decoding, an important area of neural engineering, helps to link neural activity to behavior. Deep neural networks (DNNs), which are becoming increasingly popular in many application fields of machine learning, show promising performance in neural decoding compared to traditional neural decoding methods. Various neural decoding applications, such as brain–computer interfaces, require both high decoding accuracy and real-time decoding speed. Pruning methods are used to produce compact DNN models for faster computation. Greedy inter-layer order with Random Selection (GRS) is a recently designed structured pruning method that derives compact DNN models for calcium-imaging-based neural decoding. Although GRS has advantages in terms of detailed structure analysis and consideration of both learned information and model structure during the pruning process, the method is very computationally intensive, and is not feasible when large-scale DNN models need to be pruned within typical constraints on time and computational resources. Large-scale DNN models arise in neural decoding when large numbers of neurons are involved. In this paper, we build on GRS to develop a new structured pruning algorithm called Jump GRS (JGRS) that is designed to efficiently compress large-scale DNN models.

Approach. On top of GRS, JGRS implements a "jump mechanism", which bypasses retraining of intermediate models when model accuracy is relatively insensitive to pruning operations. The design of the jump mechanism is motivated by the identification of distinct phases in the structured pruning process: in earlier phases, retraining can be performed infrequently without sacrificing accuracy. The jump mechanism significantly speeds up execution of the pruning process and greatly enhances its scalability.

Main results. We compare the pruning performance and speed of JGRS and GRS with extensive experiments in the context of neural decoding. Our results demonstrate that JGRS prunes significantly faster than GRS while producing models that are similar in compactness to those generated by GRS.

Significance. In our experiments, JGRS achieves models that are on average 9%–20% more compressed than those of GRS, at 2–8 times faster pruning speed (less time required for pruning), across four different initial models on a relevant dataset for neural data analysis.
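The jump mechanism described in the Approach section can be sketched in a few lines: iterate over structural groups, prune each one, and retrain only when the accuracy drop since the last retraining exceeds a tolerance; otherwise "jump" past the retraining step. The following is a minimal illustrative toy, not the authors' implementation; the function names, the tolerance parameter, and the integer accuracy scale (tenths of a percent, to keep the arithmetic exact) are all assumptions made for the example.

```python
# Illustrative sketch of a jump mechanism for iterative structured pruning.
# Retraining of intermediate models is skipped ("jumped over") while accuracy
# remains relatively insensitive to pruning, and triggered only once the
# cumulative accuracy drop exceeds a tolerance.

def jump_structured_pruning(evaluate, prune_one_group, retrain,
                            n_groups, jump_tolerance):
    """Prune n_groups structural groups; retrain only when the accuracy
    drop since the last retraining exceeds jump_tolerance."""
    baseline = evaluate()
    retrain_count = 0
    for _ in range(n_groups):
        prune_one_group()
        if baseline - evaluate() > jump_tolerance:
            retrain()                 # sensitive phase: recover accuracy
            baseline = evaluate()
            retrain_count += 1
        # otherwise: jump, i.e. keep pruning without intermediate retraining
    return retrain_count

# Toy model: accuracy in tenths of a percent; each pruned group costs 5
# (0.5%), each retraining recovers 15 (1.5%). All numbers are hypothetical.
state = {"acc": 900, "pruned": 0}

def evaluate():
    return state["acc"]

def prune_one_group():
    state["pruned"] += 1
    state["acc"] -= 5

def retrain():
    state["acc"] += 15

retrains = jump_structured_pruning(evaluate, prune_one_group, retrain,
                                   n_groups=10, jump_tolerance=20)
```

In this toy run, only 2 of the 10 pruning steps trigger a retraining; a GRS-style loop that retrains after every pruning step would perform 10, which is the source of the speedup the abstract reports.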

Source journal: Journal of neural engineering (Engineering, Biomedical)
CiteScore: 7.80 · Self-citation rate: 12.50% · Articles per year: 319 · Average review time: 4.2 months
About the journal: The goal of Journal of Neural Engineering (JNE) is to act as a forum for the interdisciplinary field of neural engineering where neuroscientists, neurobiologists and engineers can publish their work in one periodical that bridges the gap between neuroscience and engineering. The journal publishes articles in the field of neural engineering at the molecular, cellular and systems levels. The scope of the journal encompasses experimental, computational, theoretical, clinical and applied aspects of: innovative neurotechnology; brain-machine (computer) interfaces; neural interfacing; bioelectronic medicines; neuromodulation; neural prostheses; neural control; neuro-rehabilitation; neurorobotics; optical neural engineering; neural circuits, artificial and biological; neuromorphic engineering; neural tissue regeneration; neural signal processing; theoretical and computational neuroscience; systems neuroscience; translational neuroscience; neuroimaging.