Implementation Challenges and Strategies for Hebbian Learning in Convolutional Neural Networks

IF: 1.0, Q4 (OPTICS)
A. V. Demidovskij, M. S. Kazyulina, I. G. Salnikov, A. M. Tugaryov, A. I. Trutnev, S. V. Pavlov
{"title":"卷积神经网络中Hebbian学习的实现挑战与策略","authors":"A. V. Demidovskij,&nbsp;M. S. Kazyulina,&nbsp;I. G. Salnikov,&nbsp;A. M. Tugaryov,&nbsp;A. I. Trutnev,&nbsp;S. V. Pavlov","doi":"10.3103/S1060992X23060048","DOIUrl":null,"url":null,"abstract":"<p>Given the unprecedented growth of deep learning applications, training acceleration is becoming a subject of strong academic interest. Hebbian learning as a training strategy alternative to backpropagation presents a promising optimization approach due to its locality, lower computational complexity and parallelization potential. Nevertheless, due to the challenging optimization of Hebbian learning, there is no widely accepted approach to the implementation of such mixed strategies. The current paper overviews the 4 main strategies for updating weights using the Hebbian rule, including its widely used modifications—Oja’s and Instar rules. Additionally, the paper analyses 21 industrial implementations of Hebbian learning, discusses merits and shortcomings of Hebbian rules, as well as presents the results of computational experiments on 4 convolutional networks. Experiments show that the most efficient implementation strategy of Hebbian learning allows for <span>\\(1.66 \\times \\)</span> acceleration and <span>\\(3.76 \\times \\)</span> memory consumption when updating DenseNet121 weights compared to backpropagation. Finally, a comparative analysis of the implementation strategies is carried out and grounded recommendations for Hebbian learning application are formulated.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"32 2","pages":"S252 - S264"},"PeriodicalIF":1.0000,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Implementation Challenges and Strategies for Hebbian Learning in Convolutional Neural Networks\",\"authors\":\"A. V. Demidovskij,&nbsp;M. S. Kazyulina,&nbsp;I. G. Salnikov,&nbsp;A. M. Tugaryov,&nbsp;A. I. Trutnev,&nbsp;S. V. Pavlov\",\"doi\":\"10.3103/S1060992X23060048\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Given the unprecedented growth of deep learning applications, training acceleration is becoming a subject of strong academic interest. Hebbian learning as a training strategy alternative to backpropagation presents a promising optimization approach due to its locality, lower computational complexity and parallelization potential. Nevertheless, due to the challenging optimization of Hebbian learning, there is no widely accepted approach to the implementation of such mixed strategies. The current paper overviews the 4 main strategies for updating weights using the Hebbian rule, including its widely used modifications—Oja’s and Instar rules. Additionally, the paper analyses 21 industrial implementations of Hebbian learning, discusses merits and shortcomings of Hebbian rules, as well as presents the results of computational experiments on 4 convolutional networks. Experiments show that the most efficient implementation strategy of Hebbian learning allows for <span>\\\\(1.66 \\\\times \\\\)</span> acceleration and <span>\\\\(3.76 \\\\times \\\\)</span> memory consumption when updating DenseNet121 weights compared to backpropagation. 
Finally, a comparative analysis of the implementation strategies is carried out and grounded recommendations for Hebbian learning application are formulated.</p>\",\"PeriodicalId\":721,\"journal\":{\"name\":\"Optical Memory and Neural Networks\",\"volume\":\"32 2\",\"pages\":\"S252 - S264\"},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2023-11-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Optical Memory and Neural Networks\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://link.springer.com/article/10.3103/S1060992X23060048\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"OPTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optical Memory and Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.3103/S1060992X23060048","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"OPTICS","Score":null,"Total":0}
Citations: 0

Abstract

Given the unprecedented growth of deep learning applications, training acceleration is becoming a subject of strong academic interest. Hebbian learning, as a training strategy alternative to backpropagation, is a promising optimization approach due to its locality, lower computational complexity, and parallelization potential. Nevertheless, because Hebbian learning is difficult to optimize, there is no widely accepted approach to implementing such mixed strategies. This paper overviews the four main strategies for updating weights with the Hebbian rule, including its widely used modifications, Oja's and Instar rules. Additionally, the paper analyses 21 industrial implementations of Hebbian learning, discusses the merits and shortcomings of Hebbian rules, and presents the results of computational experiments on four convolutional networks. Experiments show that the most efficient implementation strategy of Hebbian learning yields a \(1.66\times\) speed-up and a \(3.76\times\) reduction in memory consumption when updating DenseNet121 weights, compared to backpropagation. Finally, a comparative analysis of the implementation strategies is carried out and grounded recommendations for applying Hebbian learning are formulated.
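For concreteness, below is a minimal NumPy sketch (not the paper's code) of the three local update rules named in the abstract, written for a single fully connected layer. The layer shapes, learning rate, and function names are illustrative assumptions; the rules follow their standard textbook forms.

```python
import numpy as np

def hebbian_update(W, x, y, lr=1e-3):
    """Plain Hebbian rule: dW = lr * y x^T (weights can grow without bound)."""
    return W + lr * np.outer(y, x)

def oja_update(W, x, y, lr=1e-3):
    """Oja's rule: the Hebbian term plus a -y^2 W decay that keeps weights bounded."""
    return W + lr * (np.outer(y, x) - (y ** 2)[:, None] * W)

def instar_update(W, x, y, lr=1e-3):
    """Instar rule: each weight row moves toward the input x, gated by the output y."""
    return W + lr * y[:, None] * (x[None, :] - W)

# Toy usage: 8 inputs -> 4 linear units, one unsupervised step per sample.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 8))
x = rng.normal(size=8)
y = W @ x                      # linear pre-activations
W = oja_update(W, x, y)
```

Because each update depends only on the layer's own input x and output y, the rules can be applied layer by layer with no backward pass; this locality is the property the abstract credits for the reported speed-up and memory savings.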

Source journal: Optical Memory and Neural Networks
CiteScore: 1.50
Self-citation rate: 11.10%
Publication volume: 25
Journal description: The journal covers a wide range of issues in information optics, such as optical memory, mechanisms for optical data recording and processing, photosensitive materials, optical, optoelectronic and holographic nanostructures, and many other related topics. Papers on memory systems using holographic and biological structures and concepts of brain operation are also included. The journal pays particular attention to research in the field of neural net systems that may lead to a new generation of computational technologies by endowing them with intelligence.