Reward-optimizing learning using stochastic release plasticity.

Impact Factor: 3.0 · JCR Q2 (Neurosciences) · CAS Region 3 (Medicine)
Frontiers in Neural Circuits · Pub Date: 2025-08-14 · eCollection Date: 2025-01-01 · DOI: 10.3389/fncir.2025.1618506
Yuhao Sun, Wantong Liao, Jinhao Li, Xinche Zhang, Guan Wang, Zhiyuan Ma, Sen Song
Cited by: 0

Abstract

Synaptic plasticity underlies adaptive learning in neural systems, offering a biologically plausible framework for reward-driven learning. However, a question remains: how can plasticity rules achieve robustness and effectiveness comparable to error backpropagation? In this study, we introduce Reward-Optimized Stochastic Release Plasticity (RSRP), a learning framework in which synaptic release is modeled as a parameterized distribution. Using natural gradient estimation, we derive a synaptic plasticity learning rule that adapts effectively to maximize reward signals. Our approach achieves competitive performance and stability in reinforcement learning, comparable to Proximal Policy Optimization (PPO), while attaining accuracy on par with error backpropagation in digit classification. Additionally, we identify reward regularization as a key stabilizing mechanism and validate our method in biologically plausible networks. Our findings suggest that RSRP offers a robust and effective plasticity learning rule, especially in discontinuous reinforcement learning paradigms, with potential implications for both artificial intelligence and experimental neuroscience.
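The abstract's core recipe — model synaptic release as a parameterized distribution, sample weights, and update the distribution's parameters with a natural-gradient estimate of the reward, stabilized by reward regularization — can be sketched in the style of a natural-evolution-strategies update on a diagonal Gaussian. Everything below (the toy quadratic reward, the per-synapse Gaussian parameterization, and standardization as the regularizer) is an illustrative assumption for exposition, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (hypothetical): learn synaptic weights w that maximize a reward
# peaked at an unknown target vector. Stands in for any scalar reward signal.
dim = 5
target = rng.normal(size=dim)

def reward(w):
    # Higher reward the closer the sampled weights are to the target.
    return -np.sum((w - target) ** 2)

# Release distribution: independent Gaussian per synapse, parameterized by
# mean mu and log standard deviation log_sigma.
mu = np.zeros(dim)
log_sigma = np.zeros(dim)

lr = 0.05
pop = 50  # sampled "release events" per update

for step in range(300):
    sigma = np.exp(log_sigma)
    eps = rng.normal(size=(pop, dim))   # stochastic release noise
    samples = mu + sigma * eps          # sampled synaptic weights
    R = np.array([reward(w) for w in samples])

    # Reward regularization (assumed here to be standardization): keeps the
    # update scale bounded, playing the stabilizing role the abstract names.
    A = (R - R.mean()) / (R.std() + 1e-8)

    # Natural-gradient-style updates for a diagonal Gaussian (NES form):
    # score w.r.t. mu is eps/sigma, and w.r.t. log_sigma is eps**2 - 1.
    mu += lr * sigma * (A @ eps) / pop
    log_sigma += lr * (A @ (eps ** 2 - 1)) / (2 * pop)

print("final distance to target:", np.linalg.norm(mu - target))
```

Note that the update touches only distribution parameters via sampled rewards — no backpropagated error signal is required, which is what makes this family of rules attractive for discontinuous reinforcement-learning settings.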


Journal metrics

CiteScore: 6.00
Self-citation rate: 5.70%
Articles per year: 135
Review time: 4-8 weeks
About the journal: Frontiers in Neural Circuits publishes rigorously peer-reviewed research on the emergent properties of neural circuits - the elementary modules of the brain. Specialty Chief Editors Takao K. Hensch and Edward Ruthazer, at Harvard University and McGill University respectively, are supported by an outstanding Editorial Board of international experts. This multidisciplinary open-access journal is at the forefront of disseminating and communicating scientific knowledge and impactful discoveries to researchers, academics and the public worldwide.

Frontiers in Neural Circuits launched in 2011 with great success and remains a "central watering hole" for research in neural circuits, serving the community worldwide to share data, ideas and inspiration. Articles revealing the anatomy, physiology, development or function of any neural circuitry in any species (from sponges to humans) are welcome. Our common thread seeks the computational strategies used by different circuits to link their structure with function (perceptual, motor, or internal), the general rules by which they operate, and how their particular designs lead to the emergence of complex properties and behaviors.

Submissions focused on synaptic, cellular and connectivity principles in neural microcircuits using multidisciplinary approaches, especially newer molecular, developmental and genetic tools, are encouraged. Studies with an evolutionary perspective to better understand how circuit design and capabilities evolved to produce progressively more complex properties and behaviors are especially welcome. The journal is further interested in research revealing how plasticity shapes the structural and functional architecture of neural circuits.