On the Improvement of Generalization and Stability of Forward-Only Learning via Neural Polarization

Erik B. Terres-Escudero, Javier Del Ser, Pablo Garcia-Bringas
{"title":"论通过神经极化提高前向学习的泛化和稳定性","authors":"Erik B. Terres-Escudero, Javier Del Ser, Pablo Garcia-Bringas","doi":"arxiv-2408.09210","DOIUrl":null,"url":null,"abstract":"Forward-only learning algorithms have recently gained attention as\nalternatives to gradient backpropagation, replacing the backward step of this\nlatter solver with an additional contrastive forward pass. Among these\napproaches, the so-called Forward-Forward Algorithm (FFA) has been shown to\nachieve competitive levels of performance in terms of generalization and\ncomplexity. Networks trained using FFA learn to contrastively maximize a\nlayer-wise defined goodness score when presented with real data (denoted as\npositive samples) and to minimize it when processing synthetic data (corr.\nnegative samples). However, this algorithm still faces weaknesses that\nnegatively affect the model accuracy and training stability, primarily due to a\ngradient imbalance between positive and negative samples. To overcome this\nissue, in this work we propose a novel implementation of the FFA algorithm,\ndenoted as Polar-FFA, which extends the original formulation by introducing a\nneural division (\\emph{polarization}) between positive and negative instances.\nNeurons in each of these groups aim to maximize their goodness when presented\nwith their respective data type, thereby creating a symmetric gradient\nbehavior. To empirically gauge the improved learning capabilities of our\nproposed Polar-FFA, we perform several systematic experiments using different\nactivation and goodness functions over image classification datasets. Our\nresults demonstrate that Polar-FFA outperforms FFA in terms of accuracy and\nconvergence speed. Furthermore, its lower reliance on hyperparameters reduces\nthe need for hyperparameter tuning to guarantee optimal generalization\ncapabilities, thereby allowing for a broader range of neural network\nconfigurations.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"On the Improvement of Generalization and Stability of Forward-Only Learning via Neural Polarization\",\"authors\":\"Erik B. Terres-Escudero, Javier Del Ser, Pablo Garcia-Bringas\",\"doi\":\"arxiv-2408.09210\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Forward-only learning algorithms have recently gained attention as\\nalternatives to gradient backpropagation, replacing the backward step of this\\nlatter solver with an additional contrastive forward pass. Among these\\napproaches, the so-called Forward-Forward Algorithm (FFA) has been shown to\\nachieve competitive levels of performance in terms of generalization and\\ncomplexity. Networks trained using FFA learn to contrastively maximize a\\nlayer-wise defined goodness score when presented with real data (denoted as\\npositive samples) and to minimize it when processing synthetic data (corr.\\nnegative samples). However, this algorithm still faces weaknesses that\\nnegatively affect the model accuracy and training stability, primarily due to a\\ngradient imbalance between positive and negative samples. 
To overcome this\\nissue, in this work we propose a novel implementation of the FFA algorithm,\\ndenoted as Polar-FFA, which extends the original formulation by introducing a\\nneural division (\\\\emph{polarization}) between positive and negative instances.\\nNeurons in each of these groups aim to maximize their goodness when presented\\nwith their respective data type, thereby creating a symmetric gradient\\nbehavior. To empirically gauge the improved learning capabilities of our\\nproposed Polar-FFA, we perform several systematic experiments using different\\nactivation and goodness functions over image classification datasets. Our\\nresults demonstrate that Polar-FFA outperforms FFA in terms of accuracy and\\nconvergence speed. Furthermore, its lower reliance on hyperparameters reduces\\nthe need for hyperparameter tuning to guarantee optimal generalization\\ncapabilities, thereby allowing for a broader range of neural network\\nconfigurations.\",\"PeriodicalId\":501347,\"journal\":{\"name\":\"arXiv - CS - Neural and Evolutionary Computing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Neural and Evolutionary Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.09210\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Neural and Evolutionary Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.09210","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Forward-only learning algorithms have recently gained attention as alternatives to gradient backpropagation, replacing the backward step of this latter solver with an additional contrastive forward pass. Among these approaches, the so-called Forward-Forward Algorithm (FFA) has been shown to achieve competitive levels of performance in terms of generalization and complexity. Networks trained using FFA learn to contrastively maximize a layer-wise defined goodness score when presented with real data (denoted as positive samples) and to minimize it when processing synthetic data (corr. negative samples). However, this algorithm still faces weaknesses that negatively affect the model accuracy and training stability, primarily due to a gradient imbalance between positive and negative samples. To overcome this issue, in this work we propose a novel implementation of the FFA algorithm, denoted as Polar-FFA, which extends the original formulation by introducing a neural division (\emph{polarization}) between positive and negative instances. Neurons in each of these groups aim to maximize their goodness when presented with their respective data type, thereby creating a symmetric gradient behavior. To empirically gauge the improved learning capabilities of our proposed Polar-FFA, we perform several systematic experiments using different activation and goodness functions over image classification datasets. Our results demonstrate that Polar-FFA outperforms FFA in terms of accuracy and convergence speed. Furthermore, its lower reliance on hyperparameters reduces the need for hyperparameter tuning to guarantee optimal generalization capabilities, thereby allowing for a broader range of neural network configurations.
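The abstract describes the layer-wise goodness contrast of FFA and the neural polarization of Polar-FFA only in prose. The PyTorch sketch below illustrates one way these objectives could look; the sum-of-squares goodness, the fixed threshold, the softplus-style loss, the half-and-half neuron split, and all names (FFLayer, train_step) are illustrative assumptions drawn from the standard Forward-Forward formulation, not the authors' implementation.

```python
# Minimal sketch of a Forward-Forward layer with an optional "polarized" goodness
# contrast, under the assumptions stated above. Each layer trains locally: no
# gradient ever flows backward across layers.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FFLayer(nn.Module):
    def __init__(self, in_dim, out_dim, threshold=2.0, lr=1e-3, polarized=False):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.act = nn.ReLU()
        self.threshold = threshold
        self.polarized = polarized
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize the input so only its direction carries information onward.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return self.act(self.linear(x))

    def goodness(self, h):
        if not self.polarized:
            # Plain FFA: one goodness score per sample (sum of squared activations).
            return h.pow(2).sum(dim=1)
        # Polar-FFA (sketch): split neurons into a "positive" and a "negative" group;
        # each group's goodness should be high for its own data type.
        half = h.shape[1] // 2
        return h[:, :half].pow(2).sum(dim=1), h[:, half:].pow(2).sum(dim=1)

    def train_step(self, x_pos, x_neg):
        h_pos, h_neg = self.forward(x_pos), self.forward(x_neg)
        if not self.polarized:
            g_pos, g_neg = self.goodness(h_pos), self.goodness(h_neg)
            # Drive positive goodness above the threshold and negative goodness below it.
            loss = (F.softplus(self.threshold - g_pos).mean()
                    + F.softplus(g_neg - self.threshold).mean())
        else:
            # Symmetric contrast: the positive group should dominate on positive data,
            # the negative group on negative data.
            gp_pos, gn_pos = self.goodness(h_pos)
            gp_neg, gn_neg = self.goodness(h_neg)
            loss = (F.softplus(gn_pos - gp_pos).mean()
                    + F.softplus(gp_neg - gn_neg).mean())
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach before passing activations on, so each layer learns purely locally.
        return h_pos.detach(), h_neg.detach(), loss.item()
```

A full network would stack several such layers and train them greedily, feeding each layer the detached activations of the previous one; the polarized variant makes the positive and negative objectives mirror each other, which is the symmetric gradient behavior the abstract refers to.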