Stochastic gradient accelerated by negative momentum for training deep neural networks

IF 3.5 · CAS Region 2 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence)
Xiaotian Li, Zhuang Yang, Yang Wang
{"title":"随机梯度负动量加速训练深度神经网络","authors":"Xiaotian Li,&nbsp;Zhuang Yang,&nbsp;Yang Wang","doi":"10.1007/s10489-025-06900-9","DOIUrl":null,"url":null,"abstract":"<div><p>The fast and robust stochastic optimization algorithms for training deep neural networks (DNNs) are still a topic of heated discussion. As a simple but effective way, the momentum technique, which utilizes historical gradient information, shows significant promise in training DNNs both theoretically and empirically. Nonetheless, the accumulation of error gradients in stochastic settings leads to the failure of momentum techniques, e.g., Nesterov’s accelerated gradient (NAG), in accelerating stochastic optimization algorithms. To address this problem, a novel type of stochastic optimization algorithm based on negative momentum (NM) is developed and analyzed. In this work, we applied NM to vanilla stochastic gradient descent (SGD), leading to SGD-NM. Although a convex combination of previous and current historical information is adopted in SGD-NM, fewer hyperparameters are introduced than those of the existing NM techniques. Meanwhile, we establish a theoretical guarantee for the resulting SGD-NM and show that SGD-NM enjoys the same low computational cost as vanilla SGD. To further show the superiority of NM in stochastic optimization algorithms, we propose a variant of stochastically controlled stochastic gradient (SCSG) based on the negative momentum technique, termed SCSG-NM, which achieves faster convergence compared to SCSG. Finally, we conduct experiments on various DNN architectures and benchmark datasets. The comparison results with state-of-the-art stochastic optimization algorithms show the great potential of NM in accelerating stochastic optimization, including more robust to large learning rates and better generalization.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 15","pages":""},"PeriodicalIF":3.5000,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Stochastic gradient accelerated by negative momentum for training deep neural networks\",\"authors\":\"Xiaotian Li,&nbsp;Zhuang Yang,&nbsp;Yang Wang\",\"doi\":\"10.1007/s10489-025-06900-9\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The fast and robust stochastic optimization algorithms for training deep neural networks (DNNs) are still a topic of heated discussion. As a simple but effective way, the momentum technique, which utilizes historical gradient information, shows significant promise in training DNNs both theoretically and empirically. Nonetheless, the accumulation of error gradients in stochastic settings leads to the failure of momentum techniques, e.g., Nesterov’s accelerated gradient (NAG), in accelerating stochastic optimization algorithms. To address this problem, a novel type of stochastic optimization algorithm based on negative momentum (NM) is developed and analyzed. In this work, we applied NM to vanilla stochastic gradient descent (SGD), leading to SGD-NM. Although a convex combination of previous and current historical information is adopted in SGD-NM, fewer hyperparameters are introduced than those of the existing NM techniques. Meanwhile, we establish a theoretical guarantee for the resulting SGD-NM and show that SGD-NM enjoys the same low computational cost as vanilla SGD. 
To further show the superiority of NM in stochastic optimization algorithms, we propose a variant of stochastically controlled stochastic gradient (SCSG) based on the negative momentum technique, termed SCSG-NM, which achieves faster convergence compared to SCSG. Finally, we conduct experiments on various DNN architectures and benchmark datasets. The comparison results with state-of-the-art stochastic optimization algorithms show the great potential of NM in accelerating stochastic optimization, including more robust to large learning rates and better generalization.</p></div>\",\"PeriodicalId\":8041,\"journal\":{\"name\":\"Applied Intelligence\",\"volume\":\"55 15\",\"pages\":\"\"},\"PeriodicalIF\":3.5000,\"publicationDate\":\"2025-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10489-025-06900-9\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Intelligence","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10489-025-06900-9","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Fast and robust stochastic optimization algorithms for training deep neural networks (DNNs) remain a topic of active discussion. As a simple but effective approach, the momentum technique, which exploits historical gradient information, shows significant promise in training DNNs both theoretically and empirically. Nonetheless, the accumulation of gradient errors in stochastic settings causes momentum techniques such as Nesterov's accelerated gradient (NAG) to fail at accelerating stochastic optimization algorithms. To address this problem, a novel type of stochastic optimization algorithm based on negative momentum (NM) is developed and analyzed. In this work, we apply NM to vanilla stochastic gradient descent (SGD), leading to SGD-NM. Although SGD-NM adopts a convex combination of previous and current historical information, it introduces fewer hyperparameters than existing NM techniques. Meanwhile, we establish a theoretical guarantee for the resulting SGD-NM and show that it enjoys the same low computational cost as vanilla SGD. To further demonstrate the superiority of NM in stochastic optimization, we propose a variant of the stochastically controlled stochastic gradient (SCSG) method based on the negative momentum technique, termed SCSG-NM, which achieves faster convergence than SCSG. Finally, we conduct experiments on various DNN architectures and benchmark datasets. Comparisons with state-of-the-art stochastic optimization algorithms show the great potential of NM in accelerating stochastic optimization, including greater robustness to large learning rates and better generalization.
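The abstract does not spell out the SGD-NM update rule, so the following is only a minimal illustrative sketch of what an SGD step with a negative momentum coefficient typically looks like. The function name `sgd_nm_step`, the heavy-ball-style mixing form, and all hyperparameter values are assumptions made for illustration; they are not the authors' algorithm.

```python
# Minimal illustrative sketch (NOT the paper's algorithm): a heavy-ball-style
# SGD step in which the momentum coefficient beta is taken negative, so the
# previous search direction is partially subtracted rather than added.
# All names and hyperparameter values here are assumptions for illustration.
import numpy as np

def sgd_nm_step(x, stochastic_grad, velocity, lr=0.05, beta=-0.3):
    """One hypothetical negative-momentum SGD update.

    x               : current parameters (np.ndarray)
    stochastic_grad : callable returning a noisy gradient at x
    velocity        : previous update direction
    lr              : learning rate
    beta            : momentum coefficient; beta < 0 gives negative momentum
    """
    g = stochastic_grad(x)
    velocity = beta * velocity + g   # negative beta damps the previous direction
    x = x - lr * velocity
    return x, velocity

# Toy usage: minimize f(x) = 0.5 * ||x||^2 with additive gradient noise.
rng = np.random.default_rng(0)
x = np.ones(5)
v = np.zeros_like(x)
for _ in range(200):
    x, v = sgd_nm_step(x, lambda z: z + 0.01 * rng.standard_normal(z.shape), v)
print(float(np.linalg.norm(x)))  # should be near zero after 200 steps
```

With this form, a negative coefficient reduces the contribution of the previous direction instead of amplifying it, which is one intuition for why negative momentum can stay stable under noisy (stochastic) gradients and larger learning rates.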

Source journal: Applied Intelligence (Engineering & Technology, Computer Science: Artificial Intelligence)

CiteScore: 6.60 · Self-citation rate: 20.80% · Articles published: 1361 · Average review time: 5.9 months
About the journal: With a focus on research in artificial intelligence and neural networks, this journal addresses issues involving solutions of real-life manufacturing, defense, management, government and industrial problems which are too complex to be solved through conventional approaches and require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance. The journal presents new and original research and technological developments, addressing real and complex issues applicable to difficult problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.