Convergence of gradient descent for learning linear neural networks

IF 3.1 · CAS Tier 3 (Mathematics) · JCR Q1 (MATHEMATICS)
Gabin Maxime Nguegnang, Holger Rauhut, Ulrich Terstiege
{"title":"学习线性神经网络的梯度下降收敛性","authors":"Gabin Maxime Nguegnang, Holger Rauhut, Ulrich Terstiege","doi":"10.1186/s13662-023-03797-x","DOIUrl":null,"url":null,"abstract":"<p>We study the convergence properties of gradient descent for training deep linear neural networks, i.e., deep matrix factorizations, by extending a previous analysis for the related gradient flow. We show that under suitable conditions on the stepsizes gradient descent converges to a critical point of the loss function, i.e., the square loss in this article. Furthermore, we demonstrate that for almost all initializations gradient descent converges to a global minimum in the case of two layers. In the case of three or more layers, we show that gradient descent converges to a global minimum on the manifold matrices of some fixed rank, where the rank cannot be determined a priori.</p>","PeriodicalId":49245,"journal":{"name":"Advances in Difference Equations","volume":null,"pages":null},"PeriodicalIF":3.1000,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Convergence of gradient descent for learning linear neural networks\",\"authors\":\"Gabin Maxime Nguegnang, Holger Rauhut, Ulrich Terstiege\",\"doi\":\"10.1186/s13662-023-03797-x\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>We study the convergence properties of gradient descent for training deep linear neural networks, i.e., deep matrix factorizations, by extending a previous analysis for the related gradient flow. We show that under suitable conditions on the stepsizes gradient descent converges to a critical point of the loss function, i.e., the square loss in this article. Furthermore, we demonstrate that for almost all initializations gradient descent converges to a global minimum in the case of two layers. In the case of three or more layers, we show that gradient descent converges to a global minimum on the manifold matrices of some fixed rank, where the rank cannot be determined a priori.</p>\",\"PeriodicalId\":49245,\"journal\":{\"name\":\"Advances in Difference Equations\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2024-07-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Advances in Difference Equations\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1186/s13662-023-03797-x\",\"RegionNum\":3,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advances in Difference Equations","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1186/s13662-023-03797-x","RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS","Score":null,"Total":0}
Citations: 0

Abstract

We study the convergence properties of gradient descent for training deep linear neural networks, i.e., deep matrix factorizations, by extending a previous analysis for the related gradient flow. We show that under suitable conditions on the step sizes, gradient descent converges to a critical point of the loss function, here the square loss. Furthermore, we demonstrate that in the case of two layers, gradient descent converges to a global minimum for almost all initializations. In the case of three or more layers, we show that gradient descent converges to a global minimum on the manifold of matrices of some fixed rank, where the rank cannot be determined a priori.
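For context, the objective in question is the square loss of a deep linear network, i.e., a product of weight matrices applied to the data. A minimal sketch in standard notation (the symbols W_1,...,W_N, X, Y are assumed here for illustration, not copied from the paper):

```latex
% Square loss of an N-layer linear network (deep matrix factorization);
% the notation is an assumption for illustration.
L(W_1,\dots,W_N) = \tfrac{1}{2}\,\bigl\| W_N W_{N-1} \cdots W_1 X - Y \bigr\|_F^2
```

Below is a minimal numerical sketch of plain gradient descent on this loss for two layers. It is purely illustrative: the shapes, step size, and iteration count are arbitrary choices, and the paper's step-size conditions and convergence guarantees are not implemented here.

```python
import numpy as np

# Illustrative sketch only: gradient descent on the square loss of a
# two-layer linear network f(X) = W2 @ W1 @ X. All shapes and
# hyperparameters are arbitrary, not values from the paper.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 100))        # data
Y = rng.standard_normal((3, 100))        # targets
W1 = 0.1 * rng.standard_normal((4, 5))   # first-layer weights
W2 = 0.1 * rng.standard_normal((3, 4))   # second-layer weights

eta = 1e-3  # small constant step size (the paper derives conditions on it)
for _ in range(10_000):
    R = W2 @ W1 @ X - Y           # residual
    grad_W2 = R @ (W1 @ X).T      # dL/dW2 for L = 0.5 * ||R||_F^2
    grad_W1 = W2.T @ R @ X.T      # dL/dW1
    W2 -= eta * grad_W2
    W1 -= eta * grad_W1

print("final loss:", 0.5 * np.linalg.norm(W2 @ W1 @ X - Y, "fro") ** 2)
```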

Source journal
Advances in Difference Equations (MATHEMATICS, APPLIED MATHEMATICS)
CiteScore: 8.60
Self-citation rate: 0.00%
Articles published: 0
Review time: 4-8 weeks
Journal description: The theory of difference equations, the methods used, and their wide applications have advanced beyond their adolescent stage to occupy a central position in applicable analysis. In fact, in the last 15 years, the proliferation of the subject has been witnessed by hundreds of research articles, several monographs, many international conferences, and numerous special sessions. The theory of differential and difference equations forms two extreme representations of real-world problems. For example, a simple population model when represented as a differential equation shows the good behavior of solutions whereas the corresponding discrete analogue shows the chaotic behavior. The actual behavior of the population is somewhere in between. The aim of Advances in Difference Equations is to report mainly the new developments in the field of difference equations, and their applications in all fields. We will also consider research articles emphasizing the qualitative behavior of solutions of ordinary, partial, delay, fractional, abstract, stochastic, fuzzy, and set-valued differential equations. Advances in Difference Equations will accept high-quality articles containing original research results and survey articles of exceptional merit.