Convergence of gradient descent for learning linear neural networks
Gabin Maxime Nguegnang, Holger Rauhut, Ulrich Terstiege
Advances in Difference Equations (Journal Article, published 2024-07-18). DOI: 10.1186/s13662-023-03797-x
Citations: 0
Abstract
We study the convergence properties of gradient descent for training deep linear neural networks, i.e., deep matrix factorizations, by extending a previous analysis for the related gradient flow. We show that under suitable conditions on the stepsizes, gradient descent converges to a critical point of the loss function, which in this article is the square loss. Furthermore, we demonstrate that for almost all initializations, gradient descent converges to a global minimum in the case of two layers. In the case of three or more layers, we show that gradient descent converges to a global minimum on the manifold of matrices of some fixed rank, where the rank cannot be determined a priori.
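For concreteness, the following is a minimal sketch, not the authors' code, of the setting the abstract refers to: plain gradient descent on the square loss L(W1, ..., WN) = 0.5 * ||WN...W1 X - Y||_F^2 of a deep linear network (a deep matrix factorization). The dimensions, the fixed stepsize, and the initialization scale are illustrative assumptions; the paper's results concern conditions on the stepsizes that this toy choice does not verify.

```python
import numpy as np

# Sketch of gradient descent for a deep linear network with square loss.
# All hyperparameters below are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
d, n, N, eta, steps = 5, 20, 3, 1e-3, 5000

X = rng.standard_normal((d, n))
Y = rng.standard_normal((d, n))
Ws = [0.1 * rng.standard_normal((d, d)) for _ in range(N)]  # Ws[j-1] = W_j

for _ in range(steps):
    # prefixes[j] = W_j ... W_1, with prefixes[0] = I
    prefixes = [np.eye(d)]
    for W in Ws:
        prefixes.append(W @ prefixes[-1])
    # suffixes[j] = W_N ... W_{j+1}, with suffixes[N] = I
    suffixes = [np.eye(d)]
    for W in reversed(Ws):
        suffixes.append(suffixes[-1] @ W)
    suffixes.reverse()

    R = prefixes[N] @ X - Y  # residual W_N ... W_1 X - Y
    # dL/dW_j = (W_N...W_{j+1})^T R X^T (W_{j-1}...W_1)^T
    grads = [suffixes[j].T @ R @ X.T @ prefixes[j - 1].T for j in range(1, N + 1)]
    for j in range(N):
        Ws[j] -= eta * grads[j]

P = np.eye(d)
for W in Ws:
    P = W @ P
print("loss after training:", 0.5 * np.linalg.norm(P @ X - Y, "fro") ** 2)
```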
About the journal:
The theory of difference equations, the methods used, and their wide applications have advanced beyond their adolescent stage to occupy a central position in applicable analysis. In fact, in the last 15 years the subject has proliferated, as witnessed by hundreds of research articles, several monographs, many international conferences, and numerous special sessions.
The theory of differential and difference equations forms two extreme representations of real-world problems. For example, a simple population model, when represented as a differential equation, exhibits well-behaved solutions, whereas the corresponding discrete analogue can behave chaotically, as the sketch below illustrates. The actual behavior of the population lies somewhere in between.
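The text does not name a specific model; the classic logistic equation is a standard instance of this contrast, assumed here for illustration. Forward-Euler integration of x' = r*x*(1 - x) settles smoothly at the carrying capacity x = 1, while the discrete logistic map x_{k+1} = r*x_k*(1 - x_k) wanders erratically for r near 4.

```python
# Illustrative assumption: the classic logistic model (not named in the text).
r, x0 = 3.9, 0.2

# Continuous model x' = r*x*(1 - x): small Euler steps converge to x = 1.
x, dt = x0, 1e-3
for _ in range(int(10.0 / dt)):
    x += dt * r * x * (1 - x)
print("ODE state after t = 10:", x)  # close to the equilibrium x = 1

# Discrete analogue x_{k+1} = r*x_k*(1 - x_k): chaotic for r near 4.
x, orbit = x0, []
for _ in range(50):
    x = r * x * (1 - x)
    orbit.append(round(x, 3))
print("last logistic-map iterates:", orbit[-5:])  # erratic orbit in (0, 1)
```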
The aim of Advances in Difference Equations is to report mainly the new developments in the field of difference equations, and their applications in all fields. We will also consider research articles emphasizing the qualitative behavior of solutions of ordinary, partial, delay, fractional, abstract, stochastic, fuzzy, and set-valued differential equations.
Advances in Difference Equations will accept high-quality articles containing original research results and survey articles of exceptional merit.