Gradient Descent Provably Escapes Saddle Points in the Training of Shallow ReLU Networks
Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek
Journal of Optimization Theory and Applications (published 2024-09-10)
DOI: 10.1007/s10957-024-02513-3 (https://doi.org/10.1007/s10957-024-02513-3)
Citations: 0
Abstract
Dynamical systems theory has recently been applied in optimization to prove that gradient descent algorithms bypass so-called strict saddle points of the loss function. However, in many modern machine learning applications, the required regularity conditions are not satisfied. In this paper, we prove a variant of the relevant dynamical systems result, a center-stable manifold theorem, in which we relax some of the regularity requirements. We explore its relevance for various machine learning tasks, with a particular focus on shallow rectified linear unit (ReLU) and leaky ReLU networks with scalar input. Building on a detailed examination of critical points of the square integral loss function for shallow ReLU and leaky ReLU networks relative to an affine target function, we show that gradient descent circumvents most saddle points. Furthermore, we prove convergence to global minima under favourable initialization conditions, quantified by an explicit threshold on the limiting loss.
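To make the setting concrete, the following minimal Python sketch (not taken from the paper; the network width, target coefficients, step size, iteration count, and quadrature grid are all illustrative assumptions) runs plain gradient descent on a discretized version of the square integral loss of a shallow ReLU network with scalar input against an affine target on [0, 1]. It only illustrates the kind of objective and gradient descent iteration the result is about.

```python
# Minimal sketch (not from the paper): gradient descent on the square integral
# loss of a shallow ReLU network with scalar input against an affine target.
# Width, target coefficients, step size, and the grid are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Affine target f(x) = a*x + b on [0, 1] (illustrative coefficients).
a, b = 2.0, -0.5
def target(x):
    return a * x + b

# Shallow ReLU network N(x) = sum_j v_j * relu(w_j * x + c_j).
width = 8
w = rng.normal(size=width)          # input weights
c = rng.normal(size=width)          # biases
v = rng.normal(size=width)          # output weights

# Approximate the square integral loss on [0, 1] with a uniform grid.
x = np.linspace(0.0, 1.0, 1000)

def loss_and_grads(w, c, v):
    pre = np.outer(x, w) + c        # pre-activations, shape (n_grid, width)
    act = np.maximum(pre, 0.0)      # ReLU activations
    residual = act @ v - target(x)  # N(x) - f(x) on the grid
    loss = np.mean(residual ** 2)   # approximates the integral of (N - f)^2 over [0, 1]
    mask = (pre > 0.0).astype(float)
    grad_v = 2.0 * np.mean(residual[:, None] * act, axis=0)
    grad_w = 2.0 * np.mean(residual[:, None] * mask * v * x[:, None], axis=0)
    grad_c = 2.0 * np.mean(residual[:, None] * mask * v, axis=0)
    return loss, grad_w, grad_c, grad_v

# Plain gradient descent with a fixed step size.
step = 0.05
for it in range(5000):
    loss, gw, gc, gv = loss_and_grads(w, c, v)
    w -= step * gw
    c -= step * gc
    v -= step * gv

print(f"final loss ~ {loss:.6f}")
```

Whether such a run ends at a global minimum depends on the initialization; the paper's quantitative threshold on the limiting loss makes this dependence precise.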
Journal Introduction:
The Journal of Optimization Theory and Applications is devoted to the publication of carefully selected regular papers, invited papers, survey papers, technical notes, book notices, and forums that cover mathematical optimization techniques and their applications to science and engineering. Typical theoretical areas include linear, nonlinear, mathematical, and dynamic programming. Among the areas of application covered are mathematical economics, mathematical physics and biology, and aerospace, chemical, civil, electrical, and mechanical engineering.