{"title":"有限样本条件下相位检索的局部图景","authors":"Kaizhao Liu, Zihao Wang, Lei Wu","doi":"arxiv-2311.15221","DOIUrl":null,"url":null,"abstract":"In this paper, we provide a fine-grained analysis of the local landscape of\nphase retrieval under the regime with limited samples. Our aim is to ascertain\nthe minimal sample size necessary to guarantee a benign local landscape\nsurrounding global minima in high dimensions. Let $n$ and $d$ denote the sample\nsize and input dimension, respectively. We first explore the local convexity\nand establish that when $n=o(d\\log d)$, for almost every fixed point in the\nlocal ball, the Hessian matrix must have negative eigenvalues as long as $d$ is\nsufficiently large. Consequently, the local landscape is highly non-convex. We\nnext consider the one-point strong convexity and show that as long as\n$n=\\omega(d)$, with high probability, the landscape is one-point strongly\nconvex in the local annulus: $\\{w\\in\\mathbb{R}^d: o_d(1)\\leqslant\n\\|w-w^*\\|\\leqslant c\\}$, where $w^*$ is the ground truth and $c$ is an absolute\nconstant. This implies that gradient descent initialized from any point in this\ndomain can converge to an $o_d(1)$-loss solution exponentially fast.\nFurthermore, we show that when $n=o(d\\log d)$, there is a radius of\n$\\widetilde\\Theta\\left(\\sqrt{1/d}\\right)$ such that one-point convexity breaks\nin the corresponding smaller local ball. This indicates an impossibility to\nestablish a convergence to exact $w^*$ for gradient descent under limited\nsamples by relying solely on one-point convexity.","PeriodicalId":501330,"journal":{"name":"arXiv - MATH - Statistics Theory","volume":"55 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The Local Landscape of Phase Retrieval Under Limited Samples\",\"authors\":\"Kaizhao Liu, Zihao Wang, Lei Wu\",\"doi\":\"arxiv-2311.15221\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we provide a fine-grained analysis of the local landscape of\\nphase retrieval under the regime with limited samples. Our aim is to ascertain\\nthe minimal sample size necessary to guarantee a benign local landscape\\nsurrounding global minima in high dimensions. Let $n$ and $d$ denote the sample\\nsize and input dimension, respectively. We first explore the local convexity\\nand establish that when $n=o(d\\\\log d)$, for almost every fixed point in the\\nlocal ball, the Hessian matrix must have negative eigenvalues as long as $d$ is\\nsufficiently large. Consequently, the local landscape is highly non-convex. We\\nnext consider the one-point strong convexity and show that as long as\\n$n=\\\\omega(d)$, with high probability, the landscape is one-point strongly\\nconvex in the local annulus: $\\\\{w\\\\in\\\\mathbb{R}^d: o_d(1)\\\\leqslant\\n\\\\|w-w^*\\\\|\\\\leqslant c\\\\}$, where $w^*$ is the ground truth and $c$ is an absolute\\nconstant. This implies that gradient descent initialized from any point in this\\ndomain can converge to an $o_d(1)$-loss solution exponentially fast.\\nFurthermore, we show that when $n=o(d\\\\log d)$, there is a radius of\\n$\\\\widetilde\\\\Theta\\\\left(\\\\sqrt{1/d}\\\\right)$ such that one-point convexity breaks\\nin the corresponding smaller local ball. 
This indicates an impossibility to\\nestablish a convergence to exact $w^*$ for gradient descent under limited\\nsamples by relying solely on one-point convexity.\",\"PeriodicalId\":501330,\"journal\":{\"name\":\"arXiv - MATH - Statistics Theory\",\"volume\":\"55 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-11-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - MATH - Statistics Theory\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2311.15221\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - MATH - Statistics Theory","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2311.15221","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The Local Landscape of Phase Retrieval Under Limited Samples
In this paper, we provide a fine-grained analysis of the local landscape of
phase retrieval in the limited-sample regime. Our aim is to ascertain the
minimal sample size necessary to guarantee a benign local landscape around the
global minima in high dimensions. Let $n$ and $d$ denote the sample size and
the input dimension, respectively.
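For concreteness, a standard instantiation of the phase-retrieval objective
with Gaussian measurements is the quartic empirical loss below; we adopt it
here as a reference model, noting that the paper's exact formulation may
differ in normalization:
$$L_n(w)=\frac{1}{4n}\sum_{i=1}^{n}\Bigl((a_i^\top w)^2-(a_i^\top w^*)^2\Bigr)^2,
\qquad a_i\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,I_d).$$
Its Hessian, $\nabla^2 L_n(w)=\frac{1}{n}\sum_{i=1}^{n}\bigl(3(a_i^\top w)^2-(a_i^\top w^*)^2\bigr)a_i a_i^\top$,
is the object analyzed next.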
We first explore local convexity and establish that, when $n=o(d\log d)$, for
almost every fixed point in the local ball, the Hessian matrix must have
negative eigenvalues once $d$ is sufficiently large. Consequently, the local
landscape is highly non-convex.
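This negative-eigenvalue phenomenon can be probed numerically. The sketch
below is written against the assumed quartic loss above; all parameter
choices (dimension, radius, sample sizes) are ours, not the paper's, so it
should be read as a qualitative probe rather than a verification of the
theorem:

    import numpy as np

    # Estimate the smallest eigenvalue of the Hessian of the (assumed)
    # quartic phase-retrieval loss at a random point in the local ball.
    rng = np.random.default_rng(0)
    d = 200
    w_star = rng.standard_normal(d)
    w_star /= np.linalg.norm(w_star)

    def min_hessian_eig(n, radius=0.1):
        A = rng.standard_normal((n, d))              # Gaussian measurements a_i
        y = (A @ w_star) ** 2                        # phaseless observations y_i
        u = rng.standard_normal(d)
        w = w_star + radius * u / np.linalg.norm(u)  # random point near w*
        # Hessian: (1/n) * sum_i (3 (a_i^T w)^2 - y_i) a_i a_i^T
        coeff = 3.0 * (A @ w) ** 2 - y
        H = (A.T * coeff) @ A / n
        return np.linalg.eigvalsh(H)[0]              # eigvalsh sorts ascending

    # Sample sizes on either side of the d*log(d) threshold.
    for n in (d, 2 * d, int(5 * d * np.log(d))):
        print(n, min_hessian_eig(n))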
We next consider one-point strong convexity and show that, as long as
$n=\omega(d)$, with high probability the landscape is one-point strongly
convex in the local annulus $\{w\in\mathbb{R}^d: o_d(1)\leqslant
\|w-w^*\|\leqslant c\}$, where $w^*$ is the ground truth and $c$ is an
absolute constant. This implies that gradient descent initialized from any
point in this domain converges to an $o_d(1)$-loss solution exponentially
fast.
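For reference, one-point strong convexity with respect to $w^*$ is commonly
stated as (a standard definition; the paper may use a variant)
$$\langle \nabla L_n(w),\, w - w^* \rangle \geqslant \mu\,\|w - w^*\|^2
\quad \text{for all } w \text{ in the annulus}.$$
Combined with a gradient bound $\|\nabla L_n(w)\|\leqslant G\,\|w-w^*\|$, one
step of gradient descent with step size $\eta$ contracts the distance to
$w^*$:
$$\|w_{t+1}-w^*\|^2=\|w_t-\eta\nabla L_n(w_t)-w^*\|^2
\leqslant\bigl(1-2\eta\mu+\eta^2 G^2\bigr)\|w_t-w^*\|^2,$$
which is a strict contraction for $\eta<2\mu/G^2$ and hence yields the
exponential rate, as long as the iterates remain in the annulus.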
Furthermore, we show that when $n=o(d\log d)$, there is a radius of
$\widetilde\Theta\left(\sqrt{1/d}\right)$ such that one-point convexity breaks
in the corresponding smaller local ball. This indicates that, under limited
samples, convergence of gradient descent to the exact $w^*$ cannot be
established by relying solely on one-point convexity.
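As a final illustration, here is a short gradient-descent run on the assumed
quartic loss, initialized in the local annulus (a sketch with our own
parameter choices, not the paper's experiment). The one-point-convexity
argument above guarantees the fast initial phase; it says nothing about
progress inside the $\widetilde\Theta\left(\sqrt{1/d}\right)$ ball when
$n=o(d\log d)$:

    import numpy as np

    # Gradient descent on the (assumed) quartic loss, started in the annulus.
    rng = np.random.default_rng(1)
    d, n = 200, 2000                            # n/d = 10: the n = omega(d) regime
    A = rng.standard_normal((n, d))
    w_star = rng.standard_normal(d)
    w_star /= np.linalg.norm(w_star)
    y = (A @ w_star) ** 2

    def grad(w):
        r = (A @ w) ** 2 - y                    # residuals (a_i^T w)^2 - y_i
        return A.T @ (r * (A @ w)) / n          # (1/n) sum_i r_i (a_i^T w) a_i

    u = rng.standard_normal(d)
    w = w_star + 0.3 * u / np.linalg.norm(u)    # a point in the local annulus
    eta = 0.02                                  # small step size for stability
    for t in range(500):
        w -= eta * grad(w)
    print(np.linalg.norm(w - w_star))           # distance to the ground truth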