Randomized learning of the second-moment matrix of a smooth function

Armin Eftekhari, M. Wakin, Ping Li, P. Constantine

Foundations of Data Science, DOI: 10.3934/fods.2019015
Consider an open set $\mathbb{D}\subseteq\mathbb{R}^n$, equipped with a probability measure $\mu$. An important characteristic of a smooth function $f:\mathbb{D}\rightarrow\mathbb{R}$ is its \emph{second-moment matrix} $\Sigma_{\mu}:=\int \nabla f(x) \nabla f(x)^* \mu(dx) \in\mathbb{R}^{n\times n}$, where $\nabla f(x)\in\mathbb{R}^n$ is the gradient of $f(\cdot)$ at $x\in\mathbb{D}$ and $*$ stands for transpose. For instance, the span of the leading $r$ eigenvectors of $\Sigma_{\mu}$ forms an \emph{active subspace} of $f(\cdot)$, which contains the directions along which $f(\cdot)$ changes the most and is of particular interest in \emph{ridge approximation}. In this work, we propose a simple algorithm for estimating $\Sigma_{\mu}$ from random point evaluations of $f(\cdot)$ \emph{without} imposing any structural assumptions on $\Sigma_{\mu}$. Theoretical guarantees for this algorithm are established with the aid of the same technical tools that have proved valuable in the context of covariance matrix estimation from partial measurements.
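To make the object concrete, the following sketch estimates $\Sigma_{\mu}$ by plain Monte Carlo: draw points from $\mu$, approximate each gradient by central finite differences of point evaluations, and average the outer products $\nabla f(x)\nabla f(x)^*$. This is an illustrative baseline, not the randomized algorithm proposed in the paper; the function names (`fd_gradient`, `second_moment_matrix`) and the ridge test function are assumptions for the example.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    # Approximate the gradient of f at x by central finite differences,
    # using only point evaluations of f (as in the paper's setting).
    n = x.size
    g = np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def second_moment_matrix(f, sample_mu, n, num_samples=2000, seed=0):
    # Monte Carlo estimate of Sigma_mu = E[grad f(x) grad f(x)^T], x ~ mu.
    rng = np.random.default_rng(seed)
    sigma = np.zeros((n, n))
    for _ in range(num_samples):
        x = sample_mu(rng)
        g = fd_gradient(f, x)
        sigma += np.outer(g, g)
    return sigma / num_samples

# Example: f(x) = sin(a^T x) is a ridge function, so its second-moment
# matrix has rank one and its leading eigenvector is parallel to a.
n = 5
a = np.ones(n) / np.sqrt(n)
f = lambda x: np.sin(a @ x)
sigma = second_moment_matrix(f, lambda rng: rng.uniform(-1.0, 1.0, n), n)
eigvals, eigvecs = np.linalg.eigh(sigma)  # ascending eigenvalues
# The span of the top eigenvector is the 1-D active subspace of f.
```

For this ridge example, $\nabla f(x) = \cos(a^* x)\, a$, so $\Sigma_{\mu} = \mathbb{E}[\cos^2(a^* x)]\, a a^*$ is exactly rank one; the estimated leading eigenvector recovers the ridge direction $a$ up to sign, while the remaining eigenvalues are numerically zero.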