{"title":"普通非稳态核的对数线性缩放高斯过程回归","authors":"P. Michael Kielstra, Michael Lindsey","doi":"arxiv-2407.03608","DOIUrl":null,"url":null,"abstract":"We introduce a fast algorithm for Gaussian process regression in low\ndimensions, applicable to a widely-used family of non-stationary kernels. The\nnon-stationarity of these kernels is induced by arbitrary spatially-varying\nvertical and horizontal scales. In particular, any stationary kernel can be\naccommodated as a special case, and we focus especially on the generalization\nof the standard Mat\\'ern kernel. Our subroutine for kernel matrix-vector\nmultiplications scales almost optimally as $O(N\\log N)$, where $N$ is the\nnumber of regression points. Like the recently developed equispaced Fourier\nGaussian process (EFGP) methodology, which is applicable only to stationary\nkernels, our approach exploits non-uniform fast Fourier transforms (NUFFTs). We\noffer a complete analysis controlling the approximation error of our method,\nand we validate the method's practical performance with numerical experiments.\nIn particular we demonstrate improved scalability compared to to\nstate-of-the-art rank-structured approaches in spatial dimension $d>1$.","PeriodicalId":501215,"journal":{"name":"arXiv - STAT - Computation","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Gaussian process regression with log-linear scaling for common non-stationary kernels\",\"authors\":\"P. Michael Kielstra, Michael Lindsey\",\"doi\":\"arxiv-2407.03608\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We introduce a fast algorithm for Gaussian process regression in low\\ndimensions, applicable to a widely-used family of non-stationary kernels. The\\nnon-stationarity of these kernels is induced by arbitrary spatially-varying\\nvertical and horizontal scales. In particular, any stationary kernel can be\\naccommodated as a special case, and we focus especially on the generalization\\nof the standard Mat\\\\'ern kernel. Our subroutine for kernel matrix-vector\\nmultiplications scales almost optimally as $O(N\\\\log N)$, where $N$ is the\\nnumber of regression points. Like the recently developed equispaced Fourier\\nGaussian process (EFGP) methodology, which is applicable only to stationary\\nkernels, our approach exploits non-uniform fast Fourier transforms (NUFFTs). 
We\\noffer a complete analysis controlling the approximation error of our method,\\nand we validate the method's practical performance with numerical experiments.\\nIn particular we demonstrate improved scalability compared to to\\nstate-of-the-art rank-structured approaches in spatial dimension $d>1$.\",\"PeriodicalId\":501215,\"journal\":{\"name\":\"arXiv - STAT - Computation\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - STAT - Computation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2407.03608\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - STAT - Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2407.03608","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Gaussian process regression with log-linear scaling for common non-stationary kernels
We introduce a fast algorithm for Gaussian process regression in low
dimensions, applicable to a widely-used family of non-stationary kernels. The
non-stationarity of these kernels is induced by arbitrary spatially-varying
vertical and horizontal scales. In particular, any stationary kernel can be
accommodated as a special case, and we focus especially on the generalization
of the standard Mat\'ern kernel. Our subroutine for kernel matrix-vector
multiplications scales almost optimally as $O(N\log N)$, where $N$ is the
number of regression points. Like the recently developed equispaced Fourier
Gaussian process (EFGP) methodology, which is applicable only to stationary
kernels, our approach exploits non-uniform fast Fourier transforms (NUFFTs). We
offer a complete analysis controlling the approximation error of our method,
and we validate the method's practical performance with numerical experiments.
In particular, we demonstrate improved scalability compared to
state-of-the-art rank-structured approaches in spatial dimension $d>1$.
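
The abstract does not spell out the kernel family, but its description (spatially varying vertical and horizontal scales, stationary kernels recovered as a special case, a generalized Matérn) matches the standard Gibbs / Paciorek-Schervish style of non-stationary kernel. The sketch below is a minimal NumPy reference, under that assumption, of such a kernel and of exact GP regression with it; the scale functions sigma(x) and ell(x) are hypothetical placeholders. It illustrates the regression problem the paper accelerates, using a naive O(N^2)-storage, O(N^3)-solve baseline, and is not the paper's NUFFT-based O(N log N) algorithm.

```python
import numpy as np

# Hypothetical spatially-varying "vertical" (amplitude) and "horizontal"
# (length) scale functions; any smooth positive functions could be used.
def sigma(x):
    return 1.0 + 0.5 * np.sin(2.0 * np.pi * x)

def ell(x):
    return 0.2 + 0.1 * x

def gibbs_kernel(x, y):
    """Non-stationary squared-exponential (Gibbs-type) kernel with
    spatially-varying scales; reduces to a stationary kernel when
    sigma and ell are constant."""
    lx, ly = ell(x)[:, None], ell(y)[None, :]
    amp = sigma(x)[:, None] * sigma(y)[None, :]
    denom = lx**2 + ly**2
    return amp * np.sqrt(2.0 * lx * ly / denom) \
               * np.exp(-(x[:, None] - y[None, :])**2 / denom)

rng = np.random.default_rng(0)
N = 500
x_train = np.sort(rng.uniform(0.0, 1.0, N))
y_train = np.sin(6.0 * x_train) + 0.1 * rng.standard_normal(N)
x_test = np.linspace(0.0, 1.0, 200)

noise = 1e-2
K = gibbs_kernel(x_train, x_train) + noise * np.eye(N)   # O(N^2) storage
L = np.linalg.cholesky(K)                                # O(N^3) direct solve
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
posterior_mean = gibbs_kernel(x_test, x_train) @ alpha   # GP posterior mean
```

The dense Cholesky factorization above is exactly the step whose cost the paper's fast kernel matrix-vector multiplication is designed to avoid: with an almost-linear matvec, the linear system can instead be solved iteratively (e.g., by conjugate gradients) in roughly O(N log N) work per iteration.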