{"title":"A dual‐space multilevel kernel‐splitting framework for discrete and continuous convolution","authors":"Shidong Jiang, Leslie Greengard","doi":"10.1002/cpa.22240","DOIUrl":null,"url":null,"abstract":"We introduce a new class of multilevel, adaptive, dual‐space methods for computing fast convolutional transformations. These methods can be applied to a broad class of kernels, from the Green's functions for classical partial differential equations (PDEs) to power functions and radial basis functions such as those used in statistics and machine learning. The DMK (<jats:italic>dual‐space multilevel kernel‐splitting</jats:italic>) framework uses a hierarchy of grids, computing a smoothed interaction at the coarsest level, followed by a sequence of corrections at finer and finer scales until the problem is entirely local, at which point direct summation is applied. Unlike earlier multilevel summation schemes, DMK exploits the fact that the interaction at each scale is diagonalized by a short Fourier transform, permitting the use of separation of variables, but without relying on the FFT. This requires careful attention to the discretization of the Fourier transform at each spatial scale. Like multilevel summation, we make use of a recursive (telescoping) decomposition of the original kernel into the sum of a smooth far‐field kernel, a sequence of difference kernels, and a residual kernel, which plays a role only in leaf boxes in the adaptive tree. At all higher levels in the grid hierarchy, the interaction kernels are designed to be smooth in both physical and Fourier space, admitting efficient Fourier spectral approximations. The DMK framework substantially simplifies the algorithmic structure of the fast multipole method (FMM) and unifies the FMM, Ewald summation, and multilevel summation, achieving speeds comparable to the FFT in work per gridpoint, even in a fully adaptive context. For continuous source distributions, the evaluation of local interactions is further accelerated by approximating the kernel at the finest level as a sum of Gaussians (SOG) with a highly localized remainder. The Gaussian convolutions are calculated using tensor product transforms, and the remainder term is calculated using asymptotic methods. We illustrate the performance of DMK for both continuous and discrete sources with extensive numerical examples in two and three dimensions.","PeriodicalId":10601,"journal":{"name":"Communications on Pure and Applied Mathematics","volume":"42 1","pages":""},"PeriodicalIF":3.1000,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Communications on Pure and Applied Mathematics","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1002/cpa.22240","RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS","Score":null,"Total":0}
Abstract
We introduce a new class of multilevel, adaptive, dual‐space methods for computing fast convolutional transformations. These methods can be applied to a broad class of kernels, from the Green's functions for classical partial differential equations (PDEs) to power functions and radial basis functions such as those used in statistics and machine learning. The DMK (dual‐space multilevel kernel‐splitting) framework uses a hierarchy of grids, computing a smoothed interaction at the coarsest level, followed by a sequence of corrections at finer and finer scales until the problem is entirely local, at which point direct summation is applied. Unlike earlier multilevel summation schemes, DMK exploits the fact that the interaction at each scale is diagonalized by a short Fourier transform, permitting the use of separation of variables, but without relying on the FFT. This requires careful attention to the discretization of the Fourier transform at each spatial scale. As in multilevel summation, we make use of a recursive (telescoping) decomposition of the original kernel into the sum of a smooth far‐field kernel, a sequence of difference kernels, and a residual kernel, which plays a role only in leaf boxes in the adaptive tree. At all higher levels in the grid hierarchy, the interaction kernels are designed to be smooth in both physical and Fourier space, admitting efficient Fourier spectral approximations. The DMK framework substantially simplifies the algorithmic structure of the fast multipole method (FMM) and unifies the FMM, Ewald summation, and multilevel summation, achieving speeds comparable to the FFT in work per gridpoint, even in a fully adaptive context. For continuous source distributions, the evaluation of local interactions is further accelerated by approximating the kernel at the finest level as a sum of Gaussians (SOG) with a highly localized remainder. The Gaussian convolutions are calculated using tensor product transforms, and the remainder term is calculated using asymptotic methods. We illustrate the performance of DMK for both continuous and discrete sources with extensive numerical examples in two and three dimensions.
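To make the telescoping decomposition concrete, the display below is a minimal sketch of one standard way such a split can be written; the smoothing operators T_{\sigma_\ell} and the level widths \sigma_0 > \sigma_1 > \cdots > \sigma_L are illustrative notation for this sketch, not symbols taken from the paper.

\[
K \;=\; \underbrace{T_{\sigma_0} K}_{\text{smooth far‐field kernel}}
\;+\; \sum_{\ell=0}^{L-1} \underbrace{\bigl(T_{\sigma_{\ell+1}} K - T_{\sigma_\ell} K\bigr)}_{\text{difference kernels}}
\;+\; \underbrace{\bigl(K - T_{\sigma_L} K\bigr)}_{\text{residual kernel}} .
\]

The sum telescopes back to K exactly. In the DMK picture, the first term is the smoothed interaction computed on the coarsest grid, each difference kernel is smooth in both physical and Fourier space and localized to one length scale, and the residual kernel acts only inside leaf boxes, where it is handled by direct summation (or, for continuous sources, by the sum‐of‐Gaussians plus asymptotic remainder described above).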
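As a self‐contained toy illustration of the kernel‐splitting idea for discrete sources (not the DMK algorithm itself; the Ewald‐style erf/erfc mollifier and the specific widths are assumptions chosen only for concreteness), the script below splits the 3D Coulomb kernel 1/r into a coarse smooth piece, two difference pieces, and a localized residual, and checks that the pieces sum back to the direct pairwise interaction.

```python
import numpy as np
from scipy.special import erf

def coulomb(r):
    """Full kernel 1/r (r > 0 assumed)."""
    return 1.0 / r

def smoothed(r, sigma):
    """Ewald-style mollified kernel: erf(r / sigma) / r is smooth at r = 0."""
    return erf(r / sigma) / r

rng = np.random.default_rng(0)
x = rng.uniform(size=(200, 3))          # source/target points in the unit cube
q = rng.standard_normal(200)            # source strengths

# Pairwise distances; self-interactions are excluded by the mask below.
d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
mask = ~np.eye(len(x), dtype=bool)

def pairwise_sum(kernel_vals):
    """Potential at every target from all other sources."""
    return np.where(mask, kernel_vals, 0.0) @ q

# Telescoping split: coarse smooth kernel, difference kernels, residual.
sigmas = [0.5, 0.25, 0.125]             # illustrative widths, coarse to fine
with np.errstate(divide="ignore", invalid="ignore"):
    K = coulomb(d)
    K_smooth = [smoothed(d, s) for s in sigmas]

far   = pairwise_sum(K_smooth[0])
diffs = sum(pairwise_sum(K_smooth[l + 1] - K_smooth[l])
            for l in range(len(sigmas) - 1))
resid = pairwise_sum(K - K_smooth[-1])  # erfc(r/sigma_L)/r: sharply localized

direct = pairwise_sum(K)
print("max split error:", np.max(np.abs(far + diffs + resid - direct)))
```

The script only verifies the decomposition by brute force; the point of DMK is that the smooth and difference pieces admit efficient Fourier spectral approximations on a grid hierarchy, while only the residual piece requires near‐neighbor work in leaf boxes.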