{"title":"研究聚光灯","authors":"Misha E. Kilmer","doi":"10.1137/22n975561","DOIUrl":null,"url":null,"abstract":"SIAM Review, Volume 64, Issue 4, Page 919-919, November 2022. <br/> The first Research Spotlights article in this issue is concerned with filtering, a task of paramount importance in a great many applications such as numerical weather prediction and geophysical data assimilation. Authors Alessio Spantini, Ricardo Baptista, and Youssef M. Marzouk, in their article “Coupling Techniques for Nonlinear Ensemble Filtering,” describe discrete-time filtering as the act of characterizing the sequence of conditional distributions of the latent field at observation times, given all currently available measurements. Despite the existing literature on filtering, issues such as high-dimensional state spaces and sparse (in both space and time) observations still prove formidable in practice. The traditional approach of ensemble-based data assimilation is the ensemble Kalman filter (EnKF), involving a prediction (forecasting) step followed by an analysis step. However, the authors note an intrinsic bias of EnKF due to the linearity of the transformation, estimated under Gaussian assumptions, that is used in the analysis step, which limits its accuracy. To overcome this, they propose two non-Gaussian generalizations of the EnKF---the so-called stochastic and deterministic map filters---using nonlinear transformations derived from couplings between the forecast distribution and the filtering distribution. What is crucial is that the transformations “can be estimated efficiently...perhaps using only convex optimization,” that they “are easy to `localize' in high dimensions,” and that their computation “should not become increasingly challenging as the variance of the observation noise decreases.” Following a comprehensive description of their new approaches, the authors demonstrate numerically the superiority of their stochastic map filter approach over traditional EnKF. The subsequent discussion offers the reader several jumping off points for future research. Recovery of a sparse solution to a large-scale optimization problem is another ubiquitous problem arising in many applications such as image reconstruction, signal processing, and machine learning. The cost functional typically includes a regularization term in the form of an $\\ell_1$ norm term on the solution and/or regularized solution to enforce sparsity. Designing suitable algorithms for such recovery problems is the subject of our second Research Spotlights article. In “Sparse Approximations with Interior Point Methods,” authors Valentina De Simone, Daniela di Serafino, Jacek Gondzio, Spyridon Pougkakiotis, and Marco Viola set out to correct the misconception that first-order methods are to be preferred over second-order methods out of hand. Through case studies, they offer evidence that interior point methods (IPMs) which are constructed to “exploit special features of the problems in the linear algebra of IPMs” and which are designed “to take advantage of the expected sparsity of the optimal solution” can in fact be the method of choice for solving this class of optimization problems. The key to their approach is a reformulation of the original sparse approximation problem to one which is seemingly larger but which has properties upon which one can capitalize for computational gain. 
Recovery of a sparse solution to a large-scale optimization problem is another ubiquitous task, arising in many applications such as image reconstruction, signal processing, and machine learning. The cost functional typically includes a regularization term, in the form of an $\ell_1$ norm of the solution and/or of a transformed solution, to enforce sparsity.
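As a representative (and deliberately simple) instance of such a cost functional, consider the standard $\ell_1$-regularized least-squares problem

$$ \min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\|Ax - b\|_2^2 + \tau \|x\|_1, $$

where $A$ is the measurement or forward operator, $b$ is the observed data, and the parameter $\tau > 0$ trades data fidelity against the sparsity of $x$. This is only one member of the class of formulations at issue here; the article treats this and related variants.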
Designing suitable algorithms for such recovery problems is the subject of our second Research Spotlights article. In "Sparse Approximations with Interior Point Methods," authors Valentina De Simone, Daniela di Serafino, Jacek Gondzio, Spyridon Pougkakiotis, and Marco Viola set out to correct the misconception that first-order methods should, out of hand, be preferred over second-order methods. Through case studies, they offer evidence that interior point methods (IPMs) that are constructed to "exploit special features of the problems in the linear algebra of IPMs" and that are designed "to take advantage of the expected sparsity of the optimal solution" can in fact be the method of choice for solving this class of optimization problems. The key to their approach is a reformulation of the original sparse approximation problem as one that is seemingly larger but that has properties upon which one can capitalize for computational gain. For each of four representative applications, the authors show how to exploit the problem-specific structure of the linear systems that must be solved at each iteration. These efforts are complemented by leveraging the expected sparsity: heuristics are employed to drop near-zero variables, thereby replacing very large, ill-conditioned intermediate systems with smaller, better-conditioned ones. The authors conclude that time invested in tailoring solvers to the structure admitted by the reformulated problem, and in taking advantage of the expected sparsity, may be well spent: their demonstrations show that IPMs can have a "noticeable advantage" over state-of-the-art first-order methods for sparse approximation problems.
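The flavor of the variable-dropping heuristic can be sketched as follows. This is an illustrative fragment, not the authors' code: the function name, the dense-NumPy representation, and the fixed threshold are assumptions made for the example, and the authors' actual criteria are more refined.

```python
import numpy as np

def drop_near_zero(A, x, tol=1e-8):
    """Illustrative pruning step for a sparsity-exploiting IPM.

    Variables whose current iterate is numerically zero are removed,
    so that subsequent Newton systems are smaller and better conditioned.
    Returns the reduced matrix, the reduced iterate, and the kept indices,
    which are needed to scatter the reduced solution back to full length.
    """
    keep = np.abs(x) > tol
    return A[:, keep], x[keep], np.flatnonzero(keep)

# Hypothetical usage inside an IPM loop:
# A_red, x_red, idx = drop_near_zero(A, x_k)
# ... continue interior point iterations on the reduced problem ...
# x_full = np.zeros(A.shape[1]); x_full[idx] = x_red
```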
Journal Introduction:
Survey and Review features papers that provide an integrative and current viewpoint on important topics in applied or computational mathematics and scientific computing. These papers aim to offer a comprehensive perspective on their subject matter.
Research Spotlights publishes concise research papers in applied and computational mathematics that are of interest to a wide range of readers of SIAM Review. The papers in this section present innovative ideas that are clearly explained and well motivated. They stand out from regular publications in the specialized SIAM journals through their accessibility and their potential for widespread and long-lasting influence.