Title: Modeling Advection-Dominated Flows with Space-Local Reduced-Order Models
Authors: Toby van Gastelen, Wouter Edeling, Benjamin Sanderse
DOI: arxiv-2409.08793 (https://doi.org/arxiv-2409.08793)
Published: 2024-09-13, arXiv - MATH - Numerical Analysis

Abstract: Reduced-order models (ROMs) are often used to accelerate the simulation of large physical systems. However, traditional ROM techniques, such as those based on proper orthogonal decomposition (POD), often struggle with advection-dominated flows due to the slow decay of singular values. This results in high computational costs and potential instabilities. This paper proposes a novel approach using space-local POD to address the challenges arising from the slow singular value decay. Instead of global basis functions, our method employs local basis functions that are applied across the domain, analogous to the finite element method. By dividing the domain into subdomains and applying a space-local POD within each subdomain, we achieve a representation that is sparse and that generalizes better outside the training regime. This allows the use of a larger number of basis functions without prohibitive computational costs. To ensure smoothness across subdomain boundaries, we introduce overlapping subdomains inspired by the partition of unity method. Our approach is validated through simulations of the 1D advection equation discretized using a central difference scheme. We demonstrate that using our space-local approach we obtain a ROM that generalizes better to flow conditions which are not part of the training data. In addition, we show that the constructed ROM inherits the energy conservation and non-linear stability properties from the full-order model. Finally, we find that using a space-local ROM allows for larger time steps.
{"title":"Measurability and continuity of parametric low-rank approximation in Hilbert spaces: linear operators and random variables","authors":"Nicola Rares Franco","doi":"arxiv-2409.09102","DOIUrl":"https://doi.org/arxiv-2409.09102","url":null,"abstract":"We develop a unified theoretical framework for low-rank approximation\u0000techniques in parametric settings, where traditional methods like Singular\u0000Value Decomposition (SVD), Proper Orthogonal Decomposition (POD), and Principal\u0000Component Analysis (PCA) face significant challenges due to repeated queries.\u0000Applications include, e.g., the numerical treatment of parameter-dependent\u0000partial differential equations (PDEs), where operators vary with parameters,\u0000and the statistical analysis of longitudinal data, where complex measurements\u0000like audio signals and images are collected over time. Although the applied\u0000literature has introduced partial solutions through adaptive algorithms, these\u0000advancements lack a comprehensive mathematical foundation. As a result, key\u0000theoretical questions -- such as the existence and parametric regularity of\u0000optimal low-rank approximants -- remain inadequately addressed. Our goal is to\u0000bridge this gap between theory and practice by establishing a rigorous\u0000framework for parametric low-rank approximation under minimal assumptions,\u0000specifically focusing on cases where parameterizations are either measurable or\u0000continuous. The analysis is carried out within the context of separable Hilbert\u0000spaces, ensuring applicability to both finite and infinite-dimensional\u0000settings. Finally, connections to recently emerging trends in the Deep Learning\u0000literature, relevant for engineering and data science, are also discussed.","PeriodicalId":501162,"journal":{"name":"arXiv - MATH - Numerical Analysis","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Rational-WENO: A lightweight, physically-consistent three-point weighted essentially non-oscillatory scheme
Authors: Shantanu Shahane, Sheide Chammas, Deniz A. Bezgin, Aaron B. Buhendwa, Steffen J. Schmidt, Nikolaus A. Adams, Spencer H. Bryngelson, Yi-Fan Chen, Qing Wang, Fei Sha, Leonardo Zepeda-Núñez
DOI: arxiv-2409.09217 (https://doi.org/arxiv-2409.09217)
Published: 2024-09-13, arXiv - MATH - Numerical Analysis

Abstract: Conventional WENO3 methods are known to be highly dissipative at lower resolutions, introducing significant errors in the pre-asymptotic regime. In this paper, we employ a rational neural network to accurately estimate the local smoothness of the solution, dynamically adapting the stencil weights based on local solution features. As rational neural networks can represent fast transitions between smooth and sharp regimes, this approach achieves a granular reconstruction with significantly reduced dissipation, improving the accuracy of the simulation. The network is trained offline on a carefully chosen dataset of analytical functions, bypassing the need for differentiable solvers. We also propose a robust model selection criterion based on estimates of the interpolation's convergence order on a set of test functions, which correlates better with the model performance in downstream tasks. We demonstrate the effectiveness of our approach on several one-, two-, and three-dimensional fluid flow problems: our scheme generalizes across grid resolutions while handling smooth and discontinuous solutions. In most cases, our rational network-based scheme achieves higher accuracy than conventional WENO3 with the same stencil size, and in a few of them, it achieves accuracy comparable to WENO5, which uses a larger stencil.
{"title":"The Fundamental Subspaces of Ensemble Kalman Inversion","authors":"Elizabeth Qian, Christopher Beattie","doi":"arxiv-2409.08862","DOIUrl":"https://doi.org/arxiv-2409.08862","url":null,"abstract":"Ensemble Kalman Inversion (EKI) methods are a family of iterative methods for\u0000solving weighted least-squares problems, especially those arising in scientific\u0000and engineering inverse problems in which unknown parameters or states are\u0000estimated from observed data by minimizing the weighted square norm of the data\u0000misfit. Implementation of EKI requires only evaluation of the forward model\u0000mapping the unknown to the data, and does not require derivatives or adjoints\u0000of the forward model. The methods therefore offer an attractive alternative to\u0000gradient-based optimization approaches in large-scale inverse problems where\u0000evaluating derivatives or adjoints of the forward model is computationally\u0000intractable. This work presents a new analysis of the behavior of both\u0000deterministic and stochastic versions of basic EKI for linear observation\u0000operators, resulting in a natural interpretation of EKI's convergence\u0000properties in terms of ``fundamental subspaces'' analogous to Strang's\u0000fundamental subspaces of linear algebra. Our analysis directly examines the\u0000discrete EKI iterations instead of their continuous-time limits considered in\u0000previous analyses, and provides spectral decompositions that define six\u0000fundamental subspaces of EKI spanning both observation and state spaces. This\u0000approach verifies convergence rates previously derived for continuous-time\u0000limits, and yields new results describing both deterministic and stochastic EKI\u0000convergence behavior with respect to the standard minimum-norm weighted least\u0000squares solution in terms of the fundamental subspaces. Numerical experiments\u0000illustrate our theoretical results.","PeriodicalId":501162,"journal":{"name":"arXiv - MATH - Numerical Analysis","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Graph grammars and Physics Informed Neural Networks for simulating of pollution propagation on Spitzbergen
Authors: Maciej Sikora, Albert Oliver-Serra, Leszek Siwik, Natalia Leszczyńska, Tomasz Maciej Ciesielski, Eirik Valseth, Jacek Leszczyński, Anna Paszyńska, Maciej Paszyński
DOI: arxiv-2409.08799 (https://doi.org/arxiv-2409.08799)
Published: 2024-09-13, arXiv - MATH - Numerical Analysis

Abstract: In this paper, we present two computational methods for performing simulations of pollution propagation described by advection-diffusion equations. The first method employs graph grammars to describe the generation process of the computational mesh used in simulations with the meshless solver of the three-dimensional finite element method. The graph transformation rules express the three-dimensional Rivara longest-edge refinement algorithm. This solver is used for an exemplary application: performing three-dimensional simulations of pollution generation by a coal-burning power plant and its propagation in the city of Longyearbyen, the capital of Spitsbergen. The second computational code is based on the Physics Informed Neural Networks (PINN) method. It is used to calculate the dissipation of the pollution along the valley in which the city of Longyearbyen is located. We discuss the instantiation and execution of the PINN method using a Google Colab implementation, as well as the benefits and limitations of the PINN implementation.
Title: Matrix perturbation analysis of methods for extracting singular values from approximate singular subspaces
Authors: Lorenzo Lazzarino, Hussam Al Daas, Yuji Nakatsukasa
DOI: arxiv-2409.09187 (https://doi.org/arxiv-2409.09187)
Published: 2024-09-13, arXiv - MATH - Numerical Analysis

Abstract: Given (orthonormal) approximations $\tilde{U}$ and $\tilde{V}$ to the left and right subspaces spanned by the leading singular vectors of a matrix $A$, we discuss methods to approximate the leading singular values of $A$ and study their accuracy. In particular, we focus our analysis on the generalized Nyström approximation, as, surprisingly, it is able to obtain significantly better accuracy than classical methods, namely Rayleigh-Ritz and (one-sided) projected SVD. A key idea of the analysis is to view the methods as finding the exact singular values of a perturbation of $A$. In this context, we derive a matrix perturbation result that exploits the structure of such a $2\times 2$ block matrix perturbation. Furthermore, we extend it to block tridiagonal matrices. We then obtain bounds on the accuracy of the extracted singular values. This leads to sharp bounds that predict well the approximation error trends and explain the difference in the behavior of these methods. Finally, we present an approach to derive an a posteriori version of those bounds, which is more amenable to computation in practice.
{"title":"Estimatable variation neural networks and their application to ODEs and scalar hyperbolic conservation laws","authors":"Mária Lukáčová-Medviďová, Simon Schneider","doi":"arxiv-2409.08909","DOIUrl":"https://doi.org/arxiv-2409.08909","url":null,"abstract":"We introduce estimatable variation neural networks (EVNNs), a class of neural\u0000networks that allow a computationally cheap estimate on the $BV$ norm motivated\u0000by the space $BMV$ of functions with bounded M-variation. We prove a universal\u0000approximation theorem for EVNNs and discuss possible implementations. We\u0000construct sequences of loss functionals for ODEs and scalar hyperbolic\u0000conservation laws for which a vanishing loss leads to convergence. Moreover, we\u0000show the existence of sequences of loss minimizing neural networks if the\u0000solution is an element of $BMV$. Several numerical test cases illustrate that\u0000it is possible to use standard techniques to minimize these loss functionals\u0000for EVNNs.","PeriodicalId":501162,"journal":{"name":"arXiv - MATH - Numerical Analysis","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust optimal design of large-scale Bayesian nonlinear inverse problems","authors":"Abhijit Chowdhary, Ahmed Attia, Alen Alexanderian","doi":"arxiv-2409.09137","DOIUrl":"https://doi.org/arxiv-2409.09137","url":null,"abstract":"We consider robust optimal experimental design (ROED) for nonlinear Bayesian\u0000inverse problems governed by partial differential equations (PDEs). An optimal\u0000design is one that maximizes some utility quantifying the quality of the\u0000solution of an inverse problem. However, the optimal design is dependent on\u0000elements of the inverse problem such as the simulation model, the prior, or the\u0000measurement error model. ROED aims to produce an optimal design that is aware\u0000of the additional uncertainties encoded in the inverse problem and remains\u0000optimal even after variations in them. We follow a worst-case scenario approach\u0000to develop a new framework for robust optimal design of nonlinear Bayesian\u0000inverse problems. The proposed framework a) is scalable and designed for\u0000infinite-dimensional Bayesian nonlinear inverse problems constrained by PDEs;\u0000b) develops efficient approximations of the utility, namely, the expected\u0000information gain; c) employs eigenvalue sensitivity techniques to develop\u0000analytical forms and efficient evaluation methods of the gradient of the\u0000utility with respect to the uncertainties we wish to be robust against; and d)\u0000employs a probabilistic optimization paradigm that properly defines and\u0000efficiently solves the resulting combinatorial max-min optimization problem.\u0000The effectiveness of the proposed approach is illustrated for optimal sensor\u0000placement problem in an inverse problem governed by an elliptic PDE.","PeriodicalId":501162,"journal":{"name":"arXiv - MATH - Numerical Analysis","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hybrid LSMR algorithms for large-scale general-form regularization","authors":"Yanfei Yang","doi":"arxiv-2409.09104","DOIUrl":"https://doi.org/arxiv-2409.09104","url":null,"abstract":"The hybrid LSMR algorithm is proposed for large-scale general-form\u0000regularization. It is based on a Krylov subspace projection method where the\u0000matrix $A$ is first projected onto a subspace, typically a Krylov subspace,\u0000which is implemented via the Golub-Kahan bidiagonalization process applied to\u0000$A$, with starting vector $b$. Then a regularization term is employed to the\u0000projections. Finally, an iterative algorithm is exploited to solve a least\u0000squares problem with constraints. The resulting algorithms are called the\u0000{hybrid LSMR algorithm}. At every step, we exploit LSQR algorithm to solve the\u0000inner least squares problem, which is proven to become better conditioned as\u0000the number of $k$ increases, so that the LSQR algorithm converges faster. We\u0000prove how to select the stopping tolerances for LSQR in order to guarantee that\u0000the regularized solution obtained by iteratively computing the inner least\u0000squares problems and the one obtained by exactly computing the inner least\u0000squares problems have the same accuracy. Numerical experiments illustrate that\u0000the best regularized solution by the hybrid LSMR algorithm is as accurate as\u0000that by JBDQR which is a joint bidiagonalization based algorithm.","PeriodicalId":501162,"journal":{"name":"arXiv - MATH - Numerical Analysis","volume":"101 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Kraus is King: High-order Completely Positive and Trace Preserving (CPTP) Low Rank Method for the Lindblad Master Equation","authors":"Daniel Appelo, Yingda Cheng","doi":"arxiv-2409.08898","DOIUrl":"https://doi.org/arxiv-2409.08898","url":null,"abstract":"We design high order accurate methods that exploit low rank structure in the\u0000density matrix while respecting the essential structure of the Lindblad\u0000equation. Our methods preserves complete positivity and are trace preserving.","PeriodicalId":501162,"journal":{"name":"arXiv - MATH - Numerical Analysis","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142259189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}