{"title":"A perspective on machine learning methods in turbulence modeling","authors":"Andrea Beck, Marius Kurz","doi":"10.1002/gamm.202100002","DOIUrl":"10.1002/gamm.202100002","url":null,"abstract":"<p>This work presents a review of the current state of research in data-driven turbulence closure modeling. It offers a perspective on the challenges and open issues but also on the advantages and promises of machine learning (ML) methods applied to parameter estimation, model identification, closure term reconstruction, and beyond, mostly from the perspective of large Eddy simulation and related techniques. We stress that consistency of the training data, the model, the underlying physics, and the discretization is a key issue that needs to be considered for a successful ML-augmented modeling strategy. In order to make the discussion useful for non-experts in either field, we introduce both the modeling problem in turbulence as well as the prominent ML paradigms and methods in a concise and self-consistent manner. In this study, we present a survey of the current data-driven model concepts and methods, highlight important developments, and put them into the context of the discussed challenges.</p>","PeriodicalId":53634,"journal":{"name":"GAMM Mitteilungen","volume":"44 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/gamm.202100002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76272585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machine learning for material characterization with an application for predicting mechanical properties","authors":"Anke Stoll, Peter Benner","doi":"10.1002/gamm.202100003","DOIUrl":"10.1002/gamm.202100003","url":null,"abstract":"<p>Currently, the growth of material data from experiments and simulations is expanding beyond processable amounts. This makes the development of new data-driven methods for the discovery of patterns among multiple lengthscales and time-scales and structure-property relationships essential. These data-driven approaches show enormous promise within materials science. The following review covers machine learning (ML) applications for metallic material characterization. Many parameters associated with the processing and the structure of materials affect the properties and the performance of manufactured components. Thus, this study is an attempt to investigate the usefulness of ML methods for material property prediction. Material characteristics such as strength, toughness, hardness, brittleness, or ductility are relevant to categorize a material or component according to their quality. In industry, material tests like tensile tests, compression tests, or creep tests are often time consuming and expensive to perform. Therefore, the application of ML approaches is considered helpful for an easier generation of material property information. This study also gives an application of ML methods on small punch test (SPT) data for the determination of the property ultimate tensile strength for various materials. A strong correlation between SPT data and tensile test data was found which ultimately allows to replace more costly tests by simple and fast tests in combination with ML.</p>","PeriodicalId":53634,"journal":{"name":"GAMM Mitteilungen","volume":"44 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/gamm.202100003","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74302427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Topical Issue Applied and Numerical Linear Algebra (2/2)","authors":"Stefan Güttel, Jörg Liesen","doi":"10.1002/gamm.202000021","DOIUrl":"10.1002/gamm.202000021","url":null,"abstract":"<p>The present special issue of the GAMM Mitteilungen, which is the second of a two-part series, contains contributions on the topic of Applied and Numerical Linear Algebra, compiled by the GAMM Activity Group of the same name. The Activity Group has already contributed special issues to the GAMM Mitteilungen in 2004, 2006, and 2013. Because of the rapid development both in the theoretical foundations and the applicability of numerical linear algebra techniques throughout science and engineering, it is time again to survey the field and present the results to the readers of the GAMM Mitteilungen. We are happy that eight authors or teams of authors have accepted our invitation to report on recent research highlights in Applied Numerical Linear Algebra, and to point out the relevant literature as well as software.</p><p>This work by Federico Poloni reviews a family of algorithms for Lyapunov- and Riccati-type equations which are all related by the idea of doubling. The algorithms are compared and their connections are highlighted. The paper also discusses open problems relating to their theory.</p>","PeriodicalId":53634,"journal":{"name":"GAMM Mitteilungen","volume":"43 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/gamm.202000021","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76920951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Preconditioners for Krylov subspace methods: An overview","authors":"John W. Pearson, Jennifer Pestana","doi":"10.1002/gamm.202000015","DOIUrl":"10.1002/gamm.202000015","url":null,"abstract":"<p>When simulating a mechanism from science or engineering, or an industrial process, one is frequently required to construct a mathematical model, and then resolve this model numerically. If accurate numerical solutions are necessary or desirable, this can involve solving large-scale systems of equations. One major class of solution methods is that of preconditioned iterative methods, involving preconditioners which are computationally cheap to apply while also capturing information contained in the linear system. In this article, we give a short survey of the field of preconditioning. We introduce a range of preconditioners for partial differential equations, followed by optimization problems, before discussing preconditioners constructed with less standard objectives in mind.</p>","PeriodicalId":53634,"journal":{"name":"GAMM Mitteilungen","volume":"43 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/gamm.202000015","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73816556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Iterative and doubling algorithms for Riccati-type matrix equations: A comparative introduction","authors":"Federico Poloni","doi":"10.1002/gamm.202000018","DOIUrl":"10.1002/gamm.202000018","url":null,"abstract":"<p>We review a family of algorithms for Lyapunov- and Riccati-type equations which are all related to each other by the idea of <i>doubling</i>: they construct the iterate <math>\u0000 <mrow>\u0000 <msub>\u0000 <mrow>\u0000 <mi>Q</mi>\u0000 </mrow>\u0000 <mrow>\u0000 <mi>k</mi>\u0000 </mrow>\u0000 </msub>\u0000 <mo>=</mo>\u0000 <msub>\u0000 <mrow>\u0000 <mi>X</mi>\u0000 </mrow>\u0000 <mrow>\u0000 <msup>\u0000 <mrow>\u0000 <mn>2</mn>\u0000 </mrow>\u0000 <mrow>\u0000 <mi>k</mi>\u0000 </mrow>\u0000 </msup>\u0000 </mrow>\u0000 </msub>\u0000 </mrow></math> of another naturally-arising fixed-point iteration <span>(<i>X</i><sub><i>h</i></sub>)</span> via a sort of repeated squaring. The equations we consider are Stein equations <span><i>X</i> − <i>A</i><sup>∗</sup> <i>X A</i> = <i>Q</i></span>, Lyapunov equations <span><i>A</i><sup>∗</sup> <i>X</i> + <i>X A</i> + <i>Q</i> = 0</span>, discrete-time algebraic Riccati equations <span><i>X</i> = <i>Q</i> + <i>A</i><sup>∗</sup> <i>X</i>(<i>I</i> + <i>G X</i>)<sup>−1</sup><i>A</i></span>, continuous-time algebraic Riccati equations <span><i>Q</i> + <i>A</i><sup>∗</sup> <i>X</i> + <i>X A</i> − <i>X G X</i> = 0</span>, palindromic quadratic matrix equations <span><i>A</i> + <i>Q Y</i> + <i>A</i><sup>∗</sup><i>Y</i><sup>2</sup> = 0</span>, and nonlinear matrix equations <span><i>X</i> + <i>A</i><sup>∗</sup> <i>X</i><sup>−1</sup><i>A</i> = <i>Q</i></span>. We draw comparisons among these algorithms, highlight the connections between them and to other algorithms such as subspace iteration, and discuss open issues in their theory.</p>","PeriodicalId":53634,"journal":{"name":"GAMM Mitteilungen","volume":"43 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/gamm.202000018","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88337416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Krylov methods for inverse problems: Surveying classical, and introducing new, algorithmic approaches","authors":"Silvia Gazzola, Malena Sabaté Landman","doi":"10.1002/gamm.202000017","DOIUrl":"10.1002/gamm.202000017","url":null,"abstract":"<p>Large-scale linear systems coming from suitable discretizations of linear inverse problems are challenging to solve. Indeed, since they are inherently ill-posed, appropriate regularization should be applied; since they are large-scale, well-established direct regularization methods (such as Tikhonov regularization) cannot often be straightforwardly employed, and iterative linear solvers should be exploited. Moreover, every regularization method crucially depends on the choice of one or more regularization parameters, which should be suitably tuned. The aim of this paper is twofold: (a) survey some well-established regularizing projection methods based on Krylov subspace methods (with a particular emphasis on methods based on the Golub-Kahan bidiagonalization algorithm), and the so-called hybrid approaches (which combine Tikhonov regularization and projection onto Krylov subspaces of increasing dimension); (b) introduce a new principled and adaptive algorithmic approach for regularization similar to specific instances of hybrid methods. In particular, the new strategy provides reliable parameter choice rules by leveraging the framework of bilevel optimization, and the links between Gauss quadrature and Golub-Kahan bidiagonalization. Numerical tests modeling inverse problems in imaging illustrate the performance of existing regularizing Krylov methods, and validate the new algorithms.</p>","PeriodicalId":53634,"journal":{"name":"GAMM Mitteilungen","volume":"43 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/gamm.202000017","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76182732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A survey of subspace recycling iterative methods","authors":"Kirk M. Soodhalter, Eric de Sturler, Misha E. Kilmer","doi":"10.1002/gamm.202000016","DOIUrl":"10.1002/gamm.202000016","url":null,"abstract":"<p>This survey concerns <i>subspace recycling methods</i>, a popular class of iterative methods that enable effective reuse of subspace information in order to speed up convergence and find good initial vectors over a sequence of linear systems with slowly changing coefficient matrices, multiple right-hand sides, or both. The subspace information that is recycled is usually generated during the run of an iterative method (usually a Krylov subspace method) on one or more of the systems. Following introduction of definitions and notation, we examine the history of early augmentation schemes along with deflation preconditioning schemes and their influence on the development of recycling methods. We then discuss a general residual constraint framework through which many augmented Krylov and recycling methods can both be viewed. We review several augmented and recycling methods within this framework. We then discuss some known effective strategies for choosing subspaces to recycle before taking the reader through more recent developments that have generalized recycling for (sequences of) shifted linear systems, some of them with multiple right-hand sides in mind. We round out our survey with a brief review of application areas that have seen benefit from subspace recycling methods.</p>","PeriodicalId":53634,"journal":{"name":"GAMM Mitteilungen","volume":"43 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/gamm.202000016","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76431675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Limited-memory polynomial methods for large-scale matrix functions","authors":"Stefan Güttel, Daniel Kressner, Kathryn Lund","doi":"10.1002/gamm.202000019","DOIUrl":"10.1002/gamm.202000019","url":null,"abstract":"<p>Matrix functions are a central topic of linear algebra, and problems requiring their numerical approximation appear increasingly often in scientific computing. We review various limited-memory methods for the approximation of the action of a large-scale matrix function on a vector. Emphasis is put on polynomial methods, whose memory requirements are known or prescribed a priori. Methods based on explicit polynomial approximation or interpolation, as well as restarted Arnoldi methods, are treated in detail. An overview of existing software is also given, as well as a discussion of challenging open problems.</p>","PeriodicalId":53634,"journal":{"name":"GAMM Mitteilungen","volume":"43 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78119560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A literature survey of matrix methods for data science","authors":"Martin Stoll","doi":"10.1002/gamm.202000013","DOIUrl":"10.1002/gamm.202000013","url":null,"abstract":"<p>Efficient numerical linear algebra is a core ingredient in many applications across almost all scientific and industrial disciplines. With this survey we want to illustrate that numerical linear algebra has played and is playing a crucial role in enabling and improving data science computations with many new developments being fueled by the availability of data and computing resources. We highlight the role of various different factorizations and the power of changing the representation of the data as well as discussing topics such as randomized algorithms, functions of matrices, and high-dimensional problems. We briefly touch upon the role of techniques from numerical linear algebra used within deep learning.</p>","PeriodicalId":53634,"journal":{"name":"GAMM Mitteilungen","volume":"43 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/gamm.202000013","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72476180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}