{"title":"模算子叠加(MOS):一种物理引导的机器学习框架,用于解决计算流体动力学中的维数诅咒和多尺度挑战","authors":"Kai Liu , S. Balachandar , Haochen Li","doi":"10.1016/j.jcp.2025.114435","DOIUrl":null,"url":null,"abstract":"<div><div>We introduce Modular Operator Superposition (MOS), a physics-guided and AI-augmented framework for efficient, scalable, and generalizable flow field modeling in high-dimensional and multiscale fluid systems. Rather than globally resolving flow fields via mesh-based discretization, MOS decomposes the system into physically meaningful flow primitives, each represented by a reusable modular operator. These operators are trained offline using a parameterized physics-informed neural network (P-PINN) in a single pre-processing step, and later composed through a physics-guided superposition strategy to approximate the full system-level mapping. The core advantage of MOS lies in its modularization strategy. By learning only small-scale flow primitives offline, MOS reduces the training cost to a fixed, minimal investment independent of system-level complexity. In the online stage, MOS dynamically solves for primitive-level interactions for any specific configuration of a system, then reconstructs the global flow field through the superposition of modular outputs. This two-stage online process, comprising both solving and inference, enables scalable and generalizable predictions. As a result, MOS addresses the curse of dimensionality by reducing high-dimensional systems to tractable compositions of modular operators and overcomes multiscale challenges through a scale-adaptive operator assembly that flexibly resolves flow features with minimal overhead. We demonstrate MOS for static and dynamic arrays of up to <span><math><mrow><mn>15</mn><mo>,</mo><mn>000</mn></mrow></math></span> cylinders in a channel cross-flow (corresponding to roughly <span><math><msup><mn>10</mn><mn>5</mn></msup></math></span> input parameters). 
All of these configurations are solved using a shared single-cylinder cross-flow modular operator, trained offline in 30 hours using a data-free, physics-informed machine learning strategy. In the online stage, MOS achieves end-to-end flow field prediction at 3 to 5 orders of magnitude speedup over conventional numerical solvers, while maintaining high fidelity (<span><math><mrow><msup><mi>R</mi><mn>2</mn></msup><mo>></mo><mn>0.85</mn></mrow></math></span> for all cases). Moreover, the MOS solution format requires 3 to 5 orders of magnitude lower memory usage than conventional numerical outputs. Once solved, the solution can be queried in real-time to infer flow variables at arbitrary spatial resolutions or scattered points, enabling flexible and efficient visualization across scales. Additional tests indicate that MOS remains robust to polydispersity and translational/rotational motion of the cylinders.</div></div>","PeriodicalId":352,"journal":{"name":"Journal of Computational Physics","volume":"544 ","pages":"Article 114435"},"PeriodicalIF":3.8000,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Modular operator superposition (MOS): A physics-guided machine learning framework for addressing the curse of dimensionality and multiscale challenges in computational fluid dynamics\",\"authors\":\"Kai Liu , S. Balachandar , Haochen Li\",\"doi\":\"10.1016/j.jcp.2025.114435\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>We introduce Modular Operator Superposition (MOS), a physics-guided and AI-augmented framework for efficient, scalable, and generalizable flow field modeling in high-dimensional and multiscale fluid systems. Rather than globally resolving flow fields via mesh-based discretization, MOS decomposes the system into physically meaningful flow primitives, each represented by a reusable modular operator. 
These operators are trained offline using a parameterized physics-informed neural network (P-PINN) in a single pre-processing step, and later composed through a physics-guided superposition strategy to approximate the full system-level mapping. The core advantage of MOS lies in its modularization strategy. By learning only small-scale flow primitives offline, MOS reduces the training cost to a fixed, minimal investment independent of system-level complexity. In the online stage, MOS dynamically solves for primitive-level interactions for any specific configuration of a system, then reconstructs the global flow field through the superposition of modular outputs. This two-stage online process, comprising both solving and inference, enables scalable and generalizable predictions. As a result, MOS addresses the curse of dimensionality by reducing high-dimensional systems to tractable compositions of modular operators and overcomes multiscale challenges through a scale-adaptive operator assembly that flexibly resolves flow features with minimal overhead. We demonstrate MOS for static and dynamic arrays of up to <span><math><mrow><mn>15</mn><mo>,</mo><mn>000</mn></mrow></math></span> cylinders in a channel cross-flow (corresponding to roughly <span><math><msup><mn>10</mn><mn>5</mn></msup></math></span> input parameters). All of these configurations are solved using a shared single-cylinder cross-flow modular operator, trained offline in 30 hours using a data-free, physics-informed machine learning strategy. In the online stage, MOS achieves end-to-end flow field prediction at 3 to 5 orders of magnitude speedup over conventional numerical solvers, while maintaining high fidelity (<span><math><mrow><msup><mi>R</mi><mn>2</mn></msup><mo>></mo><mn>0.85</mn></mrow></math></span> for all cases). Moreover, the MOS solution format requires 3 to 5 orders of magnitude lower memory usage than conventional numerical outputs. 
Once solved, the solution can be queried in real-time to infer flow variables at arbitrary spatial resolutions or scattered points, enabling flexible and efficient visualization across scales. Additional tests indicate that MOS remains robust to polydispersity and translational/rotational motion of the cylinders.</div></div>\",\"PeriodicalId\":352,\"journal\":{\"name\":\"Journal of Computational Physics\",\"volume\":\"544 \",\"pages\":\"Article 114435\"},\"PeriodicalIF\":3.8000,\"publicationDate\":\"2025-10-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Computational Physics\",\"FirstCategoryId\":\"101\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S002199912500717X\",\"RegionNum\":2,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Computational Physics","FirstCategoryId":"101","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S002199912500717X","RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
About the journal:
Journal of Computational Physics thoroughly treats the computational aspects of physical problems, presenting techniques for the numerical solution of mathematical equations arising in all areas of physics. The journal seeks to emphasize methods that cross disciplinary boundaries.
The Journal of Computational Physics also publishes short notes of 4 pages or less (including figures, tables, and references but excluding title pages). Letters to the Editor commenting on articles already published in this Journal will also be considered. Neither notes nor letters should have an abstract.