{"title":"Implicit factorized transformer approach to fast prediction of turbulent channel flows","authors":"Huiyu Yang, Yunpeng Wang, Jianchun Wang","doi":"10.1007/s11433-024-2666-9","DOIUrl":null,"url":null,"abstract":"<div><p>Transformer neural operators have recently become an effective approach for surrogate modeling of systems governed by partial differential equations (PDEs). In this paper, we introduce a modified implicit factorized transformer (IFactFormer-m) model, replacing the original chained factorized attention with parallel factorized attention. The IFactFormer-m model successfully performs long-term predictions for turbulent channel flow. In contrast, the original IFactFormer (IFactFormer-o), Fourier neural operator (FNO), and implicit Fourier neural operator (IFNO) exhibit a poor performance. Turbulent channel flows are simulated by direct numerical simulation using fine grids at friction Reynolds numbers <i>Re</i><sub><i>τ</i></sub> ≈ 180, 395, 590, and filtered to coarse grids for training neural operator. The neural operator takes the current flow field as input and predicts the flow field at the next time step, and long-term prediction is achieved in the posterior through an autoregressive approach. The results show that IFactFormer-m, compared with other neural operators and the traditional large eddy simulation (LES) methods, including the dynamic Smagorinsky model (DSM) and the wall-adapted local eddy-viscosity (WALE) model, reduces prediction errors in the short term, and achieves stable and accurate long-term prediction of various statistical properties and flow structures, including the energy spectrum, mean streamwise velocity, root mean square (RMS) values of fluctuating velocities, Reynolds shear stress, and spatial structures of instantaneous velocity. Moreover, the trained IFactFormer-m is much faster than traditional LES methods. By analyzing the attention kernels, we elucidate why IFactFormer-m converges faster and achieves a stable and accurate long-term prediction compared with IFactFormer-o. Code and data are available at: https://github.com/huiyu-2002/IFactFormer-m.</p></div>","PeriodicalId":774,"journal":{"name":"Science China Physics, Mechanics & Astronomy","volume":"69 1","pages":""},"PeriodicalIF":7.5000,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Science China Physics, Mechanics & Astronomy","FirstCategoryId":"101","ListUrlMain":"https://link.springer.com/article/10.1007/s11433-024-2666-9","RegionNum":1,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PHYSICS, MULTIDISCIPLINARY","Score":null,"Total":0}
Abstract
Transformer neural operators have recently become an effective approach for surrogate modeling of systems governed by partial differential equations (PDEs). In this paper, we introduce a modified implicit factorized transformer (IFactFormer-m) model, replacing the original chained factorized attention with parallel factorized attention. The IFactFormer-m model successfully performs long-term predictions for turbulent channel flow, whereas the original IFactFormer (IFactFormer-o), the Fourier neural operator (FNO), and the implicit Fourier neural operator (IFNO) perform poorly. Turbulent channel flows are simulated by direct numerical simulation on fine grids at friction Reynolds numbers Reτ ≈ 180, 395, and 590, and filtered to coarse grids for training the neural operators. The neural operator takes the current flow field as input and predicts the flow field at the next time step; long-term prediction is achieved a posteriori through an autoregressive approach. The results show that IFactFormer-m, compared with other neural operators and traditional large eddy simulation (LES) methods, including the dynamic Smagorinsky model (DSM) and the wall-adapting local eddy-viscosity (WALE) model, reduces short-term prediction errors and achieves stable, accurate long-term prediction of various statistical properties and flow structures, including the energy spectrum, mean streamwise velocity, root mean square (RMS) values of fluctuating velocities, Reynolds shear stress, and spatial structures of instantaneous velocity. Moreover, the trained IFactFormer-m is much faster than traditional LES methods. By analyzing the attention kernels, we elucidate why IFactFormer-m converges faster and achieves stable and accurate long-term prediction compared with IFactFormer-o. Code and data are available at: https://github.com/huiyu-2002/IFactFormer-m.
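To make the two design choices in the abstract concrete, here is a minimal PyTorch sketch (not the authors' implementation; the actual code is in the linked repository). It contrasts chained factorized attention, where per-axis attentions are composed sequentially, with parallel factorized attention, where all axes attend to the same input and their outputs are combined, and it shows the autoregressive rollout used for long-term prediction. All class names, tensor shapes, and the residual-sum combination are illustrative assumptions.

```python
# Hedged sketch of chained vs. parallel factorized attention and autoregressive
# rollout. Names, shapes, and the residual-sum combination are assumptions for
# illustration, not the IFactFormer-m reference implementation.
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    """Self-attention applied along a single spatial axis of a 3-D field."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, u, axis):
        # u: (batch, nx, ny, nz, dim); treat the chosen axis as the sequence
        u = u.movedim(axis, 1)
        b, n, c = u.shape[0], u.shape[1], u.shape[-1]
        rest = u.shape[2:-1]
        # fold the remaining spatial axes into the batch dimension
        seq = u.reshape(b, n, -1, c).permute(0, 2, 1, 3).reshape(-1, n, c)
        out, _ = self.attn(seq, seq, seq)
        out = out.reshape(b, -1, n, c).permute(0, 2, 1, 3).reshape(b, n, *rest, c)
        return out.movedim(1, axis)

class ChainedFactorizedBlock(nn.Module):
    """IFactFormer-o style (assumed): each axis attention feeds the next."""
    def __init__(self, dim):
        super().__init__()
        self.axial = nn.ModuleList(AxialAttention(dim) for _ in range(3))

    def forward(self, u):
        for axis, attn in enumerate(self.axial, start=1):
            u = u + attn(u, axis)  # sequential composition along x, then y, then z
        return u

class ParallelFactorizedBlock(nn.Module):
    """IFactFormer-m style (assumed): per-axis attentions share one input."""
    def __init__(self, dim):
        super().__init__()
        self.axial = nn.ModuleList(AxialAttention(dim) for _ in range(3))

    def forward(self, u):
        # all three attentions see the same input; their outputs are summed
        return u + sum(attn(u, axis) for axis, attn in enumerate(self.axial, start=1))

@torch.no_grad()
def rollout(model, u0, steps):
    """Autoregressive long-term prediction: feed each output back as input."""
    u, trajectory = u0, []
    for _ in range(steps):
        u = model(u)  # one time step per forward pass
        trajectory.append(u)
    return torch.stack(trajectory)

# toy usage: a coarse-grid snapshot embedded into 32 channels
u0 = torch.randn(1, 16, 16, 16, 32)  # (batch, nx, ny, nz, dim)
print(rollout(ParallelFactorizedBlock(32), u0, steps=5).shape)
```

One plausible intuition, which the paper's attention-kernel analysis would confirm or refine, is that the parallel branches read a common input rather than compounding three attention maps in sequence, which is consistent with the faster convergence and more stable rollout reported for IFactFormer-m.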
Journal introduction:
Science China Physics, Mechanics & Astronomy, an academic journal cosponsored by the Chinese Academy of Sciences and the National Natural Science Foundation of China, and published by Science China Press, is committed to publishing high-quality, original results in both basic and applied research.
Science China Physics, Mechanics & Astronomy, is published in both print and electronic forms. It is indexed by Science Citation Index.
Categories of articles:
Reviews summarize representative results and achievements in a particular topic or area, comment on the current state of research, and advise on future research directions. The authors' own opinions and related discussion are requested.
Research papers report on important original results in all areas of physics, mechanics and astronomy.
Brief reports present, in a timely manner, short accounts of the latest important results.