{"title":"CooLBM: A GPU-accelerated collaborative open-source reactive multi-phase/component simulation code via lattice Boltzmann method","authors":"R. Alamian , A.K. Nayak , M.S. Shadloo","doi":"10.1016/j.cpc.2025.109711","DOIUrl":"10.1016/j.cpc.2025.109711","url":null,"abstract":"<div><div>The current work presents a novel <em>COllaborative Open-source Lattice Boltzmann Method</em> framework, so-called <em>CooLBM</em>. The computational framework is developed for the simulation of single and multi-component multi-phase problems, along with a reactive interface and conjugate fluid-solid heat transfer problems. CooLBM utilizes a multi-CPU/GPU architecture to achieve high-performance computing (HPC), enabling efficient and parallelized simulations for large scale problems. The code is implemented in C++ and makes extensive use of the Standard Template Library (STL) to improve code modularity, flexibility, and re-usability. The developed framework incorporates advanced numerical methods and algorithms to accurately capture complex fluid dynamics and phase interactions. It offers a wide range of capabilities, including phase separation, interfacial tension, and mass transfer phenomena. The reactive interface simulation module enables the study of chemical reactions occurring at the fluid-fluid interface, expanding its applicability to reactive multi-phase systems. The performance and accuracy of CooLBM are demonstrated through various benchmark simulations, showcasing its ability to capture intricate fluid behaviors and interface dynamics. The modular structure of the code allows for easy customization and extension, facilitating the implementation of additional models and boundary conditions. Finally, CooLBM provides visualization tools for the analysis and interpretation of simulation results. Overall, CooLBM offers an efficient computational framework for studying complex multi-phase systems and reactive interfaces, making it a valuable tool for researchers and engineers in several fields including, but not limited to chemical engineering, materials science, and environmental engineering. CooLBM is available under open source initiatives for scientific communities in the gitlab repository: <span><span>https://gitlab.coria-cfd.fr/lbm/coolbm</span><svg><path></path></svg></span>.</div></div><div><h3>Program summary</h3><div><em>Program Title:</em> CooLBM: A GPU-Accelerated Collaborative Open-Source Reactive Multi-Phase/Component Simulation Code via Lattice Boltzmann Method</div><div><em>CPC Library link to program files:</em> <span><span>https://doi.org/10.17632/79p5z26scz.1</span><svg><path></path></svg></span></div><div><em>Developer's repository link:</em> <span><span>https://gitlab.coria-cfd.fr/lbm/coolbm</span><svg><path></path></svg></span></div><div><em>Licensing provisions:</em> GNU GPLv3</div><div><em>Programming language:</em> C++</div><div><em>Nature of problem:</em> Multi-phase and multi-component reactive flows are fundamental to numerous industrial and engineering applications, including combustion systems, energy storage, and environmental processes. 
These flows often exhibit complex interactions across multiple spatial and temporal scales, requiring numerical modeling to accurately capture the underlying physi","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"315 ","pages":"Article 109711"},"PeriodicalIF":7.2,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144253344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
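The abstract describes CooLBM as a C++/STL, multi-CPU/GPU lattice Boltzmann framework for multi-phase and reactive problems. For readers new to the method, the following NumPy sketch shows only the core collide-and-stream update that any LBM solver builds on; it is a minimal single-phase D2Q9 BGK illustration with assumed parameters (relaxation time, lattice size), not CooLBM's implementation.

```python
# Minimal single-phase D2Q9 BGK lattice Boltzmann step (NumPy sketch).
# This is NOT CooLBM's C++/multi-GPU implementation; it only illustrates the
# collide-and-stream update that an LBM solver such as CooLBM is built on.
import numpy as np

# D2Q9 velocity set and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8                      # relaxation time (assumed value)
nx, ny = 64, 64                # lattice size (assumed)

f = np.ones((9, nx, ny)) * w[:, None, None]   # start from rest (rho=1, u=0)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    feq = equilibrium(rho, ux, uy)
    f = f - (f - feq) / tau                    # BGK collision
    for i in range(9):                         # periodic streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f

for _ in range(100):
    f = lbm_step(f)
```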
{"title":"A GPU-based compact conservative characteristic finite volume parallel algorithm for advection diffusion equations","authors":"Guiyu Wang , Linjie Zhang , Shusen Xie , Dong Liang , Kai Fu","doi":"10.1016/j.cpc.2025.109693","DOIUrl":"10.1016/j.cpc.2025.109693","url":null,"abstract":"<div><div>This study introduces a novel parallel algorithm for efficiently solving two-dimensional advection-diffusion problems with constant diffusion coefficient, specifically designed for implementation on GPU. The algorithm is compact, conservative, and inherently parallel, offering second-order accuracy in time and fourth-order accuracy in space. It employs a second-order operator splitting method to decompose the two-dimensional problem into one-dimensional subproblems, significantly enhancing parallelism in computations. The convective term is addressed using the characteristic method, which ensures high accuracy in time and allows for larger time steps. The conservative interpolation technique is implemented for integration within the Lagrangian tracking cell. For the diffusion term, we average along the characteristic curves and derive the discrete fluxes that are continuous at the cell boundaries. Taking advantage of the compact scheme, only three cells are required for the unknowns to achieve spatial fourth order accuracy. The primary computational tasks are performed on the GPU, distributing the computational load evenly across multiple cores. Numerical experiments demonstrate the conservation property and convergence rates of the new algorithm and its effectiveness in solving problems with steep fronts. The results also indicate the algorithm's superior computational speed compared to traditional CPU computations.</div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"315 ","pages":"Article 109693"},"PeriodicalIF":7.2,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144242151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PINTO: Physics-informed transformer neural operator for learning generalized solutions of partial differential equations for any initial and boundary condition","authors":"Sumanth Kumar Boya, Deepak N. Subramani","doi":"10.1016/j.cpc.2025.109702","DOIUrl":"10.1016/j.cpc.2025.109702","url":null,"abstract":"<div><div>Applications in physics, engineering, mechanics, and fluid dynamics necessitate solving nonlinear partial differential equations (PDEs) with different initial and boundary conditions. Operator learning, an emerging field, solves these PDEs by employing neural networks to map the infinite-dimensional input and output function spaces. These neural operators are trained using data (observations or simulations) and PDE residuals (physics loss). A key limitation of current neural methods is the need to retrain for new initial/boundary conditions and the substantial simulation data required for training. We introduce a physics-informed transformer neural operator (named PINTO) that generalizes efficiently to new conditions, trained solely with physics loss in a simulation-free setting. Our core innovation is the development of iterative kernel integral operator units that use cross-attention to transform domain points of PDE solutions into initial/boundary condition-aware representation vectors, supporting efficient and generalizable learning. The working of PINTO is demonstrated by simulating important 1D and 2D equations used in fluid mechanics, physics and engineering applications: advection, Burgers, and steady and unsteady Navier-Stokes equations (three flow scenarios). We show that under challenging unseen conditions, the relative errors compared to analytical or numerical (finite difference and volume) solutions are low, merely 20% to 33% of those obtained by other leading physics-informed neural operator methods. Furthermore, PINTO accurately solves advection and Burgers equations at time steps not present in the training points, an ability absent for other neural operators. The code is accessible at <span><span>https://github.com/quest-lab-iisc/PINTO</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"315 ","pages":"Article 109702"},"PeriodicalIF":7.2,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144242150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semtex: Development and application of the solver methodology for incompressible flows with generalized Newtonian rheologies","authors":"H.M. Blackburn , M. Rudman , J. Singh","doi":"10.1016/j.cpc.2025.109694","DOIUrl":"10.1016/j.cpc.2025.109694","url":null,"abstract":"<div><div>The methodology for simulation of incompressible flows with generalized Newtonian viscosity models, for example shear-thinning rheologies, within the <em>Semtex</em> framework of open-source spectral-element/Fourier flow solvers [1,2] is outlined. Direction is given regarding the rheology models employed and how appropriate parameters are derived and supplied to the solver. Exponential spatial convergence of solutions is demonstrated for both Cartesian and cylindrical geometries. Other example applications deal with DNS of turbulent flows in pipes. We use <em>Semtex</em> to highlight the central importance of adequate rheology characterization for accurate simulation of turbulent flows of generalized Newtonian fluids.</div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"315 ","pages":"Article 109694"},"PeriodicalIF":7.2,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144242149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"qlbm – A quantum lattice Boltzmann software framework","authors":"Călin A. Georgescu, Merel A. Schalkers, Matthias Möller","doi":"10.1016/j.cpc.2025.109699","DOIUrl":"10.1016/j.cpc.2025.109699","url":null,"abstract":"<div><div>We present <span>qlbm</span>, a Python software package designed to facilitate the development, simulation, and analysis of Quantum Lattice Boltzmann Methods (QBMs). <span>qlbm</span> is a modular framework that introduces a quantum component abstraction hierarchy tailored to the implementation of novel QBMs. The framework interfaces with state-of-the-art quantum software infrastructure to enable efficient simulation and validation pipelines, and leverages novel execution and pre-processing techniques that significantly reduce the computational resources required to develop quantum circuits. We demonstrate the versatility of the software by showcasing multiple QBMs in 2D and 3D with complex boundary conditions, integrated within automated benchmarking utilities. Accompanying the source code are extensive test suites, thorough online documentation resources, analysis tools, visualization methods, and demos that aim to increase the accessibility of QBMs while encouraging reproducibility and collaboration.</div></div><div><h3>Program summary</h3><div><em>Program Title:</em> <span>qlbm</span></div><div><em>CPC Library link to program files:</em> <span><span>https://doi.org/10.17632/28hkvsg7p2.1</span><svg><path></path></svg></span></div><div><em>Developer's repository link:</em> <span><span>https://github.com/QCFD-Lab/qlbm</span><svg><path></path></svg></span></div><div><em>Licensing provisions:</em> MPL-2.0</div><div><em>Programming language:</em> Python3</div><div><em>Supplementary material:</em> The documentation of is available at <span><span>https://qcfd-lab.github.io/qlbm/</span><svg><path></path></svg></span>.</div><div><em>Nature of problem:</em> The advent of quantum algorithms for computational fluid dynamics brings with it challenges that are new to the established field of computational physics. These challenges include the lack of standardized implementations of the still nascent quantum methods, the intense computational demands of developing and simulating quantum algorithms on hardware available today, and the absence of tools that integrate novel developments into established infrastructure. Because of these current limitations, physicists and mathematicians expend superfluous resources on tasks that more mature computational physics branches have surmounted long ago.</div><div><em>Solution method:</em> QLBM is a software package that provides an end-to-end development environment for quantum lattice Boltzmann methods. The modular design and flexible quantum circuit library provide a base for extending and generalizing quantum algorithms. Performance enhancements exploit the paradigm of quantum computing simulations to accelerate the speed at which researchers can verify the validity of their methods. Its integration with state-of-the-art quantum computing software and visualization tools increases the algorithms' accessibility. 
These features allow QLBM to effectively generate, simulate, and analyze quantum circuits for 2D","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"315 ","pages":"Article 109699"},"PeriodicalIF":7.2,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144242153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
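One reason lattice Boltzmann methods map naturally onto quantum circuits, implicit in this abstract, is that the streaming step is a permutation of lattice-site amplitudes and is therefore unitary. The NumPy sketch below only demonstrates that fact on a register of 2^n sites; it does not use or reproduce the qlbm API or its quantum-circuit construction.

```python
# Why streaming suits quantum hardware: on a lattice with 2**n sites, the LBM
# streaming step is a cyclic permutation of the site-amplitude vector, i.e. a
# unitary operator. This NumPy sketch only illustrates that fact; it does not
# use or reproduce the qlbm API or its circuit construction.
import numpy as np

n_qubits = 4
N = 2 ** n_qubits                       # 16 lattice sites encoded in 4 qubits

# Amplitude-encode a normalized occupation profile over the sites
psi = np.zeros(N)
psi[3] = 1.0                            # a single occupied site

# Streaming by one site = cyclic shift = permutation matrix (unitary)
S = np.roll(np.eye(N), 1, axis=0)
assert np.allclose(S @ S.T, np.eye(N))  # S is unitary (an orthogonal permutation)

psi_streamed = S @ psi                  # occupation moved from site 3 to site 4
print(np.argmax(psi_streamed))          # -> 4
```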
{"title":"Parallel diffusion operator for magnetized plasmas with improved spectral fidelity","authors":"Federico D. Halpern, Min-Gu Yoo, Brendan C. Lyons, Juan Diego Colmenares","doi":"10.1016/j.cpc.2025.109696","DOIUrl":"10.1016/j.cpc.2025.109696","url":null,"abstract":"<div><div>Diffusive transport processes in magnetized plasmas are highly anisotropic, with fast parallel transport along the magnetic field lines sometimes faster than perpendicular transport by orders of magnitude. This constitutes a major challenge for describing non-grid-aligned magnetic structures in Eulerian (grid-based) simulations. The present paper describes and validates a new method for parallel diffusion in magnetized plasmas based on the anti-symmetry representation [Halpern and Waltz, Phys. Plasmas 25, 060703 (2018)]. In the anti-symmetry formalism, diffusion manifests as a flow operator involving the logarithmic derivative of the transported quantity. Qualitative plane wave analysis shows that the new operator naturally yields better discrete spectral resolution compared to its conventional counterpart. Numerical simulations comparing the new method against existing finite difference methods are carried out, showing significant improvement. In particular, we find that combining anti-symmetry with finite differences in diagonally staggered grids essentially eliminates the so-called “artificial numerical diffusion” that affects conventional finite difference and finite volume methods.</div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"315 ","pages":"Article 109696"},"PeriodicalIF":7.2,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144242256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Polar Shift: Charge carrier polarization energies in organic electronic materials","authors":"K. Kaklamanis, D.G. Papageorgiou","doi":"10.1016/j.cpc.2025.109700","DOIUrl":"10.1016/j.cpc.2025.109700","url":null,"abstract":"<div><div>Electronic polarization of charge carriers in the solid state plays an important role in organic electronics, as it alters the gas phase energy levels associated with phenomena such as charge transport, molecular doping, charge injection and charge separation at interfaces. In this article we present P<span>olar</span> S<span>hift</span>, a software package for calculating the polarization energy of an electron or hole charge carrier in organic electronic materials. The software uses an atomistic approach employing the microelectrostatics model. Molecular charge distributions are represented by atomic point charges, while the molecular polarizability is divided into distributed atomic contributions. The electrostatic and inductive components of the polarization energy are calculated separately. For the electrostatic interactions we propose an efficient cutoff–based scheme that allows fast yet accurate evaluation of the relevant energy. For the induction part we use a self–consistent iterative method based on modified field interaction tensors in the framework of the Thole model. P<span>olar</span> S<span>hift</span> can be applied to ideal molecular crystals, thermally disordered crystalline packings or completely amorphous materials. Many additional features are implemented such as calculation of the molecular polarizability tensor, fitting of molecular polarizabilities to reference values, different schemes for computing induction energies, and extrapolation of induction energies to the bulk limit. Special attention has been paid to the interoperability with other software packages, so P<span>olar</span> S<span>hift</span> can obtain the required input from various widely used file types such as pdb, mol2 or even binary dcd files. The software is parallelized using the MPI standard thus exploiting a wide range of shared and distributed memory computer architectures. 
P<span>olar</span> S<span>hift</span> is applied to eight different test cases of prototype organic electronics materials demonstrating its capabilities, and the results are compared with existing literature.</div></div><div><h3>Program summary</h3><div><em>Program Title:</em> P<span>olar</span> S<span>hift</span></div><div><em>CPC Library link to program files:</em> <span><span><span>https://doi.org/10.17632/26ck9stzh9.1</span></span><svg><path></path></svg></span></div><div><em>Developer's repository link:</em> <span><span><span>http://cmsl.materials.uoi.gr/polar-shift</span></span><svg><path></path></svg></span></div><div><em>Licensing provisions:</em> GPLv2</div><div><em>Programming language:</em> Fortran 2008</div><div><em>Supplementary material:</em> User manual (45 pages), 22 annotated examples with reference output, input and output files for the eight test cases described in the paper.</div><div><em>Nature of problem:</em> Electronic polarization of charge carriers in organic electronic materials is responsible for altering key quantities from their gas phase coun","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"315 ","pages":"Article 109700"},"PeriodicalIF":7.2,"publicationDate":"2025-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144242152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
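The induction part of the microelectrostatics model described here is a self-consistent calculation of induced atomic dipoles in the field of the permanent charges. The sketch below implements only the bare point-charge/point-polarizability iteration (no Thole damping, no cutoff scheme, no distributed-polarizability fitting, no bulk extrapolation), so it illustrates the self-consistency loop and the induction-energy formula, not the Polar Shift implementation.

```python
# Self-consistent induced-dipole iteration of the kind used in microelectrostatics
# models. Bare-bones sketch (point charges + isotropic atomic polarizabilities,
# no Thole damping, no cutoffs), not the Polar Shift code: the paper's
# implementation uses modified (Thole) interaction tensors, distributed
# polarizabilities, and cutoff/extrapolation schemes.
import numpy as np

def dipole_tensor(r):
    """3x3 dipole field tensor T = (3 r r^T - r^2 I) / r^5 for separation r."""
    d = np.linalg.norm(r)
    return (3.0 * np.outer(r, r) - d**2 * np.eye(3)) / d**5

def induction_energy(pos, q, alpha, n_iter=100, tol=1e-10):
    n = len(q)
    # Permanent field at each site from all other point charges: E = q r / r^3
    E0 = np.zeros((n, 3))
    for i in range(n):
        for j in range(n):
            if i != j:
                r = pos[i] - pos[j]
                E0[i] += q[j] * r / np.linalg.norm(r)**3
    mu = alpha[:, None] * E0                    # zeroth-order induced dipoles
    for _ in range(n_iter):
        E = E0.copy()
        for i in range(n):
            for j in range(n):
                if i != j:
                    E[i] += dipole_tensor(pos[i] - pos[j]) @ mu[j]
        mu_new = alpha[:, None] * E
        if np.max(np.abs(mu_new - mu)) < tol:
            mu = mu_new
            break
        mu = mu_new
    return -0.5 * np.sum(mu * E0)               # induction (polarization) energy

# Toy system: an excess charge polarizing two neutral polarizable sites
pos   = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 3.0, 0.0]])
q     = np.array([1.0, 0.0, 0.0])
alpha = np.array([0.0, 1.5, 1.5])
print(induction_energy(pos, q, alpha))
```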
{"title":"Massive-scale simulations of 2D Ising and Blume-Capel models on rack-scale multi-GPU systems","authors":"Mauro Bisson , Massimo Bernaschi , Massimiliano Fatica , Nikolaos G. Fytas , Isidoro González-Adalid Pemartín , Víctor Martín-Mayor , Alexandros Vasilopoulos","doi":"10.1016/j.cpc.2025.109690","DOIUrl":"10.1016/j.cpc.2025.109690","url":null,"abstract":"<div><div>We present high-performance implementations of the two-dimensional Ising and Blume-Capel models for large-scale, multi-GPU simulations. Our approach takes full advantage of the NVIDIA GB200 NVL72 system, which features up to 72 GPUs interconnected via high-bandwidth NVLink, enabling direct GPU-to-GPU memory access across multiple nodes. By utilizing Fabric Memory and an optimized Monte Carlo kernel for the Ising model, our implementation supports simulations of systems with linear sizes up to <span><math><mi>L</mi><mo>=</mo><msup><mrow><mn>2</mn></mrow><mrow><mn>23</mn></mrow></msup></math></span>, corresponding to approximately 70 trillion spins. This allows for a peak processing rate of nearly <span><math><mn>1.15</mn><mo>×</mo><msup><mrow><mn>10</mn></mrow><mrow><mn>5</mn></mrow></msup></math></span> lattice updates per nanosecond—setting a new performance benchmark for Ising model simulations. Additionally, we introduce a custom protocol for computing correlation functions, which strikes an optimal balance between computational efficiency and statistical accuracy. This protocol enables large-scale simulations without incurring prohibitive runtime costs. Benchmark results show near-perfect strong and weak scaling up to 64 GPUs, demonstrating the effectiveness of our approach for large-scale statistical physics simulations.</div></div><div><h3>Program summary</h3><div><em>Program title:</em> cuIsing (optimized)</div><div><em>CPC Library link to program files:</em> <span><span>https://doi.org/10.17632/ppkwwmcpwg.1</span><svg><path></path></svg></span></div><div><em>Licensing provisions:</em> MIT license</div><div><em>Programming languages:</em> CUDA C</div><div><em>Nature of problem:</em> Comparative studies of the critical dynamics of the Ising and Blume-Capel models are essential for gaining deeper insights into phase transitions, enhancing computational methods, and developing more accurate models for complex physical systems. To minimize finite-size effects and optimize the statistical quality of simulations, large-scale simulations over extended time scales are necessary. To support this, we provide two high-performance codes capable of running simulations with up to 70 trillion spins.</div><div><em>Solution method:</em> We present updated versions of our multi-GPU code for Monte Carlo simulations, implementing both the Ising and Blume-Capel models. These codes take full advantage of multi-node NVLink systems, such as the NVIDIA GB200 NVL72, enabling scaling across GPUs connected across different nodes within the same NVLink domain. Communication between GPUs is handled seamlessly via Fabric Memory–a novel memory allocation technique that facilitates direct memory access between GPUs within the same domain, eliminating the need for explicit data transfers. 
By employing highly optimized CUDA kernels for the Metropolis algorithm and a custom protocol that reduces the computational overhead of the correlation function, our implementa","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"315 ","pages":"Article 109690"},"PeriodicalIF":7.2,"publicationDate":"2025-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144230605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
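The quoted system size is internally consistent: a linear size L = 2^23 gives L^2 = 2^46 ≈ 7.04 × 10^13 sites, i.e. roughly 70 trillion spins. The NumPy sketch below shows the checkerboard Metropolis update whose independence within each color class is what the paper's CUDA kernels exploit; it is a small-lattice illustration, not the cuIsing multi-GPU code, and the lattice size and coupling are assumed.

```python
# Plain NumPy checkerboard Metropolis sweep for the 2D Ising model -- the same
# update pattern the paper's CUDA kernels parallelize -- not the cuIsing
# multi-GPU implementation itself.
import numpy as np

rng = np.random.default_rng(1)
L, beta = 256, 0.44                    # small lattice, near-critical coupling
spins = rng.choice([-1, 1], size=(L, L))

# Checkerboard masks: same-color sites share no bonds, so they can be updated
# simultaneously (this independence is what makes the GPU version possible).
ii, jj = np.indices((L, L))
color = (ii + jj) % 2

def metropolis_sweep(spins):
    for c in (0, 1):
        nn = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
              np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nn                          # energy cost of flipping
        accept = (rng.random((L, L)) < np.exp(-beta * dE)) & (color == c)
        spins = np.where(accept, -spins, spins)
    return spins

for _ in range(200):
    spins = metropolis_sweep(spins)
print("magnetization per spin:", spins.mean())
```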
{"title":"Monte Carlo phase space integration of multiparticle cross sections with carlomat_4.5","authors":"Karol Kołodziej","doi":"10.1016/j.cpc.2025.109697","DOIUrl":"10.1016/j.cpc.2025.109697","url":null,"abstract":"<div><div>Multidimensional phase space integrals must be calculated in order to obtain predictions for total or differential cross sections, or to simulate unweighted events of multiparticle reactions. The corresponding matrix elements, already in the leading order, receive contributions typically from dozens of thousands of the Feynman diagrams, many of which often involve strong peaks due to denominators of some Feynman propagators approaching their minima. As the number of peaks exceeds by far the number of integration variables, such integrals can practically be performed within the multichannel Monte Carlo approach, with different phase space parameterizations, each designed to smooth possibly a few peaks at a time. This obviously requires a lot different phase space parameterizations which, if possible, should be generated and combined in a single multichannel Monte Carlo procedure in a fully automatic way. A few different approaches to the calculation of the multidimensional phase space integrals have been incorporated in version 4.5 of the multipurpose Monte Carlo program <span>carlomat</span>. The present work illustrates how <span>carlomat_4.5</span> can facilitate the challenging task of calculating the multidimensional phase space integrals.</div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"315 ","pages":"Article 109697"},"PeriodicalIF":7.2,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144204338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hyper Boris integrators for kinetic plasma simulations","authors":"Seiji Zenitani , Tsunehiko N. Kato","doi":"10.1016/j.cpc.2025.109695","DOIUrl":"10.1016/j.cpc.2025.109695","url":null,"abstract":"<div><div>We propose a family of numerical solvers for the nonrelativistic Newton–Lorentz equation in kinetic plasma simulations. The new solvers extend the standard 4-step Boris procedure, which has second-order accuracy in time, in three ways. First, we repeat the 4-step procedure multiple times, using an <em>n</em>-times smaller timestep (<span><math><mi>Δ</mi><mi>t</mi><mo>/</mo><mi>n</mi></math></span>). We derive a formula for the arbitrary subcycling number <em>n</em>, so that we obtain the result without repeating the same calculations. Second, prior to the 4-step procedure, we apply Boris-type gyrophase corrections to the electromagnetic field. In addition to a well-known correction to the magnetic field, we correct the electric field in an anisotropic manner to achieve higher-order (<span><math><mi>N</mi><mo>=</mo><mn>2</mn><mo>,</mo><mn>4</mn><mo>,</mo><mn>6</mn><mo>…</mo></math></span>th order) accuracy. Third, combining these two methods, we propose a family of high-accuracy particle solvers, <em>the hyper Boris solvers</em>, which have two hyperparameters of the subcycling number <em>n</em> and the order of accuracy, <em>N</em>. The <em>n</em>-cycle <em>N</em>th-order solver gives a numerical error of <span><math><mo>∼</mo><msup><mrow><mo>(</mo><mi>Δ</mi><mi>t</mi><mo>/</mo><mi>n</mi><mo>)</mo></mrow><mrow><mi>N</mi></mrow></msup></math></span> at affordable computational cost.</div></div>","PeriodicalId":285,"journal":{"name":"Computer Physics Communications","volume":"315 ","pages":"Article 109695"},"PeriodicalIF":7.2,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144212973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}