On the parallelization of stellar evolution codes

Impact Factor: 16.281
David Martin, Jordi José, Richard Longland
DOI: 10.1186/s40668-018-0025-5
Journal: Computational Astrophysics and Cosmology, vol. 5, no. 1
Published: 2018-11-16 (Journal Article)
Full text: https://link.springer.com/article/10.1186/s40668-018-0025-5
Citations: 2

Abstract


On the parallelization of stellar evolution codes

Multidimensional nucleosynthesis studies with hundreds of nuclei linked through thousands of nuclear processes are still computationally prohibitive. To date, most nucleosynthesis studies rely either on hydrostatic/hydrodynamic simulations in spherical symmetry, or on post-processing simulations using temperature and density versus time profiles directly linked to huge nuclear reaction networks.

Parallel computing has long been regarded as the main enabling factor for computationally intensive simulations. This paper explores the pros and cons of parallelizing stellar codes, providing recommendations on when and how parallelization may help improve the performance of a code for astrophysical applications.

We report on different parallelization strategies successfully applied to the spherically symmetric, Lagrangian, implicit hydrodynamic code SHIVA, extensively used in the modeling of classical novae and type I X-ray bursts.

When only the matrix build-up and inversion processes in the nucleosynthesis subroutines are parallelized (a suitable approach for post-processing calculations), the large amount of time spent on communication between cores, together with the small problem size (limited by the number of isotopes in the nuclear network), results in much worse performance for the parallel application than for the 1-core, sequential version of the code. Parallelization of the matrix build-up and inversion processes in the nucleosynthesis subroutines is therefore not recommended unless the number of isotopes adopted largely exceeds 10,000.
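The communication-versus-computation trade-off described above can be sketched with a toy cost model (an illustration only, not a measurement from SHIVA; the flop rate and communication latency below are invented constants): one implicit step requires an O(n³) dense solve that splits across p cores, while communication overhead grows with the core count.

```python
# Toy cost model: parallel speedup of one dense-matrix solve of size n
# on p cores, with communication overhead growing with the core count.
# All constants are illustrative assumptions, not measurements from SHIVA.

def step_time(n, p, flop_rate=1e9, comm_latency=5e-3):
    """Seconds for one implicit step: O(n^3) LU work split over p cores,
    plus data exchange between the p cores."""
    compute = (2.0 / 3.0) * n**3 / flop_rate / p   # LU factorization cost
    comm = comm_latency * (p - 1)                   # inter-core communication
    return compute + comm

def speedup(n, p):
    """Sequential time divided by parallel time."""
    return step_time(n, 1) / step_time(n, p)

# A ~300-isotope network: the matrix is tiny, communication dominates,
# and the "parallel" version is slower than the sequential one.
print(f"324 isotopes, 16 cores:   speedup {speedup(324, 16):.2f}")
# A hypothetical very large network: the cubic solve dominates
# and parallelizing the matrix work finally pays off.
print(f"20000 isotopes, 16 cores: speedup {speedup(20000, 16):.2f}")
```

Under this toy model the 324-isotope solve runs slower in parallel than sequentially, while for a network of tens of thousands of isotopes the O(n³) solve dominates, consistent in spirit with the >10,000-isotope threshold quoted above.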

In sharp contrast, speed-up factors of 26 and 35 have been obtained with a parallelized version of SHIVA in a 200-shell simulation of a type I X-ray burst carried out with two nuclear reaction networks: a reduced one, consisting of 324 isotopes and 1392 reactions, and a more extended network with 606 nuclides and 3551 nuclear interactions. Maximum speed-ups of ~41 (324-isotope network) and ~85 (606-isotope network) are also predicted for 200 cores, stressing that the number of shells in the computational domain constitutes an effective upper limit on the number of cores that can be used in a parallel application.
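One way to read the predicted maxima (a sketch assuming Amdahl's law, which may not be the scaling model actually used in the paper) is to ask what serial fraction of the code is consistent with speed-ups of ~41 and ~85 at 200 cores, one core per shell:

```python
# Amdahl's-law sketch (an assumption, not the paper's own model):
# infer the serial fraction consistent with the reported maximum
# speedups at 200 cores, where the 200 shells cap the useful core count.

def amdahl_speedup(serial_frac, cores):
    """Predicted speedup for a code with the given serial fraction."""
    return 1.0 / (serial_frac + (1.0 - serial_frac) / cores)

def serial_fraction(speedup, cores):
    """Invert Amdahl's law: the serial fraction implying this speedup."""
    return (1.0 / speedup - 1.0 / cores) / (1.0 - 1.0 / cores)

shells = 200  # one shell per core is the effective upper limit
for network, max_speedup in [("324-isotope", 41.0), ("606-isotope", 85.0)]:
    s = serial_fraction(max_speedup, shells)
    print(f"{network}: serial fraction ~{100 * s:.1f}% "
          f"-> speedup at {shells} cores: {amdahl_speedup(s, shells):.0f}")
```

The larger network yields the smaller implied serial fraction, which fits the abstract's picture: more isotopes mean more parallelizable work per shell, so the parallel efficiency improves.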

About the journal: Computational Astrophysics and Cosmology (CompAC) is now closed and no longer accepting submissions. Springer maintains an archive of all articles published in CompAC, accessible through SpringerLink.