PKDGRAV3: beyond trillion particle cosmological simulations for the next era of galaxy surveys

Impact Factor: 16.281
Douglas Potter, Joachim Stadel, Romain Teyssier
{"title":"PKDGRAV3:超越万亿粒子的宇宙学模拟,为下一个星系调查时代做准备","authors":"Douglas Potter,&nbsp;Joachim Stadel,&nbsp;Romain Teyssier","doi":"10.1186/s40668-017-0021-1","DOIUrl":null,"url":null,"abstract":"<p>We report on the successful completion of a 2 trillion particle cosmological simulation to <span>\\(z=0\\)</span> run on the Piz Daint supercomputer (CSCS, Switzerland), using 4000+ GPU nodes for a little less than 80?h of wall-clock time or 350,000 node hours. Using multiple benchmarks and performance measurements on the US Oak Ridge National Laboratory Titan supercomputer, we demonstrate that our code PKDGRAV3, delivers, to our knowledge, the fastest time-to-solution for large-scale cosmological <i>N</i>-body simulations. This was made possible by using the Fast Multipole Method in conjunction with individual and adaptive particle time steps, both deployed efficiently (and for the first time) on supercomputers with GPU-accelerated nodes. The very low memory footprint of PKDGRAV3 allowed us to run the first ever benchmark with 8 trillion particles on Titan, and to achieve perfect scaling up to 18,000 nodes and a peak performance of 10 Pflops.</p>","PeriodicalId":523,"journal":{"name":"Computational Astrophysics and Cosmology","volume":null,"pages":null},"PeriodicalIF":16.2810,"publicationDate":"2017-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s40668-017-0021-1","citationCount":"149","resultStr":"{\"title\":\"PKDGRAV3: beyond trillion particle cosmological simulations for the next era of galaxy surveys\",\"authors\":\"Douglas Potter,&nbsp;Joachim Stadel,&nbsp;Romain Teyssier\",\"doi\":\"10.1186/s40668-017-0021-1\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>We report on the successful completion of a 2 trillion particle cosmological simulation to <span>\\\\(z=0\\\\)</span> run on the Piz Daint supercomputer (CSCS, Switzerland), using 4000+ GPU nodes for a little less than 80?h of wall-clock time or 350,000 node hours. Using multiple benchmarks and performance measurements on the US Oak Ridge National Laboratory Titan supercomputer, we demonstrate that our code PKDGRAV3, delivers, to our knowledge, the fastest time-to-solution for large-scale cosmological <i>N</i>-body simulations. This was made possible by using the Fast Multipole Method in conjunction with individual and adaptive particle time steps, both deployed efficiently (and for the first time) on supercomputers with GPU-accelerated nodes. 
The very low memory footprint of PKDGRAV3 allowed us to run the first ever benchmark with 8 trillion particles on Titan, and to achieve perfect scaling up to 18,000 nodes and a peak performance of 10 Pflops.</p>\",\"PeriodicalId\":523,\"journal\":{\"name\":\"Computational Astrophysics and Cosmology\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":16.2810,\"publicationDate\":\"2017-05-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1186/s40668-017-0021-1\",\"citationCount\":\"149\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computational Astrophysics and Cosmology\",\"FirstCategoryId\":\"4\",\"ListUrlMain\":\"https://link.springer.com/article/10.1186/s40668-017-0021-1\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computational Astrophysics and Cosmology","FirstCategoryId":"4","ListUrlMain":"https://link.springer.com/article/10.1186/s40668-017-0021-1","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 149

Abstract



We report on the successful completion of a 2 trillion particle cosmological simulation to \(z=0\) run on the Piz Daint supercomputer (CSCS, Switzerland), using 4000+ GPU nodes for a little less than 80 h of wall-clock time, or 350,000 node hours. Using multiple benchmarks and performance measurements on the US Oak Ridge National Laboratory Titan supercomputer, we demonstrate that our code PKDGRAV3 delivers, to our knowledge, the fastest time-to-solution for large-scale cosmological N-body simulations. This was made possible by using the Fast Multipole Method in conjunction with individual and adaptive particle time steps, both deployed efficiently (and for the first time) on supercomputers with GPU-accelerated nodes. The very low memory footprint of PKDGRAV3 allowed us to run the first ever benchmark with 8 trillion particles on Titan, and to achieve perfect scaling up to 18,000 nodes and a peak performance of 10 Pflops.
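The individual and adaptive particle time steps mentioned in the abstract are, in the PKDGRAV family of codes, organized as hierarchical power-of-two subdivisions of the base step ("rungs"), so that particles in dense, rapidly evolving regions take short steps while the bulk of particles take long ones. Below is a minimal, illustrative C++ sketch of such rung assignment using the common accuracy criterion \(\Delta t = \eta \sqrt{\epsilon / |\mathbf{a}|}\) (with softening length \(\epsilon\) and accuracy parameter \(\eta\)); the struct, function names, and constants are hypothetical and are not taken from the PKDGRAV3 source.

```cpp
// Minimal sketch (not PKDGRAV3's actual implementation) of hierarchical
// "rung" time stepping: each particle gets an individual step that is a
// power-of-two subdivision of the global base step, chosen from the
// standard criterion dt = eta * sqrt(eps / |a|).
#include <cmath>
#include <cstdio>
#include <vector>

struct Particle {
    double ax, ay, az;  // acceleration components
    int rung;           // 0 = full base step, r = base step / 2^r
};

// Assign the lowest rung (largest step) whose step still satisfies the
// accuracy criterion, capped at maxRung.
int chooseRung(const Particle& p, double dtBase, double eta,
               double eps, int maxRung) {
    double a = std::sqrt(p.ax * p.ax + p.ay * p.ay + p.az * p.az);
    if (a == 0.0) return 0;                      // unaccelerated: base step
    double dtWanted = eta * std::sqrt(eps / a);  // accuracy criterion
    int rung = 0;
    double dt = dtBase;
    while (dt > dtWanted && rung < maxRung) {    // halve until small enough
        dt *= 0.5;
        ++rung;
    }
    return rung;
}

int main() {
    std::vector<Particle> particles = {
        {1e-4, 0.0, 0.0, 0},  // weakly accelerated particle (void)
        {5.0, 0.0, 0.0, 0},   // strongly accelerated particle (halo center)
    };
    const double dtBase = 0.01, eta = 0.2, eps = 1e-3;
    for (auto& p : particles) {
        p.rung = chooseRung(p, dtBase, eta, eps, 6);
        std::printf("rung %d -> dt = %g\n", p.rung, dtBase / (1 << p.rung));
    }
}
```

Because rungs are nested powers of two, all particles synchronize at the end of every base step, which keeps the force calculation (here, the Fast Multipole Method) operating on consistent particle positions while only the active rungs are integrated in between.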

Source journal: Computational Astrophysics and Cosmology

Journal description: Computational Astrophysics and Cosmology (CompAC) is now closed and no longer accepting submissions. However, Springer will maintain an archive of all articles published in CompAC, ensuring their accessibility through SpringerLink's comprehensive search functionality.