Error estimates and communication overhead in the computation of the bidomain equations on the distributed memory parallel Blue Gene/L supercomputer

M. Reumann, B. Fitch, A. Rayshubskiy, D. Weiß, G. Seemann, O. Doessel, M. Pitman, J. Rice
DOI: 10.1109/CIC.2008.4748982
Journal: 2008 Computers in Cardiology
Published: 2008-09-01
Citations: 1

Abstract

Increasing biophysical detail in multiphysics, multiscale cardiac models will demand higher levels of parallelism in multi-core approaches to obtain fast simulation times. As an example of such a highly parallel multi-core approach, we developed a completely distributed bidomain cardiac model implemented on the IBM Blue Gene/L architecture. A tissue block of 50 × 50 × 100 cubic elements, based on the ten Tusscher et al. (2004) cell model, is distributed across 512 computational nodes. The extracellular potential is calculated by the Gauss-Seidel (GS) iterative method, which typically requires high levels of inter-processor communication. Specifically, the GS method requires knowledge of all cellular potentials at each of its iterative steps. In the absence of shared memory, these values are communicated with substantial overhead. We attempted to reduce communication overhead by computing the extracellular potential only every 5th time step of the cell-model integration. We also investigated the effects of reducing inter-processor communication to every 5th, 10th, or 50th iteration, or eliminating it entirely, within the GS iteration. While technically incorrect, these approximations had little impact on numerical convergence or accuracy for the simulations tested. The results suggest that heuristic approaches may further reduce inter-processor communication and improve the execution time of large-scale simulations.
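The communication-reduction idea described in the abstract can be illustrated with a minimal single-process sketch (not the authors' implementation, which solves the bidomain equations on Blue Gene/L with MPI): Gauss-Seidel sweeps on a 1-D Poisson test problem split into two subdomains, where the shared interface ("halo") values are refreshed only every `comm_interval` sweeps. Between exchanges, each subdomain iterates with a stale copy of its neighbour's boundary value, mimicking reduced inter-processor communication. All names and the test problem here are illustrative assumptions.

```python
def gauss_seidel_lagged(n=16, sweeps=500, comm_interval=5):
    """Gauss-Seidel for the 1-D Poisson problem -u'' = 1 on (0,1),
    u(0) = u(1) = 0, split into two subdomains. Interface values are
    only exchanged every `comm_interval` sweeps; in between, each
    subdomain uses a stale copy of its neighbour's boundary value."""
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)                  # interior points 1..n, fixed ends
    mid = n // 2 + 1                     # first index of the right subdomain
    halo_left, halo_right = u[mid - 1], u[mid]

    for it in range(sweeps):
        if it % comm_interval == 0:      # "communication": refresh halos
            halo_left, halo_right = u[mid - 1], u[mid]
        for i in range(1, mid):          # left subdomain sweep
            right = halo_right if i == mid - 1 else u[i + 1]
            u[i] = 0.5 * (u[i - 1] + right + h * h)
        for i in range(mid, n + 1):      # right subdomain sweep
            left = halo_left if i == mid else u[i - 1]
            u[i] = 0.5 * (left + u[i + 1] + h * h)
    return u

u = gauss_seidel_lagged()
# exact solution of -u'' = 1 with these boundary conditions is u(x) = x(1-x)/2
err = max(abs(u[i] - (i / 17) * (1 - i / 17) / 2) for i in range(18))
```

Because the exact solution is a fixed point of both the sweeps and the halo exchange, the lagged iteration still converges to it; what degrades is the convergence rate, which is the trade-off the paper quantifies against communication overhead.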