International Journal of High Performance Computing Applications: Latest Publications

Exascale models of stellar explosions: Quintessential multi-physics simulation
IF 3.1 | CAS Tier 3 | Computer Science
International Journal of High Performance Computing Applications | Pub Date: 2021-07-20 | DOI: 10.1177/10943420211027937
J. A. Harris, Ran Chu, S. Couch, A. Dubey, E. Endeve, Antigoni Georgiadou, R. Jain, D. Kasen, M. P. Laiu, O. E. Bronson Messer, J. O'Neal, M. A. Sandoval, K. Weide
{"title":"Exascale models of stellar explosions: Quintessential multi-physics simulation","authors":"J. A. Harris, Ran Chu, S. Couch, A. Dubey, E. Endeve, Antigoni Georgiadou, R. Jain, D. Kasen, M. P. Laiu, O. E. Bronson Messer, J. O'Neal, M. A. Sandoval, K. Weide","doi":"10.1177/10943420211027937","DOIUrl":"https://doi.org/10.1177/10943420211027937","url":null,"abstract":"The ExaStar project aims to deliver an efficient, versatile, and portable software ecosystem for multi-physics astrophysics simulations run on exascale machines. The code suite is a component-based multi-physics toolkit, built on the capabilities of current simulation codes (in particular Flash-X and Castro), and based on the massively parallel adaptive mesh refinement framework AMReX. It includes modules for hydrodynamics, advanced radiation transport, thermonuclear kinetics, and nuclear microphysics. The code will reach exascale efficiency by building upon current multi- and many-core packages integrated into an orchestration system that uses a combination of configuration tools, code translators, and a domain-specific asynchronous runtime to manage performance across a range of platform architectures. The target science includes multi-physics simulations of astrophysical explosions (such as supernovae and neutron star mergers) to understand the cosmic origin of the elements and the fundamental physics of matter and neutrinos under extreme conditions.","PeriodicalId":54957,"journal":{"name":"International Journal of High Performance Computing Applications","volume":"36 1","pages":"59 - 77"},"PeriodicalIF":3.1,"publicationDate":"2021-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10943420211027937","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42594898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Unprecedented cloud resolution in a GPU-enabled full-physics atmospheric climate simulation on OLCF’s Summit supercomputer
IF 3.1 | CAS Tier 3 | Computer Science
International Journal of High Performance Computing Applications | Pub Date: 2021-07-16 | DOI: 10.1177/10943420211027539
M. Norman, D. A. Bader, C. Eldred, W. Hannah, B. Hillman, C. R. Jones, Jungmin M. Lee, L. Leung, Isaac Lyngaas, K. Pressel, S. Sreepathi, M. Taylor, Xingqiu Yuan
{"title":"Unprecedented cloud resolution in a GPU-enabled full-physics atmospheric climate simulation on OLCF’s summit supercomputer","authors":"M. Norman, D. A. Bader, C. Eldred, W. Hannah, B. Hillman, C. R. Jones, Jungmin M. Lee, L. Leung, Isaac Lyngaas, K. Pressel, S. Sreepathi, M. Taylor, Xingqiu Yuan","doi":"10.1177/10943420211027539","DOIUrl":"https://doi.org/10.1177/10943420211027539","url":null,"abstract":"Clouds represent a key uncertainty in future climate projection. While explicit cloud resolution remains beyond our computational grasp for global climate, we can incorporate important cloud effects through a computational middle ground called the Multi-scale Modeling Framework (MMF), also known as Super Parameterization. This algorithmic approach embeds high-resolution Cloud Resolving Models (CRMs) to represent moist convective processes within each grid column in a Global Climate Model (GCM). The MMF code requires no parallel data transfers and provides a self-contained target for acceleration. This study investigates the performance of the Energy Exascale Earth System Model-MMF (E3SM-MMF) code on the OLCF Summit supercomputer at an unprecedented scale of simulation. Hundreds of kernels in the roughly 10K lines of code in the E3SM-MMF CRM were ported to GPUs with OpenACC directives. A high-resolution benchmark using 4600 nodes on Summit demonstrates the computational capability of the GPU-enabled E3SM-MMF code in a full physics climate simulation.","PeriodicalId":54957,"journal":{"name":"International Journal of High Performance Computing Applications","volume":"36 1","pages":"93 - 105"},"PeriodicalIF":3.1,"publicationDate":"2021-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42497644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Enabling particle applications for exascale computing platforms
IF 3.1 | CAS Tier 3 | Computer Science
International Journal of High Performance Computing Applications | Pub Date: 2021-07-01 | DOI: 10.1177/10943420211022829
S. Mniszewski, J. Belak, J. Fattebert, C. Negre, S. Slattery, A. Adedoyin, R. Bird, Choong-Seock Chang, Guangye Chen, S. Ethier, S. Fogerty, Salman Habib, Christoph Junghans, D. Lebrun-Grandié, J. Mohd-Yusof, S. Moore, D. Osei-Kuffuor, S. Plimpton, A. Pope, S. Reeve, L. Ricketson, A. Scheinberg, A. Sharma, M. Wall
{"title":"Enabling particle applications for exascale computing platforms","authors":"S. Mniszewski, J. Belak, J. Fattebert, C. Negre, S. Slattery, A. Adedoyin, R. Bird, Choong-Seock Chang, Guangye Chen, S. Ethier, S. Fogerty, Salman Habib, Christoph Junghans, D. Lebrun-Grandié, J. Mohd-Yusof, S. Moore, D. Osei-Kuffuor, S. Plimpton, A. Pope, S. Reeve, L. Ricketson, A. Scheinberg, A. Sharma, M. Wall","doi":"10.1177/10943420211022829","DOIUrl":"https://doi.org/10.1177/10943420211022829","url":null,"abstract":"The Exascale Computing Project (ECP) is invested in co-design to assure that key applications are ready for exascale computing. Within ECP, the Co-design Center for Particle Applications (CoPA) is addressing challenges faced by particle-based applications across four “sub-motifs”: short-range particle–particle interactions (e.g., those which often dominate molecular dynamics (MD) and smoothed particle hydrodynamics (SPH) methods), long-range particle–particle interactions (e.g., electrostatic MD and gravitational N-body), particle-in-cell (PIC) methods, and linear-scaling electronic structure and quantum molecular dynamics (QMD) algorithms. Our crosscutting co-designed technologies fall into two categories: proxy applications (or “apps”) and libraries. Proxy apps are vehicles used to evaluate the viability of incorporating various types of algorithms, data structures, and architecture-specific optimizations and the associated trade-offs; examples include ExaMiniMD, CabanaMD, CabanaPIC, and ExaSP2. Libraries are modular instantiations that multiple applications can utilize or be built upon; CoPA has developed the Cabana particle library, PROGRESS/BML libraries for QMD, and the SWFFT and fftMPI parallel FFT libraries. Success is measured by identifiable “lessons learned” that are translated either directly into parent production application codes or into libraries, with demonstrated performance and/or productivity improvement. The libraries and their use in CoPA’s ECP application partner codes are also addressed.","PeriodicalId":54957,"journal":{"name":"International Journal of High Performance Computing Applications","volume":"35 1","pages":"572 - 597"},"PeriodicalIF":3.1,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10943420211022829","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49142083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
A survey of software implementations used by application codes in the Exascale Computing Project
IF 3.1 | CAS Tier 3 | Computer Science
International Journal of High Performance Computing Applications | Pub Date: 2021-06-25 | DOI: 10.1177/10943420211028940
T. Evans, A. Siegel, E. Draeger, J. Deslippe, M. Francois, T. Germann, W. Hart, Daniel F. Martin
{"title":"A survey of software implementations used by application codes in the Exascale Computing Project","authors":"T. Evans, A. Siegel, E. Draeger, J. Deslippe, M. Francois, T. Germann, W. Hart, Daniel F. Martin","doi":"10.1177/10943420211028940","DOIUrl":"https://doi.org/10.1177/10943420211028940","url":null,"abstract":"The US Department of Energy Office of Science and the National Nuclear Security Administration initiated the Exascale Computing Project (ECP) in 2016 to prepare mission-relevant applications and scientific software for the delivery of the exascale computers starting in 2023. The ECP currently supports 24 efforts directed at specific applications and six supporting co-design projects. These 24 application projects contain 62 application codes that are implemented in three high-level languages—C, C++, and Fortran—and use 22 combinations of graphical processing unit programming models. The most common implementation language is C++, which is used in 53 different application codes. The most common programming models across ECP applications are CUDA and Kokkos, which are employed in 15 and 14 applications, respectively. This article provides a survey of the programming languages and models used in the ECP applications codebase that will be used to achieve performance on the future exascale hardware platforms.","PeriodicalId":54957,"journal":{"name":"International Journal of High Performance Computing Applications","volume":"36 1","pages":"5 - 12"},"PeriodicalIF":3.1,"publicationDate":"2021-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10943420211028940","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41398347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Multiphysics coupling in the Exascale Computing Project
IF 3.1 | CAS Tier 3 | Computer Science
International Journal of High Performance Computing Applications | Pub Date: 2021-06-23 | DOI: 10.1177/10943420211028943
T. Evans, J. White
{"title":"Multiphysics coupling in the Exascale computing project","authors":"T. Evans, J. White","doi":"10.1177/10943420211028943","DOIUrl":"https://doi.org/10.1177/10943420211028943","url":null,"abstract":"Multiphysics coupling presents a significant challenge in terms of both computational accuracy and performance. Achieving high performance on coupled simulations can be particularly challenging in a high-performance computing context. The US Department of Energy Exascale Computing Project has the mission to prepare mission-relevant applications for the delivery of the exascale computers starting in 2023. Many of these applications require multiphysics coupling, and the implementations must be performant on exascale hardware. In this special issue we feature six articles performing advanced multiphysics coupling that span the computational science domains in the Exascale Computing Project.","PeriodicalId":54957,"journal":{"name":"International Journal of High Performance Computing Applications","volume":"36 1","pages":"3 - 4"},"PeriodicalIF":3.1,"publicationDate":"2021-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10943420211028943","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42901728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Online data analysis and reduction: An important Co-design motif for extreme-scale computers
IF 3.1 | CAS Tier 3 | Computer Science
International Journal of High Performance Computing Applications | Pub Date: 2021-06-12 | DOI: 10.1177/10943420211023549
Ian Foster, M. Ainsworth, J. Bessac, F. Cappello, J. Choi, S. Di, Z. Di, A. M. Gok, Hanqi Guo, K. Huck, Christopher Kelly, S. Klasky, K. Kleese van Dam, Xin Liang, Kshitij Mehta, M. Parashar, T. Peterka, Line C. Pouchard, Tong Shu, O. Tugluk, H. V. van Dam, Lipeng Wan, Matthew Wolf, J. Wozniak, Wei Xu, I. Yakushin, Shinjae Yoo, T. Munson
{"title":"Online data analysis and reduction: An important Co-design motif for extreme-scale computers","authors":"Ian Foster, M. Ainsworth, J. Bessac, F. Cappello, J. Choi, S. Di, Z. Di, A. M. Gok, Hanqi Guo, K. Huck, Christopher Kelly, S. Klasky, K. Kleese van Dam, Xin Liang, Kshitij Mehta, M. Parashar, T. Peterka, Line C. Pouchard, Tong Shu, O. Tugluk, H. V. van Dam, Lipeng Wan, Matthew Wolf, J. Wozniak, Wei Xu, I. Yakushin, Shinjae Yoo, T. Munson","doi":"10.1177/10943420211023549","DOIUrl":"https://doi.org/10.1177/10943420211023549","url":null,"abstract":"A growing disparity between supercomputer computation speeds and I/O rates means that it is rapidly becoming infeasible to analyze supercomputer application output only after that output has been written to a file system. Instead, data-generating applications must run concurrently with data reduction and/or analysis operations, with which they exchange information via high-speed methods such as interprocess communications. The resulting parallel computing motif, online data analysis and reduction (ODAR), has important implications for both application and HPC systems design. Here we introduce the ODAR motif and its co-design concerns, describe a co-design process for identifying and addressing those concerns, present tools that assist in the co-design process, and present case studies to illustrate the use of the process and tools in practical settings.","PeriodicalId":54957,"journal":{"name":"International Journal of High Performance Computing Applications","volume":"35 1","pages":"617 - 635"},"PeriodicalIF":3.1,"publicationDate":"2021-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10943420211023549","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42711902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
Efficient exascale discretizations: High-order finite element methods
IF 3.1 | CAS Tier 3 | Computer Science
International Journal of High Performance Computing Applications | Pub Date: 2021-06-08 | DOI: 10.1177/10943420211020803
T. Kolev, P. Fischer, M. Min, J. Dongarra, Jed Brown, V. Dobrev, T. Warburton, S. Tomov, M. Shephard, A. Abdelfattah, V. Barra, Natalie N. Beams, Jean-Sylvain Camier, N. Chalmers, Yohann Dudouit, A. Karakus, I. Karlin, S. Kerkemeier, Yu-Hsiang Lan, David S. Medina, E. Merzari, A. Obabko, Will Pazner, T. Rathnayake, Cameron W. Smith, L. Spies, K. Swirydowicz, Jeremy L. Thompson, A. Tomboulides, V. Tomov
{"title":"Efficient exascale discretizations: High-order finite element methods","authors":"T. Kolev, P. Fischer, M. Min, J. Dongarra, Jed Brown, V. Dobrev, T. Warburton, S. Tomov, M. Shephard, A. Abdelfattah, V. Barra, Natalie N. Beams, Jean-Sylvain Camier, N. Chalmers, Yohann Dudouit, A. Karakus, I. Karlin, S. Kerkemeier, Yu-Hsiang Lan, David S. Medina, E. Merzari, A. Obabko, Will Pazner, T. Rathnayake, Cameron W. Smith, L. Spies, K. Swirydowicz, Jeremy L. Thompson, A. Tomboulides, V. Tomov","doi":"10.1177/10943420211020803","DOIUrl":"https://doi.org/10.1177/10943420211020803","url":null,"abstract":"Efficient exploitation of exascale architectures requires rethinking of the numerical algorithms used in many large-scale applications. These architectures favor algorithms that expose ultra fine-grain parallelism and maximize the ratio of floating point operations to energy intensive data movement. One of the few viable approaches to achieve high efficiency in the area of PDE discretizations on unstructured grids is to use matrix-free/partially assembled high-order finite element methods, since these methods can increase the accuracy and/or lower the computational time due to reduced data motion. In this paper we provide an overview of the research and development activities in the Center for Efficient Exascale Discretizations (CEED), a co-design center in the Exascale Computing Project that is focused on the development of next-generation discretization software and algorithms to enable a wide range of finite element applications to run efficiently on future hardware. CEED is a research partnership involving more than 30 computational scientists from two US national labs and five universities, including members of the Nek5000, MFEM, MAGMA and PETSc projects. We discuss the CEED co-design activities based on targeted benchmarks, miniapps and discretization libraries and our work on performance optimizations for large-scale GPU architectures. We also provide a broad overview of research and development activities in areas such as unstructured adaptive mesh refinement algorithms, matrix-free linear solvers, high-order data visualization, and list examples of collaborations with several ECP and external applications.","PeriodicalId":54957,"journal":{"name":"International Journal of High Performance Computing Applications","volume":"35 1","pages":"527 - 552"},"PeriodicalIF":3.1,"publicationDate":"2021-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10943420211020803","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"65398845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 30
Coupling of regional geophysics and local soil-structure models in the EQSIM fault-to-structure earthquake simulation framework
IF 3.1 | CAS Tier 3 | Computer Science
International Journal of High Performance Computing Applications | Pub Date: 2021-05-25 | DOI: 10.1177/10943420211019118
D. McCallen, Houjun Tang, Suiwen Wu, E. Eckert, Junfei Huang, N. Petersson
{"title":"Coupling of regional geophysics and local soil-structure models in the EQSIM fault-to-structure earthquake simulation framework","authors":"D. McCallen, Houjun Tang, Suiwen Wu, E. Eckert, Junfei Huang, N. Petersson","doi":"10.1177/10943420211019118","DOIUrl":"https://doi.org/10.1177/10943420211019118","url":null,"abstract":"Accurate understanding and quantification of the risk to critical infrastructure posed by future large earthquakes continues to be a very challenging problem. Earthquake phenomena are quite complex and traditional approaches to predicting ground motions for future earthquake events have historically been empirically based whereby measured ground motion data from historical earthquakes are homogenized into a common data set and the ground motions for future postulated earthquakes are probabilistically derived based on the historical observations. This procedure has recognized significant limitations, principally due to the fact that earthquake ground motions tend to be dictated by the particular earthquake fault rupture and geologic conditions at a given site and are thus very site-specific. Historical earthquakes recorded at different locations are often only marginally representative. There has been strong and increasing interest in utilizing large-scale, physics-based regional simulations to advance the ability to accurately predict ground motions and associated infrastructure response. However, the computational requirements for simulations at frequencies of engineering interest have proven a major barrier to employing regional scale simulations. In a U.S. Department of Energy Exascale Computing Initiative project, the EQSIM application development is underway to create a framework for fault-to-structure simulations. This framework is being prepared to exploit emerging exascale platforms in order to overcome computational limitations. This article presents the essential methodology and computational workflow employed in EQSIM to couple regional-scale geophysics models with local soil-structure models to achieve a fully integrated, complete fault-to-structure simulation framework. The computational workflow, accuracy and performance of the coupling methodology are illustrated through example fault-to-structure simulations.","PeriodicalId":54957,"journal":{"name":"International Journal of High Performance Computing Applications","volume":"36 1","pages":"78 - 92"},"PeriodicalIF":3.1,"publicationDate":"2021-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10943420211019118","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42341355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
The Exascale Framework for High Fidelity coupled Simulations (EFFIS): Enabling whole device modeling in fusion science
IF 3.1 | CAS Tier 3 | Computer Science
International Journal of High Performance Computing Applications | Pub Date: 2021-05-24 | DOI: 10.1177/10943420211019119
E. Suchyta, S. Klasky, N. Podhorszki, M. Wolf, Abolaji D. Adesoji, Choong-Seock Chang, J. Choi, Philip E. Davis, J. Dominski, S. Ethier, I. Foster, K. Germaschewski, Berk Geveci, Chris Harris, K. Huck, Qing Liu, Jeremy S. Logan, Kshitij Mehta, G. Merlo, S. Moore, T. Munson, M. Parashar, D. Pugmire, M. Shephard, Cameron W. Smith, P. Subedi, Lipeng Wan, Ruonan Wang, Shuangxi Zhang
{"title":"The Exascale Framework for High Fidelity coupled Simulations (EFFIS): Enabling whole device modeling in fusion science","authors":"E. Suchyta, S. Klasky, N. Podhorszki, M. Wolf, Abolaji D. Adesoji, Choong-Seock Chang, J. Choi, Philip E. Davis, J. Dominski, S. Ethier, I. Foster, K. Germaschewski, Berk Geveci, Chris Harris, K. Huck, Qing Liu, Jeremy S. Logan, Kshitij Mehta, G. Merlo, S. Moore, T. Munson, M. Parashar, D. Pugmire, M. Shephard, Cameron W. Smith, P. Subedi, Lipeng Wan, Ruonan Wang, Shuangxi Zhang","doi":"10.1177/10943420211019119","DOIUrl":"https://doi.org/10.1177/10943420211019119","url":null,"abstract":"We present the Exascale Framework for High Fidelity coupled Simulations (EFFIS), a workflow and code coupling framework developed as part of the Whole Device Modeling Application (WDMApp) in the Exascale Computing Project. EFFIS consists of a library, command line utilities, and a collection of run-time daemons. Together, these software products enable users to easily compose and execute workflows that include: strong or weak coupling, in situ (or offline) analysis/visualization/monitoring, command-and-control actions, remote dashboard integration, and more. We describe WDMApp physics coupling cases and computer science requirements that motivate the design of the EFFIS framework. Furthermore, we explain the essential enabling technology that EFFIS leverages: ADIOS for performant data movement, PerfStubs/TAU for performance monitoring, and an advanced COUPLER for transforming coupling data from its native format to the representation needed by another application. Finally, we demonstrate EFFIS using coupled multi-simulation WDMApp workflows and exemplify how the framework supports the project’s needs. We show that EFFIS and its associated services for data movement, visualization, and performance collection does not introduce appreciable overhead to the WDMApp workflow and that the resource-dominant application’s idle time while waiting for data is minimal.","PeriodicalId":54957,"journal":{"name":"International Journal of High Performance Computing Applications","volume":"36 1","pages":"106 - 128"},"PeriodicalIF":3.1,"publicationDate":"2021-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10943420211019119","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41475608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Parallel encryption of input and output data for HPC applications
IF 3.1 | CAS Tier 3 | Computer Science
International Journal of High Performance Computing Applications | Pub Date: 2021-05-18 | DOI: 10.1177/10943420211016516
L. Lapworth
{"title":"Parallel encryption of input and output data for HPC applications","authors":"L. Lapworth","doi":"10.1177/10943420211016516","DOIUrl":"https://doi.org/10.1177/10943420211016516","url":null,"abstract":"A methodology for protecting confidential data sets on third-party HPC systems is reported. This is based on the NIST AES algorithm and supports the common ECB, CTR and CBC modes. The methodology is built on a flexible programming model that delegates management of the encryption key to the application code. The methodology also includes a fine-grain control over which arrays on the files are encrypted. All the stages in an encrypted workflow are investigated using an established CFD code. Benchmarks are reported using the UK national supercomputer service (ARCHER) running the CFD code on up to 18,432 cores. Performance benchmarks demonstrate the importance of the way the encryption metadata is treated. Naïve treatments are shown to have a large impact on performance. However, through a more judicious treatment, the time to run the solver with encrypted input and output data is shown to be almost identical to that with plain data. A novel parallel treatment of the block chaining in AES-CBC mode allows users to benefit from the avalanche properties of this mode relative to the CTR mode, with no penalty in run-time.","PeriodicalId":54957,"journal":{"name":"International Journal of High Performance Computing Applications","volume":"36 1","pages":"231 - 250"},"PeriodicalIF":3.1,"publicationDate":"2021-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10943420211016516","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46755969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3