"Exascale models of stellar explosions: Quintessential multi-physics simulation"
J. A. Harris, Ran Chu, S. Couch, A. Dubey, E. Endeve, Antigoni Georgiadou, R. Jain, D. Kasen, M. P. Laiu, O. E. Bronson Messer, J. O'Neal, M. A. Sandoval, K. Weide
International Journal of High Performance Computing Applications 36(1): 59–77 (published 2021-07-20). DOI: 10.1177/10943420211027937

Abstract: The ExaStar project aims to deliver an efficient, versatile, and portable software ecosystem for multi-physics astrophysics simulations run on exascale machines. The code suite is a component-based multi-physics toolkit, built on the capabilities of current simulation codes (in particular Flash-X and Castro) and based on the massively parallel adaptive mesh refinement framework AMReX. It includes modules for hydrodynamics, advanced radiation transport, thermonuclear kinetics, and nuclear microphysics. The code will reach exascale efficiency by building upon current multi- and many-core packages integrated into an orchestration system that uses a combination of configuration tools, code translators, and a domain-specific asynchronous runtime to manage performance across a range of platform architectures. The target science includes multi-physics simulations of astrophysical explosions (such as supernovae and neutron star mergers) to understand the cosmic origin of the elements and the fundamental physics of matter and neutrinos under extreme conditions.
"Unprecedented cloud resolution in a GPU-enabled full-physics atmospheric climate simulation on OLCF's Summit supercomputer"
M. Norman, D. A. Bader, C. Eldred, W. Hannah, B. Hillman, C. R. Jones, Jungmin M. Lee, L. Leung, Isaac Lyngaas, K. Pressel, S. Sreepathi, M. Taylor, Xingqiu Yuan
International Journal of High Performance Computing Applications 36(1): 93–105 (published 2021-07-16). DOI: 10.1177/10943420211027539

Abstract: Clouds represent a key uncertainty in future climate projection. While explicit cloud resolution remains beyond our computational grasp for global climate, we can incorporate important cloud effects through a computational middle ground called the Multi-scale Modeling Framework (MMF), also known as Super Parameterization. This algorithmic approach embeds high-resolution Cloud Resolving Models (CRMs) to represent moist convective processes within each grid column in a Global Climate Model (GCM). The MMF code requires no parallel data transfers and provides a self-contained target for acceleration. This study investigates the performance of the Energy Exascale Earth System Model-MMF (E3SM-MMF) code on the OLCF Summit supercomputer at an unprecedented scale of simulation. Hundreds of kernels in the roughly 10K lines of code in the E3SM-MMF CRM were ported to GPUs with OpenACC directives. A high-resolution benchmark using 4600 nodes on Summit demonstrates the computational capability of the GPU-enabled E3SM-MMF code in a full physics climate simulation.
"Enabling particle applications for exascale computing platforms"
S. Mniszewski, J. Belak, J. Fattebert, C. Negre, S. Slattery, A. Adedoyin, R. Bird, Choong-Seock Chang, Guangye Chen, S. Ethier, S. Fogerty, Salman Habib, Christoph Junghans, D. Lebrun-Grandié, J. Mohd-Yusof, S. Moore, D. Osei-Kuffuor, S. Plimpton, A. Pope, S. Reeve, L. Ricketson, A. Scheinberg, A. Sharma, M. Wall
International Journal of High Performance Computing Applications 35(1): 572–597 (published 2021-07-01). DOI: 10.1177/10943420211022829

Abstract: The Exascale Computing Project (ECP) is invested in co-design to assure that key applications are ready for exascale computing. Within ECP, the Co-design Center for Particle Applications (CoPA) is addressing challenges faced by particle-based applications across four "sub-motifs": short-range particle–particle interactions (e.g., those which often dominate molecular dynamics (MD) and smoothed particle hydrodynamics (SPH) methods), long-range particle–particle interactions (e.g., electrostatic MD and gravitational N-body), particle-in-cell (PIC) methods, and linear-scaling electronic structure and quantum molecular dynamics (QMD) algorithms. Our crosscutting co-designed technologies fall into two categories: proxy applications (or "apps") and libraries. Proxy apps are vehicles used to evaluate the viability of incorporating various types of algorithms, data structures, and architecture-specific optimizations and the associated trade-offs; examples include ExaMiniMD, CabanaMD, CabanaPIC, and ExaSP2. Libraries are modular instantiations that multiple applications can utilize or be built upon; CoPA has developed the Cabana particle library, PROGRESS/BML libraries for QMD, and the SWFFT and fftMPI parallel FFT libraries. Success is measured by identifiable "lessons learned" that are translated either directly into parent production application codes or into libraries, with demonstrated performance and/or productivity improvement. The libraries and their use in CoPA's ECP application partner codes are also addressed.
"A survey of software implementations used by application codes in the Exascale Computing Project"
T. Evans, A. Siegel, E. Draeger, J. Deslippe, M. Francois, T. Germann, W. Hart, Daniel F. Martin
International Journal of High Performance Computing Applications 36(1): 5–12 (published 2021-06-25). DOI: 10.1177/10943420211028940

Abstract: The US Department of Energy Office of Science and the National Nuclear Security Administration initiated the Exascale Computing Project (ECP) in 2016 to prepare mission-relevant applications and scientific software for the delivery of exascale computers starting in 2023. The ECP currently supports 24 efforts directed at specific applications and six supporting co-design projects. These 24 application projects contain 62 application codes that are implemented in three high-level languages (C, C++, and Fortran) and use 22 combinations of graphics processing unit (GPU) programming models. The most common implementation language is C++, which is used in 53 different application codes. The most common programming models across ECP applications are CUDA and Kokkos, which are employed in 15 and 14 applications, respectively. This article provides a survey of the programming languages and models used in the ECP application codebase that will be used to achieve performance on the future exascale hardware platforms.
{"title":"Multiphysics coupling in the Exascale computing project","authors":"T. Evans, J. White","doi":"10.1177/10943420211028943","DOIUrl":"https://doi.org/10.1177/10943420211028943","url":null,"abstract":"Multiphysics coupling presents a significant challenge in terms of both computational accuracy and performance. Achieving high performance on coupled simulations can be particularly challenging in a high-performance computing context. The US Department of Energy Exascale Computing Project has the mission to prepare mission-relevant applications for the delivery of the exascale computers starting in 2023. Many of these applications require multiphysics coupling, and the implementations must be performant on exascale hardware. In this special issue we feature six articles performing advanced multiphysics coupling that span the computational science domains in the Exascale Computing Project.","PeriodicalId":54957,"journal":{"name":"International Journal of High Performance Computing Applications","volume":"36 1","pages":"3 - 4"},"PeriodicalIF":3.1,"publicationDate":"2021-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10943420211028943","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42901728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Online data analysis and reduction: An important co-design motif for extreme-scale computers"
Ian Foster, M. Ainsworth, J. Bessac, F. Cappello, J. Choi, S. Di, Z. Di, A. M. Gok, Hanqi Guo, K. Huck, Christopher Kelly, S. Klasky, K. Kleese van Dam, Xin Liang, Kshitij Mehta, M. Parashar, T. Peterka, Line C. Pouchard, Tong Shu, O. Tugluk, H. V. van Dam, Lipeng Wan, Matthew Wolf, J. Wozniak, Wei Xu, I. Yakushin, Shinjae Yoo, T. Munson
International Journal of High Performance Computing Applications 35(1): 617–635 (published 2021-06-12). DOI: 10.1177/10943420211023549

Abstract: A growing disparity between supercomputer computation speeds and I/O rates means that it is rapidly becoming infeasible to analyze supercomputer application output only after that output has been written to a file system. Instead, data-generating applications must run concurrently with data reduction and/or analysis operations, with which they exchange information via high-speed methods such as interprocess communications. The resulting parallel computing motif, online data analysis and reduction (ODAR), has important implications for both application and HPC systems design. Here we introduce the ODAR motif and its co-design concerns, describe a co-design process for identifying and addressing those concerns, present tools that assist in the co-design process, and present case studies to illustrate the use of the process and tools in practical settings.
"Efficient exascale discretizations: High-order finite element methods"
T. Kolev, P. Fischer, M. Min, J. Dongarra, Jed Brown, V. Dobrev, T. Warburton, S. Tomov, M. Shephard, A. Abdelfattah, V. Barra, Natalie N. Beams, Jean-Sylvain Camier, N. Chalmers, Yohann Dudouit, A. Karakus, I. Karlin, S. Kerkemeier, Yu-Hsiang Lan, David S. Medina, E. Merzari, A. Obabko, Will Pazner, T. Rathnayake, Cameron W. Smith, L. Spies, K. Swirydowicz, Jeremy L. Thompson, A. Tomboulides, V. Tomov
International Journal of High Performance Computing Applications 35(1): 527–552 (published 2021-06-08). DOI: 10.1177/10943420211020803

Abstract: Efficient exploitation of exascale architectures requires rethinking of the numerical algorithms used in many large-scale applications. These architectures favor algorithms that expose ultra-fine-grain parallelism and maximize the ratio of floating point operations to energy-intensive data movement. One of the few viable approaches to achieve high efficiency in the area of PDE discretizations on unstructured grids is to use matrix-free/partially assembled high-order finite element methods, since these methods can increase the accuracy and/or lower the computational time due to reduced data motion. In this paper we provide an overview of the research and development activities in the Center for Efficient Exascale Discretizations (CEED), a co-design center in the Exascale Computing Project that is focused on the development of next-generation discretization software and algorithms to enable a wide range of finite element applications to run efficiently on future hardware. CEED is a research partnership involving more than 30 computational scientists from two US national labs and five universities, including members of the Nek5000, MFEM, MAGMA and PETSc projects. We discuss the CEED co-design activities based on targeted benchmarks, miniapps and discretization libraries and our work on performance optimizations for large-scale GPU architectures. We also provide a broad overview of research and development activities in areas such as unstructured adaptive mesh refinement algorithms, matrix-free linear solvers, and high-order data visualization, and list examples of collaborations with several ECP and external applications.
"Coupling of regional geophysics and local soil-structure models in the EQSIM fault-to-structure earthquake simulation framework"
D. McCallen, Houjun Tang, Suiwen Wu, E. Eckert, Junfei Huang, N. Petersson
International Journal of High Performance Computing Applications 36(1): 78–92 (published 2021-05-25). DOI: 10.1177/10943420211019118

Abstract: Accurate understanding and quantification of the risk to critical infrastructure posed by future large earthquakes continues to be a very challenging problem. Earthquake phenomena are quite complex, and approaches to predicting ground motions for future earthquake events have historically been empirical: measured ground motion data from historical earthquakes are homogenized into a common data set, and the ground motions for future postulated earthquakes are derived probabilistically from the historical observations. This procedure has well-recognized limitations, principally because earthquake ground motions tend to be dictated by the particular fault rupture and geologic conditions at a given site and are thus very site-specific. Historical earthquakes recorded at different locations are often only marginally representative. There has been strong and increasing interest in utilizing large-scale, physics-based regional simulations to advance the ability to accurately predict ground motions and associated infrastructure response. However, the computational requirements for simulations at frequencies of engineering interest have proven a major barrier to employing regional-scale simulations. In a U.S. Department of Energy Exascale Computing Initiative project, the EQSIM application is being developed to create a framework for fault-to-structure simulations, prepared to exploit emerging exascale platforms in order to overcome these computational limitations. This article presents the essential methodology and computational workflow employed in EQSIM to couple regional-scale geophysics models with local soil-structure models to achieve a fully integrated, complete fault-to-structure simulation framework. The computational workflow, accuracy, and performance of the coupling methodology are illustrated through example fault-to-structure simulations.
"The Exascale Framework for High Fidelity coupled Simulations (EFFIS): Enabling whole device modeling in fusion science"
E. Suchyta, S. Klasky, N. Podhorszki, M. Wolf, Abolaji D. Adesoji, Choong-Seock Chang, J. Choi, Philip E. Davis, J. Dominski, S. Ethier, I. Foster, K. Germaschewski, Berk Geveci, Chris Harris, K. Huck, Qing Liu, Jeremy S. Logan, Kshitij Mehta, G. Merlo, S. Moore, T. Munson, M. Parashar, D. Pugmire, M. Shephard, Cameron W. Smith, P. Subedi, Lipeng Wan, Ruonan Wang, Shuangxi Zhang
International Journal of High Performance Computing Applications 36(1): 106–128 (published 2021-05-24). DOI: 10.1177/10943420211019119

Abstract: We present the Exascale Framework for High Fidelity coupled Simulations (EFFIS), a workflow and code coupling framework developed as part of the Whole Device Modeling Application (WDMApp) in the Exascale Computing Project. EFFIS consists of a library, command line utilities, and a collection of run-time daemons. Together, these software products enable users to easily compose and execute workflows that include: strong or weak coupling, in situ (or offline) analysis/visualization/monitoring, command-and-control actions, remote dashboard integration, and more. We describe WDMApp physics coupling cases and computer science requirements that motivate the design of the EFFIS framework. Furthermore, we explain the essential enabling technology that EFFIS leverages: ADIOS for performant data movement, PerfStubs/TAU for performance monitoring, and an advanced COUPLER for transforming coupling data from its native format to the representation needed by another application. Finally, we demonstrate EFFIS using coupled multi-simulation WDMApp workflows and exemplify how the framework supports the project's needs. We show that EFFIS and its associated services for data movement, visualization, and performance collection do not introduce appreciable overhead to the WDMApp workflow and that the resource-dominant application's idle time while waiting for data is minimal.
{"title":"Parallel encryption of input and output data for HPC applications","authors":"L. Lapworth","doi":"10.1177/10943420211016516","DOIUrl":"https://doi.org/10.1177/10943420211016516","url":null,"abstract":"A methodology for protecting confidential data sets on third-party HPC systems is reported. This is based on the NIST AES algorithm and supports the common ECB, CTR and CBC modes. The methodology is built on a flexible programming model that delegates management of the encryption key to the application code. The methodology also includes a fine-grain control over which arrays on the files are encrypted. All the stages in an encrypted workflow are investigated using an established CFD code. Benchmarks are reported using the UK national supercomputer service (ARCHER) running the CFD code on up to 18,432 cores. Performance benchmarks demonstrate the importance of the way the encryption metadata is treated. Naïve treatments are shown to have a large impact on performance. However, through a more judicious treatment, the time to run the solver with encrypted input and output data is shown to be almost identical to that with plain data. A novel parallel treatment of the block chaining in AES-CBC mode allows users to benefit from the avalanche properties of this mode relative to the CTR mode, with no penalty in run-time.","PeriodicalId":54957,"journal":{"name":"International Journal of High Performance Computing Applications","volume":"36 1","pages":"231 - 250"},"PeriodicalIF":3.1,"publicationDate":"2021-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/10943420211016516","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46755969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}