2022 IEEE 29th International Conference on High Performance Computing, Data, and Analytics (HiPC): Latest Publications

Message from the HiPC 2022 Program Chairs
DOI: 10.1109/hipc56025.2022.00006
Published: 2022-12-01
Citations: 0

A Deep Learning-Based In Situ Analysis Framework for Tropical Cyclogenesis Prediction
Abir Mukherjee, Preeti Malakar
DOI: 10.1109/HiPC56025.2022.00032
Published: 2022-12-01
Abstract: Tropical cyclones are among the most violent natural disasters, causing massive devastation. Accurately forecasting cyclones with long lead times is an important problem. We propose a framework to predict tropical cyclogenesis (i.e., cyclone formation). The framework executes alongside a parallel weather simulation model (WRF) and analyzes the simulation output as soon as it is generated. Our framework has two major components: a trigger function and a deep predictive model. The trigger function acts as a basic filter to separate cyclones from non-cyclones. The proposed deep learning model is based on convolutional neural networks (CNNs). Best track data from the India Meteorological Department (IMD) is used as a reference for labeling data points as disturbances or tropical cyclones. The framework achieves a probability of detection (POD) of approximately 95% with an overall false alarm ratio (FAR) of 21.69%. The predictions made by the framework have a lead time of up to 150 hours before a disturbance transforms into a tropical cyclone.
Citations: 0

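The POD and FAR figures quoted in this abstract are standard forecast-verification scores. A minimal sketch of their definitions (the counts below are made up for illustration and do not come from the paper):

```python
def pod_far(hits, misses, false_alarms):
    """Probability of detection and false alarm ratio.

    POD = hits / (hits + misses)            -- fraction of real events caught
    FAR = false_alarms / (hits + false_alarms)  -- fraction of alarms that were wrong
    """
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far

# Hypothetical counts: 95 detected cyclones, 5 missed, 10 false alarms.
pod, far = pod_far(hits=95, misses=5, false_alarms=10)
assert abs(pod - 0.95) < 1e-12
```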
A GPU-accelerated Data Transformation Framework Rooted in Pushdown Transducers
Tri Nguyen, M. Becchi
DOI: 10.1109/HiPC56025.2022.00038
Published: 2022-12-01
Abstract: With the rise of machine learning and data analytics, the ability to process large and diverse sets of data efficiently has become crucial. Research has shown that data transformation is a key performance bottleneck for applications across a variety of domains, from data analytics to scientific computing. Custom hardware accelerators and GPU implementations targeting specific data transformation tasks can alleviate the problem, but suffer from narrow applicability and lack of generality. To tackle this problem, we propose a GPU-accelerated data transformation engine grounded on pushdown transducers. We define an extended pushdown transducer abstraction (effPDT) that allows expressing a wide range of data transformations in a memory-efficient fashion, and is thus amenable to GPU deployment. The effPDT execution engine utilizes a data streaming model that significantly reduces the application's memory requirements, facilitating deployment on high- and low-end systems. We showcase our GPU-accelerated engine on a diverse set of transformation tasks covering data encoding/decoding, parsing and querying of structured data, and matrix transformation, and we evaluate it against publicly available CPU and GPU library implementations of the considered data transformation tasks. To understand the benefits of the effPDT abstraction, we extend our data transformation engine to also support finite state transducers (FSTs), map the considered data transformation tasks onto FSTs, and compare the performance and resource requirements of the FST-based and effPDT-based implementations.
Citations: 0

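As background on the abstraction this paper builds on: a pushdown transducer is a finite-state transducer augmented with a stack, which lets it track nesting depth that no purely finite-state machine can. A toy sketch of the idea (illustrative only, unrelated to the effPDT engine's internals):

```python
def depth_annotate(s):
    """Toy pushdown transducer over '(', ')', and letters: emit each
    letter paired with its nesting depth. The explicit stack is what
    lifts a finite-state transducer to a pushdown transducer."""
    stack = []
    out = []
    for ch in s:
        if ch == '(':
            stack.append(ch)              # push: enter a nesting level
        elif ch == ')':
            if not stack:
                raise ValueError("unbalanced input")
            stack.pop()                   # pop: leave a nesting level
        else:
            out.append((ch, len(stack)))  # output symbol + current depth
    if stack:
        raise ValueError("unbalanced input")
    return out

assert depth_annotate("a(b(c)d)e") == [('a', 0), ('b', 1), ('c', 2), ('d', 1), ('e', 0)]
```

Parsing nested formats such as JSON falls in exactly this class, which is why a stack-based abstraction covers transformations that FSTs cannot express compactly.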
LuxIO: Intelligent Resource Provisioning and Auto-Configuration for Storage Services
Keith Bateman, N. Rajesh, Jaime Cernuda Garcia, Luke Logan, Jie Ye, Stephen Herbein, Anthony Kougkas, Xian-He Sun
DOI: 10.1109/HiPC56025.2022.00041
Published: 2022-12-01
Abstract: Storage in HPC is typically a single Remote and Static Storage (RSS) resource. However, applications demonstrate diverse I/O requirements that can be better served by a multi-storage approach. Current practice employs ephemeral storage systems running on either node-local or shared storage resources. Yet, the burden of provisioning and configuring intermediate storage falls solely on the users, while global job schedulers offer little to no support for custom deployments. This lack of support often leads to over- or under-provisioning of resources and poorly configured storage systems. To mitigate this, we present LuxIO, an intelligent storage resource provisioning and auto-configuration service. LuxIO constructs storage deployments configured to best match I/O requirements. LuxIO-tuned storage services show performance improvements of up to 2x across common applications and benchmarks, while introducing a minimal overhead of 93.40 ms on top of existing job scheduling pipelines. LuxIO also improves resource utilization by up to 25% in select workflows.
Citations: 0

Energy Consumption Evaluation of Optane DC Persistent Memory for Indexing Data Structures
Manolis Katsaragakis, Christos Baloukas, Lazaros Papadopoulos, Verena Kantere, F. Catthoor, D. Soudris
DOI: 10.1109/HiPC56025.2022.00022
Published: 2022-12-01
Abstract: The Intel Optane DC Persistent Memory (DCPM) is an attractive novel technology for building storage systems for data-intensive HPC applications, as it provides lower cost per byte, low standby power, and larger capacities than DRAM, with comparable latency. This work provides an in-depth evaluation of the energy consumption of the Optane DCPM, using well-established indexes specifically designed to address the challenges and constraints of persistent memory. We study the energy efficiency of the Optane DCPM for several indexing data structures and for the LevelDB key-value store, under different types of YCSB workloads. By integrating an Optane DCPM into the memory system, energy consumption drops by 71.2% and throughput increases by 37.3% in the LevelDB experiments, compared to a typical SSD storage solution.
Citations: 1

HiBGT: High-Performance Bayesian Group Testing for COVID-19
Weicong Chen, C. Tatsuoka, Xiaoyi Lu
DOI: 10.1109/HiPC56025.2022.00033
Published: 2022-12-01
Abstract: The COVID-19 pandemic has necessitated disease surveillance using group testing. Novel Bayesian methods using lattice models were proposed, which offer substantial improvements in group testing efficiency by precisely quantifying uncertainty in diagnoses, acknowledging varying individual risk and dilution effects, and guiding optimally convergent sequential pooled test selections. Computationally, however, Bayesian group testing poses considerable challenges, as computational complexity grows exponentially with sample size. HPC and big data stacks are needed for assessing computational and statistical performance across fluctuating prevalence levels at large scales. Here, we study how to design and optimize critical computational components of Bayesian group testing, including lattice model representation, test selection algorithms, and statistical analysis schemes, in the context of parallel computing. To realize this, we propose a high-performance Bayesian group testing framework named HiBGT, based on Apache Spark, which systematically explores the design space of Bayesian group testing and provides comprehensive heuristics on how to achieve high-performance, highly scalable Bayesian group testing. We show that HiBGT can perform large-scale test selections (> 250 state iterations) and accelerate statistical analyses by up to 15.9x (up to 363x with minor trade-offs) through a varied selection of sophisticated parallel computing techniques, while achieving near-linear scalability using up to 924 CPU cores.
Citations: 1

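For readers unfamiliar with group testing, the core pooling idea can be illustrated with classic two-stage Dorfman testing, which is far simpler than the lattice-model Bayesian approach this paper scales, but shows why pooling saves tests at low prevalence:

```python
import random

def dorfman_tests(samples, pool_size):
    """Count tests used by classic Dorfman two-stage pooling:
    test each pool once; individually retest every member of a
    positive pool. `samples` is a list of booleans (True = infected)."""
    tests = 0
    for i in range(0, len(samples), pool_size):
        pool = samples[i:i + pool_size]
        tests += 1                  # one test for the whole pool
        if any(pool):               # positive pool: retest each member
            tests += len(pool)
    return tests

# At 2% prevalence, pooling uses far fewer than 1000 individual tests.
random.seed(42)
population = [random.random() < 0.02 for _ in range(1000)]
assert dorfman_tests(population, pool_size=10) < 1000
```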
A Real-time Flood Inundation Prediction on SX-Aurora TSUBASA
Yoichi Shimomura, A. Musa, Yoshihiko Sato, Atsuhiko Konja, Guoqing Cui, Rei Aoyagi, Keichi Takahashi, H. Takizawa
DOI: 10.1109/HiPC56025.2022.00035
Published: 2022-12-01
Abstract: Due to extreme weather, record-breaking heavy rainfalls frequently cause severe flood damage. Thus, there is a strong demand for predicting flood scales to mitigate damage. In this paper, we propose a real-time flood inundation prediction system on a shared HPC system. Although the Rainfall-Runoff Inundation (RRI) model has been developed for predicting large-scale flood inundation, its performance must be improved for real-time prediction. Since the RRI model is highly memory-bound, we port the RRI simulation code to the latest vector computing system, SX-Aurora TSUBASA (SX-AT), which provides high sustained memory bandwidth. We discuss performance optimization of the RRI code at the node level and MPI parallelization strategies. The RRI code also needs to output intermediate results at high frequency; thus, the code is split into file I/O operations and kernel computation, which are assigned to different kinds of processors using the heterogeneity of SX-AT. Furthermore, we discuss a resource demand estimation method that minimizes the amount of shared computing resources used for prediction in order to reduce the impact on other users sharing the system. In our evaluation, we demonstrate that SX-AT with only 32 cores can meet the real-time requirement of simulating 7 hours of flood inundation for the Tohoku region of Japan within 20 minutes. The results also demonstrate that the proposed method can adaptively adjust the computing resources used for real-time simulation, reducing resource usage by 75% compared with the worst-case scenario of conservative static resource allocation.
Citations: 1

A Portable Sparse Solver Framework for Large Matrices on Heterogeneous Architectures
F. Rabbi, C. Daley, Ümit V. Çatalyürek, H. Aktulga
DOI: 10.1109/HiPC56025.2022.00030
Published: 2022-12-01
Abstract: Programming applications on heterogeneous systems with hardware accelerators is challenging due to the disjoint address spaces between the host (CPU) and the device (GPU). Limited device memory further exacerbates the challenge, as most data-intensive applications will not fit in it. CUDA Unified Memory (UM) was introduced to mitigate such challenges. UM improves GPU programmability by supporting oversubscription, on-demand paging, and migration. However, when the working set of an application exceeds the device memory capacity, the resulting data movement can cause significant performance losses. We propose a tiling-based task-parallel framework, named DeepSparseGPU, to accelerate sparse eigensolvers on GPUs by minimizing data movement between the host and device. To this end, we tile all operations in the sparse solver and express the entire computation as a directed acyclic graph (DAG). We design and develop a memory manager (MM) to execute larger inputs that do not fit into GPU memory. The MM keeps track of data on the CPU and GPU and automatically moves data between them as needed. We use OpenMP target offload in our implementation to achieve portability beyond NVIDIA hardware. Performance evaluations show that DeepSparseGPU transfers 1.39x-2.18x less host-to-device (H2D) and device-to-host (D2H) data, while executing up to 2.93x faster than the UM-based baseline version.
Citations: 0

Performance analysis of GPU accelerated meshfree q-LSKUM solvers in Fortran, C, Python, and Julia
Nischay Ram Mamidi, D. Saxena, K. Prasun, Anil Nemili, Bharatkumar Sharma, S. Deshpande
DOI: 10.1109/HiPC56025.2022.00031
Published: 2022-12-01
Abstract: This paper presents a comprehensive analysis of the performance of Fortran, C, Python, and Julia based GPU-accelerated meshfree solvers for compressible flows. The CUDA programming model is used to develop the GPU codes. The meshfree solver is based on the least squares kinetic upwind method with entropy variables (q-LSKUM). To measure the performance of the baseline codes, benchmark calculations are performed. The codes are then profiled to investigate the differences in their performance. Analysing various performance metrics for the computationally expensive flux residual kernel helped identify bottlenecks in the codes, which are resolved through several optimisation techniques. Post optimisation, the performance metrics improve significantly, with the C GPU code exhibiting the best performance.
Citations: 1

Leveraging GPU Tensor Cores for Double Precision Euclidean Distance Calculations
Benoît Gallet, M. Gowanlock
DOI: 10.1109/HiPC56025.2022.00029
Published: 2022-09-22
Abstract: Tensor cores (TCs) are a type of Application-Specific Integrated Circuit (ASIC) and a recent addition to Graphics Processing Unit (GPU) architectures. As such, TCs are purposefully designed to greatly improve the performance of Matrix Multiply-Accumulate (MMA) operations. While TCs are heavily studied for machine learning and closely related fields, where their high efficiency is undeniable, MMA operations are not unique to these fields. More generally, any computation that can be expressed as MMA operations can leverage TCs and potentially benefit from their higher computational throughput compared to other general-purpose cores, such as CUDA cores on Nvidia GPUs. In this paper, we propose the first double precision (FP64) Euclidean distance calculation algorithm expressed as MMA operations to leverage TCs on Nvidia GPUs, rather than the more commonly used CUDA cores. To show that the Euclidean distance can be accelerated in a real-world application, we evaluate our proposed TC algorithm on the distance similarity self-join problem, as the most computationally intensive part of that algorithm consists of computing distances in a multi-dimensional space. We find that the performance gain of the tensor core algorithm over the CUDA core algorithm depends weakly on dataset size and distribution, but strongly on data dimensionality. Overall, TCs are a compelling alternative to CUDA cores, particularly when the data dimensionality is low (≤ 4), as we achieve an average speedup of 1.28x and up to 2.23x over a state-of-the-art GPU distance similarity self-join algorithm. Furthermore, because this paper is among the first to explore the use of TCs for FP64 general-purpose computation, future research is promising.
Citations: 1

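The MMA formulation in this abstract relies on a standard identity: all pairwise squared Euclidean distances decompose into squared norms plus a single matrix product, and that matrix product is the piece a GEMM unit such as a tensor core can execute. A minimal NumPy sketch of the identity (an illustration of the decomposition only, not the paper's FP64 tensor-core implementation):

```python
import numpy as np

def pairwise_sq_dist(X, Y):
    """All pairwise squared Euclidean distances via one matrix multiply:
        ||x - y||^2 = ||x||^2 + ||y||^2 - 2 * (x . y)
    The X @ Y.T term is the multiply-accumulate work that a GPU's
    tensor cores (or any GEMM unit) can take over."""
    x_sq = np.sum(X * X, axis=1)[:, None]   # (n, 1) squared row norms of X
    y_sq = np.sum(Y * Y, axis=1)[None, :]   # (1, m) squared row norms of Y
    cross = X @ Y.T                         # (n, m) inner products -> GEMM/MMA
    # Clamp tiny negative values caused by floating-point cancellation.
    return np.maximum(x_sq + y_sq - 2.0 * cross, 0.0)

# Usage: check against the naive double loop on small random data.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
Y = rng.standard_normal((4, 3))
naive = np.array([[np.sum((x - y) ** 2) for y in Y] for x in X])
assert np.allclose(pairwise_sq_dist(X, Y), naive)
```

The clamp at the end hints at why doing this in FP64 (as the paper does) matters: the subtraction of two large, nearly equal terms is prone to cancellation in low precision.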