2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW): Latest Publications

SPbLA: The Library of GPGPU-Powered Sparse Boolean Linear Algebra Operations
2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) Pub Date : 2021-06-01 DOI: 10.1109/IPDPSW52791.2021.00049
Egor Orachev, Maria Karpenko, Artem Khoroshev, S. Grigorev
{"title":"SPbLA: The Library of GPGPU-Powered Sparse Boolean Linear Algebra Operations","authors":"Egor Orachev, Maria Karpenko, Artem Khoroshev, S. Grigorev","doi":"10.1109/IPDPSW52791.2021.00049","DOIUrl":"https://doi.org/10.1109/IPDPSW52791.2021.00049","url":null,"abstract":"Sparse matrices are widely applicable in data analysis while the theory of matrix processing is well-established. There are a wide range of algorithms for basic operations such as matrix-matrix and matrix-vector multiplication, factorization, etc. To facilitate data analysis, GraphBLAS API provides a set of building blocks and allows for reducing algorithms to sparse linear algebra operations. While GPGPU utilization for high-performance linear algebra is common, the high complexity of GPGPU programming makes the implementation of GraphBLAS API on GPGPU challenging. In this work, we present a GPGPU library of sparse operations for an important case — Boolean algebra. The library is based on modern algorithms for sparse matrix processing. We provide a Python wrapper for the library to simplify its use in applied solutions. Our evaluation shows that operations specialized for Boolean matrices can be up to 5 times faster and consume up to 4 times less memory than generic, not the Boolean optimized, operations from modern libraries. We hope that our results help to move the development of a GPGPU version of GraphBLAS API forward.","PeriodicalId":170832,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133207111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
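The core primitive such a library accelerates is matrix-matrix multiplication over the Boolean (OR-AND) semiring. As a rough illustration only (this is not SPbLA's API, and a real GPGPU kernel would operate on compressed sparse formats), a minimal CPU-side Python sketch of the operation:

from collections import defaultdict

def bool_spgemm(a_rows, b_rows):
    """Boolean (OR-AND) product of sparse matrices stored as {row: set of columns with True}."""
    c_rows = defaultdict(set)
    for i, a_cols in a_rows.items():
        for k in a_cols:                          # nonzeros in row i of A
            c_rows[i] |= b_rows.get(k, set())     # OR-accumulate row k of B into row i of C
    return dict(c_rows)

A = {0: {1, 2}, 1: {2}}                           # tiny adjacency-style Boolean matrices
B = {1: {3}, 2: {0, 3}}
print(bool_spgemm(A, B))                          # {0: {0, 3}, 1: {0, 3}}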
Fast HBM Access with FPGAs: Analysis, Architectures, and Applications
2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) Pub Date : 2021-06-01 DOI: 10.1109/IPDPSW52791.2021.00030
Philipp Holzinger, Daniel Reiser, Tobias Hahn, M. Reichenbach
{"title":"Fast HBM Access with FPGAs: Analysis, Architectures, and Applications","authors":"Philipp Holzinger, Daniel Reiser, Tobias Hahn, M. Reichenbach","doi":"10.1109/IPDPSW52791.2021.00030","DOIUrl":"https://doi.org/10.1109/IPDPSW52791.2021.00030","url":null,"abstract":"Over the past few decades, the gap between rapidly increasing computational power and almost stagnating memory bandwidth has steadily worsened. Recently, 3D die-stacking in form of High Bandwidth Memory (HBM) enabled the first major jump in external memory throughput in years. In contrast to traditional DRAM it compensates its lower clock frequency with wide busses and a high number of separate channels. However, this also requires data to be spread out over all channels to reach the full throughput. Previous research relied on manual HBM data partitioning schemes and handled each channel as an entirely independent entity. This paper in contrast also considers scalable hardware adaptions and approaches system design holistically. In this process we first analyze the problem with real world measurements on a Xilinx HBM FPGA. Then we derive several architectural changes to improve throughput and ease accelerator design. Finally, a Roofline based model to more accurately estimate the expected performance in advance is presented. With these measures we were able to increase the throughput by up to 3.78× with random and 40.6× with certain strided access patterns compared to Xilinx’ state-of-the-art switch fabric.","PeriodicalId":170832,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131821185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
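The Roofline idea referenced in the abstract bounds attainable performance by the lower of peak compute and memory bandwidth times arithmetic intensity. A minimal sketch of that classic bound (not the paper's refined HBM-specific model; the numbers below are hypothetical):

def roofline_gflops(peak_gflops, hbm_bandwidth_gbs, intensity_flops_per_byte):
    """Attainable performance: compute-bound or bandwidth-bound, whichever is lower."""
    return min(peak_gflops, hbm_bandwidth_gbs * intensity_flops_per_byte)

# Hypothetical figures: 1000 GFLOP/s peak, 460 GB/s aggregate HBM bandwidth.
# A kernel performing 0.5 FLOP per byte is bandwidth-bound at about 230 GFLOP/s.
print(roofline_gflops(1000.0, 460.0, 0.5))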
Time-Division Multiplexing for FPGA Considering CNN Model Switch Time
2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) Pub Date : 2021-06-01 DOI: 10.1109/IPDPSW52791.2021.00074
Tetsuro Nakamura, S. Saito, Kei Fujimoto, M. Kaneko, A. Shiraga
{"title":"Time-Division Multiplexing for FPGA Considering CNN Model Switch Time","authors":"Tetsuro Nakamura, S. Saito, Kei Fujimoto, M. Kaneko, A. Shiraga","doi":"10.1109/IPDPSW52791.2021.00074","DOIUrl":"https://doi.org/10.1109/IPDPSW52791.2021.00074","url":null,"abstract":"With the spread of real-time data analysis by artificial intelligence (Al), the use of accelerators in edge computing has been attracting attention due to their low power consumption and low latency. In this paper, we propose a system that further reduces the power consumption and cost by sharing an accelerator among multiple users while maintaining real-time performance. Four requirements are defined: high utilization of device, fair device usage among users, real-time performance, and resource abstraction. Targeting a use case of Al inference, we propose a system that can share a field-programmable gate array (FPGA) among multiple users while satisfying the requirements by switching convolutional neural network (CNN) models stored in the device memory on the FPGA. The system enables a time-division multiplexed accelerator with real-time performance and high device utilization by using a scheduling algorithm that considers the switch time of the CNN models. User fairness is also achieved by adopting ageing techniques in the scheduling algorithm, in which priority increases in accordance with job waiting time. In addition, a thread manager has been integrated to the system that absorbs the difference among CNN models to abstract underlying hardware resources. The system was implemented on an FPGA device and evaluated to be 24-94 % fairer and 31-33 % more resource efficient than the conventional system using first-come first-served and round robin algorithms.","PeriodicalId":170832,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)","volume":"15 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114018832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
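To make the scheduling idea concrete, here is a toy sketch (not the authors' implementation) of aging-based job selection that also charges a penalty when the chosen job needs a different CNN model than the one currently loaded on the FPGA; the weights and costs are invented for illustration:

def pick_next(jobs, loaded_model, now, switch_cost=0.2, aging_weight=1.0):
    """jobs: list of dicts with 'model' and 'arrival' (seconds). Returns the index of the chosen job."""
    def score(i):
        job = jobs[i]
        waited = now - job["arrival"]                         # aging: priority grows with waiting time
        switch = switch_cost if job["model"] != loaded_model else 0.0
        return aging_weight * waited - switch                 # prefer long-waiting, cheap-to-switch jobs
    return max(range(len(jobs)), key=score)

jobs = [{"model": "resnet", "arrival": 0.0},
        {"model": "yolo", "arrival": 1.0}]
print(pick_next(jobs, loaded_model="yolo", now=2.0))          # picks job 0: it has waited longer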
Improving the MPI-IO Performance of Applications with Genetic Algorithm based Auto-tuning
2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) Pub Date : 2021-06-01 DOI: 10.1109/IPDPSW52791.2021.00118
Ayse Bagbaba, Xuan Wang
{"title":"Improving the MPI-IO Performance of Applications with Genetic Algorithm based Auto-tuning","authors":"Ayse Bagbaba, Xuan Wang","doi":"10.1109/IPDPSW52791.2021.00118","DOIUrl":"https://doi.org/10.1109/IPDPSW52791.2021.00118","url":null,"abstract":"Parallel I/O is an essential part of scientific applications running on high-performance computing systems. Understanding an application’s parallel I/O behavior and identifying sources of performance bottlenecks require a multi-layer view of the I/O. Typical parallel I/O stack layers offer many tunable parameters that can achieve the best possible I/O performance. However, scientific users do often not have the time nor the experience for investigating the proper combination of these parameters for each application use-case. Auto-tuning can help users by automatically tuning I/O parameters at various layers transparently. In auto-tuning, using naïve strategy, running an application by trying all possible combinations of tunable parameters for all layers of the I/O stack to find the best settings is an exhaustive search through the huge parameter space. This strategy is infeasible because of the long execution times of trial runs. In this paper, we propose a genetic algorithm-based parallel I/O auto-tuning approach that can hide the complexity of the I/O stack from users and auto-tune a set of parameter values for an application on a given system to improve the I/O performance. In particular, our approach tests a set of parameters and then, modifies the combination of these parameters for further testing based on the I/O performance. We have validated our model using two I/O benchmarks, namely IOR and MPI-Tile-IO. We achieved an increase in I/O bandwidth of up to 7.74×over the default parameters for IOR and 5.59×over the default parameters for MPI-Tile-IO.","PeriodicalId":170832,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114420701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
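As an illustration of the general approach (not the paper's tool), the sketch below runs a small genetic algorithm over a hypothetical MPI-IO parameter space; the parameter names and the fitness stub are invented, whereas in practice fitness would be the I/O bandwidth measured in a trial run:

import random

SPACE = {                                  # hypothetical tunable parameters
    "stripe_count": [4, 8, 16, 32],
    "stripe_size_mb": [1, 4, 16, 64],
    "cb_nodes": [1, 2, 4, 8],
}

def random_config():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(cfg):                          # stand-in for a benchmarked bandwidth of a trial run
    return cfg["stripe_count"] * cfg["stripe_size_mb"] / cfg["cb_nodes"]

def evolve(generations=10, pop_size=8, mutation_rate=0.2):
    pop = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                        # keep the best half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = {k: random.choice([a[k], b[k]]) for k in SPACE}   # crossover
            if random.random() < mutation_rate:                       # mutation
                k = random.choice(list(SPACE))
                child[k] = random.choice(SPACE[k])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())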
ScaDL 2021 Invited Speaker-3: AI for Social Impact: Results from multiagent reasoning and learning in the real world
2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) Pub Date : 2021-06-01 DOI: 10.1109/ipdpsw52791.2021.00138
{"title":"ScaDL 2021 Invited Speaker-3: AI for Social Impact: Results from multiagent reasoning and learning in the real world","authors":"","doi":"10.1109/ipdpsw52791.2021.00138","DOIUrl":"https://doi.org/10.1109/ipdpsw52791.2021.00138","url":null,"abstract":"","PeriodicalId":170832,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114476465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Characters Recognition based on CNN-RNN architecture and Metaheuristic
2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) Pub Date : 2021-06-01 DOI: 10.1109/IPDPSW52791.2021.00082
F. Keddous, H. Nguyen, A. Nakib
{"title":"Characters Recognition based on CNN-RNN architecture and Metaheuristic","authors":"F. Keddous, H. Nguyen, A. Nakib","doi":"10.1109/IPDPSW52791.2021.00082","DOIUrl":"https://doi.org/10.1109/IPDPSW52791.2021.00082","url":null,"abstract":"Convolutional neural networks (CNN) are composed of multiple convolutional layers and a fully connected layer(s) (FC). In most of CNN models, the memory needed only for the weights of FC layers exceeds the total required by the rest of the layers. Consequently, for decreasing memory size needed and the acceleration of the inference, it obvious to focus on the an FC layer optimization method. In this paper, we propose a hybrid neural network architecture to perform image classification that combines CNN and the recurrent neural networks (RNN) to deal with the presented problem. To do so, a pretrained CNN model is used for features extraction (without FC Layers), then plugged into a parallel architecture of a RNN. In this work the Hopfield is considered. The obtained results on the Noisy MNIST Dataset have exceeded the state of the art for this problem.","PeriodicalId":170832,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115401813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
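For readers unfamiliar with the RNN used here, the sketch below shows classical Hopfield storage and recall for bipolar patterns; in the paper's setting, binarized CNN feature vectors would play the role of the stored patterns. This is a generic textbook sketch, not the authors' parallel architecture:

import numpy as np

def train_hopfield(patterns):
    """Hebbian weights for bipolar (+1/-1) patterns; no self-connections."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, probe, steps=5):
    s = probe.copy()
    for _ in range(steps):
        s = np.where(w @ s >= 0, 1, -1)           # synchronous sign update
    return s

p1 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
p2 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
w = train_hopfield(np.stack([p1, p2]))
noisy = p1.copy()
noisy[0] = -1                                     # flip one bit of the first pattern
print(recall(w, noisy))                           # recovers the first stored pattern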
Message from the ParSocial 2021 Workshop Co-Chairs
2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) Pub Date : 2021-06-01 DOI: 10.1109/ipdpsw52791.2021.00154
{"title":"Message from the ParSocial 2021 Workshop Co-Chairs","authors":"","doi":"10.1109/ipdpsw52791.2021.00154","DOIUrl":"https://doi.org/10.1109/ipdpsw52791.2021.00154","url":null,"abstract":"","PeriodicalId":170832,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121478120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving Workload Balance of a Marine CSEM Inversion Application
2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) Pub Date : 2021-06-01 DOI: 10.1109/IPDPSW52791.2021.00107
Jessica Imlau Dagostini, Henrique Corrêa Pereira da Silva, V. G. Pinto, Roberto M. Velho, E. S. Gastal, L. Schnorr
{"title":"Improving Workload Balance of a Marine CSEM Inversion Application","authors":"Jessica Imlau Dagostini, Henrique Corrêa Pereira da Silva, V. G. Pinto, Roberto M. Velho, E. S. Gastal, L. Schnorr","doi":"10.1109/IPDPSW52791.2021.00107","DOIUrl":"https://doi.org/10.1109/IPDPSW52791.2021.00107","url":null,"abstract":"The marine Controlled-Source Electromagnetic Method (mCSEM) complements traditional seismic surveys for oil and gas exploration. A ship with a robust electromagnetic transmitter close to the seabed moves over an area previously equipped with fixed electromagnetic receivers on the seafloor. The collected data is then subject to data inversion techniques to compute the subsurface resistivity volume under the seabed. We characterize the workload imbalance as it originates from the scheduling policy and the mapping between measurement data and the resistivity model. We then propose a workload balancing improvement by applying the Sorted-Greedy scheduling policy in this data inversion application. Using realistic datasets, we demonstrate that the policy reduces the execution time from 32.1% up to 40.3% without affecting the results’ numerical accuracy. Our changes also improved the application scalability, enabling the execution with a larger number of processes achieving additional gains from 53.1% up to 60.9%.","PeriodicalId":170832,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)","volume":"208 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122770516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
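The Sorted-Greedy policy named in the abstract is the classic longest-processing-time-first heuristic: sort tasks by decreasing estimated cost and always hand the next one to the least-loaded process. A minimal sketch with made-up task costs:

def sorted_greedy(task_costs, num_workers):
    """Assign tasks (indexed by position in task_costs) to workers, heaviest first."""
    loads = [0.0] * num_workers
    assignment = [[] for _ in range(num_workers)]
    for task, cost in sorted(enumerate(task_costs), key=lambda t: t[1], reverse=True):
        w = min(range(num_workers), key=loads.__getitem__)    # currently least-loaded worker
        assignment[w].append(task)
        loads[w] += cost
    return assignment, loads

tasks = [7.0, 3.5, 9.2, 1.1, 4.8, 6.3]                        # illustrative per-task costs
print(sorted_greedy(tasks, 3))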
Plaster: an Embedded FPGA-based Cluster Orchestrator for Accelerated Distributed Algorithms
2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) Pub Date : 2021-06-01 DOI: 10.1109/IPDPSW52791.2021.00023
Lorenzo Farinelli, Daniele Valentino De Vincenti, Andrea Damiani, Luca Stornaiuolo, Rolando Brondolin, M. Santambrogio, D. Sciuto
{"title":"Plaster: an Embedded FPGA-based Cluster Orchestrator for Accelerated Distributed Algorithms","authors":"Lorenzo Farinelli, Daniele Valentino De Vincenti, Andrea Damiani, Luca Stornaiuolo, Rolando Brondolin, M. Santambrogio, D. Sciuto","doi":"10.1109/IPDPSW52791.2021.00023","DOIUrl":"https://doi.org/10.1109/IPDPSW52791.2021.00023","url":null,"abstract":"The increasing use of real-time data-intensive applications and the growing interest in Heterogeneous Architectures have led to the need for increasingly complex embedded computing systems. An example of this is the research carried out by both the scientific community and companies toward embedded multi-FPGA systems for the implementation of the inference phase of Convolutional Neural Networks.In this paper, we focus on optimizing the management system of these embedded FPGA-based distributed systems. We extend the state-of-the-art FARD framework to data-intensive applications in an embedded scenario. Our orchestration and management infrastructure benefits from compiled language and is accessible to end-users by the means of Python APIs, which provides a simple way to interact with the cluster and design apps to run on the embedded nodes. The proposed prototype system consists of a PYNQ-based cluster of multiple FPGAs and has been evaluated by running an FPGA-based You Only Look Once (YOLO) image classification algorithm.","PeriodicalId":170832,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128024789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
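As a purely hypothetical illustration of what an end-user-facing Python orchestration API for such an embedded FPGA cluster might look like (none of these names come from Plaster or FARD; the Cluster class is a local stub so the example is self-contained):

class Cluster:
    """Hypothetical stand-in for a cluster-orchestration client; not Plaster's real API."""
    def __init__(self, nodes):
        self.nodes = nodes
    def deploy(self, app_name, bitstream, node):
        print(f"deploying {app_name} ({bitstream}) on {node}")
    def run(self, app_name, inputs):
        print(f"running {app_name} on {len(inputs)} inputs across {len(self.nodes)} nodes")

cluster = Cluster(nodes=["pynq-0", "pynq-1", "pynq-2"])
for node in cluster.nodes:                         # replicate the accelerator on every node
    cluster.deploy("yolo-classifier", "yolo.bit", node)
cluster.run("yolo-classifier", inputs=["img_%03d.jpg" % i for i in range(64)])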
A Streaming Accelerator for Heterogeneous CPU-FPGA Processing of Graph Applications
2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) Pub Date : 2021-06-01 DOI: 10.1109/IPDPSW52791.2021.00014
Francis O'Brien, Matthew Agostini, T. Abdelrahman
{"title":"A Streaming Accelerator for Heterogeneous CPU-FPGA Processing of Graph Applications","authors":"Francis O'Brien, Matthew Agostini, T. Abdelrahman","doi":"10.1109/IPDPSW52791.2021.00014","DOIUrl":"https://doi.org/10.1109/IPDPSW52791.2021.00014","url":null,"abstract":"We explore the heterogeneous acceleration of graph processing on a platform that tightly integrates an FPGA with a multicore CPU to share system memory in a cache-coherent manner. We design an accelerator for the scatter phase of scatter-gather vertex-centric iterative graph processing. The accelerator accesses graph data exclusively from system memory, sharing it at the cache line granularity with the CPU, thus enabling the concurrent use of both the accelerator and software threads. We implement and evaluate the accelerator on the second generation Intel Heterogeneous Architecture Research Platform (HARPv2). Our evaluation, using two key graph processing kernels and both synthetically-generated and real-world graphs, shows that: (1) our accelerator delivers a performance improvement of about 2.4X over a single CPU thread, (2) our concurrent use of software and hardware is efficient and delivers speedups over the use of just software threads or just the accelerator, and (3) heterogeneous hardware-software acceleration delivers high graph processing throughputs. These results demonstrate the viability and promise of combined CPU-FPGA processing in contrast to the traditional offload model that leaves the CPU idle during acceleration.","PeriodicalId":170832,"journal":{"name":"2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130396626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
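As context for the scatter phase the accelerator implements, the sketch below is a toy, single-threaded software analogue of one scatter-gather iteration with a simplified PageRank-style update; it is illustrative only and unrelated to the HARPv2 implementation:

def scatter(values, out_edges):
    """Push each vertex's value along its out-edges as (destination, contribution) pairs."""
    updates = []
    for src, dsts in out_edges.items():
        if dsts:
            share = values[src] / len(dsts)
            updates.extend((dst, share) for dst in dsts)
    return updates

def gather(num_vertices, updates, damping=0.85):
    """Combine incoming contributions at each destination vertex."""
    new_values = [(1.0 - damping) / num_vertices] * num_vertices
    for dst, contrib in updates:
        new_values[dst] += damping * contrib
    return new_values

out_edges = {0: [1, 2], 1: [2], 2: [0]}            # tiny example graph
values = [1 / 3] * 3
for _ in range(20):                                # iterate scatter + gather to convergence
    values = gather(3, scatter(values, out_edges))
print(values)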