IEEE Transactions on Parallel and Distributed Systems: Latest Articles

High Performance Householder QR Factorization on Emerging GPU Architectures Using Tensor Cores
IF 5.6, CAS Q2, Computer Science
IEEE Transactions on Parallel and Distributed Systems Pub Date : 2024-12-25 DOI: 10.1109/TPDS.2024.3522776
Yuhan Leng;Gaoyuan Zou;Hansheng Wang;Panruo Wu;Shaoshuai Zhang
{"title":"High Performance Householder QR Factorization on Emerging GPU Architectures Using Tensor Cores","authors":"Yuhan Leng;Gaoyuan Zou;Hansheng Wang;Panruo Wu;Shaoshuai Zhang","doi":"10.1109/TPDS.2024.3522776","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3522776","url":null,"abstract":"Since 2017, NVIDIA GPUs have been equipped with specialized units known as Tensor Cores, which demonstrate remarkable efficiency in processing matrix multiplications (GEMMs). Beyond GEMMs, researchers have explored the potential applications of Tensor Cores in matrix factorization, such as QR factorization. However, the inside GEMMs in QR factorization are typically tall and skinny. Compared to compute-bound square GEMMs, these tall and skinny GEMMs are memory bound, leading to suboptimal performance on Tensor Cores. To solve this problem, we indicate the recursive QR factorization can convert the tall and skinny GEMMs to relatively square and large GEMMs, resulting in better performance on Tensor Cores. Besides, we extend the FP16 Tensor-Cores-based QR factorization to accommodate FP32 and FP64 on FP16 and INT8 Tensor Cores, respectively. Additionally, to address the issue of orthogonality loss in the preceding Tensor Cores-based QR factorization, we transition from the Gram-Schmidt to the Householder algorithm while preserving high performance. According to our experimental evaluation conducted on NVIDIA's A100 and GeForce RTX 3090 GPU, the precision levels of FP64, FP32, and FP16 are up to 6.22x, 8.67x, and 4.03x faster, respectively, than the current state-of-the-art implementations.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 3","pages":"422-436"},"PeriodicalIF":5.6,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
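The central idea above, factoring recursively so that panel work becomes large, squarish GEMMs instead of tall-and-skinny ones, can be sketched in a few lines of NumPy. This is only an illustration of the recursive splitting; it is not the paper's Householder/compact-WY formulation or its Tensor Core kernels, and the block size and helper name are assumptions.

```python
import numpy as np

def recursive_qr(A, min_block=64):
    """Recursive blocked QR (sketch): factor the left half of the columns,
    update the right half with two large matrix products, then factor the
    right half. The dominant work becomes big GEMM-shaped products rather
    than tall-and-skinny panel operations."""
    m, n = A.shape
    if n <= min_block:
        # Base case: LAPACK Householder QR on a narrow panel.
        return np.linalg.qr(A, mode='reduced')
    k = n // 2
    Q1, R11 = recursive_qr(A[:, :k], min_block)        # left panel
    R12 = Q1.T @ A[:, k:]                               # large GEMM
    A22 = A[:, k:] - Q1 @ R12                           # large GEMM
    Q2, R22 = recursive_qr(A22, min_block)              # right panel
    Q = np.hstack([Q1, Q2])
    R = np.block([[R11, R12],
                  [np.zeros((n - k, k)), R22]])
    return Q, R

A = np.random.rand(4096, 256)
Q, R = recursive_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(256)))
```

In the sketch, the products Q1.T @ A[:, k:] and Q1 @ R12 have an inner dimension that grows with the panel width, which is exactly the shape that keeps matrix units busy.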
Integrated and Fungible Scheduling of Deep Learning Workloads Using Multi-Agent Reinforcement Learning
IF 5.6, CAS Q2, Computer Science
IEEE Transactions on Parallel and Distributed Systems Pub Date : 2024-12-25 DOI: 10.1109/TPDS.2024.3522333
Jialun Li;Danyang Xiao;Diying Yang;Xuan Mo;Weigang Wu
{"title":"Integrated and Fungible Scheduling of Deep Learning Workloads Using Multi-Agent Reinforcement Learning","authors":"Jialun Li;Danyang Xiao;Diying Yang;Xuan Mo;Weigang Wu","doi":"10.1109/TPDS.2024.3522333","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3522333","url":null,"abstract":"GPU clusters have been widely used to co-locate various deep learning (DL) workloads in a multi-tenant way. Although such resource sharing can significantly reduce training cost, resource contention and interference among co-located workloads make task scheduling very complex and challenging. To simplify the scheduling problem, existing algorithms usually divide the procedure of scheduling into two sub-tasks, i.e., task placement and resource allocation, and allocate resources according to pre-defined and fixed resource demands. However, such a paradigm significantly constrains the selection of potential scheduling solutions. In this article, we present MAIFS, a novel multi-agent reinforcement learning based scheduling algorithm that handles task placement and resource allocation integratedly, and allows fungible resource allocation based on resource sensitivity of DL workloads. The core of MAIFS lies in two mechanisms. The multi-agent attention mechanism is designed to learn and share inter-related resource state features observed from different agents, which enables agents to explore fungible resource allocation solutions. The dynamic coordination graph mechanism is designed for coordinating interactive task placement decisions of agents during integrated scheduling, so as to mitigate potential task conflicts. Simulated experiments using two large scale production DL workload traces and physical deployment experiments based on a Kubernetes based GPU cluster show that MAIFS can outperform state-of-the-art scheduling algorithms by up to 44% in terms of makespan and 46% in terms of job completion time (JCT).","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 3","pages":"391-406"},"PeriodicalIF":5.6,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
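To make the notion of "fungible" allocation concrete: instead of granting each job a fixed, pre-declared GPU count, the scheduler can choose any point on the job's resource-sensitivity curve. The toy below allocates GPUs greedily by marginal throughput gain; it is not the MAIFS algorithm (which learns placement and allocation jointly with multi-agent RL and attention), and the sensitivity curves are made-up numbers.

```python
def allocate_fungible(sensitivity, total_gpus):
    """sensitivity: {job: [throughput with 1 GPU, with 2 GPUs, ...]} (assumed data).
    Greedily hand out GPUs to the job with the largest marginal throughput gain."""
    alloc = {job: 0 for job in sensitivity}
    for _ in range(total_gpus):
        best_job, best_gain = None, 0.0
        for job, curve in sensitivity.items():
            k = alloc[job]
            if k < len(curve):
                gain = curve[k] - (curve[k - 1] if k > 0 else 0.0)
                if gain > best_gain:
                    best_job, best_gain = job, gain
        if best_job is None:
            break
        alloc[best_job] += 1
    return alloc

# Example: job B scales poorly beyond 2 GPUs, so the extra GPUs flow to job A.
curves = {"A": [100, 190, 270, 340], "B": [120, 150, 160, 165]}
print(allocate_fungible(curves, 4))   # -> {'A': 3, 'B': 1}
```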
ViTeGNN: Towards Versatile Inference of Temporal Graph Neural Networks on FPGA
IF 5.6, CAS Q2, Computer Science
IEEE Transactions on Parallel and Distributed Systems Pub Date : 2024-12-24 DOI: 10.1109/TPDS.2024.3521897
Hongkuan Zhou;Bingyi Zhang;Rajgopal Kannan;Carl Busart;Viktor K. Prasanna
{"title":"ViTeGNN: Towards Versatile Inference of Temporal Graph Neural Networks on FPGA","authors":"Hongkuan Zhou;Bingyi Zhang;Rajgopal Kannan;Carl Busart;Viktor K. Prasanna","doi":"10.1109/TPDS.2024.3521897","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3521897","url":null,"abstract":"Temporal Graph Neural Networks (TGNNs) are powerful models to capture temporal, structural, and contextual information on temporal graphs, outperforming other methods in many high-impact downstream tasks. However, achieving high-performance TGNN inference in production environments is challenging because TGNN models suffer from high computation complexity and intrinsic temporal data dependency that hinders data parallelism. In addition, real-world TGNN applications have different latency and throughput requirements. This work presents ViTeGNN, a versatile TGNN inference solution for memory-based TGNNs on FPGAs. ViTeGNN performs algorithm-model-architecture co-design to meet the latency and throughput requirements of real-world TGNN applications. Besides the vanilla inference mode ViTeGNN-bal that updates embeddings for nodes interacting with others, we propose ViTeGNN-lat and ViTeGNN-thpt, optimized for latency and throughput. Our model optimizations include a lightweight method to compute attention scores and a related temporal neighbor pruning strategy to reduce computation and memory accesses. These are holistically coupled with key hardware optimizations that leverage the FPGA hardware. We propose a novel hardware module to execute the complex neighbor update process efficiently. To ensure similar accuracy vis-á-vis the original model, the simplified models are trained using the knowledge distillation technique. We propose a unified hardware design that supports all of these three inference modes without FPGA reconfiguration. Enabled by our flexible hardware architecture, we further propose ViTeGNN-auto, which automatically selects the best inference mode at runtime based on latency and throughput requirements, guided by our accurate performance model. We evaluate the performance of the proposed hardware accelerator on five real-world datasets. ViTeGNN-bal reduces the computation complexity by an average of 62% and memory accesses by an average of 36% with only 0.0042 accuracy loss. Compared with state-of-the-art implementations on CPU and GPU, our FPGA implementation achieves <inline-formula><tex-math>$53.9/26.0/16.1times$</tex-math></inline-formula> speedup and <inline-formula><tex-math>$8.2/4.0/2.5times$</tex-math></inline-formula> speedup for ViTeGNN-lat/-bal/-thpt, respectively.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 3","pages":"502-519"},"PeriodicalIF":5.6,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
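A sketch of what temporal neighbor pruning can look like in practice: score each neighbor with a cheap proxy and keep only the top-k for the expensive aggregation step. The scoring rule below (feature dot product damped by recency) is an assumption for illustration, not ViTeGNN's lightweight attention method.

```python
import numpy as np

def prune_temporal_neighbors(node_feat, nbr_feats, nbr_times, now, k, decay=0.01):
    """Rank temporal neighbors by a cheap proxy score and keep only the top-k,
    so the expensive message aggregation touches fewer neighbors."""
    recency = np.exp(-decay * (now - np.asarray(nbr_times)))      # newer = larger
    scores = (np.asarray(nbr_feats) @ node_feat) * recency        # lightweight proxy
    keep = np.argsort(scores)[::-1][:k]                           # indices of top-k
    return keep

rng = np.random.default_rng(0)
node = rng.standard_normal(16)
nbrs = rng.standard_normal((40, 16))
times = rng.uniform(0, 100, size=40)
print(prune_temporal_neighbors(node, nbrs, times, now=100.0, k=8))
```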
Online Elastic Resource Provisioning With QoS Guarantee in Container-Based Cloud Computing
IF 5.6, CAS Q2, Computer Science
IEEE Transactions on Parallel and Distributed Systems Pub Date : 2024-12-24 DOI: 10.1109/TPDS.2024.3522085
Shuaibing Lu;Ran Yan;Jie Wu;Jackson Yang;Xinyu Deng;Shen Wu;Zhi Cai;Juan Fang
{"title":"Online Elastic Resource Provisioning With QoS Guarantee in Container-Based Cloud Computing","authors":"Shuaibing Lu;Ran Yan;Jie Wu;Jackson Yang;Xinyu Deng;Shen Wu;Zhi Cai;Juan Fang","doi":"10.1109/TPDS.2024.3522085","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3522085","url":null,"abstract":"In cloud data centers, the exponential growth of data places increasing demands on computing, storage, and network resources, especially in multi-tenant environments. While this growth is crucial for ensuring Quality of Service (QoS), it also introduces challenges such as fluctuating resource requirements and static container configurations, which can lead to resource underutilization and high energy consumption. This article addresses online resource provisioning and efficient scheduling for multi-tenant environments, aiming to minimize energy consumption while balancing elasticity and QoS requirements. To address this, we propose a novel optimization framework that reformulates the resource provisioning problem into a more manageable form. By reducing the original multi-constraint optimization to a container placement problem, we apply the interior-point barrier method to simplify the optimization, integrating constraints directly into the objective function for efficient computation. We also introduce elasticity as a key parameter to balance energy consumption with autonomous resource scaling, ensuring that resource consolidation does not compromise system flexibility. The proposed Energy-Efficient and Elastic Resource Provisioning (EEP) framework comprises three main modules: a distributed resource management module that employs vertical partitioning and dynamic leader election for adaptive resource allocation; a prediction module using <inline-formula><tex-math>$omega$</tex-math></inline-formula>-step prediction for accurate resource demand forecasting; and an elastic scheduling module that dynamically adjusts to tenant scaling needs, optimizing resource allocation and minimizing energy consumption. Extensive experiments across diverse cloud scenarios demonstrate that the EEP framework significantly improves energy efficiency and resource utilization compared to established baselines, supporting sustainable cloud management practices.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 3","pages":"361-376"},"PeriodicalIF":5.6,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
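The barrier reformulation mentioned above can be illustrated on a toy allocation problem: constraints such as a server capacity limit are folded into the objective as -mu * log(slack) terms, so a plain unconstrained optimizer can be applied while the iterates stay in the interior of the feasible region. The cost model, constants, and two-tenant setup below are assumptions, not the EEP formulation.

```python
import numpy as np

# Toy log-barrier problem:
#   minimize   energy(x) + qos_penalty(x)
#   subject to sum(x) <= CAP and x > 0,
# by adding -mu * log(slack) terms for every constraint.
CAP = 16.0                      # assumed server capacity (vCPUs)
e_cost = 1.0                    # assumed energy cost per allocated vCPU
lam = np.array([4.0, 9.0])      # assumed per-tenant QoS weights (latency ~ lam/x)

def barrier_objective(x, mu):
    energy = e_cost * x.sum()
    qos = (lam / x).sum()
    slack = CAP - x.sum()
    return energy + qos - mu * (np.log(slack) + np.log(x).sum())

def solve(mu=0.1, lr=0.01, iters=5000):
    x = np.array([1.0, 1.0])                      # strictly feasible start
    for _ in range(iters):
        slack = CAP - x.sum()
        grad = e_cost - lam / x**2 + mu / slack - mu / x
        x_new = x - lr * grad
        if x_new.min() > 0 and x_new.sum() < CAP: # stay in the interior
            x = x_new
    return x

x = solve()
# The unconstrained optimum is roughly sqrt(lam) vCPUs per tenant, i.e. about [2, 3].
print(np.round(x, 2), round(barrier_objective(x, mu=0.1), 3))
```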
CAT: Cellular Automata on Tensor Cores
IF 5.6, CAS Q2, Computer Science
IEEE Transactions on Parallel and Distributed Systems Pub Date : 2024-12-20 DOI: 10.1109/TPDS.2024.3520395
Cristóbal A. Navarro;Felipe A. Quezada;Enzo Meneses;Héctor Ferrada;Nancy Hitschfeld
{"title":"CAT: Cellular Automata on Tensor Cores","authors":"Cristóbal A. Navarro;Felipe A. Quezada;Enzo Meneses;Héctor Ferrada;Nancy Hitschfeld","doi":"10.1109/TPDS.2024.3520395","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3520395","url":null,"abstract":"Cellular automata (CA) are simulation models that can produce complex emergent behaviors from simple local rules. Although state-of-the-art GPU solutions are already fast due to their data-parallel nature, their performance can rapidly degrade in CA with a large neighborhood radius. With the inclusion of tensor cores across the entire GPU ecosystem, interest has grown in finding ways to leverage these fast units outside the field of artificial intelligence, which was their original purpose. In this work, we present CAT, a GPU tensor core approach that can accelerate CA in which the cell transition function acts on a weighted summation of its neighborhood. CAT is evaluated theoretically, using an extended PRAM cost model, as well as empirically using the Larger Than Life (LTL) family of CA as case studies. The results confirm that the cost model is accurate, showing that CAT exhibits constant time throughout the entire radius range \u0000<inline-formula><tex-math>$1 leq r leq 16$</tex-math></inline-formula>\u0000, and its theoretical speedups agree with the empirical results. At low radius \u0000<inline-formula><tex-math>$r=1,2$</tex-math></inline-formula>\u0000, CAT is competitive and is only surpassed by the fastest state-of-the-art GPU solution. Starting from \u0000<inline-formula><tex-math>$r=3$</tex-math></inline-formula>\u0000, CAT progressively outperforms all other approaches, reaching speedups of up to \u0000<inline-formula><tex-math>$101times$</tex-math></inline-formula>\u0000 over a GPU baseline and up to \u0000<inline-formula><tex-math>$sim !14times$</tex-math></inline-formula>\u0000 over the fastest state-of-the-art GPU approach. In terms of energy efficiency, CAT is competitive in the range \u0000<inline-formula><tex-math>$1 leq r leq 4$</tex-math></inline-formula>\u0000 and from \u0000<inline-formula><tex-math>$r geq 5$</tex-math></inline-formula>\u0000 it is the most energy efficient approach. As for performance scaling across GPU architectures, CAT shows a promising trend that, if continues for future generations, it would increase its performance at a higher rate than classical GPU solutions. A CPU version of CAT was also explored, using the recently introduced AMX instructions. Although its performance is still below GPU tensor cores, it is a promising approach as it can still outperform some GPU approaches at large radius. The results obtained in this work put CAT as an approach with great potential for scientists who need to study emerging phenomena in CA with a large neighborhood radius, both in the GPU and in the CPU.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 2","pages":"341-355"},"PeriodicalIF":5.6,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142938327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
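The class of automata CAT targets is easy to state in code: the next state of a cell depends only on a weighted sum over its radius-r neighborhood, which is the structure that maps onto matrix-multiply hardware. The sketch below runs a Larger-than-Life rule with a plain convolution on the CPU; the "Bugs"-style thresholds, the uniform weights, and the exclusion of the center cell are assumptions, while CAT's actual tensor-core mapping is described in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def ltl_step(grid, r, birth, survive):
    """One Larger-than-Life step: the transition depends only on a (here uniform)
    weighted sum over the (2r+1)x(2r+1) neighborhood. Toroidal boundaries."""
    kernel = np.ones((2 * r + 1, 2 * r + 1), dtype=np.int32)
    kernel[r, r] = 0                                   # exclude the cell itself
    n = convolve2d(grid, kernel, mode='same', boundary='wrap')
    born = (grid == 0) & (birth[0] <= n) & (n <= birth[1])
    keep = (grid == 1) & (survive[0] <= n) & (n <= survive[1])
    return (born | keep).astype(np.int32)

rng = np.random.default_rng(1)
g = (rng.random((256, 256)) < 0.35).astype(np.int32)
for _ in range(10):                                    # radius-5 rule
    g = ltl_step(g, r=5, birth=(34, 45), survive=(34, 58))
print(g.sum())
```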
AsyncFedGAN: An Efficient and Staleness-Aware Asynchronous Federated Learning Framework for Generative Adversarial Networks
IF 5.6, CAS Q2, Computer Science
IEEE Transactions on Parallel and Distributed Systems Pub Date : 2024-12-20 DOI: 10.1109/TPDS.2024.3521016
Daniel Manu;Abee Alazzwi;Jingjing Yao;Youzuo Lin;Xiang Sun
{"title":"AsyncFedGAN: An Efficient and Staleness-Aware Asynchronous Federated Learning Framework for Generative Adversarial Networks","authors":"Daniel Manu;Abee Alazzwi;Jingjing Yao;Youzuo Lin;Xiang Sun","doi":"10.1109/TPDS.2024.3521016","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3521016","url":null,"abstract":"Generative Adversarial Networks (GANs) are deep learning models that learn and generate new samples similar to existing ones. Traditionally, GANs are trained in centralized data centers, raising data privacy concerns due to the need for clients to upload their data. To address this, Federated Learning (FL) integrates with GANs, allowing collaborative training without sharing local data. However, this integration is complex because GANs involve two interdependent models—the generator and the discriminator—while FL typically handles a single model over distributed datasets. In this article, we propose a novel asynchronous FL framework for GANs, called AsyncFedGAN, designed to efficiently and distributively train both models tailored for molecule generation. AsyncFedGAN addresses the challenges of training interactive models, resolves the straggler issue in synchronous FL, reduces model staleness in asynchronous FL, and lowers client energy consumption. Our extensive simulations for molecular discovery show that AsyncFedGAN achieves convergence with proper settings, outperforms baseline methods, and balances model performance with client energy usage.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 3","pages":"553-569"},"PeriodicalIF":5.6,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143361038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
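A common building block in staleness-aware asynchronous FL is to down-weight a returning client's update by how many global versions it lags behind. The merge rule below illustrates that pattern for one of the two GAN models; the weighting function and constants are assumptions, not AsyncFedGAN's exact aggregation rule.

```python
import numpy as np

def staleness_weight(staleness, alpha0=0.6, decay=0.5):
    """Illustrative staleness discount (a common polynomial-decay choice)."""
    return alpha0 / (1.0 + staleness) ** decay

def async_merge(global_params, client_params, client_version, global_version):
    """Asynchronously fold one client's generator (or discriminator) update into
    the global model, down-weighting it by how many versions it lags behind."""
    s = global_version - client_version            # staleness in model versions
    a = staleness_weight(s)
    return [(1 - a) * g + a * c for g, c in zip(global_params, client_params)]

# Example: a client trained on version 7 reports back when the server is at 10.
g = [np.zeros(3)]
c = [np.ones(3)]
print(async_merge(g, c, client_version=7, global_version=10))  # ~0.3 * ones
```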
UMPIPE: Unequal Microbatches-Based Pipeline Parallelism for Deep Neural Network Training
IF 5.6, CAS Q2, Computer Science
IEEE Transactions on Parallel and Distributed Systems Pub Date : 2024-12-11 DOI: 10.1109/TPDS.2024.3515804
Guangyao Zhou;Wenhong Tian;Rajkumar Buyya;Kui Wu
{"title":"UMPIPE: Unequal Microbatches-Based Pipeline Parallelism for Deep Neural Network Training","authors":"Guangyao Zhou;Wenhong Tian;Rajkumar Buyya;Kui Wu","doi":"10.1109/TPDS.2024.3515804","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3515804","url":null,"abstract":"The increasing need for large-scale deep neural networks (DNN) has made parallel training an area of intensive focus. One effective method, microbatch-based pipeline parallelism (notably GPipe), accelerates parallel training in various architectures. However, existing parallel training architectures normally use equal data partitioning (EDP), where each layer's process maintains identical microbatch-sizes. EDP may hinder training speed because different processes often require varying optimal microbatch-sizes. To address this, we introduce UMPIPE, a novel framework for unequal microbatches-based pipeline parallelism. UMPIPE enables unequal data partitions (UEDP) across processes to optimize resource utilization. We develop a recurrence formula to calculate the time cost in UMPIPE by considering both computation and communication processes. To further enhance UMPIPE's efficiency, we propose the Dual-Chromosome Genetic Algorithm for UMPIPE (DGAP) that accounts for the independent time costs of forward and backward propagation. Furthermore, we present TiDGAP, a two-level improvement on DGAP. TiDGAP accelerates the process by simultaneously calculating the end time for multiple individuals and microbatches using matrix operations. Our extensive experiments validate the dual-chromosome strategy's optimization benefits and TiDGAP's acceleration capabilities. TiDGAP can achieve better training schemes than baselines, such as the local greedy algorithm and the global greedy-based dynamic programming. Compared to (GPipe, PipeDream), UMPIPE achieves increases in training speed: \u0000<inline-formula><tex-math>$(13.89,11.09)%$</tex-math></inline-formula>\u0000 for GPT1-14, \u0000<inline-formula><tex-math>$(17.11, 7.96)%$</tex-math></inline-formula>\u0000 for VGG16 and \u0000<inline-formula><tex-math>$geq (170,100)%$</tex-math></inline-formula>\u0000 for simulation networks.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 2","pages":"293-307"},"PeriodicalIF":5.6,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142890166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
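The recurrence the abstract refers to can be sketched as follows: the finish time of a microbatch on a stage is the later of "this stage finished its previous microbatch" and "all of the upstream data this microbatch needs has arrived", plus the stage's compute time. The cost models and candidate partitions below are assumptions; the exact UMPIPE formula, the backward pass, and the DGAP/TiDGAP search are in the paper.

```python
import itertools

def pipeline_end_times(comp, comm, sizes):
    """Finish-time recurrence for one pass of a pipeline whose stages may slice
    the same data into different microbatch sizes.
    comp[i](b): assumed compute-time model of stage i for a microbatch of b samples.
    comm[i](b): assumed time to send b samples' activations from stage i to i+1.
    sizes[i]:   microbatch sizes of stage i; all stages must cover the same total."""
    ends = [list(itertools.accumulate(s)) for s in sizes]   # cumulative sample counts
    F = [[0.0] * len(s) for s in sizes]                     # finish time per microbatch
    for i in range(len(sizes)):
        for j, b in enumerate(sizes[i]):
            ready = F[i][j - 1] if j > 0 else 0.0           # stage i runs sequentially
            if i > 0:
                # need every upstream microbatch containing samples < ends[i][j]
                k = next(t for t, e in enumerate(ends[i - 1]) if e >= ends[i][j])
                ready = max(ready, max(F[i - 1][t] + comm[i - 1](sizes[i - 1][t])
                                       for t in range(k + 1)))
            F[i][j] = ready + comp[i](b)
    return F[-1][-1]

# Three candidate partitions of 32 samples over two stages, scored by the
# recurrence (the paper's DGAP/TiDGAP search this space instead of enumerating):
comp = [lambda b: 0.05 + 0.01 * b, lambda b: 0.01 + 0.03 * b]   # assumed cost models
comm = [lambda b: 0.005 * b, lambda b: 0.0]
for cfg in ([[16, 16], [16, 16]], [[8, 8, 8, 8], [16, 16]], [[8, 8, 8, 8], [8, 8, 8, 8]]):
    print(cfg, round(pipeline_end_times(comp, comm, cfg), 3))   # ~1.27, 1.28, 1.17
```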
Fine-Grained QoS Control via Tightly-Coupled Bandwidth Monitoring and Regulation for FPGA-Based Heterogeneous SoCs
IF 5.6, CAS Q2, Computer Science
IEEE Transactions on Parallel and Distributed Systems Pub Date : 2024-12-09 DOI: 10.1109/TPDS.2024.3513416
Giacomo Valente;Gianluca Brilli;Tania Di Mascio;Alessandro Capotondi;Paolo Burgio;Paolo Valente;Andrea Marongiu
{"title":"Fine-Grained QoS Control via Tightly-Coupled Bandwidth Monitoring and Regulation for FPGA-Based Heterogeneous SoCs","authors":"Giacomo Valente;Gianluca Brilli;Tania Di Mascio;Alessandro Capotondi;Paolo Burgio;Paolo Valente;Andrea Marongiu","doi":"10.1109/TPDS.2024.3513416","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3513416","url":null,"abstract":"Commercial embedded systems increasingly rely on heterogeneous architectures that integrate general-purpose, multi-core processors, and various hardware accelerators on the same chip. This provides the high performance required by modern applications at a low cost and low power consumption, but at the same time poses new challenges. Hardware resource sharing at various levels, and in particular at the main memory controller level, results in slower execution time for the application tasks, ultimately making the system unpredictable from the point of view of timing. To enable the adoption of heterogeneous systems-on-chip (System on Chips (SoCs)) in the domain of timing-critical applications several hardware and software approaches have been proposed, bandwidth regulation based on monitoring and throttling being one of the most widely adopted. Existing solutions, however, are either too coarse-grained, limiting the control over computing engines activities, or strongly platform-dependent, addressing the problem only for specific SoCs. This article proposes an innovative approach that can accurately control main memory bandwidth usage in FPGA-based heterogeneous SoCs. In particular, it controls system bandwidth by connecting a runtime bandwidth regulation component to FPGA-based accelerators. Our solution offers dynamically configurable, fine-grained bandwidth regulation – to adapt to the varying requirements of the application over time – at a very low overhead. Furthermore, it is entirely platform-independent, capable of integration with any FPGA-based accelerator. Developed at the register-transfer level using a reference SoC platform, it is designed for easy compatibility with any FPGA-based SoC. Experimental results conducted on the Xilinx Zynq UltraScale+ platform demonstrate that our approach (i) is more than \u0000<inline-formula><tex-math>$100times$</tex-math></inline-formula>\u0000 faster than loosely-coupled, software controlled regulators; (ii) is capable of exploiting the system bandwidth 28.7% more efficiently than tightly-coupled hardware regulators (e.g., ARM CoreLink QoS-400, where available); (iii) enables task co-scheduling solutions not feasible with state-of-the-art bandwidth regulation methods.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 2","pages":"326-340"},"PeriodicalIF":5.6,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142938328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
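The regulation policy itself, replenish a budget of memory traffic every window and stall the accelerator once the budget is spent, is simple enough to model in software, even though the paper implements it as an RTL module sitting next to each accelerator. The class below is such a software model; the byte granularity and window length are illustrative choices, not the paper's parameters.

```python
class BandwidthRegulator:
    """Software model of a per-window transaction budget (the paper's module is
    RTL on the FPGA; names and granularity here are illustrative)."""
    def __init__(self, bytes_per_window, window_ns):
        self.budget_max = bytes_per_window
        self.window_ns = window_ns
        self.budget = bytes_per_window
        self.window_start = 0

    def request(self, now_ns, nbytes):
        """Return True if the accelerator may issue this burst, False to stall it."""
        if now_ns - self.window_start >= self.window_ns:   # new regulation window
            self.window_start = now_ns - (now_ns - self.window_start) % self.window_ns
            self.budget = self.budget_max
        if nbytes <= self.budget:
            self.budget -= nbytes
            return True
        return False

# Cap one accelerator at about 1.6 GB/s: 1600 bytes per 1000 ns window.
reg = BandwidthRegulator(bytes_per_window=1600, window_ns=1000)
granted = sum(reg.request(t * 10, 64) for t in range(200))   # 64-byte bursts every 10 ns
print(granted, "of 200 bursts granted in the first ~2 us")
```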
TOP: Task-Based Operator Parallelism for Asynchronous Deep Learning Inference on GPU
IF 5.6, CAS Q2, Computer Science
IEEE Transactions on Parallel and Distributed Systems Pub Date : 2024-12-05 DOI: 10.1109/TPDS.2024.3511543
Changyao Lin;Zhenming Chen;Ziyang Zhang;Jie Liu
{"title":"TOP: Task-Based Operator Parallelism for Asynchronous Deep Learning Inference on GPU","authors":"Changyao Lin;Zhenming Chen;Ziyang Zhang;Jie Liu","doi":"10.1109/TPDS.2024.3511543","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3511543","url":null,"abstract":"Current deep learning compilers have made significant strides in optimizing computation graphs for single- and multi-model scenarios. However, they lack specific optimizations for asynchronous multi-task inference systems. In such systems, tasks arrive dynamically, leading to diverse inference progress for each model. This renders traditional optimization strategies based solely on the original computation graph suboptimal or even invalid. Furthermore, existing operator scheduling methods do not account for parallel task pipelines involving the same model. Task pipelines present additional opportunities for optimization. Therefore, we propose Task-based Operator Parallelism (TOP). TOP incorporates an understanding of the impact of task arrival patterns on the inference progress of each model. It leverages the multi-agent reinforcement learning algorithm MADDPG to cooperatively optimize the task launcher and model scheduler, generating an optimal pair of dequeue frequency and computation graph. The objective of TOP is to enhance resource utilization, increase throughput, and allocate resources judiciously to prevent task backlog. To expedite the optimization process in TOP, we introduce a novel stage partition method using the GNN-based Policy Gradient (GPG) algorithm. Through extensive experiments on various devices, we demonstrate the efficacy of TOP. It outperforms the state-of-the-art in operator scheduling for both single- and multi-model task processing scenarios. Benefiting from TOP, we can significantly enhance the throughput of a single model by increasing its concurrency or batch size, thereby achieving self-acceleration.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 2","pages":"266-281"},"PeriodicalIF":5.6,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142890355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
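One of the two knobs TOP tunes, the launcher's dequeue frequency, trades latency against how much work each launch batches together. The simulation below is a toy model of that trade-off under assumed Poisson arrivals and a linear batch-cost model; it is not TOP's learned policy or its graph-level scheduling.

```python
import random

def simulate(dequeue_hz, arrival_hz=400, horizon_s=2.0, fixed_ms=3.0, per_item_ms=0.4):
    """Toy model of the dequeue-frequency knob: the launcher wakes every
    1/dequeue_hz seconds and runs everything queued as one batch on a serial
    executor whose cost is fixed_ms + per_item_ms * batch_size (assumed model)."""
    random.seed(0)
    arrivals, t = [], 0.0
    while t < horizon_s:
        t += random.expovariate(arrival_hz)
        arrivals.append(t)
    tick = 1.0 / dequeue_hz
    t, free_at, busy, idx, lats = tick, 0.0, 0.0, 0, []
    while idx < len(arrivals):
        start = max(t, free_at)                       # wait for the executor if busy
        batch = [a for a in arrivals[idx:] if a <= start]
        if batch:
            cost = (fixed_ms + per_item_ms * len(batch)) / 1000.0
            free_at, busy = start + cost, busy + cost
            lats += [start + cost - a for a in batch] # queueing delay + batch time
            idx += len(batch)
        t += tick
    return 1000.0 * sum(lats) / len(lats), busy / free_at

for hz in (20, 100, 500):
    lat, util = simulate(dequeue_hz=hz)
    print(f"dequeue at {hz:3d} Hz: mean latency {lat:5.1f} ms, executor busy {util:4.0%}")
```

In this toy, a low dequeue frequency forms large batches and leaves the executor mostly idle, while a high frequency minimizes queueing delay but keeps the executor busy paying per-launch overhead; TOP searches this space jointly with the computation graph instead of fixing it by hand.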
An Efficient GPU Algorithm for Lattice Boltzmann Method on Sparse Complex Geometries
IF 5.6, CAS Q2, Computer Science
IEEE Transactions on Parallel and Distributed Systems Pub Date : 2024-12-04 DOI: 10.1109/TPDS.2024.3510810
Zhangrong Qin;Xusheng Lu;Long Lv;Zhongxiang Tang;Binghai Wen
{"title":"An Efficient GPU Algorithm for Lattice Boltzmann Method on Sparse Complex Geometries","authors":"Zhangrong Qin;Xusheng Lu;Long Lv;Zhongxiang Tang;Binghai Wen","doi":"10.1109/TPDS.2024.3510810","DOIUrl":"https://doi.org/10.1109/TPDS.2024.3510810","url":null,"abstract":"Many fluid flow problems, such as the porous media, arterial blood flow and tissue fluid, contain sparse complex geometries. Although the lattice Boltzmann method is good at dealing with the complex boundaries, these sparse complex geometries cause the low computational performance and high memory consumption when the graphics processing unit (GPU) is used to accelerate the numerical computation. These problems would be addressed by compact memory layout, sophisticated memory access and enhanced thread utilization. This paper proposes a GPU-based algorithm to improve the lattice Boltzmann simulations with sparse complex geometries. An access pattern for a single set of distribution functions together with a semi-direct addressing is adopted to reduce memory consumption, while a collected structure of arrays is employed to enhance memory access efficiency. Furthermore, an address index array and a node classification coding scheme are employed to improve the GPU thread utilization ratio and reduce the GPU global memory access, respectively. The accuracy and mesh-independence has been verified by the numerical simulations of Poiseuille flow and porous media flow with face-centered filled spheres. The present algorithm has a significantly lower memory consumption than those based on direct or indirect addressing schemes. It improves the computational performance by several times compared to the other algorithms on the common GPU hardware.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 2","pages":"239-252"},"PeriodicalIF":5.6,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142890357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
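The addressing scheme described above can be illustrated compactly: a dense index map turns lattice coordinates into positions in arrays that store only fluid nodes, so the distribution functions occupy memory proportional to the fluid fraction rather than the bounding box. The NumPy sketch below shows that layout with a structure-of-arrays distribution array and a precomputed neighbor table; the D2Q9 setup and the placeholder treatment of solid neighbors are simplifications, not the paper's GPU kernels or boundary scheme.

```python
import numpy as np

# D2Q9 lattice velocities (standard). The index map below is a compact
# illustration of semi-direct addressing for sparse geometries.
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def build_sparse_lattice(solid_mask):
    ny, nx = solid_mask.shape
    index_map = -np.ones((ny, nx), dtype=np.int64)        # -1 marks solid nodes
    fluid_y, fluid_x = np.nonzero(~solid_mask)
    index_map[fluid_y, fluid_x] = np.arange(fluid_y.size) # compact fluid IDs
    # Precomputed neighbor table: for each fluid node and direction, the compact
    # ID of the upstream node (or -1 where the upstream site is solid).
    neighbors = np.empty((len(C), fluid_y.size), dtype=np.int64)
    for q, (cx, cy) in enumerate(C):
        neighbors[q] = index_map[(fluid_y - cy) % ny, (fluid_x - cx) % nx]
    return index_map, neighbors

def stream(f, neighbors):
    """Pull-style streaming on the compact structure-of-arrays layout
    (f has shape [Q, n_fluid]); where the upstream node is solid, keep the old
    value as a stand-in for a proper bounce-back rule."""
    f_new = np.empty_like(f)
    for q in range(f.shape[0]):
        src = neighbors[q]
        f_new[q] = np.where(src >= 0, f[q][np.maximum(src, 0)], f[q])
    return f_new

# Toy geometry: a 64x64 box with a solid disk in the middle.
yy, xx = np.mgrid[0:64, 0:64]
solid = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
index_map, neighbors = build_sparse_lattice(solid)
f = np.ones((len(C), int((~solid).sum()))) / len(C)
f = stream(f, neighbors)
print(index_map.size, "lattice sites,", f.shape[1], "stored fluid nodes")
```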