{"title":"FedCSpc: A Cross-Silo Federated Learning System With Error-Bounded Lossy Parameter Compression","authors":"Zhaorui Zhang;Sheng Di;Kai Zhao;Sian Jin;Dingwen Tao;Zhuoran Ji;Benben Liu;Khalid Ayed Alharthi;Jiannong Cao;Franck Cappello","doi":"10.1109/TPDS.2025.3564736","DOIUrl":"https://doi.org/10.1109/TPDS.2025.3564736","url":null,"abstract":"Cross-silo federated learning is widely used for scaling deep neural network (DNN) training over data silos in different locations worldwide while guaranteeing data privacy. Communication has been identified as the main bottleneck when training large-scale models, due to the transmission of large-volume model parameters and gradients across public networks with limited bandwidth. Most previous works focus on gradient compression, while little work attempts to compress parameters, which cannot be ignored and significantly affect communication performance during training. To bridge this gap, we propose <italic>FedCSpc</i>: an efficient cross-silo federated learning system with an XAI-driven adaptive parameter compression strategy for large-scale model training. Our work differs substantially from existing gradient compression techniques due to the distinct data characteristics of gradients and parameters. The key contributions of this paper are fourfold. (1) <italic>FedCSpc</i> compresses parameters during training using the state-of-the-art error-bounded lossy compressor SZ3. (2) We develop an adaptive compression error-bound adjustment algorithm to effectively guarantee model accuracy. (3) We design an efficient approach that utilizes the idle CPU resources of clients to compress the parameters. (4) We perform a comprehensive evaluation with a wide range of models and benchmarks on a GPU cluster with 65 GPUs. Results show that <italic>FedCSpc</i> can achieve the same model accuracy as FedAvg while reducing the data volume of parameters and gradients in communication by up to 7.39× and 288×, respectively. With 32 clients and a 4 Gb model, <italic>FedCSpc</i> significantly outperforms FedAvg in wall-clock time in an emulated WAN environment (at a bandwidth of 1 Gbps or lower).","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 7","pages":"1372-1386"},"PeriodicalIF":5.6,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144100075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parallel Greedy Algorithms for Steiner Forest","authors":"Laleh Ghalami;Daniel Grosu","doi":"10.1109/TPDS.2025.3563849","DOIUrl":"https://doi.org/10.1109/TPDS.2025.3563849","url":null,"abstract":"The Steiner Forest Problem is a fundamental combinatorial optimization problem in operations research and computer science. Given an undirected graph with non-negative edge weights and a set of pairs of vertices called terminals, the Steiner Forest Problem is to find the minimum-cost subgraph that connects each of the terminal pairs. We design a family of parallel greedy algorithms based on a sequential greedy heuristic called Paired Greedy, which iteratively connects the terminal pairs that have the minimum distance. The family of parallel algorithms consists of a set of algorithms exhibiting various degrees of parallelism, determined by the number of pairs that are connected in parallel in each iteration. We implement and run the algorithms on a multi-core system and perform an extensive experimental analysis. We analyze the performance of the algorithms on a rich library of Steiner Forest instances with various underlying graph types. The results show that our proposed parallel algorithms achieve significant speedup with respect to the sequential Paired Greedy algorithm and provide solutions with costs very close to those of the solutions obtained by the sequential Paired Greedy algorithm. We provide recommendations on selecting the type of parallel algorithm and its parameters in order to achieve the most efficient results for each class of instances.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 6","pages":"1311-1325"},"PeriodicalIF":5.6,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143929668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Beehive: Decentralised High-Frequency Small Tasks Scheduling in Large Clusters","authors":"Yuxia Cheng;Linfeng Xu;Tongkai Yang;Wei Wu;Zhiqiang Lin;Antong Yu;Wenzhi Chen","doi":"10.1109/TPDS.2025.3563457","DOIUrl":"https://doi.org/10.1109/TPDS.2025.3563457","url":null,"abstract":"Data centers struggle with growing cluster sizes and rising submissions of short-lived, high-frequency tasks that cause performance bottlenecks in task scheduling. Existing centralized and distributed scheduling systems fall short of meeting performance requirements due to computational overload on the scheduler, cluster state management overhead, and scheduling conflicts. To address these challenges, this article introduces Beehive, a novel lightweight decentralized scheduling framework. In Beehive, each cluster node can schedule tasks within its local neighborhood, effectively reducing resource management overhead and scheduling conflicts. Moreover, all nodes are interconnected in a small-world network, an efficient structure that allows tasks to access resources across the entire cluster through global routing. This lightweight design enables Beehive to scale efficiently, supporting over 10,000 nodes and up to 80,000 task submissions per second without causing single-node scheduling bottlenecks. Experimental results demonstrate that Beehive significantly reduces scheduling latency. Specifically, 99% of tasks are scheduled within 100 milliseconds, and scheduling throughput can increase linearly with the number of nodes. Compared to existing centralized and distributed scheduling frameworks, Beehive substantially alleviates scheduling bottlenecks, particularly for high-frequency, short-lived tasks.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 6","pages":"1326-1337"},"PeriodicalIF":5.6,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"$AWB^+$-$Tree$: A Novel Width-Based Index Structure Supporting Hybrid Matching for Large-Scale Content-Based Pub/Sub Systems","authors":"Zhengyu Liao;Shiyou Qian;Zhonglong Zheng;Jian Cao;Guangtao Xue;Minglu Li","doi":"10.1109/TPDS.2025.3561714","DOIUrl":"https://doi.org/10.1109/TPDS.2025.3561714","url":null,"abstract":"Event matching is a key component in a large-scale content-based publish/subscribe system. The performance of most existing algorithms is easily affected by the subscription matching probability. In this article, we propose a new data structure, named <inline-formula><tex-math>$AWB^+$</tex-math></inline-formula>-<inline-formula><tex-math>$Tree$</tex-math></inline-formula>, which is based on the width of the predicates, to efficiently index the subscriptions. The most notable feature of <inline-formula><tex-math>$AWB^+$</tex-math></inline-formula>-<inline-formula><tex-math>$Tree$</tex-math></inline-formula> is its ability to combine the advantages of different matching methods, thus achieving high and robust performance in dynamic environments. First, we implement both a forward matching method (AFM) and a backward matching method (ABM) based on <inline-formula><tex-math>$AWB^+$</tex-math></inline-formula>-<inline-formula><tex-math>$Tree$</tex-math></inline-formula>. Then, we introduce a hybrid matching method (AHM) that combines AFM and ABM. Moreover, we extend <inline-formula><tex-math>$AWB^+$</tex-math></inline-formula>-<inline-formula><tex-math>$Tree$</tex-math></inline-formula> in three aspects: approximate matching, string type matching, and fine-grained parallelization. We conducted extensive experiments to evaluate the performance of the proposed matching algorithms on synthetic and real-world datasets. The experimental results reveal that AHM achieves a reduction in matching time by up to 53.8% compared to the state-of-the-art method. Additionally, AHM exhibits improved performance robustness, with up to a 76.9% reduction in the standard deviation of matching time. Particularly in dynamic scenarios, AHM is at least 2.3 times faster and 41.3% more stable than its counterparts. Furthermore, by implementing parallelization, matching with 8 threads is 4.16 times faster than single-threaded matching.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 6","pages":"1268-1281"},"PeriodicalIF":5.6,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143896457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Raccoon: Lightweight Support for Comprehensive Control Flows in Reconfigurable Spatial Architectures","authors":"Xiangyu Kong;Yi Huang;Longlong Chen;Jianfeng Zhu;Liangwei Li;Xingchen Man;Mingyu Gao;Shaojun Wei;Leibo Liu","doi":"10.1109/TPDS.2025.3561145","DOIUrl":"https://doi.org/10.1109/TPDS.2025.3561145","url":null,"abstract":"Coarse-grained reconfigurable arrays (CGRAs) have emerged as promising candidates for digital signal processing, biomedical, and automotive applications, where energy efficiency and flexibility are paramount. Yet existing CGRAs suffer from the Amdahl bottleneck caused by constrained control handling via either off-device communication or expensive tag-matching mechanisms. More importantly, mapping control flow onto CGRAs is extremely arduous and time-consuming due to intricate instruction structures and hardware mechanisms. To overcome these limitations, we propose Raccoon, a portable and lightweight framework for CGRAs targeting comprehensive control flows. Raccoon comprises a comprehensive approach that spans microarchitecture, HW/SW interface, and compiler aspects. Regarding microarchitecture, Raccoon incorporates specialized infrastructure for branch- and loop-level control patterns with concise execution mechanisms. The HW/SW interface of Raccoon includes well-characterized abstractions and instruction sets tailored for easy compilation, featuring custom operators and architectural models for control-oriented units. On the compiler front, Raccoon integrates advanced control handling techniques and employs a portable mapper leveraging reinforcement learning and Monte Carlo tree search. This enables agile mapping and optimization of the entire program, ensuring efficient execution and high-quality results. Through this cohesive co-design, Raccoon can empower various CGRAs with robust control-flow handling capabilities, surpassing conventional tagged mechanisms in terms of hardware efficiency and compiler adaptability. Evaluation results show that Raccoon achieves up to a 5.78× improvement in energy efficiency and a 2.24× reduction in cycle count over state-of-the-art CGRAs. Raccoon stands out for its versatility in managing intricate control flows and showcases remarkable portability across diverse CGRA architectures.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 6","pages":"1294-1310"},"PeriodicalIF":5.6,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143896458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CausalConf: Datasize-Aware Configuration Auto-Tuning for Recurring Big Data Processing Jobs via Adaptive Causal Structure Learning","authors":"Hui Dou;Mingjie He;Lei Zhang;Yiwen Zhang;Zibin Zheng","doi":"10.1109/TPDS.2025.3560304","DOIUrl":"https://doi.org/10.1109/TPDS.2025.3560304","url":null,"abstract":"To ensure high-performance processing capabilities across diverse application scenarios, Big Data frameworks such as Spark and Flink usually provide a number of performance-related parameters to configure. Considering the computational scale and the repeated-execution characteristic of typical recurring Big Data processing jobs, how to automatically tune parameters for performance optimization has emerged as a hot research topic in both academia and industry. With advantages in interpretability and generalization ability, causal inference-based methods have recently demonstrated advantages over conventional search-based and machine learning-based methods. However, the complexity of Big Data frameworks, the time-varying input dataset size of a recurring job, and the limitations of any single causal structure learning algorithm together prevent these methods from practical application. Therefore, in this paper, we design and implement CausalConf, a datasize-aware configuration auto-tuning approach for recurring Big Data processing jobs via adaptive causal structure learning. Specifically, the offline training phase is responsible for training multiple datasize-aware causal structure models with different causal structure learning algorithms, while the online tuning phase is responsible for recommending the next promising configuration in an iterative manner via Multi-Armed Bandit-based optimal intervention set selection as well as a novel datasize-aware causal Bayesian optimization. To evaluate the performance of CausalConf, a series of experiments is conducted on our local Spark cluster with 9 different previously unknown target applications from HiBench. Experimental results show that the performance speedup achieved by CausalConf over four recent and representative baselines can respectively reach 1.45×, 1.31×, 1.26×, and 1.54× on average, and up to 2.53×, 1.55×, 1.57×, and 2.18×. Besides, the average total online tuning cost of CausalConf is reduced by 8.85%, 14.26%, 18.58%, and 14.29%, respectively.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 7","pages":"1354-1371"},"PeriodicalIF":5.6,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144100035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ChunkFunc: Dynamic SLO-Aware Configuration of Serverless Functions","authors":"Thomas Pusztai;Stefan Nastic","doi":"10.1109/TPDS.2025.3559021","DOIUrl":"https://doi.org/10.1109/TPDS.2025.3559021","url":null,"abstract":"Serverless computing promises to be a cost-effective form of on-demand computing. To fully utilize its cost-saving potential, workflows must be configured with the appropriate amount of resources to meet their response time Service Level Objective (SLO), while keeping costs at a minimum. Since determining and updating these configuration models manually is a nontrivial and error-prone task, researchers have developed solutions for automatically finding configurations that meet the aforementioned requirements. However, our initial experiments show that even when following best practices and using state-of-the-art configuration tools, resources may still be considerably over- or underprovisioned, depending on the size of functions’ input payload. In this paper we present ChunkFunc, an SLO- and input data-aware framework for tuning serverless workflows. Our main contributions include: i) an SLO- and input size-aware function performance model for optimized configurations in serverless workflows, ii) ChunkFunc Profiler, an auto-tuned, Bayesian Optimization-guided profiling mechanism for profiling serverless functions with typical input data sizes to build a performance model, and iii) ChunkFunc Workflow Optimizer, which uses these models to determine an input size-dependent configuration for each serverless function in a workflow to meet the SLO, while keeping costs to a minimum. We evaluate ChunkFunc on real-life serverless workflows and compare it to two state-of-the-art solutions, showing that it increases SLO adherence by a factor of 1.04 to 2.78, depending on the workflow, and reduces costs by up to 61%.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 6","pages":"1237-1252"},"PeriodicalIF":5.6,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10959103","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143871095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Productivity, Portability, Performance, and Reproducibility: Data-Centric Python","authors":"Alexandros Nikolaos Ziogas;Timo Schneider;Tal Ben-Nun;Alexandru Calotoiu;Tiziano De Matteis;Johannes de Fine Licht;Luca Lavarini;Torsten Hoefler","doi":"10.1109/TPDS.2025.3549310","DOIUrl":"https://doi.org/10.1109/TPDS.2025.3549310","url":null,"abstract":"Python has become the <italic>de facto</i> language for scientific computing. Programming in Python is highly productive, mainly due to its rich science-oriented software ecosystem built around the NumPy module. As a result, the demand for Python support in High-Performance Computing (HPC) has skyrocketed. However, the Python language itself does not necessarily offer high performance. This work presents a workflow that retains Python’s high productivity while achieving portable performance across different architectures. The workflow’s key features are HPC-oriented language extensions and a set of automatic optimizations powered by a data-centric intermediate representation. We show performance results and scaling across CPU, GPU, FPGA, and the Piz Daint supercomputer (up to 23,328 cores), with 2.47x and 3.75x speedups over previous-best solutions, first-ever Xilinx and Intel FPGA results of annotated Python, and up to 93.16% scaling efficiency on 512 nodes. Our benchmarks were reproduced in the Student Cluster Competition (SCC) during the Supercomputing Conference (SC) 2022. We present and discuss the student teams’ results.","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 5","pages":"804-820"},"PeriodicalIF":5.6,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143808917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Symmetric Properties and Two Variants of Shuffle-Cubes","authors":"Huazhong Lü;Kai Deng;Xiaomei Yang","doi":"10.1109/TPDS.2025.3558885","DOIUrl":"https://doi.org/10.1109/TPDS.2025.3558885","url":null,"abstract":"Li et al. in [Inf. Process. Lett. 77 (2001) 35–41] proposed the shuffle-cube <inline-formula><tex-math>$SQ_{n}$</tex-math></inline-formula>, a hypercube variant, as an attractive interconnection network topology for massive parallel and distributed systems. Diameter and symmetry are two desirable measures of network performance in terms of transmission delay and routing algorithms. Almost all <inline-formula><tex-math>$n$</tex-math></inline-formula>-regular hypercube variants of dimension <inline-formula><tex-math>$n$</tex-math></inline-formula> have diameter not less than <inline-formula><tex-math>$n/2$</tex-math></inline-formula>. The diameter of the shuffle-cube is approximately a quarter of the diameter of the hypercube of the same dimension, making it a competitive candidate network topology. So far, the symmetric properties of the shuffle-cube have remained unknown. In this paper, we show that <inline-formula><tex-math>$SQ_{n}$</tex-math></inline-formula> is not vertex-transitive for <inline-formula><tex-math>$n> 2$</tex-math></inline-formula>, which is not an appealing property in interconnection networks. This shortcoming limits the practical application of the shuffle-cube. To overcome this limitation, two novel variants of the shuffle-cube, namely the simplified shuffle-cube <inline-formula><tex-math>$SSQ_{n}$</tex-math></inline-formula> and the balanced shuffle-cube <inline-formula><tex-math>$BSQ_{n}$</tex-math></inline-formula>, are introduced, and their vertex-transitivity is proved simultaneously. By proposing the shuffle-cube-like graph, we show that both <inline-formula><tex-math>$SSQ_{n}$</tex-math></inline-formula> and <inline-formula><tex-math>$BSQ_{n}$</tex-math></inline-formula> are maximally connected, implying high connectivity similar to the hypercube. Additionally, the super-connectivity, a refined parameter of connectivity, of <inline-formula><tex-math>$SSQ_{n}$</tex-math></inline-formula> and <inline-formula><tex-math>$BSQ_{n}$</tex-math></inline-formula> is also determined. Then, by the vertex-transitivity of <inline-formula><tex-math>$SSQ_{n}$</tex-math></inline-formula> and <inline-formula><tex-math>$BSQ_{n}$</tex-math></inline-formula>, routing algorithms for <inline-formula><tex-math>$SSQ_{n}$</tex-math></inline-formula> and <inline-formula><tex-math>$BSQ_{n}$</tex-math></inline-formula> are given for all <inline-formula><tex-math>$n> 2$</tex-math></inline-formula>. We show that both <inline-formula><tex-math>$SSQ_{n}$</tex-math></inline-formula> and <inline-formula><tex-math>$BSQ_{n}$</tex-math></inline-formula> possess Hamiltonian cycle embedding for all <inline-formula><tex-math>$n> 2$</tex-math></inline-formula>, and we also show that <inline-formula><tex-math>$SSQ_{n}$</tex-math></inline-formula> is Hamiltonian-connected. It is noticeable that each vertex of <inline-formula><tex-math>$SSQ_{n}$</tex-math></inline-formula> is contained in exactly one clique of size four, making it also a viable interconnection topology","PeriodicalId":13257,"journal":{"name":"IEEE Transactions on Parallel and Distributed Systems","volume":"36 6","pages":"1282-1293"},"PeriodicalIF":5.6,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143896268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}