{"title":"Accelerating Intra-Party Communication in Vertical Federated Learning with RDMA","authors":"Duowen Liu","doi":"10.1145/3426745.3431333","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) has emerged as an elegant privacy-preserving distributed machine learning (ML) paradigm. Particularly, vertical FL (VFL) has a promising application prospect for collaborating organizations owning data of the same set of users but with disjoint features to jointly train models without leaking their private data to each other. As the volume of training data and the model size increase rapidly, each organization may deploy a cluster of many servers to participant in the federation. As such, the intra-party communication cost (i.e., network transfers within each organization's cluster) can significantly impact the entire VFL job's performance. Despite this, existing FL frameworks use the inefficient gRPC for intra-party communication, leading to high latency and high CPU cost. In this paper, we propose a design to transmit data with RDMA for intra-party communication, with no modifications to applications. To improve the network efficiency, we further propose an RDMA usage arbiter to adjust the RDMA bandwidth used for a non-straggler party dynamically, and a query data size optimizer to automatically find out the optimal query data size that each response carries. Our preliminary results show that RDMA based intra-party communication is 10x faster than gRPC based one, leading to a reduction of 9% on the completion time of a VFL job. Moreover, the RDMA usage arbiter can save over 90% bandwidth, and the query data size optimizer can improve the transmission speed by 18%.","PeriodicalId":301937,"journal":{"name":"Proceedings of the 1st Workshop on Distributed Machine Learning","volume":"222 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 1st Workshop on Distributed Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3426745.3431333","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Federated learning (FL) has emerged as an elegant privacy-preserving distributed machine learning (ML) paradigm. In particular, vertical FL (VFL) is promising for collaborating organizations that own data on the same set of users but with disjoint features, since it lets them jointly train models without leaking their private data to each other. As the volume of training data and the model size grow rapidly, each organization may deploy a cluster of many servers to participate in the federation. As a result, the intra-party communication cost (i.e., network transfers within each organization's cluster) can significantly impact the performance of the entire VFL job. Despite this, existing FL frameworks use the inefficient gRPC for intra-party communication, leading to high latency and high CPU cost. In this paper, we propose a design that transmits data over RDMA for intra-party communication, with no modifications to applications. To improve network efficiency, we further propose an RDMA usage arbiter that dynamically adjusts the RDMA bandwidth used by a non-straggler party, and a query data size optimizer that automatically finds the optimal query data size carried by each response. Our preliminary results show that RDMA-based intra-party communication is 10x faster than its gRPC-based counterpart, yielding a 9% reduction in the completion time of a VFL job. Moreover, the RDMA usage arbiter can save over 90% of the bandwidth, and the query data size optimizer can improve the transmission speed by 18%.
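The abstract does not spell out the arbiter's throttling policy. As a rough, hypothetical illustration of the idea (not the paper's actual mechanism), the Python sketch below caps a non-straggler party's RDMA send rate with a token bucket, so the slower party is not starved of network capacity; the class and method names here are illustrative assumptions.

```python
# Hypothetical sketch of the RDMA usage arbiter's throttling idea: limit the
# bytes/second a non-straggler party may push over RDMA via a token bucket.
import time


class TokenBucket:
    """Simple token bucket granting `rate_bps` bytes/second of send budget."""

    def __init__(self, rate_bps: float):
        self.rate_bps = rate_bps
        self.tokens = 0.0
        self.last = time.monotonic()

    def set_rate(self, rate_bps: float) -> None:
        """Called by the arbiter to raise or lower this party's RDMA budget."""
        self.rate_bps = rate_bps

    def acquire(self, nbytes: int) -> None:
        """Block until `nbytes` of budget is available, then consume it."""
        while True:
            now = time.monotonic()
            # Refill, capping the burst at one second's worth of budget.
            self.tokens = min(self.rate_bps,
                              self.tokens + (now - self.last) * self.rate_bps)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate_bps)
```

Likewise, the abstract does not describe how the query data size optimizer searches for the optimal per-response size. The sketch below assumes a simple power-of-two sweep that probes candidate sizes and keeps the one with the highest measured throughput; every name (`measure_throughput_mbps`, `optimize_query_size`) and the simulated throughput curve are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of a query data size optimizer: probe candidate
# per-response sizes and keep the one with the highest measured throughput.
import random


def measure_throughput_mbps(query_size_bytes: int) -> float:
    """Stand-in for a real probe that would transfer responses of the given
    size over the RDMA channel and report achieved throughput. Simulated here
    as a noisy unimodal curve peaking near 1 MiB so the sketch runs end to
    end."""
    peak = 1 << 20  # assumed sweet spot: 1 MiB per response
    penalty = abs(query_size_bytes - peak) / peak
    return 1000.0 / (1.0 + penalty) + random.uniform(-5.0, 5.0)


def optimize_query_size(lo: int = 4 << 10, hi: int = 16 << 20) -> int:
    """Sweep power-of-two sizes in [lo, hi]; return the best performer."""
    best_size, best_tput = lo, float("-inf")
    size = lo
    while size <= hi:
        tput = measure_throughput_mbps(size)
        if tput > best_tput:
            best_size, best_tput = size, tput
        size *= 2
    return best_size


if __name__ == "__main__":
    print(f"chosen query data size: {optimize_query_size()} bytes")
```

In practice such a probe-and-pick loop would run online against live transfers rather than a simulated curve; the sweep is shown only to make the optimizer's search concrete.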