BSDP: A Novel Balanced Spark Data Partitioner
Aibo Song, Bowen Peng, Jingyi Qiu, Yingying Xue, Mingyang Du
2021 IEEE 27th International Conference on Parallel and Distributed Systems (ICPADS), December 2021
DOI: 10.1109/ICPADS53394.2021.00075
Citations: 0
Abstract
As a memory-based distributed big-data computing framework, Spark is widely used in big-data processing systems. However, skewed input data distributions and the limitations of Spark's built-in data partitioners often cause partition skew during execution, which reduces Spark's efficiency. To address this problem, this paper proposes BSDP (Balanced Spark Data Partitioner), a balanced data partitioner for Spark. By analyzing the partitioning characteristics of Shuffle intermediate data in depth, we establish a model for the balanced partitioning of Spark Shuffle intermediate data. The model aims to minimize partition skew and to find a balanced partitioning strategy for the Shuffle intermediate data. Based on this model, we design and implement the BSDP balanced-partitioning algorithm, which transforms the balanced partitioning of Shuffle intermediate data into a classic List-Scheduling task-scheduling problem and thereby achieves a balanced partitioning of the Shuffle intermediate data. Experiments verify that BSDP effectively balances the partitioning of Shuffle intermediate data and improves Spark's execution efficiency.
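The abstract does not give BSDP's concrete algorithm, but the reduction it names, treating each Shuffle key group as a "task" and each partition as a "machine" in List-Scheduling, is typically realized with a greedy longest-processing-time rule: sort key groups by size and always assign the next group to the currently least-loaded partition. The sketch below illustrates that idea under those assumptions; the function name, inputs, and ordering are illustrative, not the paper's actual implementation.

```python
# Illustrative sketch of balanced partitioning via greedy List-Scheduling
# (LPT order). Key-group sizes and the partition count are assumptions;
# BSDP's real data structures and heuristics are not specified here.
import heapq

def balanced_partition(key_sizes, num_partitions):
    """Assign each Shuffle key group to the least-loaded partition.

    key_sizes: dict mapping key -> size of its intermediate data.
    Returns a dict mapping key -> partition index.
    """
    # Min-heap of (current load, partition index).
    heap = [(0, p) for p in range(num_partitions)]
    heapq.heapify(heap)
    assignment = {}
    # LPT order: place the largest key groups first to limit the
    # final load imbalance across partitions.
    for key, size in sorted(key_sizes.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)
        assignment[key] = p
        heapq.heappush(heap, (load + size, p))
    return assignment
```

For example, with key-group sizes {8, 7, 6, 5, 4} and two partitions, this greedy rule yields loads of 17 and 13, whereas a naive hash partitioner can concentrate the largest groups on one partition.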