{"title":"分布式执行器的内存抽象与优化","authors":"S. Sahin, Ling Liu, Wenqi Cao, Qi Zhang, Juhyun Bae, Yanzhao Wu","doi":"10.1109/CIC50333.2020.00019","DOIUrl":null,"url":null,"abstract":"This paper presents a suite of memory abstraction and optimization techniques for distributed executors, with the focus on showing the performance optimization opportunities for Spark executors, which are known to outperform Hadoop MapReduce executors by leveraging Resilient Distributed Datasets (RDDs), a fundamental core of Spark. This paper makes three original contributions. First, we show that applications on Spark experience large performance deterioration, when RDD is too large to fit in memory, causing unbalanced memory utilizations and premature spilling. Second, we develop a suite of techniques to guide the configuration of RDDs in Spark executors, aiming to optimize the performance of iterative ML workloads on Spark executors when their allocated memory is sufficient for RDD caching. Third, we design DAHI, a light-weight RDD optimizer. DAHI provides three enhancements to Spark: (i) using elastic executors, instead of fixed size JVM executors; (ii) supporting coarser grained tasks and large size RDDs by enabling partial RDD caching; and (iii) automatically leveraging remote memory for secondary RDD caching in the shortage of primary RDD caching on a local node. Extensive experiments on machine learning and graph processing benchmarks show that with DAHI, the performance of ML workloads and applications on Spark improves by up to 12.4x.","PeriodicalId":265435,"journal":{"name":"2020 IEEE 6th International Conference on Collaboration and Internet Computing (CIC)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Memory Abstraction and Optimization for Distributed Executors\",\"authors\":\"S. Sahin, Ling Liu, Wenqi Cao, Qi Zhang, Juhyun Bae, Yanzhao Wu\",\"doi\":\"10.1109/CIC50333.2020.00019\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents a suite of memory abstraction and optimization techniques for distributed executors, with the focus on showing the performance optimization opportunities for Spark executors, which are known to outperform Hadoop MapReduce executors by leveraging Resilient Distributed Datasets (RDDs), a fundamental core of Spark. This paper makes three original contributions. First, we show that applications on Spark experience large performance deterioration, when RDD is too large to fit in memory, causing unbalanced memory utilizations and premature spilling. Second, we develop a suite of techniques to guide the configuration of RDDs in Spark executors, aiming to optimize the performance of iterative ML workloads on Spark executors when their allocated memory is sufficient for RDD caching. Third, we design DAHI, a light-weight RDD optimizer. DAHI provides three enhancements to Spark: (i) using elastic executors, instead of fixed size JVM executors; (ii) supporting coarser grained tasks and large size RDDs by enabling partial RDD caching; and (iii) automatically leveraging remote memory for secondary RDD caching in the shortage of primary RDD caching on a local node. 
Extensive experiments on machine learning and graph processing benchmarks show that with DAHI, the performance of ML workloads and applications on Spark improves by up to 12.4x.\",\"PeriodicalId\":265435,\"journal\":{\"name\":\"2020 IEEE 6th International Conference on Collaboration and Internet Computing (CIC)\",\"volume\":\"27 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE 6th International Conference on Collaboration and Internet Computing (CIC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CIC50333.2020.00019\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 6th International Conference on Collaboration and Internet Computing (CIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIC50333.2020.00019","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Memory Abstraction and Optimization for Distributed Executors
This paper presents a suite of memory abstraction and optimization techniques for distributed executors, focusing on the performance optimization opportunities for Spark executors, which are known to outperform Hadoop MapReduce executors by leveraging Resilient Distributed Datasets (RDDs), a core abstraction of Spark. The paper makes three original contributions. First, we show that applications on Spark suffer significant performance deterioration when an RDD is too large to fit in memory, causing unbalanced memory utilization and premature spilling. Second, we develop a suite of techniques to guide the configuration of RDDs in Spark executors, aiming to optimize the performance of iterative ML workloads when the allocated memory is sufficient for RDD caching. Third, we design DAHI, a lightweight RDD optimizer. DAHI provides three enhancements to Spark: (i) elastic executors in place of fixed-size JVM executors; (ii) support for coarser-grained tasks and large RDDs by enabling partial RDD caching; and (iii) automatic use of remote memory for secondary RDD caching when primary RDD caching capacity on the local node is insufficient. Extensive experiments on machine learning and graph processing benchmarks show that with DAHI, the performance of ML workloads and applications on Spark improves by up to 12.4x.
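DAHI itself is not part of stock Spark, so the sketch below only illustrates the baseline behavior the abstract describes, using standard Spark APIs: fixed-size executor heaps sized via configuration, and the difference between all-or-nothing caching (MEMORY_ONLY, where partitions that do not fit are dropped and recomputed every iteration) and a disk fallback that approximates partial RDD caching (MEMORY_AND_DISK). The input path, application name, and memory settings are illustrative assumptions, not values from the paper.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

object RddCachingSketch {
  def main(args: Array[String]): Unit = {
    // Stock Spark uses fixed-size JVM executors; these are the standard
    // knobs that govern how much executor memory is available for
    // cached RDD partitions. (DAHI's elastic executors and remote-memory
    // secondary caching have no public-API equivalent shown here.)
    val spark = SparkSession.builder()
      .appName("rdd-caching-sketch")
      .config("spark.executor.memory", "4g")          // fixed JVM executor heap
      .config("spark.memory.fraction", "0.6")         // unified execution+storage pool
      .config("spark.memory.storageFraction", "0.5")  // share protected for cached RDDs
      .getOrCreate()
    val sc = spark.sparkContext

    // Hypothetical input path, for illustration only.
    val points = sc.textFile("hdfs:///data/points.csv")
      .map(_.split(",").map(_.toDouble))

    // MEMORY_ONLY caches only the partitions that fit; the rest are
    // recomputed on every pass -- the premature-spilling/recomputation
    // penalty the paper measures. MEMORY_AND_DISK instead spills
    // non-fitting partitions to local disk, approximating partial
    // RDD caching for RDDs larger than the storage pool.
    points.persist(StorageLevel.MEMORY_AND_DISK)

    // A toy iterative workload that rescans the cached RDD each round,
    // as an iterative ML job (e.g., k-means) would.
    var acc = 0.0
    for (_ <- 1 to 10) {
      acc += points.map(_.sum).reduce(_ + _)
    }
    println(s"checksum = $acc")

    spark.stop()
  }
}
```

With MEMORY_ONLY, an RDD slightly larger than the storage pool forces per-iteration recomputation of the evicted partitions; the disk fallback bounds that cost but still pays local I/O, which is the gap DAHI's remote-memory secondary caching targets.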