Optimizing Power and Performance Trade-offs of MapReduce Job Processing with Heterogeneous Multi-core Processors

Feng Yan, L. Cherkasova, Zhuoyao Zhang, E. Smirni

2014 IEEE 7th International Conference on Cloud Computing, June 27, 2014

DOI: 10.1109/CLOUD.2014.41
Citations: 16
Abstract
Modern processors are often constrained by a given power budget that forces designers to consider different trade-offs, e.g., to choose between many slow, power-efficient cores, fewer fast, power-hungry cores, or a combination of the two. In this work, we design and evaluate a new Hadoop scheduler, called DyScale, that exploits the capabilities offered by heterogeneous cores within a single multi-core processor to achieve a variety of performance objectives. A typical MapReduce workload contains jobs with different performance goals: large batch jobs that are throughput oriented, and smaller interactive jobs that are response-time sensitive. Heterogeneous multi-core processors enable the creation of virtual resource pools, based on the different core types, for multi-class priority scheduling. These virtual Hadoop clusters, built over "slow" cores versus "fast" cores, can effectively support different performance objectives that cannot be achieved in a Hadoop cluster with homogeneous processors. Using detailed measurements and an extensive simulation study, we argue in favor of heterogeneous multi-core processors: they enable faster processing of small, interactive MapReduce jobs (up to 40% faster), while at the same time offering improved throughput (up to 40% higher) for large, batch job processing.
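The core idea described in the abstract, partitioning a heterogeneous node into per-core-type virtual pools and routing jobs by performance class, can be sketched as follows. This is a minimal illustration, not the authors' DyScale implementation; all names here (`VirtualPool`, `schedule_job`, the job-class labels) are hypothetical.

```python
# Hypothetical sketch of per-core-type virtual resource pools for
# multi-class scheduling, as motivated by the DyScale design.
from dataclasses import dataclass, field
from collections import deque

@dataclass
class VirtualPool:
    core_type: str                        # "fast" or "slow"
    slots: int                            # task slots backed by this core type
    queue: deque = field(default_factory=deque)

    def submit(self, job_id: str) -> None:
        self.queue.append(job_id)

def schedule_job(job_id: str, job_class: str,
                 fast_pool: VirtualPool, slow_pool: VirtualPool) -> str:
    """Route latency-sensitive interactive jobs to the fast-core pool and
    throughput-oriented batch jobs to the slow, power-efficient pool."""
    pool = fast_pool if job_class == "interactive" else slow_pool
    pool.submit(job_id)
    return pool.core_type

# Example: one node with a few fast cores and many slow cores
# sharing a fixed power budget.
fast = VirtualPool("fast", slots=2)
slow = VirtualPool("slow", slots=8)

schedule_job("query-1", "interactive", fast, slow)   # lands on fast cores
schedule_job("etl-7", "batch", fast, slow)           # lands on slow cores
```

In a real deployment the pools would map to Hadoop task slots pinned to the corresponding core types, so the two job classes never compete for the same cores.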