{"title":"Strike the Balance between System Utilization and Data Locality under Deadline Constraint for MapReduce Clusters","authors":"Yeh-Cheng Chen, J. Chou","doi":"10.1109/PDCAT.2017.00061","DOIUrl":null,"url":null,"abstract":"MapReduce paradigm has become a popular platform for massive data processing and Big Data applications. Although MapReduce was initially designed for high throughput and batch processing, it has also been used for handling many other types of applications and workloads due to its scalable and reliable system architecture. One of the emerging requirements for enterprise data-process computing is completion time guar- antee. However, there are only a few research works have been done for MapReduce jobs with deadline constraint. Therefore, in this paper, we aim to prevent jobs from missing deadline while maximizing the resource utilization and data locality of a MapReduce cluster. Our approach is to introduce a two-phase job scheduling mechanism which combines a job admission controller policy and a priority-based scheduling algorithm. We use a series of simulations over diverted workload to evaluate our system. The results show that our approach can guarantee job completion time in a heavy-loaded system, and achieve comparable data locality to the delay schedule algorithm in a light-loaded system. Furthermore, our approach can maximize system throughput by preventing system resources from being wasted by the jobs missing their deadlines.","PeriodicalId":119197,"journal":{"name":"2017 18th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 18th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PDCAT.2017.00061","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The MapReduce paradigm has become a popular platform for massive data processing and Big Data applications. Although MapReduce was initially designed for high-throughput batch processing, its scalable and reliable system architecture has also made it a common choice for many other types of applications and workloads. One emerging requirement of enterprise data-processing computing is a completion-time guarantee, yet only a few research works have addressed MapReduce jobs with deadline constraints. In this paper, we therefore aim to prevent jobs from missing their deadlines while maximizing the resource utilization and data locality of a MapReduce cluster. Our approach introduces a two-phase job scheduling mechanism that combines a job admission control policy with a priority-based scheduling algorithm. We evaluate our system with a series of simulations over diverse workloads. The results show that our approach can guarantee job completion times in a heavily loaded system and achieve data locality comparable to the delay scheduling algorithm in a lightly loaded system. Furthermore, our approach can maximize system throughput by preventing system resources from being wasted on jobs that have already missed their deadlines.
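
The abstract does not spell out the admission test or the priority function, so the following minimal Python sketch illustrates one plausible reading of the two-phase mechanism: a capacity-based admission check (phase 1) feeding an earliest-deadline-first priority queue (phase 2). The class and field names (TwoPhaseScheduler, est_runtime) and the admission formula are illustrative assumptions, not the paper's actual algorithm.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    # Ordering compares only the deadline, so the heap below is EDF-ordered.
    deadline: float                          # absolute deadline (seconds)
    job_id: str = field(compare=False)
    slots_needed: int = field(compare=False) # map slots the job requests
    est_runtime: float = field(compare=False)

class TwoPhaseScheduler:
    """Hypothetical sketch: admission control + deadline-priority dispatch."""

    def __init__(self, total_slots: int):
        self.total_slots = total_slots
        self.committed_work = 0.0  # slot-time already promised to admitted jobs
        self.queue: list[Job] = []  # min-heap keyed on deadline (EDF)

    def admit(self, job: Job, now: float) -> bool:
        # Phase 1: reject a job whose estimated demand cannot fit into the
        # cluster capacity remaining before its deadline, so doomed jobs
        # never consume resources.
        capacity = self.total_slots * (job.deadline - now)
        demand = job.slots_needed * job.est_runtime
        if self.committed_work + demand > capacity:
            return False
        self.committed_work += demand
        heapq.heappush(self.queue, job)
        return True

    def next_job(self) -> Job | None:
        # Phase 2: dispatch the admitted job with the earliest deadline.
        return heapq.heappop(self.queue) if self.queue else None

# Usage: a job needing 20 slots for 50 s fits a 100-slot cluster's
# 6000 slot-seconds of capacity before t = 60, so it is admitted.
sched = TwoPhaseScheduler(total_slots=100)
ok = sched.admit(Job(deadline=60.0, job_id="j1",
                     slots_needed=20, est_runtime=50.0), now=0.0)
print(ok, sched.next_job())
```

This single aggregate capacity test is deliberately crude; the paper's actual controller would also have to weigh data locality when assigning the dispatched job's tasks to nodes, which this sketch omits.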