TaskTracker Aware Scheduling for Hadoop MapReduce

Jisha S. Manjaly, Varghese S. Chooralil
{"title":"Hadoop MapReduce的TaskTracker感知调度","authors":"Jisha S. Manjaly, Varghese S. Chooralil","doi":"10.1109/ICACC.2013.103","DOIUrl":null,"url":null,"abstract":"Hadoop is a framework for processing large amount of data in parallel with the help of Hadoop Distributed File System (HDFS) and MapReduce framework. Job scheduling is an important process in Hadoop MapReduce. Hadoop comes with three types of schedulers namely FIFO, Fair and Capacity Scheduler. The schedulers are now a plug gable component in the Hadoop MapReduce framework. When jobs have a dependency on an external service like database or Web service may leads to the failure of tasks due to overloading. In this scenario, Hadoop needs to re-run the tasks in another slots. To address this issue, Task Tracker aware scheduling has introduced. This scheduler enables users to configure a maximum load per Task Tracker in the Job Configuration itself. The algorithm will not allow a task to run and fail if the load of the Task Tracker reaches its threshold for the job. Also this scheduler allows the users to select the Task Tracker's per Job in the Job configuration.","PeriodicalId":109537,"journal":{"name":"2013 Third International Conference on Advances in Computing and Communications","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"18","resultStr":"{\"title\":\"TaskTracker Aware Scheduling for Hadoop MapReduce\",\"authors\":\"Jisha S. Manjaly, Varghese S. Chooralil\",\"doi\":\"10.1109/ICACC.2013.103\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Hadoop is a framework for processing large amount of data in parallel with the help of Hadoop Distributed File System (HDFS) and MapReduce framework. Job scheduling is an important process in Hadoop MapReduce. Hadoop comes with three types of schedulers namely FIFO, Fair and Capacity Scheduler. The schedulers are now a plug gable component in the Hadoop MapReduce framework. When jobs have a dependency on an external service like database or Web service may leads to the failure of tasks due to overloading. In this scenario, Hadoop needs to re-run the tasks in another slots. To address this issue, Task Tracker aware scheduling has introduced. This scheduler enables users to configure a maximum load per Task Tracker in the Job Configuration itself. The algorithm will not allow a task to run and fail if the load of the Task Tracker reaches its threshold for the job. 
Also this scheduler allows the users to select the Task Tracker's per Job in the Job configuration.\",\"PeriodicalId\":109537,\"journal\":{\"name\":\"2013 Third International Conference on Advances in Computing and Communications\",\"volume\":\"34 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-08-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"18\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2013 Third International Conference on Advances in Computing and Communications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICACC.2013.103\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 Third International Conference on Advances in Computing and Communications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICACC.2013.103","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 18

Abstract

Hadoop is a framework for processing large amounts of data in parallel with the help of the Hadoop Distributed File System (HDFS) and the MapReduce framework. Job scheduling is an important process in Hadoop MapReduce. Hadoop ships with three schedulers, namely the FIFO, Fair, and Capacity schedulers, and schedulers are now a pluggable component of the Hadoop MapReduce framework. When jobs depend on an external service such as a database or a Web service, tasks may fail because that service becomes overloaded, and Hadoop then has to re-run the failed tasks in other slots. To address this issue, TaskTracker aware scheduling is introduced. This scheduler lets users configure a maximum load per TaskTracker in the job configuration itself; the algorithm will not assign a task to a TaskTracker whose load has already reached the job's threshold, so the task is not run only to fail. The scheduler also allows users to select, per job, which TaskTrackers may run the job's tasks in the job configuration.
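
To make the two configuration ideas in the abstract concrete, the following is a minimal, self-contained Java sketch of the admission check such a scheduler could perform. The property names (`taskaware.max.load.per.tasktracker`, `taskaware.job.tasktrackers`), the `JobConfStandIn` class, and the load-counting scheme are illustrative assumptions, not identifiers from the paper; in a real Hadoop 1.x deployment the values would live in the job's JobConf and the check would sit inside a pluggable TaskScheduler implementation.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of a TaskTracker-aware admission check (assumed names, not the paper's code).
public class TaskTrackerAwareSketch {

    // Stand-in for a per-job configuration; in Hadoop this role is played by the JobConf.
    static class JobConfStandIn {
        private final Map<String, String> props = new HashMap<>();
        void set(String key, String value) { props.put(key, value); }
        int getInt(String key, int def) {
            String v = props.get(key);
            return v == null ? def : Integer.parseInt(v);
        }
        List<String> getStrings(String key) {
            String v = props.get(key);
            return v == null ? List.of() : Arrays.asList(v.split(","));
        }
    }

    // Running tasks of this job per TaskTracker, as the scheduler might track them.
    private final Map<String, Integer> runningTasksPerTracker = new HashMap<>();

    // Decide whether one more task of this job may go to the given TaskTracker.
    boolean mayAssign(JobConfStandIn conf, String trackerName) {
        // 1. Per-job TaskTracker selection: if a list is configured, only listed trackers are eligible.
        List<String> allowed = conf.getStrings("taskaware.job.tasktrackers"); // hypothetical key
        if (!allowed.isEmpty() && !allowed.contains(trackerName)) {
            return false;
        }
        // 2. Per-tracker load threshold: refuse the slot instead of letting the task
        //    run and fail against an overloaded external service.
        int maxLoad = conf.getInt("taskaware.max.load.per.tasktracker", Integer.MAX_VALUE); // hypothetical key
        int current = runningTasksPerTracker.getOrDefault(trackerName, 0);
        return current < maxLoad;
    }

    void recordAssignment(String trackerName) {
        runningTasksPerTracker.merge(trackerName, 1, Integer::sum);
    }

    public static void main(String[] args) {
        TaskTrackerAwareSketch scheduler = new TaskTrackerAwareSketch();
        JobConfStandIn conf = new JobConfStandIn();
        conf.set("taskaware.max.load.per.tasktracker", "2");
        conf.set("taskaware.job.tasktrackers", "tracker-1,tracker-2");

        // tracker-1 accepts two tasks and is then refused; tracker-3 is never eligible.
        for (String tracker : List.of("tracker-1", "tracker-1", "tracker-1", "tracker-3")) {
            if (scheduler.mayAssign(conf, tracker)) {
                scheduler.recordAssignment(tracker);
                System.out.println("assigned to " + tracker);
            } else {
                System.out.println("skipped " + tracker);
            }
        }
    }
}
```

The key design point the abstract describes is that the threshold check happens at assignment time: a slot is withheld rather than granted and later failed, which avoids the re-execution in another slot that an overloaded external dependency would otherwise cause.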