{"title":"节能Hadoop使用镜像数据块复制策略","authors":"Sara Arbab Yazd, S. Venkatesan, N. Mittal","doi":"10.1109/SRDS.2012.25","DOIUrl":null,"url":null,"abstract":"MapReduce scheme has became the state of the art in parallel processing of vast amount of data in distributed systems. Hadoop, as a popular open-source implementation of this technique, makes use of data block replication mechanism to provide a reliable and fault-tolerant design. To maintain data availability, Hadoop takes into account the possibilities of node and rack failures. Hence, it stores multiple copies of each data block to ensure availability and reliability. The current data block placement policy is to randomly distribute the replicas on all servers, satisfying some constraints such as preventing storage of two replicas of a data block on a single node. Our study proposes an efficient placement policy for data block replicas, which can reduce the consumed energy in data centers. The proposed policy is built upon the covering subset (CovSet) method. The effectiveness of the proposed approach is confirmed through simulations. Also, our experiments show that the proposed method becomes more effective whenever the average number of data blocks per server increases, which corresponds to the actual conditions in practice.","PeriodicalId":447700,"journal":{"name":"2012 IEEE 31st Symposium on Reliable Distributed Systems","volume":"295 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Energy Efficient Hadoop Using Mirrored Data Block Replication Policy\",\"authors\":\"Sara Arbab Yazd, S. Venkatesan, N. Mittal\",\"doi\":\"10.1109/SRDS.2012.25\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"MapReduce scheme has became the state of the art in parallel processing of vast amount of data in distributed systems. 
Hadoop, as a popular open-source implementation of this technique, makes use of data block replication mechanism to provide a reliable and fault-tolerant design. To maintain data availability, Hadoop takes into account the possibilities of node and rack failures. Hence, it stores multiple copies of each data block to ensure availability and reliability. The current data block placement policy is to randomly distribute the replicas on all servers, satisfying some constraints such as preventing storage of two replicas of a data block on a single node. Our study proposes an efficient placement policy for data block replicas, which can reduce the consumed energy in data centers. The proposed policy is built upon the covering subset (CovSet) method. The effectiveness of the proposed approach is confirmed through simulations. Also, our experiments show that the proposed method becomes more effective whenever the average number of data blocks per server increases, which corresponds to the actual conditions in practice.\",\"PeriodicalId\":447700,\"journal\":{\"name\":\"2012 IEEE 31st Symposium on Reliable Distributed Systems\",\"volume\":\"295 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-10-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 IEEE 31st Symposium on Reliable Distributed Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SRDS.2012.25\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 IEEE 31st Symposium on Reliable Distributed 
Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SRDS.2012.25","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Energy Efficient Hadoop Using Mirrored Data Block Replication Policy
The MapReduce paradigm has become the state of the art for parallel processing of vast amounts of data in distributed systems. Hadoop, a popular open-source implementation of this technique, uses a data block replication mechanism to provide a reliable, fault-tolerant design. To maintain data availability, Hadoop accounts for possible node and rack failures; hence, it stores multiple copies of each data block to ensure availability and reliability. The current block placement policy distributes replicas randomly across all servers, subject to constraints such as never storing two replicas of the same data block on a single node. Our study proposes an energy-efficient placement policy for data block replicas that reduces the energy consumed in data centers. The proposed policy is built upon the covering subset (CovSet) method. The effectiveness of the proposed approach is confirmed through simulations. Moreover, our experiments show that the proposed method becomes more effective as the average number of data blocks per server increases, which matches the conditions found in practice.
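The covering-subset idea behind such a policy can be sketched in a few lines: keep at least one replica of every block on a small "covering" subset of servers, so that the remaining servers can be powered down during low load without making any block unavailable. The sketch below is purely illustrative and is not the paper's actual algorithm; the function name, parameters, and the choice of random placement outside the covering subset are all assumptions.

```python
import random

def place_replicas(blocks, servers, covering, replication=3):
    """Illustrative covering-subset placement (assumption, not the paper's algorithm).

    blocks: iterable of block ids.
    servers: list of all server ids.
    covering: subset of servers that jointly holds one replica of every
        block, so servers outside it can be powered down at low load.
    Returns a dict mapping each block to its list of replica servers.
    """
    covering = list(covering)
    non_covering = [s for s in servers if s not in covering]
    placement = {}
    for b in blocks:
        # One replica always lands in the covering subset (CovSet).
        primary = random.choice(covering)
        # Remaining replicas are spread over the other servers, never
        # reusing a server, so no node holds two replicas of one block.
        others = random.sample(non_covering, replication - 1)
        placement[b] = [primary] + others
    return placement
```

With this invariant, shutting down every server outside `covering` still leaves one live replica per block, which is the source of the energy savings the abstract describes.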