CLRC: A New Erasure Code Localization Algorithm for HDFS

Ying Fang, Shuai Wang, Hai Tan, Xin Zhang, Jun Zhang
{"title":"CLRC:一个新的HDFS Erasure Code定位算法","authors":"Ying Fang, Shuai Wang, Hai Tan, Xin Zhang, Jun Zhang","doi":"10.1109/ICCEAI52939.2021.00012","DOIUrl":null,"url":null,"abstract":"With the continuous development of big data, the increase speed of hardware expansion used for HDFS has been far behind the volume of big data. As a data redundancy strategy, the traditional data replication strategy has been gradually replaced by Erasure Code due to its smaller redundancy rate and storage overhead. However, compared with replicas, Erasure Code needs to read a certain amount of data blocks during the process of data recovery, resulting in a large amount overhead of I/O and network. Based on the RS algorithm, a new CLRC algorithm is proposed to optimize the locality of RS algorithm by grouping RS coded blocks and generating local check blocks. Evaluations show that the algorithm can reduce about 61% bandwidth and I/O consumption during the process of data recovery when a single block is damaged. What's more, the cost of decoding time is only 59% of RS algorithm.","PeriodicalId":331409,"journal":{"name":"2021 International Conference on Computer Engineering and Artificial Intelligence (ICCEAI)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"CLRC: a New Erasure Code Localization Algorithm for HDFS\",\"authors\":\"Ying Fang, Shuai Wang, Hai Tan, Xin Zhang, Jun Zhang\",\"doi\":\"10.1109/ICCEAI52939.2021.00012\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the continuous development of big data, the increase speed of hardware expansion used for HDFS has been far behind the volume of big data. As a data redundancy strategy, the traditional data replication strategy has been gradually replaced by Erasure Code due to its smaller redundancy rate and storage overhead. However, compared with replicas, Erasure Code needs to read a certain amount of data blocks during the process of data recovery, resulting in a large amount overhead of I/O and network. Based on the RS algorithm, a new CLRC algorithm is proposed to optimize the locality of RS algorithm by grouping RS coded blocks and generating local check blocks. Evaluations show that the algorithm can reduce about 61% bandwidth and I/O consumption during the process of data recovery when a single block is damaged. 
What's more, the cost of decoding time is only 59% of RS algorithm.\",\"PeriodicalId\":331409,\"journal\":{\"name\":\"2021 International Conference on Computer Engineering and Artificial Intelligence (ICCEAI)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Conference on Computer Engineering and Artificial Intelligence (ICCEAI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCEAI52939.2021.00012\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Computer Engineering and Artificial Intelligence (ICCEAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCEAI52939.2021.00012","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

With the continuous growth of big data, hardware expansion for HDFS has fallen far behind the growth in data volume. As a data redundancy strategy, traditional data replication has gradually been replaced by erasure coding, which offers a lower redundancy rate and storage overhead. However, compared with replication, erasure codes must read a certain number of data blocks during data recovery, incurring substantial I/O and network overhead. Building on the RS algorithm, a new CLRC algorithm is proposed that improves the locality of RS coding by grouping RS coded blocks and generating local check blocks. Evaluations show that when a single block is damaged, the algorithm reduces bandwidth and I/O consumption during data recovery by about 61%, and its decoding time is only 59% of that of the RS algorithm.
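
The locality mechanism the abstract describes — grouping coded blocks and keeping a local check block per group, so that a single damaged block is rebuilt from its small group rather than from k blocks as in plain Reed-Solomon — can be sketched as follows. This is a minimal illustration using a simple XOR parity; the function names, group size, and the XOR construction are assumptions for illustration, not the paper's actual CLRC construction (whose local check blocks are derived from the RS code).

# Minimal sketch of local-group recovery (illustrative only, not the
# paper's CLRC construction): data blocks are split into groups, each
# group keeps an XOR parity, and a single lost block is rebuilt from
# its group's survivors plus that parity instead of reading k blocks.
from functools import reduce

def xor_blocks(blocks):
    # XOR a list of equal-length byte blocks together.
    zero = bytes(len(blocks[0]))
    return reduce(lambda acc, blk: bytes(a ^ b for a, b in zip(acc, blk)),
                  blocks, zero)

def make_local_parities(data_blocks, group_size):
    # One XOR parity per group of group_size data blocks.
    groups = [data_blocks[i:i + group_size]
              for i in range(0, len(data_blocks), group_size)]
    return [xor_blocks(g) for g in groups]

def recover_single_block(group, lost_index, local_parity):
    # Rebuild one lost block from its group's survivors and local parity.
    survivors = [blk for i, blk in enumerate(group) if i != lost_index]
    return xor_blocks(survivors + [local_parity])

# Example: 6 data blocks in groups of 3; repairing one block touches only
# the 2 surviving group members and the group parity (3 reads, not 6).
data = [bytes([i] * 8) for i in range(6)]
parities = make_local_parities(data, group_size=3)
assert recover_single_block(data[:3], 1, parities[0]) == data[1]

Shrinking the repair fan-in from k blocks to one small group is what drives the kind of bandwidth and I/O savings the abstract reports for single-block failures.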