Cluster and Single-Node Analysis of Long-Term Deduplication Patterns

Zhen Sun, Geoff Kuenning, Sonam Mandal, Philip Shilane, Vasily Tarasov, Nong Xiao, Erez Zadok
ACM Transactions on Storage (TOS), published 2018-05-11. DOI: 10.1145/3183890. Citations: 10.

Abstract

Deduplication has become essential in disk-based backup systems, but there have been few long-term studies of backup workloads. Most past studies were either of a small static snapshot or covered only a short period that was not representative of how a backup system evolves over time. For this article, we first collected 21 months of data from a shared user file system; 33 users and over 4,000 snapshots are covered. We then analyzed the dataset, examining a variety of essential characteristics across two dimensions: single-node deduplication and cluster deduplication. For single-node deduplication analysis, our primary focus was individual-user data. Despite apparently similar roles and behavior among all of our users, we found significant differences in their deduplication ratios. Moreover, the data that some users share with others had a much higher deduplication ratio than average. For cluster deduplication analysis, we implemented seven published data-routing algorithms and created a detailed comparison of their performance with respect to deduplication ratio, load distribution, and communication overhead. We found that per-file routing achieves a higher deduplication ratio than routing by super-chunk (multiple consecutive chunks), but it also leads to high data skew (imbalance of space usage across nodes). We also found that large chunking sizes are better for cluster deduplication, as they significantly reduce data-routing overhead, while their negative impact on deduplication ratios is small and acceptable. We draw interesting conclusions from both single-node and cluster deduplication analysis and make recommendations for future deduplication systems design.
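The abstract's core quantities can be illustrated with a small sketch. The snippet below is not the paper's implementation: it uses fixed-size chunking (production systems typically use content-defined chunking such as Rabin fingerprinting), and its `route_super_chunks` function is a hypothetical stand-in for the seven published routing algorithms the authors compare, routing each super-chunk by the hash of its first chunk's fingerprint. It shows how a deduplication ratio (logical size divided by post-dedup physical size) and per-node load skew might be computed.

```python
import hashlib

def chunk_fixed(data: bytes, size: int = 4096):
    """Split data into fixed-size chunks (a simplification; real
    systems often use content-defined chunking)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def dedup_ratio(chunks):
    """Deduplication ratio = logical size / physical size after
    duplicate chunks (same SHA-1 fingerprint) are stored once."""
    logical = sum(len(c) for c in chunks)
    unique = {}
    for c in chunks:
        unique.setdefault(hashlib.sha1(c).hexdigest(), len(c))
    physical = sum(unique.values())
    return logical / physical

def route_super_chunks(chunks, nodes: int, super_chunk: int = 4):
    """Hypothetical stateless routing: group consecutive chunks into
    super-chunks and pick a node by hashing the first chunk's
    fingerprint. Returns bytes routed to each node, whose spread
    reveals data skew."""
    load = [0] * nodes
    for i in range(0, len(chunks), super_chunk):
        group = chunks[i:i + super_chunk]
        fp = hashlib.sha1(group[0]).digest()
        node = int.from_bytes(fp[:8], "big") % nodes
        load[node] += sum(len(c) for c in group)
    return load

# Synthetic data with duplicate runs: 4 KiB chunks repeat heavily.
data = b"A" * 16384 + b"B" * 8192 + b"A" * 16384
chunks = chunk_fixed(data)
print(f"dedup ratio: {dedup_ratio(chunks):.1f}")   # → dedup ratio: 5.0
print(f"per-node load: {route_super_chunks(chunks, nodes=4)}")
```

Enlarging `super_chunk` (or the chunk size itself) reduces the number of routing decisions and fingerprint lookups, at the cost of coarser duplicate detection, which mirrors the paper's observation that larger chunking sizes cut routing overhead with only a small deduplication-ratio penalty.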