Data De-duplication on Similar File Detection

Yueguang Zhu, Xingjun Zhang, Runting Zhao, Xiaoshe Dong
DOI: 10.1109/IMIS.2014.9
Published in: 2014 Eighth International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing
Publication date: 2014-07-02
Citations: 4

Abstract

Block-level data de-duplication currently faces several bottlenecks in metadata management and read/write throughput. To achieve a higher de-duplication elimination ratio, the traditional approach is to widen the range of data considered for de-duplication, but this lengthens metadata fields and increases the number of metadata entries. While detecting redundant data, metadata must be continually swapped into and out of memory, which creates an access bottleneck. It is therefore necessary to detect similar files so that the data most worth de-duplicating can be identified. In this paper, we propose a new block-level de-duplication method combined with similar file detection. While preserving the de-duplication elimination ratio, we narrow the range of data in order to reduce metadata volume and eliminate the performance bottlenecks. We present a detailed evaluation of our method against other existing de-duplication methods and show that it meets its design goals, improving the de-duplication ratio while reducing overhead costs.
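The abstract does not give implementation details, but the pipeline it describes — detect similar files first, then run block-level de-duplication only within each group of similar files — can be sketched roughly as follows. The fixed 4 KiB chunk size, the SHA-1 chunk fingerprints, and the min-hash-style signature built from the k smallest chunk hashes are illustrative assumptions, not the authors' actual design.

```python
import hashlib

def chunk_hashes(data: bytes, chunk_size: int = 4096) -> list[str]:
    """Split data into fixed-size chunks and fingerprint each one."""
    return [hashlib.sha1(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)]

def minhash_signature(data: bytes, k: int = 8, chunk_size: int = 4096) -> tuple:
    """Compact similarity signature: the k smallest chunk fingerprints."""
    return tuple(sorted(chunk_hashes(data, chunk_size))[:k])

def similarity(sig_a: tuple, sig_b: tuple) -> float:
    """Jaccard-style overlap between two signatures."""
    a, b = set(sig_a), set(sig_b)
    return len(a & b) / len(a | b) if a | b else 1.0

def dedup_within_group(files: dict, chunk_size: int = 4096):
    """Block-level de-duplication restricted to one group of similar files:
    each unique chunk is stored once; each file becomes a list ("recipe")
    of chunk fingerprints, so metadata stays local to the group."""
    store = {}    # chunk fingerprint -> chunk bytes (stored once)
    recipes = {}  # file name -> ordered list of chunk fingerprints
    for name, data in files.items():
        hashes = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            h = hashlib.sha1(chunk).hexdigest()
            store.setdefault(h, chunk)
            hashes.append(h)
        recipes[name] = hashes
    return store, recipes
```

In this sketch, only files whose signatures overlap above some threshold would share a group, so the chunk-fingerprint index each lookup touches stays small — which is the metadata-reduction effect the abstract claims.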