{"title":"A Virtualized Hybrid Distributed File System","authors":"Xingyu Zhou, Liangyu He","doi":"10.1109/CyberC.2013.39","DOIUrl":null,"url":null,"abstract":"We have designed a virtualized hybrid distributed file system for large scale data storage. It ensembles the characteristic of fault tolerance based on inexpensive commodity hardware inherited from the classic Google File System but also integrates the advantage of the P2P network. While sharing many of the same goals as previous distributed file systems, this new system design aims to reduce the scale and previously necessary space, bandwidth and energy consumptions of the data center and reduces the cost of running a large-scale distributed file system. Traditionally, only local or inner-enterprise servers are used for constructing cloud-computing network for a certain enterprise. This new design tries to induce a new method of integrating leisure hard-disk space, bandwidth and CPU capacity of from external sources. With a distributed data storage network and an inner-enterprise server system as a central controller, large clusters of data can be managed in a classic C/S like way and data can be transmitted like the P2P network. To ensure security, more stratified virtualization is required to ensure the isolation between realistic file owner and storage space provider. Workload estimations have been conducted to test the efficiency of time and space in this design.","PeriodicalId":133756,"journal":{"name":"2013 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CyberC.2013.39","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
We have designed a virtualized hybrid distributed file system for large-scale data storage. It combines the fault tolerance on inexpensive commodity hardware inherited from the classic Google File System with the advantages of a P2P network. While sharing many goals with previous distributed file systems, this new design aims to reduce the scale of the data center, along with the space, bandwidth, and energy it would otherwise consume, and thereby lower the cost of running a large-scale distributed file system. Traditionally, only local or inner-enterprise servers are used to build an enterprise's cloud-computing network. This design introduces a method for integrating idle hard-disk space, bandwidth, and CPU capacity from external sources. With a distributed data storage network and an inner-enterprise server system acting as a central controller, large clusters of data can be managed in a classic client/server (C/S) manner while data is transmitted as in a P2P network. To ensure security, an additional stratified virtualization layer is required to isolate the actual file owner from the storage space provider. Workload estimations have been conducted to evaluate the time and space efficiency of this design.
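To make the hybrid control flow concrete, the following is a minimal sketch, not the authors' implementation: an inner-enterprise central controller keeps the file-to-chunk-to-peer metadata (classic C/S management, as in GFS), while chunk payloads are fetched directly from external peer nodes contributing spare disk space (P2P-style data transfer). All class and method names (CentralController, PeerNode, store_file, locate) are hypothetical, as are the chunk size and replication policy.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class PeerNode:
    """External node contributing spare disk space; stores chunks keyed by id."""
    peer_id: str
    chunks: Dict[str, bytes] = field(default_factory=dict)

    def put_chunk(self, chunk_id: str, data: bytes) -> None:
        self.chunks[chunk_id] = data

    def get_chunk(self, chunk_id: str) -> bytes:
        return self.chunks[chunk_id]


class CentralController:
    """Inner-enterprise master: tracks which peers hold which chunks of a file."""

    def __init__(self, replication: int = 2) -> None:
        self.replication = replication
        self.peers: List[PeerNode] = []
        self.file_chunks: Dict[str, List[str]] = {}        # file -> ordered chunk ids
        self.chunk_locations: Dict[str, List[PeerNode]] = {}  # chunk id -> replicas

    def register_peer(self, peer: PeerNode) -> None:
        self.peers.append(peer)

    def store_file(self, name: str, data: bytes, chunk_size: int = 4) -> None:
        """Split a file into chunks and replicate each chunk across peers."""
        chunk_ids: List[str] = []
        for i in range(0, len(data), chunk_size):
            idx = i // chunk_size
            chunk_id = f"{name}#{idx}"
            replicas = [self.peers[(idx + r) % len(self.peers)]
                        for r in range(min(self.replication, len(self.peers)))]
            for peer in replicas:
                peer.put_chunk(chunk_id, data[i:i + chunk_size])
            self.chunk_locations[chunk_id] = replicas
            chunk_ids.append(chunk_id)
        self.file_chunks[name] = chunk_ids

    def locate(self, name: str) -> List[List[PeerNode]]:
        """Return, per chunk, the peers a client may contact directly."""
        return [self.chunk_locations[c] for c in self.file_chunks[name]]


def read_file(controller: CentralController, name: str) -> bytes:
    """Client path: ask the controller for locations, then pull chunks from peers."""
    data = b""
    for chunk_id, replicas in zip(controller.file_chunks[name], controller.locate(name)):
        data += replicas[0].get_chunk(chunk_id)  # read from the first replica
    return data


if __name__ == "__main__":
    master = CentralController(replication=2)
    for pid in ("peer-a", "peer-b", "peer-c"):
        master.register_peer(PeerNode(pid))
    master.store_file("report.txt", b"hello hybrid dfs")
    assert read_file(master, "report.txt") == b"hello hybrid dfs"
```

Under these assumptions, the central controller handles only metadata, while bulk data moves between clients and external peers, which reflects the abstract's argument that the enterprise data center's space and bandwidth footprint can shrink without giving up centralized, GFS-style management.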