{"title":"New Algorithms for File System Cooperative Caching","authors":"Eric Anderson, Christopher Hoover, Xiaozhou Li","doi":"10.1109/MASCOTS.2010.59","DOIUrl":null,"url":null,"abstract":"We present two new cooperative caching algorithms that allow a cluster of file system clients to cache chunks of files instead of directly accessing them from origin file servers. The first algorithm, called C-LRU (Cooperative-LRU), is based on the simple D-LRU (Distributed-LRU) algorithm, but moves a chunk's position closer to the tail of its local LRU list when the number of copies of the chunk increases. The second algorithm, called RobinHood, is based on the N-Chance algorithm, but targets chunks cached at many clients for replacement when forwarding a singlet to a peer. We evaluate these algorithms on a variety of workloads, including several publicly available traces, and find that the new algorithms significantly outperform their predecessors.","PeriodicalId":406889,"journal":{"name":"2010 IEEE International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 IEEE International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MASCOTS.2010.59","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7
Abstract
We present two new cooperative caching algorithms that allow a cluster of file system clients to cache chunks of files instead of directly accessing them from origin file servers. The first algorithm, called C-LRU (Cooperative-LRU), is based on the simple D-LRU (Distributed-LRU) algorithm, but moves a chunk's position closer to the tail of its local LRU list when the number of copies of the chunk increases. The second algorithm, called RobinHood, is based on the N-Chance algorithm, but targets chunks cached at many clients for replacement when forwarding a singlet to a peer. We evaluate these algorithms on a variety of workloads, including several publicly available traces, and find that the new algorithms significantly outperform their predecessors.
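To make the two ideas concrete, here is a minimal single-node Python sketch, under stated assumptions: it treats the "tail" of the LRU list as the eviction end, uses an illustrative halfway demotion step for C-LRU (the paper's exact rule may differ), and introduces a hypothetical copy_count hook standing in for whatever directory or hint mechanism tracks cluster-wide replica counts. All names are invented for illustration; this is not the authors' implementation.

```python
# Hypothetical sketch, not the authors' code. Convention assumed here:
# order[0] is the eviction ("tail") end, order[-1] is the MRU end.

class CLRUCache:
    """Single-node sketch of C-LRU: D-LRU plus demotion of replicated chunks."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.order = []   # chunk ids; order[0] is the next eviction victim
        self.data = {}    # chunk id -> cached bytes

    def access(self, chunk, value):
        """Ordinary local LRU hit/insert, as in D-LRU."""
        if chunk in self.data:
            self.order.remove(chunk)
        elif len(self.data) >= self.capacity:
            victim = self.order.pop(0)
            del self.data[victim]
        self.order.append(chunk)          # most recently used
        self.data[chunk] = value

    def on_replica_added(self, chunk):
        """C-LRU's twist: when a peer also caches `chunk` (its cluster-wide
        copy count rises), demote it toward the eviction end of the local
        LRU list. The halfway step below is an illustrative choice."""
        if chunk in self.data:
            i = self.order.index(chunk)
            self.order.pop(i)
            self.order.insert(i // 2, chunk)


def robinhood_place_singlet(cache, chunk, value, copy_count):
    """RobinHood's twist on N-Chance forwarding (sketch): when a singlet is
    forwarded to this peer, make room by evicting the resident chunk with
    the most copies elsewhere in the cluster ("take from the rich").
    `copy_count` is a hypothetical hook returning a chunk's cluster-wide
    copy count."""
    if chunk in cache.data:
        return
    if len(cache.data) >= cache.capacity:
        victim = max(cache.order, key=copy_count)
        if copy_count(victim) <= 1:       # every resident chunk is a singlet:
            victim = cache.order[0]       # fall back to plain LRU eviction
        cache.order.remove(victim)
        del cache.data[victim]
    cache.order.append(chunk)             # placing at the MRU end is a guess
    cache.data[chunk] = value
```

The design intuition in both cases is the same: a chunk with many copies in the cooperative cache is cheap to lose locally, while a singlet is the last line of defense before an origin-server access, so replacement pressure should fall on the replicated chunks first.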