Enhancing both fairness and performance using rate-aware dynamic storage cache partitioning

Yong Li, D. Feng, Zhan Shi
DOI: 10.1145/2534645.2534650
Published: 2013-11-18, International Symposium on Design and Implementation of Symbolic Computation Systems
Citations: 2

Abstract

In this paper, we investigate the problem of fair storage cache allocation among multiple competing applications with diversified access rates. Commonly used cache replacement policies like LRU and most LRU variants are inherently unfair in cache allocation for heterogeneous applications. They implicitly give more cache to applications with high access rates and less cache to applications with low access rates. However, applications with fast access rates do not always gain higher performance from the additional cache blocks. In contrast, slow applications suffer poor performance with a reduced cache size. It is therefore beneficial, in terms of both performance and fairness, to allocate cache blocks according to their utility. In this paper, we propose a partition-based cache management algorithm for a shared cache. The goal of our algorithm is to find an allocation such that all heterogeneous applications achieve a specified fairness degree while maximizing overall performance. To achieve this goal, we present an adaptive partition framework, which partitions the shared cache among competing applications and dynamically adjusts each partition's size based on its predicted utility with respect to both fairness and performance. We implemented our algorithm in a storage simulator and evaluated its fairness and performance with various workloads. Experimental results show that, compared with LRU, our algorithm achieves a large improvement in fairness and a slight improvement in performance.
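To illustrate the idea of allocating cache by utility rather than by access rate, the following is a minimal, hypothetical sketch (not the paper's actual algorithm): a greedy allocator that repeatedly gives the next cache block to whichever application has the highest predicted marginal hit gain. The utility curves, function name, and block granularity are all assumptions made for illustration.

```python
# Hypothetical greedy utility-based partitioner (illustrative only; the
# paper's adaptive framework also enforces a specified fairness degree).

def greedy_partition(utility_curves, total_blocks):
    """utility_curves[i][s] = predicted hits for application i with s blocks.
    Returns a per-application block allocation summing to total_blocks."""
    alloc = [0] * len(utility_curves)
    for _ in range(total_blocks):
        # Marginal utility of one more block for each application.
        gains = [curve[alloc[i] + 1] - curve[alloc[i]]
                 for i, curve in enumerate(utility_curves)]
        best = max(range(len(gains)), key=gains.__getitem__)
        alloc[best] += 1
    return alloc

# A fast-but-saturating application vs. a slow one with steady gains:
fast_app = [0, 50, 60, 62, 63, 63, 63, 63, 63]      # diminishing returns
slow_app = [0, 20, 40, 60, 80, 100, 120, 140, 160]  # linear gains
print(greedy_partition([fast_app, slow_app], 8))    # → [1, 7]
```

Under LRU, the fast application would capture most of the cache despite its diminishing returns; allocating by marginal utility gives most blocks to the slow application, which actually benefits from them.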