Achieving Fair or Differentiated Cache Sharing in Power-Constrained Chip Multiprocessors

Xiaorui Wang, Kai Ma, Yefu Wang
{"title":"Achieving Fair or Differentiated Cache Sharing in Power-Constrained Chip Multiprocessors","authors":"Xiaorui Wang, Kai Ma, Yefu Wang","doi":"10.1109/ICPP.2010.9","DOIUrl":null,"url":null,"abstract":"Limiting the peak power consumption of chip multiprocessors (CMPs) has recently received a lot of attention. In order to enable chip-level power capping, the peak power consumption of on-chip L2 caches in a CMP often needs to be constrained by dynamically transitioning selected cache banks into low-power modes. However, dynamic cache resizing for power capping may cause undesired long cache access latencies, and even thread starving and thrashing, for the applications running on the CMP. In this paper, we propose a novel cache management strategy that can limit the peak power consumption of L2 caches and provide fairness guarantees, such that the cache access latencies of the application threads co-scheduled on the CMP are impacted more uniformly. Our strategy is also extended to provide differentiated cache latency guarantees that can help the OS to enforce the desired thread priorities at the architectural level and achieve desired rates of thread progress for co-scheduled applications. Our solution features a two-tier control architecture rigorously designed based on advanced feedback control theory for guaranteed control accuracy and system stability. 
Extensive experimental results demonstrate that our solution can achieve the desired cache power capping, fair or differentiated cache sharing, and power-performance tradeoffs for many applications.","PeriodicalId":180554,"journal":{"name":"2010 39th International Conference on Parallel Processing","volume":"40 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 39th International Conference on Parallel Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPP.2010.9","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 16

Abstract

Limiting the peak power consumption of chip multiprocessors (CMPs) has recently received significant attention. To enable chip-level power capping, the peak power consumption of on-chip L2 caches in a CMP often needs to be constrained by dynamically transitioning selected cache banks into low-power modes. However, dynamic cache resizing for power capping may cause undesirably long cache access latencies, and even thread starvation and thrashing, for the applications running on the CMP. In this paper, we propose a novel cache management strategy that can limit the peak power consumption of L2 caches while providing fairness guarantees, such that the cache access latencies of the application threads co-scheduled on the CMP are impacted more uniformly. Our strategy is also extended to provide differentiated cache latency guarantees, which help the OS enforce desired thread priorities at the architectural level and achieve desired rates of progress for co-scheduled applications. Our solution features a two-tier control architecture rigorously designed based on advanced feedback control theory for guaranteed control accuracy and system stability. Extensive experimental results demonstrate that our solution can achieve the desired cache power capping, fair or differentiated cache sharing, and power-performance tradeoffs for many applications.
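To make the two-tier idea concrete, the sketch below shows one possible shape of such a control loop. This is an illustrative simplification, not the paper's actual design: the constants (`POWER_PER_BANK`, `TOTAL_BANKS`), the integral-style outer controller, and the one-bank-at-a-time rebalancing rule in the inner controller are all assumptions made here for readability; the paper instead derives its controllers rigorously from feedback control theory.

```python
POWER_PER_BANK = 2.0   # assumed watts drawn per active cache bank
TOTAL_BANKS = 32       # assumed total number of L2 cache banks

def tier1_power_controller(power_budget, measured_power, active_banks):
    """Outer tier: adjust the active bank count toward the power budget.

    A simple integral-style step: convert the power error into a number
    of banks to enable (positive error) or disable (negative error).
    """
    error = power_budget - measured_power
    step = int(error / POWER_PER_BANK)
    return max(1, min(TOTAL_BANKS, active_banks + step))

def tier2_fairness_controller(latencies, allocations):
    """Inner tier: move one bank from the fastest thread to the slowest.

    Repeated each control period, this nudges per-thread cache access
    latencies toward uniformity under the bank budget set by tier 1.
    """
    slowest = max(range(len(latencies)), key=latencies.__getitem__)
    fastest = min(range(len(latencies)), key=latencies.__getitem__)
    if slowest != fastest and allocations[fastest] > 1:
        allocations[fastest] -= 1
        allocations[slowest] += 1
    return allocations

# One control period: power is over a 40 W budget, so tier 1 shrinks
# the active bank count; tier 2 then rebalances banks among 3 threads.
banks = tier1_power_controller(40.0, 52.0, 26)          # -> 20
alloc = tier2_fairness_controller([9.1, 4.3, 6.0], [6, 10, 4])  # -> [7, 9, 4]
```

Differentiated (rather than fair) sharing would follow the same structure, with the inner tier driving latencies toward per-thread targets weighted by OS-assigned priorities instead of toward a single uniform value.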