DRAM-page based prediction and prefetching

Haifeng Yu, G. Kedem
{"title":"基于内存页的预测和预取","authors":"Haifeng Yu, G. Kedem","doi":"10.1109/ICCD.2000.878296","DOIUrl":null,"url":null,"abstract":"This paper describes and evaluates DRAM-page based cache-line prediction and prefetching architecture. The scheme takes DRAM access timing into consideration in order to reduce prefetching overhead, amortizing the high cost of DRAM access by fetching two cache lines that reside on the same DRAM-page in a single access. On each DRAM access, one or two cache blocks may be prefetched. We combine three prediction mechanisms: history mechanism, stride, and one block lookahead, make them DRAM page sensitive and deploy them in an effective adaptive prefetching strategy. Our simulation shows that the prefetch mechanism can greatly improve system performance. Using a 32-KB prediction table cache, the prefetching scheme improves performance by 26%-55% on average over a baseline configuration, depending on the memory model. Moreover, the simulation shows that prefetching is more cost-effective than simply increasing L2-cache size or using a one block lookahead prefetching scheme. Simulation results also show that DRAM-page based prefetching yields higher relative performance as processors get faster, making the prefetching scheme more attractive for next generation processors.","PeriodicalId":437697,"journal":{"name":"Proceedings 2000 International Conference on Computer Design","volume":"170 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2000-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"DRAM-page based prediction and prefetching\",\"authors\":\"Haifeng Yu, G. Kedem\",\"doi\":\"10.1109/ICCD.2000.878296\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper describes and evaluates DRAM-page based cache-line prediction and prefetching architecture. The scheme takes DRAM access timing into consideration in order to reduce prefetching overhead, amortizing the high cost of DRAM access by fetching two cache lines that reside on the same DRAM-page in a single access. On each DRAM access, one or two cache blocks may be prefetched. We combine three prediction mechanisms: history mechanism, stride, and one block lookahead, make them DRAM page sensitive and deploy them in an effective adaptive prefetching strategy. Our simulation shows that the prefetch mechanism can greatly improve system performance. Using a 32-KB prediction table cache, the prefetching scheme improves performance by 26%-55% on average over a baseline configuration, depending on the memory model. Moreover, the simulation shows that prefetching is more cost-effective than simply increasing L2-cache size or using a one block lookahead prefetching scheme. 
Simulation results also show that DRAM-page based prefetching yields higher relative performance as processors get faster, making the prefetching scheme more attractive for next generation processors.\",\"PeriodicalId\":437697,\"journal\":{\"name\":\"Proceedings 2000 International Conference on Computer Design\",\"volume\":\"170 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2000-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings 2000 International Conference on Computer Design\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCD.2000.878296\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings 2000 International Conference on Computer Design","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCD.2000.878296","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10

Abstract

This paper describes and evaluates a DRAM-page based cache-line prediction and prefetching architecture. The scheme takes DRAM access timing into consideration in order to reduce prefetching overhead, amortizing the high cost of a DRAM access by fetching two cache lines that reside on the same DRAM page in a single access. On each DRAM access, one or two cache blocks may be prefetched. We combine three prediction mechanisms (a history mechanism, stride prediction, and one-block lookahead), make them DRAM-page sensitive, and deploy them in an effective adaptive prefetching strategy. Our simulations show that the prefetch mechanism can greatly improve system performance. Using a 32-KB prediction table cache, the prefetching scheme improves performance by 26%-55% on average over a baseline configuration, depending on the memory model. Moreover, the simulations show that prefetching is more cost-effective than simply increasing the L2-cache size or using a one-block lookahead prefetching scheme. Simulation results also show that DRAM-page based prefetching yields higher relative performance as processors get faster, making the scheme more attractive for next-generation processors.
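To make the mechanism concrete, the following C sketch illustrates the core idea described in the abstract: on a demand miss that goes to DRAM, consult three predictors and issue a second cache-line fetch only if the predicted line falls on the same DRAM page, so the open page is amortized. This is our illustration, not the paper's implementation; the function names, table sizes, page and line geometry, and the static priority order are all assumptions.

```c
/* Hypothetical sketch of DRAM-page sensitive prefetch candidate selection.
   Not the authors' code; sizes and priority order are assumptions. */
#include <stdint.h>

#define LINE_BITS    6            /* assumed 64-byte cache line  */
#define PAGE_BITS    12           /* assumed 4-KB DRAM page      */
#define HIST_ENTRIES 4096         /* assumed history-table size  */

typedef uint64_t addr_t;

static inline addr_t dram_page(addr_t a) { return a >> PAGE_BITS; }
static inline addr_t line_addr(addr_t a) { return (a >> LINE_BITS) << LINE_BITS; }

/* History predictor: remembers which line followed a given miss line.
   Address 0 serves as the "no prediction" sentinel in this sketch. */
static addr_t hist_next[HIST_ENTRIES];

/* Training hook: on each miss, record it as the successor of the
   previous miss so the history predictor can replay the pair later. */
void train_history(addr_t prev_miss, addr_t cur_miss) {
    hist_next[(prev_miss >> LINE_BITS) % HIST_ENTRIES] = line_addr(cur_miss);
}

static addr_t predict_history(addr_t miss) {
    return hist_next[(miss >> LINE_BITS) % HIST_ENTRIES];
}

/* Stride predictor for a single stream (real designs track per-PC):
   predict only when the same nonzero stride repeats. */
static addr_t  last_miss;
static int64_t last_stride;

static addr_t predict_stride(addr_t miss) {
    int64_t stride = (int64_t)miss - (int64_t)last_miss;
    addr_t  pred   = (stride != 0 && stride == last_stride) ? miss + stride : 0;
    last_stride = stride;
    last_miss   = miss;
    return pred;
}

/* One-block lookahead: simply the next sequential cache line. */
static addr_t predict_obl(addr_t miss) {
    return miss + ((addr_t)1 << LINE_BITS);
}

/* On a DRAM access for `miss`, pick at most one extra line to fetch in
   the same access. A candidate is used only if it stays on the same
   DRAM page; the fixed priority order here is purely illustrative. */
addr_t choose_prefetch(addr_t miss) {
    addr_t cand[3] = { predict_history(miss),
                       predict_stride(miss),
                       predict_obl(miss) };
    for (int i = 0; i < 3; i++) {
        addr_t p = line_addr(cand[i]);
        if (p != 0 && p != line_addr(miss) &&
            dram_page(p) == dram_page(miss))
            return p;             /* fetch alongside the demand line */
    }
    return 0;                     /* no same-page candidate          */
}
```

In the paper's scheme the three predictors are combined adaptively rather than by a fixed priority; the static order above is only a placeholder to keep the sketch self-contained.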