Simultaneous continual flow pipeline architecture

K. Jothi, Mageda Sharafeddine, Haitham Akkary
2011 IEEE 29th International Conference on Computer Design (ICCD), October 9, 2011
DOI: 10.1109/ICCD.2011.6081387
Citations: 8

Abstract

Since the introduction of the first industrial out-of-order superscalar processors in the 1990s, instruction buffers and cache sizes have kept increasing with every new generation of out-of-order cores. The motivation behind this continuous evolution has been the performance of single-thread applications. Performance gains from larger instruction buffers and caches come at the expense of area, power, and complexity. We show that this is not the most energy-efficient way to achieve performance. Instead, sizing the instruction buffers to the minimum necessary for the common case of L1 data cache hits, and using a new latency-tolerant microarchitecture to handle loads that miss the L1 data cache, improves execution time and energy consumption on SpecCPU 2000 benchmarks by an average of 10% and 12% respectively, compared to a large superscalar baseline. Our non-blocking architecture outperforms other latency-tolerant architectures, such as Continual Flow Pipelines, by up to 15% on the same SpecCPU 2000 benchmarks.
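The core idea described above, as in Continual Flow Pipeline designs generally, is that instructions dependent on an L1-missing load need not occupy the (deliberately small) issue queue: they can be drained into a separate slice buffer and replayed when the miss data returns, so independent instructions keep flowing. The toy model below sketches that mechanism under stated assumptions; the class, register names, and "poison bit" bookkeeping are illustrative inventions, not the paper's actual design.

```python
# Toy sketch of the latency-tolerant (continual-flow) idea: instructions
# whose sources depend on an outstanding L1 miss are deferred into a
# slice buffer instead of blocking issue, then replayed on miss return.
# All structures here are illustrative assumptions, not the real design.
from collections import deque

class SliceBufferPipeline:
    def __init__(self):
        self.regs = {}               # register -> completed value
        self.slice_buffer = deque()  # deferred miss-dependent instructions
        self.poisoned = set()        # registers waiting on an outstanding miss

    def issue(self, dest, srcs, op, l1_hit=True):
        """Issue one instruction; defer it if a source is poisoned,
        or poison its destination if it is itself a missing load."""
        if any(s in self.poisoned for s in srcs):
            # Dependent slice: drain to the buffer, keep the pipeline flowing.
            self.poisoned.add(dest)
            self.slice_buffer.append((dest, srcs, op))
            return None
        if not l1_hit:
            # The load itself missed: its data arrives later via miss_returns().
            self.poisoned.add(dest)
            return None
        self.regs[dest] = op(*(self.regs[s] for s in srcs))
        return self.regs[dest]

    def miss_returns(self, reg, value):
        """Miss data arrives from L2/memory: unpoison and replay the slice."""
        self.regs[reg] = value
        self.poisoned.discard(reg)
        still_deferred = deque()
        while self.slice_buffer:
            dest, srcs, op = self.slice_buffer.popleft()
            if any(s in self.poisoned for s in srcs):
                still_deferred.append((dest, srcs, op))  # still blocked
            else:
                self.regs[dest] = op(*(self.regs[s] for s in srcs))
                self.poisoned.discard(dest)
        self.slice_buffer = still_deferred

p = SliceBufferPipeline()
p.regs["r1"] = 5
p.issue("r2", ["r1"], lambda a: a, l1_hit=False)  # load misses L1
p.issue("r3", ["r2"], lambda a: a + 1)            # dependent: deferred
p.issue("r4", ["r1"], lambda a: a * 2)            # independent: executes now
p.miss_returns("r2", 7)                           # r3 replays from the slice
```

Note how `r4` completes while the miss is outstanding, which is precisely what lets the issue queue stay small without stalling on cache misses.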