Asynchronous Memory Access Unit: Exploiting Massive Parallelism for Far Memory Access

IF 1.5 · CAS Tier 3 (Computer Science) · Q4, COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
Luming Wang, Xu Zhang, Songyue Wang, Zhuolun Jiang, Tianyue Lu, Mingyu Chen, Siwei Luo, Keji Huang
{"title":"Asynchronous Memory Access Unit: Exploiting Massive Parallelism for Far Memory Access","authors":"Luming Wang, Xu Zhang, Songyue Wang, Zhuolun Jiang, Tianyue Lu, Mingyu Chen, Siwei Luo, Keji Huang","doi":"10.1145/3663479","DOIUrl":null,"url":null,"abstract":"<p>The growing memory demands of modern applications have driven the adoption of far memory technologies in data centers to provide cost-effective, high-capacity memory solutions. However, far memory presents new performance challenges because its access latencies are significantly longer and more variable than local DRAM. For applications to achieve acceptable performance on far memory, a high degree of memory-level parallelism (MLP) is needed to tolerate the long access latency. </p><p>While modern out-of-order processors are capable of exploiting a certain degree of MLP, they are constrained by resource limitations and hardware complexity. The key obstacle is the synchronous memory access semantics of traditional load/store instructions, which occupy critical hardware resources for a long time. The longer far memory latencies exacerbate this limitation. </p><p>This paper proposes a set of Asynchronous Memory Access Instructions (AMI) and its supporting function unit, Asynchronous Memory Access Unit (AMU), inside contemporary Out-of-Order Core. AMI separates memory request issuing from response handling to reduce resource occupation. Additionally, AMU architecture supports up to several hundreds of asynchronous memory requests through re-purposing a portion of L2 Cache as scratchpad memory (SPM) to provide sufficient temporal storage. Together with a coroutine-based programming framework, this scheme can achieve significantly higher MLP for hiding far memory latencies. </p><p>Evaluation with a cycle-accurate simulation shows AMI achieves 2.42 × speedup on average for memory-bound benchmarks with 1<i>μ</i>s additional far memory latency. Over 130 outstanding requests are supported with 26.86 × speedup for GUPS (random access) with 5 <i>μ</i>s latency. These demonstrate how the techniques tackle far memory performance impacts through explicit MLP expression and latency adaptation.</p>","PeriodicalId":50920,"journal":{"name":"ACM Transactions on Architecture and Code Optimization","volume":"6 1","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Architecture and Code Optimization","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3663479","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

The growing memory demands of modern applications have driven the adoption of far memory technologies in data centers to provide cost-effective, high-capacity memory solutions. However, far memory presents new performance challenges because its access latencies are significantly longer and more variable than those of local DRAM. For applications to achieve acceptable performance on far memory, a high degree of memory-level parallelism (MLP) is needed to tolerate the long access latency.

While modern out-of-order processors can exploit a certain degree of MLP, they are constrained by resource limitations and hardware complexity. The key obstacle is the synchronous memory access semantics of traditional load/store instructions, which occupy critical hardware resources for extended periods. The longer latencies of far memory exacerbate this limitation.

This paper proposes a set of Asynchronous Memory Access Instructions (AMI) and its supporting functional unit, the Asynchronous Memory Access Unit (AMU), inside a contemporary out-of-order core. AMI separates memory request issuing from response handling to reduce resource occupation. Additionally, the AMU architecture supports up to several hundred outstanding asynchronous memory requests by repurposing a portion of the L2 cache as scratchpad memory (SPM) to provide sufficient temporary storage. Together with a coroutine-based programming framework, this scheme achieves significantly higher MLP for hiding far memory latencies.

Evaluation with a cycle-accurate simulator shows that AMI achieves a 2.42× average speedup for memory-bound benchmarks with 1 μs of additional far memory latency. It sustains over 130 outstanding requests and delivers a 26.86× speedup for GUPS (random access) with 5 μs latency. These results demonstrate how the proposed techniques mitigate the performance impact of far memory through explicit expression of MLP and adaptation to long latencies.
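
The scheme described above separates request issuing from response handling and relies on a coroutine-style software framework to keep many far-memory requests in flight. The C sketch below illustrates that usage pattern in outline only: ami_issue_load(), ami_poll(), the spm[] buffer, and the fixed context count are hypothetical stand-ins (stubbed with ordinary loads so the code compiles), not the paper's actual AMI mnemonics, AMU interface, or coroutine runtime.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_CTX 8   /* number of coroutine contexts, i.e. requests kept in flight */

static uint64_t spm[NUM_CTX];   /* stand-in for the repurposed L2 scratchpad slots */

/* Hypothetical intrinsic: issue an asynchronous load of *addr into SPM slot
   `slot` and return immediately. Stubbed with an ordinary load so this compiles. */
static void ami_issue_load(const uint64_t *addr, int slot) {
    spm[slot] = *addr;
}

/* Hypothetical intrinsic: has the response for `slot` arrived? The stub
   always reports completion. */
static int ami_poll(int slot) { (void)slot; return 1; }

/* Each context acts as a tiny coroutine: issue a far-memory load, yield to the
   other contexts, and consume the response once it is available. */
typedef struct {
    size_t idx;      /* which element this context is fetching */
    int    pending;  /* 1 while a request is outstanding */
} ctx_t;

int main(void) {
    uint64_t far_array[64];                       /* pretend this lives in far memory */
    for (size_t i = 0; i < 64; i++) far_array[i] = i * i;

    ctx_t ctx[NUM_CTX] = {0};
    uint64_t sum = 0;
    size_t next = 0, done = 0, total = 64;

    /* Prime one outstanding request per context. */
    for (int c = 0; c < NUM_CTX && next < total; c++, next++) {
        ctx[c] = (ctx_t){ .idx = next, .pending = 1 };
        ami_issue_load(&far_array[next], c);
    }

    /* Round-robin over contexts instead of stalling on any single access;
       this is how a coroutine framework exposes MLP to the hardware. */
    while (done < total) {
        for (int c = 0; c < NUM_CTX; c++) {
            if (!ctx[c].pending || !ami_poll(c)) continue;
            sum += spm[c];                        /* consume the arrived response */
            ctx[c].pending = 0;
            done++;
            if (next < total) {                   /* reuse the slot for a new request */
                ctx[c] = (ctx_t){ .idx = next, .pending = 1 };
                ami_issue_load(&far_array[next], c);
                next++;
            }
        }
    }
    printf("sum = %llu\n", (unsigned long long)sum);
    return 0;
}

On real asynchronous-access hardware the poll would often report "not yet", and switching to another context at that point is what converts the long far-memory latency into many overlapped outstanding requests rather than a stalled pipeline.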

Source Journal

ACM Transactions on Architecture and Code Optimization
Category: Engineering & Technology / Computer Science: Theory & Methods
CiteScore: 3.60
Self-citation rate: 6.20%
Articles published: 78
Review time: 6-12 weeks
Journal description: ACM Transactions on Architecture and Code Optimization (TACO) focuses on hardware, software, and system research spanning the fields of computer architecture and code optimization. Articles that appear in TACO will either present new techniques and concepts or report on experiences and experiments with actual systems. Insights useful to architects, hardware or software developers, designers, builders, and users will be emphasized.