Automorphism Ensemble Decoding on GPU: Achieving High Throughput and Low Latency for Polar and RM Codes

Impact Factor 4.6 · CAS Tier 2 (Engineering & Technology) · JCR Q1, Engineering, Electrical & Electronic
Yansong Li, Kairui Tian, Rongke Liu
{"title":"Automorphism Ensemble Decoding on GPU: Achieving High Throughput and Low Latency for Polar and RM Codes","authors":"Yansong Li;Kairui Tian;Rongke Liu","doi":"10.1109/TSP.2025.3570740","DOIUrl":null,"url":null,"abstract":"Automorphism ensemble decoding (AED) is a highly parallel approach that enables decoding of polar and Reed-Muller (RM) codes with automorphisms, offering a practical solution with near-maximum likelihood (ML) performance and manageable computational complexity. To meet the growing demands for high throughput and low latency in cloud and virtual random access networks, this paper presents a graphics processing unit (GPU)-based AED architecture for polar and RM codes, utilizing low-complexity successive cancellation (SC) and small list SC (SCL) decoders as the constituent of AED. The proposed architecture exploits the inherent parallelism of AED to optimize decoding tasks on the GPU, significantly enhancing throughput by efficiently harnessing the massive parallel processing capabilities of the GPU. Additionally, improved thread mapping and data management techniques substantially reduce latency for automorphism ensemble SC (Aut-SC) decoding, while a low-latency sorting mechanism further accelerates automorphism ensemble SCL (Aut-SCL) decoding. Experimental results on an NVIDIA RTX 4090 demonstrate that the proposed Aut-SC decoder, with an ensemble size of 8, achieves a throughput exceeding 17 Gbps under highly parallelized batch processing. Compared to the state-of-the-art software-based SCL decoders, the proposed GPU-based Aut-SC and Aut-SCL architectures outperform existing solutions by factors of up to 28$\\boldsymbol{\\times}$ and 10$\\boldsymbol{\\times}$, respectively, in normalized throughput while maintaining the same or even superior error correction performance.","PeriodicalId":13330,"journal":{"name":"IEEE Transactions on Signal Processing","volume":"73 ","pages":"2227-2242"},"PeriodicalIF":4.6000,"publicationDate":"2025-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/11005721/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
引用次数: 0

Abstract

Automorphism ensemble decoding (AED) is a highly parallel approach that enables decoding of polar and Reed-Muller (RM) codes with automorphisms, offering a practical solution with near-maximum likelihood (ML) performance and manageable computational complexity. To meet the growing demands for high throughput and low latency in cloud and virtual radio access networks, this paper presents a graphics processing unit (GPU)-based AED architecture for polar and RM codes, utilizing low-complexity successive cancellation (SC) and small-list SC (SCL) decoders as the constituent decoders of AED. The proposed architecture exploits the inherent parallelism of AED to optimize decoding tasks on the GPU, significantly enhancing throughput by efficiently harnessing the massive parallel processing capabilities of the GPU. Additionally, improved thread mapping and data management techniques substantially reduce latency for automorphism ensemble SC (Aut-SC) decoding, while a low-latency sorting mechanism further accelerates automorphism ensemble SCL (Aut-SCL) decoding. Experimental results on an NVIDIA RTX 4090 demonstrate that the proposed Aut-SC decoder, with an ensemble size of 8, achieves a throughput exceeding 17 Gbps under highly parallelized batch processing. The proposed GPU-based Aut-SC and Aut-SCL architectures outperform state-of-the-art software-based SCL decoders by factors of up to 28× and 10×, respectively, in normalized throughput while maintaining the same or even superior error-correction performance.
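
To make the decoding flow concrete, below is a minimal NumPy sketch of Aut-SC decoding for a Reed-Muller code: the channel LLRs are permuted by a code automorphism, each permuted vector is decoded by an independent SC run, and the candidate with the best correlation (ML) metric is kept. The RM(2, 5) example, the random affine permutation generator, and all function names are illustrative assumptions for this sketch, not the paper's GPU implementation.

```python
# Minimal NumPy sketch of automorphism-ensemble SC (Aut-SC) decoding for a
# Reed-Muller code. Assumptions: RM(r, m) with encoder x = u F^{⊗m},
# random affine coordinate permutations as automorphisms, min-sum SC as the
# constituent decoder, correlation metric for the final ML selection.
import numpy as np

def rm_frozen_mask(m, r):
    # RM(r, m) with x = u F^{⊗m}: bit i carries information iff popcount(i) >= m - r
    return np.array([bin(i).count("1") < m - r for i in range(1 << m)])

def polar_encode(u):
    # In-place butterfly implementation of x = u F^{⊗m}, F = [[1,0],[1,1]]
    x, N, half = u.copy(), len(u), 1
    while half < N:
        for i in range(0, N, 2 * half):
            x[i:i + half] ^= x[i + half:i + 2 * half]
        half *= 2
    return x

def sc_decode(llr, frozen):
    # Recursive min-sum SC decoder; returns (info-bit estimates, re-encoded codeword)
    if len(llr) == 1:
        u = 0 if frozen[0] else int(llr[0] < 0)
        return np.array([u]), np.array([u])
    half = len(llr) // 2
    a, b = llr[:half], llr[half:]
    l_f = np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))  # f node (min-sum)
    u_l, x_l = sc_decode(l_f, frozen[:half])
    l_g = b + (1 - 2 * x_l) * a                                       # g node
    u_r, x_r = sc_decode(l_g, frozen[half:])
    return np.concatenate([u_l, u_r]), np.concatenate([x_l ^ x_r, x_r])

def random_affine_perm(m, rng):
    # Random invertible affine map z -> A z + b on GF(2)^m applied to the binary
    # coordinate labels; every such map is an automorphism of RM(r, m).
    L = np.tril(rng.integers(0, 2, (m, m)), -1) + np.eye(m, dtype=int)
    U = np.triu(rng.integers(0, 2, (m, m)), 1) + np.eye(m, dtype=int)
    A, b = (L @ U) % 2, rng.integers(0, 2, m)                  # L @ U is invertible over GF(2)
    bits = (np.arange(1 << m)[:, None] >> np.arange(m)) & 1    # index -> bit vector
    new_bits = (bits @ A.T + b) % 2
    return (new_bits << np.arange(m)).sum(axis=1)              # bit vector -> index

def aut_sc_decode(llr, frozen, perms):
    # One SC decode per automorphism; keep the candidate with the largest correlation.
    best, best_metric = None, -np.inf
    for p in perms:
        _, x_perm = sc_decode(llr[p], frozen)
        cand = np.empty_like(x_perm)
        cand[p] = x_perm                                       # map back to original order
        metric = np.sum((1 - 2 * cand) * llr)                  # ML selection metric
        if metric > best_metric:
            best, best_metric = cand, metric
    return best

# Toy run: RM(2, 5), ensemble size 8, BPSK over an AWGN channel
rng = np.random.default_rng(0)
m, r, M, sigma = 5, 2, 8, 0.5
frozen = rm_frozen_mask(m, r)
u = np.where(frozen, 0, rng.integers(0, 2, 1 << m))
x = polar_encode(u)
llr = 2.0 * ((1 - 2 * x) + sigma * rng.normal(size=1 << m)) / sigma**2
perms = [random_affine_perm(m, rng) for _ in range(M)]
x_hat = aut_sc_decode(llr, frozen, perms)
print("decoded == transmitted:", np.array_equal(x_hat, x))
```

The ensemble members share nothing until the final metric comparison, which is the inherent parallelism that the proposed GPU architecture exploits across threads and batched codewords.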
Source journal: IEEE Transactions on Signal Processing (Engineering: Electrical & Electronic)
CiteScore: 11.20
Self-citation rate: 9.30%
Publications: 310
Average review time: 3.0 months
Journal description: The IEEE Transactions on Signal Processing covers novel theory, algorithms, performance analyses and applications of techniques for the processing, understanding, learning, retrieval, mining, and extraction of information from signals. The term “signal” includes, among others, audio, video, speech, image, communication, geophysical, sonar, radar, medical and musical signals. Examples of topics of interest include, but are not limited to, information processing and the theory and application of filtering, coding, transmitting, estimating, detecting, analyzing, recognizing, synthesizing, recording, and reproducing signals.