Acceleration of spatial channel model simulation using GPU

Qingqing Dang, Zhisong Bie
DOI: 10.1109/ICCCHINA.2014.7008380
Published in: 2014 IEEE/CIC International Conference on Communications in China (ICCC), October 2014
Citations: 1

Abstract

In traditional link-level simulation, the multiple-input multiple-output (MIMO) channel model is one of the most time-consuming modules, and more realistic geometry-based channel models consume even more time. In this paper, we propose an efficient simulator implementation of the geometry-based spatial channel model (SCM) on a graphics processing unit (GPU). We first analyze the potential parallelism of the SCM module. The SCM simulation comprises generating channel coefficients, generating additive white Gaussian noise (AWGN), filtering the input signals, and adding the noise. Second, we implement all of these parallelizable sub-modules on the GPU using the Open Computing Language (OpenCL). We then apply several effective GPU acceleration techniques to optimize these kernels: out-of-order command queues, data merging, local-memory sharing, and vectorization. Finally, we verify our approach on Nvidia's mid-range GTX 660 GPU. The experimental results show that the proposed GPU implementation achieves a speedup of more than 1000x over a traditional central processing unit (CPU) implementation. The simulation time is close to the processing time of the transmitter and receiver, which makes it possible to build a real-time link-level channel simulator for Long Term Evolution (LTE) or LTE-Advanced systems and software-defined radio. To the best of our knowledge, this is the first work to accelerate the SCM on a GPU. The results of this paper should have significant practical application value.
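The four sub-modules the abstract enumerates (channel coefficient generation, input filtering, AWGN generation, noise addition) can be sketched end to end. The sketch below is a heavily simplified, CPU-side NumPy illustration only: it uses independent complex Gaussian taps per path rather than the full 3GPP ray-based SCM coefficient generation, and the function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def simulate_mimo_link(x, n_rx=2, n_tx=2, n_paths=6, snr_db=10.0, rng=None):
    """Toy sketch of the per-link SCM pipeline from the abstract:
    1) generate channel coefficients, 2) filter the input signal,
    3) generate AWGN, 4) add the noise.
    x: complex baseband input of shape (n_tx, n_samples).
    Note: the Rayleigh tap model below is a simplification, not the
    ray-based SCM coefficient generation of 3GPP TR 25.996."""
    rng = np.random.default_rng(rng)
    n_samples = x.shape[1]

    # 1) Channel coefficients: one complex Gaussian tap per (rx, tx, path),
    #    normalized so the total path power per antenna pair is 1.
    h = (rng.standard_normal((n_rx, n_tx, n_paths))
         + 1j * rng.standard_normal((n_rx, n_tx, n_paths))) / np.sqrt(2 * n_paths)

    # 2) Filtering: convolve each tx stream with its tap line, sum over tx
    #    antennas at each rx antenna ("full" convolution length n+L-1).
    y = np.zeros((n_rx, n_samples + n_paths - 1), dtype=complex)
    for r in range(n_rx):
        for t in range(n_tx):
            y[r] += np.convolve(x[t], h[r, t])

    # 3) AWGN generation, scaled to the requested SNR ...
    sig_pow = np.mean(np.abs(y) ** 2)
    noise_pow = sig_pow / (10 ** (snr_db / 10))
    noise = np.sqrt(noise_pow / 2) * (rng.standard_normal(y.shape)
                                      + 1j * rng.standard_normal(y.shape))
    # 4) ... and noise addition.
    return y + noise
```

On the GPU, each of these steps maps naturally onto data-parallel kernels (one work-item per output sample or per antenna-path pair), which is the parallelism the paper exploits with OpenCL.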