{"title":"Acceleration of spatial channel model simulation using GPU","authors":"Qingqing Dang, Zhisong Bie","doi":"10.1109/ICCCHINA.2014.7008380","DOIUrl":null,"url":null,"abstract":"In traditional link level simulation, multiple-input and multiple-output (MIMO) channel model is one of the most time-consuming modules. When using more realistic geometry-based channel models, it consumes more time. In this paper, we propose an efficient simulator implementation of geometry-based spatial channel model (SCM) on graphics processing unit (GPU). We first analyze the potential parallelism of the SCM module. The SCM simulation includes generating channel coefficients, generating additive white Gaussian noise (AWGN), filtering input signals and adding noise. Secondly, we implement all those parallelizable sub-modules on GPU using the open computing language (OpenCL). Then, a lot of effective GPU accelerating approaches are employed to make all those GPU functions highly optimized. The approaches include out-of-order command queue, merging data, sharing local memory and vectorization. At last, we verify our approaches on Nvidia's mid-range GPU GTX660. The experiment result shows that our newly proposed GPU implementation achieves more than 1000 times speedup compared with the implementation on traditional central processing unit (CPU). The simulation time is close to the processing time of transmitter and receiver, which makes it possible to construct a real-time channel simulator of link level for long term evolution (LTE) or LTE-advanced system and software-defined radio. As far as we know, we are the first to accelerate the SCM model on GPU. The results of this paper should have significant application value in practice.","PeriodicalId":353402,"journal":{"name":"2014 IEEE/CIC International Conference on Communications in China (ICCC)","volume":"99 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE/CIC International Conference on Communications in China (ICCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCCHINA.2014.7008380","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
In traditional link-level simulation, the multiple-input multiple-output (MIMO) channel model is one of the most time-consuming modules, and the cost grows further when more realistic geometry-based channel models are used. In this paper, we propose an efficient implementation of a geometry-based spatial channel model (SCM) simulator on a graphics processing unit (GPU). We first analyze the potential parallelism of the SCM module, whose simulation comprises generating channel coefficients, generating additive white Gaussian noise (AWGN), filtering the input signals, and adding the noise. Second, we implement all of the parallelizable sub-modules on the GPU using the Open Computing Language (OpenCL). We then apply a number of effective GPU acceleration techniques, including out-of-order command queues, data merging, local-memory sharing, and vectorization, so that all of these GPU functions are highly optimized. Finally, we verify our approach on Nvidia's mid-range GTX 660 GPU. The experimental results show that the proposed GPU implementation achieves a speedup of more than 1000x over an implementation on a traditional central processing unit (CPU). The simulation time becomes comparable to the processing time of the transmitter and receiver, which makes it possible to build a real-time link-level channel simulator for Long Term Evolution (LTE) or LTE-Advanced systems and for software-defined radio. To the best of our knowledge, this is the first work to accelerate the SCM on a GPU, and the results should have significant practical application value.
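The abstract does not include source code, but the per-ray structure of the SCM makes the channel-coefficient stage, the first of the four sub-modules listed above, easy to picture. The following is a minimal, hypothetical OpenCL C kernel sketch of that stage: each work-item computes one complex coefficient h(u, s, n) at time t by summing the M sub-path rays of the 3GPP SCM (antenna-pattern gains and per-path shadowing are omitted for brevity). All kernel and parameter names here are our own illustration, not taken from the paper.

/* Hypothetical sketch (names are ours, not the authors'): one work-item
 * per (MS antenna u, BS antenna s, path n) sums the M sub-path rays of
 * the 3GPP SCM into a single complex coefficient for time sample t.    */
__kernel void scm_coeff(
    __global float2 *h,          /* output coefficients as (re, im)      */
    __global const float *aod,   /* sub-path angles of departure [n][m]  */
    __global const float *aoa,   /* sub-path angles of arrival   [n][m]  */
    __global const float *phase, /* random initial phases        [n][m]  */
    const float k_ds,            /* 2*pi/lambda * BS element spacing     */
    const float k_du,            /* 2*pi/lambda * MS element spacing     */
    const float k_v,             /* 2*pi/lambda * MS speed (Doppler)     */
    const float theta_v,         /* MS direction of travel               */
    const float sqrt_pn_m,       /* sqrt(P_n / M) path power factor      */
    const int   M,               /* sub-paths per path (20 in the SCM)   */
    const float t)               /* current sample time                  */
{
    int u = get_global_id(0);    /* MS antenna index */
    int s = get_global_id(1);    /* BS antenna index */
    int n = get_global_id(2);    /* path index       */

    float2 acc = (float2)(0.0f, 0.0f);
    for (int m = 0; m < M; ++m) {
        float a_d = aod[n * M + m];
        float a_a = aoa[n * M + m];
        /* total phase of ray m: array geometry + random phase + Doppler */
        float ph = k_ds * s * sin(a_d)
                 + k_du * u * sin(a_a)
                 + phase[n * M + m]
                 + k_v * cos(a_a - theta_v) * t;
        acc += (float2)(cos(ph), sin(ph));
    }
    int nu = get_global_size(0);
    int ns = get_global_size(1);
    h[(n * nu + u) * ns + s] = sqrt_pn_m * acc;
}

Launched over a 3-D NDRange of (MS antennas x BS antennas x paths) for every time sample, a kernel of this shape exposes exactly the kind of massive, independent parallelism the abstract describes; the reported optimizations (out-of-order command queues, merged data transfers, local-memory sharing, vectorization) would then layer on top of such a baseline.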