An improved quantization scheme for lattice-reduction aided MIMO detection
Tien Due Nguyen, T. Fujino, X. Tran
2007 International Symposium on Communications and Information Technologies
DOI: 10.1587/TRANSFUN.E96.A.2405
Citations: 2
Abstract
Lattice-reduction-aided (LRA) detection was recently introduced by Yao and Wornell as a new type of MIMO detector. However, the quantization step of the LRA detector is only suboptimal. In this paper, we propose a list quantization scheme that reduces the effect of quantization error: a list of candidate symbols is generated around the original LRA symbol estimate. Simulations show that improved performance is obtained while the additional complexity remains reasonably small. The performance of LRA-MMSE with list quantization is shown to be very close to ML performance for both 4-QAM and 16-QAM modulation.
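The idea described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it uses LRA-ZF rather than the LRA-MMSE evaluated in the paper, restricts symbols to ±1 per real dimension (the real-valued decomposition of 4-QAM), and assumes one plausible list construction — perturbing each rounded coordinate in the reduced domain by −1/0/+1 and keeping the candidate with the smallest residual. The function names, the LLL parameter δ = 0.75, and the list-building rule are all illustrative choices, not taken from the source.

```python
import numpy as np
from itertools import product


def lll_reduce(B, delta=0.75):
    """Plain real-valued LLL reduction (textbook version, not optimized).
    Returns (B_reduced, T) with B_reduced = B @ T and T unimodular."""
    B = B.astype(float).copy()
    n = B.shape[1]
    T = np.eye(n)

    def gso(B):
        # Gram-Schmidt orthogonalization with mu coefficients
        Q = np.zeros_like(B)
        mu = np.eye(n)
        for i in range(n):
            Q[:, i] = B[:, i]
            for j in range(i):
                mu[i, j] = B[:, i] @ Q[:, j] / (Q[:, j] @ Q[:, j])
                Q[:, i] -= mu[i, j] * Q[:, j]
        return Q, mu

    k = 1
    while k < n:
        Q, mu = gso(B)
        for j in range(k - 1, -1, -1):           # size reduction
            q = round(mu[k, j])
            if q != 0:
                B[:, k] -= q * B[:, j]
                T[:, k] -= q * T[:, j]
                Q, mu = gso(B)
        if Q[:, k] @ Q[:, k] >= (delta - mu[k, k - 1] ** 2) * (Q[:, k - 1] @ Q[:, k - 1]):
            k += 1                               # Lovasz condition holds
        else:
            B[:, [k - 1, k]] = B[:, [k, k - 1]]  # swap columns, step back
            T[:, [k - 1, k]] = T[:, [k, k - 1]]
            k = max(k - 1, 1)
    return B, T


def lra_zf_list_detect(y, H, symbols=(-1.0, 1.0)):
    """LRA-ZF detection of +/-1 symbols with a list-quantization step
    (illustrative: the paper's list construction may differ)."""
    n = H.shape[1]
    # Map s in {-1,+1} onto the integer lattice: s = 2z - 1
    y_shift = (y + H @ np.ones(n)) / 2.0         # = H z + noise/2
    Hr, T = lll_reduce(H)
    c = np.linalg.pinv(Hr) @ y_shift             # ZF in the reduced domain
    c0 = np.round(c)                             # plain LRA quantization

    def to_symbols(cand):
        s = 2.0 * (T @ cand) - 1.0               # back to the symbol domain
        return s if np.all(np.isin(s, symbols)) else None

    # Baseline: unperturbed estimate, forced onto the constellation
    s_best = np.where(2.0 * (T @ c0) - 1.0 >= 0, 1.0, -1.0)
    m_best = np.linalg.norm(y - H @ s_best)
    for d in product((-1, 0, 1), repeat=n):      # candidate list
        s = to_symbols(c0 + np.array(d, float))
        if s is None:
            continue                             # outside the constellation
        m = np.linalg.norm(y - H @ s)
        if m < m_best:                           # keep the ML-best candidate
            s_best, m_best = s, m
    return s_best


# Ill-conditioned 2x2 channel, noiseless observation of s = [1, -1]
H = np.array([[1.0, 0.9],
              [0.0, 0.1]])
s = np.array([1.0, -1.0])
print(lra_zf_list_detect(H @ s, H))              # expect [ 1. -1.]
```

The list here has at most 3^n entries, which is only viable for small antenna counts; the point of the paper's scheme is that a short list around the LRA estimate already recovers most of the gap to ML.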