{"title":"A High-throughput Architecture for Lossless Decompression on FPGA Designed Using HLS (Abstract Only)","authors":"Jie Lei, Yu-Ting Chen, Yunsong Li, J. Cong","doi":"10.1145/2847263.2847305","DOIUrl":null,"url":null,"abstract":"In the field of big data applications, lossless data compression and decompression can play an important role in improving the data center's efficiency in storage and distribution of data. To avoid becoming a performance bottleneck, they must be accelerated to have a capability of high speed data processing. As FPGAs begin to be deployed as compute accelerators in the data centers for its advantages of massive parallel customized processing capability, power efficiency and hardware reconfiguration. It is promising and interesting to use FPGAs for acceleration of data compression and decompression. The conventional development of FPGA accelerators using hardware description language costs much more design efforts than that of CPUs or GPUs. High level synthesis (HLS) can be used to greatly improve the design productivity. In this paper, we present a solution for accelerating lossless data decompression on FPGA by using HLS. With a pipelined data-flow structure, the proposed decompression accelerator can perform static Huffman decoding and LZ77 decompression at a very high throughput rate. According to the experimental results conducted on FPGA with the Calgary Corpus data benchmark, the average data throughput of the proposed decompression core achieves to 4.6 Gbps while running at 200 MHz.","PeriodicalId":438572,"journal":{"name":"Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays","volume":"56 8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2847263.2847305","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
In the field of big data applications, lossless data compression and decompression can play an important role in improving a data center's efficiency in storing and distributing data. To avoid becoming a performance bottleneck, these operations must be accelerated to sustain high-speed data processing. As FPGAs are increasingly deployed as compute accelerators in data centers for their massively parallel customized processing capability, power efficiency, and hardware reconfigurability, it is promising to use FPGAs to accelerate data compression and decompression. Conventional development of FPGA accelerators in a hardware description language requires much more design effort than development for CPUs or GPUs; high-level synthesis (HLS) can greatly improve design productivity. In this paper, we present a solution for accelerating lossless data decompression on FPGA using HLS. With a pipelined dataflow structure, the proposed decompression accelerator performs static Huffman decoding and LZ77 decompression at a very high throughput rate. In experiments on FPGA with the Calgary Corpus benchmark, the average data throughput of the proposed decompression core reaches 4.6 Gbps while running at 200 MHz.
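The abstract describes a two-stage pipeline: a static Huffman decoder feeding an LZ77 decompressor, with the stages overlapped via a dataflow structure. Below is a minimal sketch of how such a structure is typically expressed in HLS C++ (Vivado HLS conventions assumed: hls::stream, the DATAFLOW pragma). The function names, token format, 32 KB window size, and the placeholder Huffman stage are illustrative assumptions, not the authors' actual design, which is not detailed in this abstract.

```cpp
// Minimal HLS C++ sketch of a two-stage dataflow decompressor.
// ASSUMPTIONS: Vivado HLS conventions (hls::stream, DATAFLOW pragma);
// the token format, 32 KB window, and stage bodies are illustrative,
// not the paper's actual design.
#include <hls_stream.h>
#include <ap_int.h>

// A decoded LZ77 token: a literal byte or a (length, distance) match.
// length == 0 with is_literal == 0 is used here as an end-of-stream sentinel.
struct Token {
    ap_uint<1>  is_literal;
    ap_uint<8>  literal;
    ap_uint<16> length;
    ap_uint<16> distance;
};

// Stage 1 placeholder: a real implementation would walk the static Huffman
// code table bit by bit; here we just wrap each input byte as a literal
// token so the sketch compiles end to end.
static void huffman_decode(hls::stream<ap_uint<8> > &in, int n_bytes,
                           hls::stream<Token> &tokens) {
decode_loop:
    for (int i = 0; i < n_bytes; ++i) {
#pragma HLS PIPELINE II=1
        Token t;
        t.is_literal = 1;
        t.literal    = in.read();
        t.length     = 1;
        t.distance   = 0;
        tokens.write(t);
    }
    Token end;
    end.is_literal = 0; end.literal = 0; end.length = 0; end.distance = 0;
    tokens.write(end);                       // end-of-stream sentinel
}

// Stage 2: LZ77 decompression against a 32 KB sliding history window.
static void lz77_decompress(hls::stream<Token> &tokens,
                            hls::stream<ap_uint<8> > &out) {
    ap_uint<8>  history[32768];
    ap_uint<15> wr = 0;
    while (true) {
        Token t = tokens.read();
        if (!t.is_literal && t.length == 0) break;   // sentinel: done
        if (t.is_literal) {
            out.write(t.literal);
            history[wr++] = t.literal;
        } else {
copy_loop:
            for (ap_uint<16> i = 0; i < t.length; ++i) {
#pragma HLS PIPELINE II=1
                // Wrap-around read from the window; for short distances the
                // read-after-write dependency limits the achievable II.
                ap_uint<8> b = history[(ap_uint<15>)(wr - t.distance)];
                out.write(b);
                history[wr++] = b;
            }
        }
    }
}

// Top level: both stages run concurrently, connected by a FIFO stream.
void decompress_top(hls::stream<ap_uint<8> > &in, int n_bytes,
                    hls::stream<ap_uint<8> > &out) {
#pragma HLS DATAFLOW
    hls::stream<Token> tokens("tokens");
#pragma HLS STREAM variable=tokens depth=64
    huffman_decode(in, n_bytes, tokens);
    lz77_decompress(tokens, out);
}
```

The DATAFLOW pragma is what lets the Huffman and LZ77 stages run concurrently with a FIFO between them, giving the pipeline-level overlap the abstract refers to. In a real decoder, the match-copy loop's read-after-write dependency on the history buffer is the main obstacle to sustaining an initiation interval of 1 for short match distances, and handling it is typically where most of the design effort goes.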