2013 Data Compression Conference: Latest Publications

Compressing Huffman Models on Large Alphabets
2013 Data Compression Conference Pub Date: 2013-03-20 DOI: 10.1109/DCC.2013.46
G. Navarro, Alberto Ordóñez Pereira
Abstract: A naive storage of a Huffman model on a text of length n over an alphabet of size σ requires O(σ log n) bits. This can be reduced to σ log σ + O(σ) bits using canonical codes. This overhead over the entropy can be significant when σ is comparable to n, and it also dictates the amount of main memory required to compress or decompress. We design an encoding scheme that requires σ log log n + O(σ + log² n) bits in the worst case, and typically less, while supporting encoding and decoding of symbols in O(log log n) time. We show that our technique reduces the model storage of state-of-the-art techniques to around 15% of its size on various real-life sequences over large alphabets, while still offering reasonable compression/decompression times.
Citations: 9
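The σ log σ + O(σ) bits of a canonical code come from storing only each symbol's code length and regenerating the codewords deterministically. A minimal Python sketch of that regeneration step (function and variable names are illustrative, not taken from the paper):

```python
def canonical_codes(lengths):
    """lengths: dict symbol -> Huffman code length in bits.
    Returns dict symbol -> codeword as a '0'/'1' string."""
    # Sorting by (length, symbol) makes the codeword assignment deterministic,
    # so only the code lengths need to be stored as the model.
    order = sorted(lengths, key=lambda s: (lengths[s], s))
    codes, code, prev_len = {}, 0, 0
    for sym in order:
        code <<= lengths[sym] - prev_len     # extend the code when the length grows
        codes[sym] = format(code, '0{}b'.format(lengths[sym]))
        code += 1
        prev_len = lengths[sym]
    return codes

if __name__ == "__main__":
    # Example code lengths produced by some Huffman construction.
    print(canonical_codes({'a': 1, 'b': 2, 'c': 3, 'd': 3}))
    # -> {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```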
Quadratic Similarity Queries on Compressed Data
2013 Data Compression Conference Pub Date: 2013-03-20 DOI: 10.1109/DCC.2013.52
A. Ingber, T. Courtade, T. Weissman
Abstract: The problem of performing similarity queries on compressed data is considered. We study the fundamental tradeoff between compression rate, sequence length, and reliability of queries performed on compressed data. For a Gaussian source and quadratic similarity criterion, we show that queries can be answered reliably if and only if the compression rate exceeds a given threshold - the identification rate - which we explicitly characterize. When compression is performed at a rate greater than the identification rate, responses to queries on the compressed data can be made exponentially reliable. We give a complete characterization of this exponent, which is analogous to the error and excess-distortion exponents in channel and source coding, respectively. For a general source, we prove that the identification rate is at most that of a Gaussian source with the same variance. Therefore, as with classical compression, the Gaussian source requires the largest compression rate. Moreover, a scheme is described that attains this maximal rate for any source distribution.
Citations: 7
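As a toy illustration of the problem setting only (not the coding scheme analyzed in the paper), the sketch below stores a Gaussian sequence through a uniform scalar quantizer at a given rate and answers a quadratic similarity query from the compressed representation alone; the quantizer, the threshold convention, and all names are assumptions made for the example.

```python
import numpy as np

def compress(y, rate_bits, lo=-4.0, hi=4.0):
    """Uniform scalar quantizer: the integer indices are the 'compressed' database entry."""
    levels = 2 ** rate_bits
    step = (hi - lo) / levels
    idx = np.clip(((y - lo) / step).astype(int), 0, levels - 1)
    return idx, lo, step

def reconstruct(idx, lo, step):
    return lo + (idx + 0.5) * step

def similar(query_x, compressed, d):
    """Declare 'similar' if the per-sample squared distance to the reconstruction is at most d."""
    idx, lo, step = compressed
    y_hat = reconstruct(idx, lo, step)
    return float(np.mean((query_x - y_hat) ** 2)) <= d

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, rate, d = 1024, 3, 0.5
    y = rng.standard_normal(n)                        # stored Gaussian sequence
    x = y + np.sqrt(d / 2) * rng.standard_normal(n)   # a genuinely similar query
    print(similar(x, compress(y, rate), d))           # expected: True
```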
Ultra Fast H.264/AVC to HEVC Transcoder
2013 Data Compression Conference Pub Date: 2013-03-20 DOI: 10.1109/DCC.2013.32
Tong Shen, Yao Lu, Ziyu Wen, Linxi Zou, Yucong Chen, Jiangtao Wen
Abstract: The emerging High Efficiency Video Coding (HEVC) standard achieves significant performance improvement over the H.264/AVC standard at the cost of much higher complexity. In this paper, we propose an ultra-fast H.264/AVC to HEVC transcoder for multi-core processors implementing Wavefront Parallel Processing (WPP) and SIMD acceleration, along with expedited motion estimation (ME) and mode decision (MD) that utilize information extracted from the input H.264/AVC stream. Experiments using standard HEVC test bit streams show that the proposed transcoder achieves a 70x speedup over the HEVC HM 8.1 reference software (including H.264 encoding) with very small rate-distortion (RD) performance loss.
Citations: 39
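An illustrative sketch (not the authors' transcoder) of the core reuse idea: motion vectors decoded from the incoming H.264/AVC stream seed the HEVC motion search, so only a small refinement window around the prediction has to be examined. The median rule, the window radius, and all function names are assumptions made for the example.

```python
from statistics import median

def predicted_mv(h264_mvs, cu_x, cu_y, cu_size, mb_size=16):
    """h264_mvs: dict (mb_x, mb_y) -> (mvx, mvy) taken from the decoded H.264 stream.
    Returns the median motion vector of the macroblocks co-located with the HEVC CU."""
    xs, ys = [], []
    for mb_x in range(cu_x // mb_size, (cu_x + cu_size - 1) // mb_size + 1):
        for mb_y in range(cu_y // mb_size, (cu_y + cu_size - 1) // mb_size + 1):
            mvx, mvy = h264_mvs.get((mb_x, mb_y), (0, 0))
            xs.append(mvx)
            ys.append(mvy)
    return median(xs), median(ys)

def search_points(center_mv, radius=2):
    """Tiny refinement window around the seeded MV instead of a full search range."""
    cx, cy = center_mv
    return [(cx + dx, cy + dy)
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)]

if __name__ == "__main__":
    mvs = {(0, 0): (4, -2), (1, 0): (4, -1), (0, 1): (5, -2), (1, 1): (4, -2)}
    seed = predicted_mv(mvs, cu_x=0, cu_y=0, cu_size=32)
    print(seed, len(search_points(seed)), "candidate positions")
```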
High Compression Rate and Ratio Using Predefined Huffman Dictionaries
2013 Data Compression Conference Pub Date: 2013-03-20 DOI: 10.1109/DCC.2013.119
Amit Golander, S. Tahar, Lior Glass, G. Biran, Sagi Manole
Abstract: Current Huffman coding modes are optimal for a single metric: compression ratio (quality) or rate (performance). We recognize that real-life data can usually be classified into families of data types, and thus the Huffman dictionary can be reused instead of recalculated. In this paper, we show how to balance the trade-off between compression ratio and rate without modifying existing standards and legacy decompression implementations.
Citations: 0
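A rough sketch of the trade-off the abstract describes: a Huffman code-length table trained once on a sample of a data family (the "predefined dictionary") is reused for new messages, saving the per-message table construction and transmission at a possible cost in code length. This illustrates the idea only; it is not the authors' implementation, and the training data and helper names are made up for the example.

```python
import heapq
from collections import Counter

def huffman_lengths(freq):
    """Code lengths of an optimal Huffman code for the given symbol -> frequency map."""
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in freq}
    tick = len(heap)                       # unique tie-breaker for heap entries
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:                  # every merge deepens the contained symbols by one
            lengths[s] += 1
        heapq.heappush(heap, (f1 + f2, tick, s1 + s2))
        tick += 1
    return lengths

def coded_bits(data, lengths):
    """Bits needed to code `data` with the given length table (crude escape for unseen symbols)."""
    escape = max(lengths.values()) + 1
    return sum(lengths.get(b, escape) for b in data)

if __name__ == "__main__":
    training = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 50
    message = b"GET /logo.png HTTP/1.1\r\nHost: example.com\r\n"
    shared = huffman_lengths(Counter(training))   # built once per data family
    optimal = huffman_lengths(Counter(message))   # rebuilt per message
    print("shared table :", coded_bits(message, shared), "bits, no per-message table cost")
    print("optimal table:", coded_bits(message, optimal), "bits, plus the cost of sending the table")
```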
Analog Joint Source Channel Coding over Non-Linear Channels
2013 Data Compression Conference Pub Date: 2013-03-20 DOI: 10.1109/DCC.2013.75
Mohamed Hassanin, J. Garcia-Frías
Abstract: We investigate the performance of analog joint source channel coding systems based on the use of spiral-like space-filling curves for the transmission of Gaussian sources over non-linear channels. The non-linearity comes from a non-linear power amplifier in the transmitter that exhibits saturation effects at the extremes and also near the origin. The output of the amplifier is then sent through an AWGN channel that introduces attenuation depending on the distance between the transmitter and the receiver. This means that the attenuation cannot be subsumed into the noise variance.
Citations: 2
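A toy model of the channel described above, included only to make the setup concrete: a memoryless amplifier non-linearity with a dead zone near the origin and hard saturation at the extremes, followed by distance-dependent attenuation and additive white Gaussian noise. The specific non-linearity, the path-loss exponent, and all parameter values are assumptions, not taken from the paper.

```python
import numpy as np

def amplifier(x, deadzone=0.1, limit=1.0):
    """Memoryless non-linearity: insensitive near the origin, saturating at +/- limit."""
    y = np.where(np.abs(x) < deadzone, 0.0, x - np.sign(x) * deadzone)
    return np.clip(y, -limit, limit)

def channel(x, distance, rng, noise_std=0.05, path_loss_exp=2.0):
    """Non-linear PA, then distance-dependent attenuation plus AWGN.
    The attenuation scales the signal only, so it cannot be folded into the noise variance."""
    attenuation = 1.0 / distance ** path_loss_exp
    return attenuation * amplifier(x) + noise_std * rng.standard_normal(x.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(5)   # analog channel symbols (e.g., spiral-mapped source samples)
    print(channel(x, distance=2.0, rng=rng))
```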
Near in Place Linear Time Minimum Redundancy Coding
2013 Data Compression Conference Pub Date: 2013-03-20 DOI: 10.1109/DCC.2013.49
Juha Kärkkäinen, German Tischler
Abstract: In this paper we discuss data structures and algorithms for linear-time encoding and decoding of minimum redundancy codes. We show that a text of length n over an alphabet of cardinality σ can be encoded to a minimum redundancy code, and decoded from it, in O(n) time using only O(σ) words (O(σ log n) bits) of additional space for the auxiliary data structures. The encoding process can replace the given block code by the corresponding minimum redundancy code in place. The decoding process can replace a minimum redundancy code, given in sufficient space to store the block code, by the corresponding block code.
Citations: 4
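To make the O(σ)-word bound concrete, the sketch below builds the kind of small auxiliary tables (per-length first code, first symbol index, and counts, plus the symbols sorted by code length) that canonical minimum redundancy decoding needs. It illustrates the auxiliary structures only, not the paper's in-place re-coding; all names are illustrative.

```python
def build_decode_tables(lengths):
    """lengths: dict symbol -> code length of a canonical minimum redundancy code.
    Returns tables of O(sigma + max_len) words in total."""
    max_len = max(lengths.values())
    count = [0] * (max_len + 1)           # number of codewords per length
    for l in lengths.values():
        count[l] += 1
    first_code = [0] * (max_len + 1)      # smallest codeword value of each length
    first_index = [0] * (max_len + 1)     # position of its symbol in the sorted order
    code = idx = 0
    for l in range(1, max_len + 1):
        first_code[l] = code
        first_index[l] = idx
        code = (code + count[l]) << 1
        idx += count[l]
    symbols = sorted(lengths, key=lambda s: (lengths[s], s))
    return symbols, first_code, first_index, count

def decode(bits, tables):
    """Decode a list of 0/1 bits produced by the matching canonical encoder."""
    symbols, first_code, first_index, count = tables
    out, code, l = [], 0, 0
    for b in bits:
        code = (code << 1) | b
        l += 1
        if l < len(count) and first_code[l] <= code < first_code[l] + count[l]:
            out.append(symbols[first_index[l] + code - first_code[l]])
            code, l = 0, 0
    return out

if __name__ == "__main__":
    tables = build_decode_tables({'a': 1, 'b': 2, 'c': 3, 'd': 3})
    # canonical codes here: a=0, b=10, c=110, d=111
    print(decode([1, 0, 0, 1, 1, 1], tables))   # -> ['b', 'a', 'd']
```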
Variable-to-Fixed-Length Encoding for Large Texts Using Re-Pair Algorithm with Shared Dictionaries
2013 Data Compression Conference Pub Date: 2013-03-20 DOI: 10.1109/DCC.2013.97
Kei Sekine, Hirohito Sasakawa, S. Yoshida, T. Kida
Abstract: The Re-Pair algorithm proposed by Larsson and Moffat in 1999 is a simple grammar-based compression method that achieves an extremely high compression ratio. However, Re-Pair is an offline and very space-consuming algorithm, so to apply it to a very large text we need to divide the text into smaller blocks. If we share a part of the dictionary among all blocks, we expect the compression speed and ratio of the algorithm to improve. In this paper, we implement our method using variable-to-fixed-length codes and empirically show how the compression speed and ratio vary as three parameters are adjusted: block size, dictionary size, and size of the shared dictionary. Finally, we discuss the tendencies of compression speed and ratio with respect to these three parameters.
Citations: 2
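A minimal sketch of the Re-Pair idea the paper builds on, with an optional shared rule dictionary that is applied before any new rules are learned. This is a simplified quadratic-time version written for clarity, not Larsson and Moffat's linear-time construction and not the authors' implementation; all names are illustrative.

```python
from collections import Counter

def apply_rules(seq, rules):
    """Greedily replace adjacent pairs already covered by (shared) rules."""
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) in rules:
            out.append(rules[(seq[i], seq[i + 1])])
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out

def repair(seq, next_symbol, shared_rules=None):
    """Re-Pair: repeatedly replace the most frequent pair with a fresh non-terminal."""
    rules = dict(shared_rules or {})
    seq = apply_rules(list(seq), rules)     # reuse the shared part of the dictionary first
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs or pairs.most_common(1)[0][1] < 2:
            break
        pair = pairs.most_common(1)[0][0]
        rules[pair] = next_symbol
        seq = apply_rules(seq, {pair: next_symbol})
        next_symbol += 1
    return seq, rules

if __name__ == "__main__":
    compressed, rules = repair(list("abracadabra"), next_symbol=256)
    print(compressed)
    print(rules)
```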
Simplified HEVC FME Interpolation Unit Targeting a Low Cost and High Throughput Hardware Design
2013 Data Compression Conference Pub Date: 2013-03-20 DOI: 10.1109/DCC.2013.55
Vladimir Afonso, Henrique Maich, L. Agostini, Denis Franco
Abstract: Summary form only given. The new demands of high-resolution digital video applications are pushing the development of new techniques in the video coding area. This paper presents a simplified version of the original Fractional Motion Estimation (FME) algorithm defined by the emerging HEVC video coding standard, targeting a low-cost and high-throughput hardware design. Based on evaluations using the HEVC Model (HM) reference software, a simplification strategy was defined for the hardware design that drastically reduces the HEVC complexity at the cost of some loss in compression rate and quality. The strategy uses only the most frequently used PU size in the motion estimation process, avoiding the evaluation of all 24 PU sizes defined in HEVC as well as the RDO decision process. This significantly reduces the ME complexity while causing a bit-rate loss lower than 13.18% and a quality loss lower than 0.45 dB. Even with the proposed simplification, the solution remains fully compliant with the current version of the HEVC standard. The FME interpolation was also simplified for the hardware design through algebraic manipulations, converting multiplications into shift-adds and sharing sub-expressions. The simplified FME interpolator was designed in hardware, and the results show low use of hardware resources and a processing rate high enough to process QFHD video (3840x2160 pixels) in real time.
Citations: 5
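The "multiplications into shift-adds and sharing sub-expressions" step can be illustrated on the HEVC half-sample luma filter (taps -1, 4, -11, 40, 40, -11, 4, -1, quoted here from memory of the standard, so treat them as an assumption). The symmetric taps let mirrored samples be pre-added once, and each remaining multiplication becomes a couple of shifts plus adds, which is what maps well to cheap hardware.

```python
def half_pel_multiply(p):
    """Reference: tap-by-tap multiplication over 8 neighbouring integer samples p[0..7]."""
    taps = (-1, 4, -11, 40, 40, -11, 4, -1)
    return sum(t * x for t, x in zip(taps, p))

def half_pel_shift_add(p):
    """Same filter using only shifts and adds, sharing the symmetric sub-expressions."""
    s1 = p[0] + p[7]     # tap -1
    s4 = p[1] + p[6]     # tap  4
    s11 = p[2] + p[5]    # tap -11
    s40 = p[3] + p[4]    # tap  40
    return (-s1
            + (s4 << 2)                         #  4*x = x<<2
            - ((s11 << 3) + (s11 << 1) + s11)   # 11*x = 8x + 2x + x
            + ((s40 << 5) + (s40 << 3)))        # 40*x = 32x + 8x

if __name__ == "__main__":
    import random
    samples = [random.randint(0, 255) for _ in range(8)]
    assert half_pel_multiply(samples) == half_pel_shift_add(samples)
    print(half_pel_shift_add(samples))
```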
A High Throughput Multi Symbol CABAC Framework for Hybrid Video Codecs
2013 Data Compression Conference Pub Date: 2013-03-20 DOI: 10.1109/DCC.2013.94
K. Rapaka, E. Yang
Abstract: Summary form only given. This paper proposes a multi-symbol Context Adaptive Binary Arithmetic Coding (CABAC) framework for hybrid video coding. Advanced CABAC techniques are employed in popular video coding technologies such as H.264/AVC and HEVC. The proposed framework extends these techniques by providing symbol-level scalability, being able to code one or more symbols at a time without changing the existing framework. Such coding can not only exploit higher-order statistical dependencies at the syntax element level but also reduce the number of coded bins. New syntax elements and their probability modeling are proposed as extensions to achieve multi-symbol coding. An example variant of this framework, coding at most two symbols at a time for quantized coefficient indices, was implemented on top of the JM18.3 H.264 CABAC. When tested on HEVC test sequences, this example extension shows significant throughput improvement (i.e., a significant reduction in the number of bins to be coded) while also reducing the bit rate significantly. The framework can be seamlessly extended to code more than two symbols at a time.
Citations: 1
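A back-of-the-envelope illustration of the two effects claimed above: grouping two symbols per coding step halves the number of steps, and an adaptive model over the grouped alphabet captures the dependence inside each pair. Ideal adaptive code lengths (with add-one smoothing) stand in for a real CABAC engine, and the Markov source below is made up for the example; this is not the proposed framework itself.

```python
import math
import random
from collections import defaultdict

def adaptive_cost(symbols, alphabet_size):
    """Ideal code length (bits) of an order-0 adaptive model with add-one smoothing."""
    counts, total, bits = defaultdict(int), 0, 0.0
    for s in symbols:
        p = (counts[s] + 1) / (total + alphabet_size)
        bits += -math.log2(p)
        counts[s] += 1
        total += 1
    return bits

if __name__ == "__main__":
    # A strongly dependent binary source: each bit usually repeats the previous one.
    random.seed(1)
    data, bit = [], 0
    for _ in range(10000):
        bit = bit if random.random() < 0.9 else 1 - bit
        data.append(bit)
    single = adaptive_cost(data, 2)                     # one symbol per coding step
    pairs = list(zip(data[0::2], data[1::2]))
    double = adaptive_cost(pairs, 4)                    # two symbols per coding step
    print(f"one symbol/step : {len(data)} steps, {single:.0f} bits")
    print(f"two symbols/step: {len(pairs)} steps, {double:.0f} bits")
```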
Lossless Compression of Rotated Maskless Lithography Images
2013 Data Compression Conference Pub Date: 2013-03-20 DOI: 10.1109/DCC.2013.80
S. T. Klein, Dana Shapira, Gal Shelef
Abstract: A new lossless image compression algorithm is presented, aimed at maskless lithography systems with mostly right-angled regular structures. Since these images often appear in slightly rotated form, an algorithm dealing with this special case is suggested, which improves performance relative to state-of-the-art alternatives.
Citations: 1