2013 Data Compression Conference: Latest Publications

Ultra Fast H.264/AVC to HEVC Transcoder
2013 Data Compression Conference Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.32
Tong Shen, Yao Lu, Ziyu Wen, Linxi Zou, Yucong Chen, Jiangtao Wen
Abstract: The emerging High Efficiency Video Coding (HEVC) standard achieves significant performance improvement over the H.264/AVC standard at the cost of much higher complexity. In this paper, we propose an ultra-fast H.264/AVC to HEVC transcoder for multi-core processors that implements Wavefront Parallel Processing (WPP) and SIMD acceleration, along with expedited motion estimation (ME) and mode decision (MD) that reuse information extracted from the input H.264/AVC stream. Experiments using standard HEVC test bit streams show that the proposed transcoder achieves a 70x speed-up over the HEVC HM 8.1 reference software (including H.264 encoding) with very small rate-distortion (RD) performance loss.
Citations: 39
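The transcoder's speed-up comes partly from reusing decisions already made by the H.264 encoder. As a toy illustration of that idea (not the authors' actual heuristic), one can map the partition sizes of a decoded H.264 macroblock to a restricted HEVC CU-depth search range; the function name and thresholds below are assumptions:

```python
def cu_depth_range(mb_partitions):
    """Map the partition sizes of a decoded 16x16 H.264 macroblock to a
    (min, max) HEVC CU-depth search range inside the co-located CTU.

    mb_partitions: list of (w, h) partition sizes, e.g. [(16, 16)].
    Depth 0 = 64x64 CU ... depth 3 = 8x8 CU (hypothetical mapping).
    """
    smallest = min(min(w, h) for w, h in mb_partitions)
    if smallest >= 16:      # MB coded whole: favour large CUs
        return (0, 1)
    elif smallest >= 8:     # medium partitions: mid depths
        return (1, 2)
    else:                   # 4x4-ish partitions: detailed area, small CUs
        return (2, 3)
```

For an MB coded as a single 16x16 partition, the sketch searches only depths 0-1 (64x64 and 32x32 CUs), skipping the rest of the mode-decision space.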
STOL: Spatio-Temporal Online Dictionary Learning for Low Bit-Rate Video Coding
2013 Data Compression Conference Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.101
Xin Tang, H. Xiong
Abstract: To speed up the convergence of dictionary learning in low bit-rate video coding, this paper proposes a spatio-temporal online dictionary learning (STOL) algorithm that improves on adaptive regularized dictionary learning with K-SVD, which incurs high computational complexity and interferes with coding efficiency. Since the intrinsic dimensionality of the primitives used in training each series of 2-D sub-dictionaries is low, a 3-D low-frequency and high-frequency dictionary pair is formed by online dictionary learning to update the atoms for optimal sparse representation and convergence. Instead of classical first-order stochastic gradient descent on the constraint set (as in K-SVD), the online algorithm exploits the structure of sparse coding through an optimization procedure based on stochastic approximations. It has low memory consumption and lower computational cost, with no need for explicit learning-rate tuning. By drawing a cube from i.i.d. samples of a distribution in each inner loop and alternating classical sparse-coding steps that compute the decomposition coefficients of the cube over the previous dictionary, the dictionary-update problem is converted to minimizing the expected cost instead of the empirical cost. For dynamic training data over time, online dictionary learning is faster than second-order batch alternatives such as K-SVD. In experiments, super-resolution reconstruction based on STOL reduces computational complexity to 40-50% of K-SVD learning-based schemes with guaranteed accuracy.
Citations: 1
A Compression Algorithm for Fluctuant Data in Smart Grid Database Systems
2013 Data Compression Conference Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.67
Chi-Cheng Chuang, Y. Chiu, Zhi-Hung Chen, Hao-Ping Kang, Che-Rung Lee
Abstract: In this paper, we present a lossless compression algorithm for fluctuant data that can be integrated into a database system and supports regular database insertion and queries. The algorithm is based on the observation that fluctuant data, although varying violently over small time intervals, exhibit similar patterns over time. The algorithm first partitions the data into segments of k consecutive records. Those segments are normalized and treated as vectors in k-dimensional space. Classification algorithms are then applied to find representative vectors for the normalized vectors, under the criterion that every normalized segment has at least one representative vector within a given distance threshold. These representative vectors, called codes, are stored in a codebook. The codebook can be generated offline from a small training dataset and reused. The online compression algorithm searches for the nearest code to an input segment and stores only the ID of the code and their difference. Since the difference is small, it can be compressed by Rice or Golomb coding.
Citations: 4
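The online step described above (normalize a segment, pick the nearest codebook entry, keep only the code ID plus a small residual for Rice/Golomb coding) can be sketched as follows; the function names and the normalization to [0, 1] are assumptions, not the paper's exact formulation:

```python
def compress_segment(seg, codebook):
    """Normalise a segment, find the nearest code, and return the code ID,
    the scale needed to denormalise, and the (small) residual that a
    Rice/Golomb coder would then compress."""
    lo, hi = min(seg), max(seg)
    span = (hi - lo) or 1.0                      # avoid div-by-zero on flat data
    norm = [(x - lo) / span for x in seg]        # normalise to [0, 1]

    def dist(code):
        return sum((a - b) ** 2 for a, b in zip(norm, code))

    cid = min(range(len(codebook)), key=lambda i: dist(codebook[i]))
    resid = [a - b for a, b in zip(norm, codebook[cid])]
    return cid, (lo, span), resid

def decompress_segment(cid, scale, resid, codebook):
    """Exact inverse: add the residual back to the code and denormalise."""
    lo, span = scale
    return [lo + span * (c + r) for c, r in zip(codebook[cid], resid)]
```

Because the residual is stored exactly, the round trip is lossless; the compression gain comes from the residual entries being near zero when the codebook matches the data's recurring shapes.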
Natural Language Compression Optimized for Large Set of Files
2013 Data Compression Conference Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.93
P. Procházka, J. Holub
Abstract: Summary form only given. Web search engines store web pages in raw text form to build so-called snippets (short text surrounding the searched pattern) or to compute positional ranking functions. We address the problem of compressing a large collection of text files distributed over a cluster of computers, where single files must be randomly accessible in very short time. The compression algorithm, Set-of-Files Semi-Adaptive Two Byte Dense Code (SF-STBDC), is based on the word-based approach and on combining two statistical models: a global model (common to all files of the set) and a local model. The latter is built as the set of changes that transform the global model into the proper model of the single compressed file. Besides a very good compression ratio, the method allows fast searching of the compressed text, an attractive property especially for search engines. The same problem (compression of a set of files using byte codes) was first stated in prior work. Our SF-STBDC algorithm surpasses the algorithm based on (s,c)-Dense Code in compression ratio while retaining very good searching and decompression speed. The key to this result is the use of Semi-Adaptive Two Byte Dense Code, which codes small portions of the text more effectively and still allows exact setting of the number of stoppers and continuers.
Citations: 2
Analog Joint Source Channel Coding over Non-Linear Channels
2013 Data Compression Conference Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.75
Mohamed Hassanin, J. Garcia-Frías
Abstract: We investigate the performance of analog joint source-channel coding systems based on spiral-like space-filling curves for the transmission of Gaussian sources over non-linear channels. The non-linearity comes from a non-linear power amplifier in the transmitter that exhibits saturation effects at the extremes and also near the origin (illustrated in a figure in the original paper). The output of the amplifier is then sent through an AWGN channel that introduces attenuation depending on the distance between transmitter and receiver, which means the attenuation cannot be subsumed into the noise variance.
Citations: 2
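A toy version of the spiral mapping: the encoder maps a 2-D source sample to the scalar parameter of the nearest point on a two-armed Archimedean spiral, and the decoder maps that scalar back through the curve. The spiral constant, search range, and brute-force nearest-point search are all illustrative assumptions, not the paper's system:

```python
import math

def spiral_point(t, a=0.2):
    """Two-armed Archimedean spiral r = a*|t|; the negative arm is rotated
    by pi so the two arms interleave in the plane."""
    r = a * abs(t)
    th = abs(t) + (math.pi if t < 0 else 0.0)
    return (r * math.cos(th), r * math.sin(th))

def spiral_encode(x, y, a=0.2, t_max=30.0, steps=20000):
    """Analog 'encoding': return the scalar t of the (approximately) nearest
    spiral point, found by brute-force search over a dense grid of t values.
    Real systems use analytic approximations instead of this search."""
    best_t, best_d = 0.0, float("inf")
    for i in range(steps + 1):
        t = -t_max + 2.0 * t_max * i / steps
        px, py = spiral_point(t, a)
        d = (px - x) ** 2 + (py - y) ** 2
        if d < best_d:
            best_t, best_d = t, d
    return best_t
```

The transmitted quantity is the single real number t; decoding is just `spiral_point(t)`, which is why the curve's geometry (and any amplifier non-linearity applied to t) directly shapes the end-to-end distortion.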
Near in Place Linear Time Minimum Redundancy Coding
2013 Data Compression Conference Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.49
Juha Kärkkäinen, German Tischler
Abstract: In this paper we discuss data structures and algorithms for linear-time encoding and decoding of minimum redundancy codes. We show that a text of length n over an alphabet of cardinality σ can be encoded to a minimum redundancy code and decoded from it in O(n) time, using only O(σ) additional words (O(σ log n) bits) for the auxiliary data structures. The encoding process can replace the given block code by the corresponding minimum redundancy code in place. The decoding process can replace the minimum redundancy code, given sufficient space to store the block code, by the corresponding block code.
Citations: 4
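The O(σ)-word auxiliary structures typically used for minimum redundancy coding are canonical-code tables: per-symbol code lengths plus a small table keyed by (length, code value). The sketch below builds a canonical Huffman code and decodes with such an O(σ)-entry table; it illustrates the codec itself, not the paper's in-place encoding:

```python
import heapq
from collections import Counter

def huffman_lengths(freqs):
    """Code lengths of a minimum-redundancy (Huffman) code for a
    symbol -> frequency mapping."""
    if len(freqs) == 1:
        return {next(iter(freqs)): 1}
    heap = [[f, [s]] for s, f in sorted(freqs.items())]
    heapq.heapify(heap)
    depth = {s: 0 for s in freqs}
    while len(heap) > 1:
        fa, sa = heapq.heappop(heap)
        fb, sb = heapq.heappop(heap)
        for s in sa + sb:
            depth[s] += 1          # every merge deepens the merged leaves
        heapq.heappush(heap, [fa + fb, sa + sb])
    return depth

def canonical_codes(lengths):
    """Canonical assignment: sort by (length, symbol), count upward,
    left-shifting whenever the code length grows."""
    codes, code, prev = {}, 0, 0
    for s in sorted(lengths, key=lambda s: (lengths[s], s)):
        code <<= lengths[s] - prev
        codes[s] = format(code, "0%db" % lengths[s])
        code += 1
        prev = lengths[s]
    return codes

def decode(bits, codes):
    """Greedy prefix decoding via an O(sigma)-entry table keyed by
    (code length, code value)."""
    inv = {(len(c), int(c, 2)): s for s, c in codes.items()}
    out, val, ln = [], 0, 0
    for b in bits:
        val, ln = (val << 1) | (b == "1"), ln + 1
        if (ln, val) in inv:
            out.append(inv[(ln, val)])
            val, ln = 0, 0
    return "".join(out)
```

Because the code is canonical, everything the decoder needs is recoverable from the σ code lengths alone, which is what keeps the auxiliary space at O(σ) words.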
Variable-to-Fixed-Length Encoding for Large Texts Using Re-Pair Algorithm with Shared Dictionaries
2013 Data Compression Conference Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.97
Kei Sekine, Hirohito Sasakawa, S. Yoshida, T. Kida
Abstract: The Re-Pair algorithm proposed by Larsson and Moffat in 1999 is a simple grammar-based compression method that achieves an extremely high compression ratio. However, Re-Pair is an offline and very space-consuming algorithm, so applying it to a very large text requires dividing the text into smaller blocks. If part of the dictionary is shared among all blocks, we expect the compression speed and ratio to improve. In this paper, we implement our method using variable-to-fixed-length codes and empirically show how the compression speed and ratio vary with three parameters: block size, dictionary size, and shared-dictionary size. Finally, we discuss the tendencies of compression speed and ratio with respect to these three parameters.
Citations: 2
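The core Re-Pair step (repeatedly replace the most frequent adjacent pair with a fresh nonterminal until no pair occurs twice) can be sketched in a few lines. This toy version works on integer symbols and omits the paper's blocking and shared dictionaries:

```python
from collections import Counter

def repair(seq):
    """Minimal Re-Pair: while some adjacent pair occurs at least twice,
    replace its occurrences (left to right) with a fresh nonterminal.
    Returns the reduced sequence and the grammar rules."""
    seq = list(seq)
    rules = {}
    next_sym = max(seq) + 1 if seq else 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, cnt = pairs.most_common(1)[0]
        if cnt < 2:
            break
        rules[next_sym] = pair
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(next_sym)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        next_sym += 1
    return seq, rules

def expand(sym, rules):
    """Undo the grammar: expand a (non)terminal back to terminals."""
    if sym not in rules:
        return [sym]
    a, b = rules[sym]
    return expand(a, rules) + expand(b, rules)
```

The paper's shared dictionary corresponds to seeding `rules` with pairs learned once across blocks, so each block's private dictionary only stores its local additions.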
Simplified HEVC FME Interpolation Unit Targeting a Low Cost and High Throughput Hardware Design
2013 Data Compression Conference Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.55
Vladimir Afonso, Henrique Maich, L. Agostini, Denis Franco
Abstract: Summary form only given. The new demands of high-resolution digital video applications are pushing the development of new techniques in the video coding area. This paper presents a simplified version of the Fractional Motion Estimation (FME) algorithm defined by the emerging HEVC video coding standard, targeting a low-cost, high-throughput hardware design. Based on evaluations using the HEVC Model (HM) reference software, a simplification strategy was defined for the hardware design that drastically reduces HEVC complexity, with some losses in compression rate and quality. The strategy considers only the most-used PU size in the motion estimation process, avoiding evaluation of the 24 PU sizes defined in HEVC as well as the RDO decision process. This greatly reduces ME complexity, at a bit-rate loss lower than 13.18% and a quality loss lower than 0.45 dB. Even with the proposed simplification, the solution remains fully compliant with the current version of the HEVC standard. The FME interpolation was also simplified for the hardware design through algebraic manipulations, converting multiplications into shift-adds and sharing sub-expressions. The simplified FME interpolator was designed in hardware; the results show low use of hardware resources and a processing rate high enough to process QFHD video (3840x2160 pixels) in real time.
Citations: 5
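The shift-add conversion mentioned above can be checked on HEVC's half-sample luma filter, whose taps are (-1, 4, -11, 40, 40, -11, 4, -1): every multiplication by a tap decomposes into shifts and adds. The decomposition below is one obvious choice, not necessarily the one used in the paper's datapath:

```python
# HEVC half-pel luma interpolation filter taps (from the standard).
HALF_PEL = [-1, 4, -11, 40, 40, -11, 4, -1]

def filter_mul(samples):
    """Straightforward multiply-accumulate form of the interpolation."""
    return sum(c * s for c, s in zip(HALF_PEL, samples))

def filter_shift_add(samples):
    """Multiplier-free form using shift-adds, the kind of algebraic
    manipulation applied for a cheap hardware datapath:
    4x = x<<2, 11x = (x<<3)+(x<<1)+x, 40x = (x<<5)+(x<<3)."""
    s = samples
    def m4(x):  return x << 2
    def m11(x): return (x << 3) + (x << 1) + x
    def m40(x): return (x << 5) + (x << 3)
    return (-s[0] + m4(s[1]) - m11(s[2]) + m40(s[3])
            + m40(s[4]) - m11(s[5]) + m4(s[6]) - s[7])
```

Note the filter is symmetric, so in hardware the sub-expressions for taps 40, -11 and 4 can be shared between the two halves of the window, which is the sub-expression sharing the abstract mentions.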
A High Throughput Multi Symbol CABAC Framework for Hybrid Video Codecs
2013 Data Compression Conference Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.94
K. Rapaka, E. Yang
Abstract: Summary form only given. This paper proposes a multi-symbol Context Adaptive Binary Arithmetic Coding (CABAC) framework for hybrid video coding. Advanced CABAC techniques are employed in popular video coding standards such as H.264/AVC and HEVC. The proposed framework extends these techniques by providing symbol-level scalability, coding one or more symbols at a time without changing the existing framework. Such coding not only exploits higher-order statistical dependencies at the syntax-element level but also reduces the number of coded bins. New syntax elements and their probability modeling are proposed as extensions to achieve multi-symbol coding. An example variant of this framework, coding at most two symbols at a time for quantized coefficient indices, was implemented on top of the JM18.3 H.264 CABAC. Tested on HEVC test sequences, this extension shows significant throughput improvement (a significant reduction in the number of bins to be coded) while also significantly reducing bit-rate. The framework can be seamlessly extended to code more than two symbols.
Citations: 1
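Why coding several symbols jointly can reduce the coded bins: joint contexts capture dependencies between adjacent symbols that per-symbol coding ignores. The toy measurement below compares the empirical entropy per symbol when a correlated sequence is modelled one symbol at a time versus in non-overlapping pairs; it is a proxy for the effect, not the paper's CABAC extension:

```python
import math
from collections import Counter

def entropy_per_symbol(seq, k):
    """Empirical entropy in bits per symbol when the sequence is modelled
    as i.i.d. non-overlapping blocks of k symbols."""
    blocks = [tuple(seq[i:i + k]) for i in range(0, len(seq) - k + 1, k)]
    n = len(blocks)
    h = -sum((c / n) * math.log2(c / n) for c in Counter(blocks).values())
    return h / k

# Strongly correlated adjacent symbols: pairs expose the structure.
seq = [0, 0, 1, 1] * 250
```

On this sequence the per-symbol model needs 1.0 bit/symbol while the pair model needs only 0.5, mirroring how a two-symbol CABAC context can spend fewer bins for the same data.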
Lossless Compression of Rotated Maskless Lithography Images
2013 Data Compression Conference Pub Date : 2013-03-20 DOI: 10.1109/DCC.2013.80
S. T. Klein, Dana Shapira, Gal Shelef
Abstract: A new lossless image compression algorithm is presented, aimed at maskless lithography systems with mostly right-angled regular structures. Since these images often appear in slightly rotated form, an algorithm handling this special case is proposed, which improves performance relative to state-of-the-art alternatives.
Citations: 1