2013 Data Compression Conference: Latest Publications

Genome Sequence Compression with Distributed Source Coding
2013 Data Compression Conference. Pub Date: 2013-03-20. DOI: 10.1109/DCC.2013.104
Shuang Wang, Xiaoqian Jiang, Lijuan Cui, Wenrui Dai, N. Deligiannis, Pinghao Li, H. Xiong, Samuel Cheng, L. Ohno-Machado
Abstract: In this paper, we develop a novel genome compression framework based on distributed source coding (DSC) [3], specially tailored to the needs of miniaturized devices. At the encoder side, subsequences with adaptive code length can be compressed flexibly through either low-complexity DSC-based syndrome coding or hash coding, the choice being determined by whether variations between source and reference exist, as learned from decoder feedback. Moreover, to tackle variations between source and reference at the decoder, we carefully designed a factor-graph-based low-density parity-check (LDPC) decoder that automatically detects insertions, deletions, and substitutions.
Citations: 2
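The cheap hash-coding path of such a framework can be sketched in a few lines. This is a toy illustration only: the function names, the 8-byte truncated SHA-256 digest, and the feedback convention are our own choices, not details from the paper, and the LDPC syndrome-coding fallback is deliberately left out.

```python
import hashlib

def encode_subsequence(subseq: str) -> bytes:
    """Encoder side: emit only a short hash of the subsequence
    (the cheap path, used when source and reference are expected to match)."""
    return hashlib.sha256(subseq.encode()).digest()[:8]

def decode_with_reference(hash_bytes: bytes, reference: str) -> tuple:
    """Decoder side: reconstruct from the local reference and verify it
    against the received hash. On mismatch the real framework would fall
    back to LDPC syndrome decoding; this toy just signals the failure so
    the feedback channel can request the syndrome-coded path."""
    if hashlib.sha256(reference.encode()).digest()[:8] == hash_bytes:
        return True, reference   # reference reproduces the source exactly
    return False, ""             # feedback: request syndrome coding

# A matching reference decodes for the cost of an 8-byte hash.
h = encode_subsequence("ACGTACGTAA")
ok, seq = decode_with_reference(h, "ACGTACGTAA")
```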
Mode Duplication Based Multiview Multiple Description Video Coding
2013 Data Compression Conference. Pub Date: 2013-03-20. DOI: 10.1109/DCC.2013.106
Xiaolan Wang, C. Cai
Abstract: Compression efficiency is the primary concern for multiview video (MVV) transmission systems because of the massive amount of data involved. To improve coding efficiency, the Joint Video Team (JVT) standardization body developed the joint multiview video coding model (JMVC), in which both intra-view and inter-view prediction techniques are exploited to yield better coding gain. This prediction structure, however, also lets transmission errors spread across frames and views, so preventing error propagation has become a critical issue in multiview video coding (MVC). Error concealment methods for MVC have been widely studied in recent years, but little research has been conducted on error resilience for MVC. Multiple description coding (MDC) provides a promising solution for robust data transmission over error-prone channels and has found many applications in monoview video communications. However, these MDC frameworks are not directly applicable to MVC because its prediction structure involves inter-view prediction. To develop an efficient and robust MVC scheme, this paper proposes a novel MDC algorithm for JMVC based on a mode duplication strategy. The input MVV sequence is first subsampled in both the horizontal and vertical directions, forming four subsequences, X1p, X1d, X2p, and X2d. X1p and X1d are paired to form description 1, and X2p and X2d are grouped to form description 2. X1p and X2p are then directly encoded by separate JMVC encoders, while X1d/X2d reuses the best modes and prediction vectors (PVs) of X1p/X2p at the corresponding (same spatial) locations to perform predictive coding. Consequently, coding X1d and X2d requires neither bits for best modes and PVs nor time for mode decision; only the prediction errors need to be coded. Because subsequences in the same description closely resemble each other, the extra prediction error introduced by reusing the best modes and PVs is negligible, so the bit rate and computational cost of coding X1d and X2d are greatly reduced. The proposed algorithm has been integrated into JMVC 6.0 and tested on multiple MVV sequences. Experimental results show that it outperforms state-of-the-art MDC schemes for MVV and stereoscopic video, achieving improvements of 0.5-3 dB in central decoding and 0.5-3.5 dB in side decoding at the same bit rate over a wide range from 500 kbps to 6000 kbps. Compared with the original JMVC, the proposed algorithm saves about 40% of encoding time on average.
Citations: 5
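The description-forming subsampling step can be sketched as follows. The 2:1 split in each direction is as described; which polyphase component gets which name (X1p, X1d, X2p, X2d) is a hypothetical assignment for illustration, chosen so that paired subsequences come from diagonally opposite phases.

```python
def polyphase_split(frame):
    """Split a frame (a list of pixel rows) into four subsequences by 2:1
    subsampling in both directions. The mapping of polyphase components to
    the names X1p/X1d/X2p/X2d is our own choice for illustration."""
    x1p = [row[0::2] for row in frame[0::2]]  # even rows, even columns
    x1d = [row[1::2] for row in frame[1::2]]  # odd rows, odd columns: pairs with X1p
    x2p = [row[0::2] for row in frame[1::2]]  # odd rows, even columns
    x2d = [row[1::2] for row in frame[0::2]]  # even rows, odd columns: pairs with X2p
    return x1p, x1d, x2p, x2d

# Description 1 carries (X1p, X1d) and description 2 carries (X2p, X2d);
# together the four subsequences cover every pixel exactly once, so either
# description alone suffices for a half-resolution side reconstruction.
frame = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]]
x1p, x1d, x2p, x2d = polyphase_split(frame)
```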
Image Coding Using Nonlinear Evolutionary Transforms
2013 Data Compression Conference. Pub Date: 2013-03-20. DOI: 10.1109/DCC.2013.100
Seishi Takamura, A. Shimizu
Abstract: The transform is one of the most important tools in image/video coding technology. In this paper, novel nonlinear transform generation based on genetic programming is proposed and implemented in the H.264/AVC and HEVC reference software to enhance coding performance. The transform procedure itself is coded and transmitted. Despite this overhead, coding gains of 0.590% (vs. JM18.0) and 1.711% (vs. HM5.0) were observed in our preliminary experiments.
Citations: 2
Low Complexity Embedded Quantization Scheme Compatible with Bitplane Image Coding
2013 Data Compression Conference. Pub Date: 2013-03-20. DOI: 10.1109/DCC.2013.35
Francesc Auli-Llinas
Abstract: Embedded quantization is a mechanism through which image coding systems provide quality progressivity. Although the most common embedded quantization approach is to use uniform scalar dead-zone quantization (USDQ) together with bitplane coding (BPC), recent work suggested that coding performance similar to that of USDQ+BPC can be obtained with a general embedded quantization (GEQ) scheme that performs fewer quantization stages. Unfortunately, practical GEQ approaches cannot be implemented in bitplane coding engines without substantially modifying their structure. This work overcomes this drawback by introducing a 2-step scalar dead-zone quantization (2SDQ) scheme compatible with bitplane image coding that provides the same advantages as practical GEQ approaches. Herein, 2SDQ is introduced in the framework of JPEG2000 to demonstrate its viability and efficiency.
Citations: 1
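The USDQ+BPC baseline this work builds on can be illustrated with a toy quantizer and bitplane emitter. This is a sketch of the general idea, not of the 2SDQ scheme itself, and the function names are ours.

```python
def usdq_index(x: float, step: float) -> int:
    """Uniform scalar dead-zone quantization: magnitudes are floored to
    |x| // step, which makes the zero bin twice as wide as every other bin
    (everything in (-step, step) maps to index 0)."""
    sign = -1 if x < 0 else 1
    return sign * int(abs(x) // step)

def bitplanes(index: int, nplanes: int):
    """Emit the magnitude of a quantization index bitplane by bitplane,
    most significant plane first: the order in which a bitplane coder
    refines coefficients to produce an embedded, quality-progressive stream."""
    mag = abs(index)
    return [(mag >> p) & 1 for p in range(nplanes - 1, -1, -1)]
```

Truncating the stream after any plane yields a coarser quantization of every coefficient, which is exactly the quality progressivity the abstract refers to.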
Compact Data Structures for Temporal Graphs
2013 Data Compression Conference. Pub Date: 2013-03-20. DOI: 10.1109/DCC.2013.59
Guillermo de Bernardo, N. Brisaboa, Diego Caro, Michael A. Rodriguez
Abstract: Summary form only given. In this paper we propose three compact data structures to answer queries on temporal graphs. We define a temporal graph as a graph whose edges appear or disappear over time. Possible queries concern adjacency over time, for example retrieving the neighbors of a node at a given time point or interval. A naive representation consists of a time-ordered sequence of graphs, each valid at a particular time instant. The main issue with this representation is that it wastes space when many nodes and their connections remain unchanged over a long period. This paper instead proposes to store only what changes at each time instant. The ttk2-tree is conceptually a dynamic k2-tree in which each leaf and internal node contains a change list of the time instants at which its bit value has changed. All the change lists are stored consecutively in a dynamic sequence. During query processing, the change lists are used to expand only valid regions of the dynamic k2-tree. It supports updates of the current or past states of the graph. The ltg-index is a set of snapshots plus logs of the changes between consecutive snapshots. The structure keeps a log for each node, storing the edge and the time at which each change occurred. To retrieve the direct neighbors of a node, the previous snapshot is queried and the log is then traversed, adding or removing edges from the result. The differential k2-tree stores snapshots of some time instants in k2-trees. For the other time instants, a k2-tree is also built, but these are differential (they store only the edges that differ from the last snapshot). A query accesses the k2-tree of the given time and the previous full snapshot; the edges that appear in exactly one of these two k2-trees form the final result. We test our proposals on synthetic and real datasets. Our results show that the ltg-index generally obtains the smallest space. We also measure times for direct- and reverse-neighbor queries at a time instant or over a time interval. For all these queries, the times of our best proposal range from tens of μs to several ms, depending on the size of the dataset and the number of results returned. The ltg-index is the fastest for direct queries (almost as fast as accessing a snapshot), but it is 5-20 times slower for reverse queries. The differential k2-tree is very fast for time-instant queries but slower for time-interval queries. The ttk2-tree obtains similar times for direct and reverse queries across different time intervals, and is the fastest for some reverse interval queries. It also has the advantage of being dynamic.
Citations: 20
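The change-list idea behind the ttk2-tree can be sketched with a plain dictionary in place of the k2-tree. This toy model (class and method names are ours) keeps, per edge, the sorted instants at which the edge toggled; an edge is active at time t iff an odd number of toggles occurred at or before t.

```python
import bisect

class ChangeListGraph:
    """Toy change-list temporal graph: per edge, store the sorted time
    instants at which the edge appeared or disappeared. Parity of the
    number of toggles up to time t tells whether the edge is active."""

    def __init__(self):
        self.changes = {}  # (u, v) -> sorted list of toggle times

    def toggle(self, u, v, t):
        """Record that edge (u, v) changed state at time t."""
        lst = self.changes.setdefault((u, v), [])
        bisect.insort(lst, t)  # keeps the change list sorted

    def active(self, u, v, t):
        """Is edge (u, v) present at time t?"""
        lst = self.changes.get((u, v), [])
        return bisect.bisect_right(lst, t) % 2 == 1

g = ChangeListGraph()
g.toggle(1, 2, 5)   # edge appears at t=5
g.toggle(1, 2, 9)   # edge disappears at t=9
```

Space is proportional to the number of changes rather than to (time instants x edges), which is the saving the paper is after when the graph evolves slowly.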
Effective Variable-Length-to-Fixed-Length Coding via a Re-Pair Algorithm
2013 Data Compression Conference. Pub Date: 2013-03-20. DOI: 10.1109/DCC.2013.111
S. Yoshida, T. Kida
Abstract: Summary form only given. We address the problem of improving variable-length-to-fixed-length codes (VF codes). A VF code is an encoding scheme that uses a fixed-length code, so the compressed data can be accessed easily. However, conventional VF codes usually have a compression ratio inferior to that of variable-length codes. Although a method proposed by T. Uemura et al. in 2010 achieves a compression ratio comparable to that of gzip, it is very time-consuming. In this study, we propose a new VF coding method that applies a fixed-length code to the set of rules extracted by the Re-Pair algorithm, proposed by N. J. Larsson and A. Moffat in 1999. Re-Pair is a simple off-line grammar-based compression method with good compression-ratio performance at moderate compression speed. We also present several experimental results showing that the proposed coding is superior to existing VF coding.
Citations: 6
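The Re-Pair algorithm at the heart of the proposal is easy to sketch. Below is a minimal, unoptimized version over an integer alphabet; the real algorithm uses priority queues to run in linear time, and the proposed VF code would then assign fixed-length codewords to the extracted rules and final sequence.

```python
from collections import Counter

def repair(seq):
    """Minimal Re-Pair: repeatedly replace the most frequent adjacent pair
    with a fresh nonterminal until no pair occurs at least twice. Returns
    the final sequence and the grammar (nonterminal -> replaced pair).
    Assumes an integer alphabet so fresh symbols can be max(seq)+1, ..."""
    rules = {}
    next_sym = max(seq) + 1 if seq else 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break
        rules[next_sym] = pair
        out, i = [], 0
        while i < len(seq):   # left-to-right replacement pass
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(next_sym)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        next_sym += 1
    return seq, rules

compressed, grammar = repair([1, 2, 1, 2, 3])
```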
Algorithms for Compressed Inputs
2013 Data Compression Conference. Pub Date: 2013-03-20. DOI: 10.1109/DCC.2013.60
Nathan Brunelle, G. Robins, Abhi Shelat
Abstract: We study compression-aware algorithms, i.e., algorithms that can exploit regularity in their input data by operating directly on compressed data. While this idea is popular for string algorithms, we consider it for algorithms operating on numeric sequences and graphs compressed using a variety of schemes, including LZ77, grammar-based compression, a graph interpretation of Re-Pair, and a method presented by Boldi and Vigna in The WebGraph Framework. In all cases, we discover algorithms that outperform the trivial approach of decompressing the input and running a standard algorithm. We aim to develop an algorithmic toolkit for basic tasks operating on a variety of compressed inputs.
Citations: 3
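The flavor of operating directly on compressed numeric sequences can be shown with run-length encoding, the simplest such scheme. This is an illustrative example of the general idea, not one of the paper's algorithms.

```python
def rle_sum(runs):
    """Sum a numeric sequence directly from its run-length encoding
    (a list of (value, length) pairs) without decompressing: each run
    contributes value * length. Time is linear in the number of runs,
    not in the decompressed length."""
    return sum(v * n for v, n in runs)

# [3,3,3,3, 0 x 100, 2,2] summed in 3 steps instead of 106.
runs = [(3, 4), (0, 100), (2, 2)]
total = rle_sum(runs)
```

The more compressible the input, the larger the speedup over the decompress-then-scan baseline, which is the pattern the paper pursues for richer schemes like LZ77 and grammar compression.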
Efficient Parallelization of Different HEVC Decoding Stages
2013 Data Compression Conference. Pub Date: 2013-03-20. DOI: 10.1109/DCC.2013.82
A. Kotra, M. Raulet, O. Déforges
Abstract: Summary form only given. In this paper we present efficient parallel implementations of different stages of the HEVC decoder: LCU decoding, deblocking filtering, and SAO filtering. Each stage is parallelized in a separate pass. LCU decoding is parallelized using Wavefront Parallel Processing (WPP). Deblocking and SAO filtering are parallelized by segmenting each picture into separate regions of consecutive LCU rows and processing the regions concurrently. On a 6-core machine with 6 threads running concurrently, experimental results showed average speedup factors of 4.6, 5, and 5.35 for the LCU decoding stage; 4.5, 4.9, and 5 for the deblocking filtering stage; and 4, 4.5, and 5 for the SAO filtering stage on HD, 1600p, and 2160p sequences, respectively.
Citations: 4
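The WPP dependency pattern for LCU decoding can be sketched as a wavefront schedule. This is a simplified model: it captures only the two-column offset between consecutive rows (each LCU needs its left and top-right neighbors done), not the CABAC state propagation details.

```python
def wavefront_schedule(rows, cols):
    """Group LCUs into wavefront time steps: block (r, c) becomes ready at
    step 2*r + c, because each row must trail the row above by two columns.
    All blocks sharing a step are independent and can be decoded
    concurrently, one thread per row."""
    steps = {}
    for r in range(rows):
        for c in range(cols):
            steps.setdefault(2 * r + c, []).append((r, c))
    return [steps[s] for s in sorted(steps)]

schedule = wavefront_schedule(2, 3)
```

The schedule length is 2*(rows-1) + cols steps, so the achievable parallelism grows with picture height, consistent with the larger speedups reported on 1600p and 2160p sequences.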
Random Extraction from Compressed Data - A Practical Study
2013 Data Compression Conference. Pub Date: 2013-03-20. DOI: 10.1109/DCC.2013.65
C. Constantinescu, Joseph S. Glider, D. Simha, D. Chambliss
Abstract: Modern primary storage systems support, or intend to add support for, real-time compression, usually based on some flavor of the LZ77 and/or Huffman algorithms. There is a fundamental trade-off in adding real-time (adaptive) compression to such a system: for good compression, the independently compressed blocks should be large; for fast reads from random locations, the blocks should be small. One idea is to let the independently compressed blocks be large but to start decompressing the needed part of a block from a random location inside the compressed block. We explore this idea and compare it with a few alternatives, experimenting with the zlib code base.
Citations: 0
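The small-independent-blocks side of this trade-off can be sketched with zlib directly. This is a minimal illustration of block-wise random access, the baseline the paper's mid-block-entry idea is weighed against; function names and the offset index are our own.

```python
import zlib

def compress_blocks(data: bytes, block_size: int):
    """Compress fixed-size blocks independently and record the byte offset
    of each compressed block, so any block can later be read alone."""
    blob, offsets = b"", []
    for i in range(0, len(data), block_size):
        offsets.append(len(blob))
        blob += zlib.compress(data[i:i + block_size])
    offsets.append(len(blob))  # sentinel marking the end of the last block
    return blob, offsets

def read_block(blob: bytes, offsets, k: int) -> bytes:
    """Random access: decompress only block k using the offset index."""
    return zlib.decompress(blob[offsets[k]:offsets[k + 1]])

data = bytes(range(256)) * 4
blob, offsets = compress_blocks(data, 256)
```

Because each block carries its own zlib stream (header, fresh dictionary), shrinking block_size speeds up random reads but costs compression ratio, which is exactly the tension the abstract describes.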
Scalable Video Coding Extension for HEVC
2013 Data Compression Conference. Pub Date: 2013-03-20. DOI: 10.1109/DCC.2013.27
Jianle Chen, K. Rapaka, Xiang Li, V. Seregin, Liwei Guo, M. Karczewicz, G. V. D. Auwera, J. Solé, Xianglin Wang, Chengjie Tu, Ying Chen, R. Joshi
Abstract: This paper describes a scalable video codec submitted in response to the joint call for proposals issued by ISO/IEC MPEG and ITU-T VCEG on the HEVC scalable extension. The proposed codec uses a multi-loop decoding structure. Several inter-layer texture prediction methods are employed to remove inter-layer redundancy. Inter-layer prediction is also used when coding enhancement-layer syntax elements such as motion parameters and intra prediction modes, to further reduce bit overhead. Additionally, alternative transforms and adaptive coefficient scanning are used to code the prediction residues more efficiently. Experimental results demonstrate the effectiveness of the proposed scheme: compared to HEVC single-layer coding, the additional rate overhead of the proposed scalable extension is 1.2% to 6.4% to achieve two layers of SNR and spatial scalability.
Citations: 3