2018 Data Compression Conference — Latest Publications

Fixed-Rate Zero-Delay Source Coding for Stationary Vector-Valued Gauss-Markov Sources
2018 Data Compression Conference Pub Date : 2018-07-19 DOI: 10.1109/DCC.2018.00034
Photios A. Stavrou, Jan Østergaard
We consider a fixed-rate zero-delay source coding problem where a stationary vector-valued Gauss-Markov source is compressed subject to an average mean-squared error (MSE) distortion constraint. We address the problem by considering the Gaussian nonanticipative rate distortion function (NRDF), which is a lower bound to the zero-delay Gaussian RDF. Then, we use its corresponding optimal "test-channel" to characterize the stationary Gaussian NRDF and evaluate the corresponding information rates. We show that the Gaussian NRDF can be achieved by p-parallel fixed-rate scalar uniform quantizers of finite support with a dithering signal, up to a multiplicative distortion factor and a constant rate penalty. We demonstrate our framework with a numerical example.
Citations: 6
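The dithered uniform quantizers in the abstract can be illustrated with a minimal subtractive-dither sketch. This shows only the generic dithering identity, not the paper's p-parallel construction; the step size and scalar Gaussian source here are arbitrary choices for the demo:

```python
import random

def dithered_quantize(x, step, dither):
    """Subtractive dither: quantize x + dither, then subtract the dither back."""
    q = step * round((x + dither) / step)
    return q - dither

random.seed(0)
step = 0.5
samples = [random.gauss(0.0, 1.0) for _ in range(10000)]
errors = []
for x in samples:
    d = random.uniform(-step / 2, step / 2)  # known to encoder and decoder
    errors.append(dithered_quantize(x, step, d) - x)

# With subtractive dither the reconstruction error is uniform on
# [-step/2, step/2] and independent of the source, so the MSE is step**2/12.
mse = sum(e * e for e in errors) / len(errors)
print(round(mse, 4), round(step ** 2 / 12, 4))
```

The source-independent, uniform error is what makes dithered quantization analytically tractable as a "test channel" with additive noise.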
The Multi-Scale Deep Decoder for the Standard HEVC Bitstreams
2018 Data Compression Conference Pub Date : 2018-03-27 DOI: 10.1109/DCC.2018.00028
Tingting Wang, Wenhui Xiao, Mingjin Chen, Hongyang Chao
As is well known, there is strong multi-scale similarity among video frames, yet almost none of the current video coding standards takes this similarity into consideration. Utilizing multi-scale information at the encoder end raises two major problems: the extra motion models and overhead brought by new motion parameters, and the extreme increase in encoding complexity. Is it possible to employ the multi-scale similarity only at the decoder end to improve decoded video quality, i.e., to further boost coding efficiency? This paper studies this question by proposing a novel Multi-Scale Deep Decoder (MSDD) for HEVC. Benefiting from the efficiency of deep learning techniques (Convolutional Neural Networks and Long Short-Term Memory networks), MSDD achieves higher coding efficiency at the decoder end alone, without changing any encoding algorithm. Extensive experiments validate the feasibility and effectiveness of MSDD, which yields average BD-rate reductions of 6.5%, 8.0%, 6.4%, and 6.7% over the HEVC anchor for the AI, LP, LB, and RA coding configurations respectively. For videos with strong multi-scale similarity in particular, the proposed approach clearly improves coding efficiency.
Citations: 12
Engineering Compressed Static Functions
2018 Data Compression Conference Pub Date : 2018-03-27 DOI: 10.1109/DCC.2018.00013
M. Genuzio, S. Vigna
Recent advances in the compact representation of static functions (with constant access time) have made it possible to fully exploit constructions based on random linear systems. Such constructions, albeit theoretically appealing, were previously too slow to be usable. In this paper, we extend these techniques to the problem of storing compressed static functions, in the sense that the space used per key should be close to the entropy of the list of values. From a theoretical viewpoint, we are inspired by the approach of Hreinsson, Krøyer, and Pagh. Values are represented using a near-optimal instantaneous code. Then, a bit array is created so that by XOR'ing its content at a fixed number of positions depending on the key one obtains the value, represented by its associated codeword. In the construction phase, every bit of the array is associated with an equation over Z/2Z, and solving the associated system provides the desired representation. Thus, we pass from one equation per key (the non-compressed case) to one equation per bit: the size of the system is approximately multiplied by the empirical entropy of the values, making the problem much more challenging. We show that by carefully engineering the value representation we can obtain a practical data structure. For example, we can store a function with geometrically distributed output in just 2.28 bits per key, independently of the key set, with roughly double the construction time of a state-of-the-art non-compressed function (which requires ≈ log log n bits per key, where n is the number of keys) and slightly improved lookup time. We can also store a function whose output ranges over 10^6 values distributed following a power law of exponent 2 in just 2.75 bits per key, whereas a non-compressed function would require more than 20, with a threefold increase in construction time and significantly faster lookups.
Citations: 2
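The XOR construction described in the abstract can be sketched at toy scale. The following is a minimal 1-bit-value illustration, not the authors' engineered structure: the MD5-based hashing, the array size m, and the retry loop are assumptions made for the demo, and real compressed static functions encode multi-bit values with near-optimal instantaneous codes:

```python
import hashlib
import random

def build_xor_function(keys, values, m, k=3, seed=0):
    """Store a 1-bit static function as an m-bit array: the XOR of the bits
    at k hashed positions of a key must equal that key's value (one equation
    per key over Z/2Z, solved by Gaussian elimination on bitmask rows)."""
    rnd = random.Random(seed)
    salts = [rnd.randrange(1 << 30) for _ in range(k)]

    def positions(key):
        return [int(hashlib.md5(f"{salt}:{key}".encode()).hexdigest(), 16) % m
                for salt in salts]

    pivots = {}  # pivot bit position -> reduced (mask, rhs)
    for key, v in zip(keys, values):
        mask = 0
        for p in positions(key):
            mask ^= 1 << p  # repeated positions cancel over Z/2Z
        while mask:
            p = mask.bit_length() - 1
            if p not in pivots:
                pivots[p] = (mask, v)
                break
            pmask, pv = pivots[p]
            mask ^= pmask
            v ^= pv
        else:
            if v:
                raise ValueError("inconsistent system; retry with another seed")

    # Back-substitution, ascending pivots: all lower bits are already decided
    # (free variables stay 0).
    bits = [0] * m
    for p in sorted(pivots):
        mask, v = pivots[p]
        acc = v
        rest = mask ^ (1 << p)
        while rest:
            q = rest.bit_length() - 1
            acc ^= bits[q]
            rest ^= 1 << q
        bits[p] = acc

    def lookup(key):
        acc = 0
        for p in positions(key):
            acc ^= bits[p]
        return acc
    return lookup

keys = ["alpha", "beta", "gamma", "delta", "epsilon"]
vals = [1, 0, 1, 1, 0]
f = None
for seed in range(20):  # retry the rare unsolvable random system
    try:
        f = build_xor_function(keys, vals, m=16, seed=seed)
        break
    except ValueError:
        pass
print([f(k) for k in keys])
```

Note the scaling issue the paper tackles: with coded multi-bit values the system has one equation per codeword bit, not per key, so the solver must be engineered far more carefully than this sketch.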
Rate Allocation for Motion Compensated JPEG2000
2018 Data Compression Conference Pub Date : 2018-03-27 DOI: 10.1109/DCC.2018.00015
Jose Carmelo Maturana-Espinosa, V. Ruiz, J. Ortiz, D. Muller
This work proposes the video codec MCJ2K (Motion Compensated JPEG2000), which is based on Motion Compensated Temporal Filtering (MCTF) and JPEG2000 (J2K). MCJ2K exploits the temporal redundancy present in most videos, thereby increasing rate/distortion performance, and generates a collection of temporal subbands which are compressed with J2K. MCJ2K code-streams can be managed by standard JPIP (J2K Interactive Protocol) servers.
Citations: 0
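MCTF's temporal subband decomposition can be illustrated with the simplest case, a one-level Haar filter without motion compensation (MCJ2K motion-compensates frames before filtering; this sketch omits that step, and the frames here are tiny made-up 1-D signals):

```python
def haar_mctf(frames):
    """One level of (motion-free) Haar temporal filtering: pair consecutive
    frames into a low-pass average subband and a high-pass difference
    subband, which a codec like J2K can then compress independently."""
    lows, highs = [], []
    for a, b in zip(frames[0::2], frames[1::2]):
        lows.append([(x + y) / 2 for x, y in zip(a, b)])
        highs.append([(y - x) / 2 for x, y in zip(a, b)])
    return lows, highs

frames = [[10, 12], [11, 13], [20, 20], [22, 18]]
lows, highs = haar_mctf(frames)
print(lows, highs)
```

The high-pass subbands are small wherever consecutive frames are similar, which is where the rate savings come from; rate allocation then decides how many bits each temporal subband receives.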
Detail-Aware Image Decomposition for an HEVC-Based Texture Synthesis Framework
2018 Data Compression Conference Pub Date : 2018-03-27 DOI: 10.1109/DCC.2018.00083
Bastian Wandt, Thorsten Laude, B. Rosenhahn, J. Ostermann
Modern video coding standards like High Efficiency Video Coding (HEVC) provide superior coding efficiency. However, this does not hold for complex, hard-to-predict textures, which require high bit rates to achieve high quality. To overcome this limitation of HEVC, texture synthesis frameworks were proposed in previous works. These frameworks only yield good reconstruction quality, however, if the decomposition into synthesizable and non-synthesizable regions is either known or trivial; they fail for more challenging content, e.g. content with fine non-synthesizable details within synthesizable regions. To enable high-quality texture synthesis-based video coding for such content, we propose sophisticated detail-aware decomposition techniques in this paper. These techniques are based on an initial coarse segmentation step followed by a refinement step that detects even small differences in the previously segmented region. With this new approach, we achieve average luma BD-rate gains of 13.77% over HEVC and 3.03% over the closest related work from the literature. Furthermore, comprehensive subjective tests confirm the considerably improved visual quality in addition to the bit rate savings.
Citations: 0
Fast Algorithm for HEVC Intra Prediction Based on Adaptive Mode Decision and Early Termination of CU Partition
2018 Data Compression Conference Pub Date : 2018-03-27 DOI: 10.1109/DCC.2018.00087
Mengmeng Zhang, Xiaojun Zhai, Zhi Liu, Changzhi An
High Efficiency Video Coding (HEVC) introduces 35 intra prediction modes and a flexible quad-tree coding structure, which remarkably improve coding efficiency. However, in intra prediction, the cost computation in the Rough Mode Decision (RMD) and Rate Distortion Optimization (RDO) processes incurs considerably higher complexity than H.264. To deal with this problem, a modified RMD process is proposed, in which all 35 prediction modes are divided into groups according to their phase angle in order to reduce the number of candidate modes.
Citations: 3
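The idea of grouping angular modes to prune RMD candidates can be sketched as follows. The bin count and the equal-angle rule are illustrative assumptions, not the paper's actual grouping (and the planar/DC modes 0-1 are left out of the binning):

```python
def group_modes_by_angle(num_groups=8):
    """Bin the 33 HEVC angular intra modes (2..34), which sweep roughly 180
    degrees, into equal angular groups so that RMD can first evaluate one
    representative per group instead of all 33 modes."""
    groups = [[] for _ in range(num_groups)]
    for mode in range(2, 35):
        groups[(mode - 2) * num_groups // 33].append(mode)
    return groups

groups = group_modes_by_angle()
# Pick the middle mode of each group as its RMD representative.
reps = [g[len(g) // 2] for g in groups]
print(reps)
```

After the winning group is found, only its members (plus planar and DC) need full RMD cost evaluation, cutting the candidate count from 35 to roughly a dozen.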
High Efficient Snake Order Pseudo-Sequence Based Light Field Image Compression
2018 Data Compression Conference Pub Date : 2018-03-27 DOI: 10.1109/DCC.2018.00050
Hadi Amirpour, Manuela Pereira, A. Pinheiro
Light fields capture a large number of samples of light rays in both intensity and direction, enabling post-processing applications such as refocusing, viewpoint shifting, and depth estimation. However, they are represented by a huge amount of data and require a highly efficient coding scheme for compression. In this paper, light field raw image data is decomposed into multiple views and used as a pseudo-sequence input for state-of-the-art codecs such as High Efficiency Video Coding (HEVC). To better exploit the redundancy between neighboring views and decrease the distance between the current view and its references, instead of using conventional scan orders, the views are divided into four smaller regions and each region is scanned in a snake order. Furthermore, an appropriate referencing structure matching this ordering is defined that selects only adjacent views as references. Simulation results show that the rate-distortion performance of the proposed method exceeds that of other state-of-the-art light field compression methods.
Citations: 22
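The four-region snake scan can be sketched as follows; the exact region layout and scan directions in the paper may differ, so treat this as one plausible reading of the ordering:

```python
def snake_order(rows, cols, row0=0, col0=0):
    """Boustrophedon scan: left-to-right on even rows, right-to-left on odd,
    so consecutive entries are always spatially adjacent."""
    order = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            order.append((row0 + r, col0 + c))
    return order

def four_region_snake(n):
    """Split an n x n grid of light field views into four quadrants and
    snake-scan each, yielding the pseudo-sequence coding order."""
    h = n // 2
    order = []
    for r0, c0 in [(0, 0), (0, h), (h, 0), (h, h)]:
        order += snake_order(h, h, r0, c0)
    return order

order = four_region_snake(4)
print(order)
```

Because consecutive views in this order are neighbors in the view grid, the inter-view motion the codec must model stays small, which is the source of the reported rate-distortion gain.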
Constant Delay Traversal of Compressed Graphs
2018 Data Compression Conference Pub Date : 2018-03-27 DOI: 10.1109/DCC.2018.00011
S. Maneth, Fabian Peternek
We present a pointer-based data structure for constant-time traversal of the edges of an edge-labeled graph (alphabet Σ) given as a hyperedge-replacement grammar G. The grammar is assumed to have a fixed rank κ (the maximal number of nodes connected to a nonterminal hyperedge), and each node of the represented graph is assumed to be incident to at most one σ-edge per direction (σ ∈ Σ). Precomputing the data structure needs O(|G||Σ|κh) space and O(|G||Σ|κh²) time, where h is the height of the derivation tree of G.
Citations: 1
Efficient Processing of top-K Vector-Raster Queries Over Compressed Data
2018 Data Compression Conference Pub Date : 2018-03-27 DOI: 10.1109/DCC.2018.00063
Gilberto Gutiérrez, Susana Ladra, Juan R. Lopez, J. Paramá, Fernando Silva-Coira
In this work, we propose an efficient algorithm for retrieving the K polygons of a vector dataset that overlap cells of a raster dataset such that those K polygons overlap the highest (or lowest) cell values among all polygons.
Citations: 1
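A naive uncompressed baseline clarifies what the query computes: each polygon is scored by the highest raster value among the cells it overlaps, and the K best are kept. The toy data below is invented for the demo; the paper's contribution is answering this query directly over compressed representations of both datasets, without materializing them:

```python
import heapq

def top_k_polygons(polygons, raster, k):
    """Score each polygon by the maximum raster value over its overlapped
    cells, then return the k highest-scoring (score, name) pairs."""
    scored = []
    for name, cells in polygons.items():
        score = max(raster[r][c] for r, c in cells)
        scored.append((score, name))
    return heapq.nlargest(k, scored)

raster = [[1, 5, 2],
          [9, 3, 4],
          [2, 8, 6]]
polygons = {
    "A": [(0, 0), (0, 1)],          # best cell value: 5
    "B": [(1, 0), (2, 0)],          # best cell value: 9
    "C": [(2, 1), (2, 2), (1, 2)],  # best cell value: 8
}
print(top_k_polygons(polygons, raster, 2))
```

The "lowest cell values" variant simply replaces `max` with `min` and `nlargest` with `nsmallest`.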
Optimal Single- and Multiple-Tree Almost Instantaneous Variable-to-Fixed Codes
2018 Data Compression Conference Pub Date : 2018-03-27 DOI: 10.1109/DCC.2018.00058
Danny Dubé, Fatma Haddad
Variable-to-fixed codes are often based on dictionaries that obey the prefix-free property; in particular, the Tunstall algorithm builds such codes. However, the prefix-free property is not necessary for correct variable-to-fixed codes, and removing this constraint may offer the opportunity to build more efficient codes. Here, we revisit the almost instantaneous variable-to-fixed codes introduced by Yamamoto and Yokoo, who considered both single trees and multiple trees to perform the parsing of the source data. We show that, in some cases, their techniques build suboptimal codes, and we propose potential correctives. We also propose a new, completely different technique based on dynamic programming that builds optimal dictionary trees.
Citations: 2
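For contrast with the non-prefix-free codes studied here, the classic prefix-free Tunstall construction mentioned in the abstract can be sketched in a few lines (binary source alphabet and 2-bit codewords chosen for the demo):

```python
import heapq

def tunstall(probs, codeword_bits):
    """Classic Tunstall construction: grow a prefix-free parsing dictionary
    by repeatedly expanding the most probable leaf until no further full
    expansion fits within 2**codeword_bits dictionary entries."""
    max_leaves = 2 ** codeword_bits
    # Max-heap of (negated probability, parse string); start from the symbols.
    heap = [(-p, s) for s, p in probs.items()]
    heapq.heapify(heap)
    # Each expansion replaces one leaf with len(probs) children.
    while len(heap) + len(probs) - 1 <= max_leaves:
        negp, s = heapq.heappop(heap)
        for sym, p in probs.items():
            heapq.heappush(heap, (negp * p, s + sym))
    return sorted(s for _, s in heap)

dictionary = tunstall({"a": 0.7, "b": 0.3}, codeword_bits=2)
print(dictionary)
```

Every source string parses uniquely because the dictionary's entries are the leaves of a complete parsing tree; dropping that prefix-free requirement is exactly the freedom the paper exploits.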