2013 Data Compression Conference: Latest Publications

Color Gamut Scalable Video Coding
2013 Data Compression Conference | Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.29
L. Kerofsky, C. A. Segall, Seung-Hwan Kim
Abstract: This paper describes a scalable extension of the High Efficiency Video Coding (HEVC) standard that supports different color gamuts in an enhancement and base layer. Here, the emphasis is on scenarios with BT.2020 color gamut in an enhancement layer and BT.709 color gamut in the base layer. This is motivated by a need to provide content for both high definition and ultra-high definition devices in the near future. The paper describes a method for predicting the enhancement layer samples from a decoded base layer using a series of multiplies and adds to account for both color gamut and bit-depth changes. Results show an improvement in coding efficiency between 65% and 84% for luma (57% and 85% for chroma) compared to simulcast in quality (SNR) scalable coding.
Citations: 8
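
The multiply-add prediction the abstract mentions can be pictured as a per-component gain and offset applied after a bit-depth shift. A minimal numpy sketch; the gains, offsets, and clipping rule here are placeholders, not the parameters used in the paper:

```python
import numpy as np

def predict_enhancement(base, gains, offsets, base_bits=8, enh_bits=10):
    """Illustrative gain-offset inter-layer predictor: map decoded
    base-layer samples (e.g. BT.709, 8-bit) toward the enhancement
    layer (e.g. BT.2020, 10-bit) with one multiply and one add per
    color component, plus a bit-depth alignment shift."""
    shifted = base.astype(np.int32) << (enh_bits - base_bits)  # bit-depth change
    pred = shifted * gains + offsets                           # per-component multiply-add
    return np.clip(pred, 0, (1 << enh_bits) - 1).astype(np.uint16)

# Example with hypothetical per-component parameters on a 4x4 RGB block.
base = np.random.randint(0, 256, (4, 4, 3))
pred = predict_enhancement(base,
                           gains=np.array([1.05, 0.98, 1.02]),
                           offsets=np.array([2.0, -1.0, 0.0]))
```
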
Multi-Level Dictionary Used in Code Compression for Embedded Systems
2013 Data Compression Conference | Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.69
W. R. A. Dias, E. Moreno
Abstract: This paper presents an innovative and efficient approach to code compression. Our method reduces code size by up to 32.6% and 31.9% (including all extra costs) for the ARM and MIPS processors, respectively, and presents an improvement of almost 7% over the traditional Huffman method. We performed simulations and analyses using applications from the MiBench benchmark. Beyond these experiments, our method is orthogonal to approaches that take into account the particularities of a given instruction set architecture, making it independent of any specific architecture.
Citations: 1
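
Dictionary-based code compression of the kind evaluated here replaces frequent instruction words with short indices. A toy two-level sketch; the level sizes and escape coding are assumptions for illustration, not the paper's configuration:

```python
from collections import Counter

def build_dictionaries(instructions, l1_size=16, l2_size=240):
    """Split the instruction words by frequency: the hottest words get
    short level-1 indices, the next tier gets longer level-2 indices,
    and the rest are left uncompressed behind an escape marker."""
    freq = Counter(instructions).most_common()
    level1 = [w for w, _ in freq[:l1_size]]
    level2 = [w for w, _ in freq[l1_size:l1_size + l2_size]]
    return level1, level2

def encode(instructions, level1, level2):
    l1 = {w: i for i, w in enumerate(level1)}
    l2 = {w: i for i, w in enumerate(level2)}
    out = []
    for w in instructions:
        if w in l1:
            out.append(("L1", l1[w]))   # short codeword, e.g. prefix + 4 bits
        elif w in l2:
            out.append(("L2", l2[w]))   # longer codeword
        else:
            out.append(("RAW", w))      # escape: emit the word verbatim
    return out
```
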
Domain-Specific XML Compression
2013 Data Compression Conference | Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.90
John P. T. Moore, Antonio D. Kheirkhahzadeh, Jiva N. Bagale
Abstract: Our compression technique is an abstraction of Packed Encoding Rules and has been implemented in the Packedobjects structured data compression tool. Rather than trying to support a complex standard, we instead describe a very simple technique which allows us to implement a very lightweight encoder capable of compressing structured data represented in XML. We call this work Integer Encoding Rules (IER). The technique is based on a simple mapping of data values belonging to a set of data types to a series of integer values. The data values come from XML data and the data types come from XML Schema.
Citations: 5
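
The value-to-integer mapping at the heart of IER can be illustrated type by type. A minimal sketch with an invented type-descriptor format; Packedobjects' actual schema handling differs:

```python
def encode_value(value, xsd_type):
    """Map one typed XML value to a small integer, IER-style:
    booleans become 0/1, enumerations become their index in the
    schema's value list, bounded integers become offsets from the
    schema's lower bound (so the range fixes the bit width)."""
    kind = xsd_type["kind"]
    if kind == "boolean":
        return 1 if value == "true" else 0
    if kind == "enumeration":
        return xsd_type["values"].index(value)
    if kind == "integer":
        return int(value) - xsd_type["min"]
    raise ValueError("unsupported type")

# A value constrained to 0..255 by the schema needs only one byte.
assert encode_value("42", {"kind": "integer", "min": 0}) == 42
assert encode_value("red", {"kind": "enumeration", "values": ["red", "green"]}) == 0
```
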
3D Wavelet Encoder for Depth Map Data Compression
2013 Data Compression Conference | Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.88
M. Martínez-Rach, O. López, P. Piñol, Manuel P. Malumbres
Abstract: Depth Image Based Rendering (DIBR) is an effective approach for 3D-TV; however, the quality and temporal consistency of the depth map are a problem in this field. Our intermediate solution between Intra and Inter encoders is able to cope with the quality and temporal consistency of the captured depth map information. Our encoder achieves the same visual quality as H.264/AVC and x264 in Intra mode while reducing coding delays.
Citations: 1
Visually Lossless JPEG 2000 Decoder
2013 Data Compression Conference | Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.25
Leandro Jimenez-Rodriguez, Francesc Aulí Llinàs, M. Marcellin, J. Serra-Sagristà
Abstract: Visually lossless coding is a method through which an image is coded with numerical losses that are not noticeable by visual inspection. Contrary to numerically lossless coding, visually lossless coding can achieve high compression ratios. In general, visually lossless coding is approached from the point of view of the encoder, i.e., as a procedure devised to generate a compressed code stream from an original image. If an image has already been encoded to a very high fidelity (higher than visually lossless, perhaps even numerically lossless), it is not straightforward to create a just visually lossless version without fully re-encoding the image. However, for large repositories, re-encoding may not be a suitable option. A visually lossless decoder might be useful to decode, or to parse and transmit, only the data needed for visually lossless reconstruction. This work introduces a decoder for JPEG 2000 code streams that identifies and decodes the minimum amount of information needed to produce a visually lossless image. The main insights behind the proposed method are to estimate the variance of the code blocks before the decoding procedure, and to determine the visibility thresholds employing a well-known model from the literature. The main advantages are faster decoding and the possibility to transmit visually lossless images employing minimal bit rates.
Citations: 3
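
The decoder-side decision the abstract outlines is: estimate each code block's variance, derive a visibility threshold from it, and stop decoding coding passes once the residual error is estimated to fall below that threshold. A rough sketch; the threshold formula below is a stand-in, not the literature model the paper employs:

```python
import numpy as np

def truncation_point(pass_distortions, block_variance, csf_weight, t0=1.0):
    """Choose how many coding passes of one code block to decode.

    pass_distortions: estimated residual distortion after each decoded
    pass (a decreasing sequence). The (illustrative) visibility
    threshold grows with block variance (masking) and shrinks with the
    subband's contrast-sensitivity weight."""
    threshold = t0 * (1.0 + np.sqrt(block_variance)) / csf_weight
    for k, d in enumerate(pass_distortions):
        if d <= threshold:
            return k + 1               # decode passes 1..k+1, skip the rest
    return len(pass_distortions)       # all available passes are needed
```
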
Efficient Coding of Signal Distances Using Universal Quantized Embeddings
2013 Data Compression Conference | Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.33
P. Boufounos, S. Rane
Abstract: Traditional rate-distortion theory is focused on how to best encode a signal using as few bits as possible and incurring as low a distortion as possible. However, very often, the goal of transmission is to extract specific information from the signal at the receiving end, and the distortion should be measured on that extracted information. In this paper we examine the problem of encoding signals such that sufficient information is preserved about their pairwise distances. For that goal, we consider randomized embeddings as an encoding mechanism and provide a framework to analyze their performance. We also propose the recently developed universal quantized embeddings as a solution to that problem and experimentally demonstrate that, in image retrieval experiments, universal embeddings can achieve up to a 25% rate reduction over the state of the art.
Citations: 26
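
A universal quantized embedding applies a random projection and dither, then a non-monotonic one-bit quantizer (the parity of the quantization cell); the Hamming distance between codes then reflects signal distance up to a radius set by the quantization step. A sketch with illustrative dimensions and step size:

```python
import numpy as np

def universal_embedding(x, A, w, delta):
    """One-bit universal quantized embedding: project, dither, scale,
    then keep the parity of the quantization cell, Q(v) = floor(v/delta) mod 2."""
    return np.floor((A @ x + w) / delta).astype(int) & 1

rng = np.random.default_rng(0)
d, m, delta = 128, 64, 0.5                      # signal dim, code bits, step (illustrative)
A = rng.standard_normal((m, d))                 # random projection
w = rng.uniform(0, delta, m)                    # uniform dither
x = rng.standard_normal(d)
y = universal_embedding(x, A, w, delta)
near = universal_embedding(x + 0.01 * rng.standard_normal(d), A, w, delta)
print(np.mean(y != near))                       # small Hamming distance for a nearby signal
```
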
Inter-view Reference Frame Selection in Multi-view Video Coding
2013 Data Compression Conference | Pub Date: 2013-03-20 | DOI: 10.1109/DCC.2013.113
Guang Y. Zhang, Abdelrahman Abdelazim, S. Mein, M. Varley, D. Ait-Boudaoud
Abstract: Summary form only given. Multiple video cameras capture the same scene simultaneously to acquire multi-view video data, and the sheer volume of this data affects coding efficiency. Because the video data are acquired from the same scene, the inter-view similarities between adjacent camera views can be exploited for efficient compression. Generally, the same objects appear from different viewpoints in adjacent views. On the other hand, scenes contain objects at different depth planes, so perfect correlation over the entire image area never occurs. Additionally, scene complexity and differences in brightness and color between the videos of the individual cameras also affect whether the current block can find its best match in the inter-view reference picture. Consequently, the temporal reference picture is referred to more frequently. To improve compression efficiency, a core task is to disable unnecessary inter-view references. The idea of this paper is to exploit phase correlation to estimate the dependency between the inter-view reference and the current picture. If the two frames have low correlation, the inter-view reference frame is disabled. In addition, this approach works only on non-anchor pictures. Experimental results show that the proposed algorithm saves 16% of computational complexity on average, with negligible loss of quality and bit rate. The phase correlation process takes up only 0.1% of the whole process.
Citations: 1
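
Phase correlation itself is standard: the peak of the inverse FFT of the normalized cross-power spectrum of two frames. A sketch of the reference-selection test; the decision threshold is a placeholder, not a value from the paper:

```python
import numpy as np

def phase_correlation_peak(f1, f2, eps=1e-8):
    """Peak of the phase correlation surface between two same-size
    frames; near 1 for a pure shift, near 0 for unrelated content."""
    F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
    cross = F1 * np.conj(F2)
    corr = np.fft.ifft2(cross / (np.abs(cross) + eps))   # normalized cross-power spectrum
    return np.abs(corr).max()

def use_interview_reference(current, interview_ref, threshold=0.05):
    # Disable the inter-view reference when its correlation with the
    # current picture is weak (hypothetical threshold).
    return phase_correlation_peak(current, interview_ref) >= threshold
```
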
Computing Convolution on Grammar-Compressed Text
2013 Data Compression Conference | Pub Date: 2013-03-15 | DOI: 10.1109/DCC.2013.53
Toshiya Tanaka, T. I., Shunsuke Inenaga, H. Bannai, M. Takeda
Abstract: The convolution between a text string S of length N and a pattern string P of length m can be computed in O(N log m) time by FFT. It is known that various types of approximate string matching problems are reducible to convolution. In this paper, we assume that the input text string is given in a compressed form, as a straight-line program (SLP), which is a context free grammar in the Chomsky normal form that derives a single string. Given an SLP S of size n describing a text S of length N, and an uncompressed pattern P of length m, we present a simple O(nm log m)-time algorithm to compute the convolution between S and P. We then show that this can be improved to O(min{nm, N - α} log m) time, where α ≥ 0 is a value that represents the amount of redundancy that the SLP captures with respect to the length-m substrings. The key of the improvement is our new algorithm that computes the convolution between a trie of size r and a pattern string P of length m in O(r log m) time.
Citations: 13
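
For reference, the FFT convolution primitive that the string-matching reductions rely on, in its textbook O(N log N) form (the O(N log m) bound quoted above additionally splits the text into overlapping blocks of length 2m, which this sketch omits):

```python
import numpy as np

def text_pattern_convolution(text, pattern):
    """Correlate a text with a pattern via FFT: convolving the text
    with the reversed pattern makes position i of the result hold
    sum_j t[i+j] * p[j], the quantity approximate-matching reductions
    are built on."""
    t = np.array([ord(c) for c in text], dtype=float)
    p = np.array([ord(c) for c in pattern], dtype=float)
    n = len(t) + len(p) - 1
    size = 1 << (n - 1).bit_length()          # next power of two for the FFT
    conv = np.fft.irfft(np.fft.rfft(t, size) * np.fft.rfft(p[::-1], size), size)
    # conv[len(p)-1 + i] = sum_j t[i+j] * p[j] for each alignment i
    return conv[len(p) - 1 : len(t)]

scores = text_pattern_convolution("abracadabra", "abra")
```
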
Image Blocking Artifacts Reduction via Patch Clustering and Low-Rank Minimization
2013 Data Compression Conference | Pub Date: 2013-03-01 | DOI: 10.1109/dcc.2013.95
Jie Ren, Jiaying Liu, Mading Li, Wei Bai, Zongming Guo
Citations: 37
Compression of Optimal Value Functions for Markov Decision Processes
2013 Data Compression Conference | Pub Date: 2013-03-01 | DOI: 10.1109/DCC.2013.81
Mykel J. Kochenderfer, Nicholas Monath
Abstract: Summary form only given. A Markov decision process (MDP) is defined by a state space, action space, transition model, and reward model. The objective is to maximize accumulation of reward over time. Solutions can be found through dynamic programming, which generally involves discretization, resulting in significant memory and computational requirements. Although computer clusters can be used to solve large problems, many applications require that solutions be executed on less capable hardware. We explored a general method for compressing solutions in a way that preserves fast random-access lookups. The method was applied to an MDP for an aircraft collision avoidance system. In our problem, S consists of aircraft positions and velocities and A consists of resolution advisories provided by the collision avoidance system, with |S| > 1.5 × 10^6 and |A| = 10. The solution to an MDP can be represented by an |S| × |A| matrix specifying Q*(s,a), the expected return of the optimal strategy from s after executing action a. Since, on average, only 6.6 actions are available from every state in our problem, it is more efficient to use a sparse representation consisting of an array of the permissible values of Q*, organized into variable-length blocks with one block per state. An index provides offsets into this Q* array corresponding to the block boundaries, and an action array lists the actions available from each state. The values for Q* are stored using a 32-bit floating point representation, resulting in 534 MB for the three arrays associated with the sparse representation. Our method first converts to a 16-bit half-precision representation, sorts the state-action values within each block, adjusts the action array appropriately, and then removes redundant blocks. Although LZMA has a better compression ratio, it does not support real-time random-access decompression. The behavior of the proposed method was demonstrated in simulation with negligible impact on safety and operational performance metrics. Although this compression methodology was demonstrated on related MDPs with similar compression ratios, further work will apply this technique to other domains.
Citations: 10
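
The block-deduplication step can be sketched directly from the abstract: half-precision blocks, sorted, shared through an offset index, with random access preserved. Illustrative Python; in the full scheme the per-state action array is permuted to match the sort, which is noted but not shown here:

```python
import numpy as np

def compress_q(blocks):
    """Deduplicate per-state Q-value blocks: each distinct block of
    sorted half-precision values is stored once in a shared pool, and
    every state keeps only an index into that pool. Lookup stays O(1):
    follow the state's offset, read its block."""
    pool, offsets, seen = [], [], {}
    for q in blocks:                      # q: Q-values of one state's legal actions
        key = np.sort(q.astype(np.float16)).tobytes()
        if key not in seen:               # first occurrence of this block
            seen[key] = len(pool)
            pool.append(np.frombuffer(key, dtype=np.float16))
        offsets.append(seen[key])         # many states share one pool entry
    return pool, offsets

pool, offsets = compress_q([np.array([1.0, 2.0]), np.array([2.0, 1.0])])
assert len(pool) == 1                     # identical sorted blocks stored once
```
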