{"title":"Visually Lossless Compression of Windowed Images","authors":"Tony Leung, M. Marcellin, A. Bilgin","doi":"10.1109/DCC.2013.84","DOIUrl":"https://doi.org/10.1109/DCC.2013.84","url":null,"abstract":"Summary form only given. Visually lossless image compression methods aim to compress images while ensuring that compression distortions are below perceptible levels. In medical imaging applications where high bit-depth images are often displayed on lower bit-depth displays, adjustments of image brightness and contrast during display are very common. In these applications, radiologists often change the display window level and width to view different ranges of intensities within the full range to better visualize diverse types of tissue in the image. However, when an image created to be visually lossless at a particular display setting is manipulated prior to display, compression distortions that were initially invisible may become visible. Similarly, compression artifacts that would be visible in certain window settings can be invisible in others, creating opportunities for the compression algorithm to allow increased compression distortion with corresponding increases in compression ratios. In this work, the effects of window level and window width adjustments on visibility thresholds were investigated. A JPEG2000 based image compression method to achieve visually lossless compression for a given window level and width was then proposed. A validation study was performed to confirm that the images obtained using the proposed method cannot be distinguished from original windowed images. The proposed compression method was also extended to a client-server setting where the server transmits incremental data to the client to ensure visually lossless representation after adjustments to the window level and width are made at the client side. The proposed incremental compression method was compared to a reference compression system where an 8-bit image corresponding to the desired window settings is created from a 12-bit CT image first at the encoder. This image is then compressed to achieve visually lossless compression using the methods described in. When the window settings are updated, a new 8-bit image corresponding to the updated window settings is created and compressed in a visually lossless manner. A comparison of the two methods illustrate that while the reference system is more efficient when the display settings are changed only once, the proposed method is advantageous when the display settings are changed more than once, requiring only 18% of the data transmitted by the reference system at the end of seven window setting adjustments.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123354008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Low Bit-Rate Subpixel-Based Color Image Compression","authors":"Lu Fang, Ngai-Man Cheung, O. Au, Houqiang Li, Ketan Tang","doi":"10.1109/DCC.2013.70","DOIUrl":"https://doi.org/10.1109/DCC.2013.70","url":null,"abstract":"We propose a novel low bit-rate compression scheme with sub pixel-based down-sampling and reconstruction (SPDR) for full color images. In the encoder stage, a decoder-dependent multi-channel sub pixel-based down-sampling is proposed, which is more effective in retaining high frequency detail than conventional pixel-based process. The decoder first decompresses the low-resolution image and then up-converts it to the original resolution using encoder dependent sub pixel-based reconstruction scheme by jointly considering the sub pixel-based down-sampling effect and the compression degradation. Compared to existing algorithms with comparable encoder and decoder complexity, the proposed SPDR offers complete standard compliance, competitive rate-distortion performance, and superior subjective quality.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114143051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Motion-Adaptive Transforms Based on Vertex-Weighted Graphs","authors":"Du Liu, M. Flierl","doi":"10.1109/DCC.2013.23","DOIUrl":"https://doi.org/10.1109/DCC.2013.23","url":null,"abstract":"Motion information in image sequences connects pixels that are highly correlated. In this paper, we consider vertex-weighted graphs that are formed by motion vector information. The vertex weights are defined by scale factors which are introduced to improve the energy compaction of motion-adaptive transforms. Further, we relate the vertex-weighted graph to a subspace constraint of the transform. Finally, we propose a subspace-constrained transform (SCT) that achieves optimal energy compaction for the given constraint. The subspace constraint is derived from the underlying motion information only and requires no additional information. Experimental results on energy compaction confirm that the motion-adaptive SCT outperforms motion-compensated orthogonal transforms while approaching the theoretical performance of the Karhunen Loeve Transform (KLT) along given motion trajectories.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121767866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Low-Complexity Global Motion for AVC and HEVC Coders","authors":"J. Sievers","doi":"10.1109/DCC.2013.99","DOIUrl":"https://doi.org/10.1109/DCC.2013.99","url":null,"abstract":"This paper presents a low-complexity approach to deriving Region Of Interest vectors for an image based on fast sorting and histogram binning indexed by the components of the vector. Using the ROI vector list as guidance, a motion estimation algorithm can go through the motion vector field for the frame and align vectors with high global cost to the ROI vectors. This technique reduces global cost by incorporating potential unrealized benefit of non-causal vectors. Incorporating global benefit allows higher ? values to be used when Lagrangian approaches are used to determine motion vector cost, and therefore images can be coded with lower motion vector costs while still maintaining motion alignment on object boundaries.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115168503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Model Correction for Cross-Channel Chroma Prediction","authors":"Christophe Gisquet, E. François","doi":"10.1109/DCC.2013.10","DOIUrl":"https://doi.org/10.1109/DCC.2013.10","url":null,"abstract":"A new inter-channel coding mode named LM mode has been intensively explored in the HEVC standardization project. This mode predicts the chroma signal from the luma signal using a linear model whose parameters are inferred from neighboring reconstructed luma and chroma samples. Although this mode presents very good coding efficiency, it is observed that ill cases in the linear parameters calculation can be detected and fixed. This paper gives an overview of the LM mode and presents a novel model correction scheme to detect and correct those ill cases. Simulation results show significant bit-rate savings by the proposed correction scheme with limited added complexity.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128047212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computed Tomography Image Coding through Air Filtering in the Wavelet Domain","authors":"Juan Munoz-Gomez, Joan Bartrina-Rapesta, Francesc Aulí Llinàs, J. Serra-Sagristà","doi":"10.1109/DCC.2013.92","DOIUrl":"https://doi.org/10.1109/DCC.2013.92","url":null,"abstract":"Computed Tomography (CT) devices irradiate a (human) body with controlled amounts of X-ray to produce an image where different substance (lung, tissue, vessels, etc.) can be identified unequivocally. Commonly, CT devices also capture areas that do not belong to the human body. Such areas are referred to as air pixels, and may contain imaging artifacts. The air pixels are irrelevant for the medical diagnostic and provoke an important degradation in coding efficiency. In order to improve coding performance, we propose an air filtering technique based on a thresholding in the wavelet domain. The thresholds are determined through the existing relation between wavelet coefficients and image samples, which can be expressed in terms of a probability function. The proposed scheme filters air pixels in the wavelet domain by removing coefficients that are below a given threshold. The thresholds are estimated for different resolution levels and subbands, obtaining a probability of 70% to correctly filter air pixels. Although the proposed technique introduces an slight distortion in terms of RMSE in the biological area, this distortion is negligible compared with the state-of-the-art HDCS filter. These results suggest that the rate-distortion coding performance of our proposal and HDCS outperform significantly the coding performance of JPEG2000. In addition, Table 1 provides the RMSE of the HDCS and our proposal when compared with the original image, indicating that our proposal introduces much less RMSE distortion.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132635819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Parametric Merge Candidate for High Efficiency Video Coding","authors":"M. Tok, Marko Esche, A. Glantz, A. Krutz, T. Sikora","doi":"10.1109/DCC.2013.11","DOIUrl":"https://doi.org/10.1109/DCC.2013.11","url":null,"abstract":"Block based motion compensated prediction still is the main technique used for temporal redundancy reduction in modern hybrid video codecs. However, the resulting motion vector fields are highly redundant as well. So, motion vector prediction and difference coding are used to compress such vector fields. A drawback of common motion vector prediction techniques is their inability to predict complex motion such as rotation and zoom in an efficient way. We present a novel Merge candidate for improving already existing vector prediction techniques based on higher order motion models to overcome this issue. To transmit the needed models, an efficient compression scheme is utilized. The improvement results in bit rate savings of 1.7% in average and up to 4% respectively.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115554907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quadratic Similarity Queries on Compressed Data","authors":"A. Ingber, T. Courtade, T. Weissman","doi":"10.1109/DCC.2013.52","DOIUrl":"https://doi.org/10.1109/DCC.2013.52","url":null,"abstract":"The problem of performing similarity queries on compressed data is considered. We study the fundamental tradeoff between compression rate, sequence length, and reliability of queries performed on compressed data. For a Gaussian source and quadratic similarity criterion, we show that queries can be answered reliably if and only if the compression rate exceeds a given threshold - the identification rate - which we explicitly characterize. When compression is performed at a rate greater than the identification rate, responses to queries on the compressed data can be made exponentially reliable. We give a complete characterization of this exponent, which is analogous to the error and excess-distortion exponents in channel and source coding, respectively. For a general source, we prove that the identification rate is at most that of a Gaussian source with the same variance. Therefore, as with classical compression, the Gaussian source requires the largest compression rate. Moreover, a scheme is described that attains this maximal rate for any source distribution.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"202 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124931548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High Compression Rate and Ratio Using Predefined Huffman Dictionaries","authors":"Amit Golander, S. Tahar, Lior Glass, G. Biran, Sagi Manole","doi":"10.1109/DCC.2013.119","DOIUrl":"https://doi.org/10.1109/DCC.2013.119","url":null,"abstract":"Current Huffman coding modes are optimal for a single metric: compression ratio (quality) or rate (performance). We recognize that real life data can usually be classified to families of data types and thus the Huffman dictionary can be reused instead of recalculated. In this paper, we show how to balance the trade-off between compression ratio and rate, without modifying existing standards and legacy decompression implementations.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129315480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Compressing Huffman Models on Large Alphabets","authors":"G. Navarro, Alberto Ordóñez Pereira","doi":"10.1109/DCC.2013.46","DOIUrl":"https://doi.org/10.1109/DCC.2013.46","url":null,"abstract":"A naive storage of a Huffman model on a text of length n over an alphabet of size σ requires O(σlog n) bits. This can be reduced to σ logσ + O(σ) bits using canonical codes. This overhead over the entropy can be significant when σ is comparable to n, and it also dictates the amount of main memory required to compress or decompress. We design an encoding scheme that requires σlog log n+O(σ+log2 n) bits in the worst case, and typically less, while supporting encoding and decoding of symbols in O(log log n) time. We show that our technique reduces the storage size of the model of state-of-the-art techniques to around 15% in various real-life sequences over large alphabets, while still offering reasonable compression/decompression times.","PeriodicalId":388717,"journal":{"name":"2013 Data Compression Conference","volume":"148 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116016730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}