{"title":"Progressive Dictionary Learning with Hierarchical Structure for Scalable Video Coding","authors":"Xin Tang, Wenrui Dai, H. Xiong","doi":"10.1109/DCC.2015.28","DOIUrl":"https://doi.org/10.1109/DCC.2015.28","url":null,"abstract":"To enable learning-based video coding for transmission over heterogenous networks, this paper proposes a scalable video coding framework by progressive dictionary learning. With the hierarchical B-picture prediction structure, the inter-predicted frames would be reconstructed in terms of the spatio-temporal dictionary in a successive sense. Within the progressive dictionary learning, the training set is enriched with the samples from the reconstructed frames in the coarse layer. Through minimizing the expected cost, the stochastic gradient descent is leveraged to update the dictionary for practical coding. It is demonstrated that the learning-based scalable framework can effectively guarantee the consistency of motion trajectory with the well-designed spatio-temporal dictionary.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130143607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptual-Based Distributed Compressed Video Sensing","authors":"S. Elsayed, M. Elsabrouty","doi":"10.1109/DCC.2015.40","DOIUrl":"https://doi.org/10.1109/DCC.2015.40","url":null,"abstract":"This paper proposes an approach of compressed sensing (CS) of video in which distributed video coding DVC and CS are integrated as in [1], and the sensing matrix is modulated in suit of [2] but with proposed fixed weighting strategy to certain DCT coefficients in an effort to improve the visual quality of reconstruction.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126833643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"It's Been 1,000,000 Years Since Huffman","authors":"Alistair Moffat","doi":"10.1109/DCC.2015.87","DOIUrl":"https://doi.org/10.1109/DCC.2015.87","url":null,"abstract":"Summary form only given. Huffman codes are legendary in the computing disciplines, and are embedded in a wide range of critically important communications and storage codecs. With 2015 marking the 64th anniversary of their development -- 1,000,000 years in binary -- it is timely to review Huffman and related codes, and the many mechanisms that have been developed for computing and deploying them.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133766331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive Prediction with Switched Models","authors":"Sameer Sheorey, Alrik Firl, Hai Wei, Jesse Mee","doi":"10.1109/DCC.2015.78","DOIUrl":"https://doi.org/10.1109/DCC.2015.78","url":null,"abstract":"Lossless image compression is particularly important in applications requiring high fidelity such as medical imaging, remote sensing and scientific imaging. These applications cannot tolerate the minute artifacts that are caused by lossy compression methods. We first describe a new predictor for lossless image compression based on plane fitting. Our main contribution is an adaptive model switching algorithm that locally selects the best predictor for each pixel based on context. Our experiments show that the new predictor substantially outperform common lossless methods such as CALIC, JPEG-LS, CCSDS SZIP and SFALIC for various medical images of different modalities (including CT and MR images) and bit depths. The simplicity and inherently parallel nature of the model switching algorithm makes a very fast implementation possible.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134074199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Document Counting in Compressed Space","authors":"T. Gagie, Aleksi Hartikainen, Juha Kärkkäinen, G. Navarro, S. Puglisi, Jouni Sirén","doi":"10.1109/DCC.2015.55","DOIUrl":"https://doi.org/10.1109/DCC.2015.55","url":null,"abstract":"We address the problem of counting the number of strings in a collection where a given pattern appears, which has applications in information retrieval and data mining. Existing solutions are in a theoretical stage. In this pa-per we implement these solutions and explore compressed variants, aiming to reduce data structure size. Our main result is to uncover some unexpected compressibility properties of the fastest known data structure for the problem. By taking advantage of these properties, we can reduce the size of the structure by a factor of 5-400, depending on the dataset.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115191838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient Set Operations over k2-Trees","authors":"N. Brisaboa, Guillermo de Bernardo, Gilberto Gutiérrez, Susana Ladra, Miguel R. Penabad, Brunny A. Troncoso","doi":"10.1109/DCC.2015.9","DOIUrl":"https://doi.org/10.1109/DCC.2015.9","url":null,"abstract":"k2-trees have been proved successful to represent in avery compact way different kinds of binary relations, such as web graphs, RDFs or raster data. In order to be a fully functional succinct representation for these domains, the k2-tree must support all the required operations for binary relations. In their original description, the authors include how to answer some of the most relevant queries over the k2-tree. In this paper, we extend this functionality and detail the algorithms to efficiently compute the k2-tree resulting from the union, intersection, difference or complement of binary relations represented using k2-trees.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128399315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Incremental Locality and Clustering-Based Compression","authors":"L. Krcál, J. Holub","doi":"10.1109/DCC.2015.23","DOIUrl":"https://doi.org/10.1109/DCC.2015.23","url":null,"abstract":"Current compression solutions either use a limited size locality-based context or the entire input, to which the compressors adapt. This results in suboptimal compression effectiveness due to missing similarities further apart in the former case, or due to too generic adaptation. There are many deduplication and near deduplication systems that search for similarity across the entire input. Although most of these systems excel with their simplicity and speed, none of those goes deeper in terms of full-scale redundancy removal. We propose a novel compression and archival system called ICBCS. Our system goes beyond standard measures for similarity detection, using extended similarity hash and incremental clustering techniques to determine groups of sufficiently similar chunks designated for compression. ICBCS outperforms conventional file compression solutions on datasets consisting of at least mildly redundant files. It also shows that selective application of weak compressor results in better compression ratio and speed than conventional application of a strong compressor.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126076595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Fast Algorithm for Adaptive Motion Compensation Precision in Screen Content Coding","authors":"Bin Li, Jizheng Xu","doi":"10.1109/DCC.2015.17","DOIUrl":"https://doi.org/10.1109/DCC.2015.17","url":null,"abstract":"Fractional-pel motion compensation is very good at improving video coding efficiency, especially for camera-captured content. But for screen content, which is obtained from a computer desktop, motion vectors with integer-precision may be enough to represent the motion in different pictures. Using fractional-pel motion compensation for such content is a waste of bits. Thus, adaptive motion compensation precision is helpful for improving coding efficiency, especially for screen content coding. Usually, to select suitable motion compensation precision, multi-pass encoding is introduced, which significantly increases the encoding time. This paper presents a fast encoding algorithm for adaptive motion compensation precision used in screen content coding by hash-based block matching. With the proposed method, multi-pass encoding is avoided and most of the benefits brought by adaptive motion compensation precision are preserved. The experimental results show that with the proposed method, up to 7.7% bit saving is obtained without a significant impact on encoding time.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116644253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast HEVC Intra Mode Decision Based on Edge Detection and SATD Costs Classification","authors":"Mohammadreza Jamali, S. Coulombe, François Caron","doi":"10.1109/DCC.2015.21","DOIUrl":"https://doi.org/10.1109/DCC.2015.21","url":null,"abstract":"The recent High Efficiency Video Coding (HEVC) standard was designed to achieve significantly improved compression performance compared to the widely used H.264/AVC standard. This achievement was motivated by the ever-increasing popularity of high-definition video applications and the emergence of ultra-HD. Unfortunately, this comes at the expense of a significant increase in computational complexity for both inter and intra coding. To alleviate this problem, in this paper, we propose a fast intra mode decision method based on improved edge detection, consideration of most relevant modes from neighboring blocks, and classification of SATD costs permitting the elimination of several candidate modes prior to rate distortion optimization (RDO). Experimental results show that the proposed method provides time reduction up to 39.2% and an average 35.6% with negligible quality loss as compared to the HEVC reference implementation HM 15.0.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129187413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generalized Context Transformations -- Enhanced Entropy Reduction","authors":"M. Vasinek, J. Platoš","doi":"10.1109/DCC.2015.56","DOIUrl":"https://doi.org/10.1109/DCC.2015.56","url":null,"abstract":"Context transformations is a very simple data transformation method that we presented recently and it is used to decrease uncertainty in input data. The transformation is based on exchange of two different di-grams. This paper is focused on new consequences of the relationships discovered subsequently. We were able to find a mathematical model which predicts the efficiency of each transformation. The new type of the transformation, Generalized context transformation, developed recently is more efficient than the previous one and it is able to remove almost all redundancy based on the symbols mutual information. The newly developed algorithm is computationally and entropic ally more efficient than the previous one.","PeriodicalId":313156,"journal":{"name":"2015 Data Compression Conference","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129321460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}