Latest publications from the 2015 Data Compression Conference

Compression-Aware Algorithms for Massive Datasets
2015 Data Compression Conference Pub Date : 2015-04-07 DOI: 10.1109/DCC.2015.74
Nathan Brunelle, G. Robins, Abhi Shelat
Abstract: While massive datasets are often stored in compressed format, most algorithms are designed to operate on uncompressed data. We address this growing disconnect by developing a framework for compression-aware algorithms that operate directly on compressed datasets. Synergistically, we also propose new algorithmically-aware compression schemes that enable algorithms to efficiently process the compressed data. In particular, we apply this general methodology to geometric / CAD datasets that are ubiquitous in areas such as graphics, VLSI, and geographic information systems. We develop example algorithms and corresponding compression schemes that address different types of datasets, including point sets and graphs. Our methods are more efficient than their classical counterparts, and they extend to both lossless and lossy compression scenarios. This motivates further investigation of how this approach can enable algorithms to process ever-increasing big data volumes.
Citations: 3
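The core idea above, answering queries directly on the compressed representation, can be shown with a small sketch. This is an illustration of the general approach, not the paper's actual algorithms: a bounding-box query over a delta-encoded 2-D point set that never materializes the full uncompressed point list.

```python
# Hypothetical sketch: a compression-aware bounding-box query that scans a
# delta-encoded point stream (a common lossless encoding for sorted point
# sets) without decompressing it into an explicit point array.

def bounding_box_from_deltas(first_point, deltas):
    """Track the bounding box while replaying (dx, dy) offsets.

    `first_point` is the initial (x, y); `deltas` is an iterable of
    (dx, dy) offsets relative to the previous point.
    """
    x, y = first_point
    min_x = max_x = x
    min_y = max_y = y
    for dx, dy in deltas:
        x += dx
        y += dy
        min_x, max_x = min(min_x, x), max(max_x, x)
        min_y, max_y = min(min_y, y), max(max_y, y)
    return (min_x, min_y), (max_x, max_y)
```

The query runs in one pass over the compressed stream and constant extra memory, which is the kind of saving the framework targets.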
Joint Weighted Sparse Representation Based Median Filter for Depth Video Coding
2015 Data Compression Conference Pub Date : 2015-04-07 DOI: 10.1109/DCC.2015.38
Jinhui Hu, R. Hu, Yu Chen, Liang Liao, Jing Xiao, Ruolin Ruan
Abstract: To promote the development of auto-stereoscopic displays, MPEG has proposed the multi-view plus depth (MVD) format, in which depth video is encoded and transmitted with color video to synthesize virtual views at the receiver side. Existing video coding standards such as H.264/AVC introduce coding artifacts along depth boundaries, which can seriously affect synthesized view quality and coding efficiency. Many in-loop depth filters, such as the joint depth filter, have been proposed to remove artifacts in compressed depth video, but their performance is unstable and degraded by outliers because of the weighted summation they employ. In this paper, based on the sparse prior of local regions of the depth map, we propose a joint weighted sparse representation based median filter that selects the most relevant neighboring depth pixel as the filter output. Experimental results show the proposed method is more effective at improving depth video coding efficiency.
Citations: 0
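The key distinction the abstract draws, selecting one actual neighboring sample rather than averaging, is the defining property of a weighted median. A minimal sketch of that selection step (the weights here are illustrative placeholders, not the paper's joint sparse-representation weights):

```python
# Hedged sketch: a weighted median returns the sample at which the
# cumulative weight first reaches half the total weight. Unlike a weighted
# average, the output is always one of the input samples, so a low-weight
# outlier cannot blur a depth edge by being mixed into the result.

def weighted_median(samples, weights):
    pairs = sorted(zip(samples, weights))
    half = sum(weights) / 2.0
    acc = 0.0
    for value, w in pairs:
        acc += w
        if acc >= half:
            return value
    return pairs[-1][0]
```

For example, with samples (10, 11, 200) and weights (1, 1, 0.1), the weighted median returns 11, whereas a weighted average would be pulled toward the outlier 200.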
Compression of Next Generation Sequencing Data
2015 Data Compression Conference Pub Date : 2015-04-07 DOI: 10.1109/DCC.2015.92
Ö. U. Nalbantoğlu, A. Riffle, K. Sayood
Abstract: Summary form only given. FASTQ is the de facto standard for data from next-generation sequencing platforms. The FASTQ format uses four lines per read: two lines for header information, one for the sequence itself, and one for the quality scores. The proposed compression scheme treats the lines of each four-line set differently. The highly repetitive headers are encoded using an LZ77 variant. The reads themselves are compressed using a modified LZ78 method with a backward-adaptive dictionary. The quality scores are encoded using a context-based arithmetic coding scheme. Performance results for the proposed method were obtained using data generated by a sequencing simulation of a random 50 kbp section of the Escherichia coli str. K-12 substr. DH10B chromosome at 35X and 100X coverage. The two best-performing methods for FASTQ compression, Fastqz and Fqzcomp, were used to compress the same data; these methods performed best in a competition to compress next-generation sequencing data. We also compare the results to two general-purpose compressors, bzip and LZMA. The results are shown in the table.
Citations: 0
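The first step the abstract describes, splitting each four-line FASTQ record into separate streams so that each stream can get its own model (an LZ77 variant for headers, a modified LZ78 for reads, context-based arithmetic coding for qualities), can be sketched directly. Only the splitting step is shown; the per-stream coders are out of scope here.

```python
# Sketch of FASTQ stream separation. Following the abstract's accounting,
# the '+' separator line (line 3 of each record) is grouped with the
# header stream ("two lines for header information").

def split_fastq(lines):
    headers, reads, qualities = [], [], []
    for i in range(0, len(lines), 4):
        headers.extend([lines[i], lines[i + 2]])  # '@...' header and '+' line
        reads.append(lines[i + 1])                # nucleotide sequence
        qualities.append(lines[i + 3])            # per-base quality scores
    return headers, reads, qualities
```

Each returned stream is then a far more homogeneous input for its dedicated compressor than the interleaved file would be.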
Adaptive Color-Space Transform for HEVC Screen Content Coding
2015 Data Compression Conference Pub Date : 2015-04-07 DOI: 10.1109/DCC.2015.33
Li Zhang, Jianle Chen, J. Solé, M. Karczewicz, Xiaoyu Xiu, Jizheng Xu
Abstract: This paper presents an in-loop adaptive color-space transform (ACT) for the HEVC Screen Content Coding extension. In the proposed method, the prediction residual is adaptively converted into a different color space to reduce cross-component redundancy. After the ACT, the signal is coded within the existing HEVC framework. To keep complexity as low as possible, fixed color-space transforms that are easily implemented with shift and add operations are used. The method achieves significant coding gains in the HEVC Screen Content Coding reference software with no increase in decoding runtime, and it has been adopted into the HEVC Screen Content Coding extension.
Citations: 37
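A representative example of the "shift and add" reversible transforms the abstract refers to is the lifting-based YCoCg-R transform (used here as an illustration; the exact transform adopted in the SCC extension may differ in detail):

```python
# YCoCg-R: a lossless RGB <-> YCoCg transform built entirely from adds,
# subtracts, and arithmetic right shifts, so it is cheap in hardware and
# exactly invertible in integer arithmetic.

def rgb_to_ycocg_r(r, g, b):
    co = r - b
    t = b + (co >> 1)   # Python's >> is an arithmetic (floor) shift
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    t = y - (cg >> 1)   # undo the lifting steps in reverse order
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b
```

Because every lifting step is exactly invertible, the round trip reconstructs the input bit-exactly, which is what makes such transforms usable in-loop on prediction residuals.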
Clustered Multi-dictionary Code Compression for Embedded Systems
2015 Data Compression Conference Pub Date : 2015-04-07 DOI: 10.1109/DCC.2015.6
J. Tu, Meisong Zheng, Zilong Wang, Lijian Li, Junye Wang
Abstract: A novel clustered multi-dictionary code compression method is proposed to reduce the memory required to store program code. Based on the occurrence counts of distinct codes, the code set is clustered into several clusters. Each cluster is compressed with a different dictionary, with a fixed codeword length per dictionary; shorter codewords are used for smaller dictionaries. Experimental results on the MiBench benchmark compiled for ARM and MIPS show that the compression efficiency of this method is superior to traditional multi-level dictionary-based code compression. Instruction-fetch latency is almost unchanged, the decode-logic overhead is small and acceptable, and the effective storage bandwidth is increased.
Citations: 0
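The clustering-plus-codeword-sizing idea can be sketched in a few lines. The two-cluster split and the `hot_size` threshold below are assumptions for illustration, not the paper's exact scheme:

```python
# Illustrative sketch: rank distinct codes by occurrence count, put the
# most frequent ones in a small "hot" dictionary and the rest in a larger
# "cold" one. Each dictionary uses a fixed codeword length of
# ceil(log2(size)) bits, so the smaller dictionary gets shorter codewords.
from collections import Counter
from math import ceil, log2

def build_dictionaries(code_stream, hot_size=4):
    freq = Counter(code_stream)
    ranked = [code for code, _ in freq.most_common()]
    hot, cold = ranked[:hot_size], ranked[hot_size:]

    def width(dictionary):
        return max(1, ceil(log2(len(dictionary)))) if dictionary else 0

    return {"hot": (hot, width(hot)), "cold": (cold, width(cold))}
```

A real scheme would also need a per-codeword flag (or escape code) telling the decoder which dictionary each codeword came from; that bookkeeping is omitted here.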
Compression Based on a Joint Task-Specific Information Metric
2015 Data Compression Conference Pub Date : 2015-04-07 DOI: 10.1109/DCC.2015.76
Lingling Pu, M. Marcellin, A. Bilgin, A. Ashok
Abstract: Compression is a key component in many imaging systems in order to accommodate limited resources such as power and bandwidth. Image compression is often done independently of the specific tasks the systems are designed for, such as target detection, classification, or diagnosis. Standard compression techniques are designed around quality metrics such as mean-squared error (MSE) or peak signal-to-noise ratio (PSNR). Recently, a metric based on task-specific information (TSI) was proposed and successfully incorporated into JPEG2000 encoding, and it has been shown that this TSI metric can optimize task performance. In this work, a joint metric is proposed to provide a seamless transition between the conventional quality metric MSE and the recently proposed TSI. We demonstrate the effectiveness and flexibility of the proposed joint TSI metric for target detection tasks, and we extend it to video tracking applications to demonstrate its robustness. Experimental results show that although the metric is not directly designed for the applied task, better tracking performance is still achieved with the joint metric than with the traditional MSE metric.
Citations: 6
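The abstract does not give the joint metric's formula. One natural reading of a "seamless transition" between MSE and TSI is a convex combination, so the following is purely an illustrative assumption, not the paper's definition:

```python
# Hypothetical sketch of a joint distortion metric: a single parameter
# `lam` interpolates between pure MSE (lam = 1.0) and a pure
# task-specific distortion term (lam = 0.0).

def joint_metric(mse, tsi_distortion, lam):
    """Convex combination of a pixel-fidelity term and a task term."""
    assert 0.0 <= lam <= 1.0
    return lam * mse + (1.0 - lam) * tsi_distortion
```

An encoder could then minimize this single scalar during rate allocation, trading pixel fidelity against task performance by tuning `lam`.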
On the Design of Optimal Sub-Pixel Motion Compensation Interpolation Filters for Video Compression
2015 Data Compression Conference Pub Date : 2015-04-07 DOI: 10.1109/DCC.2015.83
K. Minoo, D. Baylon
Abstract: In this paper, the design of optimal temporal prediction for video coding is addressed as a quantization design problem. In the proposed framework, a codebook consisting of a set of interpolation filters is optimized to achieve rate-distortion optimality. The optimization jointly affects two aspects of motion compensation: 1) the size of the codebook, i.e., the motion vector (MV) resolution, and 2) the filter coefficients for each sub-sample interpolation filter. Note that the filter coefficients dictate the behavior of the interpolation filter in terms of signal-noise shaping.
Citations: 1
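For context, a sketch of the fixed-filter baseline that such an optimized codebook would compete against: classical codecs assign one fixed filter per sub-pel position, e.g. the 6-tap half-pel luma filter (1, -5, 20, 20, -5, 1)/32 used in H.264/AVC. This shows the baseline only, not the paper's optimization.

```python
# Half-pel interpolation with H.264's 6-tap filter. Sample positions
# outside the array are clamped to the border (a common edge-handling
# convention; actual codecs pad reference frames).

def half_pel(samples, i):
    """Interpolate the half-sample position between samples[i] and samples[i+1]."""
    taps = (1, -5, 20, 20, -5, 1)
    n = len(samples)
    window = [samples[min(max(i + k - 2, 0), n - 1)] for k in range(6)]
    acc = sum(t * s for t, s in zip(taps, window))
    return (acc + 16) >> 5   # round, then divide by 32
```

On a flat signal the filter reproduces the constant value, and on a linear ramp it lands on the midpoint, which is the behavior one wants from an interpolator before any rate-distortion tuning of the coefficients.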
A Partial Hstry of Losy Compression
2015 Data Compression Conference Pub Date : 2015-04-07 DOI: 10.1109/DCC.2015.91
R. Gray
Abstract: Summary form only given. The title exemplifies the topic as it is easily recognized as compressed from possible English original versions. It also exemplifies some difficulties. A small sampling of readers all thought "Losy" was a corruption of "Lossy," which is consistent with the apparent loss of letters in "Hstry" and "Losy". But while "Hstry" is compressed, it is not really lossy since it can almost certainly be decoded into "History" (as my spell checker does). Moreover, "Losy" need not be "Lossy" - an equally good candidate in terms of minimizing Levenshtein distance is "Lousy" - so this talk could be a history of lousy compression, lossless or lossy. There are also problems in the uncompressed words. "Partial" has neither compression nor evident losses, but it has ambiguous meaning: it could equally well mean "incomplete" or "biased." So the title is not uniquely decodable, which equally favors "lossy" (since you cannot guarantee an accurate reconstruction) or "lousy" (since lossy coding of English seems a bad idea). This talk will embrace the ambiguity of the title.
Citations: 0
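The abstract's claim that "Lossy" and "Lousy" are equally good decodings of "Losy" under Levenshtein distance is easy to check with the standard dynamic-programming edit distance:

```python
# Levenshtein distance: minimum number of single-character insertions,
# deletions, and substitutions needed to turn string a into string b,
# computed row by row over the standard DP table.

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```

Both "Lossy" and "Lousy" are one insertion away from "Losy", confirming the ambiguity, while "Hstry" decodes to "History" at distance 2 (two insertions).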
Mobile Visual Search with Word-HOG Descriptors
2015 Data Compression Conference Pub Date : 2015-04-07 DOI: 10.1109/DCC.2015.81
Sam S. Tsai, Huizhong Chen, David M. Chen, B. Girod
Abstract: Visual text information is a descriptive part of many images that can be used to perform mobile visual search (MVS) with particularly small queries. In this paper, we propose a system that uses word patch descriptors for retrieving images containing visual text. A random sampling method is used to find duplicate word patches in the database and reduce the database size. The system achieves comparable retrieval performance to state-of-the-art image feature-based systems for images of book covers, and performs better than state-of-the-art text-based retrieval systems for images of book pages. Using visual text to provide distinctive features, our system achieves more than 10-to-1 query size reduction for images of book covers and more than 16-to-1 query size reduction for images of book pages.
Citations: 2
Exploiting Temporal Redundancy of Visual Structures for Video Compression
2015 Data Compression Conference Pub Date : 2015-04-07 DOI: 10.1109/DCC.2015.30
Georgios Georgiadis, Stefano Soatto
Abstract: Summary form only given. We present a video coding system that partitions the scene into "visual structures" and a residual "background" layer. The system exploits the temporal redundancy of visual structures to compress video sequences. We construct a dictionary of track-templates, which correspond to a representation of visual structures. We subsequently choose a subset of the dictionary's elements to encode video frames using a Markov Random Field (MRF) formulation that places the track-templates in "depth" layers. Our video coding system offers an improvement over H.265/H.264 and other methods in a rate-distortion comparison.
Citations: 1