Latest Publications: 2019 Data Compression Conference (DCC)

Deterministic Annealing Based Transform Domain Temporal Predictor Design for Adaptive Video Coding
2019 Data Compression Conference (DCC) Pub Date : 2019-03-01 DOI: 10.1109/DCC.2019.00027
B. Vishwanath, Tejaswi Nanjundaswamy, K. Rose
{"title":"Deterministic Annealing Based Transform Domain Temporal Predictor Design for Adaptive Video Coding","authors":"B. Vishwanath, Tejaswi Nanjundaswamy, K. Rose","doi":"10.1109/DCC.2019.00027","DOIUrl":"https://doi.org/10.1109/DCC.2019.00027","url":null,"abstract":"Current video coders employ motion compensated pixel-to-pixel prediction, which largely ignores significant spatial correlations and the fact that true temporal correlations vary with spatial frequency. Earlier work from our lab proposed to first spatially decorrelate the block of pixels by performing temporal prediction in the transform domain, and to effectively account for both spatial and temporal correlations. To adapt to variations in video signal statistics, the encoder switches between a set of appropriately designed prediction modes.This setting critically depends on efficient offline learning of transform domain temporal prediction modes. Significant challenges include: i) issues of instability and mismatched statistics inherent to closed loop design; and ii) severe non-convexity of the cost function trapping the system in poor local minima. Statistics mismatch is tackled by an appropriate paradigm for system design in a stable open loop fashion, but which asymptotically mimics closed loop operation. The non-convexity is handled by deterministic annealing, a powerful non-convex optimization tool whose probabilistic formulation allows for direct optimization of the cost function with respect to the discrete set of prediction modes, and whose annealing schedule avoids poor local minima. 
Experimental results validate the method's efficacy.","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114505162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 2
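The annealing idea in the abstract can be sketched generically. The following is a minimal deterministic-annealing design loop under simplifying assumptions of ours (scalar "modes", squared-error cost, a geometric cooling schedule); it is not the paper's transform-domain predictor design, only an illustration of how soft mode assignments harden as the temperature drops:

```python
import math

def soft_assignments(costs, T):
    """Gibbs distribution over modes for one training sample:
    p(mode) is proportional to exp(-cost / T).  As T -> 0 this
    hardens into a winner-take-all (argmin) assignment."""
    m = min(costs)
    w = [math.exp(-(c - m) / T) for c in costs]
    s = sum(w)
    return [x / s for x in w]

def anneal_modes(samples, modes, cost, T0=10.0, Tmin=1e-3, alpha=0.8, iters=20):
    """Deterministic-annealing design loop (sketch): at each temperature,
    the E-step computes soft assignments and the M-step re-fits each mode
    as the probability-weighted average of its samples (modes are scalars
    here for simplicity)."""
    T = T0
    while T > Tmin:
        for _ in range(iters):
            P = [soft_assignments([cost(x, m) for m in modes], T) for x in samples]
            for j in range(len(modes)):
                num = sum(P[i][j] * samples[i] for i in range(len(samples)))
                den = sum(P[i][j] for i in range(len(samples))) or 1e-12
                modes[j] = num / den
        T *= alpha  # geometric cooling schedule
    return modes
```

Starting from nearby initial modes, the high-temperature phase keeps assignments soft (helping escape poor local minima), and the modes separate toward the cluster centroids as the system cools.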
An Efficient Coding Method for Spike Camera Using Inter-Spike Intervals
2019 Data Compression Conference (DCC) Pub Date : 2019-03-01 DOI: 10.1109/DCC.2019.00080
Siwei Dong, Lin Zhu, Daoyuan Xu, Yonghong Tian, Tiejun Huang
{"title":"An Efficient Coding Method for Spike Camera Using Inter-Spike Intervals","authors":"Siwei Dong, Lin Zhu, Daoyuan Xu, Yonghong Tian, Tiejun Huang","doi":"10.1109/DCC.2019.00080","DOIUrl":"https://doi.org/10.1109/DCC.2019.00080","url":null,"abstract":"Recently, a novel bio-inspired spike camera has been proposed, which continuously accumulates luminance intensity and fires spikes once the dispatch threshold is reached. It has shown great advantages in capturing fast-moving scene in a frame-free manner with full texture reconstruction capabilities. However, it is difficult to transmit or store the large amount of spike data. By investigating the spatiotemporal distribution of the spikes, we propose an intensity-based measurement for spike train distance and design an efficient coding method to meet the challenge. First, the spike train is transformed into inter-spike intervals (ISIs), and ISIs are adaptively partitioned into multiple segments in temporal. Then, intra-and inter-pixel prediction are performed to find the best reference candidate. The prediction residuals are quantized to achieve lossy compression. Finally, the quantized residuals are fed into an adaptive context-based entropy coder. Overall, to achieve the best performance, each prediction mode will be tried and the one with minimum rate-distortion cost is chosen.","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121082869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 26
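The ISI transform at the heart of the method is simple to illustrate. A minimal sketch (function names are ours, not the authors'): per pixel, the spike train becomes a list of gaps between consecutive firings, which are small, slowly varying values far more amenable to prediction and entropy coding than raw timestamps, and the transform is exactly invertible:

```python
def spikes_to_isi(spike_times):
    """Transform an ordered spike train (firing timestamps) into
    inter-spike intervals: the first value is the first firing time,
    the rest are gaps between consecutive spikes."""
    return [spike_times[0]] + [b - a for a, b in zip(spike_times, spike_times[1:])]

def isi_to_spikes(isis):
    """Inverse transform: a cumulative sum recovers the spike times."""
    out, t = [], 0
    for d in isis:
        t += d
        out.append(t)
    return out
```

The coder then predicts each ISI from neighbours in the same pixel (intra-pixel) or adjacent pixels (inter-pixel) and entropy-codes the quantized residuals.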
Publisher's Information
2019 Data Compression Conference (DCC) Pub Date : 2019-03-01 DOI: 10.1109/dcc.2019.00130
{"title":"Publisher's Information","authors":"","doi":"10.1109/dcc.2019.00130","DOIUrl":"https://doi.org/10.1109/dcc.2019.00130","url":null,"abstract":"","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121871289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Accelerating Convolutional Neural Networks with Dynamic Channel Pruning
2019 Data Compression Conference (DCC) Pub Date : 2019-03-01 DOI: 10.1109/DCC.2019.00075
Chiliang Zhang, Tao Hu, Yingda Guan, Zuochang Ye
{"title":"Accelerating Convolutional Neural Networks with Dynamic Channel Pruning","authors":"Chiliang Zhang, Tao Hu, Yingda Guan, Zuochang Ye","doi":"10.1109/DCC.2019.00075","DOIUrl":"https://doi.org/10.1109/DCC.2019.00075","url":null,"abstract":"Network acceleration has become a hot topic, for the substantial challenge in deploying such networks in real-time applications or on resource-limited devices. A wide variety of pruning-based acceleration methods were proposed to expend the sparsity of parameters, thus omit computations involving those pruned parameters. However, these element-wise pruning methods can hardly be efficiently used for accelerating without special-customized speed-up algorithms. Due to this difficulty, recent work has turned to prune filters or channels instead, which directly reduce the number of matrix multiplications. While Channel Pruning method reforms the original CNNs to a kernel-wisely or channel-wisely pruned one, Runtime Neural Pruning (RNP) argues that models pruned with static pruning methods will lose the ability for some hard tasks since some potentially significant weights are lost during the pruning process. Dynamically pruning the channels is found to be a good solution. In this paper, we propose to use Channel Threshold-Weighting (T-Weighting) modules to choose and prune unimportant feature channels at inference phase. As the pruning is done dynamically, it is called Dynamic Channel Pruning (DCP). DCP consists of the original convolutional network and a number of \"Channel T-Weighting\" modules at certain layers. The \"Channel T-Weighting\" module assigns weights to corresponding channels, pruning those channels whose weights are zero. Those pruned channels make the CNN accelerated, and those remained channels multiplying with weights help feature expression enhanced. The reason for not considering fully-connected layers are two-fold: 1. convolution operations occupying the vast majority of all computation cost. 2. 
DCP is not designed only for classification, but for many tasks taking CNN as their backbone networks. In this work, we propose as a specific choice for h(·) the thresholded sigmoid function to offer sparsity to w_l, called thresholded sigmoid (T-sigmoid), h(x) = σ(x)· 1{x > T}, where σ(·) refers to sigmoid function. 1{x} is boolean indicator function, where output being 1 when input x is True, and vice versa. The T-sigmoid function is inspired by spike-and-slab models, which formulates distributions over hidden variables as the product of a binary spike variable and a real-valued code. The DCP is trained in a layer-by-layer manner. We first train the \"Channel T-Weighting\" module, and then set the threshold based on the given pruned ratio, and adjust the threshold in an iterative way at the end. The proposed DCP could reach 5× speed-up with only 4.77% drops on ILSVRC2012 dataset. Comparing the increasing error with baseline methods (Filter Pruning, Channel Pruning and RNP), DCP outperforms other methods consistently as the speed-up ratio increasing. The experiment show that DCP also consistently outperforms the baseline model whenever for Cifar10 and Cifar100. By comparing the full model and accelerated model (3×), we can see that DCP general","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126267093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 10
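The T-sigmoid h(x) = σ(x)·1{x > T} defined in the abstract can be written down directly. In the sketch below, the `prune_channels` helper and its saliency inputs are our illustrative additions (the paper computes channel weights with trained "Channel T-Weighting" modules, not from precomputed scores):

```python
import math

def t_sigmoid(x, T=0.5):
    """Thresholded sigmoid h(x) = sigmoid(x) * 1{x > T}: a channel weight
    in (0, 1) when x exceeds the threshold T, and exactly 0 otherwise,
    so zero-weight channels can be skipped (pruned) at inference."""
    return 1.0 / (1.0 + math.exp(-x)) if x > T else 0.0

def prune_channels(saliencies, T=0.5):
    """Channel T-Weighting sketch: map per-channel scores to weights and
    return (weights, indices of the channels kept)."""
    w = [t_sigmoid(s, T) for s in saliencies]
    kept = [i for i, wi in enumerate(w) if wi > 0]
    return w, kept
```

Raising T prunes more channels (faster inference); the surviving channels are rescaled by their sigmoid weights, which is the "feature expression enhanced" effect the abstract mentions.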
Hard-Decision Quantization Algorithm Based on Deep Learning in Intra Video Coding
2019 Data Compression Conference (DCC) Pub Date : 2019-03-01 DOI: 10.1109/DCC.2019.00119
Hongkui Wang, Shengju Yu, Y. Zhang, Zhuo Kuang, Li Yu
{"title":"Hard-Decision Quantization Algorithm Based on Deep Learning in Intra Video Coding","authors":"Hongkui Wang, Shengju Yu, Y. Zhang, Zhuo Kuang, Li Yu","doi":"10.1109/DCC.2019.00119","DOIUrl":"https://doi.org/10.1109/DCC.2019.00119","url":null,"abstract":"In video encoder, hard-decision quantization (HDQ) is well-suited for parallel processing, but suffers from non-negligible coding performance degradation compared with soft-decision quantization (SDQ). In this paper, by fully simulating the behavior of SDQ, a coefficient-adaptive offset model constructed by the deep learning approach is proposed to adjust the output of HDQ. Experiment results show that the proposed algorithm achieves promising RD performance and well-suited for hardware encoder implementation design.","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"165 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134322487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 1
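For context, a minimal sketch of conventional hard-decision quantization with a fixed rounding offset, the quantity the paper replaces with a learned, coefficient-adaptive offset. The 1/3 offset for intra coding is the common convention in H.264/HEVC reference encoders; function names are ours, and the learned offset model itself is not reproduced here:

```python
import math

def hdq(coef, qstep, offset=1 / 3):
    """Hard-decision scalar quantization of one transform coefficient:
    level = floor(|c| / qstep + offset), with the sign restored.
    Each coefficient is quantized independently, which is what makes
    HDQ parallel-friendly (SDQ, by contrast, searches jointly over
    levels to minimize rate-distortion cost)."""
    level = math.floor(abs(coef) / qstep + offset)
    return level if coef >= 0 else -level

def dequant(level, qstep):
    """Inverse quantization: reconstruct the coefficient value."""
    return level * qstep
```

The paper's contribution amounts to predicting a per-coefficient `offset` that mimics the levels SDQ would have chosen, while keeping the one-pass, parallel HDQ structure.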
A Measurement Coding System for Block-Based Compressive Sensing Images by Using Pixel-Domain Features
2019 Data Compression Conference (DCC) Pub Date : 2019-03-01 DOI: 10.1109/DCC.2019.00111
Jirayu Peetakul, Jinjia Zhou, K. Wada
{"title":"A Measurement Coding System for Block-Based Compressive Sensing Images by Using Pixel-Domain Features","authors":"Jirayu Peetakul, Jinjia Zhou, K. Wada","doi":"10.1109/DCC.2019.00111","DOIUrl":"https://doi.org/10.1109/DCC.2019.00111","url":null,"abstract":"Compressive sensing (CS) is data acquiring and innovative mathematical approach that accelerate and efficient sampling from large into small volumes of data. Moreover, it could be dramatically reduced amounts of sensor, power consumption, storage size, and bandwidth which results in lower hardware costs [1]. In wireless cameras network for video surveillance, the large amount of data is produced. However, there is still a lot of redundant data in measurement domain. To solve this problem, coding techniques such as block-based CS (BCS), intra-prediction and quantization is applied to avoid higher rate-distortion than other CS frameworks. Therefore, new imaging architecture has been proposed to be sensed, removed redundant information, and compressed simultaneously, thus leading to the faster image acquisition system.","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125082775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 4
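A minimal sketch of the block-based measurement step y = Φx, assuming a seeded random Gaussian sensing matrix so that encoder and decoder can regenerate the same Φ. The sensing matrix and the downstream prediction/quantization stages of the paper's system are not specified in the abstract, so this shows only the generic BCS acquisition:

```python
import random

def bcs_measure(block, ratio=0.5, seed=0):
    """Block-based compressive sensing measurement: a flattened N-pixel
    block x is projected to M = ratio * N measurements y = Phi @ x with
    a random Gaussian Phi (seeded for reproducibility).  The measurement
    rate `ratio` trades reconstruction quality against data volume."""
    n = len(block)
    m = max(1, int(ratio * n))
    rng = random.Random(seed)
    phi = [[rng.gauss(0, 1 / m) for _ in range(n)] for _ in range(m)]
    return [sum(p * x for p, x in zip(row, block)) for row in phi]
```

Because the measurement operator is linear, measurements of similar neighbouring blocks are themselves similar, which is the redundancy that measurement-domain prediction and quantization then exploit.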
Online Machine Learning for Fast Coding Unit Decisions in HEVC
2019 Data Compression Conference (DCC) Pub Date : 2019-03-01 DOI: 10.1109/DCC.2019.00076
G. Corrêa, Pargles Dall'Oglio, D. Palomino, L. Agostini
{"title":"Online Machine Learning for Fast Coding Unit Decisions in HEVC","authors":"G. Corrêa, Pargles Dall'Oglio, D. Palomino, L. Agostini","doi":"10.1109/DCC.2019.00076","DOIUrl":"https://doi.org/10.1109/DCC.2019.00076","url":null,"abstract":"The High Efficiency Video Coding standard introduced a flexible frame partitioning process that increased significantly compression rates in comparison to previous standards at the cost of a high computational cost. To accelerate frame partitioning decisions, this paper proposes a method that replaces the usual Rate-Distortion Optimization employed in Coding Unit size decision by a set of simpler decision tree models, which are built during encoding time by the C5 machine learning algorithm. The algorithm and the set of attributes employed in the model training process were chosen based on an extensive analysis that compared several options in terms of decision accuracy and training complexity. Experimental results show that the proposed method is capable of building accurate models for each video sequence, decreasing the HEVC encoding complexity in 34.4% on average with a compression efficiency loss of only 0.2% in comparison to the original HEVC reference encoder.","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128570282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 2
Combating Packet Loss in Image Coding Using Oversampling, Irregular Interpolation and Noise Shaping
2019 Data Compression Conference (DCC) Pub Date : 2019-03-01 DOI: 10.1109/DCC.2019.00043
Mor Goren, R. Zamir
{"title":"Combating Packet Loss in Image Coding Using Oversampling, Irregular Interpolation and Noise Shaping","authors":"Mor Goren, R. Zamir","doi":"10.1109/DCC.2019.00043","DOIUrl":"https://doi.org/10.1109/DCC.2019.00043","url":null,"abstract":"Diversity \"multiple description\" (MD) source coding promises graceful degradation in the presence of an unknown number of erasures in the channel. A simple scheme for the case of two descriptions consists of oversampling the source by a factor of two and delta-sigma quantization. This approach was applied successfully to JPEG-based image coding over a lossy packet network, where the interpolation and splitting into two descriptions is done in the discrete cosine transform (DCT) domain. The extension to a larger number of descriptions, however, suffers from noise amplification whenever the received descriptions form a nonuniform sampling pattern. In this work, we examine inter and intra-block interpolation methods and show how noise amplification can be reduced by optimizing the interpolation filter. Specifically, for a given total coding rate, we demonstrate that an irregular interpolation filter minimizes the average distortion over all (K out of N) patterns of received packets, (\"side receivers\"). We provide experimental results comparing low-pass (LP) and irregular interpolation filters for the side receivers and the all-N central receiver. 
We further examine the effect of noise shaping on the trade-off between the central and side distortions.","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128353796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 1
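The two-description case can be sketched as follows: split the 2x-oversampled signal into even and odd samples as the two packets, and let a side receiver that lost one packet interpolate the missing samples from received neighbours. Delta-sigma quantization, the DCT-domain split, and the optimized irregular filters are all omitted; this only shows why one description suffices for a degraded reconstruction:

```python
def make_descriptions(samples):
    """Two-description split of a 2x-oversampled signal: even-indexed
    and odd-indexed samples form the two packets."""
    return samples[0::2], samples[1::2]

def side_decode_even(even, n):
    """Side receiver that got only the even description: place the
    received samples and linearly interpolate the missing odd ones
    (a crude stand-in for the paper's optimized interpolation filters)."""
    out = [0.0] * n
    for i, v in enumerate(even):
        out[2 * i] = v
    for i in range(1, n, 2):
        left = out[i - 1]
        right = out[i + 1] if i + 1 < n else left
        out[i] = (left + right) / 2
    return out
```

With more than two descriptions, the received subset can form a badly nonuniform sampling pattern, and naive interpolation amplifies quantization noise; choosing the interpolation filter to minimize the average distortion over all K-out-of-N patterns is the paper's contribution.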
Practical Indexing of Repetitive Collections Using Relative Lempel-Ziv
2019 Data Compression Conference (DCC) Pub Date : 2019-03-01 DOI: 10.1109/DCC.2019.00028
G. Navarro, Victor Sepulveda
{"title":"Practical Indexing of Repetitive Collections Using Relative Lempel-Ziv","authors":"G. Navarro, Victor Sepulveda","doi":"10.1109/DCC.2019.00028","DOIUrl":"https://doi.org/10.1109/DCC.2019.00028","url":null,"abstract":"We introduce a simple and implementable compressed index for highly repetitive sequence collections based on Relative Lempel-Ziv (RLZ). On a collection of total size n compressed into z phrases from a reference string R[1..r] over alphabet [1..σ] and with hth order empirical entropy H_h(R), our index uses rH_h(R)+o(r logσ)+O(r+z log n) bits, and finds the occ occurrences of a pattern P[1..m] in time O((m+occ) log n). This is competitive with the only existing index based on RLZ, yet it is much simpler and easier to implement. On a 1GB collection of 80 yeast genomes, a variant of our index achieves the least space among competing structures (slightly over 0.1 bits per base) while outperforming or matching them in time (1–10 microseconds per occurrence found). Our largest variant (below 0.3 bits per base) offers the best search time (1–3 microseconds per occurrence) among all structures using space below 1 bit per base.","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130783727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 6
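A greedy RLZ parse is easy to sketch. This naive O(n·r) matcher illustrates only the phrase decomposition, encoding each text as (position, length) pointers into the reference plus literals; the paper's index adds the compressed data structures that make such a parse searchable:

```python
def rlz_parse(s, ref):
    """Greedy Relative Lempel-Ziv parse: encode s as phrases (pos, length)
    pointing into the reference, falling back to a literal (char, 0)
    when no match starts at the current position."""
    phrases, i = [], 0
    while i < len(s):
        best_pos, best_len = -1, 0
        for j in range(len(ref)):
            l = 0
            while i + l < len(s) and j + l < len(ref) and s[i + l] == ref[j + l]:
                l += 1
            if l > best_len:
                best_pos, best_len = j, l
        if best_len == 0:
            phrases.append((s[i], 0))  # literal character
            i += 1
        else:
            phrases.append((best_pos, best_len))
            i += best_len
    return phrases

def rlz_decode(phrases, ref):
    """Reverse the parse by copying substrings back out of the reference."""
    out = []
    for a, l in phrases:
        out.append(ref[a:a + l] if l else a)
    return "".join(out)
```

On a highly repetitive collection (e.g. many genomes of the same species), each sequence compresses to very few phrases z relative to the reference, which is why the z log n term in the index's space bound stays small.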
An End-to-End Encrypted Neural Network for Gradient Updates Transmission in Federated Learning
2019 Data Compression Conference (DCC) Pub Date : 2019-03-01 DOI: 10.1109/DCC.2019.00101
Hongyu Li, Tianqi Han
{"title":"An End-to-End Encrypted Neural Network for Gradient Updates Transmission in Federated Learning","authors":"Hongyu Li, Tianqi Han","doi":"10.1109/DCC.2019.00101","DOIUrl":"https://doi.org/10.1109/DCC.2019.00101","url":null,"abstract":"Federated learning is a distributed learning method to train a shared model by aggregating the locally-computed gradient updates. In federated learning, bandwidth and privacy are two main concerns of gradient updates transmission. This paper proposes an end-to-end encrypted neural network for gradient updates transmission. This network first encodes the input gradient updates to a lower-dimension space in each client, which significantly mitigates the pressure of data communication in federated learning. The encoded gradient updates are directly recovered as a whole, i.e. the aggregated gradient updates of the trained model, in the decoding layers of the network on the server. In this way, gradient updates encrypted in each client are not only prevented from interception during communication, but also unknown to the server. Based on the encrypted neural network, a novel federated learning framework is designed in real applications. Experimental results show that the proposed network can effectively achieve two goals, privacy protection and data compression, under a little sacrifice of the model accuracy in federated learning.","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131898101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 29
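The structural property such a scheme relies on, that a linear encoding commutes with aggregation, can be sketched with a seeded random projection standing in for the paper's learned encoding layers. The decoder (trained jointly in the paper to recover the aggregate) is omitted; this only shows that the server can combine compressed updates without ever decoding an individual client's gradient:

```python
import random

def make_encoder(in_dim, out_dim, seed=7):
    """Shared linear encoder (sketch): every client projects its gradient
    vector to a lower dimension with the same seeded random matrix."""
    rng = random.Random(seed)
    E = [[rng.gauss(0, 1) for _ in range(in_dim)] for _ in range(out_dim)]

    def encode(g):
        return [sum(e * x for e, x in zip(row, g)) for row in E]

    return encode

def aggregate(encoded_updates):
    """Server-side aggregation in the encoded space: since encoding is
    linear, sum(encode(g_i)) == encode(sum(g_i)), so summing the
    compressed updates is equivalent to compressing the summed update."""
    return [sum(vals) for vals in zip(*encoded_updates)]
```

With out_dim < in_dim the transmission shrinks accordingly; the server sees only low-dimensional encodings and their sum, never a raw per-client gradient.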