2019 Data Compression Conference (DCC): Latest Publications

ResGAN: A Low-Level Image Processing Network to Restore Original Quality of JPEG Compressed Images
2019 Data Compression Conference (DCC) Pub Date : 2019-03-26 DOI: 10.1109/DCC.2019.00128
Chunbiao Zhu, Yuanqi Chen, Yiwei Zhang, Shan Liu, Ge Li
{"title":"ResGAN: A Low-Level Image Processing Network to Restore Original Quality of JPEG Compressed Images","authors":"Chunbiao Zhu, Yuanqi Chen, Yiwei Zhang, Shan Liu, Ge Li","doi":"10.1109/DCC.2019.00128","DOIUrl":"https://doi.org/10.1109/DCC.2019.00128","url":null,"abstract":"Low-level image processing is mainly concerned with extracting descriptions (that are usually represented as images themselves) from images. With the rapid development of neural networks, many deep learning-based low-level image processing tasks have shown outstanding performance. In this paper, we describe a unified deep learning based approach for low-level image processing, in particular, image denoising, image deblurring, and compressed image restoration. The proposed method is composed of deep convolutional neural and conditional generative adversarial networks. For the discriminator network, we present a new network architecture with bi-skip connections to address hard training and details losing issues. In the generative network, a multi-objective optimization is derived to solve the problem of common conditions being non-identical. Through extensive experiments on three low-level image processing tasks on both qualitative and quantitative criteria, we demonstrate that our proposed method performs favorably against all current state-of-the-art approaches.","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126270215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
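The paper's bi-skip discriminator and multi-objective generator loss are not spelled out in the abstract, so the sketch below only illustrates the generic conditional-GAN restoration setup it builds on: a generator maps the JPEG-compressed image toward the original, and a discriminator judges (compressed, restored) pairs. All names (generator, discriminator, the loss weight) are placeholders, not the authors' code.

```python
# Minimal sketch of one conditional-GAN training step for JPEG restoration.
# `generator` and `discriminator` are assumed to be torch.nn.Module instances
# defined elsewhere; the pixel-loss weight is an illustrative value.
import torch
import torch.nn.functional as F

def gan_restoration_step(generator, discriminator, g_opt, d_opt,
                         compressed, original, pixel_weight=100.0):
    # Discriminator update: real (compressed, original) pairs vs. generated pairs.
    with torch.no_grad():
        fake = generator(compressed)
    d_real = discriminator(torch.cat([compressed, original], dim=1))
    d_fake = discriminator(torch.cat([compressed, fake], dim=1))
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: adversarial term plus a pixel-wise fidelity term.
    fake = generator(compressed)
    d_fake = discriminator(torch.cat([compressed, fake], dim=1))
    g_loss = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
              + pixel_weight * F.l1_loss(fake, original))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```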
Fast CU Size Decision Based on AQ-CNN for Depth Intra Coding in 3D-HEVC
2019 Data Compression Conference (DCC) Pub Date : 2019-03-26 DOI: 10.1109/DCC.2019.00073
Yamei Chen, Li Yu, Tiansong Li, Hongkui Wang, Shengwei Wang
{"title":"Fast CU Size Decision Based on AQ-CNN for Depth Intra Coding in 3D-HEVC","authors":"Yamei Chen, Li Yu, Tiansong Li, Hongkui Wang, Shengwei Wang","doi":"10.1109/DCC.2019.00073","DOIUrl":"https://doi.org/10.1109/DCC.2019.00073","url":null,"abstract":"The complexity of 3D-HEVC is fairly high due to quad tree structure and traversal searching in depth intra coding. In order to reduce complexity caused by coding unit (CU) size decision in rate distortion optimization (RDO) process, a fast algorithm based on adaptive QP convolutional neural network (AQ-CNN) structure is proposed in this paper. For each size of CU, the proposed structure automatically extracts deep feature information to terminate CU partition early. Specially, the AQ-CNN structure is suitable for different QPs because the QP has a great influence on CU partition and is connected into the CNN structure appropriately. Benefiting from the accurate prediction of CU partition label, the proposed algorithm reduces coding complexity sharply. Experimental results show that the proposed algorithm reduces the depth coding time by 69.4% with negligible BD-rate increase, and outperforms other recent algorithms in 3D-HEVC.","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128762289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
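The abstract states that the QP is fed into the CNN because it strongly influences the partition decision, but it does not give the network layout. The sketch below is an assumption showing only the general pattern: convolutional features from a CU are concatenated with the normalized QP before a binary early-termination prediction.

```python
# Hypothetical early-termination classifier in the spirit of the abstract
# (not the paper's AQ-CNN): CU samples in, QP injected as a scalar feature,
# one logit out deciding whether to stop splitting the CU.
import torch
import torch.nn as nn

class EarlyTerminationCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # +1 input for the QP, concatenated to the flattened conv features
        self.classifier = nn.Sequential(
            nn.Linear(32 * 4 * 4 + 1, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit: 1 = terminate partition, 0 = split further
        )

    def forward(self, cu_block, qp):
        x = self.features(cu_block).flatten(1)
        qp = qp.view(-1, 1).float() / 51.0  # normalize the HEVC QP range to [0, 1]
        return self.classifier(torch.cat([x, qp], dim=1))
```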
A New Technique for Lossless Compression of Color Images Based on Hierarchical Prediction, Inversion and Context Adaptive Coding
2019 Data Compression Conference (DCC) Pub Date : 2019-03-26 DOI: 10.1109/DCC.2019.00096
B. Koc, Z. Arnavut, D. Sarkar, H. Kocak
{"title":"A New Technique for Lossless Compression of Color Images Based on Hierarchical Prediction, Inversion and Context Adaptive Coding","authors":"B. Koc, Z. Arnavut, D. Sarkar, H. Kocak","doi":"10.1109/DCC.2019.00096","DOIUrl":"https://doi.org/10.1109/DCC.2019.00096","url":null,"abstract":"This work introduces a new technique for lossless compression of color images. The technique is composed of first transforming an RGB image into luminance and chrominance domain (Y CuCv). Then, the luminance channel Y is compressed with a context-based, adaptive, lossless image coding technique (CALIC). After processing the chrominance channels with a hierarchical prediction technique that was introduced by Kim and Cho, Burrows-Wheeler Inversion Coder (BWIC) or JPEG 2000 is used to compress of the chrominance channels Cu and Cv. It is demonstrated that, on a wide variety of images, particularly on medical images, the technique achieves substantial compression gains over other well-known compression schemes such as CALIC, JPEG 2000, LOCO-I, BPG(HEVC), and the previously proposed hierarchical prediction and context adaptive coding technique LCIC.","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114757873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
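The Y CuCv transform of Kim and Cho used in the paper is not reproduced here; as a stand-in, the sketch below uses the JPEG 2000 reversible color transform to show the kind of integer, exactly invertible RGB-to-luma/chroma step such a lossless pipeline starts from.

```python
# Minimal sketch of a lossless integer color transform. This is the JPEG 2000
# reversible color transform (RCT), used here only as a stand-in; the paper's
# Y CuCv transform differs in its details.
import numpy as np

def rct_forward(rgb):
    r, g, b = (rgb[..., i].astype(np.int32) for i in range(3))
    y = (r + 2 * g + b) >> 2   # integer luma (operands are non-negative)
    cu = b - g                 # chroma differences
    cv = r - g
    return y, cu, cv

def rct_inverse(y, cu, cv):
    g = y - (cu + cv) // 4     # floor division matches the forward rounding
    r = cv + g
    b = cu + g
    return np.stack([r, g, b], axis=-1).astype(np.uint8)

img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
assert np.array_equal(img, rct_inverse(*rct_forward(img)))  # exactly reversible
```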
Adaptive Quantization Parameter Selection Leveraging the Inter-Frame Distortion Propagation for HEVC Video Coding
2019 Data Compression Conference (DCC) Pub Date : 2019-03-26 DOI: 10.1109/DCC.2019.00100
Dong Li, Haibing Yin, Xiaofeng Huang, Hang Li
{"title":"Adaptive Quantization Parameter Selection Leveraging the Inter-Frame Distortion Propagation for HEVC Video Coding","authors":"Dong Li, Haibing Yin, Xiaofeng Huang, Hang Li","doi":"10.1109/DCC.2019.00100","DOIUrl":"https://doi.org/10.1109/DCC.2019.00100","url":null,"abstract":"In video coding, inter-frame motion prediction eliminates temporal correlation greatly however bring about strong dependency characterized by inter-frame distortion propagation, which makes currently independent rate-distortion optimization (RDO) non-optimal any more. This paper proposes adaptive quantization parameter (QP) selection algorithm for global RDO by modeling the function between change of distortion propagation (ΔD) and QP change (ΔQP) as well as change of bitrate (ΔR) and ΔQP. Experimental results show that the proposed algorithm achieves promising BD-BR performance.","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131330488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
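The abstract only says that ΔD and ΔR are modeled as functions of ΔQP; the concrete models are not given. The sketch below is therefore a placeholder that shows the selection loop itself: evaluate a modeled rate-distortion cost for each candidate QP offset and keep the cheapest one. The exponential forms and their constants are assumptions, not the paper's models.

```python
# Pick a per-frame QP offset by minimizing a modeled rate-distortion cost
# J = D(dQP) + lambda * R(dQP). The model forms below are placeholders only.
import math

def select_delta_qp(base_distortion, base_rate, lam,
                    propagation_weight=0.5, candidates=range(-3, 4)):
    best_dqp, best_cost = 0, float("inf")
    for dqp in candidates:
        # Placeholder models: distortion (including a propagated share) grows,
        # and rate shrinks, roughly exponentially in the QP offset.
        d = base_distortion * (1 + propagation_weight) * math.pow(2.0, dqp / 3.0)
        r = base_rate * math.pow(2.0, -dqp / 6.0)
        cost = d + lam * r
        if cost < best_cost:
            best_dqp, best_cost = dqp, cost
    return best_dqp
```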
A Multi-Pass Coding Mode Search Framework For AV1 Encoder Optimization
2019 Data Compression Conference (DCC) Pub Date : 2019-03-26 DOI: 10.1109/DCC.2019.00054
Ching-Han Chiang, Jingning Han, Yaowu Xu
{"title":"A Multi-Pass Coding Mode Search Framework For AV1 Encoder Optimization","authors":"Ching-Han Chiang, Jingning Han, Yaowu Xu","doi":"10.1109/DCC.2019.00054","DOIUrl":"https://doi.org/10.1109/DCC.2019.00054","url":null,"abstract":"The AV1 codec recently released by the Alliance of Open Media provides nearly 30% BDrate reduction over its predecessor VP9. It substantially extends the available coding block sizes and supports a wide range of prediction modes. There are also a large variety of transform kernel types and sizes. The combination provides an extremely wide range of flexible coding options. To translate such flexibility into compression efficiency, the encoder needs to conduct an extensive search over the space of coding modes. Optimization of the encoder complexity and compression efficiency trade-off is critical to productionizing AV1. Many research efforts have been devoted to devising feature space based pruning methods ranging from decision rules based on some simple observations to more complex neural network models. A multi-pass coding mode search framework is proposed in this work to provide a structural approach to reduce the search volume. It decomposes the original high dimensional space search into cascaded stages of lower dimensional space searches. To retain a near optimal search result, the scheme departs from conventional dimension reduction approach in which one retains a single winner at each stage, and uses that winner for the next stage (dimension). Instead, this framework retains a subset of the states that are the most likely winners at each stage, which are then fed into the next stage to find the next subset of winners. The subset size at each stage is determined by the likelihood that the optimal route will be captured in the current stage. Changing this likelihood parameter tunes the encoder for speed and compression performance trade-off. This framework can integrate with most existing feature based methods at its various stages. The framework provides 60% encoding time reduction at the expense of 0.6% compression loss in libaom AV1 encoder.","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117319886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
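The abstract describes the framework abstractly: decompose the high-dimensional mode search into cascaded stages and keep a subset of likely winners at each stage rather than a single one. A minimal, generic sketch of that control flow follows; the stage functions, costs, and subset sizes are all placeholders.

```python
# Cascaded mode search: each stage refines the surviving candidates and the
# top-k survivors (by rate-distortion cost) move on to the next stage.
def multi_pass_mode_search(stages, initial_candidates, keep_per_stage):
    """stages: list of functions mapping a candidate to a list of (candidate, cost)
    refinements; keep_per_stage: how many survivors each stage retains."""
    survivors = [(c, 0.0) for c in initial_candidates]
    for stage, k in zip(stages, keep_per_stage):
        expanded = []
        for cand, _ in survivors:
            expanded.extend(stage(cand))          # evaluate refinements of this survivor
        expanded.sort(key=lambda pair: pair[1])   # rank by rate-distortion cost
        survivors = expanded[:k]                  # keep the k most likely winners
    return survivors[0][0]                        # best full coding mode found
```

Setting every entry of keep_per_stage to 1 recovers the conventional single-winner cascade that the abstract contrasts against.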
An Overview of the OMAF Standard for 360° Video
2019 Data Compression Conference (DCC) Pub Date : 2019-03-26 DOI: 10.1109/DCC.2019.00050
M. Hannuksela, Ye-Kui Wang, Ari Hourunranta
{"title":"An Overview of the OMAF Standard for 360° Video","authors":"M. Hannuksela, Ye-Kui Wang, Ari Hourunranta","doi":"10.1109/DCC.2019.00050","DOIUrl":"https://doi.org/10.1109/DCC.2019.00050","url":null,"abstract":"Omnidirectional MediA Format (OMAF) is arguably the first virtual reality (VR) system standard, recently developed by the Moving Picture Experts Group (MPEG). OMAF defines a media format that enables omnidirectional media applications, focusing on 360° video, images, and audio, as well as the associated timed text, supporting three degrees of freedom (3DOF). This paper gives an overview of the first edition of the OMAF standard.","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123789470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 21
Rate Control Algorithm in HEVC Based on Scene-Change Detection
2019 Data Compression Conference (DCC) Pub Date : 2019-03-26 DOI: 10.1109/DCC.2019.00112
Jia Qin, H. Bai, Yao Zhao
{"title":"Rate Control Algorithm in HEVC Based on Scene-Change Detection","authors":"Jia Qin, H. Bai, Yao Zhao","doi":"10.1109/DCC.2019.00112","DOIUrl":"https://doi.org/10.1109/DCC.2019.00112","url":null,"abstract":"In HEVC, bit-allocation model is based on the hierarchical control, which can divide video sequence into three levels: Group of Picture (GOP), frame and Coding Tree Unit (CTU). However, the fixed size of GOP fails to consider the influence of scene change in the video coding process, which may decrease the compression efficiency and reconstructed quality. In this paper, the main idea of the proposed algorithm is to detect the scene change efficiently, and then apply it in the rate control algorithm of HEVC to decrease the BD-rate and save the coding time.","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121817755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
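The abstract does not state how scene changes are detected, so the sketch below uses a common luma-histogram-difference heuristic purely as a stand-in for the trigger that would reset the GOP-level bit allocation.

```python
# Placeholder scene-change detector: compare normalized luma histograms of
# consecutive frames and flag a change when their L1 distance is large.
import numpy as np

def is_scene_change(prev_luma, curr_luma, threshold=0.4):
    h_prev, _ = np.histogram(prev_luma, bins=64, range=(0, 256))
    h_curr, _ = np.histogram(curr_luma, bins=64, range=(0, 256))
    h_prev = h_prev / h_prev.sum()
    h_curr = h_curr / h_curr.sum()
    # L1 distance between normalized histograms lies in [0, 2]
    return np.abs(h_prev - h_curr).sum() > threshold
```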
Dynamic Lists for Efficient Coding of Intra Prediction Modes in the Future Video Coding Standard
2019 Data Compression Conference (DCC) Pub Date : 2019-03-26 DOI: 10.1109/DCC.2019.00113
Kevin Reuzé, W. Hamidouche, P. Philippe, O. Déforges
{"title":"Dynamic Lists for Efficient Coding of Intra Prediction Modes in the Future Video Coding Standard","authors":"Kevin Reuzé, W. Hamidouche, P. Philippe, O. Déforges","doi":"10.1109/DCC.2019.00113","DOIUrl":"https://doi.org/10.1109/DCC.2019.00113","url":null,"abstract":"The next generation MPEG video coding standard is under development by the Joint Video Coding Experts Team (JVET). This new standard, called Versatile Video Coding (VVC), is expected by the end of 2020 and will offer better coding efficiency than its predecessor High Efficiency Video Coding (HEVC) standard. This coding gain is enabled by new coding tools such as more flexible block partitioning, more accurate Intra/Inter predictions, multiple transforms and adaptive in-loop filtering. In this paper we focus on the coding of the Intra Prediction Modes (IPM) that have been increased from 35 modes in HEVC to 67 modes in VVC. We propose a solution based on genetic algorithms to build an ordered list for the coding of IPM in the Joint Exploration Model (JEM) codec. We first give the theoretical upper bound performance in terms of required bits per IPM to encode the IPM using the available contextual information. The new ordering of the labels associated with more efficient codes is then proposed to efficiently leverage contextual informations available in the encoder and construct the Most Probable Modes (MPM) list. The proposed coding scheme enables to increase the BD-BR performance in average by 0.09% for the same level of complexity compared to the JEM.","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127460851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
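The genetic-algorithm training that produces the ordered list is offline and not described in detail in the abstract. The sketch below only illustrates how such a precomputed ordering could be combined with the neighbours' modes to build a most-probable-first list at coding time; it is an assumption, not the paper's construction.

```python
# Hypothetical ordered-list construction: contextual (neighbour) modes first,
# then a precomputed default ordering (in the paper, learned offline with a
# genetic algorithm). Modes near the front of the list get shorter codes.
def build_mode_list(left_mode, above_mode, default_order, num_modes=67):
    ordered = []
    for m in (left_mode, above_mode):
        if m is not None and m not in ordered:
            ordered.append(m)              # contextual modes first
    for m in default_order:                # then the learned default order
        if m not in ordered:
            ordered.append(m)
    return ordered[:num_modes]

# Example: shorter codewords would be assigned to the earliest list positions.
mode_list = build_mode_list(left_mode=26, above_mode=10, default_order=range(67))
```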
Fast Early Termination of CU Partition and Mode Selection Algorithm for Virtual Reality Video in HEVC
2019 Data Compression Conference (DCC) Pub Date : 2019-03-26 DOI: 10.1109/DCC.2019.00088
Xiaohan Guan, Xiaosha Dong, Mengmeng Zhang, Zhi Liu
{"title":"Fast Early Termination of CU Partition and Mode Selection Algorithm for Virtual Reality Video in HEVC","authors":"Xiaohan Guan, Xiaosha Dong, Mengmeng Zhang, Zhi Liu","doi":"10.1109/DCC.2019.00088","DOIUrl":"https://doi.org/10.1109/DCC.2019.00088","url":null,"abstract":"Virtual reality technology has become increasingly popular in recent years. It has more and more attempts and applications in online teaching, traffic safety, video and so on. Virtual reality video is popular because of its immersive feel. 360-degree video is one of the virtual reality videos. This type of video has a very high resolution, which makes its encoding time considerably long. To reduce the encoding complexity of virtual reality 360 video, this study proposed a fast early termination of CU partition and mode selection algorithm in HEVC. The proposed algorithm determines whether to terminate CU early or directly further divide in the CU partition process. On the other hand, the proposed algorithm saves time by reducing the number of candidate modes during the mode selection process. Experimental results show that the proposed algorithm can reduce the coding time by 54% and the BD-rate loss is only 1.4%.","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127889157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
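The abstract gives the two-part structure (early CU termination plus a reduced candidate-mode set) but not the decision criteria. The sketch below shows only the control flow, with a placeholder smoothness measure standing in for whatever features the paper actually uses.

```python
# Illustrative three-way CU decision: terminate early, split directly, or run
# the full RDO check. The variance thresholds are placeholders, not the paper's.
import numpy as np

def cu_partition_decision(cu_samples, low_thresh=25.0, high_thresh=400.0):
    variance = float(np.var(cu_samples))
    if variance < low_thresh:
        return "terminate"   # smooth CU: stop splitting early
    if variance > high_thresh:
        return "split"       # highly textured CU: split without full RDO
    return "full_rdo"        # ambiguous: run the normal mode search
```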
Dataflow-Based Joint Quantization for Deep Neural Networks
2019 Data Compression Conference (DCC) Pub Date : 2019-03-26 DOI: 10.1109/DCC.2019.00086
Xue Geng, Jie Fu, Bin Zhao, Jie Lin, M. Aly, C. Pal, V. Chandrasekhar
{"title":"Dataflow-Based Joint Quantization for Deep Neural Networks","authors":"Xue Geng, Jie Fu, Bin Zhao, Jie Lin, M. Aly, C. Pal, V. Chandrasekhar","doi":"10.1109/DCC.2019.00086","DOIUrl":"https://doi.org/10.1109/DCC.2019.00086","url":null,"abstract":"This paper addresses a challenging problem – how to reduce energy consumption without incurring performance drop when deploying deep neural networks (DNNs) at the inference stage. In order to alleviate the computation and storage burdens, we propose a novel dataflow-based joint quantization approach with the hypothesis that a fewer number of quantization operations would incur less information loss and thus improve the final performance. It first introduces a quantization scheme with efficient bit-shifting and rounding operations to represent network parameters and activations in low precision. Then it re-structures the network architectures to form unified modules for optimization on the quantized model. Extensive experiments on ImageNet and KITTI validate the effectiveness of our model, demonstrating that state-of-the-art results for various tasks can be achieved by this quantized model. Besides, we designed and synthesized an RTL model to measure the hardware costs among various quantization methods. For each quantization operation, it reduces area cost by about 15 times and energy consumption by about 9 times, compared to a strong baseline.","PeriodicalId":167723,"journal":{"name":"2019 Data Compression Conference (DCC)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124602952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
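The abstract mentions a quantization scheme built from bit-shifting and rounding. The sketch below shows a generic power-of-two-scale uniform quantizer of that kind; the bit widths and fixed-point format are illustrative assumptions, not the paper's scheme.

```python
# Uniform quantization with a power-of-two scale, so scaling reduces to bit
# shifts plus rounding. Parameters are illustrative only.
import numpy as np

def quantize_shift(x, frac_bits=6, num_bits=8):
    """Round x to a fixed-point grid with step 2**-frac_bits, clipped to the signed num_bits range."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    return np.clip(np.round(x * (1 << frac_bits)), qmin, qmax).astype(np.int32)

def dequantize_shift(q, frac_bits=6):
    return q.astype(np.float32) / (1 << frac_bits)

w = (np.random.randn(4, 4) * 0.5).astype(np.float32)
w_hat = dequantize_shift(quantize_shift(w))
print(np.max(np.abs(w - w_hat)))  # at most half a step, 2**-(frac_bits+1), unless values were clipped
```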