{"title":"No-Reference Light Field Image Quality Assessment Based on Micro-Lens Image","authors":"Ziyuan Luo, Wei Zhou, Likun Shi, Zhibo Chen","doi":"10.1109/PCS48520.2019.8954551","DOIUrl":"https://doi.org/10.1109/PCS48520.2019.8954551","url":null,"abstract":"Light field image quality assessment (LF-IQA) plays a significant role due to its guidance to Light Field (LF) contents acquisition, processing and application. The LF can be represented as 4-D signal, and its quality depends on both angular consistency and spatial quality. However, few existing LF-IQA methods concentrate on effects caused by angular inconsistency. Especially, no-reference methods lack effective utilization of 2D angular information. In this paper, we focus on measuring the 2-D angular consistency for LF-IQA. The Micro-Lens Image (MLI) refers to the angular domain of the LF image, which can simultaneously record the angular information in both horizontal and vertical directions. Since the MLI contains 2D angular information, we propose a No-Reference Light Field image Quality assessment model based on MLI (LF-QMLI). Specifically, we first utilize Global Entropy Distribution (GED) and Uniform Local Binary Pattern descriptor (ULBP) to extract features from the MLI, and then pool them together to measure angular consistency. In addition, the information entropy of SubAperture Image (SAI) is adopted to measure spatial quality. 
Extensive experimental results show that LF-QMLI achieves state-of-the-art performance.","PeriodicalId":237809,"journal":{"name":"2019 Picture Coding Symposium (PCS)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115018017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A neural network approach to GOP-level rate control of x265 using Lookahead","authors":"Boya Cheng, Yuping Zhang","doi":"10.1109/PCS48520.2019.8954550","DOIUrl":"https://doi.org/10.1109/PCS48520.2019.8954550","url":null,"abstract":"To optimize the perceived quality under a specific bitrate constraint, multi-pass encoding is usually performed with the rate control mode of the average bitrate (ABR) or the constant rate factor (CRF) to distribute bits as reasonably as possible in terms of perceived quality, leading to high computational complexity. In this paper, we propose to utilize the video information generated during the encoding to adaptively adjust the CRF setting at GOP level, ensuring the bits of frames in each GOP are allocated reasonably under the bitrate constraint with a single-pass encoding framework. In particular, due to the inherent relationship between CRF values and bitrates, we adopt a shallow neural network (NN) to map video content features to the CRF-bitrate model. The content-related features are collected from the lookahead module inside the x265 encoder, including encoding cost estimation, motion vector and so on. Further, a rate control method, called content adaptive rate factor (CARF), is proposed to adjust the CRF value of each GOP with the requirement of the target bitrate by using the predicted CRF- bitrate models of each GOP. 
The experimental results show that the proposed approach keeps 84.5% of the test data within a 20% bitrate error (or better) and outperforms the ABR mode in x265, yielding a 5.23% BD-rate reduction on average.","PeriodicalId":237809,"journal":{"name":"2019 Picture Coding Symposium (PCS)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131204480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HEVC Inter Coding using Deep Recurrent Neural Networks and Artificial Reference Pictures","authors":"Felix Haub, Thorsten Laude, J. Ostermann","doi":"10.1109/PCS48520.2019.8954497","DOIUrl":"https://doi.org/10.1109/PCS48520.2019.8954497","url":null,"abstract":"The efficiency of motion compensated prediction in modern video codecs highly depends on the available reference pictures. Occlusions and non-linear motion pose challenges for the motion compensation and often result in high bit rates for the prediction error. We propose the generation of artificial reference pictures using deep recurrent neural networks. Conceptually, a reference picture at the time instance of the currently coded picture is generated from previously reconstructed conventional reference pictures. Based on these artificial reference pictures, we propose a complete coding pipeline based on HEVC. By using the artificial reference pictures for motion compensated prediction, average BD-rate gains of 1.5% over HEVC are achieved.","PeriodicalId":237809,"journal":{"name":"2019 Picture Coding Symposium (PCS)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116258177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}