Inter-view Reference Frame Selection in Multi-view Video Coding

Guang Y. Zhang, Abdelrahman Abdelazim, S. Mein, M. Varley, D. Ait-Boudaoud
2013 Data Compression Conference · DOI: 10.1109/DCC.2013.113 · Published: 2013-03-20 · Citations: 1

Abstract

Summary form only given. In multi-view video coding, multiple cameras capture the same scene simultaneously, and the resulting large volume of data directly affects coding efficiency. Because all views depict the same scene, the inter-view similarities between adjacent camera views can be exploited for efficient compression: the same objects generally appear, from different viewpoints, in adjacent views. However, a scene contains objects at different depth planes, so perfect correlation over the entire image area never occurs. Scene complexity and differences in brightness and color between the individual cameras also hinder the current block from finding its best match in the inter-view reference picture. Consequently, the temporal reference picture is referenced more frequently. Disabling unnecessary inter-view references is therefore central to gaining compression efficiency. The idea of this paper is to use phase correlation to estimate the dependency between the inter-view reference and the current picture: if the two frames have low correlation, the inter-view reference frame is disabled. The approach is applied only to non-anchor pictures. Experimental results show that the proposed algorithm saves 16% of the computational complexity on average, with negligible loss of quality and bit rate, while the phase-correlation step itself accounts for only 0.1% of the whole process.
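The decision criterion described above can be sketched with standard phase correlation: compute the normalized cross-power spectrum of the two frames and inspect its inverse-transform peak, which is high for well-correlated frames and low otherwise. This is a minimal illustrative sketch of the general technique, not the authors' implementation; the function name and the threshold value are assumptions for illustration.

```python
import numpy as np

def phase_correlation_peak(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Peak of the inverse FFT of the normalized cross-power spectrum.

    A peak near 1.0 indicates strong correlation (e.g. a pure translation
    between the frames); a low peak suggests the candidate inter-view
    reference is a poor match for the current picture.
    """
    fa = np.fft.fft2(frame_a)
    fb = np.fft.fft2(frame_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-12  # keep phase only; avoid /0
    correlation = np.fft.ifft2(cross_power).real
    return float(correlation.max())

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, (3, 5), axis=(0, 1))   # circular shift: translation only
noise = rng.random((64, 64))                    # unrelated content

peak_same = phase_correlation_peak(frame, shifted)   # close to 1.0
peak_noise = phase_correlation_peak(frame, noise)    # much lower

# Hypothetical selection rule in the spirit of the paper: disable the
# inter-view reference when correlation falls below a threshold.
THRESHOLD = 0.5  # illustrative value, not from the paper
use_inter_view = peak_noise >= THRESHOLD
```

Because the computation is two forward FFTs, one element-wise normalization, and one inverse FFT, its cost is small relative to a full motion/disparity search, which is consistent with the reported 0.1% overhead.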