Adaptively weighted update steps using chrominance for scalable video coding

Fengling Li, N. Ling
{"title":"自适应加权更新步骤使用色度可扩展视频编码","authors":"Fengling Li, N. Ling","doi":"10.1109/SIPS.2005.1579952","DOIUrl":null,"url":null,"abstract":"Scalable video coding using motion-compensated temporal filtering is one of the latest trends in video coding standardization. In the lifting based motion-compensated temporal filtering framework, such as that in the joint scalable video model (JSVM), the data used in the update steps are basically the residuals from the motion-compensated prediction. When the motion model used in the prediction steps fails to capture the true motion, energy in the high-pass temporal frames becomes substantial and strong ghosting artifacts may be introduced to the low-pass frames during the update steps. In this paper we propose a new block-based update approach, which takes advantage of the chrominance information of the video sequence to further reduce ghosting artifacts in low-pass temporal frames. We adaptively weight the update steps according to the energy not only of luminance pixels, but also of chrominance pixels in the high-pass temporal frames at the corresponding locations. Experimental results show that the proposed algorithm can significantly improve the quality of the reconstructed video sequence, in PSNR and visual quality.","PeriodicalId":436123,"journal":{"name":"IEEE Workshop on Signal Processing Systems Design and Implementation, 2005.","volume":"02 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Adaptively weighted update steps using chrominance for scalable video coding\",\"authors\":\"Fengling Li, N. Ling\",\"doi\":\"10.1109/SIPS.2005.1579952\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Scalable video coding using motion-compensated temporal filtering is one of the latest trends in video coding standardization. In the lifting based motion-compensated temporal filtering framework, such as that in the joint scalable video model (JSVM), the data used in the update steps are basically the residuals from the motion-compensated prediction. When the motion model used in the prediction steps fails to capture the true motion, energy in the high-pass temporal frames becomes substantial and strong ghosting artifacts may be introduced to the low-pass frames during the update steps. In this paper we propose a new block-based update approach, which takes advantage of the chrominance information of the video sequence to further reduce ghosting artifacts in low-pass temporal frames. We adaptively weight the update steps according to the energy not only of luminance pixels, but also of chrominance pixels in the high-pass temporal frames at the corresponding locations. 
Experimental results show that the proposed algorithm can significantly improve the quality of the reconstructed video sequence, in PSNR and visual quality.\",\"PeriodicalId\":436123,\"journal\":{\"name\":\"IEEE Workshop on Signal Processing Systems Design and Implementation, 2005.\",\"volume\":\"02 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Workshop on Signal Processing Systems Design and Implementation, 2005.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SIPS.2005.1579952\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Workshop on Signal Processing Systems Design and Implementation, 2005.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SIPS.2005.1579952","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Scalable video coding using motion-compensated temporal filtering is one of the latest trends in video coding standardization. In the lifting-based motion-compensated temporal filtering framework, such as that in the joint scalable video model (JSVM), the data used in the update steps are essentially the residuals from motion-compensated prediction. When the motion model used in the prediction steps fails to capture the true motion, the energy in the high-pass temporal frames becomes substantial, and strong ghosting artifacts may be introduced into the low-pass frames during the update steps. In this paper, we propose a new block-based update approach that takes advantage of the chrominance information of the video sequence to further reduce ghosting artifacts in the low-pass temporal frames. We adaptively weight the update steps according to the energy not only of the luminance pixels, but also of the chrominance pixels at the corresponding locations in the high-pass temporal frames. Experimental results show that the proposed algorithm can significantly improve the quality of the reconstructed video sequence in both PSNR and visual quality.
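The abstract gives no pseudocode, so the Python sketch below only illustrates the general idea it describes: measure the energy of a high-pass block using both luminance and co-located chrominance samples, map that energy to an update weight, and scale the lifting update of the low-pass frame accordingly. The function names, the linear weight mapping, and the thresholds e_low/e_high are illustrative assumptions, not the authors' actual parameters or the JSVM update rule.

```python
import numpy as np

def block_energy(high_y, high_cb, high_cr, chroma_weight=1.0):
    """Energy (mean squared value) of one high-pass block, combining
    luma and co-located chroma samples. chroma_weight is an assumed
    knob for how strongly chrominance contributes."""
    e_luma = np.mean(high_y.astype(np.float64) ** 2)
    e_chroma = 0.5 * (np.mean(high_cb.astype(np.float64) ** 2)
                      + np.mean(high_cr.astype(np.float64) ** 2))
    return e_luma + chroma_weight * e_chroma

def update_weight(energy, e_low=16.0, e_high=256.0):
    """Map block energy to an update weight in [0, 1]: full update for
    low-energy (well-predicted) blocks, no update for high-energy blocks
    where the motion model likely failed. Thresholds are illustrative."""
    if energy <= e_low:
        return 1.0
    if energy >= e_high:
        return 0.0
    return (e_high - energy) / (e_high - e_low)

def weighted_update(low_block, high_block, weight, lift_gain=0.5):
    """Lifting-style update L' = L + gain * H, scaled by the adaptive
    weight so strong residuals contribute less ghosting."""
    return low_block + weight * lift_gain * high_block

# Hypothetical usage, assuming H_y/H_cb/H_cr are the high-pass block's
# luma and chroma samples and L_y is the co-located low-pass luma block:
#   w = update_weight(block_energy(H_y, H_cb, H_cr))
#   L_y_updated = weighted_update(L_y, H_y, w)
```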