Analysis and reduction of reference frames for motion estimation in MPEG-4 AVC/JVT/H.264

Yu-Wen Huang, Bing-Yu Hsieh, Tu-Chih Wang, Shao-Yi Chien, Shyh-Yih Ma, Chun-Fu Shen, Liang-Gee Chen
{"title":"Analysis and reduction of reference frames for motion estimation in MPEG-4 AVC/JVT/H.264","authors":"Yu-Wen Huang, Bing-Yu Hsieh, Tu-Chih Wang, Shao-Yi Chien, Shyh-Yih Ma, Chun-Fu Shen, Liang-Gee Chen","doi":"10.1109/ICASSP.2003.1199128","DOIUrl":null,"url":null,"abstract":"In the new video coding standard, MPEG-4 AVC/JVT/H.264, motion estimation is allowed to use multiple reference frames. The reference software adopts a full search scheme, and the increased computation is in proportion to the number of searched reference frames. However, the reduction of prediction residues is highly dependent on the nature of the sequences, not on the number of searched frames. We present a method to speed up the matching process for multiple reference frames. For each macroblock, we analyze the available information after intra prediction and motion estimation from the previous frame to determine whether it is necessary to search more frames. The information we use includes selected mode, inter prediction residues, intra prediction residues, and motion vectors. Simulation results show that the proposed algorithm can save up to 90% of unnecessary frames while keeping the average miss rate of optimal frames less than 4%.","PeriodicalId":104473,"journal":{"name":"2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings. (ICASSP '03).","volume":"240 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2003-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"74","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings. (ICASSP '03).","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICASSP.2003.1199128","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 74

Abstract

In the new video coding standard, MPEG-4 AVC/JVT/H.264, motion estimation is allowed to use multiple reference frames. The reference software adopts a full search scheme, and the increased computation is in proportion to the number of searched reference frames. However, the reduction of prediction residues is highly dependent on the nature of the sequences, not on the number of searched frames. We present a method to speed up the matching process for multiple reference frames. For each macroblock, we analyze the available information after intra prediction and motion estimation from the previous frame to determine whether it is necessary to search more frames. The information we use includes selected mode, inter prediction residues, intra prediction residues, and motion vectors. Simulation results show that the proposed algorithm can save up to 90% of unnecessary frames while keeping the average miss rate of optimal frames less than 4%.
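To make the idea in the abstract concrete, the sketch below illustrates the kind of per-macroblock test it describes: after intra prediction and motion estimation over the previous frame, the selected mode, the inter and intra prediction residues, and the motion vector are examined to decide whether the remaining reference frames should be searched at all. This is only a minimal illustrative sketch in Python; the function name, the threshold values, and the exact combination of tests are assumptions made for this example and are not the criteria defined in the paper.

# Illustrative sketch only: an early-termination test for multi-frame
# motion search, in the spirit of the abstract. Thresholds and the
# specific decision rule are assumptions, not the paper's algorithm.
def needs_more_reference_frames(selected_mode, inter_sad, intra_sad, mv,
                                sad_threshold=1024, mv_threshold=1):
    """Return False if the previous-frame prediction already looks good
    enough for this macroblock, so older reference frames can be skipped."""
    mvx, mvy = mv

    # A very small inter residue means frame t-1 already predicts this
    # macroblock well; older frames are unlikely to reduce it further.
    if inter_sad < sad_threshold:
        return False

    # Near-zero motion with an inter mode selected also suggests a stable,
    # well-predicted region that gains little from additional frames.
    if selected_mode != "intra" and abs(mvx) <= mv_threshold and abs(mvy) <= mv_threshold:
        return False

    # If intra prediction clearly beats inter prediction from frame t-1,
    # the block may be newly uncovered; an older frame might still contain
    # a good match, so keep searching.
    if intra_sad < inter_sad:
        return True

    # Otherwise, fall back to searching the additional reference frames.
    return True

In the setting the abstract describes, such a test would run once per macroblock after intra prediction and the motion search over the previous frame, gating the far more expensive full search over the remaining reference frames.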