Markov Progressive Framework, a Universal Paradigm for Modeling Long Videos

Bo Pang, Gao Peng, Yizhuo Li, Cewu Lu
{"title":"Markov Progressive Framework, a Universal Paradigm for Modeling Long Videos","authors":"Bo Pang;Gao Peng;Yizhuo Li;Cewu Lu","doi":"10.1109/TPAMI.2024.3426998","DOIUrl":null,"url":null,"abstract":"The computational complexity of video models increases linearly with the square number of frames. Thus, constrained bycomputational resources, training video models to learn long-term temporal semantics end-to-end is quite a challenge. Currently, the main-stream method is to split a raw video into clips, leading to incomplete fragmentary temporal information flow and failure of modeling long-term semantics. In this paper, we design the Markov Progressive framework (MaPro), a theoretical framework consisting of the progressive modeling method and a paradigm model tailored for it. Thecore idea of MaPro is to find a paradigm model consisting of proposed Markov operators which can be trained in multiple sequential steps and ensure that the multi-step progressive modeling is equivalent to the conventional end-to-endmodeling. By training the paradigm model under the progressive method, we are able to model long videos end-to-endwith limited resources and ensure the effective transmission of long-term temporal information. We provide implementations of this theoretical system on the mainstream CNN- and Transformer-based models, where they are modified to conform to the Markov paradigm. As a general and robust training method, we experimentally demonstrate that it yields significant performance improvements on different backbones and datasets. As an illustrative example, the proposed method improves the SlowOnly network by 4.1 mAP on Charades and 2.5 top-1 accuracy on Kinetics. And for TimeSformer, MaPro improves its performance on Kinetics by 2.0 top-1 accuracy. Importantly, all these improvements areachieved with a little parameter and computation overhead.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10596972/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Compared with images, video, as an increasingly mainstream visual medium, carries far richer semantic information, and the computational complexity of video models is correspondingly an order of magnitude larger than that of image-level models, growing with the square of the number of frames. Constrained by computational resources, training video models to learn long-term temporal semantics end-to-end is therefore quite a challenge. Currently, the mainstream approach is to split a raw video into clips, which leads to an incomplete, fragmentary temporal information flow and a failure to model long-term semantics. To address this problem, we design the Markov Progressive framework (MaPro), a theoretical framework consisting of a progressive modeling method and a paradigm model tailored for it. Inspired by natural language processing techniques for handling long sentences, the core idea of MaPro is to find a paradigm model built from the proposed Markov operators, which can be trained in multiple sequential steps while ensuring that the multi-step progressive modeling is equivalent to conventional end-to-end modeling. By training the paradigm model with the progressive method, we can model long videos end-to-end with limited resources and ensure the effective transmission of long-term temporal information. We provide detailed implementations of this theoretical system on mainstream CNN- and Transformer-based models, which are modified to conform to the Markov paradigm. The theoretical paradigm, serving as the base model, is a lower bound on model efficiency; building on it, we further explore more sophisticated designs for the CNN- and Transformer-based methods. As a general and robust training method, MaPro is experimentally shown to yield significant performance improvements across different backbones and datasets. As an illustrative example, the proposed method improves the SlowOnly network by 4.1 mAP on Charades and 2.5 top-1 accuracy on Kinetics, and for TimeSformer, MaPro improves performance on Kinetics by 2.0 top-1 accuracy. Importantly, all these improvements are achieved with little parameter and computation overhead. We hope the MaPro method can provide the community with new insights into modeling long videos.
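To make the progressive idea concrete, below is a minimal PyTorch-style sketch of clip-by-clip processing in which only a compact state crosses clip boundaries, in the spirit of the Markov operators described above. The names (MarkovClipEncoder, progressive_forward) and all dimensions are illustrative assumptions, not the authors' implementation, and this simplification does not reproduce the paper's equivalence guarantee between progressive and end-to-end training.

```python
# A minimal conceptual sketch (PyTorch), assuming a generic clip encoder.
import torch
import torch.nn as nn


class MarkovClipEncoder(nn.Module):
    """Encodes one clip conditioned only on the state carried from the previous clip."""

    def __init__(self, in_dim: int, feat_dim: int = 256):
        super().__init__()
        self.clip_proj = nn.Linear(in_dim, feat_dim)   # stand-in for a CNN/Transformer clip backbone
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)  # fuses the carried state with the current clip

    def forward(self, clip: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        feat = self.clip_proj(clip).mean(dim=1)        # (B, T_clip, in_dim) -> (B, feat_dim)
        return torch.relu(self.fuse(torch.cat([feat, state], dim=-1)))


def progressive_forward(encoder: nn.Module, head: nn.Module, video: torch.Tensor,
                        num_clips: int, feat_dim: int = 256) -> torch.Tensor:
    """Split a long video into clips and encode them one after another.

    Only a compact state crosses clip boundaries (the Markov property), and
    detaching it between steps bounds each step's activation memory. In this
    simplified sketch gradients therefore do not flow across clips, whereas
    MaPro's Markov operators are constructed so that step-wise training stays
    equivalent to end-to-end modeling.
    """
    state = video.new_zeros(video.size(0), feat_dim)
    for clip in video.chunk(num_clips, dim=1):         # split along the frame dimension
        state = encoder(clip, state.detach())
    return head(state)


# Hypothetical usage: a batch of two 64-frame videos with 1024-d per-frame features.
encoder = MarkovClipEncoder(in_dim=1024)
head = nn.Linear(256, 400)                             # e.g. a Kinetics-400 classification head
video = torch.randn(2, 64, 1024)
logits = progressive_forward(encoder, head, video, num_clips=4)
print(logits.shape)                                    # torch.Size([2, 400])
```

The per-clip loop is what keeps peak memory independent of the total video length; the paper's contribution is the set of Markov operators that lets the CNN and Transformer backbones be trained this way while matching end-to-end modeling with little parameter and computation overhead.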