Di Wang, Yousheng Yu, Shaofeng Li, Haodi Zhong, Xiao Liang, Lin Zhao
Scene-enhanced multi-scale temporal aware network for video moment retrieval
Pattern Recognition, Volume 165, Article 111642. Published 2025-04-09. DOI: 10.1016/j.patcog.2025.111642
URL: https://www.sciencedirect.com/science/article/pii/S0031320325003024
Citations: 0
Abstract
Video moment retrieval aims to locate a target moment in an untrimmed video using a natural language query. Current moment retrieval methods are typically tailored to scenarios where the temporal localization information is simple. However, these methods overlook videos that contain complex localization information, which makes it difficult to achieve precise retrieval across videos encompassing both complex and simple temporal localization information. To address this limitation, we propose a novel Scene-enhanced Multi-scale Temporal Aware Network (SMTAN) designed to adaptively extract different temporal localization information from different videos. Our method comprehensively processes video moments across fine-grained multiple scales and uses prior knowledge of the scene to enhance localization information. This facilitates the construction of multi-scale temporal feature maps, enabling extraction of both complex and simple temporal localization information in different videos. Extensive experiments on two benchmark datasets demonstrate that our proposed network surpasses state-of-the-art methods and achieves more accurate retrieval of different localization information across videos.
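To illustrate the idea of multi-scale temporal feature maps described above, the following is a minimal sketch, assuming clip-level features are average-pooled over windows of increasing temporal length. The function name, the choice of scales, and the use of average pooling are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def multiscale_temporal_maps(clip_feats, scales=(1, 2, 4)):
    """Hypothetical sketch: build temporal feature maps at several
    scales by average-pooling consecutive clip features.

    clip_feats: (T, D) array of T clip features of dimension D.
    Returns a dict mapping scale -> (T // scale, D) pooled features.
    """
    T, D = clip_feats.shape
    maps = {}
    for s in scales:
        n = T // s  # number of windows of length s that fit in T
        # Group clips into windows of length s and average within each window.
        pooled = clip_feats[: n * s].reshape(n, s, D).mean(axis=1)
        maps[s] = pooled
    return maps

# Usage: 16 clips with 8-dimensional features yield maps of
# 16, 8, and 4 temporal positions at scales 1, 2, and 4.
feats = np.random.rand(16, 8)
maps = multiscale_temporal_maps(feats)
for s, m in maps.items():
    print(s, m.shape)
```

Coarser scales summarize longer spans of the video, so a query matching a long, complex moment can be localized on a coarse map while a short, simple moment is localized on a fine one.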
Journal introduction:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.