A Survey of Temporal Activity Localization via Language in Untrimmed Videos

Yulan Yang, Z. Li, Gangyan Zeng
Published in: 2020 International Conference on Culture-oriented Science & Technology (ICCST), October 2020. DOI: 10.1109/ICCST50977.2020.00123. Citations: 8.

Abstract

Video is one of the most informative media, consisting of visual, textual, and audio content. As the number of videos on the Internet grows explosively, it is increasingly necessary for machines to understand the semantic information in videos accurately. Temporal activity localization in a video is such a task: it requires localizing the video moment that is most semantically similar to a given natural-language query. The task is quite challenging because it requires not only a deep understanding of both the sentences and the videos, but also of the fine-grained interactions between the two modalities. In this paper, we report a comprehensive survey of existing temporal sentence localization techniques. First, we present a detailed classification and analysis of these methods. Then we discuss the experimental results and performance of existing approaches. Finally, we present some insights on future research directions.
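The task described above can be made concrete with a minimal sketch. The following is not any specific method from the surveyed literature, but a hypothetical illustration of the task formulation: given per-clip video features and a sentence embedding for the query, score candidate temporal windows by cosine similarity and return the best-matching span. All names (`localize_moment`, `window_sizes`) and the mean-pooling/similarity choices are assumptions for illustration only.

```python
import numpy as np

def localize_moment(video_feats, query_feat, window_sizes=(2, 4)):
    """Score candidate temporal windows against a query embedding and
    return the best-matching (start, end) clip interval.

    video_feats: (T, D) array of per-clip visual features.
    query_feat:  (D,) embedding of the natural-language query.
    """
    def cosine(a, b):
        # Small epsilon guards against division by zero for all-zero features.
        return float(np.dot(a, b) /
                     (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    best_score, best_span = -1.0, (0, 0)
    for w in window_sizes:                       # sliding windows at several scales
        for start in range(len(video_feats) - w + 1):
            # Mean-pool the clip features inside the window into one moment vector.
            moment = video_feats[start:start + w].mean(axis=0)
            score = cosine(moment, query_feat)
            if score > best_score:
                best_score, best_span = score, (start, start + w)
    return best_span, best_score
```

Real approaches in the survey replace the fixed similarity with learned cross-modal interaction, but the input/output contract — video plus query in, a temporal span out — is the same.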