Video Indexing, Search, Detection, and Description with Focus on TRECVID

G. Awad, Duy-Dinh Le, C. Ngo, Vinh-Tiep Nguyen, G. Quénot, Cees G. M. Snoek, S. Satoh
DOI: 10.1145/3078971.3079044
Published in: Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval
Publication date: 2017-06-06
Citations: 11

Abstract

There has been tremendous growth in video data over the last decade. People are using mobile phones and tablets to capture, share, or watch videos more than ever before. Video cameras are almost everywhere in the public domain (e.g., stores, streets, and public facilities). Efficient and effective retrieval methods are critically needed in many applications. The goal of TRECVID is to encourage research in content-based video retrieval by providing large test collections, uniform scoring procedures, and a forum for organizations interested in comparing their results. In this tutorial, we present and discuss some of the most important and fundamental content-based video retrieval problems: recognizing predefined visual concepts, searching videos for complex ad-hoc user queries, searching a video dataset by image/video examples to retrieve specific objects, persons, or locations, detecting events, and finally bridging the gap between vision and language by examining how systems can automatically describe videos in natural language. A review of the state of the art, current challenges, and future directions, along with pointers to useful resources, will be presented by regular TRECVID participating teams. Each team will present one of the following tasks: Semantic INdexing (SIN), Zero-example (0Ex) Video Search (AVS), Instance Search (INS), Multimedia Event Detection (MED), and Video to Text (VTT).
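The example-based search the abstract describes (as in the Instance Search task) reduces, at its core, to representing each video shot by a feature vector and ranking the collection by similarity to a query example. The sketch below illustrates that pipeline only; the 3-element "features" and the `shot_*` identifiers are toy placeholders standing in for real descriptors (e.g., CNN embeddings of keyframes), not anything from the tutorial itself.

```python
# Minimal sketch of example-based video retrieval: rank indexed shot
# features by cosine similarity to a query example's feature vector.
# Features here are toy 3-element vectors; real systems would use
# high-dimensional keyframe descriptors (an assumption, for illustration).
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def search(index, query, top_k=3):
    """Return the top_k (shot_id, score) pairs most similar to the query."""
    scored = [(sid, cosine(feat, query)) for sid, feat in index.items()]
    return sorted(scored, key=lambda s: -s[1])[:top_k]

# Toy index: shot id -> feature vector (hypothetical values).
index = {
    "shot_001": [0.9, 0.1, 0.0],
    "shot_002": [0.1, 0.8, 0.1],
    "shot_003": [0.85, 0.1, 0.05],
}
query = [1.0, 0.1, 0.0]  # feature of the query image example
print(search(index, query, top_k=2))
```

In a real system the linear scan in `search` would be replaced by an approximate nearest-neighbor index, since TRECVID collections contain hundreds of hours of video.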