SemanTV: A Content-Based Video Retrieval Framework

Juan Miguel A. Mendoza, China Marie G. Lao, Antolin J. Alipio, Dan Michael A. Cortez, Anne Camille M. Maupay, Charito M. Molina, C. Centeno, Jonathan C. Morano
{"title":"SemanTV: A Content-Based Video Retrieval Framework","authors":"Juan Miguel A. Mendoza, China Marie G. Lao, Antolin J. Alipio, Dan Michael A. Cortez, Anne Camille M. Maupay, Charito M. Molina, C. Centeno, Jonathan C. Morano","doi":"10.1145/3533050.3533067","DOIUrl":null,"url":null,"abstract":"With the increased adaption of CCTV for surveillance, challenges in terms of retrieval have recently gained attention. Most Surveillance Video Systems can only retrieve footage based on its metadata, (date, time, camera location, etc.) which limits the diversity of meaningful footage intended to be retrieved by the user. To solve this, a content-based video retrieval framework was proposed to retrieve relevant videos based on their content and match it to the user's query. This framework composes of two (2) methods: A method for Video Content Extraction that utilizes Google's Video Intelligence API for Optical Character Recognition and Label Detection, and a method for Video Retrieval. Various setups for the Video Retrieval method are explored; this includes the usage of SBERT and Okapi BM25. Each setup was tested against various text queries with equivalent test video results based on the MSVD dataset. To measure each setup's performance in terms of relevance, Recall and Precision at K and Median and Mean Rank were used. It was concluded that the framework composed of the Video Intelligence API along with SBERT alone performed better than the other proposed setup for returning videos relevant to the user's text query more accurately than the other setups of the method.","PeriodicalId":109214,"journal":{"name":"Proceedings of the 2022 6th International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2022 6th International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3533050.3533067","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

With the increased adoption of CCTV for surveillance, challenges in retrieval have recently gained attention. Most surveillance video systems can only retrieve footage based on its metadata (date, time, camera location, etc.), which limits the diversity of meaningful footage that users can retrieve. To address this, a content-based video retrieval framework is proposed to retrieve relevant videos based on their content and match them to the user's query. The framework comprises two methods: a Video Content Extraction method that utilizes Google's Video Intelligence API for Optical Character Recognition and Label Detection, and a Video Retrieval method. Various setups for the Video Retrieval method are explored, including the use of SBERT and Okapi BM25. Each setup was tested against various text queries with corresponding test videos drawn from the MSVD dataset. To measure each setup's performance in terms of relevance, Recall and Precision at K and Median and Mean Rank were used. It was concluded that the setup composed of the Video Intelligence API together with SBERT alone returned videos relevant to the user's text query more accurately than the other setups of the method.
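For illustration, the Video Content Extraction step described above can be sketched as follows, assuming the Python client for Google's Video Intelligence API (google-cloud-videointelligence) and a hypothetical Cloud Storage URI; the paper's actual extraction code is not reproduced here, so this reflects only the API features the abstract names (Label Detection and on-screen text detection).

```python
# Sketch of the Video Content Extraction step described in the abstract,
# using Google's Video Intelligence API for Label Detection and text (OCR)
# detection. The bucket URI and the concatenated output format are assumptions.
from google.cloud import videointelligence


def extract_video_content(input_uri: str) -> str:
    """Return one text document (labels + on-screen text) for one video."""
    client = videointelligence.VideoIntelligenceServiceClient()
    features = [
        videointelligence.Feature.LABEL_DETECTION,  # objects/scenes as labels
        videointelligence.Feature.TEXT_DETECTION,   # on-screen text via OCR
    ]
    operation = client.annotate_video(
        request={"features": features, "input_uri": input_uri}
    )
    annotation = operation.result(timeout=600).annotation_results[0]

    labels = [
        label.entity.description
        for label in annotation.segment_label_annotations
    ]
    ocr_text = [text.text for text in annotation.text_annotations]

    # Concatenate everything into one "content document" per video,
    # which the retrieval step can then index.
    return " ".join(labels + ocr_text)


if __name__ == "__main__":
    # Hypothetical video location; replace with an accessible GCS URI.
    print(extract_video_content("gs://example-bucket/sample_video.mp4"))
```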
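The two Video Retrieval setups the abstract compares can be sketched roughly as below, assuming the sentence-transformers and rank_bm25 packages; the model name and the toy corpus are illustrative assumptions, not details taken from the paper.

```python
# Sketch of the two retrieval setups compared in the paper: SBERT
# (dense embeddings ranked by cosine similarity) and Okapi BM25
# (sparse lexical ranking). Model and corpus are assumptions.
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

# One "content document" per video, e.g. output of the extraction step.
video_ids = ["vid_001", "vid_002", "vid_003"]
video_docs = [
    "dog running grass park",
    "man riding bicycle street sign STOP",
    "woman cooking kitchen pan",
]
query = "a man riding a bike"

# --- SBERT setup: embed documents and query, rank by cosine similarity ---
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
doc_emb = model.encode(video_docs, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
sbert_scores = util.cos_sim(query_emb, doc_emb)[0]
sbert_ranking = sbert_scores.argsort(descending=True).tolist()

# --- Okapi BM25 setup: rank by lexical term overlap ---
bm25 = BM25Okapi([doc.split() for doc in video_docs])
bm25_scores = bm25.get_scores(query.split())
bm25_ranking = sorted(range(len(video_docs)), key=lambda i: -bm25_scores[i])

print("SBERT:", [video_ids[i] for i in sbert_ranking])
print("BM25 :", [video_ids[i] for i in bm25_ranking])
```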
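Finally, the relevance measures named in the abstract (Recall and Precision at K, Median and Mean Rank) can be computed from a ranked result list as in the generic sketch below; this follows the standard definitions of these metrics and is not the paper's evaluation code.

```python
# Generic sketch of the metrics named in the abstract: Recall@K,
# Precision@K, and Mean/Median Rank of the relevant videos.
from statistics import mean, median


def precision_recall_at_k(ranked_ids, relevant_ids, k):
    top_k = ranked_ids[:k]
    hits = sum(1 for vid in top_k if vid in relevant_ids)
    precision = hits / k
    recall = hits / len(relevant_ids) if relevant_ids else 0.0
    return precision, recall


def rank_stats(ranked_ids, relevant_ids):
    # 1-based rank of each relevant video in the result list.
    ranks = [ranked_ids.index(vid) + 1 for vid in relevant_ids if vid in ranked_ids]
    return mean(ranks), median(ranks)


# Toy example: retrieval returned vid_002 first, and vid_002 is the
# only relevant video for the query.
ranked = ["vid_002", "vid_001", "vid_003"]
relevant = {"vid_002"}
p, r = precision_recall_at_k(ranked, relevant, k=1)
print(f"P@1={p:.2f} R@1={r:.2f}")                          # P@1=1.00 R@1=1.00
print("mean/median rank:", rank_stats(ranked, relevant))   # (1, 1)
```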