Predictive visual saliency model for surveillance video

Fahad Fazal Elahi Guraya, F. A. Cheikh
{"title":"Predictive visual saliency model for surveillance video","authors":"Fahad Fazal Elahi Guraya, F. A. Cheikh","doi":"10.5281/ZENODO.42675","DOIUrl":null,"url":null,"abstract":"Visual saliency models(VSM) mimic the human visual system to distinguish the salient regions from the non-salient ones in an image or video. Most of the visual saliency model in the literature are static hence they can only be used for images. Motion is important information in case of videos that is not present in still images and thus not used in most of VSMs. There are very few saliency models which take into account both static and motion information. And there is no saliency model in the literature which uses static features, motion, prediction and face feature. In this paper we propose a predictive visual saliency model for video that uses static features, motion feature and face detection to predict the evolution in time of the human attention or the saliency. We introduce a new approach to compute saliency map for videos using salient motion information and prediction. The proposed model is tested and validated for surveillance videos.","PeriodicalId":331889,"journal":{"name":"2011 19th European Signal Processing Conference","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 19th European Signal Processing Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5281/ZENODO.42675","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 9

Abstract

Visual saliency models (VSMs) mimic the human visual system to distinguish salient regions from non-salient ones in an image or video. Most visual saliency models in the literature are static and can therefore only be applied to still images. Motion is an important cue in video that is absent from still images and is thus not exploited by most VSMs. Very few saliency models take both static and motion information into account, and no saliency model in the literature combines static features, motion, prediction, and face features. In this paper we propose a predictive visual saliency model for video that uses static features, motion features, and face detection to predict the evolution of human attention, or saliency, over time. We introduce a new approach to computing saliency maps for videos using salient motion information and prediction. The proposed model is tested and validated on surveillance videos.
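The abstract describes combining static, motion, and face cues with a prediction term to obtain a per-frame saliency map, but it does not give the fusion rule. Below is a minimal sketch, assuming simple placeholder cues (local contrast for the static map, frame differencing for motion, a Haar cascade for faces) and an illustrative weighted combination in which the previous frame's saliency map serves as the prediction term; none of these specific choices or weights come from the paper.

```python
# Hedged sketch: fuse static, motion, face, and prediction cues into one saliency map.
# All cue definitions and weights below are illustrative assumptions, not the paper's method.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def static_saliency(frame_gray):
    # Placeholder static cue: local contrast as the difference from a blurred copy.
    blur = cv2.GaussianBlur(frame_gray, (21, 21), 0)
    return cv2.absdiff(frame_gray, blur).astype(np.float32)

def motion_saliency(frame_gray, prev_gray):
    # Placeholder motion cue: absolute frame difference between consecutive frames.
    return cv2.absdiff(frame_gray, prev_gray).astype(np.float32)

def face_saliency(frame_gray):
    # Mark detected face regions as uniformly salient.
    mask = np.zeros(frame_gray.shape, dtype=np.float32)
    for (x, y, w, h) in face_cascade.detectMultiScale(frame_gray, 1.1, 5):
        mask[y:y + h, x:x + w] = 1.0
    return mask

def fuse(frame_gray, prev_gray, prev_saliency, weights=(0.3, 0.4, 0.2, 0.1)):
    # Weighted combination of static, motion, and face cues plus the previous
    # saliency map as a crude prediction term; weights are assumptions.
    cues = [static_saliency(frame_gray),
            motion_saliency(frame_gray, prev_gray),
            face_saliency(frame_gray),
            prev_saliency.astype(np.float32)]
    cues = [cv2.normalize(c, None, 0.0, 1.0, cv2.NORM_MINMAX) for c in cues]
    fused = sum(w * c for w, c in zip(weights, cues))
    return cv2.normalize(fused, None, 0.0, 1.0, cv2.NORM_MINMAX)
```

For the first frame of a sequence, `prev_saliency` can be initialized to a zero map of the same size as the frame; subsequent frames feed the previous output back in as the prediction cue.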