A Video Salient Object Detection Model Guided by Spatio-Temporal Prior

Wen-Wen Jiang, Kai-Fu Yang, Yongjie Li
2019 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 2555-2562, December 2019.
DOI: 10.1109/SSCI44817.2019.9002971
Citations: 1

Abstract

Neurobiology research suggests that motion information attracts more attention from the human visual system than other low-level features such as brightness, color, and texture. Consequently, video saliency detection methods consider not only the spatial saliency arising from the underlying image features, but also the motion information in the temporal domain. In this study, we propose a video salient object detection model based on a two-pathway framework in which spatio-temporal contrast guides the search for salient targets. First, the non-selective pathway combines intra-frame and inter-frame maps of color contrast and motion contrast with the previous saliency map to represent prior information about possible target locations. In parallel, low-level features such as brightness, color, and motion are extracted in the selective pathway to locate the target accurately. Finally, Bayesian inference is applied to obtain the optimal result. Experimental results show that our algorithm improves the performance of salient object detection on video compared with the representative Contour Guided Visual Search method.
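The two-pathway pipeline described above can be illustrated with a minimal sketch. Note that this is an assumption-laden illustration, not the paper's implementation: the color and motion contrast measures below are simple stand-ins (mean-color deviation and frame differencing), and the function names, equal-weight combination, and likelihood inputs are hypothetical.

```python
import numpy as np

def spatiotemporal_prior(frame, prev_frame, prev_saliency):
    """Non-selective pathway (sketch): combine intra-frame color
    contrast, inter-frame motion contrast, and the previous saliency
    map into a prior over likely target locations.

    All inputs are float arrays in [0, 1]; `frame` and `prev_frame`
    are H x W x 3 color images, `prev_saliency` is H x W.
    """
    # Intra-frame color contrast: deviation of each pixel from the
    # frame's mean color (a stand-in for the paper's contrast maps).
    color_contrast = np.linalg.norm(frame - frame.mean(axis=(0, 1)), axis=2)

    # Inter-frame motion contrast: simple frame differencing as a
    # placeholder for a proper motion estimate.
    motion_contrast = np.abs(frame - prev_frame).mean(axis=2)

    # Normalize each cue to [0, 1] before combining (equal weights
    # assumed here for illustration).
    def norm(m):
        return (m - m.min()) / (m.max() - m.min() + 1e-8)

    return (norm(color_contrast) + norm(motion_contrast) + prev_saliency) / 3.0

def bayesian_fusion(prior, likelihood_fg, likelihood_bg, eps=1e-8):
    """Fuse the spatio-temporal prior with per-pixel feature
    likelihoods from the selective pathway via Bayes' rule:
    P(salient | features) = P(f | s) P(s) / P(f)."""
    num = prior * likelihood_fg
    return num / (num + (1.0 - prior) * likelihood_bg + eps)
```

For example, at a pixel where the prior is 0.5, a foreground likelihood of 0.8 against a background likelihood of 0.2 yields a posterior saliency of about 0.8, showing how the selective pathway's features sharpen the prior's coarse localization.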