Spatiotemporal saliency based on location prior model

Liuyi Hu, Zhongyuan Wang, Mang Ye, Jing Xiao, R. Hu

2016 International Joint Conference on Neural Networks (IJCNN), July 24, 2016. DOI: 10.1109/IJCNN.2016.7727514
Saliency detection for images and videos has become increasingly popular due to its wide applicability. Despite extensive research on saliency detection, existing methods still struggle to maintain the spatiotemporal consistency of videos and to uniformly highlight entire objects. To address these issues, this paper proposes a superpixel-level spatiotemporal saliency model for video saliency detection. To detect salient objects, we first extract multiple spatiotemporal features combined with intra-frame motion consistency information. Meanwhile, exploiting the inter-frame consistency of the foreground in videos, a set of foreground locations is obtained from previous frames. We then derive foreground-background and local foreground contrast saliency cues from these features using the foreground location prior. These two improved contrast cues uniformly highlight the entire object and effectively suppress the background. Finally, an interactive dynamic fusion method integrates the resulting spatial and temporal saliency maps. The proposed approach is validated on challenging video sequences; subjective observations and objective evaluations demonstrate that it outperforms state-of-the-art spatiotemporal saliency methods.
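The abstract does not spell out the contrast formulation, but the idea of a foreground-background contrast cue gated by a location prior can be sketched as follows. This is a minimal illustration, assuming superpixels are represented by per-superpixel feature vectors (e.g., mean Lab color), normalized centroid positions, and a foreground-location prior in [0, 1] propagated from previous frames; the function and parameter names (contrast_saliency, fg_prior, sigma) are illustrative, not the authors' implementation.

import numpy as np

def contrast_saliency(features, positions, fg_prior, sigma=0.25):
    """Sketch of a foreground-background contrast cue at superpixel level.

    features : (N, D) per-superpixel feature vectors, e.g. mean Lab color
    positions: (N, 2) superpixel centroids, normalized to [0, 1]
    fg_prior : (N,) foreground-location prior in [0, 1] from previous frames
    """
    bg_weight = 1.0 - fg_prior              # likely-background superpixels
    saliency = np.zeros(len(features))
    for i in range(len(features)):
        # feature-space distance of superpixel i to all others
        d_feat = np.linalg.norm(features - features[i], axis=1)
        # spatially nearby superpixels count more
        d_pos = np.linalg.norm(positions - positions[i], axis=1)
        w = np.exp(-d_pos**2 / (2.0 * sigma**2))
        # contrast measured against likely-background regions only,
        # so the whole foreground object scores uniformly high
        saliency[i] = np.sum(w * bg_weight * d_feat) / (np.sum(w * bg_weight) + 1e-8)
    # normalize to [0, 1]
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)

Restricting the contrast computation to likely-background superpixels is what lets interior regions of a large object score as highly as its boundary, which matches the stated goal of uniformly highlighting entire objects.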
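The "interactive dynamic fusion" of spatial and temporal saliency maps is likewise only named in the abstract. The sketch below stands in for the paper's exact rule: it weights each map by how concentrated its saliency mass is (a common quality heuristic, assumed here) and adds a multiplicative interaction term so regions salient in both maps reinforce each other.

import numpy as np

def dynamic_fusion(S_spatial, S_temporal, eps=1e-8):
    """Sketch of dynamically weighted fusion of two saliency maps.

    S_spatial, S_temporal: 2-D arrays of per-pixel saliency in [0, 1].
    The compactness weighting is an assumption, not the authors' formula.
    """
    def compactness(S):
        # lower entropy => saliency concentrated on few regions => higher weight
        p = S / (S.sum() + eps)
        entropy = -np.sum(p * np.log(p + eps))
        return 1.0 / (entropy + eps)

    w_s, w_t = compactness(S_spatial), compactness(S_temporal)
    alpha = w_s / (w_s + w_t)               # per-frame, hence "dynamic"
    # the product term is the "interactive" part: it boosts pixels that
    # both the spatial and the temporal map agree on
    fused = alpha * S_spatial + (1.0 - alpha) * S_temporal + S_spatial * S_temporal
    return fused / (fused.max() + eps)

Recomputing alpha for every frame keeps the fusion adaptive: frames with unreliable motion (e.g., camera shake) lean on the spatial map, while frames with clean motion lean on the temporal one.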