Title: Probability-based saliency detection approach for multi-features integration
Authors: Jing Pan, Yuqing He, Qishen Zhang, Kun Huang
DOI: 10.1117/12.2182163 (https://doi.org/10.1117/12.2182163)
Journal: Photoelectronic Technology Committee Conferences
Published: 2015-04-13 (Journal Article, Semantic Scholar)
Citations: 0
Abstract
Various saliency detection methods have been proposed in recent years. These methods often complement each other, so combining them appropriately is an effective approach to saliency analysis. Existing aggregation methods assign a weight to each entire saliency map, ignoring that features perform differently in different parts of an image and differ in how well they distinguish the foreground from the background. In this work, we present a Bayesian probability-based framework for multi-feature aggregation. We address saliency detection as a two-class classification problem. The saliency maps generated from each feature are decomposed into pixels. Using statistics of each saliency value's reliability for foreground and background detection, we generate an accurate, uniform, per-pixel saliency mask without any manually set parameters. This approach significantly suppresses each feature's misclassifications while preserving its sensitivity to the foreground or background. Experiments on public saliency benchmarks show that our method achieves results equal to or better than state-of-the-art approaches. A new dataset containing 1500 images with human-labeled ground truth is also constructed.
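The core idea — treating saliency detection as a two-class (foreground/background) problem and fusing per-feature saliency maps at the pixel level using statistics of each saliency value's reliability — can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the histogram binning, and in particular the use of a crude thresholded mean map as the initial foreground estimate are all assumptions made here for illustration; the paper instead derives its reliability statistics within its Bayesian framework.

```python
import numpy as np

def bayesian_fusion(maps, prior=0.5, bins=32, eps=1e-6):
    """Fuse per-feature saliency maps into one per-pixel posterior.

    maps : list of 2-D arrays with values in [0, 1], one per feature.
    A coarse foreground estimate (here: thresholding the mean map at
    its average -- an illustrative assumption, not the paper's method)
    supplies the two class labels used to build, for each feature, a
    histogram of how reliably each saliency value indicates foreground
    versus background.
    """
    mean_map = np.mean(maps, axis=0)
    fg = mean_map >= mean_map.mean()              # rough foreground mask
    # start from the log prior odds of a pixel being foreground
    log_ratio = np.full(mean_map.shape, np.log(prior) - np.log(1.0 - prior))
    for s in maps:
        # quantize this feature's saliency values into histogram bins
        idx = np.minimum((s * bins).astype(int), bins - 1)
        # per-feature likelihoods P(bin | foreground), P(bin | background)
        p_fg = np.bincount(idx[fg].ravel(), minlength=bins) + eps
        p_bg = np.bincount(idx[~fg].ravel(), minlength=bins) + eps
        p_fg = p_fg / p_fg.sum()
        p_bg = p_bg / p_bg.sum()
        # accumulate each feature's per-pixel log likelihood ratio
        log_ratio += np.log(p_fg[idx]) - np.log(p_bg[idx])
    # posterior P(foreground | all features), via the logistic of the odds
    return 1.0 / (1.0 + np.exp(-log_ratio))
```

Because the likelihood histograms are estimated per feature, a feature that separates foreground and background well contributes a large log-likelihood ratio, while an unreliable feature's ratio stays near zero — which is how pixel-level fusion can suppress a feature's misclassifications without discarding the whole map.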