FBNet: FeedBack-Recursive CNN for Saliency Detection

Guanqun Ding, Nevrez Imamoglu, Ali Caglayan, M. Murakawa, Ryosuke Nakamura

2021 17th International Conference on Machine Vision and Applications (MVA), 25 July 2021. DOI: 10.23919/MVA51890.2021.9511371
Abstract: Saliency detection research has made great progress in recent years with the emergence of convolutional neural networks (CNNs). Most deep-learning-based saliency models adopt a feed-forward CNN architecture with a heavy parameter burden, learning features in a purely bottom-up manner. However, this forward-only process may ignore the intrinsic relationships and potential benefits of top-down connections and information flow. To the best of our knowledge, no prior work explores feedback connections, especially in a recursive manner, for saliency detection. We therefore propose and explore a simple, intuitive, yet powerful feedback-recursive convolutional model (FBNet) for image saliency detection. Specifically, we first select and define a lightweight baseline feed-forward CNN structure (~4.7 MB); the high-level multi-scale saliency features are then fed back to the low-level convolutional blocks in a recursive process. Experimental results show that the feedback-recursive process is a promising way to improve the performance of the baseline feed-forward CNN model. Moreover, despite having relatively few CNN parameters, the proposed FBNet model achieves competitive results on public saliency detection benchmarks.
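The abstract only sketches the architecture, so the following is a minimal, hypothetical PyTorch illustration of the feedback-recursive idea it describes: a small feed-forward CNN whose high-level features are projected, upsampled, and concatenated back onto the input of the low-level block for a fixed number of recursive passes. The class name `FeedbackRecursiveNet`, the block layout, the channel widths, the number of feedback steps, and the feedback-by-concatenation mechanism are all assumptions made for illustration, not the authors' exact FBNet design.

```python
# Hypothetical sketch of a feedback-recursive forward pass (NOT the authors' exact
# FBNet architecture): high-level features are projected, upsampled, and fed back
# to the low-level block, and the forward pass is repeated for a fixed number of steps.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """A small convolutional block (conv -> BN -> ReLU -> pool); layout is assumed."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class FeedbackRecursiveNet(nn.Module):
    def __init__(self, feedback_ch=32, steps=2):
        super().__init__()
        self.steps = steps  # number of recursive feedback passes (assumed)
        # The low-level block sees the RGB image plus the fed-back saliency features.
        self.low = conv_block(3 + feedback_ch, 32)
        self.mid = conv_block(32, 64)
        self.high = conv_block(64, 128)
        # Project high-level features into the feedback channels.
        self.to_feedback = nn.Conv2d(128, feedback_ch, kernel_size=1)
        self.predict = nn.Conv2d(128, 1, kernel_size=1)

    def forward(self, x):
        b, _, h, w = x.shape
        # Zero feedback on the first pass: step 1 is the plain feed-forward baseline.
        fb = x.new_zeros(b, self.to_feedback.out_channels, h, w)
        for _ in range(self.steps):
            f1 = self.low(torch.cat([x, fb], dim=1))
            f2 = self.mid(f1)
            f3 = self.high(f2)
            # Feed high-level features back at input resolution for the next pass.
            fb = F.interpolate(self.to_feedback(f3), size=(h, w),
                               mode="bilinear", align_corners=False)
        # Predict the saliency map from the final pass, upsampled to the input size.
        return torch.sigmoid(F.interpolate(self.predict(f3), size=(h, w),
                                           mode="bilinear", align_corners=False))
```

Under these assumptions, a reasonable training setup would supervise the saliency map from the final pass (or from every pass) with a binary cross-entropy loss against the ground-truth mask; the paper itself should be consulted for the actual loss and supervision scheme.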