{"title":"Prototype as query for few shot semantic segmentation","authors":"Leilei Cao, Yibo Guo, Ye Yuan, Qiangguo Jin","doi":"10.1007/s40747-024-01539-4","DOIUrl":null,"url":null,"abstract":"<p>Few-shot Semantic Segmentation (FSS) was proposed to segment unseen classes in a query image, referring to only a few annotated examples named support images. One of the characteristics of FSS is spatial inconsistency between query and support targets, e.g., texture or appearance. This greatly challenges the generalization ability of methods for FSS, which requires to effectively exploit the dependency of the query image and the support examples. Most existing methods abstracted support features into prototype vectors and implemented the interaction with query features using cosine similarity or feature concatenation. However, this simple interaction may not capture spatial details in query features. To address this limitation, some methods utilized pixel-level support information by computing pixel-level correlations between paired query and support features implemented with the attention mechanism of Transformer. Nevertheless, these approaches suffer from heavy computation due to dot-product attention between all pixels of support and query features. In this paper, we propose a novel framework, termed ProtoFormer, built upon the Transformer architecture, to fully capture spatial details in query features. ProtoFormer treats the abstracted prototype of the target class in support features as the Query and the query features as Key and Value embeddings, which are input to the Transformer decoder. This approach enables better capture of spatial details and focuses on the semantic features of the target class in the query image. The output of the Transformer-based module can be interpreted as semantic-aware dynamic kernels that filter the segmentation mask from the enriched query features. 
Extensive experiments conducted on PASCAL-<span>\\(5^{i}\\)</span> and COCO-<span>\\(20^{i}\\)</span> datasets demonstrate that ProtoFormer significantly outperforms the state-of-the-art methods in FSS.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":null,"pages":null},"PeriodicalIF":5.0000,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-024-01539-4","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Few-shot Semantic Segmentation (FSS) aims to segment unseen classes in a query image given only a few annotated examples, called support images. One characteristic of FSS is the spatial inconsistency between query and support targets, e.g., in texture or appearance. This greatly challenges the generalization ability of FSS methods, which must effectively exploit the dependencies between the query image and the support examples. Most existing methods abstract support features into prototype vectors and implement the interaction with query features via cosine similarity or feature concatenation. However, this simple interaction may fail to capture spatial details in the query features. To address this limitation, some methods exploit pixel-level support information by computing pixel-level correlations between paired query and support features, implemented with the Transformer attention mechanism. Nevertheless, these approaches incur heavy computation because dot-product attention is applied between all pixels of the support and query features. In this paper, we propose a novel framework, termed ProtoFormer, built upon the Transformer architecture, to fully capture spatial details in query features. ProtoFormer treats the abstracted prototype of the target class in the support features as the Query, and the query features as the Key and Value embeddings, which are fed into the Transformer decoder. This design better captures spatial details and focuses on the semantic features of the target class in the query image. The output of the Transformer-based module can be interpreted as semantic-aware dynamic kernels that filter the segmentation mask from the enriched query features. Extensive experiments on the PASCAL-\(5^{i}\) and COCO-\(20^{i}\) datasets demonstrate that ProtoFormer significantly outperforms state-of-the-art FSS methods.
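The core mechanism described above can be sketched in a few lines: a prototype is pooled from the masked support features, used as a single-token Query attending over the query-image features (Key/Value), and the attended output acts as a dynamic kernel that filters a mask from the query features. The following is a minimal NumPy sketch under stated assumptions: single-head scaled dot-product attention without learned projections, and a simple mean-threshold binarization. All function names are illustrative and this is not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_average_pool(support_feat, support_mask):
    # Abstract the target class in the support image into a prototype vector.
    # support_feat: (C, H, W) features; support_mask: (H, W) binary mask
    masked = support_feat * support_mask  # broadcast over channels
    return masked.sum(axis=(1, 2)) / (support_mask.sum() + 1e-6)  # (C,)

def prototype_as_query_attention(prototype, query_feat):
    # Prototype is the (single-token) Query; query-image features are Key/Value.
    # Cost is O(HW) rather than O((HW)^2) for all-pairs pixel attention.
    C, H, W = query_feat.shape
    kv = query_feat.reshape(C, H * W).T            # (HW, C) keys/values
    q = prototype[None, :]                          # (1, C) query token
    attn = softmax(q @ kv.T / np.sqrt(C))           # (1, HW) attention weights
    kernel = (attn @ kv).squeeze(0)                 # (C,) semantic-aware dynamic kernel
    return kernel

def predict_mask(kernel, query_feat):
    # The dynamic kernel filters a segmentation mask from the query features.
    logits = np.tensordot(kernel, query_feat, axes=([0], [0]))  # (H, W)
    return (logits > logits.mean()).astype(np.uint8)            # toy binarization
```

In the paper the decoder is a learned Transformer module; this sketch only illustrates the asymmetry that gives the method its name: one prototype token attends over all query pixels, instead of attending between all support and query pixel pairs.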
About the journal
Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools, and techniques for fostering cross-fertilization among the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.