Rui Wang, Caijuan Shi, Changyu Duan, Weixiang Gao, Hongli Zhu, Yunchao Wei, Meiqin Liu
DOI: 10.1016/j.cviu.2024.104061
Journal: Computer Vision and Image Understanding (Q2, Computer Science, Artificial Intelligence; Impact Factor 4.3)
Publication date: 2024-06-26 (Journal Article)
Article page: https://www.sciencedirect.com/science/article/pii/S1077314224001425
Camouflaged object segmentation with prior via two-stage training
The camouflaged object segmentation (COS) task aims to segment objects visually embedded in the background. Existing models usually rely on prior information as an auxiliary means of identifying camouflaged objects. However, low-quality priors and a single form of guidance hinder the effective use of prior information. To address these issues, we propose a novel approach to prior generation and guidance, named the prior-guided transformer (PGT). For prior generation, we design a subnetwork consisting of a Transformer backbone and simple convolutions to obtain higher-quality priors at lower cost. In addition, to fully exploit the backbone's capability to understand camouflage characteristics, a novel two-stage training method is proposed to provide deep supervision for the backbone. For prior guidance, we design prior guidance modules (PGMs) with distinct spatial token mixers that capture the global dependencies of location priors and the local details of boundary priors, respectively. Additionally, we introduce a cross-level prior in the form of features to facilitate inter-level communication among backbone features. Extensive experiments demonstrate the effectiveness and superiority of our method. The code is available at https://github.com/Ray3417/PGT.
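The abstract describes guiding backbone features with a coarse location prior (global object position) and a boundary prior (local edge detail). The paper's actual PGM uses attention-based token mixers; the NumPy sketch below is only a hypothetical illustration of the general idea, with all function and parameter names invented here, not taken from the PGT code:

```python
import numpy as np

def guide_with_priors(features, location_prior, boundary_prior, alpha=0.5):
    """Illustrative prior-guided fusion (not the authors' PGM).
    features: (C, H, W) backbone feature map.
    location_prior, boundary_prior: (H, W) single-channel prior maps."""
    # Normalize each prior to [0, 1] so it acts as a soft spatial mask.
    loc = (location_prior - location_prior.min()) / (np.ptp(location_prior) + 1e-8)
    bnd = (boundary_prior - boundary_prior.min()) / (np.ptp(boundary_prior) + 1e-8)
    # Global guidance: amplify responses where the location prior is high.
    guided = features * (1.0 + loc)[None, :, :]
    # Local guidance: add boundary-weighted detail back into the features.
    guided = guided + alpha * features * bnd[None, :, :]
    return guided

feats = np.random.rand(8, 16, 16)
loc_prior = np.random.rand(16, 16)
bnd_prior = np.random.rand(16, 16)
out = guide_with_priors(feats, loc_prior, bnd_prior)
print(out.shape)  # (8, 16, 16)
```

The sketch keeps the two roles separate, as the abstract does: the location prior reweights the whole map, while the boundary prior injects edge-localized detail.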
About the journal:
The central focus of this journal is the computer analysis of pictorial information. Computer Vision and Image Understanding publishes papers covering all aspects of image analysis from the low-level, iconic processes of early vision to the high-level, symbolic processes of recognition and interpretation. A wide range of topics in the image understanding area is covered, including papers offering insights that differ from predominant views.
Research Areas Include:
• Theory
• Early vision
• Data structures and representations
• Shape
• Range
• Motion
• Matching and recognition
• Architecture and languages
• Vision systems