{"title":"Nowhere to Disguise: Spot Camouflaged Objects via Saliency Attribute Transfer","authors":"Wenda Zhao;Shigeng Xie;Fan Zhao;You He;Huchuan Lu","doi":"10.1109/TIP.2023.3277793","DOIUrl":null,"url":null,"abstract":"Both salient object detection (SOD) and camouflaged object detection (COD) are typical object segmentation tasks. They are intuitively contradictory, but are intrinsically related. In this paper, we explore the relationship between SOD and COD, and then borrow successful SOD models to detect camouflaged objects to save the design cost of COD models. The core insight is that both SOD and COD leverage two aspects of information: object semantic representations for distinguishing object and background, and context attributes that decide object category. Specifically, we start by decoupling context attributes and object semantic representations from both SOD and COD datasets through designing a novel decoupling framework with triple measure constraints. Then, we transfer saliency context attributes to the camouflaged images through introducing an attribute transfer network. The generated weakly camouflaged images can bridge the context attribute gap between SOD and COD, thereby improving the SOD models’ performances on COD datasets. Comprehensive experiments on three widely-used COD datasets verify the ability of the proposed method. Code and model are available at: \n<uri>https://github.com/wdzhao123/SAT</uri>\n.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10132418/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Both salient object detection (SOD) and camouflaged object detection (COD) are typical object segmentation tasks. They are intuitively contradictory yet intrinsically related. In this paper, we explore the relationship between SOD and COD, and then borrow successful SOD models to detect camouflaged objects, saving the cost of designing dedicated COD models. The core insight is that both SOD and COD leverage two aspects of information: object semantic representations that distinguish object from background, and context attributes that decide the object category. Specifically, we start by decoupling context attributes and object semantic representations from both SOD and COD datasets by designing a novel decoupling framework with triple measure constraints. Then, we transfer saliency context attributes to the camouflaged images by introducing an attribute transfer network. The generated weakly camouflaged images bridge the context attribute gap between SOD and COD, thereby improving the performance of SOD models on COD datasets. Comprehensive experiments on three widely used COD datasets verify the effectiveness of the proposed method. Code and model are available at: https://github.com/wdzhao123/SAT.
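
To make the decouple-then-transfer pipeline described in the abstract concrete, the following minimal PyTorch sketch illustrates the general idea: one network splits an image into object-semantic and context-attribute features, and a second network recombines camouflage semantics with saliency attributes to decode a "weakly camouflaged" image. This is a hypothetical illustration, not the authors' implementation (see the linked repository for the official code); the module names Decoupler and AttributeTransfer are invented here, and the paper's triple measure constraints and training losses are omitted.

import torch
import torch.nn as nn

class Decoupler(nn.Module):
    """Hypothetical stand-in for the decoupling framework: maps an image to
    object-semantic features and context-attribute features."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.semantic_head = nn.Conv2d(feat_ch, feat_ch, 1)   # object vs. background cues
        self.attribute_head = nn.Conv2d(feat_ch, feat_ch, 1)  # context attributes

    def forward(self, x):
        f = self.backbone(x)
        return self.semantic_head(f), self.attribute_head(f)

class AttributeTransfer(nn.Module):
    """Hypothetical attribute transfer network: fuses camouflage semantics with
    saliency context attributes and decodes a weakly camouflaged image."""
    def __init__(self, feat_ch=64, out_ch=3):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Conv2d(feat_ch * 2, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, out_ch, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, cod_semantic, sod_attribute):
        fused = torch.cat([cod_semantic, sod_attribute], dim=1)
        return self.decoder(fused)

if __name__ == "__main__":
    decouple = Decoupler()
    transfer = AttributeTransfer()
    cod_img = torch.rand(1, 3, 256, 256)   # camouflaged image
    sod_img = torch.rand(1, 3, 256, 256)   # salient-object image
    cod_sem, _ = decouple(cod_img)          # keep camouflage semantics
    _, sod_attr = decouple(sod_img)         # borrow saliency context attributes
    weakly_camouflaged = transfer(cod_sem, sod_attr)
    print(weakly_camouflaged.shape)         # torch.Size([1, 3, 256, 256])

Under this reading, the weakly camouflaged outputs would serve as additional training data that narrows the context-attribute gap, so an off-the-shelf SOD model fine-tuned on them can better segment camouflaged objects.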