Zhili Zhang, Xiangyun Hu, Yue Yang, Bingnan Yang, Kai Deng, Hengming Dai, Mi Zhang
{"title":"High-quality one-shot interactive segmentation for remote sensing images via hybrid adapter-enhanced foundation models","authors":"Zhili Zhang , Xiangyun Hu , Yue Yang , Bingnan Yang , Kai Deng , Hengming Dai , Mi Zhang","doi":"10.1016/j.jag.2025.104466","DOIUrl":null,"url":null,"abstract":"<div><div>Interactive segmentation of remote sensing images enables the rapid generation of annotated samples, providing training samples for deep learning algorithms and facilitating high-quality extraction and classification for remote sensing objects. However, existing interactive segmentation methods, such as SAM, are primarily designed for natural images and show inefficiencies when applied to remote sensing images. These methods often require multiple interactions to achieve satisfactory labeling results and frequently struggle to obtain precise target boundaries. To address these limitations, we propose a high-quality one-shot interactive segmentation method (OSISeg) based on the fine-tuning of foundation models, tailored for the efficient annotation of typical objects in remote sensing imagery. OSISeg utilizes robust visual priors from foundation models and implements a hybrid adapter-based strategy for fine-tuning these models. Specifically, It employs a parallel structure with hybrid adapter designs to adjust multi-head self-attention and feed-forward neural networks within foundation models, effectively aligning remote sensing image features for interactive segmentation tasks. Furthermore, the proposed OSISeg integrates point, box, and scribble prompts, facilitating high-quality segmentation only using one prompt through a lightweight decoder. Experimental results on multiple datasets—including buildings, water bodies, and woodlands—demonstrate that our method outperforms existing fine-tuning methods and significantly enhances the quality of one-shot interactive segmentation for typical remote sensing objects. 
This study highlights the potential of the proposed OSISeg to significantly accelerate sample annotation in remote sensing image labeling tasks, establishing it as a valuable tool for sample labeling in the field of remote sensing. Code is available at <span><span>https://github.com/zhilyzhang/OSISeg</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":"139 ","pages":"Article 104466"},"PeriodicalIF":7.6000,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International journal of applied earth observation and geoinformation : ITC journal","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S156984322500113X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"REMOTE SENSING","Score":null,"Total":0}
Citations: 0
Abstract
Interactive segmentation of remote sensing images enables the rapid generation of annotated samples, providing training data for deep learning algorithms and facilitating high-quality extraction and classification of remote sensing objects. However, existing interactive segmentation methods, such as SAM, are primarily designed for natural images and are inefficient when applied to remote sensing images. They often require multiple interactions to achieve satisfactory labeling results and frequently struggle to obtain precise target boundaries. To address these limitations, we propose a high-quality one-shot interactive segmentation method (OSISeg) based on the fine-tuning of foundation models, tailored for the efficient annotation of typical objects in remote sensing imagery. OSISeg utilizes robust visual priors from foundation models and implements a hybrid adapter-based strategy for fine-tuning them. Specifically, it employs a parallel structure with hybrid adapter designs to adjust the multi-head self-attention and feed-forward networks within foundation models, effectively aligning remote sensing image features for interactive segmentation tasks. Furthermore, OSISeg integrates point, box, and scribble prompts, enabling high-quality segmentation from a single prompt through a lightweight decoder. Experimental results on multiple datasets—including buildings, water bodies, and woodlands—demonstrate that our method outperforms existing fine-tuning methods and significantly enhances the quality of one-shot interactive segmentation for typical remote sensing objects. This study highlights the potential of OSISeg to significantly accelerate sample annotation in remote sensing image labeling tasks, establishing it as a valuable tool for sample labeling in the field of remote sensing. Code is available at https://github.com/zhilyzhang/OSISeg.
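The parallel-adapter idea described in the abstract — a small trainable bottleneck branch whose output is added to that of a frozen transformer sub-block (self-attention or feed-forward) — can be illustrated with a minimal sketch. This is not the authors' implementation; the function and parameter names (`parallel_adapter`, `down`, `up`, `scale`) and the toy dimensions are assumptions made for illustration only.

```python
def matvec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def relu(x):
    return [max(0.0, v) for v in x]

def parallel_adapter(x, frozen_block, down, up, scale=0.1):
    """Parallel bottleneck adapter: y = frozen_block(x) + scale * up(relu(down(x))).

    Only `down`, `up`, and `scale` would be trained; `frozen_block`
    stands in for a frozen foundation-model sub-block (MHSA or FFN).
    """
    main = frozen_block(x)                       # frozen branch
    side = matvec(up, relu(matvec(down, x)))     # trainable low-rank branch
    return [m + scale * s for m, s in zip(main, side)]

# Toy example: 4-dim features with a bottleneck of 2.
frozen = lambda x: [2.0 * v for v in x]          # placeholder frozen sub-block
down = [[1, 0, 0, 0], [0, 1, 0, 0]]              # 2x4 down-projection
up = [[1, 0], [0, 1], [0, 0], [0, 0]]            # 4x2 up-projection

y = parallel_adapter([1.0, -1.0, 0.5, 2.0], frozen, down, up)
```

Because the adapter runs in parallel with the frozen branch and its output is scaled before addition, the pretrained behavior is preserved at initialization while only a small number of parameters are updated during fine-tuning.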
About the journal:
The International Journal of Applied Earth Observation and Geoinformation publishes original papers that utilize earth observation data for natural resource and environmental inventory and management. These data primarily originate from remote sensing platforms, including satellites and aircraft, supplemented by surface and subsurface measurements. Addressing natural resources such as forests, agricultural land, soils, and water, as well as environmental concerns like biodiversity, land degradation, and hazards, the journal explores conceptual and data-driven approaches. It covers geoinformation themes like capturing, databasing, visualization, interpretation, data quality, and spatial uncertainty.