Yuanxin Zhao , Mi Zhang , Bingnan Yang , Zhan Zhang , Jujia Kang , Jianya Gong
{"title":"LuoJiaHOG: A hierarchy oriented geo-aware image caption dataset for remote sensing image–text retrieval","authors":"Yuanxin Zhao , Mi Zhang , Bingnan Yang , Zhan Zhang , Jujia Kang , Jianya Gong","doi":"10.1016/j.isprsjprs.2025.02.009","DOIUrl":null,"url":null,"abstract":"<div><div>Image–text retrieval (ITR) is crucial for making informed decisions in various remote sensing (RS) applications, including urban development and disaster prevention. However, creating ITR datasets that combine vision and language modalities requires extensive geo-spatial sampling, diverse categories, and detailed descriptions. To address these needs, we introduce the LuojiaHOG dataset, which is geospatially aware, label-extension-friendly, and features comprehensive captions. LuojiaHOG incorporates hierarchical spatial sampling, an extensible classification system aligned with Open Geospatial Consortium (OGC) standards, and detailed caption generation. Additionally, we propose a CLIP-based Image Semantic Enhancement Network (CISEN) to enhance sophisticated ITR capabilities. CISEN comprises dual-path knowledge transfer and progressive cross-modal feature fusion. The former transfers multimodal knowledge from a large, pretrained CLIP-like model, while the latter enhances visual-to-text alignment and fine-grained cross-modal feature integration. Comprehensive statistics on LuojiaHOG demonstrate its richness in sampling diversity, label quantity, and description granularity. Evaluations of LuojiaHOG using various state-of-the-art ITR models–including ALBEF, ALIGN, CLIP, FILIP, Wukong, GeoRSCLIP, and CISEN-employ second- and third-level labels. Adapter-tuning shows that CISEN outperforms others, achieving the highest scores with WMAP@5 rates of 88.47% and 87.28% on third-level ITR tasks, respectively. Moreover, CISEN shows improvements of approximately 1.3% and 0.9% in WMAP@5 over its baseline. When tested on previous RS ITR benchmarks, CISEN achieves performance close to the state-of-the-art methods. Pretraining on LuojiaHOG can further enhance retrieval results. These findings underscore the advancements of CISEN in accurately retrieving relevant information across images and texts. LuojiaHOG and CISEN can serve as foundational resources for future research on RS image–text alignment, supporting a broad spectrum of vision-language applications. The retrieval demo and dataset are available at:<span><span>https://huggingface.co/spaces/aleo1/LuojiaHOG-demo</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"222 ","pages":"Pages 130-151"},"PeriodicalIF":10.6000,"publicationDate":"2025-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ISPRS Journal of Photogrammetry and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0924271625000590","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"GEOGRAPHY, PHYSICAL","Score":null,"Total":0}
Citations: 0
Abstract
Image–text retrieval (ITR) is crucial for making informed decisions in various remote sensing (RS) applications, including urban development and disaster prevention. However, creating ITR datasets that combine vision and language modalities requires extensive geo-spatial sampling, diverse categories, and detailed descriptions. To address these needs, we introduce the LuojiaHOG dataset, which is geospatially aware, label-extension-friendly, and features comprehensive captions. LuojiaHOG incorporates hierarchical spatial sampling, an extensible classification system aligned with Open Geospatial Consortium (OGC) standards, and detailed caption generation. Additionally, we propose a CLIP-based Image Semantic Enhancement Network (CISEN) to enhance sophisticated ITR capabilities. CISEN comprises dual-path knowledge transfer and progressive cross-modal feature fusion: the former transfers multimodal knowledge from a large, pretrained CLIP-like model, while the latter enhances visual-to-text alignment and fine-grained cross-modal feature integration. Comprehensive statistics on LuojiaHOG demonstrate its richness in sampling diversity, label quantity, and description granularity. Evaluations on LuojiaHOG using various state-of-the-art ITR models (including ALBEF, ALIGN, CLIP, FILIP, Wukong, GeoRSCLIP, and CISEN) employ second- and third-level labels. Under adapter-tuning, CISEN outperforms the other models, achieving the highest WMAP@5 scores of 88.47% and 87.28% on third-level ITR tasks, respectively. Moreover, CISEN improves WMAP@5 by approximately 1.3% and 0.9% over its baseline. When tested on previous RS ITR benchmarks, CISEN achieves performance close to state-of-the-art methods, and pretraining on LuojiaHOG further enhances retrieval results. These findings underscore the advances CISEN makes in accurately retrieving relevant information across images and texts. LuojiaHOG and CISEN can serve as foundational resources for future research on RS image–text alignment, supporting a broad spectrum of vision-language applications. The retrieval demo and dataset are available at: https://huggingface.co/spaces/aleo1/LuojiaHOG-demo.
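The abstract centers on CLIP-style image–text matching and reports retrieval quality with WMAP@5. For readers unfamiliar with either piece, the sketch below shows the standard recipe of ranking candidate captions by cosine similarity between image and text embeddings, together with one plausible reading of WMAP@k as mean average precision at k with per-query weights. The helper names and the weighting choice are illustrative assumptions, not the paper's released code or its exact metric definition.

```python
# Minimal, self-contained sketch (NumPy only) of the two pieces the abstract
# refers to: (1) CLIP-style image-text retrieval by cosine similarity and
# (2) a weighted mAP@k score. The weighting scheme and the helper names
# (rank_texts_for_images, wmap_at_k) are assumptions for illustration; the
# paper's exact WMAP@5 definition and the CISEN architecture are not reproduced.
import numpy as np


def rank_texts_for_images(image_emb: np.ndarray, text_emb: np.ndarray) -> np.ndarray:
    """For each image, return text indices sorted by cosine similarity (best first)."""
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    sims = img @ txt.T                     # (n_images, n_texts) cosine similarities
    return np.argsort(-sims, axis=1)


def wmap_at_k(rankings: np.ndarray, relevance: np.ndarray,
              weights: np.ndarray, k: int = 5) -> float:
    """Weighted mean average precision at k (one common AP@k convention).

    rankings : (n_queries, n_candidates) candidate indices, best first
    relevance: (n_queries, n_candidates) binary relevance labels
    weights  : (n_queries,) per-query weights, e.g. derived from label counts
    """
    ap = np.zeros(len(rankings))
    for q, order in enumerate(rankings):
        rel_topk = relevance[q, order[:k]]            # relevance of the top-k hits
        if rel_topk.sum() == 0:
            continue
        prec_at_i = np.cumsum(rel_topk) / (np.arange(len(rel_topk)) + 1)
        ap[q] = (prec_at_i * rel_topk).sum() / rel_topk.sum()
    return float((weights * ap).sum() / weights.sum())


# Toy usage with random vectors standing in for CLIP/CISEN embeddings.
rng = np.random.default_rng(0)
img_feats = rng.normal(size=(4, 512))
txt_feats = rng.normal(size=(6, 512))
ranks = rank_texts_for_images(img_feats, txt_feats)
labels = rng.integers(0, 2, size=(4, 6))              # binary image-text relevance
print(f"WMAP@5 = {wmap_at_k(ranks, labels, weights=np.ones(4)):.4f}")
```

In practice, the embeddings would come from a CLIP-like or CISEN encoder rather than random vectors, and the per-query weights would follow whatever scheme the paper defines for WMAP@5.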
About the journal:
The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) is the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It is a platform for scientists and professionals worldwide working in disciplines that draw on photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal aims to facilitate communication and dissemination of advances in these disciplines, while also serving as a comprehensive reference and archive.
P&RS endeavors to publish high-quality, peer-reviewed research papers, preferably original work that has not been published elsewhere. Papers may address scientific/research, technological-development, or application/practical aspects. The journal also welcomes papers based on presentations from ISPRS meetings, provided they are significant contributions to the aforementioned fields.
In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. It is preferred that theoretical papers include practical applications, while papers focusing on systems and applications should include a theoretical background.