RemoteCLIP: A Vision Language Foundation Model for Remote Sensing

arXiv preprint · Published: 2023-06-19 · DOI: 10.48550/arXiv.2306.11029
F. Liu, Delong Chen, Zhan-Rong Guan, Xiaocong Zhou, Jiale Zhu, Jun Zhou
{"title":"RemoteCLIP: A Vision Language Foundation Model for Remote Sensing","authors":"F. Liu, Delong Chen, Zhan-Rong Guan, Xiaocong Zhou, Jiale Zhu, Jun Zhou","doi":"10.48550/arXiv.2306.11029","DOIUrl":null,"url":null,"abstract":"General-purpose foundation models have become increasingly important in the field of artificial intelligence. While self-supervised learning (SSL) and Masked Image Modeling (MIM) have led to promising results in building such foundation models for remote sensing, these models primarily learn low-level features, require annotated data for fine-tuning, and not applicable for retrieval and zero-shot applications due to the lack of language understanding. In response to these limitations, we propose RemoteCLIP, the first vision-language foundation model for remote sensing that aims to learn robust visual features with rich semantics, as well as aligned text embeddings for seamless downstream application. To address the scarcity of pre-training data, we leverage data scaling, converting heterogeneous annotations based on Box-to-Caption (B2C) and Mask-to-Box (M2B) conversion, and further incorporating UAV imagery, resulting a 12xlarger pretraining dataset. RemoteCLIP can be applied to a variety of downstream tasks, including zero-shot image classification, linear probing, k-NN classification, few-shot classification, image-text retrieval, and object counting. Evaluations on 16 datasets, including a newly introduced RemoteCount benchmark to test the object counting ability, show that RemoteCLIP consistently outperforms baseline foundation models across different model scales. Impressively, RemoteCLIP outperform previous SoTA by 9.14% mean recall on RSICD dataset and by 8.92% on RSICD dataset. For zero-shot classification, our RemoteCLIP outperform CLIP baseline by up to 6.39% average accuracy on 12 downstream datasets.Pretrained models is available at https://github.com/ChenDelong1999/RemoteCLIP .","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":"27 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ArXiv","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2306.11029","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 10

Abstract

General-purpose foundation models have become increasingly important in the field of artificial intelligence. While self-supervised learning (SSL) and Masked Image Modeling (MIM) have led to promising results in building such foundation models for remote sensing, these models primarily learn low-level features, require annotated data for fine-tuning, and are not applicable to retrieval or zero-shot applications due to their lack of language understanding. In response to these limitations, we propose RemoteCLIP, the first vision-language foundation model for remote sensing, which aims to learn robust visual features with rich semantics, as well as aligned text embeddings for seamless downstream application. To address the scarcity of pre-training data, we leverage data scaling: heterogeneous annotations are converted into image-text pairs via Box-to-Caption (B2C) and Mask-to-Box (M2B) conversion, and UAV imagery is further incorporated, resulting in a 12× larger pretraining dataset. RemoteCLIP can be applied to a variety of downstream tasks, including zero-shot image classification, linear probing, k-NN classification, few-shot classification, image-text retrieval, and object counting. Evaluations on 16 datasets, including a newly introduced RemoteCount benchmark for testing object-counting ability, show that RemoteCLIP consistently outperforms baseline foundation models across different model scales. Impressively, RemoteCLIP outperforms the previous SoTA by 9.14% mean recall on the RSITMD dataset and by 8.92% on the RSICD dataset. For zero-shot classification, RemoteCLIP outperforms the CLIP baseline by up to 6.39% average accuracy on 12 downstream datasets. Pretrained models are available at https://github.com/ChenDelong1999/RemoteCLIP .
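The data-scaling step described in the abstract converts existing detection and segmentation annotations into image-text pairs. The sketch below illustrates, under stated assumptions, what Box-to-Caption (B2C) and Mask-to-Box (M2B) conversions could look like: boxes are counted per class and templated into a caption, and a binary mask is reduced to its tightest bounding box. The caption templates, class names, and function names are illustrative assumptions, not the paper's exact conversion rules.

```python
# Hedged sketch of B2C / M2B style conversions; the caption templates and class
# names are illustrative assumptions, not the exact rules used by RemoteCLIP.
from collections import Counter

import numpy as np


def boxes_to_caption(boxes: list[dict]) -> str:
    """Box-to-Caption (B2C): turn detection boxes into a templated caption.

    boxes: [{"label": "airplane", "bbox": [x1, y1, x2, y2]}, ...]
    """
    counts = Counter(box["label"] for box in boxes)
    if not counts:
        return "an aerial image with no annotated objects"
    parts = [
        f"{n} {label}s" if n > 1 else f"a {label}"
        for label, n in counts.most_common()
    ]
    return "an aerial image containing " + " and ".join(parts)


def mask_to_box(mask: np.ndarray) -> list[int]:
    """Mask-to-Box (M2B): tightest [x1, y1, x2, y2] box around a binary mask."""
    ys, xs = np.nonzero(mask)
    return [int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())]


# Example with hypothetical annotations:
boxes = [
    {"label": "airplane", "bbox": [10, 20, 80, 90]},
    {"label": "airplane", "bbox": [120, 40, 190, 110]},
    {"label": "storage tank", "bbox": [300, 300, 360, 360]},
]
print(boxes_to_caption(boxes))
# "an aerial image containing 2 airplanes and a storage tank"

mask = np.zeros((256, 256), dtype=bool)
mask[50:100, 30:80] = True
print(mask_to_box(mask))  # [30, 50, 79, 99]
```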
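A minimal usage sketch for the released checkpoints follows, assuming they load into standard open_clip architectures as described in the linked repository; the checkpoint filename, prompt template, class names, and image path are illustrative assumptions, so defer to the repository's own loading instructions.

```python
# Hedged sketch: zero-shot remote sensing scene classification with a RemoteCLIP
# checkpoint loaded into an open_clip ViT-B-32 model. Checkpoint path, prompt
# template, class names, and image path are assumptions for illustration.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

# Assumed checkpoint filename; see the repository above for the released weights.
checkpoint = torch.load("RemoteCLIP-ViT-B-32.pt", map_location="cpu")
model.load_state_dict(checkpoint)
model.eval()

class_names = ["airport", "farmland", "forest", "residential area"]  # example labels
text = tokenizer([f"a satellite photo of a {c}" for c in class_names])
image = preprocess(Image.open("scene.jpg")).unsqueeze(0)  # example image path

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(class_names[probs.argmax().item()])
```

The same normalized image and text embeddings underpin the other downstream tasks listed in the abstract: image-text retrieval, k-NN, and few-shot classification all reduce to cosine similarity in the shared embedding space.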