KD-RSCC: A Karras Diffusion Framework for Efficient Remote Sensing Change Captioning

Impact Factor: 4.4
Xiaofei Yu;Jie Ma;Liqiang Qiao
DOI: 10.1109/LGRS.2025.3608489
Journal: IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5
Published: 2025-09-10 (Journal Article)
Citations: 0

Abstract

Remote sensing image change captioning (RSICC) is a challenging task that involves describing surface changes between bitemporal or multitemporal satellite images using natural language. This task requires both fine-grained visual understanding and expressive language generation. Transformer-based and long short-term memory (LSTM)-based models have shown promising results in this domain. However, they may encounter difficulties in generating flexible and diverse captions, particularly when training data are limited or imbalanced. While diffusion models provide richer textual outputs, they are often constrained by long inference times. To address these issues, we propose a novel diffusion-based framework, KD-RSCC, for efficient and expressive remote sensing change captioning. This framework utilizes the Karras sampling method to significantly reduce the number of steps required during inference, while preserving the quality and diversity of the generated captions. In addition, we introduce a large language model (LLM)-based evaluation strategy $\text{G-Eval}_{\text{RSCC}}$ to conduct a more comprehensive assessment of the semantic accuracy, fluency, and linguistic diversity of the generated descriptions. Experimental results demonstrate that KD-RSCC achieves an optimal balance between generation quality and inference speed, enhancing the flexibility and readability of its outputs. The code and supplementary materials are available at https://github.com/Fay-Y/KD_RSCC
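The abstract does not spell out the Karras sampling schedule it adopts, but the key idea behind few-step Karras-style sampling is a noise-level schedule whose spacing concentrates steps near low noise. As a minimal illustration only, the sketch below computes that schedule using the standard formulation from the EDM line of work; the parameter defaults (`sigma_min`, `sigma_max`, `rho`) are common choices from that literature, not values confirmed for KD-RSCC.

```python
import numpy as np

def karras_sigmas(n_steps: int, sigma_min: float = 0.002,
                  sigma_max: float = 80.0, rho: float = 7.0) -> np.ndarray:
    """Karras-style noise schedule:
    sigma_i = (sigma_max^(1/rho) + i/(n-1) * (sigma_min^(1/rho) - sigma_max^(1/rho)))^rho.
    Interpolating in sigma^(1/rho) space packs more steps near sigma_min,
    which is why a small n_steps can still sample well."""
    ramp = np.linspace(0.0, 1.0, n_steps)          # i/(n-1) for i = 0..n-1
    max_inv = sigma_max ** (1.0 / rho)
    min_inv = sigma_min ** (1.0 / rho)
    return (max_inv + ramp * (min_inv - max_inv)) ** rho

# Monotonically decreasing noise levels from sigma_max down to sigma_min:
sigmas = karras_sigmas(10)
```

With only 10 steps the schedule spans the full noise range while clustering most levels at the low-noise end, which is the property that lets a Karras-style sampler cut inference steps without degrading output quality.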