Scene-Change Detection Based on Multi-Feature-Fusion Latent Dirichlet Allocation Model for High-Spatial-Resolution Remote Sensing Imagery

IF 1.0 · Earth Sciences, Region 4 · JCR Q4 (GEOGRAPHY, PHYSICAL)
Xiaoman Li, Yanfei Zhong, Yuxuan Su, Richen Ye
{"title":"基于多特征融合潜在Dirichlet分配模型的高空间分辨率遥感影像场景变化检测","authors":"Xiaoman Li, Yanfei Zhong, Yuxuan Su, Richen Ye","doi":"10.14358/pers.20-00054","DOIUrl":null,"url":null,"abstract":"With the continuous development of high-spatial-resolution ground observation technology, it is now becoming possible to obtain more and more high-resolution images, which provide us with the possibility to understand remote sensing images at the semantic level. Compared with traditional\n pixel- and object-oriented methods of change detection, scene-change detection can provide us with land use change information at the semantic level, and can thus provide reliable information for urban land use change detection, urban planning, and government management. Most of the current\n scene-change detection methods are based on the visual-words expression of the bag-of-visual-words model and the single-feature-based latent Dirichlet allocation model. In this article, a scene-change detection method for high-spatial-resolution imagery is proposed based on a multi-feature-fusion\n latent Dirich- let allocation model. This method combines the spectral, textural, and spatial features of the high-spatial-resolution images, and the final scene expression is realized through the topic features extracted from the more abstract latent Dirichlet allocation model. Post-classification\n comparison is then used to detect changes in the scene images at different times. A series of experiments demonstrates that, compared with the traditional bag-of-words and topic models, the proposed method can obtain superior scene-change detection results.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":"4 6","pages":""},"PeriodicalIF":1.0000,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Scene-Change Detection Based on Multi-Feature-Fusion Latent Dirichlet Allocation Model for High-Spatial-Resolution Remote Sensing Imagery\",\"authors\":\"Xiaoman Li, Yanfei Zhong, Yuxuan Su, Richen Ye\",\"doi\":\"10.14358/pers.20-00054\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the continuous development of high-spatial-resolution ground observation technology, it is now becoming possible to obtain more and more high-resolution images, which provide us with the possibility to understand remote sensing images at the semantic level. Compared with traditional\\n pixel- and object-oriented methods of change detection, scene-change detection can provide us with land use change information at the semantic level, and can thus provide reliable information for urban land use change detection, urban planning, and government management. Most of the current\\n scene-change detection methods are based on the visual-words expression of the bag-of-visual-words model and the single-feature-based latent Dirichlet allocation model. In this article, a scene-change detection method for high-spatial-resolution imagery is proposed based on a multi-feature-fusion\\n latent Dirich- let allocation model. This method combines the spectral, textural, and spatial features of the high-spatial-resolution images, and the final scene expression is realized through the topic features extracted from the more abstract latent Dirichlet allocation model. Post-classification\\n comparison is then used to detect changes in the scene images at different times. 
A series of experiments demonstrates that, compared with the traditional bag-of-words and topic models, the proposed method can obtain superior scene-change detection results.\",\"PeriodicalId\":49702,\"journal\":{\"name\":\"Photogrammetric Engineering and Remote Sensing\",\"volume\":\"4 6\",\"pages\":\"\"},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2021-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Photogrammetric Engineering and Remote Sensing\",\"FirstCategoryId\":\"89\",\"ListUrlMain\":\"https://doi.org/10.14358/pers.20-00054\",\"RegionNum\":4,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"GEOGRAPHY, PHYSICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Photogrammetric Engineering and Remote Sensing","FirstCategoryId":"89","ListUrlMain":"https://doi.org/10.14358/pers.20-00054","RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"GEOGRAPHY, PHYSICAL","Score":null,"Total":0}
Citations: 3

Abstract

With the continuous development of high-spatial-resolution Earth observation technology, more and more high-resolution images are becoming available, making it possible to understand remote sensing images at the semantic level. Compared with traditional pixel- and object-oriented change detection methods, scene-change detection provides land-use change information at the semantic level, and can thus supply reliable information for urban land-use change detection, urban planning, and government management. Most current scene-change detection methods are based on the visual-word representation of the bag-of-visual-words model and on single-feature latent Dirichlet allocation models. In this article, a scene-change detection method for high-spatial-resolution imagery is proposed based on a multi-feature-fusion latent Dirichlet allocation model. The method combines the spectral, textural, and spatial features of the high-spatial-resolution images, and the final scene representation is obtained from the topic features extracted by the more abstract latent Dirichlet allocation model. Post-classification comparison is then used to detect changes between the scene images acquired at different times. A series of experiments demonstrates that, compared with the traditional bag-of-words and topic models, the proposed method obtains superior scene-change detection results.
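To make the pipeline described in the abstract more concrete, the following is a minimal, illustrative Python sketch rather than the authors' implementation: it assumes that local descriptors for each feature type (spectral, textural, spatial) have already been extracted for every scene, quantizes them against separate k-means codebooks into bag-of-visual-words histograms, fuses the histograms by concatenation, projects them into topic space with a latent Dirichlet allocation model, classifies each scene, and flags a change wherever a scene's predicted class differs between the two dates. The SVM classifier, the concatenation-based fusion, the codebooks, and the topic count are assumptions for illustration, not details taken from the paper.

# Illustrative sketch of a multi-feature bag-of-visual-words + LDA
# scene-change detection pipeline (assumptions noted above; not the authors' code).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import SVC

def bovw_histogram(descriptors, codebook: KMeans):
    # Quantize one scene's local descriptors against a learned codebook
    # and return its visual-word count histogram.
    words = codebook.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist

def fuse_scene(spectral, texture, spatial, codebooks):
    # Simple fusion assumption: concatenate the spectral, textural,
    # and spatial visual-word histograms of a scene.
    return np.concatenate([
        bovw_histogram(spectral, codebooks["spectral"]),
        bovw_histogram(texture, codebooks["texture"]),
        bovw_histogram(spatial, codebooks["spatial"]),
    ])

def detect_scene_changes(hists_t1, hists_t2, train_hists, train_labels, n_topics=30):
    # Learn topic features from the fused histograms, classify the scenes of
    # both dates in topic space, and report where the class labels differ
    # (post-classification comparison).
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(train_hists)
    clf = SVC(kernel="rbf").fit(lda.transform(train_hists), train_labels)
    labels_t1 = clf.predict(lda.transform(hists_t1))
    labels_t2 = clf.predict(lda.transform(hists_t2))
    return labels_t1 != labels_t2  # True where a scene's land-use class changed

The key design point is that change is declared at the scene level by post-classification comparison, so the reliability of the change map rests directly on the accuracy of the per-date scene classification in topic space.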
Source journal
Photogrammetric Engineering and Remote Sensing (Geosciences - Imaging Science & Photographic Technology)
CiteScore: 1.70
Self-citation rate: 15.40%
Articles published: 89
Review time: 9 months
Journal description: Photogrammetric Engineering & Remote Sensing, commonly referred to as PE&RS, is the official journal of imaging and geospatial information science and technology. Included in the journal on a regular basis are highlight articles, such as the popular columns "Grids & Datums" and "Mapping Matters," and peer-reviewed technical papers. The journal publishes thousands of documents, reports, codes, and informational articles in and about the industries relating to geospatial sciences, remote sensing, photogrammetry, and other imaging sciences.