{"title":"基于多特征融合潜在Dirichlet分配模型的高空间分辨率遥感影像场景变化检测","authors":"Xiaoman Li, Yanfei Zhong, Yuxuan Su, Richen Ye","doi":"10.14358/pers.20-00054","DOIUrl":null,"url":null,"abstract":"With the continuous development of high-spatial-resolution ground observation technology, it is now becoming possible to obtain more and more high-resolution images, which provide us with the possibility to understand remote sensing images at the semantic level. Compared with traditional\n pixel- and object-oriented methods of change detection, scene-change detection can provide us with land use change information at the semantic level, and can thus provide reliable information for urban land use change detection, urban planning, and government management. Most of the current\n scene-change detection methods are based on the visual-words expression of the bag-of-visual-words model and the single-feature-based latent Dirichlet allocation model. In this article, a scene-change detection method for high-spatial-resolution imagery is proposed based on a multi-feature-fusion\n latent Dirich- let allocation model. This method combines the spectral, textural, and spatial features of the high-spatial-resolution images, and the final scene expression is realized through the topic features extracted from the more abstract latent Dirichlet allocation model. Post-classification\n comparison is then used to detect changes in the scene images at different times. A series of experiments demonstrates that, compared with the traditional bag-of-words and topic models, the proposed method can obtain superior scene-change detection results.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":"4 6","pages":""},"PeriodicalIF":1.0000,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Scene-Change Detection Based on Multi-Feature-Fusion Latent Dirichlet Allocation Model for High-Spatial-Resolution Remote Sensing Imagery\",\"authors\":\"Xiaoman Li, Yanfei Zhong, Yuxuan Su, Richen Ye\",\"doi\":\"10.14358/pers.20-00054\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the continuous development of high-spatial-resolution ground observation technology, it is now becoming possible to obtain more and more high-resolution images, which provide us with the possibility to understand remote sensing images at the semantic level. Compared with traditional\\n pixel- and object-oriented methods of change detection, scene-change detection can provide us with land use change information at the semantic level, and can thus provide reliable information for urban land use change detection, urban planning, and government management. Most of the current\\n scene-change detection methods are based on the visual-words expression of the bag-of-visual-words model and the single-feature-based latent Dirichlet allocation model. In this article, a scene-change detection method for high-spatial-resolution imagery is proposed based on a multi-feature-fusion\\n latent Dirich- let allocation model. This method combines the spectral, textural, and spatial features of the high-spatial-resolution images, and the final scene expression is realized through the topic features extracted from the more abstract latent Dirichlet allocation model. Post-classification\\n comparison is then used to detect changes in the scene images at different times. 
A series of experiments demonstrates that, compared with the traditional bag-of-words and topic models, the proposed method can obtain superior scene-change detection results.\",\"PeriodicalId\":49702,\"journal\":{\"name\":\"Photogrammetric Engineering and Remote Sensing\",\"volume\":\"4 6\",\"pages\":\"\"},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2021-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Photogrammetric Engineering and Remote Sensing\",\"FirstCategoryId\":\"89\",\"ListUrlMain\":\"https://doi.org/10.14358/pers.20-00054\",\"RegionNum\":4,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"GEOGRAPHY, PHYSICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Photogrammetric Engineering and Remote Sensing","FirstCategoryId":"89","ListUrlMain":"https://doi.org/10.14358/pers.20-00054","RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"GEOGRAPHY, PHYSICAL","Score":null,"Total":0}
Scene-Change Detection Based on Multi-Feature-Fusion Latent Dirichlet Allocation Model for High-Spatial-Resolution Remote Sensing Imagery
Abstract:
With the continuous development of high-spatial-resolution Earth observation technology, increasing numbers of high-resolution images are becoming available, making it possible to understand remote sensing images at the semantic level. Compared with traditional pixel- and object-oriented change detection methods, scene-change detection provides land-use change information at the semantic level, and can thus supply reliable information for urban land-use change detection, urban planning, and government management. Most current scene-change detection methods are based on the visual-word representation of the bag-of-visual-words model or on a single-feature latent Dirichlet allocation (LDA) model. In this article, a scene-change detection method for high-spatial-resolution imagery is proposed based on a multi-feature-fusion latent Dirichlet allocation model. The method combines the spectral, textural, and spatial features of the high-spatial-resolution images, and the final scene representation is obtained through the topic features extracted by the more abstract latent Dirichlet allocation model. Post-classification comparison is then used to detect changes between the scene images acquired at different times. A series of experiments demonstrates that, compared with traditional bag-of-visual-words and topic models, the proposed method obtains superior scene-change detection results.
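To make the pipeline the abstract describes concrete, the following is a minimal Python sketch using scikit-learn. It is an illustrative reconstruction under stated assumptions, not the authors' implementation: the k-means codebook, the SVM classifier, the topic count, and all function names are hypothetical choices standing in for details the abstract does not specify.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import SVC

def bag_of_visual_words(descriptors_per_scene, n_words=200, seed=0):
    # Quantize each scene's low-level descriptors (e.g., spectral, textural,
    # or spatial features) against a k-means codebook and accumulate a
    # visual-word histogram per scene.
    codebook = KMeans(n_clusters=n_words, random_state=seed)
    codebook.fit(np.vstack(descriptors_per_scene))
    hists = np.zeros((len(descriptors_per_scene), n_words))
    for i, desc in enumerate(descriptors_per_scene):
        hists[i] = np.bincount(codebook.predict(desc), minlength=n_words)
    return hists

def fused_topic_features(bows_t1, bows_t2, n_topics=30, seed=0):
    # Multi-feature fusion: each input is the horizontal concatenation of one
    # date's spectral, textural, and spatial word histograms. A single LDA
    # model is fit on both dates so their topic spaces align; the per-scene
    # topic proportions serve as the abstract scene representation.
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    lda.fit(np.vstack([bows_t1, bows_t2]))
    return lda.transform(bows_t1), lda.transform(bows_t2)

def post_classification_comparison(topics_t1, topics_t2,
                                   train_topics, train_labels):
    # Classify the scenes of each date independently from their topic
    # features, then flag a scene as changed when its class label differs
    # between the two dates.
    clf = SVC(kernel="rbf").fit(train_topics, train_labels)
    labels_t1 = clf.predict(topics_t1)
    labels_t2 = clf.predict(topics_t2)
    return labels_t1, labels_t2, labels_t1 != labels_t2

# Usage sketch (hypothetical arrays; each *_bow is shaped (n_scenes, n_words)):
#   bows_t1 = np.hstack([spec_t1_bow, tex_t1_bow, spa_t1_bow])
#   bows_t2 = np.hstack([spec_t2_bow, tex_t2_bow, spa_t2_bow])
#   topics_t1, topics_t2 = fused_topic_features(bows_t1, bows_t2)
#   _, _, changed = post_classification_comparison(
#       topics_t1, topics_t2, topics_t1[train_idx], labels[train_idx])

Fitting one LDA model over both dates, rather than one per date, is the key design point in this sketch: it keeps the topic dimensions comparable across time, which is what allows a simple post-classification comparison of the resulting scene labels.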
Journal introduction:
Photogrammetric Engineering & Remote Sensing, commonly referred to as PE&RS, is the official journal of imaging and geospatial information science and technology. The journal regularly includes highlight articles, such as the popular columns "Grids & Datums" and "Mapping Matters", alongside peer-reviewed technical papers.
We publish thousands of documents, reports, codes, and informational articles in and about the industries relating to the geospatial sciences, remote sensing, photogrammetry, and other imaging sciences.