Journal of Applied Remote Sensing: Latest Articles

Evaluating gradient descent variations for artificial neural network bathymetry modeling and sensitivity analysis
IF 1.7 | JCR Q4 | Earth Science
Journal of Applied Remote Sensing | Pub Date: 2024-02-01 | DOI: 10.1117/1.jrs.18.022204
Chih-Hung Lee, Min-Kung Hsu, Yu-Min Wang, Jan-Mou Leu, Chung-Ling Chen, Liwei Liu
Abstract: Artificial intelligence has been widely applied to water depth retrieval across various environments, a task essential for habitat modeling, hydraulic structure design, and watershed management. However, most of these models have been developed for deep water, and the critical impact of the gradient descent algorithm is rarely evaluated. To address this gap, this study adopted an artificial neural network with seven gradient descent methods, including step, momentum, quick propagation, delta-bar-delta, conjugate gradient, Levenberg–Marquardt, and resilient backpropagation (RProp), for shallow-water depth modeling. Shallow water depths in Taiwan's mountainous rivers were then modeled using drone-acquired multispectral imagery and vegetation indices. The results revealed that methods that optimize the weight update itself were outperformed by those driven by gradient information, such as RProp. The choice of gradient descent algorithm proved pivotal; an inappropriate choice can even yield performance inferior to a traditional linear regression model. In the sensitivity analysis, the near-infrared band and the normalized difference water index were classified as highly sensitive. By leveraging multispectral data and vegetation indices with an ANN, the optimal gradient descent algorithm and the critical model inputs for shallow-water modeling were successfully identified, offering valuable insights for future studies.
Citations: 0
ComS-YOLO: a combinational and sparse network for detecting vehicles in aerial thermal infrared images
IF 1.7 | JCR Q4 | Earth Science
Journal of Applied Remote Sensing | Pub Date: 2024-02-01 | DOI: 10.1117/1.jrs.18.014508
Xunxun Zhang, Xiaoyu Lu
Abstract: Vehicle detection in aerial thermal infrared images has received significant attention because of its strong day-and-night observation capability, supplying information for vehicle tracking, traffic monitoring, and road network planning. Unlike aerial visible images, aerial thermal infrared images are insensitive to lighting conditions, but they have low contrast and blurred edges. We therefore propose a combinational and sparse you-only-look-once (ComS-YOLO) neural network to detect vehicles in aerial thermal infrared images accurately and quickly. We adjust the structure of the deep neural network to balance detection accuracy and running time, and we propose an objective function based on the diagonal distance of the corresponding minimum enclosing rectangle, which prevents non-convergence when the predicted and true boxes are in an inclusion relationship or when their widths and heights are already aligned. Furthermore, to avoid over-fitting during training, we eliminate redundant parameters via constraints and online pruning. Experimental results on the NWPU VHR-10 and DARPA VIVID datasets show that ComS-YOLO identifies vehicles effectively and efficiently, with low miss and false-detection rates. Compared with Faster R-CNN and a series of YOLO networks, it delivers competitive detection accuracy and running time. Vehicle detection experiments under different environments further demonstrate excellent accuracy and robustness.
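An objective function built on the diagonal of the smallest enclosing rectangle closely resembles the published distance-IoU (DIoU) loss; the sketch below shows that general idea, not the paper's exact formulation. The center-distance term keeps a gradient alive precisely in the two failure cases the abstract names: one box enclosing the other, and widths/heights already matching.

```python
import numpy as np

def diou_loss(pred, true):
    """DIoU-style loss for boxes given as (x1, y1, x2, y2).

    Adds the squared center distance, normalized by the squared diagonal
    of the smallest rectangle enclosing both boxes, to the 1 - IoU term.
    """
    # Intersection and union areas
    ix1, iy1 = np.maximum(pred[0], true[0]), np.maximum(pred[1], true[1])
    ix2, iy2 = np.minimum(pred[2], true[2]), np.minimum(pred[3], true[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_t = (true[2] - true[0]) * (true[3] - true[1])
    iou = inter / (area_p + area_t - inter)
    # Squared distance between box centers
    cp = np.array([(pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2])
    ct = np.array([(true[0] + true[2]) / 2, (true[1] + true[3]) / 2])
    center_dist2 = np.sum((cp - ct) ** 2)
    # Squared diagonal of the minimum enclosing rectangle
    ex1, ey1 = min(pred[0], true[0]), min(pred[1], true[1])
    ex2, ey2 = max(pred[2], true[2]), max(pred[3], true[3])
    diag2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return 1 - iou + center_dist2 / diag2

# An enclosed box: IoU alone is constant, but the distance term still
# penalizes an off-center prediction.
centered = diou_loss(np.array([2., 2., 4., 4.]), np.array([0., 0., 6., 6.]))
shifted = diou_loss(np.array([0., 0., 2., 2.]), np.array([0., 0., 6., 6.]))
```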
Citations: 0
Deep neural network based on attention and feature complementary fusion for synthetic aperture radar image classification with small samples
IF 1.7 | JCR Q4 | Earth Science
Journal of Applied Remote Sensing | Pub Date: 2024-02-01 | DOI: 10.1117/1.jrs.18.014519
Xiaoning Liu, Furong Shi, Haixia Xu, Liming Yuan, Xianbin Wen
Abstract: In recent years, methods based on convolutional neural networks (CNNs) have achieved significant results in target classification of synthetic aperture radar (SAR) images. However, the difficulty of labeling SAR image data, together with CNNs' reliance on large amounts of labeled training data, has seriously limited further progress in this field. In this work, we propose an approach based on an attention mechanism and complementary feature fusion (AFCF-CNN) to address these challenges. First, we design a feature complementary module that extracts and fuses multi-layer features, making full use of limited data and exploiting contextual information between layers to capture more robust feature representations. Then, the attention mechanism reduces interference from redundant background information while highlighting the weights of key targets in the image, further enhancing key local feature representations. Finally, experiments on the moving and stationary target acquisition and recognition (MSTAR) dataset show that our model significantly outperforms other state-of-the-art methods despite severe shortages of training data.
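The abstract does not specify the attention mechanism's form; a common choice for "reweight informative channels, suppress background" is squeeze-and-excitation-style channel attention, sketched here purely as an illustration (the bottleneck weights are random stand-ins, not trained parameters):

```python
import numpy as np

def channel_attention(features, w1, w2):
    """SE-style channel attention over a (channels, height, width) map.

    Global-average-pools each channel into a descriptor, passes it
    through a two-layer bottleneck, and rescales channels with sigmoid
    gates so informative channels are kept and background ones damped.
    """
    squeeze = features.mean(axis=(1, 2))       # (c,) per-channel descriptor
    hidden = np.maximum(squeeze @ w1, 0)       # ReLU bottleneck
    gates = 1 / (1 + np.exp(-(hidden @ w2)))   # sigmoid gates in (0, 1)
    return features * gates[:, None, None]

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4, 4))
w1 = rng.normal(size=(8, 2)) * 0.5   # hypothetical bottleneck weights
w2 = rng.normal(size=(2, 8)) * 0.5
out = channel_attention(feats, w1, w2)
```

Because the gates lie strictly in (0, 1), the module can only attenuate channels, never amplify them; in a trained network the preceding layers compensate, so the net effect is a relative reweighting.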
Citations: 0
Combining multisource remote sensing data to calculate individual tree biomass in complex stands
IF 1.7 | JCR Q4 | Earth Science
Journal of Applied Remote Sensing | Pub Date: 2024-02-01 | DOI: 10.1117/1.jrs.18.014515
Xugang Lian, Hailang Zhang, Leixue Wang, Yulu Gao, Lifan Shi, Yu Li, Jiang Chang
Abstract: Accurate estimation of individual tree characteristics and biomass is very important for monitoring global carbon storage and the carbon cycle. To calculate the individual biomass of the various tree species in complex stands, we took terrestrial laser scanning, unmanned aerial vehicle laser scanning, and multispectral data as sources and extracted spectral, vegetation index, texture, and tree height features for diverse forest areas through multispectral classification of tree species. The extracted features were stacked and optimized with the random forest (RF) algorithm, and tree species were classified from the multispectral data combined with field investigation. The multispectral classification was then combined with light detection and ranging (lidar) point cloud data to classify the point cloud by species; individual tree parameters were extracted for each species, and stand biomass was obtained from a tree biomass calculation model. The results showed that tree species can be identified by combining multispectral and lidar data with the RF algorithm: overall classification accuracy was 66% with a kappa coefficient of 0.59. The recall of poplar, cypress, and lacebark pine was about 75%; willow and clove trees, occluded by large crown widths, suffered multiple and missed detections. For diameter at breast height, R² was 0.85 with a root-mean-square error (RMSE) of 5.90 cm; for tree height, R² was 0.90 with an RMSE of 1.78 m. Finally, the biomass of each tree species was calculated, giving a stand biomass of 66.76 t/hm² and realizing classification of the whole stand and measurement of each tree's biomass. Our study shows that combining multisource remote sensing data for forest biomass estimation is feasible.
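The last step, feeding per-tree diameter at breast height (DBH) and height into a biomass model and summing to stand level, typically uses a species-specific allometric equation of the form B = a·DBH^b·H^c. A sketch with placeholder coefficients and invented trees (the paper's actual model and values are not given here):

```python
import numpy as np

def allometric_biomass(dbh_cm, height_m, a=0.05, b=2.0, c=1.0):
    """Generic allometric model B = a * DBH^b * H^c, in kg per tree.

    The coefficients a, b, c are species-specific and must be fitted or
    taken from the literature; the defaults here are placeholders.
    """
    return a * dbh_cm ** b * height_m ** c

# Hypothetical trees extracted from a 0.1 ha plot's point cloud
dbh = np.array([18.0, 25.0, 31.0])     # cm, from the segmented stems
height = np.array([12.0, 16.0, 20.0])  # m, from the canopy height model
plot_area_ha = 0.1

# Stand biomass: sum the trees, convert kg -> t, normalize by area
stand_t_per_ha = allometric_biomass(dbh, height).sum() / 1000 / plot_area_ha
```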
Citations: 0
Optimal feature extraction from multidimensional remote sensing data for orchard identification based on deep learning methods
IF 1.7 | JCR Q4 | Earth Science
Journal of Applied Remote Sensing | Pub Date: 2024-02-01 | DOI: 10.1117/1.jrs.18.014514
Junjie Luo, Jiao Guo, Zhe Zhu, Yunlong Du, Yongkai Ye
Abstract: Accurate information on the spatial distribution of orchards can help government departments formulate scientific and reasonable agricultural economic policies, but extracting orchard planting structure from remote sensing images remains challenging. In traditional processing of multidimensional remote sensing data, dimension reduction and classification are two separate steps, so there is no guarantee that the final classification benefits from the dimension reduction. To connect the two, this work proposes two neural networks that fuse a stacked autoencoder (SAE) and a convolutional neural network (CNN) in one and three dimensions: the one-dimensional and three-dimensional fusion stacked autoencoder and CNN networks (1D-FSA-CNN and 3D-FSA-CNN). In both networks, the front end uses an SAE for dimension reduction, and the back end uses a CNN with a Softmax classifier for classification. In the experiments, two groups of orchard datasets were constructed on the Google Earth Engine platform using multi-source remote sensing data (GaoFen-1 with Sentinel-2, and GaoFen-1 with GaoFen-3), and DenseNet201, 3D-CNN, 1D-CNN, and SAE were used to conduct two comparative experiments. The results show that the proposed fusion networks achieve state-of-the-art performance, with the accuracies of both 3D-FSA-CNN and 1D-FSA-CNN above 95%.
Citations: 0
Use of synthetic aperture radar data for the determination of normalized difference vegetation index and normalized difference water index
IF 1.7 | JCR Q4 | Earth Science
Journal of Applied Remote Sensing | Pub Date: 2024-02-01 | DOI: 10.1117/1.jrs.18.014516
Amazonino Lemos de Castro, Miqueias Lima Duarte, Henrique Ewbank, Roberto Wagner Lourenço
Abstract: This study analyzed Sentinel-1 synthetic aperture radar (SAR) data to estimate the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) over 2019 to 2020 in a region with a range of land uses. Four regression models were built: linear regression (LR), support vector machine (SVM), random forest (RF), and artificial neural network (ANN). These models estimated the vegetation indices from Sentinel-1 backscatter as the independent variables, with NDVI and NDWI derived from Sentinel-2 data as the dependent variables. Cross-validation with an analysis of performance metrics identified the most effective model. Based on the post-hoc test, the SVM model performed best in estimating NDVI and NDWI, with mean R² values of 0.74 and 0.70, respectively. Notably, the backscatter coefficients of the vertical-vertical (VV) and vertical-horizontal (VH) polarizations contributed most to the models, reinforcing the importance of these parameters for estimation accuracy. Ultimately, this approach is promising for building NDVI and NDWI time series in regions frequently affected by cloud cover, a valuable complement to optical sensor data, particularly for monitoring agricultural crops.
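The regression targets are the standard optical indices, and the simplest of the four models (the LR baseline; SVM performed best in the study) is an ordinary least-squares fit on the VV/VH backscatter. A sketch with synthetic numbers, for illustration only:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), from Sentinel-2 reflectances."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """NDWI = (Green - NIR) / (Green + NIR), McFeeters formulation."""
    return (green - nir) / (green + nir)

# Hypothetical per-pixel Sentinel-1 backscatter (dB) and Sentinel-2 NDVI
vv = np.array([-12.0, -10.5, -9.0, -8.0])
vh = np.array([-18.0, -17.0, -15.5, -14.0])
target_ndvi = np.array([0.30, 0.42, 0.55, 0.65])

# LR baseline: NDVI ~ b0 + b1*VV + b2*VH via least squares
X = np.column_stack([np.ones_like(vv), vv, vh])
coef, *_ = np.linalg.lstsq(X, target_ndvi, rcond=None)
pred = X @ coef
```

Once fitted against cloud-free Sentinel-2 dates, the same coefficients can be applied to Sentinel-1 scenes acquired under cloud, which is the gap-filling use the abstract proposes.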
Citations: 0
SMFD: an end-to-end infrared and visible image fusion model based on shared-individual multi-scale feature decomposition
IF 1.7 | JCR Q4 | Earth Science
Journal of Applied Remote Sensing | Pub Date: 2024-02-01 | DOI: 10.1117/1.jrs.18.022203
Mingrui Xu, Jun Kong, Min Jiang, Tianshan Liu
Abstract: By leveraging the characteristics of different optical sensors, infrared and visible image fusion generates a fused image that combines prominent thermal radiation targets with clear texture details. Existing methods often focus on a single modality or treat the two modalities equally, overlooking the distinctive characteristics of each and failing to fully exploit their complementary information. To address this, we propose an end-to-end infrared and visible image fusion model based on shared-individual multi-scale feature decomposition. First, to extract multi-scale features from the source images, a symmetric multi-scale decomposition encoder built from nest connections and a multi-scale receptive field network captures small, medium, and large-scale features. Second, to fully exploit complementary information, common edge feature maps are introduced into the feature decomposition loss function to decompose the extracted features into shared and individual features. Third, to aggregate the shared and individual features, a shared-individual self-augmented decoder takes the individual fusion feature maps as its main input and the shared fusion feature maps as a residual input that assists decoding and reconstruction of the fused image. Finally, comparisons of subjective evaluations and objective metrics demonstrate the superiority of our method over state-of-the-art approaches.
Citations: 0
Unsupervised burned areas detection using multitemporal synthetic aperture radar data
IF 1.7 | JCR Q4 | Earth Science
Journal of Applied Remote Sensing | Pub Date: 2024-02-01 | DOI: 10.1117/1.jrs.18.014513
José Victor Orlandi Simões, Rogerio Galante Negri, Felipe Nascimento Souza, Tatiana Sussel Gonçalves Mendes, Adriano Bressane
Abstract: Climate change is a critical concern, greatly driven by human activities and the resulting rise in greenhouse gas emissions. Its effects reach both living and non-living components of ecosystems, with alarming outcomes such as a surge in the frequency and severity of fires. This paper presents a data-driven framework that unifies time series of remote sensing images, statistical modeling, and unsupervised classification for mapping fire-damaged areas. To validate the proposed methodology, multiple Sentinel-1 images acquired between August and October 2021 were analyzed in two case studies covering Brazilian biomes affected by burns. The proposed approach outperforms the alternative method evaluated in terms of precision metrics and visual adherence, achieving the higher overall accuracy of 58.15% and the higher F1 score of 0.72. These findings suggest that our approach is more effective at detecting burned areas and may have practical applications in other environmental problems such as landslides, flooding, and deforestation.
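A common unsupervised scheme for multitemporal SAR burn mapping, shown here as a generic stand-in for the paper's statistical-modeling step rather than its exact pipeline, thresholds the pre-/post-fire log-ratio: backscatter drops sharply over freshly burned vegetation, so large differences flag candidates.

```python
import numpy as np

def log_ratio_change(before_db, after_db, k=1.5):
    """Unsupervised change map from two co-registered SAR scenes in dB.

    In dB the intensity log-ratio is just a difference; a mean + k*std
    threshold on its magnitude labels changed pixels without training data.
    """
    diff = np.abs(before_db - after_db)
    thresh = diff.mean() + k * diff.std()
    return diff > thresh

# Simulated scene: stable clutter plus a burn scar losing ~6 dB
rng = np.random.default_rng(1)
before = rng.normal(-8.0, 0.5, size=(64, 64))
after = before + rng.normal(0.0, 0.3, size=(64, 64))
after[20:30, 20:30] -= 6.0  # burn scar: strong backscatter drop
change = log_ratio_change(before, after)
```

Speckle makes single-pair differencing noisy in practice, which is why the paper leans on a time series of acquisitions rather than one pre/post pair.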
Citations: 0
Ningaloo eclipse: moon shadow speed and land surface temperature effects from Himawari-9 satellite measurements
IF 1.7 | JCR Q4 | Earth Science
Journal of Applied Remote Sensing | Pub Date: 2024-02-01 | DOI: 10.1117/1.jrs.18.014511
Fred Prata
Abstract: A total solar eclipse occurred on April 20, 2023, with the umbral shadow touching the Australian continent over the Ningaloo coastal region near the town of Exmouth, Western Australia. Totality lasted ∼1 min, was reached at ∼03:29 UTC, and happened under cloudless skies. Here, we show that the speed of the Moon's shadow over the land surface can be estimated from 10 min sampling in both the infrared and visible bands of the Himawari-9 geostationary satellite sensor. The cooling of the land surface due to the passage of the Moon's shadow is investigated; temperature drops of 7 K to 15 K are found, with cooling rates of 2±1.5 mK s⁻¹. By tracking the time of maximum cooling, the speed of the Moon's shadow was estimated from the thermal data as 2788±21 km h⁻¹, and from the time of minimum reflectance in the visible data as 2598±181 km h⁻¹, with a notable time dependence. The methodology and analyses are new, and the results compare favorably with NASA's eclipse data computed using Besselian elements.
Citations: 0
Segmentation-based VHR SAR images built-up area change detection: a coarse-to-fine approach
IF 1.7 | JCR Q4 | Earth Science
Journal of Applied Remote Sensing | Pub Date: 2024-01-01 | DOI: 10.1117/1.jrs.18.016503
Jingxing Zhu, Feng Wang, Hongjian You
Abstract: Change detection in built-up areas within very high resolution synthetic aperture radar images is very challenging because of speckle noise and the geometric distortions caused by the unique imaging mechanism. To tackle this, we propose an object-based coarse-to-fine change detection method that integrates segmentation and uncertainty analysis. First, a multi-temporal joint multi-scale segmentation method generates multi-scale segmentation masks with hierarchically nested relationships. Second, a neighborhood ratio detector and the Jensen–Shannon distance produce pixel-level and object-level change maps, respectively; these maps are fused using Dempster–Shafer evidence theory into an initial change map, and a threshold classifies its parcels into three categories: changed, unchanged, and uncertain. Third, uncertainty analysis and progressive classification by support vector machine handle the uncertain parcels, moving from coarse to fine segmentation levels. Finally, the change maps across all scales are integrated into the final change map. The proposed method is evaluated on three datasets from the GF-3 and ICEYE-X6 satellites; the results show that it outperforms alternative methods in extracting more comprehensive changed regions.
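The object-level comparison uses the Jensen–Shannon distance between the two dates' parcel statistics. A self-contained sketch of the measure as applied to, e.g., per-parcel intensity histograms (the histogram construction itself is assumed, not taken from the paper):

```python
import numpy as np

def jensen_shannon_distance(p, q):
    """Jensen-Shannon distance between two histograms.

    Symmetric and bounded: with base-2 logs the square root of the JS
    divergence lies in [0, 1], so a fixed threshold on it is meaningful.
    Large distances flag parcels whose statistics changed between dates.
    """
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0                 # 0 * log(0) terms contribute nothing
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    jsd = 0.5 * kl(p, m) + 0.5 * kl(q, m)
    return np.sqrt(jsd)

# Identical parcels score 0; disjoint histograms score the maximum, 1.
same = jensen_shannon_distance(np.array([0.5, 0.5]), np.array([0.5, 0.5]))
disjoint = jensen_shannon_distance(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```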
Citations: 0