Latest Articles — ISPRS Journal of Photogrammetry and Remote Sensing

MuSRFM: Multiple scale resolution fusion based precise and robust satellite derived bathymetry model for island nearshore shallow water regions using Sentinel-2 multi-spectral imagery
IF 10.6 · CAS Q1 · Earth Sciences
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-09-14. DOI: 10.1016/j.isprsjprs.2024.09.007
Xiaoming Qin, Ziyin Wu, Xiaowen Luo, Jihong Shang, Dineng Zhao, Jieqiong Zhou, Jiaxin Cui, Hongyang Wan, Guochang Xu
{"title":"MuSRFM: Multiple scale resolution fusion based precise and robust satellite derived bathymetry model for island nearshore shallow water regions using sentinel-2 multi-spectral imagery","authors":"Xiaoming Qin ,&nbsp;Ziyin Wu ,&nbsp;Xiaowen Luo ,&nbsp;Jihong Shang ,&nbsp;Dineng Zhao ,&nbsp;Jieqiong Zhou ,&nbsp;Jiaxin Cui ,&nbsp;Hongyang Wan ,&nbsp;Guochang Xu","doi":"10.1016/j.isprsjprs.2024.09.007","DOIUrl":"10.1016/j.isprsjprs.2024.09.007","url":null,"abstract":"<div><p>The multi-spectral imagery based Satellite Derived Bathymetry (SDB) provides an efficient and cost-effective approach for acquiring bathymetry data of nearshore shallow water regions. Compared with conventional pixelwise inversion models, Deep Learning (DL) models have the theoretical capability to encompass a broader receptive field, automatically extracting comprehensive spatial features. However, enhancing spatial features by increasing the input size escalates computational complexity and model scale, challenging the hardware. To address this issue, we propose the Multiple Scale Resolution Fusion Model (MuSRFM), a novel DL-based SDB model, to integrate information of varying scales by utilizing temporally fused Sentinel-2 L2A multi-spectral imagery. The MuSRFM uses a Multi-scale Center-aligned Hierarchical Resampler (MCHR) to composite large-scale multi-spectral imagery into hierarchical scale resolution representations since the receptive field gradually narrows its focus as the spatial resolution decreases. Through this strategy, the MuSRFM gains access to rich spatial information while maintaining efficiency by progressively aggregating features of different scales through the Cropped Aligned Fusion Module (CAFM). We select St. Croix (Virgin Islands) as the training/testing dataset source, and the Root Mean Square Error (RMSE) obtained by the MuSRFM on the testing dataset is 0.8131 m (with a bathymetric range of 0–25 m), surpassing the machine learning based models and traditional semi-empirical models used as the baselines by over 35 % and 60 %, respectively. Additionally, multiple island areas worldwide, including Vieques, Oahu, Kauai, Saipan and Tinian, which exhibit distinct characteristics, are utilized to construct a real-world dataset for assessing the generalizability and transferability of the proposed MuSRFM. While the MuSRFM experiences a degradation in accuracy when applied to the diverse real-world dataset, it outperforms other baseline models considerably. Across various study areas in the real-world dataset, its RMSE lead over the second-ranked model ranges from 6.8 % to 38.1 %, indicating its accuracy and generalizability; in the Kauai area, where the performance is not ideal, a significant improvement in accuracy is achieved through fine-tuning on limited in-situ data. 
The code of MuSRFM is available at <span><span>https://github.com/qxm1995716/musrfm</span><svg><path></path></svg></span>.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"218 ","pages":"Pages 150-169"},"PeriodicalIF":10.6,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0924271624003459/pdfft?md5=4925ae29c5fd595f63a6ca31611a8d4c&pid=1-s2.0-S0924271624003459-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142232384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
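The MCHR described in the abstract above composites large-scale imagery into hierarchical scale-resolution representations centred on the same location. As a rough illustration of that idea only (not the authors' implementation), the sketch below crops center-aligned windows that double in extent at each level and block-averages them back to a common grid; the function name, window sizes, and band count are assumptions.

```python
# Illustrative sketch of a multi-scale, center-aligned resampler: each level
# covers a wider spatial context at a coarser resolution, but all levels share
# the same output grid so they can later be fused.
import numpy as np

def center_aligned_pyramid(img, center, base=64, levels=4):
    """img: (H, W, C) multi-spectral array; center: (row, col) pixel location.
    Returns one (<=base, <=base, C) array per scale level."""
    r, c = center
    outputs = []
    for lvl in range(levels):
        half = (base * 2 ** lvl) // 2                   # window half-size doubles per level
        r0, r1 = max(r - half, 0), min(r + half, img.shape[0])
        c0, c1 = max(c - half, 0), min(c + half, img.shape[1])
        win = img[r0:r1, c0:c1]
        k = 2 ** lvl                                    # block-average factor back to base grid
        h, w = (win.shape[0] // k) * k, (win.shape[1] // k) * k
        win = win[:h, :w].reshape(h // k, k, w // k, k, -1).mean(axis=(1, 3))
        outputs.append(win.astype(np.float32))
    return outputs

scene = np.random.rand(1024, 1024, 6)                   # stand-in for Sentinel-2 bands
pyramid = center_aligned_pyramid(scene, center=(512, 512))
print([p.shape for p in pyramid])                       # every level is 64 x 64 x 6 here
```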
Snow depth retrieval method for PolSAR data using multi-parameters snow backscattering model
IF 10.6 · CAS Q1 · Earth Sciences
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-09-13. DOI: 10.1016/j.isprsjprs.2024.09.005
Haiwei Qiao, Ping Zhang, Zhen Li, Lei Huang, Zhipeng Wu, Shuo Gao, Chang Liu, Shuang Liang, Jianmin Zhou, Wei Sun
{"title":"Snow depth retrieval method for PolSAR data using multi-parameters snow backscattering model","authors":"Haiwei Qiao ,&nbsp;Ping Zhang ,&nbsp;Zhen Li ,&nbsp;Lei Huang ,&nbsp;Zhipeng Wu ,&nbsp;Shuo Gao ,&nbsp;Chang Liu ,&nbsp;Shuang Liang ,&nbsp;Jianmin Zhou ,&nbsp;Wei Sun","doi":"10.1016/j.isprsjprs.2024.09.005","DOIUrl":"10.1016/j.isprsjprs.2024.09.005","url":null,"abstract":"<div><p>Snow depth (SD) is a crucial property of snow, its spatial and temporal variation is important for global change, snowmelt runoff simulation, disaster prediction, and freshwater storage estimation. Polarimetric Synthetic Aperture Radar (PolSAR) can precisely describe the backscattering of the target and emerge as an effective tool for SD retrieval. The backscattering component of dry snow is mainly composed of volume scattering from the snowpack and surface scattering from the snow-ground interface. However, the existing method for retrieving SD using PolSAR data has the problems of over-reliance on in-situ data and ignoring surface scattering from the snow-ground interface. We proposed a novel SD retrieval method for PolSAR data by fully considering the primary backscattering components of snow and through multi-parameter estimation to solve the snow backscattering model. Firstly, a snow backscattering model was formed by combining the small permittivity volume scattering model and the Michigan semi-empirical surface scattering model to simulate the different scattering components of snow, and the corresponding backscattering coefficients were extracted using the Yamaguchi decomposition. Then, the snow permittivity was calculated through generalized volume parameters and the extinction coefficient was further estimated through modeling. Finally, the snow backscattering model was solved by these parameters to retrieve SD. The proposed method was validated by Ku-band UAV SAR data acquired in Altay, Xinjiang, and the accuracy was evaluated by in-situ data. The correlation coefficient, root mean square error, and mean absolute error are 0.80, 4.49 cm, and 3.95 cm, respectively. Meanwhile, the uncertainties generated by different SD, model parameters estimation, solution method, and underlying surface are analyzed to enhance the generality of the proposed method.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"218 ","pages":"Pages 136-149"},"PeriodicalIF":10.6,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142229357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
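The retrieval above solves a combined volume/surface backscattering model for snow depth. As a heavily simplified stand-in (not the paper's model, which uses the Yamaguchi decomposition and the Michigan surface model), the sketch below inverts a saturating first-order volume-scattering relation for depth; the closed form, parameter names, and numbers are illustrative assumptions.

```python
# Toy inversion: assume the volume-scattering term saturates with snow depth d,
#   sigma_vol(d) = s_inf * (1 - exp(-2 * k_e * d / cos(theta))),
# and solve it for d given an extinction coefficient and incidence angle.
import numpy as np

def invert_snow_depth(sigma_vol, k_e, theta_deg, s_inf):
    """sigma_vol: observed volume backscatter (linear units); k_e: extinction
    coefficient (1/m); theta_deg: incidence angle; s_inf: saturation level."""
    cos_t = np.cos(np.deg2rad(theta_deg))
    ratio = np.clip(sigma_vol / s_inf, 1e-6, 1 - 1e-6)   # keep the logarithm well-defined
    return -cos_t / (2.0 * k_e) * np.log(1.0 - ratio)     # depth in metres

depth = invert_snow_depth(sigma_vol=0.04, k_e=2.5, theta_deg=40.0, s_inf=0.06)
print(f"estimated snow depth: {depth:.2f} m")             # ~0.17 m for these toy inputs
```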
Sequential polarimetric phase optimization algorithm for dynamic deformation monitoring of landslides
IF 10.6 · CAS Q1 · Earth Sciences
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-09-12. DOI: 10.1016/j.isprsjprs.2024.08.013
Yian Wang, Jiayin Luo, Jie Dong, Jordi J. Mallorqui, Mingsheng Liao, Lu Zhang, Jianya Gong
{"title":"Sequential polarimetric phase optimization algorithm for dynamic deformation monitoring of landslides","authors":"Yian Wang ,&nbsp;Jiayin Luo ,&nbsp;Jie Dong ,&nbsp;Jordi J. Mallorqui ,&nbsp;Mingsheng Liao ,&nbsp;Lu Zhang ,&nbsp;Jianya Gong","doi":"10.1016/j.isprsjprs.2024.08.013","DOIUrl":"10.1016/j.isprsjprs.2024.08.013","url":null,"abstract":"<div><p>In the era of big SAR data, it is urgent to develop dynamic time series DInSAR processing procedures for near-real-time monitoring of landslides. However, the dense vegetation coverage in mountainous areas causes severe decorrelations, which demands high precision and efficiency of phase optimization processing. The common phase optimization using single-polarization SAR data cannot produce satisfactory results due to the limited statistical samples in some natural scenarios. The novel polarimetric phase optimization algorithms, however, have low computational efficiency, limiting their applications in large-scale scenarios and long data sequences. In addition, temporal changes in the scattering properties of ground features and the continuous increase of SAR data require dynamic phase optimization processing. To achieve efficient phase optimization for dynamic DInSAR time series analysis, we combine the Sequential Estimator (SE) with the Total Power (TP) polarization stacking method and solve it using eigen decomposition-based Maximum Likelihood Estimator (EMI), named SETP-EMI. The simulation and real data experiments demonstrate the significant improvements of the SETP-EMI method in precision and efficiency compared to the EMI and TP-EMI methods. The SETP-EMI exhibits an increase of more than 50% and 20% in highly coherent points for the real data compared to the EMI and TP-EMI, respectively. It, meanwhile, achieves approximately six and two times more efficient than the EMI and TP-EMI methods in the real data case. These results highlight the effectiveness of the SETP-EMI method in promptly capturing and analyzing evolving landslide deformations, providing valuable insights for real-time monitoring and decision-making.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"218 ","pages":"Pages 84-100"},"PeriodicalIF":10.6,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142167172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
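The EMI estimator named in the abstract is a published phase-linking approach: the consistent phase series is taken from the eigenvector associated with the smallest eigenvalue of the Hadamard product |Γ|⁻¹ ∘ Ĉ of the inverse coherence-magnitude matrix and the complex sample coherence matrix. The sketch below shows only that core single-block step; the sequential (SE) and total-power (TP) polarimetric extensions that define SETP-EMI are not reproduced, and the toy coherence matrix is synthetic.

```python
# Minimal EMI-style phase linking on one coherence matrix (no sequential or
# polarimetric processing): smallest-eigenvalue eigenvector of |Gamma|^{-1} ∘ C.
import numpy as np

def emi_phase_linking(C):
    """C: (N, N) complex sample coherence matrix of N acquisitions.
    Returns N wrapped phases (radians) referenced to the first acquisition."""
    weight = np.linalg.inv(np.abs(C))       # |Gamma|^{-1}; assumes it is invertible
    M = weight * C                          # element-wise (Hadamard) product, Hermitian
    _, eigvecs = np.linalg.eigh(M)          # eigenvalues returned in ascending order
    v = eigvecs[:, 0]                       # eigenvector of the smallest eigenvalue
    return np.angle(v * np.conj(v[0]))      # remove the arbitrary global phase

# toy check: build a noise-free coherence matrix from known phases and recover them
theta = np.array([0.0, 0.4, 0.9, 1.5])
i, j = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
C = 0.7 ** np.abs(i - j) * np.exp(1j * (theta[i] - theta[j]))
print(emi_phase_linking(C))                 # ~[0.0, 0.4, 0.9, 1.5]
```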
A general albedo recovery approach for aerial photogrammetric images through inverse rendering
IF 10.6 · CAS Q1 · Earth Sciences
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-09-12. DOI: 10.1016/j.isprsjprs.2024.09.001
Shuang Song, Rongjun Qin
{"title":"A general albedo recovery approach for aerial photogrammetric images through inverse rendering","authors":"Shuang Song ,&nbsp;Rongjun Qin","doi":"10.1016/j.isprsjprs.2024.09.001","DOIUrl":"10.1016/j.isprsjprs.2024.09.001","url":null,"abstract":"<div><p>Modeling outdoor scenes for the synthetic 3D environment requires the recovery of reflectance/albedo information from raw images, which is an ill-posed problem due to the complicated unmodeled physics in this process (e.g., indirect lighting, volume scattering, specular reflection). The problem remains unsolved in a practical context. The recovered albedo can facilitate model relighting and shading, which can further enhance the realism of rendered models and the applications of digital twins. Typically, photogrammetric 3D models simply take the source images as texture materials, which inherently embed unwanted lighting artifacts (at the time of capture) into the texture. Therefore, these “polluted” textures are suboptimal for a synthetic environment to enable realistic rendering. In addition, these embedded environmental lightings further bring challenges to photo-consistencies across different images that cause image-matching uncertainties. This paper presents a general image formation model for albedo recovery from typical aerial photogrammetric images under natural illuminations and derives the inverse model to resolve the albedo information through inverse rendering intrinsic image decomposition. Our approach builds on the fact that both the sun illumination and scene geometry are estimable in aerial photogrammetry, thus they can provide direct inputs for this ill-posed problem. This physics-based approach does not require additional input other than data acquired through the typical drone-based photogrammetric collection and was shown to favorably outperform existing approaches. We also demonstrate that the recovered albedo image can in turn improve typical image processing tasks in photogrammetry such as feature and dense matching, edge, and line extraction. [This work extends our prior work “A Novel Intrinsic Image Decomposition Method to Recover Albedo for Aerial Images in Photogrammetry Processing” in ISPRS Congress 2022]. The code will be made available at <span><span>github.com/GDAOSU/albedo_aerial_photogrammetry</span><svg><path></path></svg></span></p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"218 ","pages":"Pages 101-119"},"PeriodicalIF":10.6,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142171549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
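The albedo recovery above exploits the fact that sun illumination and scene geometry are estimable from aerial photogrammetry. As a toy illustration only (not the paper's image formation model), the sketch below divides a Lambertian direct-plus-ambient shading term out of the image; the irradiance constants, the shadow mask, and all names are assumptions.

```python
# Toy Lambertian inverse rendering: shading = direct sun term + ambient sky term,
# and the per-pixel albedo estimate is the image divided by that shading.
import numpy as np

def recover_albedo(image, normals, sun_dir, shadow_mask, e_sun=1.0, e_sky=0.2):
    """image: (H, W, 3) reflectance-like values; normals: (H, W, 3) unit normals;
    sun_dir: (3,) unit vector toward the sun; shadow_mask: (H, W) in [0, 1]."""
    n_dot_l = np.clip(normals @ sun_dir, 0.0, None)          # Lambertian cosine term
    shading = e_sun * n_dot_l * shadow_mask + e_sky           # direct + ambient irradiance
    return image / np.maximum(shading[..., None], 1e-3)       # avoid division by ~zero
```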
Estimating AVHRR snow cover fraction by coupling physical constraints into a deep learning framework
IF 10.6 · CAS Q1 · Earth Sciences
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-09-12. DOI: 10.1016/j.isprsjprs.2024.08.015
Qin Zhao, Xiaohua Hao, Tao Che, Donghang Shao, Wenzheng Ji, Siqiong Luo, Guanghui Huang, Tianwen Feng, Leilei Dong, Xingliang Sun, Hongyi Li, Jian Wang
{"title":"Estimating AVHRR snow cover fraction by coupling physical constraints into a deep learning framework","authors":"Qin Zhao ,&nbsp;Xiaohua Hao ,&nbsp;Tao Che ,&nbsp;Donghang Shao ,&nbsp;Wenzheng Ji ,&nbsp;Siqiong Luo ,&nbsp;Guanghui Huang ,&nbsp;Tianwen Feng ,&nbsp;Leilei Dong ,&nbsp;Xingliang Sun ,&nbsp;Hongyi Li ,&nbsp;Jian Wang","doi":"10.1016/j.isprsjprs.2024.08.015","DOIUrl":"10.1016/j.isprsjprs.2024.08.015","url":null,"abstract":"<div><p>Accurate snow cover information is crucial for studying global climate and hydrology. Although deep learning has innovated snow cover fraction (SCF) retrieval, its effectiveness in practical application remains limited. This limitation stems from its reliance on appropriate training data and the necessity for more advanced interpretability. To overcome these challenges, a novel deep learning framework model by coupling the asymptotic radiative transfer (ART) model was developed to retrieve the Northern Hemisphere SCF based on advanced very high-resolution radiometer (AVHRR) surface reflectance data, named the ART-DL SCF model. Using Landsat 5 snow cover images as the reference SCF, the new model incorporates snow surface albedo retrieval from the ART model as a physical constraint into relevant snow identification parameters. Comprehensive validation results with Landsat reference SCF show an RMSE of 0.2228, an NMAD of 0.1227, and a bias of −0.0013. Moreover, the binary validation reveals an overall accuracy of 90.20%, with omission and commission errors both below 10%. Significantly, introducing physical constraints both improves the accuracy and stability of the model and mitigates underestimation issues. Compared to the model without physical constraints, the ART-DL SCF model shows a marked reduction of 4.79 percentage points in the RMSE and 5.35 percentage points in MAE. These accuracies were significantly higher than the currently available SnowCCI AVHRR products from the European Space Agency (ESA). Additionally, the model exhibits strong temporal and spatial generalizability and performs well in forest areas. This study presents a physical model coupled with deep learning for SCF retrieval that can better serve global climatic, hydrological, and other related studies.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"218 ","pages":"Pages 120-135"},"PeriodicalIF":10.6,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142171550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
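The ART-DL SCF model couples an ART-derived snow surface albedo into the deep learning retrieval as a physical constraint. One plausible way to express such a coupling, shown below purely as a hedged sketch (the paper's actual formulation may differ), is a loss term that checks linear-mixing consistency between the predicted SCF and the ART albedo; the mixing endmembers and weight are illustrative.

```python
# Physics-constrained loss sketch: data fit on SCF plus a penalty that the
# linearly mixed albedo implied by the predicted SCF matches an ART-derived albedo.
import torch

def physics_constrained_loss(pred_scf, ref_scf, art_albedo,
                             albedo_snow=0.85, albedo_land=0.15, lam=0.1):
    """pred_scf, ref_scf: (B, 1, H, W) in [0, 1]; art_albedo: (B, 1, H, W)."""
    data_term = torch.nn.functional.mse_loss(pred_scf, ref_scf)
    mixed_albedo = pred_scf * albedo_snow + (1.0 - pred_scf) * albedo_land
    physics_term = torch.nn.functional.mse_loss(mixed_albedo, art_albedo)
    return data_term + lam * physics_term
```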
Effective variance attention-enhanced diffusion model for crop field aerial image super resolution
IF 10.6 · CAS Q1 · Earth Sciences
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-09-11. DOI: 10.1016/j.isprsjprs.2024.08.017
Xiangyu Lu, Jianlin Zhang, Rui Yang, Qina Yang, Mengyuan Chen, Hongxing Xu, Pinjun Wan, Jiawen Guo, Fei Liu
{"title":"Effective variance attention-enhanced diffusion model for crop field aerial image super resolution","authors":"Xiangyu Lu ,&nbsp;Jianlin Zhang ,&nbsp;Rui Yang ,&nbsp;Qina Yang ,&nbsp;Mengyuan Chen ,&nbsp;Hongxing Xu ,&nbsp;Pinjun Wan ,&nbsp;Jiawen Guo ,&nbsp;Fei Liu","doi":"10.1016/j.isprsjprs.2024.08.017","DOIUrl":"10.1016/j.isprsjprs.2024.08.017","url":null,"abstract":"<div><p>Image super-resolution (SR) can significantly improve the resolution and quality of aerial imagery. Emerging diffusion models (DM) have shown superior image generation capabilities through multistep refinement. To explore their effectiveness on high-resolution cropland aerial imagery SR, we first built the CropSR dataset, which includes 321,992 samples for self-supervised SR training and two real-matched SR datasets from high-low altitude orthomosaics and fixed-point photography (CropSR-OR/FP) for testing. Inspired by the observed trend of decreasing image variance with higher flight altitude, we developed the Variance-Average-Spatial Attention (VASA). The VASA demonstrated effectiveness across various types of SR models, and we further developed the Efficient VASA-enhanced Diffusion Model (EVADM). To comprehensively and consistently evaluate the quality of SR models, we introduced the Super-resolution Relative Fidelity Index (SRFI), which considers both structural and perceptual similarity. On the × 2 and × 4 real SR datasets, EVADM reduced Fréchet-Inception-Distance (FID) by 14.6 and 8.0, respectively, along with SRFI gains of 27 % and 6 % compared to the baselines. The superior generalization ability of EVADM was further validated using the open Agriculture-Vision dataset. Extensive downstream case studies have demonstrated the high practicality of our SR method, indicating a promising avenue for realistic aerial imagery enhancement and effective downstream applications. The code and dataset for testing are available at <span><span>https://github.com/HobbitArmy/EVADM</span><svg><path></path></svg></span>.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"218 ","pages":"Pages 50-68"},"PeriodicalIF":10.6,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142167591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
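The abstract introduces Variance-Average-Spatial Attention (VASA), motivated by the drop in image variance with flight altitude. The module below is only one plausible reading of that name, not the authors' design: channels are pooled by their spatial mean and variance and reweighted through a shared bottleneck MLP; all layer sizes are assumptions.

```python
# Sketch of a variance-plus-average channel attention gate.
import torch
import torch.nn as nn

class VarianceAverageAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                        # x: (B, C, H, W)
        avg = x.mean(dim=(2, 3))                 # per-channel spatial average
        var = x.var(dim=(2, 3), unbiased=False)  # per-channel spatial variance
        gate = torch.sigmoid(self.mlp(avg) + self.mlp(var))
        return x * gate[:, :, None, None]        # reweight feature channels

feats = torch.randn(2, 64, 32, 32)
print(VarianceAverageAttention(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```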
High-resolution mapping of grassland canopy cover in China through the integration of extensive drone imagery and satellite data
IF 10.6 · CAS Q1 · Earth Sciences
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-09-11. DOI: 10.1016/j.isprsjprs.2024.09.004
Tianyu Hu, Mengqi Cao, Xiaoxia Zhao, Xiaoqiang Liu, Zhonghua Liu, Liangyun Liu, Zhenying Huang, Shengli Tao, Zhiyao Tang, Yanpei Guo, Chengjun Ji, Chengyang Zheng, Guoyan Wang, Xiaokang Hu, Luhong Zhou, Yunxiang Cheng, Wenhong Ma, Yonghui Wang, Pujin Zhang, Yuejun Fan, Yanjun Su
{"title":"High-resolution mapping of grassland canopy cover in China through the integration of extensive drone imagery and satellite data","authors":"Tianyu Hu ,&nbsp;Mengqi Cao ,&nbsp;Xiaoxia Zhao ,&nbsp;Xiaoqiang Liu ,&nbsp;Zhonghua Liu ,&nbsp;Liangyun Liu ,&nbsp;Zhenying Huang ,&nbsp;Shengli Tao ,&nbsp;Zhiyao Tang ,&nbsp;Yanpei Guo ,&nbsp;Chengjun Ji ,&nbsp;Chengyang Zheng ,&nbsp;Guoyan Wang ,&nbsp;Xiaokang Hu ,&nbsp;Luhong Zhou ,&nbsp;Yunxiang Cheng ,&nbsp;Wenhong Ma ,&nbsp;Yonghui Wang ,&nbsp;Pujin Zhang ,&nbsp;Yuejun Fan ,&nbsp;Yanjun Su","doi":"10.1016/j.isprsjprs.2024.09.004","DOIUrl":"10.1016/j.isprsjprs.2024.09.004","url":null,"abstract":"<div><p>Canopy cover is a crucial indicator for assessing grassland health and ecosystem services. However, achieving accurate high-resolution estimates of grassland canopy cover at a large spatial scale remains challenging due to the limited spatial coverage of field measurements and the scale mismatch between field measurements and satellite imagery. In this study, we addressed these challenges by proposing a regression-based approach to estimate large-scale grassland canopy cover, leveraging the integration of drone imagery and multisource remote sensing data. Specifically, over 90,000 10 × 10 m drone image tiles were collected at 1,255 sites across China. All drone image tiles were classified into grass and non-grass pixels to generate ground-truth canopy cover estimates. These estimates were then temporally aligned with satellite imagery-derived features to build a random forest regression model to map the grassland canopy cover distribution of China. Our results revealed that a single classification model can effectively distinguish between grass and non-grass pixels in drone images collected across diverse grassland types and large spatial scales, with multilayer perceptron demonstrating superior classification accuracy compared to Canopeo, support vector machine, random forest, and pyramid scene parsing network. The integration of extensive drone imagery successfully addressed the scale-mismatch issue between traditional ground measurements and satellite imagery, contributing significantly to enhancing mapping accuracy. The national canopy cover map of China generated for the year 2021 exhibited a spatial pattern of increasing canopy cover from northwest to southeast, with an average value of 56 % and a standard deviation of 26 %. Moreover, it demonstrated high accuracy, with a coefficient of determination of 0.89 and a root-mean-squared error of 12.38 %. The resulting high-resolution canopy cover map of China holds great potential in advancing our comprehension of grassland ecosystem processes and advocating for the sustainable management of grassland resources.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"218 ","pages":"Pages 69-83"},"PeriodicalIF":10.6,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142167593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
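The mapping workflow above turns classified drone tiles into ground-truth canopy cover fractions and regresses them against satellite-derived features with a random forest. The sketch below mirrors that pipeline on synthetic placeholder data; the feature set, split, and hyperparameters are assumptions, not the paper's configuration.

```python
# Drone tiles -> cover fraction labels -> random forest regression on satellite features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

def tile_canopy_cover(grass_mask):
    """grass_mask: (H, W) boolean drone-tile classification -> cover fraction."""
    return float(grass_mask.mean())

# placeholder arrays standing in for ~1,255 sites with satellite-derived features
rng = np.random.default_rng(0)
X = rng.random((1255, 12))                                      # e.g. band reflectances, indices
y = np.clip(X[:, 0] * 0.8 + rng.normal(0, 0.05, 1255), 0, 1)    # synthetic cover fractions

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R2={r2_score(y_te, pred):.2f}  RMSE={mean_squared_error(y_te, pred) ** 0.5:.3f}")
```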
Review of synthetic aperture radar with deep learning in agricultural applications
IF 10.6 · CAS Q1 · Earth Sciences
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-09-10. DOI: 10.1016/j.isprsjprs.2024.08.018
Mahya G.Z. Hashemi, Ehsan Jalilvand, Hamed Alemohammad, Pang-Ning Tan, Narendra N. Das
{"title":"Review of synthetic aperture radar with deep learning in agricultural applications","authors":"Mahya G.Z. Hashemi ,&nbsp;Ehsan Jalilvand ,&nbsp;Hamed Alemohammad ,&nbsp;Pang-Ning Tan ,&nbsp;Narendra N. Das","doi":"10.1016/j.isprsjprs.2024.08.018","DOIUrl":"10.1016/j.isprsjprs.2024.08.018","url":null,"abstract":"<div><p>Synthetic Aperture Radar (SAR) observations, valued for their consistent acquisition schedule and not being affected by cloud cover and variations between day and night, have become extensively utilized in a range of agricultural applications. The advent of deep learning allows for the capture of salient features from SAR observations. This is accomplished through discerning both spatial and temporal relationships within SAR data. This study reviews the current state of the art in the use of SAR with deep learning for crop classification/mapping, monitoring and yield estimation applications and the potential of leveraging both for the detection of agricultural management practices.</p><p>This review introduces the principles of SAR and its applications in agriculture, highlighting current limitations and challenges. It explores deep learning techniques as a solution to mitigate these issues and enhance the capability of SAR for agricultural applications. The review covers various aspects of SAR observables, methodologies for the fusion of optical and SAR data, common and emerging deep learning architectures, data augmentation techniques, validation and testing methods, and open-source reference datasets, all aimed at enhancing the precision and utility of SAR with deep learning for agricultural applications.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"218 ","pages":"Pages 20-49"},"PeriodicalIF":10.6,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142164823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
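Among the topics the review surveys is data augmentation for SAR. One common technique in that family, shown here as a generic example rather than anything taken from the review itself, is multiplicative speckle augmentation with gamma-distributed noise whose equivalent number of looks sets the noise strength.

```python
# Multiplicative speckle augmentation for SAR intensity images: multiply by
# unit-mean gamma noise; larger "looks" means weaker speckle (std = 1/sqrt(looks)).
import numpy as np

def speckle_augment(intensity, looks=4, rng=None):
    """intensity: (H, W) SAR intensity image; looks: equivalent number of looks."""
    rng = rng or np.random.default_rng()
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=intensity.shape)  # mean 1
    return intensity * noise

img = np.ones((128, 128))
aug = speckle_augment(img, looks=4)
print(aug.mean(), aug.std())          # mean ~1, std ~0.5 for 4 looks
```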
Harmony in diversity: Content cleansing change detection framework for very-high-resolution remote-sensing images
IF 10.6 · CAS Q1 · Earth Sciences
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-09-10. DOI: 10.1016/j.isprsjprs.2024.09.002
Mofan Cheng, Wei He, Zhuohong Li, Guangyi Yang, Hongyan Zhang
{"title":"Harmony in diversity: Content cleansing change detection framework for very-high-resolution remote-sensing images","authors":"Mofan Cheng ,&nbsp;Wei He ,&nbsp;Zhuohong Li ,&nbsp;Guangyi Yang ,&nbsp;Hongyan Zhang","doi":"10.1016/j.isprsjprs.2024.09.002","DOIUrl":"10.1016/j.isprsjprs.2024.09.002","url":null,"abstract":"<div><p>Change detection, as a crucial task in the field of Earth observation, aims to identify changed pixels between multi-temporal remote-sensing images captured at the same geographical area. However, in practical applications, there are challenges of pseudo changes arising from diverse imaging conditions and different remote-sensing platforms. Existing methods either overlook the different imaging styles between bi-temporal images, or transfer the bi-temporal styles via domain adaptation that may lose ground details. To address these problems, we introduce the disentangled representation learning that mitigates differences of imaging styles while preserving content details to develop a change detection framework, named Content Cleansing Network (CCNet). Specifically, CCNet embeds each input image into two distinct subspaces: a shared content space and a private style space. The separation of style space aims to mitigate the discrepant style due to different imaging condition, while the extracted content space reflects semantic features that is essential for change detection. Then, a multi-resolution parallel structure constructs the content space encoder, facilitating robust feature extraction of semantic information and spatial details. The cleansed content features enable accurate detection of changes in the land surface. Additionally, a lightweight decoder for image restoration enhances the independence and interpretability of the disentangled spaces. To verify the proposed method, CCNet is applied to five public datasets and a multi-temporal dataset collected in this study. Comparative experiments against eleven advanced methods demonstrate the effectiveness and superiority of CCNet. The experimental results show that our method robustly addresses the issues related to both temporal and platform variations, making it a promising method for change detection in complex conditions and supporting downstream applications.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"218 ","pages":"Pages 1-19"},"PeriodicalIF":10.6,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S092427162400340X/pdfft?md5=05257e0a48272b7c28a6809497111281&pid=1-s2.0-S092427162400340X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142164822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
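CCNet embeds each image into a shared content space and a private style space and detects change from the cleansed content features. The toy module below illustrates only that split; it is not the CCNet architecture (no multi-resolution parallel encoder or restoration decoder), and all layer widths are assumptions.

```python
# Toy content/style disentanglement for bi-temporal change detection: a shared
# content encoder for both dates, private style encoders per date, and a change
# head driven by the content difference only.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU())

class ContentStyleCD(nn.Module):
    def __init__(self, in_ch=3, feat=32):
        super().__init__()
        self.content = conv_block(in_ch, feat)          # shared across both dates
        self.style_t1 = conv_block(in_ch, feat)         # private, date 1
        self.style_t2 = conv_block(in_ch, feat)         # private, date 2
        self.head = nn.Conv2d(feat, 1, 1)               # change map from content difference

    def forward(self, img_t1, img_t2):
        c1, c2 = self.content(img_t1), self.content(img_t2)
        s1, s2 = self.style_t1(img_t1), self.style_t2(img_t2)   # would feed a restoration decoder
        change = torch.sigmoid(self.head(torch.abs(c1 - c2)))
        return change, (s1, s2)

x1, x2 = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
print(ContentStyleCD()(x1, x2)[0].shape)                # torch.Size([1, 1, 64, 64])
```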
Towards SDG 11: Large-scale geographic and demographic characterisation of informal settlements fusing remote sensing, POI, and open geo-data
IF 10.6 · CAS Q1 · Earth Sciences
ISPRS Journal of Photogrammetry and Remote Sensing. Pub Date: 2024-08-31. DOI: 10.1016/j.isprsjprs.2024.08.014
Wei Tu, Dongsheng Chen, Rui Cao, Jizhe Xia, Yatao Zhang, Qingquan Li
{"title":"Towards SDG 11: Large-scale geographic and demographic characterisation of informal settlements fusing remote sensing, POI, and open geo-data","authors":"Wei Tu ,&nbsp;Dongsheng Chen ,&nbsp;Rui Cao ,&nbsp;Jizhe Xia ,&nbsp;Yatao Zhang ,&nbsp;Qingquan Li","doi":"10.1016/j.isprsjprs.2024.08.014","DOIUrl":"10.1016/j.isprsjprs.2024.08.014","url":null,"abstract":"<div><p>Informal settlements’ geographic and demographic mapping is essential for evaluating human-centric sustainable development in cities, thus fostering the road to Sustainable Development Goal 11. However, fine-grained informal settlements’ geographic and demographic information is not well available. To fill the gap, this study proposes an effective framework for both fine-grained geographic and demographic characterisation of informal settlements by integrating openly available remote sensing imagery, points-of-interest (POI), and demographic data. Pixel-level informal settlement is firstly mapped by a hierarchical recognition method with satellite imagery and POI. The patch-scale and city-scale geographic patterns of informal settlements are further analysed with landscape metrics. Spatial-demographic profiles are depicted by linking with the open WorldPop dataset to reveal the demographic pattern. Taking the Guangdong-Hong Kong-Macao Greater Bay Area (GBA) in China as the study area, the experiment demonstrates the effectiveness of informal settlement mapping, with an overall accuracy of 91.82%. The aggregated data and code are released (<span><span>https://github.com/DongshengChen9/IF4SDG11</span><svg><path></path></svg></span>). The demographic patterns of the informal settlements reveal that Guangzhou and Shenzhen, the two core cities in the GBA, concentrate more on young people living in the informal settlements. While the rapid-developing city Shenzhen shows a more significant trend of gender imbalance in the informal settlements. These findings provide valuable insights into monitoring informal settlements in the urban agglomeration and human-centric urban sustainable development, as well as SDG 11.1.1.</p></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"217 ","pages":"Pages 199-215"},"PeriodicalIF":10.6,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0924271624003253/pdfft?md5=ea26a3272c1484993048b4db670eff37&pid=1-s2.0-S0924271624003253-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142098347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
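The demographic profiling step above links mapped informal settlements with the open WorldPop dataset. As a minimal sketch of that overlay, assuming co-registered rasters and illustrative band names, the snippet below sums gridded population counts inside a settlement mask.

```python
# Overlay a binary informal-settlement mask on gridded population counts and
# aggregate totals per demographic band (WorldPop-style rasters assumed aligned).
import numpy as np

def settlement_demographics(settlement_mask, pop_bands):
    """settlement_mask: (H, W) bool; pop_bands: dict of band name -> (H, W) counts."""
    return {name: float(band[settlement_mask].sum()) for name, band in pop_bands.items()}

mask = np.zeros((100, 100), dtype=bool)
mask[20:40, 30:60] = True                                  # toy settlement footprint
bands = {"male_20_29": np.random.rand(100, 100) * 5,
         "female_20_29": np.random.rand(100, 100) * 5}
print(settlement_demographics(mask, bands))                # population totals inside settlements
```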