PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science: Latest Articles

Self-Supervised 3D Semantic Occupancy Prediction from Multi-View 2D Surround Images
IF 4.1, CAS Q4, Earth Science
PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science Pub Date: 2024-09-18 DOI: 10.1007/s41064-024-00308-9
S. Abualhanud, E. Erahan, M. Mehltretter
{"title":"Self-Supervised 3D Semantic Occupancy Prediction from Multi-View 2D Surround Images","authors":"S. Abualhanud, E. Erahan, M. Mehltretter","doi":"10.1007/s41064-024-00308-9","DOIUrl":"https://doi.org/10.1007/s41064-024-00308-9","url":null,"abstract":"<p>An accurate 3D representation of the geometry and semantics of an environment builds the basis for a large variety of downstream tasks and is essential for autonomous driving related tasks such as path planning and obstacle avoidance. The focus of this work is put on 3D semantic occupancy prediction, i.e., the reconstruction of a scene as a voxel grid where each voxel is assigned both an occupancy and a semantic label. We present a Convolutional Neural Network-based method that utilizes multiple color images from a surround-view setup with minimal overlap, together with the associated interior and exterior camera parameters as input, to reconstruct the observed environment as a 3D semantic occupancy map. To account for the ill-posed nature of reconstructing a 3D representation from monocular 2D images, the image information is integrated over time: Under the assumption that the camera setup is moving, images from consecutive time steps are used to form a multi-view stereo setup. In exhaustive experiments, we investigate the challenges presented by dynamic objects and the possibilities of training the proposed method with either 3D or 2D reference data. Latter being motivated by the comparably higher costs of generating and annotating 3D ground truth data. Moreover, we present and investigate a novel self-supervised training scheme that does not require any geometric reference data, but only relies on sparse semantic ground truth. An evaluation on the Occ3D dataset, including a comparison against current state-of-the-art self-supervised methods from the literature, demonstrates the potential of our self-supervised variant.</p>","PeriodicalId":56035,"journal":{"name":"PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science","volume":"6 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142258861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
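The core geometric step this abstract relies on, relating 3D voxel centres to the surround-view images via the interior and exterior camera parameters, is the standard pinhole projection. The following minimal numpy sketch illustrates only that projection; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def project_voxels_to_image(voxel_centers, K, T_cam_from_world):
    """Project 3D voxel centres (N, 3), given in the world frame, into one camera.

    K: 3x3 intrinsic matrix; T_cam_from_world: 4x4 extrinsic matrix.
    Returns pixel coordinates (N, 2) and a mask of voxels in front of the camera.
    """
    homo = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])  # (N, 4)
    cam = (T_cam_from_world @ homo.T).T[:, :3]                           # camera frame
    in_front = cam[:, 2] > 0.1                                           # small positive depth
    pix = (K @ cam.T).T
    pix = pix[:, :2] / pix[:, 2:3]                                       # perspective divide
    return pix, in_front
```

Image features gathered at the returned pixel locations can then be aggregated per voxel, which is the usual basis of such 2D-to-3D lifting schemes.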
Characterization of transient movements within the Joshimath hillslope complex: Results from multi-sensor InSAR observations
IF 4.1, CAS Q4, Earth Science
PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science Pub Date: 2024-09-17 DOI: 10.1007/s41064-024-00315-w
Wandi Wang, Mahdi Motagh, Zhuge Xia, Zhong Lu, Sadra Karimzadeh, Chao Zhou, Alina V. Shevchenko, Sigrid Roessner
{"title":"Characterization of transient movements within the Joshimath hillslope complex: Results from multi-sensor InSAR observations","authors":"Wandi Wang, Mahdi Motagh, Zhuge Xia, Zhong Lu, Sadra Karimzadeh, Chao Zhou, Alina V. Shevchenko, Sigrid Roessner","doi":"10.1007/s41064-024-00315-w","DOIUrl":"https://doi.org/10.1007/s41064-024-00315-w","url":null,"abstract":"<p>This paper investigates the spatiotemporal characteristics and life-cycle of movements within the Joshimath landslide-prone slope over the period from 2015 to 2024, utilizing multi-sensor interferometric data from Sentinel‑1, ALOS‑2, and TerraSAR‑X satellites. Multi-temporal InSAR analysis before the 2023 slope destabilization crisis, when the region experienced significant ground deformation acceleration, revealed two distinct deformation clusters within the eastern and middle parts of the slope. These active deformation regions have been creeping up to −200 mm/yr. Slope deformation analysis indicates that the entire Joshimath landslide-prone slope can be categorized kinematically as either Extremely-Slow (ES) or Very-Slow (VS) moving slope, with the eastern cluster mainly exhibiting ES movements, while the middle cluster showing VS movements. Two episodes of significant acceleration occurred on August 21, 2019 and November 2, 2021, with the rate of slope deformation increasing by 20% (from −50 to −60 mm/yr) and around threefold (from −60 to −249 mm/yr), respectively. Following the 2023 destabilization crisis, the rate of ground deformation notably increased across all datasets for both clusters, except for the Sentinel‑1 ascending data in the eastern cluster. Pre-crisis, horizontal deformation was dominant both in the eastern and middle clusters. Horizontal deformation remained dominant and increased significantly in the eastern cluster post-crisis phase, whereas vertical deformation became predominant in the middle cluster. Wavelet analysis reveals a strong correlation between two acceleration episodes and extreme precipitation in 2019 and 2021, but no similar correlation was detected in other years. This indicates that while extreme rainfall significantly influenced the dynamics of slope movements during these episodes, less strong precipitation had a minimal impact on slope movements during other periods.</p>","PeriodicalId":56035,"journal":{"name":"PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science","volume":"1 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142258853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
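The acceleration figures quoted in the abstract (20% and roughly threefold) follow directly from the stated velocities. A small worked example, with a hypothetical helper name:

```python
def relative_increase(v_before_mm_yr, v_after_mm_yr):
    """Relative increase in deformation-rate magnitude between two epochs."""
    return (abs(v_after_mm_yr) - abs(v_before_mm_yr)) / abs(v_before_mm_yr)

print(relative_increase(-50, -60))    # 0.20  -> the ~20 % jump of the August 2019 episode
print(relative_increase(-60, -249))   # ~3.15 -> the roughly threefold jump of the November 2021 episode
```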
Monocular Pose and Shape Reconstruction of Vehicles in UAV imagery using a Multi-task CNN
IF 4.1, CAS Q4, Earth Science
PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science Pub Date: 2024-09-16 DOI: 10.1007/s41064-024-00311-0
S. El Amrani Abouelassad, M. Mehltretter, F. Rottensteiner
{"title":"Monocular Pose and Shape Reconstruction of Vehicles in UAV imagery using a Multi-task CNN","authors":"S. El Amrani Abouelassad, M. Mehltretter, F. Rottensteiner","doi":"10.1007/s41064-024-00311-0","DOIUrl":"https://doi.org/10.1007/s41064-024-00311-0","url":null,"abstract":"<p>Estimating the pose and shape of vehicles from aerial images is an important, yet challenging task. While there are many existing approaches that use stereo images from street-level perspectives to reconstruct objects in 3D, the majority of aerial configurations used for purposes like traffic surveillance are limited to monocular images. Addressing this challenge, a Convolutional Neural Network-based method is presented in this paper, which jointly performs detection, pose, type and 3D shape estimation for vehicles observed in monocular UAV imagery. For this purpose, a robust 3D object model is used following the concept of an Active Shape Model. In addition, different variants of loss functions for learning 3D shape estimation are presented, focusing on the height component, which is particularly challenging to estimate from monocular near-nadir images. We also introduce a UAV-based dataset to evaluate our model in addition to an augmented version of the publicly available Hessigheim benchmark dataset. Our method yields promising results in pose and shape estimation: utilising images with a ground sampling distance (GSD) of 3 cm, it achieves median errors of up to 4 cm in position and 3° in orientation. Additionally, it achieves root mean square (RMS) errors of <span>(pm 6)</span> cm in planimetry and <span>(pm 18)</span> cm in height for keypoints defining the car shape.</p>","PeriodicalId":56035,"journal":{"name":"PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science","volume":"81 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142258854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
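The planimetric and height RMS errors reported above can be evaluated separately from predicted and reference keypoints. A minimal numpy sketch, assuming the planimetric error is taken as the norm of the XY residual (one common convention, not necessarily the exact definition used in the paper):

```python
import numpy as np

def rms_errors(pred_kpts, gt_kpts):
    """RMS error of 3D keypoints, split into planimetry (XY) and height (Z).

    pred_kpts, gt_kpts: arrays of shape (N, 3) in metres.
    """
    diff = pred_kpts - gt_kpts
    rms_xy = np.sqrt(np.mean(np.sum(diff[:, :2] ** 2, axis=1)))  # norm of XY residual
    rms_z = np.sqrt(np.mean(diff[:, 2] ** 2))                    # height component only
    return rms_xy, rms_z
```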
Assessing the Impact of Data-resolution On Ocean Frontal Characteristics
IF 4.1, CAS Q4, Earth Science
PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science Pub Date: 2024-09-16 DOI: 10.1007/s41064-024-00318-7
Kai Yang, Andrew M. Fischer
{"title":"Assessing the Impact of Data-resolution On Ocean Frontal Characteristics","authors":"Kai Yang, Andrew M. Fischer","doi":"10.1007/s41064-024-00318-7","DOIUrl":"https://doi.org/10.1007/s41064-024-00318-7","url":null,"abstract":"<p>Easy access to and advances in satellite remote sensing data has enabled enhanced analysis of ocean fronts, physical and ecologically important areas where water masses converge. Recent development of higher-resolution satellite imagery to detect ocean fronts provides the potential to better capture patterns and trends of ocean change and improve modelling and prediction efforts. This study examines the relationship between satellite data spatial resolution and its influence on the quantification of frontal characteristics, frontal quantity, length, strength and density. We also examine the relationship between Finite-Size Lyapunov Exponents and image resolution. We found higher spatial resolution leads to increased frontal quantity and decreased frontal length. Also, both strength and spatial density of fronts differ at various resolutions. The Finite-Size Lyapunov Exponent value does not change significantly with resolution. Knowledge of the impact of resolution on the quantification of frontal characteristics is crucial as it enables the exploration of novel experimental design to further facilitate the development of improved parameterization and uncertainties in ocean modelling/studies.</p>","PeriodicalId":56035,"journal":{"name":"PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science","volume":"39 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142258855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
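Front detection of the kind studied here is commonly based on the horizontal gradient magnitude of a sea-surface field, which makes the sensitivity to grid resolution easy to see in a toy setup. The sketch below uses an illustrative threshold and is not the specific frontal-detection algorithm used in the study:

```python
import numpy as np

def front_mask(sst, grid_spacing_km, grad_threshold=0.05):
    """Flag frontal pixels where the horizontal SST gradient (°C/km) exceeds a threshold.

    sst: 2D sea-surface temperature grid; grid_spacing_km: pixel size in km.
    """
    gy, gx = np.gradient(sst, grid_spacing_km)   # per-axis gradients in °C/km
    grad_mag = np.hypot(gx, gy)
    return grad_mag > grad_threshold
```

Because the gradient is computed over the grid spacing, resampling the same field to a coarser grid smooths the gradients, which changes the number and total length of flagged frontal pixels.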
Challenges and Opportunities of Sentinel-1 InSAR for Transport Infrastructure Monitoring
IF 4.1, CAS Q4, Earth Science
PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science Pub Date: 2024-09-16 DOI: 10.1007/s41064-024-00314-x
Andreas Piter, Mahmud Haghshenas Haghighi, Mahdi Motagh
{"title":"Challenges and Opportunities of Sentinel-1 InSAR for Transport Infrastructure Monitoring","authors":"Andreas Piter, Mahmud Haghshenas Haghighi, Mahdi Motagh","doi":"10.1007/s41064-024-00314-x","DOIUrl":"https://doi.org/10.1007/s41064-024-00314-x","url":null,"abstract":"<p>Monitoring displacement at transport infrastructure using Sentinel‑1 Interferometric Synthetic Aperture Radar (InSAR) faces challenges due to the sensor’s medium spatial resolution, which limits the pixel coverage over the infrastructure. Therefore, carefully selecting coherent pixels is crucial to achieve a high density of reliable measurement points and to minimize noisy observations. This study evaluates the effectiveness of various pixel selection methods for displacement monitoring within transport infrastructures. We employ a two-step InSAR time series processing approach. First, high-quality first-order pixels are selected using temporal phase coherence (TPC) to estimate and correct atmospheric contributions. Then, a combination of different pixel selection methods is applied to identify coherent second-order pixels for displacement analysis. These methods include amplitude dispersion index (ADI), TPC, phase linking coherence (PLC), and top eigenvalue percentage (TEP), targeting both point-like scatterer (PS) and distributed scatterer (DS) pixels. Experiments are conducted in two case studies: one in Germany, characterized by dense vegetation, and one in Spain, with sparse vegetation. In Germany, the density of measurement points was approximately 30 points/km², with the longest segment of the infrastructure without any coherent pixels being 2.8 km. In Spain, the density of measurement points exceeded 500 points/km², with the longest section without coherent pixels being 700 meters. The results indicate that despite the challenges posed by medium-resolution data, the sensor is capable of providing adequate measurement points when suitable pixel selection methods are employed. However, careful consideration is necessary to exclude noisy pixels from the analysis. The findings highlight the importance of choosing a proper method tailored to infrastructure characteristics. Specifically, combining TPC and PLC methods offers a complementary set of pixels suitable for displacement measurements, whereas ADI and TEP are less effective in this context. This study demonstrates the potential of Sentinel‑1 InSAR for capturing both regional-scale and localized displacements at transport infrastructure.</p>","PeriodicalId":56035,"journal":{"name":"PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science","volume":"11 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142258856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
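Of the pixel-selection criteria compared above, the amplitude dispersion index (ADI) has a simple closed form: the temporal standard deviation of the SAR amplitude divided by its temporal mean, with low values indicating point-like scatterer candidates. A minimal numpy sketch; the commonly used threshold of about 0.25 is conventional and not taken from the paper:

```python
import numpy as np

def amplitude_dispersion_index(amplitude_stack):
    """ADI per pixel from a (time, rows, cols) SAR amplitude stack."""
    mean = amplitude_stack.mean(axis=0)
    std = amplitude_stack.std(axis=0)
    return std / np.maximum(mean, 1e-9)   # guard against zero-mean pixels

# Persistent-scatterer candidates are typically pixels with ADI below ~0.25.
# ps_candidates = amplitude_dispersion_index(stack) < 0.25
```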
Weighted Multiple Point Cloud Fusion
IF 4.1, CAS Q4, Earth Science
PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science Pub Date: 2024-09-12 DOI: 10.1007/s41064-024-00310-1
Kwasi Nyarko Poku-Agyemang, Alexander Reiterer
{"title":"Weighted Multiple Point Cloud Fusion","authors":"Kwasi Nyarko Poku-Agyemang, Alexander Reiterer","doi":"10.1007/s41064-024-00310-1","DOIUrl":"https://doi.org/10.1007/s41064-024-00310-1","url":null,"abstract":"<p>Multiple viewpoint 3D reconstruction has been used in recent years to create accurate complete scenes and objects used for various applications. This is to overcome limitations of single viewpoint 3D digital imaging such as occlusion within the scene during the reconstruction process. In this paper, we propose a weighted point cloud fusion process using both local and global spatial information of the point clouds to fuse them together. The process aims to minimize duplication and remove noise while maintaining a consistent level of details using spatial information from point clouds to compute a weight to fuse them. The algorithm improves the overall accuracy of the fused point cloud while maintaining a similar degree of coverage comparable with state-of-the-art point cloud fusion algorithms.</p>","PeriodicalId":56035,"journal":{"name":"PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science","volume":"3 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142200717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
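To make the idea of weighted fusion concrete, the sketch below merges mutually close points of two already registered clouds by a per-point confidence weight and keeps all remaining points. The weighting scheme, the radius and the function name are illustrative assumptions; the paper derives its weights from local and global spatial information rather than from the simple confidences used here.

```python
import numpy as np
from scipy.spatial import cKDTree

def fuse_point_clouds(cloud_a, cloud_b, conf_a, conf_b, radius=0.02):
    """Fuse two registered clouds (N, 3): neighbours within `radius` are merged
    as a confidence-weighted average, all other points are kept unchanged."""
    tree = cKDTree(cloud_b)
    dist, idx = tree.query(cloud_a, distance_upper_bound=radius)
    fused, used_b = [], np.zeros(len(cloud_b), dtype=bool)
    for i, (d, j) in enumerate(zip(dist, idx)):
        if d <= radius:                      # query returns inf when no neighbour is found
            w_a, w_b = conf_a[i], conf_b[j]
            fused.append((w_a * cloud_a[i] + w_b * cloud_b[j]) / (w_a + w_b))
            used_b[j] = True
        else:
            fused.append(cloud_a[i])
    fused.extend(cloud_b[~used_b])           # points of cloud_b with no counterpart
    return np.asarray(fused)
```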
Stripe Error Correction for Landsat-7 Using Deep Learning
IF 4.1, CAS Q4, Earth Science
PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science Pub Date: 2024-08-29 DOI: 10.1007/s41064-024-00306-x
Hilal Adıyaman, Yunus Emre Varul, Tolga Bakırman, Bülent Bayram
{"title":"Stripe Error Correction for Landsat-7 Using Deep Learning","authors":"Hilal Adıyaman, Yunus Emre Varul, Tolga Bakırman, Bülent Bayram","doi":"10.1007/s41064-024-00306-x","DOIUrl":"https://doi.org/10.1007/s41064-024-00306-x","url":null,"abstract":"<p>Long-term time series satellite imagery became highly essential for analyzing earth cycles such as global warming, climate change, and urbanization. Landsat‑7 satellite imagery plays a key role in this domain since it provides open-access data with expansive coverage and consistent temporal resolution for more than two decades. This paper addresses the challenge of stripe errors induced by Scan Line Corrector sensor malfunction in Landsat‑7 ETM+ satellite imagery, resulting in data loss and degradation. To overcome this problem, we propose a Generative Adversarial Networks approach to fill the gaps in the Landsat‑7 ETM+ panchromatic images. First, we introduce the YTU_STRIPE dataset, comprising Landsat‑8 OLI panchromatic images with synthetically induced stripe errors, for model training and testing. Our results indicate sufficient performance of the Pix2Pix GAN for this purpose. We demonstrate the efficiency of our approach through systematic experimentation and evaluation using various accuracy metrics, including Peak Signal-to-Noise Ratio, Structural Similarity Index Measurement, Universal Image Quality Index, Correlation Coefficient, and Root Mean Square Error which were calculated as 38.5570, 0.9206, 0.7670, 0.7753 and 3.8212, respectively. Our findings suggest promising prospects for utilizing synthetic imagery from Landsat‑8 OLI to mitigate stripe errors in Landsat‑7 ETM+ SLC-off imagery, thereby enhancing image reconstruction efforts. The datasets and model weights generated in this study are publicly available for further research and development: https://github.com/ynsemrevrl/eliminating-stripe-errors.</p>","PeriodicalId":56035,"journal":{"name":"PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science","volume":"14 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142200722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
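Two of the accuracy metrics reported above, RMSE and PSNR, are straightforward to compute from a reference image and a restored image. A minimal numpy sketch; the 8-bit peak value of 255 is an assumption about the data range, not a detail from the paper:

```python
import numpy as np

def rmse(reference, restored):
    """Root mean square error between two images of equal shape."""
    return np.sqrt(np.mean((reference.astype(float) - restored.astype(float)) ** 2))

def psnr(reference, restored, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB."""
    e = rmse(reference, restored)
    return float("inf") if e == 0 else 20.0 * np.log10(max_val / e)
```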
EnhancedNet, an End-to-End Network for Dense Disparity Estimation and its Application to Aerial Images
IF 4.1, CAS Q4, Earth Science
PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science Pub Date: 2024-08-28 DOI: 10.1007/s41064-024-00307-w
Junhua Kang, Lin Chen, Christian Heipke
{"title":"EnhancedNet, an End-to-End Network for Dense Disparity Estimation and its Application to Aerial Images","authors":"Junhua Kang, Lin Chen, Christian Heipke","doi":"10.1007/s41064-024-00307-w","DOIUrl":"https://doi.org/10.1007/s41064-024-00307-w","url":null,"abstract":"<p>Recent developments in deep learning technology have boosted the performance of dense stereo reconstruction. However, the state-of-the-art deep learning-based stereo matching methods are mainly trained using close-range synthetic images. Consequently, the application of these methods in aerial photogrammetry and remote sensing is currently far from straightforward. In this paper, we propose a new disparity estimation network for stereo matching and investigate its generalization abilities in regard to aerial images. First, we propose an end-to-end deep learning network for stereo matching, regularized by disparity gradients, which includes a residual cost volume and a reconstruction error volume in a refinement module, and multiple losses. In order to investigate the influence of the multiple losses, a comprehensive analysis is presented. Second, based on this network trained with synthetic close-range data, we propose a new pipeline for matching high-resolution aerial imagery. The experimental results show that the proposed network improves the disparity accuracy by up to 40% in terms of errors larger than 1 px compared to results when not including the refinement network, especially in areas containing detailed small objects. In addition, in qualitative and quantitative experiments, we are able to show that our model, pre-trained on a synthetic stereo dataset, achieves very competitive sub-pixel geometric accuracy on aerial images. These results confirm that the domain gap between synthetic close-range and real aerial images can be satisfactorily bridged using the proposed new deep learning method for dense image matching.</p>","PeriodicalId":56035,"journal":{"name":"PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science","volume":"25 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142200718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
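The ">1 px" error rate used above to quantify the improvement is simply the fraction of valid pixels whose absolute disparity error exceeds one pixel. A minimal numpy sketch with illustrative function and parameter names:

```python
import numpy as np

def bad_pixel_rate(disp_pred, disp_gt, valid_mask, threshold=1.0):
    """Fraction of valid pixels whose disparity error exceeds `threshold` pixels."""
    err = np.abs(disp_pred - disp_gt)
    return np.mean(err[valid_mask] > threshold)
```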
Fresh Concrete Properties from Stereoscopic Image Sequences
IF 4.1, CAS Q4, Earth Science
PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science Pub Date: 2024-08-26 DOI: 10.1007/s41064-024-00303-0
Max Meyer, Amadeus Langer, Max Mehltretter, Dries Beyer, Max Coenen, Tobias Schack, Michael Haist, Christian Heipke
{"title":"Fresh Concrete Properties from Stereoscopic Image Sequences","authors":"Max Meyer, Amadeus Langer, Max Mehltretter, Dries Beyer, Max Coenen, Tobias Schack, Michael Haist, Christian Heipke","doi":"10.1007/s41064-024-00303-0","DOIUrl":"https://doi.org/10.1007/s41064-024-00303-0","url":null,"abstract":"<p>Increasing the degree of digitization and automation in concrete production can make a decisive contribution to reducing the associated <span>(text{CO}_{2})</span> emissions. This paper presents a method which predicts the properties of fresh concrete during the mixing process on the basis of stereoscopic image sequences of the moving concrete and mix design information or a variation of these. A Convolutional Neural Network (CNN) is used for the prediction, which receives the images supported by information about the mix design as input. In addition, the network receives temporal information in the form of the time difference between image acquisition and the point in time for which the concrete properties are to be predicted. During training, the times at which the reference values were captured are used for the latter. With this temporal information, the network implicitly learns the time-dependent behavior of the concrete properties. The network predicts the slump flow diameter, the yield stress and the plastic viscosity. The time-dependent prediction opens up the possibility of forecasting the temporal development of the fresh concrete properties during mixing. This is a significant advantage for the concrete industry, as countermeasures can then be taken in a timely manner, if the properties deviate from the desired ones. In various experiments it is shown that both the stereoscopic observations and the mix design information contain valuable information for the time-dependent prediction of the fresh concrete properties.</p>","PeriodicalId":56035,"journal":{"name":"PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science","volume":"3 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142200719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
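To illustrate the input/output structure described above (images plus mix-design information plus a time offset, mapped to slump flow diameter, yield stress and plastic viscosity), here is a deliberately tiny PyTorch stand-in. The class name, layer sizes and channel counts are invented for illustration and do not reproduce the authors' network:

```python
import torch
import torch.nn as nn

class FreshConcreteRegressor(nn.Module):
    """Toy stand-in: image encoder + (mix design, time offset) vector -> 3 targets
    (slump flow diameter, yield stress, plastic viscosity)."""

    def __init__(self, mix_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 16, 3, stride=2, padding=1), nn.ReLU(),  # 6 = stacked stereo pair
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 + mix_dim + 1, 64), nn.ReLU(), nn.Linear(64, 3),
        )

    def forward(self, stereo_pair, mix_design, dt):
        # stereo_pair: (B, 6, H, W); mix_design: (B, mix_dim); dt: (B, 1) time offset
        feat = self.encoder(stereo_pair)
        return self.head(torch.cat([feat, mix_design, dt], dim=1))
```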
Assessing Patterns and Trends in Urbanization and Land Use Efficiency Across the Philippines: A Comprehensive Analysis Using Global Earth Observation Data and SDG 11.3.1 Indicators
IF 4.1, CAS Q4, Earth Science
PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science Pub Date: 2024-08-13 DOI: 10.1007/s41064-024-00305-y
Jojene R. Santillan, Christian Heipke
{"title":"Assessing Patterns and Trends in Urbanization and Land Use Efficiency Across the Philippines: A Comprehensive Analysis Using Global Earth Observation Data and SDG 11.3.1 Indicators","authors":"Jojene R. Santillan, Christian Heipke","doi":"10.1007/s41064-024-00305-y","DOIUrl":"https://doi.org/10.1007/s41064-024-00305-y","url":null,"abstract":"<p>Urbanization, a global phenomenon with profound implications for sustainable development, is a focal point of Sustainable Development Goal 11 (SDG 11). Aimed at fostering inclusive, resilient, and sustainable urbanization by 2030, SDG 11 emphasizes the importance of monitoring land use efficiency (LUE) through indicator 11.3.1. In the Philippines, urbanization has surged over recent decades. Despite its importance, research on urbanization and LUE has predominantly focused on the country’s national capital region (Metro Manila), while little to no attention is given to comprehensive investigations across different regions, provinces, cities, and municipalities of the country. Additionally, challenges in acquiring consistent spatial data, especially due to the Philippines’ archipelagic nature, have hindered comprehensive analysis. To address these gaps, this study conducts a thorough examination of urbanization patterns and LUE dynamics in the Philippines from 1975 to 2020, leveraging Global Human Settlement Layers (GHSL) data and secondary indicators associated with SDG 11.3.1. Our study examines spatial patterns and temporal trends in built-up area expansion, population growth, and LUE characteristics at both city and municipal levels. Among the major findings are the substantial growth in built-up areas and population across the country. We also found a shift in urban growth dynamics, with Metro Manila showing limited expansion in recent years while new urban growth emerges in other regions of the country. Our analysis of the spatiotemporal patterns of Land Consumption Rate (LCR) revealed three distinct evolutional phases: a growth phase between 1975–1990, followed by a decline phase between 1990–2005, and a resurgence phase from 2005–2020. Generally declining trends in LCR and Population Growth Rate (PGR) were evident, demonstrating the country’s direction towards efficient built-up land utilization. However, this efficiency coincides with overcrowding issues as revealed by additional indicators such as the Abstract Achieved Population Density in Expansion Areas (AAPDEA) and Marginal Land Consumption per New Inhabitant (MLCNI). We also analyzed the spatial patterns and temporal trends of LUE across the country and found distinct clusters of transitioning urban centers, densely inhabited metropolises, expanding metropolitan regions, and rapidly growing urban hubs. The study’s findings suggest the need for policy interventions that promote compact and sustainable urban development, equitable regional development, and measures to address overcrowding in urban areas. 
By aligning policies with the observed spatial and temporal trends, decision-makers can work towards achieving SDG 11, fostering inclusive, resilient, and sustainable urbanization in the Philippines.</p>","PeriodicalId":56035,"journal":{"name":"PFG-Journal of Photogrammetry Remote Sensing and Geoinformation Science","volume":"7 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142200720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
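The SDG 11.3.1 indicators underlying this analysis follow the standard UN-Habitat definitions: the land consumption rate (LCR) and population growth rate (PGR) are logarithmic rates over the observation interval, and land use efficiency is their ratio (LCR/PGR). A minimal sketch with invented example numbers:

```python
import math

def lcr(urb_t1, urb_t2, years):
    """Land Consumption Rate: ln(Urb_t2 / Urb_t1) / years."""
    return math.log(urb_t2 / urb_t1) / years

def pgr(pop_t1, pop_t2, years):
    """Population Growth Rate: ln(Pop_t2 / Pop_t1) / years."""
    return math.log(pop_t2 / pop_t1) / years

def lcrpgr(urb_t1, urb_t2, pop_t1, pop_t2, years):
    """SDG 11.3.1 land use efficiency ratio (LCR / PGR)."""
    return lcr(urb_t1, urb_t2, years) / pgr(pop_t1, pop_t2, years)

# Example: built-up area grows from 40 to 52 km² and population from 100,000
# to 150,000 over 15 years -> LCR/PGR ≈ 0.65 (land consumed more slowly than
# population grows, i.e. densifying growth).
print(round(lcrpgr(40, 52, 100_000, 150_000, 15), 2))
```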