Remote. Sens. Pub Date: 2024-02-22 DOI: 10.3390/rs16050774
Justyna Górniak-Zimroz, K. Romańczukiewicz, Magdalena Sitarska, A. Szrek
{"title":"Light-Pollution-Monitoring Method for Selected Environmental and Social Elements","authors":"Justyna Górniak-Zimroz, K. Romańczukiewicz, Magdalena Sitarska, A. Szrek","doi":"10.3390/rs16050774","DOIUrl":"https://doi.org/10.3390/rs16050774","url":null,"abstract":"Light pollution significantly interferes with animal and human life and should, therefore, be included in the factors that threaten ecosystems. The main aim of this research is to develop a methodology for monitoring environmental and social elements subjected to light pollution in anthropogenic areas. This research is based on yearly and monthly photographs acquired from the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-Orbiting Partnership (Suomi NPP) satellite; land cover data from the CORINE Land Cover (CLC) program; and environmental data from the European Environment Agency (EEA) and the World Database on Protected Areas (WDPA). The processing of input data for further analyses, the testing of the methodology and the interpretation of the final results were performed in GIS-type software (ArcGIS Pro). Light pollution in the investigated area was analyzed with the use of maps generated for the years 2014 and 2019. The environmental and social elements were spatially identified in five light pollution classes. The research results demonstrate that the proposed methodology allows for the identification of environmental and social elements that emit light, as well as those that are subjected to light pollution. The methodology used in this work allows us to observe changes resulting from light pollution (decreasing or increasing the intensity). Owing to the use of publicly available data, the methodology can be applied to light pollution monitoring as part of spatial planning in anthropogenic areas. The proposed methodology makes it possible to cover the area exposed to light pollution and to observe (almost online) the environmental and social changes resulting from reductions in light emitted by anthropogenic areas.","PeriodicalId":20944,"journal":{"name":"Remote. Sens.","volume":"43 4","pages":"774"},"PeriodicalIF":0.0,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140440629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Remote. Sens. Pub Date: 2024-02-22 DOI: 10.3390/rs16050766
Muhammad A. A. Abdelgawad, Ray C. C. Cheung, Hong Yan
{"title":"Efficient Blind Hyperspectral Unmixing Framework Based on CUR Decomposition (CUR-HU)","authors":"Muhammad A. A. Abdelgawad, Ray C. C. Cheung, Hong Yan","doi":"10.3390/rs16050766","DOIUrl":"https://doi.org/10.3390/rs16050766","url":null,"abstract":"Hyperspectral imaging captures detailed spectral data for remote sensing. However, due to the limited spatial resolution of hyperspectral sensors, each pixel of a hyperspectral image (HSI) may contain information from multiple materials. Although the hyperspectral unmixing (HU) process involves estimating endmembers, identifying pure spectral components, and estimating pixel abundances, existing algorithms mostly focus on just one or two tasks. Blind source separation (BSS) based on nonnegative matrix factorization (NMF) algorithms identify endmembers and their abundances at each pixel of HSI simultaneously. Although they perform well, the factorization results are unstable, require high computational costs, and are difficult to interpret from the original HSI. CUR matrix decomposition selects specific columns and rows from a dataset to represent it as a product of three small submatrices, resulting in interpretable low-rank factorization. In this paper, we propose a new blind HU framework based on CUR factorization called CUR-HU that performs the entire HU process by exploiting the low-rank structure of given HSIs. CUR-HU incorporates several techniques to perform the HU process with a performance comparable to state-of-the-art methods but with higher computational efficiency. We adopt a deterministic sampling method to select the most informative pixels and spectrum components in HSIs. We use an incremental QR decomposition method to reduce computation complexity and estimate the number of endmembers. Various experiments on synthetic and real HSIs are conducted to evaluate the performance of CUR-HU. CUR-HU performs comparably to state-of-the-art methods for estimating the number of endmembers and abundance maps, but it outperforms other methods for estimating the endmembers and the computational efficiency. It has a 9.4 to 249.5 times speedup over different methods for different real HSIs.","PeriodicalId":20944,"journal":{"name":"Remote. Sens.","volume":"3 8","pages":"766"},"PeriodicalIF":0.0,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140440725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Remote. Sens. Pub Date: 2024-02-22 DOI: 10.3390/rs16050763
Qi Zhang, Wenjin Sun, Huaihai Guo, Changming Dong, Hong Zheng
{"title":"A Transfer Learning-Enhanced Generative Adversarial Network for Downscaling Sea Surface Height through Heterogeneous Data Fusion","authors":"Qi Zhang, Wenjin Sun, Huaihai Guo, Changming Dong, Hong Zheng","doi":"10.3390/rs16050763","DOIUrl":"https://doi.org/10.3390/rs16050763","url":null,"abstract":"In recent decades, satellites have played a pivotal role in observing ocean dynamics, providing diverse datasets with varying spatial resolutions. Notably, within these datasets, sea surface height (SSH) data typically exhibit low resolution, while sea surface temperature (SST) data have significantly higher resolution. This study introduces a Transfer Learning-enhanced Generative Adversarial Network (TLGAN) for reconstructing high-resolution SSH fields through the fusion of heterogeneous SST data. In contrast to alternative deep learning approaches that involve directly stacking SSH and SST data as input channels in neural networks, our methodology utilizes bifurcated blocks comprising Residual Dense Module and Residual Feature Distillation Module to extract features from SSH and SST data, respectively. A pixelshuffle module-based upscaling block is then concatenated to map these features into a common latent space. Employing a hybrid strategy involving adversarial training and transfer learning, we overcome the limitation that SST and SSH data should share the same time dimension and achieve significant resolution enhancement in SSH reconstruction. Experimental results demonstrate that, when compared to interpolation method, TLGAN effectively reduces reconstruction errors and fusing SST data could significantly enhance in generating more realistic and physically plausible results.","PeriodicalId":20944,"journal":{"name":"Remote. Sens.","volume":"24 4","pages":"763"},"PeriodicalIF":0.0,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140441288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Remote. Sens. Pub Date: 2024-02-22 DOI: 10.3390/rs16050764
Changjing Wang, Hongmin Zhou, Guodong Zhang, Jianguo Duan, Moxiao Lin
{"title":"High Spatial Resolution Leaf Area Index Estimation for Woodland in Saihanba Forestry Center, China","authors":"Changjing Wang, Hongmin Zhou, Guodong Zhang, Jianguo Duan, Moxiao Lin","doi":"10.3390/rs16050764","DOIUrl":"https://doi.org/10.3390/rs16050764","url":null,"abstract":"Owing to advancements in satellite remote sensing technology, the acquisition of global land surface parameters, notably, the leaf area index (LAI), has become increasingly accessible. The Sentinel-2 (S2) satellite plays an important role in the monitoring of ecological environments and resource management. The prevalent use of the 20 m spatial resolution band in S2-based inversion models imposes significant limitations on the applicability of S2 data in applications requiring finer spatial resolution. Furthermore, although a substantial body of research on LAI retrieval using S2 data concentrates on agricultural landscapes, studies dedicated to forest ecosystems, although increasing, remain relatively less prevalent. This study aims to establish a viable methodology for retrieving 10 m resolution LAI data in forested regions. The empirical model of the soil adjusted vegetation index (SAVI), the backpack neural network based on simulated annealing (SA-BP) algorithm, and the variational heteroscedastic Gaussian process regression (VHGPR) model are established in this experiment based on the LAI data measured and the corresponding 10 m spatial resolution S2 satellite surface reflectance data in the Saihanba Forestry Center (SFC). The LAI retrieval performance of the three models is then validated using field data, and the error sources of the best performing VHGPR models (R2 of 0.8696 and RMSE of 0.5078) are further analyzed. Moreover, the VHGPR model stands out for its capacity to quantify the uncertainty in LAI estimation, presenting a notable advantage in assessing the significance of input data, eliminating redundant bands, and being well suited for uncertainty estimation. This feature is particularly valuable in generating accurate LAI products, especially in regions characterized by diverse forest compositions.","PeriodicalId":20944,"journal":{"name":"Remote. Sens.","volume":"10 14","pages":"764"},"PeriodicalIF":0.0,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140441561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Remote. Sens. Pub Date: 2024-02-21 DOI: 10.3390/rs16050756
Ying-Chih Lai, Tzu-Yun Lin
{"title":"Vision-Based Mid-Air Object Detection and Avoidance Approach for Small Unmanned Aerial Vehicles with Deep Learning and Risk Assessment","authors":"Ying-Chih Lai, Tzu-Yun Lin","doi":"10.3390/rs16050756","DOIUrl":"https://doi.org/10.3390/rs16050756","url":null,"abstract":"With the increasing demand for unmanned aerial vehicles (UAVs), the number of UAVs in the airspace and the risk of mid-air collisions caused by UAVs are increasing. Therefore, detect and avoid (DAA) technology for UAVs has become a crucial element for mid-air collision avoidance. This study presents a collision avoidance approach for UAVs equipped with a monocular camera to detect small fixed-wing intruders. The proposed system can detect any size of UAV over a long range. The development process consists of three phases: long-distance object detection, object region estimation, and collision risk assessment and collision avoidance. For long-distance object detection, an optical flow-based background subtraction method is utilized to detect an intruder far away from the host. A mask region-based convolutional neural network (Mask R-CNN) model is trained to estimate the region of the intruder in the image. Finally, the collision risk assessment adopts the area expansion rate and bearing angle of the intruder in the images to conduct mid-air collision avoidance based on visual flight rules (VFRs) and conflict areas. The proposed collision avoidance approach is verified by both simulations and experiments. The results show that the system can successfully detect different sizes of fixed-wing intruders, estimate their regions, and assess the risk of collision at least 10 s in advance before the expected collision would happen.","PeriodicalId":20944,"journal":{"name":"Remote. Sens.","volume":"8 2","pages":"756"},"PeriodicalIF":0.0,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140442584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Remote. Sens. Pub Date: 2024-02-21 DOI: 10.3390/rs16050754
Minghao Sun, Song-hua Liu, Lixin Guo
{"title":"Scattering Field Intensity and Orbital Angular Momentum Spectral Distribution of Vortex Electromagnetic Beams Scattered by Electrically Large Targets Comprising Different Materials","authors":"Minghao Sun, Song-hua Liu, Lixin Guo","doi":"10.3390/rs16050754","DOIUrl":"https://doi.org/10.3390/rs16050754","url":null,"abstract":"In this study, we obtained the intensity and orbital angular momentum (OAM) spectral distribution of the scattering fields of vortex electromagnetic beams illuminating electrically large targets composed of different materials. We used the angular spectral decomposition method to decompose a vortex beam into plane waves in the spectral domain at different elevations and azimuths. We combined this method with the physical optics algorithm to calculate the scattering field distribution. The OAM spectra of the scattering field along different observation radii were analyzed using the spiral spectrum expansion method. The numerical results indicate that for beams with different parameters (such as polarization, topological charge, half-cone angle, and frequency) and targets with different characteristics (such as composition), the scattering field intensity distribution and OAM spectral characteristics varied considerably. When the beam parameters change, the results of scattering from different materials show similar changing trends. Compared with beams scattered by uncoated metal and dielectric targets, the scattering field of the coating target can better maintain the shape and OAM mode of beams from the incident field. The scattering characteristics of metal targets were the most sensitive to beam-parameter changes. The relationship between the beam parameters, target parameters, the scattering field intensity, and the OAM spectra of the scattering field was constructed, confirming that the spiral spectrum of the scattering field carries the target information. These findings can be used in remote sensing engineering to supplement existing radar imaging, laying the foundation for further identification of beam or target parameters.","PeriodicalId":20944,"journal":{"name":"Remote. Sens.","volume":"97 ","pages":"754"},"PeriodicalIF":0.0,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140445416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Remote. Sens. Pub Date: 2024-02-21 DOI: 10.3390/rs16050749
Randa Qashoa, Vithurshan Suthakar, Gabriel Chianelli, Perushan Kunalakantha, Regina S. K. Lee
{"title":"Technology Demonstration of Space Situational Awareness (SSA) Mission on Stratospheric Balloon Platform","authors":"Randa Qashoa, Vithurshan Suthakar, Gabriel Chianelli, Perushan Kunalakantha, Regina S. K. Lee","doi":"10.3390/rs16050749","DOIUrl":"https://doi.org/10.3390/rs16050749","url":null,"abstract":"As the number of resident space objects (RSOs) orbiting Earth increases, the risk of collision increases, and mitigating this risk requires the detection, identification, characterization, and tracking of as many RSOs as possible in view at any given time, an area of research referred to as Space Situational Awareness (SSA). In order to develop algorithms for RSO detection and characterization, starfield images containing RSOs are needed. Such images can be obtained from star trackers, which have traditionally been used for attitude determination. Despite their low resolution, star tracker images have the potential to be useful for SSA. Using star trackers in this dual-purpose manner offers the benefit of leveraging existing star tracker technology already in orbit, eliminating the need for new and costly equipment to be launched into space. In August 2022, we launched a CubeSat-class payload, Resident Space Object Near-space Astrometric Research (RSONAR), on a stratospheric balloon. The primary objective of the payload was to demonstrate a dual-purpose star tracker for imaging and analyzing RSOs from a space-like environment, aiding in the field of SSA. Building on the experience and lessons learned from the 2022 campaign, we developed a next-generation dual-purpose camera in a 4U-inspired CubeSat platform, named RSONAR II. This payload was successfully launched in August 2023. With the RSONAR II payload, we developed a real-time, multi-purpose imaging system with two main cameras of varying cost that can adjust imaging parameters in real-time to evaluate the effectiveness of each configuration for RSO imaging. We also performed onboard RSO detection and attitude determination to verify the performance of our algorithms. Additionally, we implemented a downlink capability to verify payload performance during flight. To add a wider variety of images for testing our algorithms, we altered the resolution of one of the cameras throughout the mission. In this paper, we demonstrate a dual-purpose star tracker system for future SSA missions and compare two different sensor options for RSO imaging.","PeriodicalId":20944,"journal":{"name":"Remote. Sens.","volume":"21 2","pages":"749"},"PeriodicalIF":0.0,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140444420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Remote. Sens. Pub Date: 2024-01-27 DOI: 10.3390/rs16030494
Kedong Wang, M. Jia, Xiaohai Zhang, Chuanpeng Zhao, Rong Zhang, Zongming Wang
{"title":"Evaluating Ecosystem Service Value Changes in Mangrove Forests in Guangxi, China, from 2016 to 2020","authors":"Kedong Wang, M. Jia, Xiaohai Zhang, Chuanpeng Zhao, Rong Zhang, Zongming Wang","doi":"10.3390/rs16030494","DOIUrl":"https://doi.org/10.3390/rs16030494","url":null,"abstract":"Mangrove forests play a vital role in maintaining ecological balance in coastal regions. Accurately assessing changes in the ecosystem service value (ESV) of these mangrove forests requires more precise distribution data and an appropriate set of evaluation methods. In this study, we accurately mapped the spatial distribution data and patterns of mangrove forests in Guangxi province in 2016 and 2020, using 10 m spatial resolution Sentinel-2 imagery, and conducted a comprehensive evaluation of ESV provided by mangrove forests. The results showed that (1) from 2016 to 2020, mangrove forests in Guangxi demonstrated a positive development trend and were undergoing a process of recovery. The area of mangrove forests in Guangxi increased from 6245.15 ha in 2016 to 6750.01 ha in 2020, with a net increase of 504.81 ha, which was mainly concentrated in Lianzhou Bay, Tieshan Harbour, and Dandou Bay; (2) the ESV of mangrove forests was USD 363.78 million in 2016 and USD 390.74 million in 2020; (3) the value of fishery, soil conservation, wave absorption, and pollution purification comprises the largest proportions of the ESV of mangrove forests. This study provides valuable insights and information to enhance our understanding of the relationship between the spatial pattern of mangrove forests and their ecosystem service value.","PeriodicalId":20944,"journal":{"name":"Remote. Sens.","volume":"45 12","pages":"494"},"PeriodicalIF":0.0,"publicationDate":"2024-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140492263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Remote. Sens. Pub Date: 2024-01-16 DOI: 10.48550/arXiv.2401.08787
Wenwen Li, Chia-Yu Hsu, Sizhe Wang, Yezhou Yang, Hyunho Lee, Anna K. Liljedahl, C. Witharana, Yili Yang, Brendan M. Rogers, S. Arundel, Matthew B. Jones, Kenton McHenry, Patricia Solis
{"title":"Segment Anything Model Can Not Segment Anything: Assessing AI Foundation Model's Generalizability in Permafrost Mapping","authors":"Wenwen Li, Chia-Yu Hsu, Sizhe Wang, Yezhou Yang, Hyunho Lee, Anna K. Liljedahl, C. Witharana, Yili Yang, Brendan M. Rogers, S. Arundel, Matthew B. Jones, Kenton McHenry, Patricia Solis","doi":"10.48550/arXiv.2401.08787","DOIUrl":"https://doi.org/10.48550/arXiv.2401.08787","url":null,"abstract":"This paper assesses trending AI foundation models, especially emerging computer vision foundation models and their performance in natural landscape feature segmentation. While the term foundation model has quickly garnered interest from the geospatial domain, its definition remains vague. Hence, this paper will first introduce AI foundation models and their defining characteristics. Built upon the tremendous success achieved by Large Language Models (LLMs) as the foundation models for language tasks, this paper discusses the challenges of building foundation models for geospatial artificial intelligence (GeoAI) vision tasks. To evaluate the performance of large AI vision models, especially Meta’s Segment Anything Model (SAM), we implemented different instance segmentation pipelines that minimize the changes to SAM to leverage its power as a foundation model. A series of prompt strategies were developed to test SAM’s performance regarding its theoretical upper bound of predictive accuracy, zero-shot performance, and domain adaptability through fine-tuning. The analysis used two permafrost feature datasets, ice-wedge polygons and retrogressive thaw slumps because (1) these landform features are more challenging to segment than man-made features due to their complicated formation mechanisms, diverse forms, and vague boundaries; (2) their presence and changes are important indicators for Arctic warming and climate change. The results show that although promising, SAM still has room for improvement to support AI-augmented terrain mapping. The spatial and domain generalizability of this finding is further validated using a more general dataset EuroCrops for agricultural field mapping. Finally, we discuss future research directions that strengthen SAM’s applicability in challenging geospatial domains.","PeriodicalId":20944,"journal":{"name":"Remote. Sens.","volume":"35 5","pages":"797"},"PeriodicalIF":0.0,"publicationDate":"2024-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140505768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Remote. Sens. Pub Date: 2023-07-07 DOI: 10.3390/rs15133449
B. Han, X. Qu, Xiaopeng Yang, Zhengyan Zhang, Wolin Li
{"title":"DRFM Repeater Jamming Suppression Method Based on Joint Range-Angle Sparse Recovery and Beamforming for Distributed Array Radar","authors":"B. Han, X. Qu, Xiaopeng Yang, Zhengyan Zhang, Wolin Li","doi":"10.3390/rs15133449","DOIUrl":"https://doi.org/10.3390/rs15133449","url":null,"abstract":"Distributed array radar achieves high angular resolution and measurement accuracy, which could provide a solution to suppress digital radio frequency memory (DRFM) repeater jamming. However, owing to the large aperture of a distributed radar, the far-field plane wave assumption is no longer satisfied. Consequently, traditional adaptive beamforming methods cannot work effectively due to mismatched steering vectors. To address this issue, a DRFM repeater jamming suppression method based on joint range-angle sparse recovery and beamforming for distributed array radar is proposed in this paper. First, the steering vectors of the distributed array are reconstructed according to the spherical wave model under near-field conditions. Then, a joint range-angle sparse dictionary is generated using reconstructed steering vectors, and the range-angle position of jamming is estimated using the weighted L1-norm singular value decomposition (W-L1-SVD) algorithm. Finally, beamforming with joint range-angle nulling is implemented based on the linear constrained minimum variance (LCMV) algorithm for jamming suppression. The performance and effectiveness of proposed method is validated by simulations and experiments on an actual ground-based distributed array radar system.","PeriodicalId":20944,"journal":{"name":"Remote. Sens.","volume":"1 1","pages":"3449"},"PeriodicalIF":0.0,"publicationDate":"2023-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73213585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}