{"title":"Analysis of Spatiotemporal Properties and Modeling of the Nonisotropy of GNSS Tropospheric Slant Path Delay","authors":"Ying Xu;Hongzhan Zhou;Fangzhao Zhang;Zaozao Yang;Ruozhou Wang","doi":"10.1109/JSTARS.2025.3525501","DOIUrl":"https://doi.org/10.1109/JSTARS.2025.3525501","url":null,"abstract":"The slant path delay (SPD) exhibits “nonisotropy” in the horizontal direction, validated by ray tracing. This nonisotropy can cause decimeter-level errors in SPD, yet specific models and influencing factors remain under-researched. This study aims to quantify SPD nonisotropy with the nonisotropic value (ΔN), which represents the deviation between SPD and average SPD at corresponding elevations. We analyzed the spatiotemporal characteristics of nonisotropic SPD by estimating ΔN at 77 grid points (2019–2021, 1-day interval) and 804 grid points at different altitudes (2019–2021, 90-day interval). Using the IGG- scheme, we developed a nonisotropic SPD model considering azimuth continuity. We validated this model by incorporating VMF1 with horizontal gradient correction and VMF1 with horizontal gradient correction combined with the nonisotropic model into static PPP, tested at 16 IGS stations. Results indicate ΔN depends on time, latitude, altitude, elevation, and azimuth. The model categorizes SPD into positive anisotropy, undetermined isotropy, or negative anisotropy. For the 16 IGS stations, the nonisotropic model reduced the STD by 7.5%, 5.8%, and 2.8% in the E, N, and U directions, respectively, and decreased convergence time by 12.8%, 25.4%, and 1.4%. This confirms the model's effectiveness, offering a valuable tool for accurate SPD estimation and improved navigation under real atmospheric conditions.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"3879-3892"},"PeriodicalIF":4.7,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10820978","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HDSA-Net: Haze Density and Semantic Awareness Network for Hyperspectral Image Dehazing","authors":"Qianru Liu;Tiecheng Song;Anyong Qin;Yin Liu;Feng Yang;Chenqiang Gao","doi":"10.1109/JSTARS.2024.3525072","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3525072","url":null,"abstract":"Hyperspectral image (HSI) dehazing is a challenging task due to the complex imaging conditions. Existing deep learning-based dehazing methods neither fully consider the physical characteristics of HSIs nor take advantage of high-level semantic information to improve the dehazing performance. To remedy these issues, in this article we propose a Haze Density and Semantic Awareness Network (HDSA-Net) for HSI dehazing. Our dual-awareness network provides not only low-level physical information guidance but also high-level semantic guidance for haze removal. Specifically, we estimate the haze density by considering both internal spectral characteristics and external dehazing effects. Based on this, we build a Haze Density Awareness (HDA) block, which enables the network to perceive and focus on difficult dehazing regions with high density. Moreover, we design a Semantic information Extraction Block (SEB) based on the pretrained Segment Anything Model (SAM), followed by several Semantic information Perception Blocks (SPBs), to provide semantic guidance for HSI dehazing. In particular, SEB adapts SAM for the special HSI data, and SPBs enable the network to progressively recover semantic information via channel-level coarse guidance and pixel-level fine guidance. The experimental results on simulated and real datasets show the superiority of HDSA-Net over state-of-the-art methods.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"3989-4003"},"PeriodicalIF":4.7,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10820032","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DAShip: A Large-Scale Annotated Dataset for Ship Detection Using Distributed Acoustic Sensing Technique","authors":"Wenjin Huang;Shaoyi Chen;Yichang Wu;Ruihua Li;Tianrui Li;Yihua Huang;Xiaochun Cao;Zhaohui Li","doi":"10.1109/JSTARS.2024.3525082","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3525082","url":null,"abstract":"Ship detection and identification is a key part of maritime monitoring and safety. Ship monitoring methods based on coastal video surveillance, satellite imagery, and synthetic aperture radar have been well developed. As an emerging remote sensing technology, distributed acoustic sensing (DAS), which continuously detects vibrations along underwater optical fiber cables, enables all-weather, all-day, real-time ship detection and has the potential to detect dark ships. However, the reliance on expert knowledge for analyzing ship passage signals hinders the development of an automated framework and thus limits the application of DAS technology in ship detection. In addition, the scarcity of datasets for ship passage events in the DAS field hampers the adoption of deep learning technologies for enhancing ship detection. To address these challenges, an automatic annotation method is proposed, utilizing 18 625 cleaned ship records based on the automatic identification system (AIS) to annotate ship passages adaptively from 5-month DAS data. Thus, a large-scale, high-quality annotated dataset named DAShip is established, containing 55 875 ship passage samples. Furthermore, an online ship detection and identification framework is proposed to achieve real-time ship detection from the massive DAS data flow and further identify coarse-grained ship features, such as ship speed, heading, angle, and ship type. In this proposed framework, YOLO models, primarily trained on DAShip, are used as ship detectors and ship feature classifiers, achieving accurate dark ship detection when combined with AIS messages and demonstrating competitive performance in ship feature classification.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"4093-4107"},"PeriodicalIF":4.7,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10820076","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Few-shot Remote Sensing Imagery Recognition with Compositionality Inductive Bias in Hierarchical Representation Space","authors":"Shichao Zhou;Zhuowei Wang;Zekai Zhang;Wenzheng Wang;Yingrui Zhao;Yunpu Zhang","doi":"10.1109/JSTARS.2024.3524573","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3524573","url":null,"abstract":"Remote sensing scenes viewed from an aerial perspective can be constructed from distinct visual parts in a combinatorial number of different ways. Such combinatorial explosion poses great challenges to understanding remote sensing imagery (RSI) with few prior instances (i.e., few-shot RSI recognition). Despite the empirical success of existing methods such as data augmentation and knowledge transfer, no large-scale dataset can cover all possible combinations of visual parts. In this case, the prior knowledge learned by these data-driven methods may exhibit dataset bias, resulting in inadequate generalization to the current recognition task. In contrast to the naive data-driven strategies mentioned above, we instead pursue careful feature modeling by constraining the mapping behavior of deep neural networks. Specifically, we embed an inductive bias of compositionality into a hierarchical latent representation space, which operates on two aspects: 1) disentangled and reusable representation. We establish a clustering-oriented factorized representation with a mixture model to represent multipart distributions of tokens. Each cluster centroid represents a re-occurring part. New patches are allocated to the nearest cluster centroid, and then we obtain the posterior representation; 2) compositional and discriminative representation. We introduce a hierarchical context prediction mechanism for compositional representation learning, utilizing a predictive NCE loss function to encourage global remote sensing scenes to accurately predict similar local parts, thus automatically inferring compositional representations of high-level but discriminative latent concepts. Extensive experiments, including comparisons with state-of-the-art methods, sensitivity evaluations, and ablation studies, demonstrate comparable or even superior performance of our method in few-shot RSI recognition.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"3544-3555"},"PeriodicalIF":4.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10819630","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Are Mediators of Grief Reactions Better Predictors Than Risk Factors? A Study Testing the Role of Satisfaction With Rituals, Perceived Social Support, and Coping Strategies.","authors":"Jacques Cherblanc, Emmanuelle Zech, Susan Cadell, Isabelle Côté, Camille Boever, Manuel Fernández-Alcántara, Christiane Bergeron-Leclerc, Danielle Maltais, Geneviève Gauthier, Chantal Verdon, Josée Grenier, Chantale Simard","doi":"10.1177/10541373231191316","DOIUrl":"10.1177/10541373231191316","url":null,"abstract":"<p><p>The present study aimed to assess the mediating role of adjustment processes in known risk factors associated with prolonged grief disorder. Data were collected in March-April 2021 through an online survey of 542 Canadian adults bereaved since March 2020. The mediating role of satisfaction with funeral rituals, bereavement support, and coping strategies on grief outcomes was tested using structural equation modeling. Results showed that such adjustment processes played a significant role in the grief process and that they were better predictors than risk factors alone. Since they are more amenable determinants of grief reactions, they should be further studied using a longitudinal design.</p>","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"14 1","pages":"22-43"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11530346/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74407375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual Route Recognition in Urban Spaces: A Scalable Approach Using Open Street View Data","authors":"Menglin Wu;Qingren Jia;Anran Yang;Zhinong Zhong;Mengyu Ma;Luo Chen;Ning Jing","doi":"10.1109/JSTARS.2024.3524296","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3524296","url":null,"abstract":"This article presents a novel pipeline for visual route recognition (VRR) in large-scale urban environments, leveraging open street view data. The proposed approach aims to identify the path of a video recorder by analyzing visual cues from continuous video frames and street landmarks, evaluated through datasets from New York and Taipei City. The pipeline begins with semantic visual geo-localization (SemVG), a semantic fused feature extraction network that filters out nonlandmark noise, generating robust visual representations. We construct a feature database from multiperspective street view images to enable efficient feature retrieval for query video frames. In addition, we introduce a spatio-temporal trajectory reconstruction method that corrects mismatches in the camera's motion path, ensuring consistency. Our contributions include the development of SemVG, a method for maintaining spatio-temporal consistency in trajectory reconstruction, and a large-scale Taipei dataset designed for VRR. This work has implications for urban surveillance, law enforcement, and smart city applications, supporting urban planning, resource management, search and rescue, and augmented reality navigation by improving localization without specialized hardware.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"4004-4019"},"PeriodicalIF":4.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10819660","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143105437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Frequency-Aware Integrity Learning Network for Semantic Segmentation of Remote Sensing Images","authors":"Penghan Yang;Wujie Zhou;Yuanyuan Liu","doi":"10.1109/JSTARS.2024.3524753","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3524753","url":null,"abstract":"The semantic segmentation of remote sensing images is crucial for computer perception tasks. Integrating dual-modal information enhances semantic understanding. However, existing segmentation methods often suffer from incomplete feature information (features without integrity), leading to inadequate segmentation of pixels near object boundaries. This study introduces the concept of integrity in semantic segmentation and presents a complete integrity learning network using contextual semantics in the multiscale feature decoding process. Specifically, we propose a frequency-aware integrity learning network (FILNet) that compensates for missing features by capturing a shared integrity feature, enabling accurate differentiation between object categories and precise pixel segmentation. First, we design a frequency-driven awareness generator that produces an awareness map by extracting frequency-domain features with high-level semantics, guiding the multiscale feature aggregation process. Second, we implement a split–fuse–replenish strategy, which divides features into two branches for feature extraction and information replenishment, followed by cross-modal fusion and direct connection for information replenishment, resulting in fused features. Finally, we present an integrity assignment and enhancement method that leverages a capsule network to learn the correlation of multiscale features, generating a shared integrity feature. This feature is assigned to multiscale features to enhance their integrity, leading to accurate predictions facilitated by an adaptive large kernel module. Experiments on the Vaihingen and Potsdam datasets demonstrate that our method outperforms current state-of-the-art segmentation techniques.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"3398-3409"},"PeriodicalIF":4.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10819987","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MSF-GhostNet: Computationally Efficient YOLO for Detecting Drones in Low-Light Conditions","authors":"Maham Misbah;Misha Urooj Khan;Zeeshan Kaleem;Ali Muqaibel;Muhamad Zeshan Alam;Ran Liu;Chau Yuen","doi":"10.1109/JSTARS.2024.3524379","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3524379","url":null,"abstract":"Uncrewed aerial vehicles (UAVs) are popular in various applications due to their mobility, size, and user-friendliness. However, identifying malicious UAVs presents challenges beyond those encountered in general image-based object detection. These challenges arise because UAVs can fly at different altitudes, making it difficult to distinguish them from other flying objects and identify their size. In addition, the speed of UAVs adds to the difficulty of capturing clear images of them, which can lead to blurring, particularly in complex backgrounds. To address these challenges, we present an improved YOLOv5 architecture named multiscale feature map GhostNet (MSF-GhostNet), introducing GhostConv and C3Ghost modules to reduce redundant operations in the head and neck. We also propose three feature map combinations to evaluate the performance on multiscale and multitarget flying objects, including drones, birds, planes, and helicopters. This approach significantly reduces the waste of computing resources when detecting small-sized flying objects. We also integrated autoanchor and batch size mechanisms to ensure efficient model training and avoid overfitting. Our proposed model showed 1.25% fewer false positives than the state-of-the-art GhostNet-YOLOv5 model. The proposed MSF-GhostNet outperformed GhostNet-YOLOv5 with higher precision, recall, and F1 scores (1.3%, 5.3%, and 3.7%, respectively) and reduced model parameters and model size by 3.1% and 4.1%, respectively. The proposed solution also outperformed several other state-of-the-art algorithms reported in the literature.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"3840-3851"},"PeriodicalIF":4.7,"publicationDate":"2024-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10818706","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Lightweight Network for Ship Detection in SAR Images Based on Edge Feature Aware and Fusion","authors":"Yuming Li;Jin Liu;Xingye Li;Xiliang Zhang;Zhongdai Wu;Bing Han","doi":"10.1109/JSTARS.2024.3524402","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3524402","url":null,"abstract":"Recently, with the increasing adoption of synthetic aperture radar (SAR) ship detection methods on mobile platforms, the lightweighting of detection methods has become a research focus. Despite certain achievements, there are still several limitations: 1) Existing studies have mainly focused on reducing model complexity through shallow network structures. However, this approach frequently results in performance degradation, as these studies neglect a thorough investigation of the balance between inference speed and detection accuracy. 2) Under lightweight network structures, the rich edge features contained in SAR images, which are crucial for distinguishing ship targets from complex backgrounds, are often underutilized. To address these issues, we propose a novel lightweight detection method based on edge feature aware and fusion. Specifically, to effectively extract edge features, we introduce an Edge Feature-Aware (EFA) network that incorporates a multiscale channel attention module. Furthermore, a lightweight feature fusion network, Filter-Pruned Bi-directional Feature Pyramid Network (FP-BiFPN), is carefully designed, which not only suppresses background information but also accentuates ship targets. Finally, we propose a selective quantization algorithm based on a bit-width selection mechanism to reduce model memory usage without compromising performance. To validate the superiority of our proposed method, we conduct extensive experiments on multiple public datasets, achieving average accuracy scores of 94.2%, 97.6%, and 97.7% on the HRSID, SAR-Ship-Dataset, and SSDD, respectively, with a model parameter size of only 3.36 M and a fastest single-frame processing time of 7.2 ms.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"3782-3796"},"PeriodicalIF":4.7,"publicationDate":"2024-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10818772","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Hybrid Method of PROSAIL RTM for the Retrieval Canopy LAI and Chlorophyll Content of Moso Bamboo (Phyllostachys pubescens) Forests From Sentinel-2 MSI Data","authors":"Zhanghua Xu;Chaofei Zhang;Songyang Xiang;Lingyan Chen;Xier Yu;Haitao Li;Zenglu Li;Xiaoyu Guo;Huafeng Zhang;Xuying Huang;Fengying Guan","doi":"10.1109/JSTARS.2024.3522774","DOIUrl":"https://doi.org/10.1109/JSTARS.2024.3522774","url":null,"abstract":"Leaf area index (LAI) and chlorophyll content are crucial variables in photosynthesis, respiration, and transpiration, playing a vital role in monitoring vegetation stress, estimating productivity, and evaluating carbon cycling processes. Currently, physical models are widely adopted for estimating LAI and canopy chlorophyll content (CCC). However, the main challenges of physical model-based methods for estimating LAI and CCC are the high computational cost and the fact that different combinations of canopy variables can produce similar spectral reflectance, leading to local minima. To address this limitation, a hybrid model was proposed to invert the LAI and CCC in Moso bamboo (<italic>Phyllostachys pubescens</italic>) forests. This approach utilized the PROSAIL canopy radiation transfer model, established a look-up table (LUT) for LAI and CCC, and employed the Stacking ensemble learning framework. Compared with the PROSAIL LUT method, the hybrid model demonstrated higher performance in predicting LAI and CCC by incorporating the strengths of different models within the hybrid framework. The R<sup>2</sup> values between predicted and measured values were improved by 3.28% and 7.15%, while the RMSE values were reduced by 19.71% and 16.14%, respectively. Moreover, the hybrid model based on Stacking ensemble learning achieved an 86% reduction in running time. Therefore, the hybrid model, which integrates the PROSAIL model with the Stacking ensemble learning framework, offers a more efficient and accurate approach for remotely estimating the LAI and CCC in Moso bamboo forests. The high efficiency of this method makes it promising and suitable for application to other types of vegetation.","PeriodicalId":13116,"journal":{"name":"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing","volume":"18 ","pages":"3125-3143"},"PeriodicalIF":4.7,"publicationDate":"2024-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10818736","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}