{"title":"Monitoring of land subsidence by combining small baseline subset interferometric synthetic aperture radar and generic atmospheric correction online service in Qingdao City, China","authors":"Xuepeng Li, Qiuxiang Tao, Yang Chen, Anye Hou, Ruixiang Liu, Yixin Xiao","doi":"10.1117/1.jrs.18.014506","DOIUrl":"https://doi.org/10.1117/1.jrs.18.014506","url":null,"abstract":"Owing to accelerated urbanization, land subsidence has damaged urban infrastructure and impeded sustainable economic and social development in Qingdao City, China. Combining interferometric synthetic aperture radar (InSAR) and generic atmospheric correction online service (GACOS), atmospheric correction has not yet been investigated for land subsidence in Qingdao. A small baseline subset of InSAR (SBAS InSAR), GACOS, and 28 Sentinel-1A images were combined to produce a land subsidence time series from January 2019 to December 2020 for the urban areas of Qingdao, and the spatiotemporal evolution of land subsidence before and after GACOS atmospheric correction was compared, analyzed, and verified using leveling data. Our work demonstrates that the overall surface condition of the Qingdao urban area is stable, and subsidence areas are mainly concentrated in the coastal area of Jiaozhou Bay, northwestern Jimo District, and northern Chengyang District. The GACOS atmospheric correction could reduce the root-mean-square error of the differential interferometric phase. The land subsidence time series after correction was in better agreement with the leveling-monitored results. It is effective to perform GACOS atmospheric correction to improve the accuracy of SBAS InSAR-monitored land subsidence over a large scale and long time series in coastal cities.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":"210 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139559661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"2023 List of Reviewers","authors":"","doi":"10.1117/1.jrs.18.010102","DOIUrl":"https://doi.org/10.1117/1.jrs.18.010102","url":null,"abstract":"JARS thanks the reviewers who served the journal in 2023.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":"10 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139408029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Plume motion characterization in unmanned aerial vehicle aerial video and imagery","authors":"Mehrube Mehrubeoglu, Kirk Cammarata, Hua Zhang, Lifford McLauchlan","doi":"10.1117/1.jrs.18.016501","DOIUrl":"https://doi.org/10.1117/1.jrs.18.016501","url":null,"abstract":"Sediment plumes are generated from both natural and human activities in benthic environments, increasing the turbidity of the water and reducing the amount of sunlight reaching the benthic vegetation. Seagrasses, which are photosynthetic bioindicators of their environment, are threatened by chronic reductions in sunlight, impacting entire aquatic food chains. Our research uses unmanned aerial vehicle (UAV) aerial video and imagery to investigate the characteristics of sediment plumes generated by a model of anthropogenic disturbance. The extent, speed, and motion of the plumes were assessed as these parameters may pertain to the potential impacts of plume turbidity on seagrass communities. In a case study using UAV video, the turbidity plume was observed to spread more than 200 ft over 20 min of the UAV campaign. The directional speed of the plume was estimated to be between 10.4 and 10.6 ft/min. This was corroborated by observation of the greatest plume turbidity and sediment load near the location of the disturbance and diminishing with distance. Further temporal studies are necessary to determine any long-term impacts of human activity-generated sediment plumes on seagrass beds.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":"14 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139092104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatiotemporal fusion convolutional neural network: tropical cyclone intensity estimation from multisource remote sensing images","authors":"Randi Fu, Haiyan Hu, Nan Wu, Zhening Liu, Wei Jin","doi":"10.1117/1.jrs.18.018501","DOIUrl":"https://doi.org/10.1117/1.jrs.18.018501","url":null,"abstract":"Utilizing multisource remote sensing images to accurately estimate tropical cyclone (TC) intensity is crucial and challenging. Traditional approaches rely on a single image for intensity estimation and lack the capability to perceive dynamic spatiotemporal information. Meanwhile, many existing deep learning methods sample from a time series of fixed length and depend on computation-intensive 3D feature extraction modules, limiting the model’s flexibility and scalability. By organically linking the genesis and dissipation mechanisms of a TC with computer vision techniques, we introduce a spatiotemporal fusion convolutional neural network that integrates three distinct improvement approaches. First, an a priori aware nonparametric fusion module is introduced to effectively fuse key features from multisource remote sensing data. Second, we design a scale-aware contraction–expansion module. This module effectively captures detailed features of the TC by connecting information from different scales through a weighted and up-sampling method. Finally, we propose a 1D–2D conditional sampling training method that balances single-step regression (for short sequences) and latent-variable-based temporal modeling (for long sequences) to achieve flexible spatiotemporal feature perception, thereby avoiding the data scale constraint imposed by fixed sequence lengths. Through qualitative and quantitative experimental comparisons, the proposed spatiotemporal fusion convolutional neural network achieved a root-mean-square error of 8.89 kt, marking a 29.7% improvement over the advanced Dvorak technique, and its efficacy in actual TC case analyses indicates its practical viability and potential for broader applications.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":"68 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139463955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EPAWFusion: multimodal fusion for 3D object detection based on enhanced points and adaptive weights","authors":"Xiang Sun, Shaojing Song, Fan Wu, Tingting Lu, Bohao Li, Zhiqing Miao","doi":"10.1117/1.jrs.18.017501","DOIUrl":"https://doi.org/10.1117/1.jrs.18.017501","url":null,"abstract":"Fusing LiDAR point cloud and camera image for 3D object detection in autonomous driving has emerged as a captivating research avenue. The core challenge of multimodal fusion is how to seamlessly fuse 3D LiDAR point cloud with 2D camera image. Although current approaches exhibit promising results, they often rely solely on fusion at either the data level, feature level, or object level, and there is still a room for improvement in the utilization of multimodal information. We present an advanced and effective multimodal fusion framework called EPAWFusion for fusing 3D point cloud and 2D camera image at both data level and feature level. EPAWFusion model consists of three key modules: a point enhanced module based on semantic segmentation for data-level fusion, an adaptive weight allocation module for feature-level fusion, and a detector based on 3D sparse convolution. The semantic information of the 2D image is extracted using semantic segmentation, and the calibration matrix is used to establish the point-pixel correspondence. The semantic information and distance information are then attached to the point cloud to achieve data-level fusion. The geometry features of enhanced point cloud are extracted by voxel encoding, and the texture features of image are obtained using a pretrained 2D CNN. Feature-level fusion is achieved via the adaptive weight allocation module. The fused features are fed into a 3D sparse convolution-based detector to obtain the accurate 3D objects. Experiment results demonstrate that EPAWFusion outperforms the baseline network MVXNet on the KITTI dataset for 3D detection of cars, pedestrians, and cyclists by 5.81%, 6.97%, and 3.88%. Additionally, EPAWFusion performs well for single-vehicle-side 3D object detection based on the experimental findings on DAIR-V2X dataset and the inference frame rate of our proposed model reaches 11.1 FPS. The two-layer level fusion of EPAWFusion significantly enhances the performance of multimodal 3D object detection.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":"16 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139464032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synthetic aperture radar image change detection using saliency detection and attention capsule network","authors":"Shaona Wang, Di Wang, Jia Shi, Zhenghua Zhang, Xiang Li, Yanmiao Guo","doi":"10.1117/1.jrs.18.016505","DOIUrl":"https://doi.org/10.1117/1.jrs.18.016505","url":null,"abstract":"Synthetic aperture radar (SAR) image change detection has been widely applied in a variety of fields as one of the research hotspots in remote sensing image processing. To increase the accuracy of SAR image change detection, an algorithm based on saliency detection and an attention capsule network is proposed. First, the difference image (DI) is processed using the saliency detection method. The DI’s most significant regions are extracted. Considering the saliency detection characteristics, we select training samples only from the DI’s most salient regions. The regions in the background are omitted. This results in a significant reduction in the number of training samples. Second, a capsule network based on an attention mechanism is constructed. The spatial attention model is capable of extracting the salient characteristics. Capsule networks enable precise classification. Finally, a final change map is obtained using capsule network to classify images. To compare the proposed method with the related methods, experiments are carried out on four real SAR datasets. The results show that the proposed method is effective in improving the exactitude of change detection.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":"66 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139656853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Continual domain adaptation on aerial images under gradually degrading weather","authors":"Chowdhury Sadman Jahan, Andreas Savakis","doi":"10.1117/1.jrs.18.016504","DOIUrl":"https://doi.org/10.1117/1.jrs.18.016504","url":null,"abstract":"Domain adaptation (DA) aims to reduce the effects of the distribution gap between the source domain where a model is trained and the target domain where the model is deployed. When a deep learning model is deployed on an aerial platform, it may face gradually degrading weather conditions during its operation, leading to gradually widening gaps between the source training data and the encountered target data. Because there are no existing datasets with gradually degrading weather, we generate four datasets by introducing progressively worsening clouds and snowflakes on aerial images. During deployment, unlabeled target domain samples are acquired in small batches, and adaptation is performed continually with each batch of incoming data, instead of assuming that the entire target dataset is available. We evaluate two continual DA models against a baseline standard DA model under gradually degrading conditions. All of these models are source-free, i.e., they operate without access to the source training data during adaptation. We utilize both convolutional and transformer architectures in the models for comparison. In our experiments, we find that continual DA methods perform better but sometimes encounter stability issues during adaptation. We propose gradient normalization as a simple but effective solution for managing instability during adaptation.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":"2 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139495300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LRSNet: a high-efficiency lightweight model for object detection in remote sensing","authors":"Shiliang Zhu, Min Miao, Yutong Wang","doi":"10.1117/1.jrs.18.016502","DOIUrl":"https://doi.org/10.1117/1.jrs.18.016502","url":null,"abstract":"Unmanned aerial vehicles (UAVs) exhibit the ability to flexibly conduct aerial remote-sensing imaging. By employing deep learning object-detection algorithms, they efficiently perceive objects, finding widespread application in various practical engineering tasks. Consequently, UAV-based remote sensing object detection technology holds considerable research value. However, the background of UAV remote sensing images is often complex, with varying shooting angles and heights leading to difficulties in unifying target scales and features. Moreover, there is the challenge of numerous densely distributed small targets. In addition, UAVs face significant limitations in terms of hardware resources. Against this background, we propose a lightweight remote sensing object detection network (LRSNet) model based on YOLOv5s. In the backbone of LRSNet, the lightweight network MobileNetV3 is used to substantially reduce the model’s computational complexity and parameter count. In the model’s neck, a multiscale feature pyramid network named CM-FPN is introduced to enhance the detection capability of small objects. CM-FPN comprises two key components: C3EGhost, based on GhostNet and efficient channel attention modules, and the multiscale feature fusion channel attention mechanism (MFFC). C3EGhost, serving as CM-FPN’s primary feature extraction module, possesses lower computational complexity and fewer parameters, as well as effectively reducing background interference. MFFC, as the feature fusion node of CM-FPN, can adaptively weight the fusion of shallow and deep features, acquiring more effective details and semantic information for object detection. LRSNet, evaluated on the NWPU VHR-10, DOTA V1.0, and VisDrone-2019 datasets, achieved mean average precision of 94.0%, 71.9%, and 35.6%, with Giga floating-point operations per second and Param (M) measuring only 5.8 and 4.1, respectively. This outcome affirms the efficiency of LRSNet in UAV-based remote-sensing object detection tasks.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":"21 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139415338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feasibility of remote estimation of optical turbulence via quick response code imaging","authors":"Burton Neuner III, Skylar D. Lilledahl, Kyle R. Drexler","doi":"10.1117/1.jrs.18.014505","DOIUrl":"https://doi.org/10.1117/1.jrs.18.014505","url":null,"abstract":"Turbulence estimation theory is presented and demonstrated by imaging a series of spatially encoded quick response (QR) codes in ambient radiation through atmospheric scintillation. This remote sensing concept was verified though preliminary feasibility experiments and detailed MATLAB simulations using QR codes displayed on a low-power digital e-ink screen. Of note, knowledge of propagation range and QR code dimensions are not required ahead of time, as each code contains information detailing its block size and overall physical size, enabling automated calculations of spatial resolution and target range. Estimation algorithms leverage the extracted resolution and range information to determine path-integrated optical turbulence, as quantified by the Fried parameter, r0. The estimation criterion is obtained by cycling a series of QR code sizes on an e-ink screen and determining the transition point at which the QR code can no longer be read, resulting in a system capable of automatically estimating path-integrated optical turbulence.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":"38 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139501516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Man-made object segmentation around reservoirs by an end-to-end two-phase deep learning-based workflow","authors":"Nayereh Hamidishad, Roberto Marcondes Cesar Jr.","doi":"10.1117/1.jrs.18.018502","DOIUrl":"https://doi.org/10.1117/1.jrs.18.018502","url":null,"abstract":"Reservoirs are fundamental infrastructures for the management of water resources. Constructions around them can negatively impact their water quality. Such constructions can be detected by segmenting man-made objects around reservoirs in the remote sensing (RS) images. Deep learning (DL) has attracted considerable attention in recent years as a method for segmenting the RS imagery into different land covers/uses and has achieved remarkable success. We develop an approach based on DL and image processing techniques for man-made object segmentation around the reservoirs. In order to segment man-made objects around the reservoirs in an end-to-end procedure, segmenting reservoirs and identifying the region of interest (RoI) around them are essential. In the proposed two-phase workflow, the reservoir is initially segmented using a DL model, and a postprocessing stage is proposed to remove errors, such as floating vegetation in the generated reservoir map. In the second phase, the RoI around the reservoir (RoIaR) is extracted using the proposed image processing techniques. Finally, the man-made objects in the RoIaR are segmented using a DL model. To illustrate the proposed approach, our task of interest is segmenting man-made objects around some of the most important reservoirs in Brazil. Therefore, we trained the proposed workflow using collected Google Earth images of eight reservoirs in Brazil over two different years. The U-Net-based and SegNet-based architectures are trained to segment the reservoirs. To segment man-made objects in the RoIaR, we trained and evaluated four architectures: U-Net, feature pyramid network, LinkNet, and pyramid scene parsing network. Although the collected data are highly diverse (for example, they belong to different states, seasons, resolutions, etc.), we achieved good performances in both phases. The F1-score of phase-1 and phase-2 highest performance models in segmenting test sets are 96.53% and 90.32%, respectively. Furthermore, applying the proposed postprocessing to the output of reservoir segmentation improves the precision in all studied reservoirs except two cases. We validated the prepared workflow with a reservoir dataset outside the training reservoirs. The F1-scores of the phase-1 segmentation stage, postprocessing stage, and phase-2 segmentation stage are 92.54%, 94.68%, and 88.11%, respectively, which show high generalization ability of the prepared workflow.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":"20 1","pages":""},"PeriodicalIF":1.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139554459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}