Aerial Visible-to-Infrared Image Translation: Dataset, Evaluation, and Baseline
Zonghao Han, Ziye Zhang, Shun Zhang, Ge Zhang, Shaohui Mei
Korean Journal of Remote Sensing, DOI: 10.34133/remotesensing.0096, published 2023-01-01

Abstract: Aerial visible-to-infrared image translation aims to transfer aerial visible images to their corresponding infrared images, which can effectively generate the infrared images of specific targets. Although some image-to-image translation algorithms have been applied to color-to-thermal natural images and achieved impressive results, they cannot be directly applied to aerial visible-to-infrared image translation due to the substantial differences between natural images and aerial images, including shooting angles, multi-scale targets, and complicated backgrounds. In order to verify the performance of existing image-to-image translation algorithms on aerial scenes as well as advance the development of aerial visible-to-infrared image translation, an Aerial Visible-to-Infrared Image Dataset (AVIID) is created, which is the first specialized dataset for aerial visible-to-infrared image translation and consists of over 3,000 paired visible-infrared images. On the constructed AVIID, a complete evaluation system is presented to evaluate the generated infrared images from 2 aspects: overall appearance and target quality. In addition, a comprehensive survey of existing image-to-image translation approaches that could be applied to aerial visible-to-infrared image translation is given. We then provide a performance analysis of a set of representative methods under our proposed evaluation system on AVIID, which can serve as baseline results for future work. Finally, we summarize some meaningful conclusions, problems of existing methods, and future research directions to advance state-of-the-art algorithms for aerial visible-to-infrared image translation.
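The abstract describes a two-aspect evaluation of generated infrared images: overall appearance and target quality. Purely as an illustrative sketch (the paper's actual metrics are not specified here; PSNR and the per-target bounding boxes below are assumptions, not the authors' evaluation system), such a split could look like this:

```python
import numpy as np

def psnr(reference: np.ndarray, generated: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a real and a generated infrared image
    (a common overall-appearance metric; higher is better)."""
    mse = np.mean((reference.astype(np.float64) - generated.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def target_quality(reference: np.ndarray, generated: np.ndarray, boxes) -> float:
    """Average PSNR restricted to target regions, given (x0, y0, x1, y1) boxes,
    so that small targets are not drowned out by the background."""
    scores = [psnr(reference[y0:y1, x0:x1], generated[y0:y1, x0:x1])
              for (x0, y0, x1, y1) in boxes]
    return float(np.mean(scores))
```

Evaluating full frames and target crops separately is what lets background-heavy aerial scenes avoid masking poor target synthesis.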
Automated mapping of global 30 m tidal flats using time-series Landsat imagery: algorithm and products
Xiao Zhang, Liangyun Liu, Jinqing Wang, Tingting Zhao, Wendi Liu, Xidong Chen
Korean Journal of Remote Sensing, DOI: 10.34133/remotesensing.0091, published 2023-01-01

Abstract: Tidal flats are an important part of coastal ecosystems and play an important role in shoreline protection and biodiversity maintenance. Although many efforts have been made in tidal flat mapping, an accurate global tidal flat product covering all coasts globally is still lacking and urgently needed. In this study, a novel method is proposed for the automated mapping of global tidal flats at 30 m (GTF30) in 2020 based on the Google Earth Engine, which is also the first global tidal flat dataset covering the high latitudes (>60°N). Specifically, we first propose a new spectral index named the LTideI index through a sensitivity analysis, which is robust and can accurately capture low-tide information. Second, globally distributed training samples are automatically generated by combining multisource datasets and the spatiotemporal refinement method. Third, the global coasts are divided into 588 5°×5° geographical tiles, and the local adaptive classification strategy is used to map tidal flats in each 5°×5° region by using multisourced training features and the derived globally distributed training samples. The statistical results show that the total global area of tidal flats is about 140,922.5 km², with more than 75% distributed on 3 continents in the Northern Hemisphere, especially in Asia (approximately 43.1% of the total). Finally, the GTF30 tidal flat dataset is quantitatively assessed using 13,994 samples, yielding a good overall accuracy of 90.34%. Meanwhile, the intercomparisons with several existing tidal flat datasets indicate that the GTF30 products can greatly improve the mapping accuracy of tidal flats. Therefore, the novel method can support the automated mapping of tidal flats, and the GTF30 dataset can provide scientific guidance and data support for protecting coastal ecosystems and supporting coastal economic and social development. The GTF30 tidal flat dataset in 2020 is freely accessible via https://doi.org/10.5281/zenodo.7936721.
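The third step above partitions the global coast into 5°×5° tiles and classifies each tile with a locally adapted model. A minimal sketch of the tiling arithmetic (an illustration only; the paper's actual tile bookkeeping and the LTideI index formula are not reproduced here):

```python
import math

def tile_id(lon: float, lat: float, size: float = 5.0) -> tuple:
    """Map a longitude/latitude coordinate to the lower-left corner of its
    size-degree geographical tile, e.g. (123.4, 37.9) -> (120.0, 35.0)."""
    tx = math.floor(lon / size) * size
    ty = math.floor(lat / size) * size
    return (tx, ty)
```

Grouping training samples and imagery by such tile keys is one straightforward way to train and apply a separate classifier per 5°×5° region.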
A Multi-factor Weighting Method for Improved Clear View Compositing using All Available Landsat 8 and Sentinel-2 Images in Google Earth Engine
Shili Meng, Yong Pang, Chengquan Huang, Zengyuan Li
Korean Journal of Remote Sensing, DOI: 10.34133/remotesensing.0086, published 2023-01-01

Abstract: The increasing availability of freely accessible remote sensing data has been crucial for improved global monitoring studies. Multisource image combination is a common approach for overcoming a major limitation associated with single-sensor data sources, which cannot provide adequate observations to fill data gaps arising from cloud contamination, shadows, and other atmospheric effects. In particular, image compositing is often used to generate clear view images over a large area. For example, the best available pixel (BAP) method has been proposed to construct clear view and spatially contiguous composites based on pixel-level quality rules. For any location with a bad observation, this method searches observations acquired on other dates and uses the one with the highest score to replace the contaminated observation. This, however, can lead to artificially large discontinuities along the edge of a filled area, which is typically caused by large phenological differences among the observations considered. To mitigate this issue, we developed a multifactor weighting (MFW) method for constructing clear view composites with a higher level of spatial continuity and radiometric consistency than those produced using the BAP method. Assessments over 4 study sites selected from different climate zones in China demonstrated that the composites produced using the MFW method were more consistent with reference images than those generated using the BAP method. Spectral agreements between MFW composites and the reference (R = 0.78 to 0.95) were generally higher than the agreements between BAP composites and the reference (R = 0.65 to 0.93). These results demonstrated that the proposed MFW method can provide a promising strategy for constructing clear view, seamless, and radiometrically consistent image composites for large-scale applications.
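The per-pixel selection that BAP-style compositing performs, as described above, can be sketched in a few lines. This is a hedged illustration only: the score array stands in for whatever weighted combination of quality factors (cloud distance, acquisition-date proximity, sensor) a given method uses, and is not the paper's MFW formulation.

```python
import numpy as np

def composite_best_pixel(stack: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """Per-pixel compositing: for each (row, col), keep the observation with
    the highest quality score.

    stack  : (T, H, W) array of T dated observations (NaN = missing/cloudy)
    scores : (T, H, W) array of per-observation quality scores
    """
    # Missing observations must never win, so force their score to -inf.
    s = np.where(np.isnan(stack), -np.inf, scores)
    best = np.argmax(s, axis=0)                      # (H, W) winning date index
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
```

A multifactor weighting would enter through how `scores` is built (e.g. blending phenological proximity with cloud distance), which is precisely where MFW differs from a plain BAP rule set.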