Sat2rain: Multiple Satellite Images to Rainfall Amounts Conversion By Improved GAN
Hidetomo Sakaino, A. Higuchi
2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA), December 2022
DOI: 10.1109/ICMLA55696.2022.00233
This paper presents a method for converting cloud images to precipitation images, based on an improved Generative Adversarial Network (GAN) that uses multiple satellite and radar images. Since heavy rainfall events have been increasing year by year across the globe, precipitation radar images over land, which provide much denser observations than ground sensor networks, are increasingly important for monitoring and prediction. However, the coverage of such radar sites is limited to small regions over land and/or near the coast. Satellite images, e.g., from Himawari-8, are available globally, but they show rain clouds rather than precipitation directly. GANs are a good choice for image-to-image translation, but they are known to lose sharp edges and fine textures. This paper proposes 'sat2rain', a two-step algorithm with a new constraint on the loss function. First, multiple satellite-band and topography images are input to the GAN, with block-wise tiles cut from the full mosaic so that a region of over 2500 km x 2500 km is covered. Second, enhanced GAN-based training between the satellite images and the radar images is conducted. Experimental results show the effectiveness of the proposed mesh-wise sat2rain method over the previous point-wise Random Forest method in terms of preserving edges and textures.
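The two data-handling ideas named in the abstract (cutting the full satellite mosaic into block-wise tiles, and adding an edge-preserving constraint to the GAN loss) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 1 km grid spacing, the 250-pixel block size and stride, and the gradient-difference form of the loss constraint are all assumptions for the sake of the example.

```python
import numpy as np

def tile_blocks(field: np.ndarray, block: int, stride: int) -> np.ndarray:
    """Cut a large 2-D mesh into square blocks for block-wise GAN training."""
    h, w = field.shape
    tiles = [
        field[y:y + block, x:x + block]
        for y in range(0, h - block + 1, stride)
        for x in range(0, w - block + 1, stride)
    ]
    return np.stack(tiles)

def edge_l1(pred: np.ndarray, target: np.ndarray) -> float:
    """Gradient-difference penalty: one common way to keep the edges and
    textures that a plain GAN loss tends to blur (an assumed form of the
    paper's new loss-function constraint, added on top of the usual
    adversarial term)."""
    dy_p, dx_p = np.gradient(pred)
    dy_t, dx_t = np.gradient(target)
    return float(np.abs(dy_p - dy_t).mean() + np.abs(dx_p - dx_t).mean())

# A 2500 x 2500 mesh standing in for the ~2500 km x 2500 km coverage
# at an assumed 1 km grid spacing.
mosaic = np.zeros((2500, 2500), dtype=np.float32)
tiles = tile_blocks(mosaic, block=250, stride=250)
print(tiles.shape)                   # (100, 250, 250)
print(edge_l1(tiles[0], tiles[0]))   # 0.0 for identical fields
```

In practice each tile would pair multi-band satellite and topography channels as the generator input with the co-located radar precipitation field as the target; the edge penalty would be weighted and summed with the adversarial loss during training.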