{"title":"时空融合卷积神经网络:从多源遥感图像估算热带气旋强度","authors":"Randi Fu, Haiyan Hu, Nan Wu, Zhening Liu, Wei Jin","doi":"10.1117/1.jrs.18.018501","DOIUrl":null,"url":null,"abstract":"Utilizing multisource remote sensing images to accurately estimate tropical cyclone (TC) intensity is crucial and challenging. Traditional approaches rely on a single image for intensity estimation and lack the capability to perceive dynamic spatiotemporal information. Meanwhile, many existing deep learning methods sample from a time series of fixed length and depend on computation-intensive 3D feature extraction modules, limiting the model’s flexibility and scalability. By organically linking the genesis and dissipation mechanisms of a TC with computer vision techniques, we introduce a spatiotemporal fusion convolutional neural network that integrates three distinct improvement approaches. First, an a priori aware nonparametric fusion module is introduced to effectively fuse key features from multisource remote sensing data. Second, we design a scale-aware contraction–expansion module. This module effectively captures detailed features of the TC by connecting information from different scales through a weighted and up-sampling method. Finally, we propose a 1D–2D conditional sampling training method that balances single-step regression (for short sequences) and latent-variable-based temporal modeling (for long sequences) to achieve flexible spatiotemporal feature perception, thereby avoiding the data scale constraint imposed by fixed sequence lengths. Through qualitative and quantitative experimental comparisons, the proposed spatiotemporal fusion convolutional neural network achieved a root-mean-square error of 8.89 kt, marking a 29.7% improvement over the advanced Dvorak technique, and its efficacy in actual TC case analyses indicates its practical viability and potential for broader applications.","PeriodicalId":54879,"journal":{"name":"Journal of Applied Remote Sensing","volume":"68 1","pages":""},"PeriodicalIF":1.4000,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Spatiotemporal fusion convolutional neural network: tropical cyclone intensity estimation from multisource remote sensing images\",\"authors\":\"Randi Fu, Haiyan Hu, Nan Wu, Zhening Liu, Wei Jin\",\"doi\":\"10.1117/1.jrs.18.018501\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Utilizing multisource remote sensing images to accurately estimate tropical cyclone (TC) intensity is crucial and challenging. Traditional approaches rely on a single image for intensity estimation and lack the capability to perceive dynamic spatiotemporal information. Meanwhile, many existing deep learning methods sample from a time series of fixed length and depend on computation-intensive 3D feature extraction modules, limiting the model’s flexibility and scalability. By organically linking the genesis and dissipation mechanisms of a TC with computer vision techniques, we introduce a spatiotemporal fusion convolutional neural network that integrates three distinct improvement approaches. First, an a priori aware nonparametric fusion module is introduced to effectively fuse key features from multisource remote sensing data. Second, we design a scale-aware contraction–expansion module. This module effectively captures detailed features of the TC by connecting information from different scales through a weighted and up-sampling method. Finally, we propose a 1D–2D conditional sampling training method that balances single-step regression (for short sequences) and latent-variable-based temporal modeling (for long sequences) to achieve flexible spatiotemporal feature perception, thereby avoiding the data scale constraint imposed by fixed sequence lengths. Through qualitative and quantitative experimental comparisons, the proposed spatiotemporal fusion convolutional neural network achieved a root-mean-square error of 8.89 kt, marking a 29.7% improvement over the advanced Dvorak technique, and its efficacy in actual TC case analyses indicates its practical viability and potential for broader applications.\",\"PeriodicalId\":54879,\"journal\":{\"name\":\"Journal of Applied Remote Sensing\",\"volume\":\"68 1\",\"pages\":\"\"},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2024-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Applied Remote Sensing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1117/1.jrs.18.018501\",\"RegionNum\":4,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"ENVIRONMENTAL SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Applied Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1117/1.jrs.18.018501","RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ENVIRONMENTAL SCIENCES","Score":null,"Total":0}
Utilizing multisource remote sensing images to accurately estimate tropical cyclone (TC) intensity is crucial and challenging. Traditional approaches rely on a single image for intensity estimation and lack the capability to perceive dynamic spatiotemporal information. Meanwhile, many existing deep learning methods sample from a time series of fixed length and depend on computation-intensive 3D feature extraction modules, limiting the model’s flexibility and scalability. By organically linking the genesis and dissipation mechanisms of a TC with computer vision techniques, we introduce a spatiotemporal fusion convolutional neural network that integrates three distinct improvement approaches. First, an a priori aware nonparametric fusion module is introduced to effectively fuse key features from multisource remote sensing data. Second, we design a scale-aware contraction–expansion module. This module effectively captures detailed features of the TC by connecting information from different scales through a weighted and up-sampling method. Finally, we propose a 1D–2D conditional sampling training method that balances single-step regression (for short sequences) and latent-variable-based temporal modeling (for long sequences) to achieve flexible spatiotemporal feature perception, thereby avoiding the data scale constraint imposed by fixed sequence lengths. Through qualitative and quantitative experimental comparisons, the proposed spatiotemporal fusion convolutional neural network achieved a root-mean-square error of 8.89 kt, marking a 29.7% improvement over the advanced Dvorak technique, and its efficacy in actual TC case analyses indicates its practical viability and potential for broader applications.
期刊介绍:
The Journal of Applied Remote Sensing is a peer-reviewed journal that optimizes the communication of concepts, information, and progress among the remote sensing community.