{"title":"跨模态交互和全局特征融合的RGB-T语义分割","authors":"Zhiwei Zhang;Yisha Liu;Weimin Xue;Yan Zhuang","doi":"10.1109/TETCI.2024.3462168","DOIUrl":null,"url":null,"abstract":"RGB-T semantic segmentation aims to enhance the robustness of segmentation methods in complex environments by utilizing thermal information. To facilitate the effective interaction and fusion of multimodal information, we propose a novel Cross-modality Interaction and Global-feature Fusion Network, namely CIGF-Net. In each feature extraction stage, we propose a Cross-modality Interaction Module (CIM) to enable effective interaction between the RGB and thermal modalities. CIM utilizes channel and spatial attention mechanisms to process the feature information from both modalities. By encouraging cross-modal information exchange, the CIM facilitates the integration of complementary information and improves the overall segmentation performance. Subsequently, the Global-feature Fusion Module (GFM) is proposed to focus on fusing the information provided by the CIM. GFM assigns different weights to the multimodal features to achieve cross-modality fusion. 
Experimental results show that CIGF-Net achieves state-of-the-art performance on RGB-T image semantic segmentation datasets, with a remarkable 60.8 mIoU on the MFNet dataset and 86.93 mIoU on the PST900 dataset.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 3","pages":"2440-2451"},"PeriodicalIF":5.3000,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CIGF-Net: Cross-Modality Interaction and Global-Feature Fusion for RGB-T Semantic Segmentation\",\"authors\":\"Zhiwei Zhang;Yisha Liu;Weimin Xue;Yan Zhuang\",\"doi\":\"10.1109/TETCI.2024.3462168\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"RGB-T semantic segmentation aims to enhance the robustness of segmentation methods in complex environments by utilizing thermal information. To facilitate the effective interaction and fusion of multimodal information, we propose a novel Cross-modality Interaction and Global-feature Fusion Network, namely CIGF-Net. In each feature extraction stage, we propose a Cross-modality Interaction Module (CIM) to enable effective interaction between the RGB and thermal modalities. CIM utilizes channel and spatial attention mechanisms to process the feature information from both modalities. By encouraging cross-modal information exchange, the CIM facilitates the integration of complementary information and improves the overall segmentation performance. Subsequently, the Global-feature Fusion Module (GFM) is proposed to focus on fusing the information provided by the CIM. GFM assigns different weights to the multimodal features to achieve cross-modality fusion. 
Experimental results show that CIGF-Net achieves state-of-the-art performance on RGB-T image semantic segmentation datasets, with a remarkable 60.8 mIoU on the MFNet dataset and 86.93 mIoU on the PST900 dataset.\",\"PeriodicalId\":13135,\"journal\":{\"name\":\"IEEE Transactions on Emerging Topics in Computational Intelligence\",\"volume\":\"9 3\",\"pages\":\"2440-2451\"},\"PeriodicalIF\":5.3000,\"publicationDate\":\"2024-09-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Emerging Topics in Computational Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10689460/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Emerging Topics in Computational Intelligence","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10689460/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
CIGF-Net: Cross-Modality Interaction and Global-Feature Fusion for RGB-T Semantic Segmentation
RGB-T semantic segmentation aims to enhance the robustness of segmentation methods in complex environments by utilizing thermal information. To facilitate the effective interaction and fusion of multimodal information, we propose a novel Cross-modality Interaction and Global-feature Fusion Network, namely CIGF-Net. In each feature extraction stage, we propose a Cross-modality Interaction Module (CIM) to enable effective interaction between the RGB and thermal modalities. CIM utilizes channel and spatial attention mechanisms to process the feature information from both modalities. By encouraging cross-modal information exchange, the CIM facilitates the integration of complementary information and improves the overall segmentation performance. Subsequently, the Global-feature Fusion Module (GFM) is proposed to focus on fusing the information provided by the CIM. GFM assigns different weights to the multimodal features to achieve cross-modality fusion. Experimental results show that CIGF-Net achieves state-of-the-art performance on RGB-T image semantic segmentation datasets, with a remarkable 60.8 mIoU on the MFNet dataset and 86.93 mIoU on the PST900 dataset.
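The abstract only sketches the two modules at a high level. As a rough illustration, the sketch below shows one plausible way channel and spatial attention could let two modalities exchange information and then be fused with learned-style weights. This is a minimal NumPy toy, not the authors' implementation: the function names, the exact attention formulations, and the scalar fusion weights are all assumptions made for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # Global average pooling over spatial dims -> one weight per channel.
    w = sigmoid(feat.mean(axis=(1, 2)))            # shape (C,)
    return w[:, None, None]                        # broadcastable to (C, H, W)

def spatial_attention(feat):
    # Mean over channels -> one weight per pixel.
    return sigmoid(feat.mean(axis=0, keepdims=True))  # shape (1, H, W)

def cross_modal_interaction(rgb, thermal):
    """Toy stand-in for a CIM-like block: each modality receives the other
    modality's features, gated by its own channel and spatial attention,
    via a residual addition."""
    rgb_out = rgb + thermal * channel_attention(rgb) * spatial_attention(rgb)
    th_out = thermal + rgb * channel_attention(thermal) * spatial_attention(thermal)
    return rgb_out, th_out

def global_feature_fusion(rgb, thermal):
    """Toy stand-in for a GFM-like block: softmax-normalized scalar
    weights decide each modality's contribution to the fused map."""
    scores = np.array([rgb.mean(), thermal.mean()])
    w = np.exp(scores) / np.exp(scores).sum()
    return w[0] * rgb + w[1] * thermal

# Example: fuse random (C, H, W) feature maps for both modalities.
rng = np.random.default_rng(0)
rgb = rng.standard_normal((8, 16, 16))
thermal = rng.standard_normal((8, 16, 16))
r, t = cross_modal_interaction(rgb, thermal)
fused = global_feature_fusion(r, t)
print(fused.shape)  # (8, 16, 16)
```

In the actual network these blocks would be learned (convolutional attention layers and trainable fusion weights) and applied at every encoder stage; the sketch only conveys the data flow of interaction followed by weighted fusion.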
Journal introduction:
The IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) publishes original articles on emerging aspects of computational intelligence, including theory, applications, and surveys.
TETCI is an electronic-only publication and publishes six issues per year.
Authors are encouraged to submit manuscripts on any emerging topic in computational intelligence, especially nature-inspired computing topics not covered by other IEEE Computational Intelligence Society journals. Illustrative examples include glial cell networks, computational neuroscience, brain-computer interfaces, ambient intelligence, non-fuzzy computing with words, artificial life, cultural learning, artificial endocrine networks, social reasoning, artificial hormone networks, and computational intelligence for IoT and Smart-X technologies.