CCANet: Cross-Modality Comprehensive Feature Aggregation Network for Indoor Scene Semantic Segmentation

Impact Factor 5.0 · CAS Tier 3 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Zhang Zihao;Yang Yale;Hou Huifang;Meng Fanman;Zhang Fan;Xie Kangzhan;Zhuang Chunsheng
DOI: 10.1109/TCDS.2024.3455356
Journal: IEEE Transactions on Cognitive and Developmental Systems, vol. 17, no. 2, pp. 366–378
Published: 2024-09-06 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10669091/
Citations: 0

Abstract

Semantic segmentation of indoor scenes from RGB and depth information has long been an active research topic. However, fully exploiting the complementarity of multimodal features and fusing them efficiently remains challenging. To address this challenge, we propose an innovative cross-modal comprehensive feature aggregation network (CCANet) for high-precision semantic segmentation of indoor scenes. In this method, we first propose a bidirectional cross-modality feature rectification (BCFR) module that lets the two modalities complement each other and removes noise along both the channel and spatial dimensions. After that, an adaptive criss-cross attention fusion (CAF) module is designed to realize multistage deep multimodal feature fusion. Finally, a multisupervision strategy is applied to accurately learn additional details of the target, guiding the gradual refinement of segmentation maps. Thorough experiments on two openly accessible indoor-scene datasets demonstrate that CCANet exhibits outstanding performance and robustness in aggregating RGB and depth features.
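The paper's implementation details are not reproduced on this page, but the two ideas the abstract names — cross-modal feature rectification and criss-cross attention — can be illustrated with a minimal NumPy sketch. Everything below is an assumption for illustration: the sigmoid gating, the residual rectification form, and the naive per-pixel attention loop are generic stand-ins, not the authors' BCFR/CAF design.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_rectify(rgb, depth):
    """Illustrative cross-modal channel rectification (not the paper's BCFR).

    Each modality's channels are residually reweighted by a sigmoid gate
    computed from the *other* modality's global channel statistics.
    Inputs/outputs have shape (C, H, W).
    """
    g_rgb = rgb.mean(axis=(1, 2))                  # (C,) global descriptors
    g_depth = depth.mean(axis=(1, 2))
    w_rgb = 1.0 / (1.0 + np.exp(-g_depth))         # gate for RGB from depth
    w_depth = 1.0 / (1.0 + np.exp(-g_rgb))         # gate for depth from RGB
    rgb_out = rgb + rgb * w_rgb[:, None, None]     # residual rectification
    depth_out = depth + depth * w_depth[:, None, None]
    return rgb_out, depth_out

def criss_cross(feat):
    """Naive criss-cross attention: each position attends only to its
    own row and own column (shape (C, H, W) in and out)."""
    C, H, W = feat.shape
    out = np.zeros_like(feat)
    for i in range(H):
        for j in range(W):
            q = feat[:, i, j]                                        # (C,)
            keys = np.concatenate([feat[:, i, :], feat[:, :, j]], axis=1)  # (C, W+H)
            attn = softmax(q @ keys)                                 # (W+H,)
            out[:, i, j] = keys @ attn                               # weighted sum
    return out

# Toy usage with constant feature maps (shapes only, not real features).
rgb = np.full((4, 5, 6), 0.5)
depth = np.full((4, 5, 6), 0.25)
r_fix, d_fix = channel_rectify(rgb, depth)
fused = criss_cross(r_fix + d_fix)
```

In the paper the rectification and fusion are applied at multiple encoder stages; this sketch shows a single stage only, and a real implementation would use learned projections for queries, keys, and values rather than the raw features.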
Source journal
CiteScore: 7.20
Self-citation rate: 10.00%
Annual publications: 170
Journal description: The IEEE Transactions on Cognitive and Developmental Systems (TCDS) focuses on advances in the study of development and cognition in natural (humans, animals) and artificial (robots, agents) systems. It welcomes contributions from multiple related disciplines including cognitive systems, cognitive robotics, developmental and epigenetic robotics, autonomous and evolutionary robotics, social structures, multi-agent and artificial life systems, computational neuroscience, and developmental psychology. Articles on theoretical, computational, application-oriented, and experimental studies as well as reviews in these areas are considered.