ADFNet: Attention-based Fusion Network for Few-shot RGB-D Semantic Segmentation

Chengkai Zhang, Jichao Jiao, Weizhuo Xu, Ning Li, Mingliang Pang, Jianye Dong
{"title":"ADFNet: Attention-based Fusion Network for Few-shot RGB-D Semantic Segmentation","authors":"Chengkai Zhang, Jichao Jiao, Weizhuo Xu, Ning Li, Mingliang Pang, Jianye Dong","doi":"10.1145/3529836.3529864","DOIUrl":null,"url":null,"abstract":"∗Deep CNNs have made great progress in image semantic segmentation. However, they require a large-scale labeled image dataset, which might be costly. Moreover, the model can hardly generalize to unseen classes. Few-shot segmentation, which can learn to perform segmentation on new classes from a few labeled samples, has been developed recently to tackle the problem. In this paper, we proposed a novel prototype network to undertake the challenging task of few-shot semantic segmentation on complex scenes with RGB-D datasets, which is named ADFNet (Attention-based Depth Fusion Network). Our ADFNet learns class-specific prototypes from both RGB channels and depth channels. Meanwhile, we proposed an attention-based fusion module to fuse the depth feature into the image feature that can better utilize the information of the support depth images. We also proposed RELIEF-prototype which refines the prototype and provides an additional improvement to the model. Furthermore, we proposed a new few-shot RGB-D segmentation benchmark based on SUN RGB-D, named SUN RGB-D-5i. Experiments on SUN RGB-D-5i show that our method achieves the mIoU score of 27.4% and 34.6% for 1-shot and 5-shot settings respectively, outperforming the baseline method by 4.2% and 4.4% respectively.","PeriodicalId":285191,"journal":{"name":"2022 14th International Conference on Machine Learning and Computing (ICMLC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 14th International Conference on Machine Learning and Computing (ICMLC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3529836.3529864","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Deep CNNs have made great progress in image semantic segmentation. However, they require large-scale labeled image datasets, which can be costly to obtain, and the resulting models hardly generalize to unseen classes. Few-shot segmentation, which learns to segment new classes from only a few labeled samples, has been developed recently to tackle this problem. In this paper, we propose a novel prototype network, named ADFNet (Attention-based Depth Fusion Network), for the challenging task of few-shot semantic segmentation on complex scenes with RGB-D data. ADFNet learns class-specific prototypes from both RGB and depth channels. We propose an attention-based fusion module that fuses the depth feature into the image feature, so the network can better utilize the information in the support depth images. We also propose a RELIEF-prototype, which refines the prototype and provides an additional improvement to the model. Furthermore, we introduce a new few-shot RGB-D segmentation benchmark based on SUN RGB-D, named SUN RGB-D-5i. Experiments on SUN RGB-D-5i show that our method achieves mIoU scores of 27.4% and 34.6% in the 1-shot and 5-shot settings respectively, outperforming the baseline method by 4.2% and 4.4% respectively.
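The abstract names two mechanisms: an attention-based module that fuses depth features into RGB features, and class-specific prototypes computed from the fused support features. The sketch below illustrates one plausible reading in PyTorch, using a channel-attention gate driven by the depth feature and masked average pooling over the support mask; the module names, tensor shapes, and the specific attention design are assumptions for illustration, not the paper's exact implementation.

```python
# A minimal sketch of attention-based depth fusion plus prototype extraction,
# assuming a squeeze-and-excitation-style channel gate and masked average
# pooling. The paper only states the fusion is "attention-based"; the exact
# mechanism here is a hypothetical stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionDepthFusion(nn.Module):
    """Fuse a depth feature map into an RGB feature map via channel attention."""

    def __init__(self, channels: int):
        super().__init__()
        hidden = max(channels // 4, 1)
        # Channel weights are computed from the depth feature (an assumption).
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        # rgb_feat, depth_feat: (B, C, H, W)
        attn = self.gate(depth_feat)         # (B, C, 1, 1) channel weights
        return rgb_feat + attn * depth_feat  # inject attended depth into RGB


def masked_average_prototype(feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Class prototype = mean of support features inside the class mask."""
    # feat: (B, C, H, W) fused support features; mask: (B, 1, h, w) binary mask
    mask = F.interpolate(mask.float(), size=feat.shape[-2:], mode="nearest")
    num = (feat * mask).sum(dim=(2, 3))          # (B, C)
    den = mask.sum(dim=(2, 3)).clamp(min=1e-6)   # (B, 1), avoid divide-by-zero
    return num / den                             # per-image class prototype


# Illustrative usage with random tensors standing in for backbone features:
fuse = AttentionDepthFusion(channels=256)
support = fuse(torch.randn(1, 256, 32, 32), torch.randn(1, 256, 32, 32))
proto = masked_average_prototype(support, torch.ones(1, 1, 32, 32))  # (1, 256)

# Query pixels are then scored against the prototype, e.g. by cosine similarity,
# the standard matching step in prototype-based few-shot segmentation:
query = fuse(torch.randn(1, 256, 32, 32), torch.randn(1, 256, 32, 32))
sim = F.cosine_similarity(query, proto[:, :, None, None], dim=1)  # (1, 32, 32)
```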