LayerCLIP: A fine-grained class activation map for weakly supervised semantic segmentation

IF 7.6 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Lingma Sun, Le Zou, Xianghu Lv, Zhize Wu, Xiaofeng Wang
{"title":"LayerCLIP: A fine-grained class activation map for weakly supervised semantic segmentation","authors":"Lingma Sun ,&nbsp;Le Zou ,&nbsp;Xianghu Lv,&nbsp;Zhize Wu,&nbsp;Xiaofeng Wang","doi":"10.1016/j.patcog.2025.112452","DOIUrl":null,"url":null,"abstract":"<div><div>Weakly supervised semantic segmentation (WSSS) using image-level labels aims to create pseudo-labels leveraging Class Activation Maps (CAM) to train a separate segmentation model. Recent methods that utilize Contrastive Language-Image Pre-training (CLIP) models have achieved significant advancements. These approaches take advantage of CLIP’s capability to identify various categories without requiring additional training. However, due to the limited local information of the final embedding layer, the CAM generated by the CLIP model is still a rough region with an under-activated or over-activated issue. Furthermore, the abundant multi-layer information of CLIP, which plays a vital role in dense prediction, has been ignored. In this paper, we proposed a LayerCLIP model for a fine-grained CAM generation via hierarchical features, which consists of two consecutive components: a dynamic hierarchical CAMs module and an adaptive affinity module. Specifically, the dynamic hierarchical CAMs module utilizes the hierarchical features to produce two complementary CAMs, along with a dynamic strategy to fuse these CAMs. Subsequently, the affinity based on multi-head self-attention is adaptively reweighted to refine CAM by the CAM itself in the adaptive affinity module. LayerCLIP significantly enhances the quality of CAM. Our method achieves a new state-of-the-art performance on PASCAL VOC 2012 (75.1 % mIoU) and MS COCO 2014 (46.9 % mIoU) through extensive benchmark experiments.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"172 ","pages":"Article 112452"},"PeriodicalIF":7.6000,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0031320325011148","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Weakly supervised semantic segmentation (WSSS) with image-level labels aims to create pseudo-labels from Class Activation Maps (CAMs) to train a separate segmentation model. Recent methods built on Contrastive Language-Image Pre-training (CLIP) models have achieved significant advances, exploiting CLIP's ability to recognize diverse categories without additional training. However, because the final embedding layer carries limited local information, the CAM generated from CLIP still outlines only a rough region and suffers from under- or over-activation. Furthermore, CLIP's rich multi-layer information, which plays a vital role in dense prediction, has been ignored. In this paper, we propose LayerCLIP, a model that generates fine-grained CAMs from hierarchical features and consists of two consecutive components: a dynamic hierarchical CAMs module and an adaptive affinity module. Specifically, the dynamic hierarchical CAMs module uses hierarchical features to produce two complementary CAMs and fuses them with a dynamic strategy. Subsequently, in the adaptive affinity module, the affinity derived from multi-head self-attention is adaptively reweighted by the CAM itself and used to refine the CAM. LayerCLIP significantly enhances CAM quality. Extensive benchmark experiments show that our method achieves new state-of-the-art performance on PASCAL VOC 2012 (75.1% mIoU) and MS COCO 2014 (46.9% mIoU).
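To make the fusion step concrete, here is a minimal sketch of how two complementary CAMs from different CLIP layers might be combined with a per-pixel dynamic weight. The confidence-based weighting rule, the tensor shapes, and the function name `fuse_hierarchical_cams` are illustrative assumptions; the abstract does not specify the paper's exact dynamic strategy.

```python
# A minimal sketch of fusing two complementary CAMs with a dynamic,
# per-pixel weight. The confidence-based rule below is an assumption,
# not the paper's exact formulation.
import torch

def fuse_hierarchical_cams(cam_shallow: torch.Tensor,
                           cam_deep: torch.Tensor) -> torch.Tensor:
    """Fuse two CAMs of shape (C, H, W), each normalized to [0, 1].

    cam_shallow: CAM from an intermediate CLIP layer (richer local detail).
    cam_deep:    CAM from the final embedding layer (stronger semantics).
    """
    # Per-pixel fusion weight: trust whichever map is more confident,
    # i.e. farther from the maximally ambiguous activation of 0.5.
    conf_shallow = (cam_shallow - 0.5).abs()
    conf_deep = (cam_deep - 0.5).abs()
    w = conf_shallow / (conf_shallow + conf_deep + 1e-6)
    fused = w * cam_shallow + (1.0 - w) * cam_deep
    # Re-normalize each class map back to [0, 1].
    flat = fused.flatten(1)
    lo = flat.min(dim=1).values.view(-1, 1, 1)
    hi = flat.max(dim=1).values.view(-1, 1, 1)
    return (fused - lo) / (hi - lo + 1e-6)
```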
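The adaptive affinity step can be sketched in the same spirit. Below, a pairwise affinity matrix is derived from CLIP's multi-head self-attention and reweighted by how strongly the current CAM responses of each token pair agree; this agreement-based reweighting and the propagation loop are assumptions standing in for the paper's adaptive rule.

```python
# A minimal sketch of refining a CAM with an affinity matrix built from
# multi-head self-attention and reweighted by the CAM itself. The
# agreement-based mask is an illustrative assumption.
import torch

def refine_cam_with_affinity(cam: torch.Tensor,
                             attn: torch.Tensor,
                             n_iters: int = 2) -> torch.Tensor:
    """cam:  (C, H, W) class activation maps in [0, 1].
    attn: (heads, N, N) ViT self-attention maps, where N = H * W.
    """
    C, H, W = cam.shape
    # Average heads and symmetrize to obtain a pairwise affinity matrix.
    affinity = attn.mean(dim=0)
    affinity = (affinity + affinity.transpose(0, 1)) / 2.0

    cam_flat = cam.flatten(1)  # (C, N)
    # Adaptive reweighting (assumed form): suppress affinity between
    # token pairs whose current CAM responses disagree strongly.
    agreement = 1.0 - torch.cdist(cam_flat.T, cam_flat.T, p=1) / C  # (N, N)
    affinity = affinity * agreement.clamp(min=0.0)
    # Row-normalize so each token aggregates a convex combination.
    affinity = affinity / (affinity.sum(dim=1, keepdim=True) + 1e-6)

    # Random-walk style propagation of activations along the affinity.
    for _ in range(n_iters):
        cam_flat = cam_flat @ affinity.T
    return cam_flat.view(C, H, W)
```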
Source Journal
Pattern Recognition
Category: Engineering Technology - Engineering: Electrical & Electronic
CiteScore: 14.40
Self-citation rate: 16.20%
Annual articles: 683
Review time: 5.6 months
About the journal: The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago in the early days of computer science, has since grown significantly in scope and influence.