Divide and augment: Supervised domain adaptation via sample-wise feature fusion

IF 14.7 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Zhuangzhuang Chen , Bin Pu , Lei Zhao , Jie He , Pengchen Liang
DOI: 10.1016/j.inffus.2024.102757
Journal: Information Fusion, Volume 115, Article 102757
Published: 2024-10-28
Cited by: 0

Abstract

The training of deep models relies on appropriate regularization from a copious amount of labeled data, yet obtaining a large, well-annotated dataset is costly. Supervised domain adaptation (SDA) therefore becomes attractive, especially when it aims to regularize these networks for a data-scarce target domain by exploiting an available data-rich source domain. Unlike previous methods that rely on a cumbersome adversarial learning scheme, we assume that a source or target sample in the feature space can be regarded as a combination of (1) domain-oriented features (i.e., those reflecting the differences among domains) and (2) class-specific features (i.e., those inherently defining a specific class). Exploiting this, we present Divide and Augment (DivAug), a feature fusion-based data augmentation framework that performs target-domain augmentation by transforming source samples into the target domain in an energy-efficient manner. Specifically, with a novel semantic inconsistency loss based on a multi-task ensemble learning scheme, DivAug enforces two encoders to learn the decomposed domain-oriented and class-specific features, respectively. Furthermore, we propose a simple sample-wise feature fusion rule that transforms source samples into the target domain by combining the class-specific features of a source sample with the domain-oriented features of a target sample. Extensive experiments demonstrate that our method outperforms current state-of-the-art methods across various datasets in SDA settings.
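As a rough illustration of the core idea, the sample-wise fusion rule can be sketched as follows. This is a minimal, hypothetical sketch: the toy `encode_class` and `encode_domain` functions and the element-wise sum are stand-ins for DivAug's learned encoders and its actual fusion operator, which the abstract does not specify in detail.

```python
# Hypothetical sketch of DivAug's sample-wise feature fusion.
# The encoders and the fusion operator here are illustrative stand-ins,
# not the paper's actual architecture.

def encode_class(sample):
    # Stand-in "class-specific" encoder: keeps even-indexed entries.
    return [v if i % 2 == 0 else 0.0 for i, v in enumerate(sample)]

def encode_domain(sample):
    # Stand-in "domain-oriented" encoder: keeps odd-indexed entries.
    return [v if i % 2 == 1 else 0.0 for i, v in enumerate(sample)]

def fuse(source_sample, target_sample):
    """Build an augmented target-domain feature: class-specific features
    from a source sample combined with domain-oriented features from a
    target sample (element-wise sum is one plausible combination)."""
    c = encode_class(source_sample)
    d = encode_domain(target_sample)
    return [ci + di for ci, di in zip(c, d)]

src = [1.0, 2.0, 3.0, 4.0]   # source-domain feature vector
tgt = [9.0, 8.0, 7.0, 6.0]   # target-domain feature vector
print(fuse(src, tgt))        # [1.0, 8.0, 3.0, 6.0]
```

In the paper, the two encoders are trained with the semantic inconsistency loss so that the decomposition is learned rather than fixed; the fused features then serve as additional target-domain training samples.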
Source journal: Information Fusion (Engineering & Technology — Computer Science: Theory & Methods)
CiteScore: 33.20
Self-citation rate: 4.30%
Articles per year: 161
Review time: 7.9 months
Aims and scope: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.