Adapting Across Domains via Target-Oriented Transferable Semantic Augmentation Under Prototype Constraint

IF 11.6 · CAS Region 2 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Mixue Xie, Shuang Li, Kaixiong Gong, Yulin Wang, Gao Huang
DOI: 10.1007/s11263-023-01944-1
Journal: International Journal of Computer Vision
Published: 2023-11-19
Citations: 0

Abstract

The demand for reducing label annotation cost and adapting to new data distributions gives rise to domain adaptation (DA). DA aims to learn a model that performs well on an unlabeled or scarcely labeled target domain by transferring rich knowledge from a related, well-annotated source domain. Existing DA methods mainly resort to learning domain-invariant representations with a source-supervised classifier shared by the two domains. However, such a shared classifier may be biased towards the source domain, limiting its generalization capability on target data. To alleviate this issue, we present a target-oriented transferable semantic augmentation (T\(^2\)SA) method, which enhances the generalization ability of the classifier by training it on a target-like augmented domain, constructed by implicitly augmenting source data towards the target at the feature level. Specifically, to equip the augmented domain with target semantics, we carefully design a class-wise multivariate normal distribution, based on statistics estimated from features, from which the transformation directions for source data are sampled. Moreover, we achieve the augmentation implicitly, and thus efficiently, by minimizing an upper bound of the expected angular-softmax loss over the augmented domain. Additionally, to further ensure that the augmented domain imitates the target domain closely and discriminatively, a prototype constraint is enforced on the augmented features class-wise, minimizing the expected Euclidean distance between each augmented feature and the corresponding target prototype (i.e., the class's average representation). As a general technique, T\(^2\)SA can be easily plugged into various DA methods to further boost their performance. Extensive experiments under single-source DA, multi-source DA and domain generalization scenarios validate the efficacy of T\(^2\)SA.
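The class-wise target-oriented sampling and the prototype constraint described in the abstract can be sketched explicitly in NumPy. This is a minimal illustration only, not the paper's implementation: T\(^2\)SA avoids materializing sampled features by minimizing a loss upper bound instead, and in practice target labels are scarce or absent (the paper uses estimated statistics). All function names and the scaling coefficient `lam` here are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_towards_target(src_feats, src_labels, tgt_feats, tgt_labels, lam=0.5):
    """Explicit version of the target-oriented augmentation: for each class c,
    sample transformation directions from a multivariate normal whose mean is
    the target-minus-source mean shift and whose covariance is the (scaled)
    target class covariance, then add them to the source features."""
    aug = src_feats.copy()
    for c in np.unique(src_labels):
        s_mask = src_labels == c
        t_mask = tgt_labels == c
        # Class-wise mean shift from source towards target.
        delta_mu = tgt_feats[t_mask].mean(axis=0) - src_feats[s_mask].mean(axis=0)
        # Class-wise target covariance (statistics estimated from features).
        sigma = np.cov(tgt_feats[t_mask], rowvar=False)
        dirs = rng.multivariate_normal(delta_mu, lam * sigma, size=int(s_mask.sum()))
        aug[s_mask] = src_feats[s_mask] + dirs
    return aug

def prototype_loss(feats, labels, tgt_feats, tgt_labels):
    """Prototype constraint: mean squared Euclidean distance between each
    feature and the prototype (class mean) of the corresponding target class."""
    classes = np.unique(labels)
    loss = 0.0
    for c in classes:
        proto = tgt_feats[tgt_labels == c].mean(axis=0)
        loss += np.mean(np.sum((feats[labels == c] - proto) ** 2, axis=1))
    return loss / len(classes)
```

On toy data where the target classes are shifted copies of the source classes, the augmented source features land near the target prototypes, so `prototype_loss` drops after augmentation, which is exactly the behavior the constraint encourages during training.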


Source journal: International Journal of Computer Vision (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 29.80
Self-citation rate: 2.10%
Articles per year: 163
Review time: 6 months
Journal description: The International Journal of Computer Vision (IJCV) serves as a platform for sharing new research findings in the rapidly growing field of computer vision. It publishes 12 issues annually and presents high-quality, original contributions to the science and engineering of computer vision. The journal encompasses several article types. Regular articles, spanning up to 25 journal pages, focus on significant technical advances of broad interest to the field. Short articles, limited to 10 pages, offer a swift publication path for novel research outcomes. Survey articles, comprising up to 30 pages, offer critical evaluations of the state of the art in computer vision or tutorial presentations of relevant topics. In addition to technical articles, the journal includes book reviews, position papers, and editorials by prominent scientific figures. Authors are encouraged to include supplementary material online, such as images, video sequences, data sets, and software, to enhance the understanding and reproducibility of the published research.