UnDAF: A General Unsupervised Domain Adaptation Framework for Disparity or Optical Flow Estimation

H. Wang, Rui Fan, Peide Cai, Ming Liu, Lujia Wang
DOI: 10.1109/ICRA46639.2022.9811811
Published in: 2022 International Conference on Robotics and Automation (ICRA), 2022-05-23
Citations: 5

Abstract

Disparity estimation and optical flow estimation are, by nature, 1D and 2D dense correspondence matching (DCM) tasks, respectively. Unsupervised domain adaptation (UDA) is crucial for their success in new and unseen scenarios, as it enables networks to draw inferences across different domains without manually labeled ground truth. In this paper, we propose a general UDA framework (UnDAF) for disparity or optical flow estimation. Unlike existing approaches based on adversarial learning, which suffer from pixel distortion and dense correspondence mismatch after domain alignment, UnDAF adopts a straightforward but effective coarse-to-fine strategy: a Fourier transform initializes domain alignment, after which a co-teaching strategy (two networks evolving by complementing each other) refines the DCM estimates. The simplicity of our approach makes it extremely easy to guide adaptation across different domains or, more practically, from synthetic to real-world domains. Extensive experiments on the KITTI and MPI Sintel benchmarks demonstrate the accuracy and robustness of UnDAF, which outperforms all other state-of-the-art UDA approaches for disparity or optical flow estimation. Our project page is available at https://sites.google.com/view/undaf.
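The abstract does not spell out how the Fourier transform initializes domain alignment. A common way to realize Fourier-based alignment is amplitude-spectrum swapping: replace the low-frequency amplitude of a source (e.g. synthetic) image with that of a target (e.g. real-world) image while keeping the source phase, so the source image inherits the target domain's global appearance without disturbing its structure. The sketch below is a minimal illustration of that idea using NumPy; the function name `fourier_domain_align` and the window parameter `beta` are illustrative assumptions, not names from the paper.

```python
import numpy as np

def fourier_domain_align(src, tgt, beta=0.01):
    """Swap the low-frequency amplitude spectrum of `src` with that of
    `tgt`, keeping the phase of `src`.

    src, tgt : float arrays of shape (H, W) (single-channel images).
    beta     : fraction of the spectrum treated as "low frequency".
    """
    fft_src = np.fft.fft2(src)
    fft_tgt = np.fft.fft2(tgt)
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)

    # Centre the spectra so low frequencies sit in the middle.
    amp_src = np.fft.fftshift(amp_src)
    amp_tgt = np.fft.fftshift(amp_tgt)

    h, w = src.shape
    b = int(np.floor(min(h, w) * beta))
    ch, cw = h // 2, w // 2
    # Replace only the central (low-frequency) block of the amplitude.
    amp_src[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_tgt[ch - b:ch + b + 1, cw - b:cw + b + 1]

    amp_src = np.fft.ifftshift(amp_src)
    # Recombine swapped amplitude with the original phase.
    aligned = np.fft.ifft2(amp_src * np.exp(1j * pha_src))
    return np.real(aligned)
```

Because phase carries the scene structure while low-frequency amplitude carries the global "style" (illumination, colour statistics), the aligned image keeps the source geometry, which is what a DCM network must still match after adaptation.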