{"title":"VFM-Depth: Leveraging Vision Foundation Model for Self-Supervised Monocular Depth Estimation","authors":"Shangshu Yu;Meiqing Wu;Siew-Kei Lam","doi":"10.1109/TCSVT.2024.3523702","DOIUrl":null,"url":null,"abstract":"Self-supervised monocular depth estimation has exploited semantics to reduce depth ambiguities in texture-less regions and object boundaries. However, existing methods struggle to obtain universal semantics across scenes for effective depth estimation. This paper proposes VFM-Depth, a novel self-supervised teacher-student framework, that effectively leverages the vision foundation model as semantic regularization to significantly improve the accuracy of monocular depth estimation. Firstly, we propose a novel Geometric-Semantic Aggregation Encoding, integrating universal semantic constraints from the foundation model to reduce ambiguities in the teacher model. Specifically, semantic features from the foundation model and geometric features from the depth model are first encoded and then fused through cross-modal aggregation. Secondly, we introduce a novel Multi-Alignment for Depth Distillation to distill semantic constraints from the teacher, further leveraging knowledge from the foundation model. We obtain a lightweight yet effective student model through an innovative approach that combines distance category alignment with complementary feature and depth imitation. Extensive experiments on KITTI, Cityscapes, and Make3D datasets demonstrate that VFM-Depth (both teacher and student) outperforms state-of-the-art self-supervised methods by a large margin.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 5","pages":"5078-5091"},"PeriodicalIF":8.3000,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10817597/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Abstract
Self-supervised monocular depth estimation has exploited semantics to reduce depth ambiguities in texture-less regions and at object boundaries. However, existing methods struggle to obtain universal semantics across scenes for effective depth estimation. This paper proposes VFM-Depth, a novel self-supervised teacher-student framework that leverages a vision foundation model as semantic regularization to significantly improve the accuracy of monocular depth estimation. First, we propose a novel Geometric-Semantic Aggregation Encoding that integrates universal semantic constraints from the foundation model to reduce ambiguities in the teacher model. Specifically, semantic features from the foundation model and geometric features from the depth model are first encoded and then fused through cross-modal aggregation. Second, we introduce a novel Multi-Alignment for Depth Distillation to distill semantic constraints from the teacher, further leveraging knowledge from the foundation model. We obtain a lightweight yet effective student model through an approach that combines distance category alignment with complementary feature and depth imitation. Extensive experiments on the KITTI, Cityscapes, and Make3D datasets demonstrate that VFM-Depth (both teacher and student) outperforms state-of-the-art self-supervised methods by a large margin.
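The abstract describes the two components only at a high level, and the paper's implementation is not reproduced here. The following is a minimal PyTorch sketch of one plausible reading of those ideas: cross-attention fusion of foundation-model semantic tokens with depth-encoder geometric tokens, and a multi-term distillation loss combining feature imitation, depth imitation, and a coarse distance-category alignment. All module names, dimensions, depth ranges, and loss weights (geo_dim, sem_dim, n_bins, alpha, beta, gamma, etc.) are hypothetical placeholders, not VFM-Depth's actual design.

```python
# Hedged sketch only: an assumed reading of the abstract, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GeometricSemanticAggregation(nn.Module):
    """Fuse foundation-model semantic features with depth-encoder geometric
    features via cross-attention (one plausible form of the abstract's
    'cross-modal aggregation')."""

    def __init__(self, geo_dim=256, sem_dim=384, n_heads=8):
        super().__init__()
        self.sem_proj = nn.Linear(sem_dim, geo_dim)  # project VFM tokens to geometric width
        self.attn = nn.MultiheadAttention(geo_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(geo_dim)

    def forward(self, geo_feat, sem_feat):
        # geo_feat: (B, N, geo_dim) tokens from the depth encoder
        # sem_feat: (B, M, sem_dim) tokens from the frozen foundation model
        sem = self.sem_proj(sem_feat)
        fused, _ = self.attn(query=geo_feat, key=sem, value=sem)
        return self.norm(geo_feat + fused)  # residual fusion of the two modalities


def distillation_loss(student_feat, teacher_feat, student_depth, teacher_depth,
                      n_bins=64, alpha=1.0, beta=1.0, gamma=1.0):
    """Multi-term teacher-to-student distillation: feature imitation, depth
    imitation, and a 'distance category' alignment that bins depths and
    matches the student's soft bin distribution to the teacher's
    (an assumed formulation of the alignment term)."""
    feat_loss = F.mse_loss(student_feat, teacher_feat)    # feature imitation
    depth_loss = F.l1_loss(student_depth, teacher_depth)  # depth imitation

    # Discretize depth into categories and align soft bin assignments via KL.
    d_min, d_max = 0.1, 100.0  # assumed metric depth range
    edges = torch.linspace(d_min, d_max, n_bins, device=student_depth.device)
    s_logits = -(student_depth.unsqueeze(-1) - edges).abs()  # (..., n_bins)
    t_logits = -(teacher_depth.unsqueeze(-1) - edges).abs()
    cat_loss = F.kl_div(F.log_softmax(s_logits, dim=-1),
                        F.softmax(t_logits, dim=-1), reduction="batchmean")

    return alpha * feat_loss + beta * depth_loss + gamma * cat_loss
```

In a teacher-student setup along these lines, the aggregation module would sit in the teacher's encoder, while the distillation loss would train a lighter student against the teacher's outputs. The specific choices here (attention-based fusion, KL over uniform depth bins) are illustrative assumptions rather than the paper's method.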
Journal Introduction
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.