Zeyu Cheng, Yi Zhang, Yang Yu, Zhe Song, Chengkai Tang
{"title":"TinyDepth: Lightweight self-supervised monocular depth estimation based on transformer","authors":"Zeyu Cheng, Yi Zhang, Yang Yu, Zhe Song, Chengkai Tang","doi":"10.1016/j.engappai.2024.109313","DOIUrl":null,"url":null,"abstract":"<div><p>Monocular depth estimation plays an important role in autonomous driving, virtual reality, augmented reality, and other fields. Self-supervised monocular depth estimation has received much attention because it does not require hard-to-obtain depth labels during training. The previously used convolutional neural network (CNN) has shown limitations in modeling large-scale spatial dependencies. A new idea for monocular depth estimation is replacing the CNN architecture or merging it with a Vision Transformer (ViT) architecture that can model large-scale spatial dependencies in images. However, there are still problems with too many parameters and calculations, making deployment difficult on mobile platforms. In response to these problems, we propose TinyDepth, a lightweight self-supervised monocular depth estimation method based on Transformer that employs hierarchical representation learning suitable for dense prediction, uses mobile convolution to reduce parameters and computational overhead. and includes a novel decoder based on multi-scale fusion attention that improves the local and global inference capability of the network through scale-wise attention processing and layer-wise fusion sampling for more accurate depth prediction. In experiments, TinyDepth achieved state-of-the-art results with few parameters on the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) dataset, and exhibited good generalization ability on the challenging indoor New York University (NYU) dataset. Source code is available at <span><span>https://github.com/ZYCheng777/TinyDepth</span><svg><path></path></svg></span>.</p></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":7.5000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197624014714","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
引用次数: 0
Abstract
Monocular depth estimation plays an important role in autonomous driving, virtual reality, augmented reality, and other fields. Self-supervised monocular depth estimation has received much attention because it does not require hard-to-obtain depth labels during training. Previously used convolutional neural networks (CNNs) have shown limitations in modeling large-scale spatial dependencies. A new idea for monocular depth estimation is to replace the CNN architecture, or merge it, with a Vision Transformer (ViT) architecture that can model large-scale spatial dependencies in images. However, such models still involve too many parameters and computations, making deployment on mobile platforms difficult. In response to these problems, we propose TinyDepth, a lightweight self-supervised monocular depth estimation method based on Transformer that employs hierarchical representation learning suitable for dense prediction, uses mobile convolution to reduce parameters and computational overhead, and includes a novel decoder based on multi-scale fusion attention that improves the local and global inference capability of the network through scale-wise attention processing and layer-wise fusion sampling for more accurate depth prediction. In experiments, TinyDepth achieved state-of-the-art results with few parameters on the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) dataset, and exhibited good generalization ability on the challenging indoor New York University (NYU) dataset. Source code is available at https://github.com/ZYCheng777/TinyDepth.
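The abstract attributes the parameter savings to "mobile convolution." Below is a minimal sketch of a mobile (depthwise-separable) convolution block of that general kind; the module name, channel sizes, and layer choices are illustrative assumptions, not the authors' TinyDepth implementation (see the linked repository for the actual code).

```python
# Minimal sketch of a mobile (depthwise-separable) convolution block.
# Channel sizes and layer choices are illustrative assumptions, not the
# TinyDepth implementation.
import torch
import torch.nn as nn

class MobileConvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        # Depthwise conv: one 3x3 filter per input channel (groups=in_ch),
        # which is where most of the parameter savings come from.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # Pointwise 1x1 conv mixes channels and sets the output width.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU6(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

if __name__ == "__main__":
    block = MobileConvBlock(32, 64, stride=2)
    x = torch.randn(1, 32, 96, 320)   # e.g. a KITTI-sized feature map
    print(block(x).shape)             # torch.Size([1, 64, 48, 160])
```

For these example sizes, a standard 3x3 convolution from 32 to 64 channels would use 3*3*32*64 = 18,432 weights, while the depthwise (3*3*32 = 288) plus pointwise (32*64 = 2,048) pair uses 2,336, which illustrates the kind of reduction in parameters and computation the abstract refers to.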
Journal description:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.