Pooling Pyramid Vision Transformer for Unsupervised Monocular Depth Estimation
Qingyu Zhang, Chunyan Wei, Qingxia Li, Xiaosen Tian, Chuanpeng Li
2022 IEEE International Conference on Smart Internet of Things (SmartIoT), August 2022. DOI: 10.1109/SmartIoT55134.2022.00025
Compared with other sensors, high-quality depth estimation from a monocular camera is strongly competitive and widely applicable in intelligent transportation and other domains. Although unsupervised learning has greatly lowered the barrier to training, most related works are still based on convolutional neural networks (CNNs), which cannot capture full-stage global information and high-resolution features while extracting multi-scale features. To break this predicament, we attempt to introduce the vision transformer. However, the long token sequences produced by image embedding make the vision transformer computationally expensive. This work therefore proposes a new pure-transformer backbone, the pooling pyramid vision transformer (PPViT), which extracts multi-scale features while reducing the sequence length used for the attention operation. We provide two backbone settings, PPViT10 and PPViT18, whose parameter counts are close to those of the common ResNet18 and ResNet50, respectively. Experiments on the KITTI dataset demonstrate that our work shows great potential for improving model capability and produces results superior to previous CNN-based works. Equally important, it has lower latency than the related transformer-based work.
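To make the core idea concrete, below is a minimal PyTorch sketch of attention with a pooled key/value sequence, the kind of sequence-length reduction the abstract describes. This is a hypothetical illustration, not the authors' PPViT block: the module name `PoolingReductionAttention`, the `pool_ratio` parameter, and the use of average pooling are all assumptions for exposition.

```python
import torch
import torch.nn as nn

class PoolingReductionAttention(nn.Module):
    """Multi-head self-attention whose keys/values come from a pooled
    (shorter) token sequence. Hypothetical sketch of the paper's idea of
    shrinking the sequence length used for attention; the authors' exact
    PPViT design may differ."""

    def __init__(self, dim: int, num_heads: int = 8, pool_ratio: int = 4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)
        # Average pooling shrinks the token grid, so the key/value sequence
        # is pool_ratio**2 times shorter than the query sequence.
        self.pool = nn.AvgPool2d(kernel_size=pool_ratio, stride=pool_ratio)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # x: (B, N, C) with N == h * w image tokens
        b, n, c = x.shape
        q = self.q(x).reshape(b, n, self.num_heads, self.head_dim).transpose(1, 2)

        # Pool the token grid to reduce the key/value sequence length.
        kv = x.transpose(1, 2).reshape(b, c, h, w)
        kv = self.pool(kv).reshape(b, c, -1).transpose(1, 2)  # (B, N', C), N' << N
        kv = self.norm(kv)
        k, v = self.kv(kv).chunk(2, dim=-1)
        k = k.reshape(b, -1, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.reshape(b, -1, self.num_heads, self.head_dim).transpose(1, 2)

        attn = (q @ k.transpose(-2, -1)) * self.scale   # (B, heads, N, N')
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(b, n, c)
        return self.proj(out)

# Example: 14x14 token grid, 64-dim tokens; attention cost falls from
# O(N^2) to O(N * N / pool_ratio**2) relative to full self-attention.
tokens = torch.randn(2, 14 * 14, 64)
out = PoolingReductionAttention(dim=64, num_heads=8, pool_ratio=2)(tokens, 14, 14)
print(out.shape)  # torch.Size([2, 196, 64])
```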