2014 2nd International Conference on 3D Vision: Latest Publications

A Data-Driven Regularization Model for Stereo and Flow
2014 2nd International Conference on 3D Vision | Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.97
D. Wei, Ce Liu, W. Freeman
{"title":"A Data-Driven Regularization Model for Stereo and Flow","authors":"D. Wei, Ce Liu, W. Freeman","doi":"10.1109/3DV.2014.97","DOIUrl":"https://doi.org/10.1109/3DV.2014.97","url":null,"abstract":"Data-driven techniques can reliably build semantic correspondence among images. In this paper, we present a new regularization model for stereo or flow through transferring the shape information of the disparity or flow from semantically matched patches in the training database. Compared to previous regularization models based on image appearance alone, we can better resolve local ambiguity of the disparity or flow by considering the semantic information without explicit object modeling. We incorporate this data-driven regularization model into a standard Markov Random Field (MRF) model, inferred with a gradient descent algorithm and learned with a discriminative learning approach. Compared to prior state-of-the-art methods, our full model achieves comparable or better results on the KITTI stereo and flow datasets, and improves results on the Sintel Flow dataset under an online estimation setting.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115740362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 17
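To make the regularization idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation): a disparity map is refined by gradient descent on a two-term energy whose smoothness term pulls the disparity gradients toward those of a transferred exemplar patch. The semantic patch retrieval itself is stubbed out by passing the exemplar in directly.

```python
import numpy as np

def refine_disparity(d_obs, d_exemplar, lam=1.0, step=0.1, iters=200):
    """Toy MRF-style refinement: the data term pulls toward the observed
    disparity; the regularizer pulls the disparity *gradients* toward
    those of an exemplar patch (the transferred shape prior)."""
    d = d_obs.copy()
    for _ in range(iters):
        g = 2.0 * (d - d_obs)                      # data-term gradient
        dx = np.diff(d, axis=1); ex = np.diff(d_exemplar, axis=1)
        dy = np.diff(d, axis=0); ey = np.diff(d_exemplar, axis=0)
        gx = np.zeros_like(d); gy = np.zeros_like(d)
        # gradient of sum((grad d - grad d_exemplar)^2) w.r.t. each pixel
        gx[:, :-1] -= 2.0 * (dx - ex); gx[:, 1:] += 2.0 * (dx - ex)
        gy[:-1, :] -= 2.0 * (dy - ey); gy[1:, :] += 2.0 * (dy - ey)
        d -= step * (g + lam * (gx + gy))
    return d

# toy usage: a noisy ramp observed; the exemplar carries the true slope
truth = np.tile(np.linspace(0, 9, 10), (10, 1))
noisy = truth + np.random.default_rng(0).normal(0, 1.0, (10, 10))
print(np.abs(refine_disparity(noisy, truth) - truth).mean(),
      "vs", np.abs(noisy - truth).mean())
```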
Classification of Vehicle Parts in Unstructured 3D Point Clouds
2014 2nd International Conference on 3D Vision | Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.58
Allan Zelener, Philippos Mordohai, I. Stamos
{"title":"Classification of Vehicle Parts in Unstructured 3D Point Clouds","authors":"Allan Zelener, Philippos Mordohai, I. Stamos","doi":"10.1109/3DV.2014.58","DOIUrl":"https://doi.org/10.1109/3DV.2014.58","url":null,"abstract":"Unprecedented amounts of 3D data can be acquired in urban environments, but their use for scene understanding is challenging due to varying data resolution and variability of objects in the same class. An additional challenge is due to the nature of the point clouds themselves, since they lack detailed geometric or semantic information that would aid scene understanding. In this paper we present a general algorithm for segmenting and jointly classifying object parts and the object itself. Our pipeline consists of local feature extraction, robust RANSAC part segmentation, part-level feature extraction, a structured model for parts in objects, and classification using state-of-the-art classifiers. We have tested this pipeline in a very challenging dataset that consists of real world scans of vehicles. Our contributions include the development of a segmentation and classification pipeline for objects and their parts, and a method for segmentation that is robust to the complexity of unstructured 3D points clouds, as well as a part ordering strategy for the sequential structured model and a joint feature representation between object parts.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125423148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 10
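As an illustration of the RANSAC part-segmentation stage, the following is a hedged toy sketch (the paper's actual pipeline also involves learned features and a structured model): fit one dominant plane to an unstructured cloud; removing its inliers and repeating yields a crude part decomposition.

```python
import numpy as np

def ransac_plane(points, iters=500, tol=0.02, seed=0):
    """Fit one dominant plane to an unstructured point cloud with RANSAC;
    returns (normal, offset, inlier_mask)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = (None, None)
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                 # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ p[0]
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model[0], best_model[1], best_inliers

# toy usage: a noisy plane z = 0 plus uniform outliers
rng = np.random.default_rng(1)
plane = np.column_stack([rng.uniform(-1, 1, (200, 2)),
                         rng.normal(0, 0.005, 200)])
cloud = np.vstack([plane, rng.uniform(-1, 1, (50, 3))])
n, d, mask = ransac_plane(cloud)
print(mask.sum(), "inliers; normal ~=", np.round(n, 2))
```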
High-Quality Depth Recovery via Interactive Multi-view Stereo
2014 2nd International Conference on 3D Vision | Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.55
Weifeng Chen, Guofeng Zhang, Xiaojun Xiang, Jiaya Jia, H. Bao
{"title":"High-Quality Depth Recovery via Interactive Multi-view Stereo","authors":"Weifeng Chen, Guofeng Zhang, Xiaojun Xiang, Jiaya Jia, H. Bao","doi":"10.1109/3DV.2014.55","DOIUrl":"https://doi.org/10.1109/3DV.2014.55","url":null,"abstract":"Although multi-view stereo has been extensively studied during the past decades, automatically computing high-quality dense depth information from captured images/videos is still quite difficult. Many factors, such as serious occlusion, large texture less regions and strong reflection, easily cause erroneous depth recovery. In this paper, we present a novel semi-automatic multi-view stereo system, which can quickly create and repair depth from a monocular sequence taken by a freely moving camera. One of our main contributions is that we propose a novel multi-view stereo model incorporating prior constraints indicated by user interaction, which makes it possible to even handle Non-Lambertian surface that surely violates the photo-consistency constraint. Users only need to provide a coarse segmentation and a few user interactions, our system can automatically correct depth and refine boundary. With other priors and occlusion handling, the erroneous depth can be effectively corrected even for very challenging examples that are difficult for state-of-the-art methods.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114393387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
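A minimal sketch of how user interactions can act as soft constraints in depth refinement, assuming a simple quadratic smoothness model rather than the paper's full multi-view stereo energy; the mask, weights, and iteration scheme below are illustrative assumptions.

```python
import numpy as np

def refine_depth(depth, user_depth, user_mask, lam=4.0, iters=400):
    """Iteratively blend each pixel's 4-neighbor average with a soft user
    constraint: pixels the user marked (mask = 1) are pulled toward the
    user-provided depth; all other pixels are smoothed."""
    d = depth.copy()
    for _ in range(iters):
        nb = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
              np.roll(d, 1, 1) + np.roll(d, -1, 1)) / 4.0
        d = (nb + lam * user_mask * user_depth) / (1.0 + lam * user_mask)
    return d

# toy usage: sparse user "clicks" anchor a noisy depth ramp to the truth
h = w = 32
truth = np.fromfunction(lambda y, x: 1.0 + x / w, (h, w))
noisy = truth + np.random.default_rng(2).normal(0, 0.1, (h, w))
mask = np.zeros((h, w)); mask[::4, ::4] = 1.0
print(np.abs(refine_depth(noisy, truth, mask) - truth).mean(),
      "vs", np.abs(noisy - truth).mean())
```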
Building Modeling through Enclosure Reasoning
2014 2nd International Conference on 3D Vision | Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.65
Adam Stambler, Daniel F. Huber
{"title":"Building Modeling through Enclosure Reasoning","authors":"Adam Stambler, Daniel F. Huber","doi":"10.1109/3DV.2014.65","DOIUrl":"https://doi.org/10.1109/3DV.2014.65","url":null,"abstract":"This paper introduces a method for automatically transforming a point cloud from a laser scanner into a volumetric 3D building model based on the new concept of enclosure reasoning. Rather than simply classifying and modeling building surfaces independently or with pair wise contextual relationships, this work introduces room, floor and building level reasoning. Enclosure reasoning premises that rooms are cycles of walls enclosing free interior space. These cycles should be of minimum description length (MDL) and obey the statistical priors expected for rooms. Floors and buildings then contain the best coverage of the mostly likely rooms. This allows the pipeline to generate higher fidelity models by performing modeling and recognition jointly over the entire building at once. The complete pipeline takes raw, registered laser scan surveys of a single building. It extracts the most likely smooth architectural surfaces, locates the building, and generates wall hypotheses. The algorithm then optimizes the model by growing, merging, and pruning these hypotheses to generate the most likely rooms, floors, and building in the presence of significant clutter.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129413942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 13
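The room-as-wall-cycle premise can be illustrated with a toy graph search; this is a stand-in for the paper's MDL-based optimization (which also grows, merges, and prunes hypotheses), treating wall endpoints as graph nodes and finding the shortest cycle through a given wall.

```python
from collections import deque

def shortest_wall_cycle(adj, u, v):
    """Shortest room cycle through wall (u, v): remove that edge and BFS
    for the shortest alternative u-v path; short cycles stand in for a
    minimum-description-length enclosure criterion."""
    parent = {u: None}
    q = deque([u])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if (x, y) in ((u, v), (v, u)) or y in parent:
                continue
            parent[y] = x
            if y == v:
                cycle = [v]
                while cycle[-1] != u:
                    cycle.append(parent[cycle[-1]])
                return cycle[::-1]      # u ... v, closed by wall (u, v)
            q.append(y)
    return None

# toy usage: corner graph of two rectangular rooms sharing wall (1, 4)
adj = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5],
       3: [0, 4], 4: [1, 3, 5], 5: [2, 4]}
print(shortest_wall_cycle(adj, 1, 4))   # -> [1, 0, 3, 4]
```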
A Real-Time View-Dependent Shape Optimization for High Quality Free-Viewpoint Rendering of 3D Video
2014 2nd International Conference on 3D Vision | Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.28
S. Nobuhara, Wei Ning, T. Matsuyama
{"title":"A Real-Time View-Dependent Shape Optimization for High Quality Free-Viewpoint Rendering of 3D Video","authors":"S. Nobuhara, Wei Ning, T. Matsuyama","doi":"10.1109/3DV.2014.28","DOIUrl":"https://doi.org/10.1109/3DV.2014.28","url":null,"abstract":"This paper is aimed at proposing a new high quality free-viewpoint rendering algorithm of 3D video. The main challenge on visualizing 3D video is how to utilize the original multi-view images used to estimate the 3D surface, and how to manage the mismatches between them due to calibration and reconstruction errors. The key idea to solve this problem is to optimize the 3D shape on a per-viewpoint basis on the fly. Given a virtual viewpoint for visualization, our algorithm optimizes the 3D shape so as to maximize the photo-consistency over the surface visible from the virtual viewpoint. An evaluation demonstrates that our method outperforms the state-of-the-art rendering qualitatively and quantitatively.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124577817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
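A hedged toy of the per-viewpoint photo-consistency idea, reduced to a 1D displacement search scored by normalized cross-correlation; the paper optimizes a full 3D surface, so this sketch shows only the scoring principle, with the synthetic image pair as an assumption.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = a - a.mean(); b = b - b.mean()
    return float(a.ravel() @ b.ravel() /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def best_offset(img_l, img_r, y, x, offsets, r=3):
    """Pick the displacement (a 1D stand-in for moving a surface point
    along its viewing ray) that maximizes photo-consistency between the
    patches it induces in two source images."""
    ref = img_l[y - r:y + r + 1, x - r:x + r + 1]
    scores = [ncc(ref, img_r[y - r:y + r + 1, x + d - r:x + d + r + 1])
              for d in offsets]
    return offsets[int(np.argmax(scores))]

# toy usage: the right image is the left image shifted by 4 pixels
img_l = np.random.default_rng(3).random((64, 64))
img_r = np.roll(img_l, 4, axis=1)     # true displacement = 4
print(best_offset(img_l, img_r, 32, 32, offsets=range(0, 9)))
```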
Matching Many Identical Features of Planar Urban Facades Using Global Regularity
2014 2nd International Conference on 3D Vision | Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.107
Eduardo B. Almeida, D. Cooper
{"title":"Matching Many Identical Features of Planar Urban Facades Using Global Regularity","authors":"Eduardo B. Almeida, D. Cooper","doi":"10.1109/3DV.2014.107","DOIUrl":"https://doi.org/10.1109/3DV.2014.107","url":null,"abstract":"Reasonable computation and accurate camera calibration require matching many interest points over long baselines. This is a difficult problem requiring better solutions than presently exist for urban scenes involving large buildings containing many windows since windows in a facade all have the same texture and, therefore, cannot be distinguished from one another based solely on appearance. Hence, the usual approach to feature detection and matching, such as use of SIFT, does not work in these scenes. A novel algorithm is introduced to provide correspondences for multiple repeating feature patterns seen under significant viewpoint changes. Most existing appearance-based algorithms cannot handle highly repetitive textures due to the match location ambiguity. However, the target structure provides a rich set of repeating features to be matched and tracked across multiple views, thus potentially improving camera estimation accuracy. The proposed method also exploits the geometric structure of regular grids of repeating features on planar surfaces.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130377306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
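The appearance ambiguity described above can be resolved by the grid itself. The sketch below is a simplification assuming an already rectified facade plane (the paper handles significant viewpoint changes): identical-looking features are assigned integer lattice indices via a globally estimated lattice offset, so correspondence follows from regularity rather than appearance.

```python
import numpy as np

def lattice_assign(pts, spacing):
    """Assign each detected repeated feature to integer lattice indices.
    The global lattice offset is estimated as the circular mean of the
    coordinates modulo the spacing (robust to wraparound)."""
    ang = 2 * np.pi * (pts % spacing) / spacing
    off = (np.arctan2(np.sin(ang).mean(0), np.cos(ang).mean(0))
           / (2 * np.pi) * spacing) % spacing
    idx = np.round((pts - off) / spacing).astype(int)
    resid = pts - (idx * spacing + off)
    return idx, resid

# toy usage: a 3x4 window grid, spacing 10, offset (2.5, 7.0), plus noise
rng = np.random.default_rng(4)
ii, jj = np.meshgrid(range(3), range(4), indexing="ij")
pts = np.column_stack([ii.ravel() * 10 + 2.5, jj.ravel() * 10 + 7.0])
pts += rng.normal(0, 0.1, pts.shape)
idx, resid = lattice_assign(pts, 10.0)
print(idx[:4], "max residual:", np.abs(resid).max())
```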
Non-rigid registration with reliable distance field for dynamic shape completion
2014 2nd International Conference on 3D Vision | Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.111
Kent Fujiwara, Hiroshi Kawasaki, R. Sagawa, K. Ogawara, K. Ikeuchi
{"title":"Non-rigid registration with reliable distance field for dynamic shape completion","authors":"Kent Fujiwara, Hiroshi Kawasaki, R. Sagawa, K. Ogawara, K. Ikeuchi","doi":"10.1109/3DV.2014.111","DOIUrl":"https://doi.org/10.1109/3DV.2014.111","url":null,"abstract":"We propose a non-rigid registration method for completion of dynamic shapes with occlusion. Our method is based on the idea that an occluded region in a certain frame should be visible in another frame and that local regions should be moving rigidly when the motion is small. We achieve this with a novel reliable distance field (DF) for non-rigid registration with missing regions. We first fit a pseudo-surface onto the input shape using a surface reconstruction method. We then calculate the difference between the DF of the input shape and the pseudo-surface. We define the areas with large difference as unreliable, as these areas indicate that the original shape cannot be found nearby. We then conduct non-rigid registration using local rigid transformations to match the source and target data at visible regions and maintain the original shape as much as possible in occluded regions. The experimental results demonstrate that our method is capable of accurately filling in the missing regions using the shape information from prior or posterior frames. By sequentially processing the data, our method is also capable of completing an entire sequence with missing regions.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121158327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
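A minimal sketch of the reliable-distance-field test, assuming point samples of both the raw scan and the fitted pseudo-surface are available (the paper embeds this in full non-rigid registration; this shows only the reliability criterion):

```python
import numpy as np
from scipy.spatial import cKDTree

def reliability_mask(scan_pts, pseudo_pts, query_pts, tau=0.05):
    """Compare the distance field of the raw (occluded) scan with that of
    the fitted pseudo-surface; where they disagree by more than tau, the
    pseudo-surface is unsupported by data and marked unreliable."""
    d_scan, _ = cKDTree(scan_pts).query(query_pts)
    d_pseudo, _ = cKDTree(pseudo_pts).query(query_pts)
    return np.abs(d_scan - d_pseudo) < tau

# toy usage: pseudo-surface is a full circle, the scan is missing an arc
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
scan = circle[t < 1.5 * np.pi]          # a quarter of the shape occluded
reliable = reliability_mask(scan, circle, circle)
print(reliable.sum(), "of", len(circle), "surface samples are reliable")
```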
Improving Sparse 3D Models for Man-Made Environments Using Line-Based 3D Reconstruction
2014 2nd International Conference on 3D Vision | Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.14
Manuel Hofer, Michael Maurer, H. Bischof
{"title":"Improving Sparse 3D Models for Man-Made Environments Using Line-Based 3D Reconstruction","authors":"Manuel Hofer, Michael Maurer, H. Bischof","doi":"10.1109/3DV.2014.14","DOIUrl":"https://doi.org/10.1109/3DV.2014.14","url":null,"abstract":"Traditional Structure-from-Motion (SfM) approaches work well for richly textured scenes with a high number of distinctive feature points. Since man-made environments often contain texture less objects, the resulting point cloud suffers from a low density in corresponding scene parts. The missing 3D information heavily affects all kinds of subsequent post-processing tasks (e.g. Meshing), and significantly decreases the visual appearance of the resulting 3D model. We propose a novel 3D reconstruction approach, which uses the output of conventional SfM pipelines to generate additional complementary 3D information, by exploiting line segments. We use appearance-less epipolar guided line matching to create a potentially large set of 3D line hypotheses, which are then verified using a global graph clustering procedure. We show that our proposed method outperforms the current state-of-the-art in terms of runtime and accuracy, as well as visual appearance of the resulting reconstructions.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114692449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 41
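For the epipolar-guided matching step, here is a hedged sketch of the underlying two-view geometry: derive F from the known projection matrices, then transfer a segment endpoint into the second image by intersecting its epipolar line with the candidate segment's supporting line. The camera setup below is illustrative, not taken from the paper.

```python
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def fundamental(P1, P2):
    """F = [e2]_x P2 pinv(P1), with e2 = P2 C1 the second epipole and C1
    the null vector (camera center) of P1."""
    C1 = np.linalg.svd(P1)[2][-1]
    e2 = P2 @ C1
    return skew(e2) @ P2 @ np.linalg.pinv(P1)

def match_on_segment(x1, F, a2, b2):
    """Epipolar-guided matching: intersect the epipolar line of endpoint
    x1 with the candidate segment's supporting line in image 2."""
    l = F @ np.append(x1, 1.0)                            # epipolar line
    m = np.cross(np.append(a2, 1.0), np.append(b2, 1.0))  # segment line
    X = np.cross(l, m)
    return X[:2] / X[2]

# toy usage: project a 3D segment into two views, recover the match
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # x baseline
A, B = np.array([0.2, 0.1, 4, 1.0]), np.array([-0.3, 0.4, 5, 1.0])
pr = lambda P, X: (P @ X)[:2] / (P @ X)[2]
F = fundamental(P1, P2)
print(match_on_segment(pr(P1, A), F, pr(P2, A), pr(P2, B)), "vs", pr(P2, A))
```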
LETHA: Learning from High Quality Inputs for 3D Pose Estimation in Low Quality Images
2014 2nd International Conference on 3D Vision | Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.18
Adrián Peñate Sánchez, F. Moreno-Noguer, J. Andrade-Cetto, F. Fleuret
{"title":"LETHA: Learning from High Quality Inputs for 3D Pose Estimation in Low Quality Images","authors":"Adrián Peñate Sánchez, F. Moreno-Noguer, J. Andrade-Cetto, F. Fleuret","doi":"10.1109/3DV.2014.18","DOIUrl":"https://doi.org/10.1109/3DV.2014.18","url":null,"abstract":"We introduce LETHA (Learning on Easy data, Test on Hard), a new learning paradigm consisting of building strong priors from high quality training data, and combining them with discriminative machine learning to deal with low-quality test data. Our main contribution is an implementation of that concept for pose estimation. We first automatically build a 3D model of the object of interest from high-definition images, and devise from it a pose-indexed feature extraction scheme. We then train a single classifier to process these feature vectors. Given a low quality test image, we visit many hypothetical poses, extract features consistently and evaluate the response of the classifier. Since this process uses locations recorded during learning, it does not require matching points anymore. We use a boosting procedure to train this classifier common to all poses, which is able to deal with missing features, due in this context to self-occlusion. Our results demonstrate that the method combines the strengths of global image representations, discriminative even for very tiny images, and the robustness to occlusions of approaches based on local feature point descriptors.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114833464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
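The pose-indexed scoring loop can be caricatured as follows; the boosted classifier is replaced by fixed linear weights, and "pose" is a 2D translation, both simplifying assumptions not taken from the paper:

```python
import numpy as np

def pose_scores(img, offsets, weights, poses):
    """Score each hypothesized pose by extracting features at
    pose-indexed locations (offsets recorded at training time) and
    applying one shared linear classifier; no keypoint matching at
    test time."""
    scores = []
    for (py, px) in poses:
        feats = np.array([img[py + dy, px + dx] for dy, dx in offsets])
        scores.append(weights @ feats)
    return np.array(scores)

# toy usage: a bright L-shaped pattern hidden in noise
rng = np.random.default_rng(5)
img = rng.random((40, 40)) * 0.2
offsets = [(0, 0), (0, 1), (0, 2), (1, 0), (2, 0)]
true_pose = (17, 23)
for dy, dx in offsets:
    img[true_pose[0] + dy, true_pose[1] + dx] = 1.0
weights = np.ones(len(offsets))      # stand-in for a boosted classifier
poses = [(y, x) for y in range(35) for x in range(35)]
print(poses[int(np.argmax(pose_scores(img, offsets, weights, poses)))])
```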
Match Box: Indoor Image Matching via Box-Like Scene Estimation
2014 2nd International Conference on 3D Vision | Pub Date: 2014-12-08 | DOI: 10.1109/3DV.2014.56
F. Srajer, A. Schwing, M. Pollefeys, T. Pajdla
{"title":"Match Box: Indoor Image Matching via Box-Like Scene Estimation","authors":"F. Srajer, A. Schwing, M. Pollefeys, T. Pajdla","doi":"10.1109/3DV.2014.56","DOIUrl":"https://doi.org/10.1109/3DV.2014.56","url":null,"abstract":"Key point matching in images of indoor scenes traditionally employs features like SIFT, GIST and HOG. While those features work very well for two images related to each other by small camera transformations, we commonly observe a drop in performance for patches representing scene elements visualized from a very different perspective. Since increasing the space of considered local transformations for feature matching decreases their discriminative abilities, we propose a more global approach inspired by the recent success of monocular scene understanding. In particular we propose to reconstruct a box-like model of the scene from every single image and use it to rectify images before matching. We show that a monocular scene model reconstruction and rectification preceding standard feature matching significantly improves key point matching and dramatically improves reconstruction of difficult indoor scenes.","PeriodicalId":275516,"journal":{"name":"2014 2nd International Conference on 3D Vision","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115869832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 9
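The rectify-before-matching idea rests on a plane-to-plane homography. A minimal DLT sketch follows; the box-layout estimation that would supply the four face corners is assumed, not implemented, and the corner coordinates below are made up for illustration.

```python
import numpy as np

def homography(src, dst):
    """DLT: estimate H mapping each src point to dst (4+ point pairs)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    H = np.linalg.svd(np.array(rows))[2][-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pts):
    p = H @ np.vstack([np.asarray(pts, float).T, np.ones(len(pts))])
    return (p[:2] / p[2]).T

# toy usage: rectify a perspectively distorted wall face to a frontal
# 100x100 square before descriptor extraction and matching
wall_in_image = [(10, 20), (90, 35), (85, 95), (15, 80)]  # face corners
frontal = [(0, 0), (100, 0), (100, 100), (0, 100)]
H = homography(wall_in_image, frontal)
print(np.round(apply_h(H, wall_in_image)))   # ~ the frontal square
```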