2016 Fourth International Conference on 3D Vision (3DV): Latest Publications

Shape Analysis with Anisotropic Windowed Fourier Transform
2016 Fourth International Conference on 3D Vision (3DV) Pub Date : 2016-10-01 DOI: 10.1109/3DV.2016.57
S. Melzi, E. Rodolà, U. Castellani, M. Bronstein
Abstract: We propose Anisotropic Windowed Fourier Transform (AWFT), a framework for localized space-frequency analysis of deformable 3D shapes. With AWFT, we are able to extract meaningful intrinsic localized orientation-sensitive structures on surfaces, and use them in applications such as shape segmentation, salient point detection, feature point description, and matching. Our method outperforms previous approaches in the considered applications.
Citations: 19
A 3D Reconstruction with High Density and Accuracy Using Laser Profiler and Camera Fusion System on a Rover
2016 Fourth International Conference on 3D Vision (3DV) Pub Date : 2016-10-01 DOI: 10.1109/3DV.2016.70
Ryoichi Ishikawa, Menandro Roxas, Yoshihiro Sato, Takeshi Oishi, T. Masuda, K. Ikeuchi
Abstract: 3D sensing systems mounted on mobile platforms are emerging and have been developed for various applications. In this paper, we propose a profiler scanning system mounted on a rover to scan and reconstruct a bas-relief with high density and accuracy. Our hardware system consists of an omnidirectional camera and a 3D laser scanner. Our method selects good projection points for tracking to estimate motion stably, and rejects mismatches caused by the difference between the positions of the laser scanner and the camera using an error metric based on the distance from the omnidirectional camera to the scanned point. We demonstrate that our results have better accuracy than a comparable approach. In addition to the local motion estimation method, we propose a global pose refinement method using multi-modal 2D-3D registration, and our results show good consistency between the reflectance image and the 2D RGB image.
Citations: 7
HS-Nets: Estimating Human Body Shape from Silhouettes with Convolutional Neural Networks
2016 Fourth International Conference on 3D Vision (3DV) Pub Date : 2016-10-01 DOI: 10.1109/3DV.2016.19
E. Dibra, H. Jain, Cengiz Oztireli, R. Ziegler, M. Gross
Abstract: We represent human body shape estimation from binary silhouettes or shaded images as a regression problem, and describe a novel method to tackle it using CNNs. Utilizing a parametric body model, we train CNNs to learn a global mapping from the input to shape parameters used to reconstruct the shapes of people, in neutral poses, with the application of garment fitting in mind. This results in an accurate, robust and automatic system, orders of magnitude faster than methods we compare to, enabling interactive applications. In addition, we show how to combine silhouettes from two views to improve prediction over a single view. The method is extensively evaluated on thousands of synthetic shapes and real data and compared to state-of-the-art approaches, clearly outperforming methods based on global fitting and strongly competing with more expensive local-fitting-based ones.
Citations: 81
Point Cloud Noise and Outlier Removal for Image-Based 3D Reconstruction
2016 Fourth International Conference on 3D Vision (3DV) Pub Date : 2016-10-01 DOI: 10.1109/3DV.2016.20
Katja Wolff, Changil Kim, H. Zimmer, Christopher Schroers, M. Botsch, O. Sorkine-Hornung, A. Sorkine-Hornung
Abstract: Point sets generated by image-based 3D reconstruction techniques are often much noisier than those obtained using active techniques like laser scanning. Therefore, they pose greater challenges to the subsequent surface reconstruction (meshing) stage. We present a simple and effective method for removing noise and outliers from such point sets. Our algorithm uses the input images and corresponding depth maps to remove pixels which are geometrically or photometrically inconsistent with the colored surface implied by the input. This allows standard surface reconstruction methods (such as Poisson surface reconstruction) to perform less smoothing and thus achieve higher quality surfaces with more features. Our algorithm is efficient, easy to implement, and robust to varying amounts of noise. We demonstrate the benefits of our algorithm in combination with a variety of state-of-the-art depth and surface reconstruction methods.
Citations: 89
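The geometric-consistency test described in the abstract above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it projects candidate 3D points into a single view with an assumed pinhole camera (intrinsics K, rotation R, translation t) and keeps only points whose depth agrees with that view's depth map within a relative tolerance.

```python
import numpy as np

def geometric_consistency_mask(points, K, R, t, depth_map, tol=0.05):
    """Keep points whose depth, when projected into a view, agrees with
    that view's depth map within a relative tolerance tol (illustrative)."""
    # transform world points into the camera frame
    pc = (R @ points.T + t.reshape(3, 1)).T          # (N, 3)
    z = pc[:, 2]
    uv = (K @ pc.T).T                                # homogeneous pixel coords
    u = np.round(uv[:, 0] / z).astype(int)
    v = np.round(uv[:, 1] / z).astype(int)
    h, w = depth_map.shape
    inside = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    mask = np.zeros(len(points), dtype=bool)
    idx = np.where(inside)[0]
    d = depth_map[v[idx], u[idx]]
    # consistent if the point's depth matches the observed depth
    mask[idx] = np.abs(d - z[idx]) <= tol * d
    return mask
```

In the paper this check runs over all input views (plus a photometric test against the images); the sketch shows a single view only.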
Energy-Based Global Ternary Image for Action Recognition Using Sole Depth Sequences
2016 Fourth International Conference on 3D Vision (3DV) Pub Date : 2016-10-01 DOI: 10.1109/3DV.2016.14
Mengyuan Liu, Hong Liu, Chen Chen, M. Najafian
Abstract: In order to efficiently recognize actions from depth sequences, we propose a novel feature, called Global Ternary Image (GTI), which implicitly encodes both motion regions and motion directions between consecutive depth frames by recording the changes of depth pixels. In this study, each pixel in GTI indicates one of three possible states, namely positive, negative and neutral, which represent increased, decreased and unchanged depth values, respectively. Since GTI is sensitive to the subject's speed, we obtain energy-based GTI (E-GTI) by extracting GTI from pairwise depth frames with equal motion energy. To involve temporal information among depth frames, we extract E-GTI using multiple settings of motion energy. Noise can be effectively suppressed by describing E-GTIs using the Radon Transform (RT). The 3D action representation is formed by feeding the hierarchical combination of RTs to the Bag of Visual Words (BoVW) model. Extensive experiments on four benchmark datasets, namely MSRAction3D, DHA, MSRGesture3D and SKIG, show that the hierarchical E-GTI outperforms existing methods in 3D action recognition. We also tested our approach on the extended MSRAction3D dataset to further verify its robustness against partial occlusions, noise and speed variations.
Citations: 12
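The ternary encoding described in the abstract above (positive, negative, or neutral depth change between two frames) can be sketched directly. The threshold `tau` and the function name are illustrative additions, not from the paper:

```python
import numpy as np

def global_ternary_image(depth_prev, depth_next, tau=0.0):
    """Encode per-pixel depth change between two depth frames as a
    ternary state: +1 (depth increased), -1 (decreased), 0 (unchanged
    within tolerance tau). Sketch of the GTI idea, not the paper's code."""
    diff = depth_next.astype(np.float64) - depth_prev.astype(np.float64)
    gti = np.zeros(diff.shape, dtype=np.int8)
    gti[diff > tau] = 1      # positive: depth increased
    gti[diff < -tau] = -1    # negative: depth decreased
    return gti               # neutral pixels stay 0
```

E-GTI would then apply this not to consecutive frames but to frame pairs chosen so each pair carries equal motion energy.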
A Hybrid Structure/Trajectory Constraint for Visual SLAM
2016 Fourth International Conference on 3D Vision (3DV) Pub Date : 2016-10-01 DOI: 10.1109/3DV.2016.12
Angélique Loesch, S. Bourgeois, V. Gay-Bellile, M. Dhome
Abstract: This paper presents a hybrid structure/trajectory constraint that uses the output camera poses of a model-based tracker for object localization with a SLAM algorithm. This constraint takes into account the structure information given by a CAD model while relying on the formalism of trajectory constraints. It has the advantages of being compact in memory and of accelerating the SLAM optimization process. The accuracy and robustness of the resulting localization, as well as the memory and time gains, are evaluated on synthetic and real data. Videos are available as supplementary material.
Citations: 2
Robust Real-Time 3D Face Tracking from RGBD Videos under Extreme Pose, Depth, and Expression Variation
2016 Fourth International Conference on 3D Vision (3DV) Pub Date : 2016-10-01 DOI: 10.1109/3DV.2016.54
Hai Xuan Pham, V. Pavlovic
Abstract: We introduce a novel end-to-end real-time pose-robust 3D face tracking framework for RGBD videos, capable of tracking head pose and facial actions simultaneously in unconstrained environments without intervention or pre-calibration from a user. In particular, we emphasize tracking the head pose from profile to profile and improving tracking performance in challenging instances, where the tracked subject is at a considerably large distance from the camera and the quality of the data deteriorates severely. To achieve these goals, the tracker is guided by an efficient multi-view 3D shape regressor, trained on generic RGB datasets, which is able to predict model parameters despite large head rotations or tracking range. Specifically, the shape regressor is made aware of the head pose by inferring the visibility of particular facial landmarks through a joint regression-classification local random forest framework, and piecewise linear regression models effectively map visibility features into shape parameters. In addition, the regressor is combined with a joint 2D+3D optimization that sparsely exploits depth information to further refine shape parameters and maintain tracking accuracy over time. The result is a robust online RGBD 3D face tracker that can model extreme head poses and facial expressions accurately in challenging scenes, as demonstrated in our extensive experiments.
Citations: 9
CNN-Based Object Segmentation in Urban LIDAR with Missing Points
2016 Fourth International Conference on 3D Vision (3DV) Pub Date : 2016-10-01 DOI: 10.1109/3DV.2016.51
Allan Zelener, I. Stamos
Abstract: We examine the task of point-level object segmentation in outdoor urban LIDAR scans. A key challenge in this area is the problem of missing points in the scans due to technical limitations of the LIDAR sensors. Our core contributions are demonstrating the benefit of reframing the segmentation task over the scan acquisition grid, as opposed to considering only the acquired 3D point cloud, and developing a pipeline for training and applying a convolutional neural network to accomplish this segmentation on large-scale LIDAR scenes. By labeling missing points in the scanning grid, we show that we can train our classifier to achieve a more accurate and complete segmentation mask for the vehicle object category, which is particularly prone to missing points. Additionally, we show that the choice of input feature maps to the CNN significantly affects the accuracy of the segmentation, and these features should be chosen to fully encapsulate the 3D scene structure. We evaluate our model on a LIDAR dataset collected by Google Street View cars over a large area of New York City.
Citations: 19
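The idea of operating on the scan acquisition grid with explicitly marked missing points can be illustrated as below. The specific channel layout (x, y, z, range, validity mask) is an assumption for illustration, not necessarily the authors' exact choice of input feature maps:

```python
import numpy as np

def lidar_feature_maps(range_grid, xyz_grid):
    """Stack per-cell features over the scan acquisition grid (H, W):
    x, y, z coordinates, range, plus a binary validity channel marking
    missing returns (NaN range). Illustrative layout only."""
    valid = np.isfinite(range_grid).astype(np.float64)   # 1 = return, 0 = missing
    r = np.nan_to_num(range_grid, nan=0.0)               # zero-fill missing ranges
    xyz = np.nan_to_num(xyz_grid, nan=0.0)               # (H, W, 3)
    return np.concatenate([xyz, r[..., None], valid[..., None]], axis=-1)
```

A CNN can then consume this (H, W, 5) tensor directly, with the validity channel telling the network where points are missing rather than silently absent.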
Matching Deformable Objects in Clutter
2016 Fourth International Conference on 3D Vision (3DV) Pub Date : 2016-10-01 DOI: 10.1109/3DV.2016.10
L. Cosmo, E. Rodolà, Jonathan Masci, A. Torsello, M. Bronstein
Abstract: We consider the problem of deformable object detection and dense correspondence in cluttered 3D scenes. A key ingredient of our method is the choice of representation: we formulate the problem in the spectral domain using the functional maps framework, where we seek the most regular nearly-isometric parts in the model and the scene that minimize correspondence error. The problem is initialized by solving a sparse relaxation of a quadratic assignment problem on features obtained via data-driven metric learning. The resulting matching pipeline is solved efficiently, and yields accurate results in challenging settings that were previously left unexplored in the literature.
Citations: 55
Comparison of Radial and Tangential Geometries for Cylindrical Panorama
2016 Fourth International Conference on 3D Vision (3DV) Pub Date : 2016-10-01 DOI: 10.1109/3DV.2016.81
F. Amjadi, S. Roy
Abstract: This paper presents a new approach which builds 360-degree cylindrical panoramic images from multiple cameras. In order to ensure a perceptually correct result, mosaicing typically requires either a planar or near-planar scene, parallax-free camera motion between source frames, or a dense sampling of the scene. When these conditions are not satisfied, various artifacts may appear, and many algorithms exist to overcome these problems. We propose a panoramic setup where cameras are placed evenly around a circle. Instead of looking outward, which is the traditional configuration, we propose to make the optical axes tangent to the camera circle, a "tangential" configuration. We demonstrate that this configuration is very insensitive to depth estimation, which reduces stitching artifacts. This property is only limited by the fact that tangential cameras usually occlude each other along the circle. Besides an analysis and comparison of radial and tangential geometries, we provide an experimental setup with real panoramas obtained in realistic conditions.
Citations: 1
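The tangential configuration described in the abstract above can be sketched in 2D: cameras sit evenly on a circle, and each optical axis is rotated 90 degrees from the radial (outward) direction so it is tangent to the circle. The function name and pose representation are illustrative:

```python
import numpy as np

def tangential_camera_poses(n, radius):
    """Place n cameras evenly on a circle of the given radius.
    In the tangential configuration, each optical axis is perpendicular
    to the radial direction, i.e. tangent to the camera circle."""
    poses = []
    for k in range(n):
        theta = 2.0 * np.pi * k / n
        center = radius * np.array([np.cos(theta), np.sin(theta)])
        # rotating the radial direction (cos, sin) by 90 degrees
        # gives the tangent direction (-sin, cos)
        axis = np.array([-np.sin(theta), np.cos(theta)])
        poses.append((center, axis))
    return poses
```

For the traditional radial setup, the axis would instead equal the outward radial direction; every axis here is orthogonal to it.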