Third International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT'06): Latest Publications

Line-Based Structure from Motion for Urban Environments
Grant Schindler, P. Krishnamurthy, F. Dellaert
DOI: 10.1109/3DPVT.2006.90 | Published: 2006-06-14
Abstract: We present a novel method for recovering the 3D-line structure of a scene from multiple widely separated views. Traditional optimization-based approaches to line-based structure from motion minimize the error between measured line segments and the projections of corresponding 3D lines. In such a case, 3D lines can be optimized using a minimum of 4 parameters. We show that this number of parameters can be further reduced by introducing additional constraints on the orientations of lines in a 3D scene. In our approach, 2D-lines are automatically detected in images with the assistance of an EM-based vanishing point estimation method which assumes the existence of edges along mutually orthogonal vanishing directions. Each detected line is automatically labeled with the orientation (e.g. vertical, horizontal) of the 3D line which generated the measurement, and it is this additional knowledge that we use to reduce the number of degrees of freedom of 3D lines during optimization. We present 3D reconstruction results for urban scenes based on manually established feature correspondences across images.
Citations: 118
Linking Feature Lines on 3D Triangle Meshes with Artificial Potential Fields
D. Page, A. Koschan, M. Abidi
DOI: 10.1109/3DPVT.2006.91 | Published: 2006-06-14
Abstract: We propose artificial potential fields as a support theory for a feature linking algorithm. This algorithm operates on 3D triangle meshes derived from multiple range scans of an object, and the features of interest are curvature extrema on the object's surface. A problem that arises with detecting these features is that results from standard algorithms are often incomplete in that feature lines are broken and discontinuous. Our novel linking algorithm closes these broken feature lines to form a more complete feature description. The main contribution of this algorithm is the use of artificial potential fields to govern the linking process. In this paper, we discuss the feature detection process itself and then define the linking procedure in the context of potential fields. We present results for both synthetic and scanned models.
Citations: 3
Angle Independent Bundle Adjustment Refinement
Jeffrey Zhang, Daniel G. Aliaga, M. Boutin, R. Insley
DOI: 10.1109/3DPVT.2006.30 | Published: 2006-06-14
Abstract: Obtaining a digital model of a real-world 3D scene is a challenging task pursued by computer vision and computer graphics. Given an initial approximate 3D model, a popular refinement process is to perform a bundle adjustment of the estimated camera position, camera orientation, and scene points. Unfortunately, simultaneously solving for both camera position and camera orientation is an ill-conditioned problem. To address this issue, we propose an improved, camera-orientation independent cost function that can be used instead of the standard bundle adjustment cost function. This yields a new bundle adjustment formulation which exhibits noticeably better numerical behavior, but at the expense of an increased computational cost. We alleviate the additional cost by automatically partitioning the dataset into smaller subsets. Minimizing our cost function for these subsets still achieves significant error reduction over standard bundle adjustment. We empirically demonstrate our formulation using several different size models and image sequences.
Citations: 6
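For context, the standard bundle adjustment cost that this paper's orientation-independent formulation improves on is the sum of squared reprojection errors over all observed points. A minimal Python sketch, assuming a single calibrated pinhole camera with unit focal length (function names are illustrative, not from the paper):

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector to rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def reprojection_cost(rvec, tvec, points3d, observations, f=1.0):
    """Standard bundle adjustment cost for one camera: sum of squared
    errors between observed points and projected 3D points."""
    R = rodrigues(rvec)
    cost = 0.0
    for X, uv in zip(points3d, observations):
        Xc = R @ X + tvec                             # world -> camera frame
        u, v = f * Xc[0] / Xc[2], f * Xc[1] / Xc[2]   # pinhole projection
        cost += (u - uv[0]) ** 2 + (v - uv[1]) ** 2
    return cost
```

In full bundle adjustment this cost is summed over all cameras and minimized jointly over `rvec`, `tvec`, and `points3d`; the paper's point is that the joint position/orientation minimization is ill-conditioned, motivating an orientation-independent cost instead.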
EKF-Based Recursive Dual Estimation of Structure & Motion from Stereo Data
Hongsheng Zhang, S. Negahdaripour
DOI: 10.1109/3DPVT.2006.55 | Published: 2006-06-14
Abstract: Extended Kalman filters (EKF) have been proposed to estimate ego-motion and to recursively update scene structure in the form of 3-D positions of selected prominent features from motion and stereo sequences. Previous methods typically accommodate no more than a few dozen features for real-time processing. To maintain motion estimation accuracy, this calls for high contrast images to compute image feature locations with precision. Within manmade environments, various prominent corner points exist that can be extracted and tracked with required accuracy. However, prominent features are more difficult to localize precisely in natural scenes. Statistically, more feature points become necessary to maintain the same level of motion estimation accuracy and robustness. However, this imposes a computational burden beyond the capability of EKF-based techniques for real-time processing. A sequential dual EKF estimator utilizing stereo data is proposed for improved computation efficiency. Two important issues, unbiased estimation and stochastic stability, are addressed. Furthermore, the dynamic feature set is handled in a more effective, efficient and robust way. Experimental results to demonstrate the merits of the new theoretical and algorithmic developments are presented.
Citations: 4
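As background only, the abstract builds on standard Kalman filtering machinery. Below is the scalar predict/update cycle at the core of any (extended) Kalman filter; the paper's dual estimator for structure and motion is far richer than this sketch, and all parameter values here are illustrative:

```python
def kf_step(x, P, z, F=1.0, H=1.0, Q=0.01, R=0.1):
    """One predict/update cycle of a scalar Kalman filter.
    x, P: state estimate and its variance; z: new measurement.
    F, H: state-transition and measurement models; Q, R: noise variances."""
    # Predict: propagate the state and inflate uncertainty by process noise.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new
```

An EKF follows the same cycle but linearizes nonlinear motion and measurement models around the current estimate at each step.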
Visual Shapes of Silhouette Sets
Jean-Sébastien Franco, M. Lapierre, Edmond Boyer
DOI: 10.1109/3DPVT.2006.148 | Published: 2006-06-14
Abstract: Shape from silhouette methods are extensively used to model dynamic and non-rigid objects using binary foreground-background images. Since the problem of reconstructing shapes from silhouettes is ambiguous, a number of solutions exist, and several approaches only consider the one with a maximal volume, called the visual hull. However, the visual hull is not always a good approximation of shapes, in particular when observing smooth surfaces with few cameras. In this paper, we consider instead a class of solutions to the silhouette reconstruction problem that we call visual shapes. Such a class includes the visual hull, but also better approximations of the observed shapes which can take into account local assumptions such as smoothness, among others. Our contributions with respect to existing works are, first, to identify silhouette-consistent shapes different from the visual hull, and second, to give a practical way to estimate such shapes in real time. Experiments on various sets of data, including human body silhouettes, are shown to illustrate the principle and the interest of visual shapes.
Citations: 47
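The visual hull mentioned in the abstract is the maximal shape consistent with all silhouettes, so it can over-estimate smooth surfaces seen from few cameras. A toy sketch of that membership test, assuming two orthographic cameras observing a unit sphere (all names illustrative, not from the paper):

```python
# Two orthographic cameras view a unit sphere: one along the z-axis,
# one along the x-axis. Each silhouette is then a unit disc.

def in_silhouette_z(p):
    """Camera looking down z: silhouette constrains (x, y)."""
    return p[0] ** 2 + p[1] ** 2 <= 1.0

def in_silhouette_x(p):
    """Camera looking down x: silhouette constrains (y, z)."""
    return p[1] ** 2 + p[2] ** 2 <= 1.0

def in_visual_hull(p):
    """A point belongs to the visual hull iff it projects inside
    every silhouette."""
    return in_silhouette_z(p) and in_silhouette_x(p)
```

Here the hull is the intersection of two cylinders, a strict over-approximation of the sphere: the point (0.9, 0, 0.9) lies outside the unit sphere yet inside the hull, which is exactly the failure mode the paper's visual shapes address.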
Direct and Indirect 3-D Reconstruction from Opti-Acoustic Stereo Imaging
H. Sekkati, S. Negahdaripour
DOI: 10.1109/3DPVT.2006.49 | Published: 2006-06-14
Abstract: Utilization of an acoustic camera for range measurements is a significant advantage for 3D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of visual and acoustic image correspondences is described in terms of conic sections and trigonometric functions. In this paper, we propose and analyze a number of methods based on direct and indirect approaches that provide insight on the merits of the new imaging and 3D object reconstruction paradigm. We have devised certain indirect methods, built on a regularization formulation, to first compute from noisy correspondences maximum likelihood estimates that satisfy the epipolar geometry. The 3D target points can then be determined from a number of closed-form solutions applied to these ML estimates. An alternative direct approach is also presented for 3D reconstruction directly from noisy correspondences. Computer simulations verify consistency between the analytical and experimental reconstruction SNRs, the criterion applied in performance assessment of these various solutions.
Citations: 10
Integrating LiDAR, Aerial Image and Ground Images for Complete Urban Building Modeling
Jinhui Hu, Suya You, U. Neumann
DOI: 10.1109/3DPVT.2006.82 | Published: 2006-06-14
Abstract: This paper presents a hybrid modeling system that fuses LiDAR data, an aerial image and ground view images for rapid creation of accurate building models. Outlines for complex building shapes are interactively extracted from a high-resolution aerial image, surface information is automatically fit with a primitive-based method from LiDAR data, and high-resolution ground view images are integrated into the model to generate fully textured CAD models. Our method benefits from the merits of each dataset, and evaluation results are presented on a university campus-size model.
Citations: 32
Extracting 3D Shape Features in Discrete Scale-Space
John Novatnack, K. Nishino, A. Shokoufandeh
DOI: 10.1109/3DPVT.2006.60 | Published: 2006-06-14
Abstract: 3D shape features are inherently scale-dependent. For instance, on a 3D model of a human body, the top of the head and a fingertip can both be detected as corner points, but at entirely different scales. In this paper, we present a method for extracting and integrating 3D shape features in the discrete scale-space of a triangular mesh model. We first parameterize the surface of the mesh model on a 2D plane and then construct a dense surface normal map. In general, the parametrization is not isometric. To account for this, we compute the relative stretch of the original edge lengths. Next, we compute a dense distortion map which is used to approximate the geodesic distances on the normal map. Then, we construct a discrete scale-space of the original 3D shape by successively convolving the normal map with distortion-adapted Gaussian kernels of increasing standard deviation. We derive corner and edge detectors to extract 3D features at each scale in the discrete scale-space. Furthermore, we show how to combine the detector responses from different scales to form a unified representation of the 3D features.
Citations: 13
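The scale-space construction in the abstract convolves a (distortion-adapted) normal map with Gaussian kernels of increasing standard deviation. A 1D analogue of that idea, assuming simple reflect padding (illustrative only, not the paper's distortion-adapted implementation):

```python
import numpy as np

def gaussian_blur_1d(signal, sigma):
    """Convolve a 1D signal with a normalized Gaussian kernel,
    using reflect padding so the output length matches the input."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    padded = np.pad(signal, radius, mode='reflect')
    return np.convolve(padded, kernel, mode='valid')

def scale_space(signal, sigmas):
    """Discrete scale-space: the same signal at successively coarser
    scales; fine features fade as sigma grows."""
    return [gaussian_blur_1d(signal, s) for s in sigmas]
```

Running feature detectors at each level of such a stack, then combining responses across levels, mirrors the paper's strategy of extracting corners and edges per scale and unifying them.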
Point Containment in Discrete Arbitrary Dimension
Luciano Silva
DOI: 10.1109/3DPVT.2006.108 | Published: 2006-06-14
Abstract: The point containment predicate, which specifies whether a point is part of a mathematically defined shape, is one of the most elementary operations in computer graphics and a natural way to perform many raster calculations. It plays an essential role in several important processes such as filling, stroking, anti-aliasing, geometric modeling and volume rendering. This paper presents a generalized point containment algorithm for arbitrary dimension discrete objects whose main characteristics are low complexity, simple data structures and suitability for hardware implementation.
Citations: 0
Automatic Registration of Multiple Range Images by the Local Log-Polar Range Images
T. Masuda
DOI: 10.1109/3DPVT.2006.35 | Published: 2006-06-14
Abstract: We propose a method for automatic registration of multiple range images by matching invariant feature vectors generated from the local log-polar range images. Point pairs are matched by finding the nearest neighbors of the invariant feature vectors. The correspondences are validated, and the pairwise transformations between the input range images are determined, by using the RANSAC algorithm. The registration of all input range images is determined by constructing the view tree of the input range images. The registration result of the proposed method is used as the initial value of a fine registration method for object shape modeling.
Citations: 4
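The abstract validates correspondences with the RANSAC algorithm: hypothesize a transformation from a minimal sample, count inliers, and keep the best hypothesis. A generic sketch of that loop, using a 2D translation model for brevity (the paper's setting is rigid 3D registration; all names are illustrative):

```python
import random
import numpy as np

def ransac_translation(src, dst, iters=200, tol=0.1, seed=0):
    """Generic RANSAC loop: repeatedly fit a model to a minimal sample
    of correspondences and keep the hypothesis with the most inliers.
    For a pure translation, one point pair is a minimal sample."""
    rng = random.Random(seed)
    best_t, best_inliers = None, []
    for _ in range(iters):
        i = rng.randrange(len(src))      # draw a minimal sample
        t = dst[i] - src[i]              # hypothesized translation
        inliers = [j for j in range(len(src))
                   if np.linalg.norm(src[j] + t - dst[j]) < tol]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = t, inliers
    return best_t, best_inliers
```

Outlier correspondences produce hypotheses supported by few inliers, so they are rejected automatically; the same principle validates the feature matches between range images in the paper, with a rigid transform replacing the translation.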