Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05): Latest Publications

3D modeling of outdoor environments by integrating omnidirectional range and color images
T. Asai, M. Kanbara, N. Yokoya
{"title":"3D modeling of outdoor environments by integrating omnidirectional range and color images","authors":"T. Asai, M. Kanbara, N. Yokoya","doi":"10.1109/3DIM.2005.3","DOIUrl":"https://doi.org/10.1109/3DIM.2005.3","url":null,"abstract":"This paper describes a 3D modeling method for wide area outdoor environments which is based on integrating omnidirectional range and color images. In the proposed method, outdoor scenes can be efficiently digitized by an omnidirectional laser rangefinder which can obtain a 3D shape with high-accuracy and by an omnidirectional multi-camera system (OMS) which can capture a high-resolution color image. Multiple range images are registered by minimizing the distances between corresponding points in the different range images. In order to register multiple range images stably, points on plane portions detected from the range data are used in registration process. The position and orientation acquired by RTK-GPS and gyroscope are used as initial values of simultaneous registration. The 3D model obtained by registration of range data is mapped by textures selected from omnidirectional images in consideration of the resolution of texture and occlusions of the model. In experiments, we have carried out 3D modeling of our campus with the proposed method.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129781048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 29
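The registration step in this abstract aligns overlapping range scans by minimizing distances between corresponding points, starting from an initial pose (RTK-GPS and gyroscope in the paper). The sketch below is a minimal point-to-point ICP loop in Python/NumPy built around that general idea; the SVD alignment, the convergence test, and all function names are illustrative assumptions, not the authors' implementation (which additionally restricts correspondences to planar portions).

```python
import numpy as np
from scipy.spatial import cKDTree  # nearest-neighbour search


def best_fit_rigid(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs


def icp(src, dst, R0=np.eye(3), t0=np.zeros(3), iters=30, tol=1e-6):
    """Align point set `src` to `dst`, starting from an initial pose (e.g. GPS/gyro)."""
    R, t = R0, t0
    tree = cKDTree(dst)
    prev_err = np.inf
    for _ in range(iters):
        moved = src @ R.T + t
        dist, idx = tree.query(moved)   # closest-point correspondences
        R_d, t_d = best_fit_rigid(moved, dst[idx])
        R, t = R_d @ R, R_d @ t + t_d   # accumulate the incremental pose
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t
```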
Uncalibrated multiple image stereo system with arbitrarily movable camera and projector for wide range scanning
Ryo Furukawa, Hiroshi Kawasaki
{"title":"Uncalibrated multiple image stereo system with arbitrarily movable camera and projector for wide range scanning","authors":"Furukawa Ryo, Hiroshi Kawasaki","doi":"10.1109/3DIM.2005.80","DOIUrl":"https://doi.org/10.1109/3DIM.2005.80","url":null,"abstract":"In this paper, we propose an uncalibrated, multi-image 3D reconstruction, using coded structured light. Normally, a conventional coded structured light system consists of a camera and a projector and needs precalibration before scanning. Since the camera and the projector have to be fixed after calibration, reconstruction of a wide area of the scene or reducing occlusions by multiple scanning are difficult and sometimes impossible. In the proposed method, multiple scanning while moving the camera or the projector is possible by applying the uncalibrated stereo method, thereby achieving a multi-image 3D reconstruction. As compared to the conventional coded structured light method, our system does not require calibration of extrinsic camera parameters, occlusions are reduced, and a wide area of the scene can be acquired. As compared to image-based multi-image reconstruction, the proposed system can obtain dense shape data with higher precision. As a result of these advantages, users can freely move either the cameras or projectors to scan a wide range of objects, but not if both the camera and the projector are moved at the same time.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130685819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 35
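Uncalibrated stereo of the kind used above recovers structure only up to a projective transformation: estimate the fundamental matrix from correspondences, build a canonical camera pair, and triangulate. The sketch below illustrates that generic chain with OpenCV and NumPy; it is an assumption-laden stand-in rather than the authors' pipeline, and the correspondences themselves (decoded from the structured-light codes) are taken as given.

```python
import numpy as np
import cv2


def projective_reconstruction(pts1, pts2):
    """Projective 3D points from Nx2 correspondences between two uncalibrated views."""
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    inl = mask.ravel().astype(bool)
    p1, p2 = pts1[inl], pts2[inl]

    # Canonical camera pair: P1 = [I | 0], P2 = [[e']x F | e'], with F^T e' = 0.
    e2 = np.linalg.svd(F.T)[2][-1]
    e2 = e2 / np.linalg.norm(e2)
    ex = np.array([[0, -e2[2], e2[1]],
                   [e2[2], 0, -e2[0]],
                   [-e2[1], e2[0], 0]])
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([ex @ F, e2.reshape(3, 1)])

    X = cv2.triangulatePoints(P1, P2, p1.T.astype(float), p2.T.astype(float))
    return (X[:3] / X[3]).T              # dehomogenize; coordinates are projective only
```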
Multiresolution interactive modeling with efficient visualization
J. Deschênes, P. Hébert, Philippe Lambert, Jean-Nicolas Ouellet, D. Tubic
{"title":"Multiresolution interactive modeling with efficient visualization","authors":"J. Deschênes, P. Hébert, Philippe Lambert, Jean-Nicolas Ouellet, D. Tubic","doi":"10.1109/3DIM.2005.59","DOIUrl":"https://doi.org/10.1109/3DIM.2005.59","url":null,"abstract":"3D interactive modeling from range data aims at simultaneously producing and visualizing the surface model of an object while data is collected. The current research challenge is producing the final result in real-time. Using a recently proposed framework, a surface model is built in a volumetric structure encoding a vector field in the neighborhood of the object surface. In this paper, it is shown that the framework allows one to locally control the model resolution during acquisition. Using ray tracing, efficient visualization approaches of the multiresolution vector field are described and compared. More precisely, it is shown that volume traversal can be optimized while preventing holes and reducing aliasing in the rendered image.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129812519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
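Efficient visualization of the volumetric vector field depends on stepping each viewing ray through only the voxels it actually crosses. The following is a minimal single-resolution 3D DDA (Amanatides-Woo style) traversal in Python to illustrate that idea; the multiresolution handling, hole prevention, and anti-aliasing discussed in the abstract are not shown, and all names are assumptions.

```python
import numpy as np


def traverse_grid(origin, direction, grid_shape, voxel_size=1.0):
    """Yield integer voxel indices along a ray through a regular grid (3D DDA)."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    pos = np.asarray(origin, float) / voxel_size
    voxel = np.floor(pos).astype(int)
    step = np.where(d >= 0, 1, -1)
    # Ray parameter of the next voxel boundary on each axis, and its per-cell increment.
    next_bound = voxel + (step > 0)
    t_max = np.where(d != 0, (next_bound - pos) / d, np.inf)
    t_delta = np.where(d != 0, np.abs(1.0 / d), np.inf)

    while np.all(voxel >= 0) and np.all(voxel < grid_shape):
        yield tuple(voxel)               # visit this voxel (e.g. sample the vector field)
        axis = int(np.argmin(t_max))     # axis whose boundary is crossed first
        voxel[axis] += step[axis]
        t_max[axis] += t_delta[axis]
```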
Efficient discovery service for a digital library of 3D models
H. Anan, K. Maly, M. Zubair
{"title":"Efficient discovery service for a digital library of 3D models","authors":"H. Anan, K. Maly, M. Zubair","doi":"10.1109/3DIM.2005.34","DOIUrl":"https://doi.org/10.1109/3DIM.2005.34","url":null,"abstract":"Many geographically distributed experts in different areas such as medical imaging, e-commerce, and digital museums, are in need of 3D models. Although 3D models are becoming widely available due to the recent technological advancement and modeling tools, we lack a digital library system where they can be searched and retrieved efficiently. In this paper, we focus on an efficient discovery service consisting of multilevel hierarchical browsing service that enables users to navigate large sets of 3D models. For this purpose, we use shape based clustering to abstract a large set of 3D models to a small set of representative models (key models). Our service applies clustering recursively to limit the number of key models that a user views at a time. Clustering is derived from metrics that are based on a concept of compression and similarity computation using surface signatures. Signatures are the two-dimensional representations of a 3D model and they can be used to define similarity between 3D models. We integrated the proposed browsing capability with 3DLIB, (a digital library for 3D models that we are building at Old Dominion University), and evaluated the proposed browsing service using the Princeton Shape Benchmark (PSB). Our evaluation shows significant better precision and recall as compared to other approaches.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127297691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
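The browsing service abstracts a large collection into a few representative "key models" by clustering signature-derived feature vectors and recursing into each cluster. Below is a minimal recursive k-means sketch in NumPy built on that assumption; the actual signature metric, the choice of k, and the function names are illustrative, not the 3DLIB implementation. A caller might build the tree with something like `build_hierarchy(signatures, np.arange(len(signatures)))`.

```python
import numpy as np


def kmeans(X, k, iters=50, seed=0):
    """Plain k-means; returns labels and the index of the member closest to each centroid."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    key_idx = [int(np.argmin(np.linalg.norm(X - c, axis=1))) for c in centers]
    return labels, key_idx


def build_hierarchy(X, ids, k=5, min_size=10):
    """Recursive browsing tree: each node holds key-model ids, children refine clusters."""
    ids = np.asarray(ids)
    if len(ids) <= min_size:
        return {"keys": ids.tolist(), "children": []}
    labels, key_idx = kmeans(X[ids], min(k, len(ids)))
    node = {"keys": [int(ids[i]) for i in key_idx], "children": []}
    for j in set(labels.tolist()):
        member_ids = ids[labels == j]
        if 1 < len(member_ids) < len(ids):   # guard against degenerate recursion
            node["children"].append(build_hierarchy(X, member_ids, k, min_size))
    return node
```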
Robust and accurate partial surface registration based on variational implicit surfaces for automatic 3D model building
P. Claes, D. Vandermeulen, L. Gool, P. Suetens
{"title":"Robust and accurate partial surface registration based on variational implicit surfaces for automatic 3D model building","authors":"P. Claes, D. Vandermeulen, L. Gool, P. Suetens","doi":"10.1109/3DIM.2005.70","DOIUrl":"https://doi.org/10.1109/3DIM.2005.70","url":null,"abstract":"Three-dimensional models are often assembled from several partial reconstructions from unknown viewpoints. In order to provide a fully automatic, robust and accurate method for aligning and integrating partial reconstructions without any prior knowledge of the relative viewpoints of the sensor or the geometry of the imaging process, we propose a 4-step registration and integration algorithm based on a common Variational Implicit Surface (VIS) representation of the partial surface reconstructions. First, a global crude registration without a priori knowledge is performed followed by a pose refinement of partial reconstruction pairs. Pair-wise registrations are converted into a multi-view registration, before a final integration of the reconstructions into one entity or model occurs. Furthermore, making use of the smoothing properties of the VIS representations, the algorithm proves to be robust against noise in the reconstruction data. Experimental results on real-live, as well as noiseless and noisy simulated data are presented to show the feasibility, the accuracy and robustness of our registration scheme.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115200518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 9
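A variational implicit surface represents a scan as the zero set of a radial-basis-function interpolant constrained to 0 at surface points and to a small positive value at points offset along the normals, which is what gives the representation its smoothing behaviour. The sketch below fits such an interpolant with an r^3 kernel plus a linear polynomial in NumPy; the kernel choice, offset construction, and names are assumptions for illustration, not the paper's formulation.

```python
import numpy as np


def fit_vis(points, normals, eps=0.01):
    """Fit f(x) = sum_i w_i |x - c_i|^3 + a.x + b with f = 0 on-surface and f = eps off-surface."""
    centers = np.vstack([points, points + eps * normals])
    values = np.concatenate([np.zeros(len(points)), np.full(len(points), eps)])
    n = len(centers)

    # RBF block plus linear polynomial block (may be ill-conditioned for coincident centres).
    r = np.linalg.norm(centers[:, None] - centers[None], axis=2)
    A = r ** 3
    P = np.hstack([centers, np.ones((n, 1))])
    K = np.block([[A, P], [P.T, np.zeros((4, 4))]])
    rhs = np.concatenate([values, np.zeros(4)])
    sol = np.linalg.solve(K, rhs)
    w, poly = sol[:n], sol[n:]

    def f(x):
        """Evaluate the implicit function at one or more query points."""
        x = np.atleast_2d(x)
        rb = np.linalg.norm(x[:, None] - centers[None], axis=2) ** 3
        return rb @ w + x @ poly[:3] + poly[3]

    return f
```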
Relighting acquired models of outdoor scenes
Alejandro J. Troccoli, P. Allen
{"title":"Relighting acquired models of outdoor scenes","authors":"Alejandro J. Troccoli, P. Allen","doi":"10.1109/3DIM.2005.69","DOIUrl":"https://doi.org/10.1109/3DIM.2005.69","url":null,"abstract":"In this paper we introduce a relighting algorithm for diffuse outdoor scenes that enables us to create geometrically correct and illumination consistent models from a series of range scans and a set of overlapping photographs that have been taken under different illumination conditions. To perform the relighting we compute a set of mappings from the overlap region of two images. We call these mappings irradiance ratio maps (IRMs). Our algorithm handles cast shadows, being able to relight shadowed regions into non-shadowed regions and vice-versa. We solve these cases by computing four different IRMs, to handle all four combinations of shadowed vs. non-shadowed surfaces. To relight the non-overlapping region of an image, we look into the appropriate IRM which we index on surface normal, and apply its value to the corresponding pixels. The result is an illumination consistent set of images.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"201 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115701262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 13
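The relighting idea is to learn, from pixels visible in both photographs, a per-surface-normal ratio between the two illumination conditions and then apply that ratio where only one photograph covers the surface. The sketch below builds and applies one such normal-indexed ratio map in NumPy for a single (non-shadowed) case, whereas the paper maintains four IRMs for the shadow combinations; the binning scheme, median estimator, and names are assumptions.

```python
import numpy as np


def build_irm(img_a, img_b, normals, overlap_mask, bins=16):
    """Ratio img_b / img_a, averaged per surface-normal elevation bin over the overlap region."""
    theta = np.arccos(np.clip(normals[..., 2], -1.0, 1.0))   # bin normals by elevation angle
    idx = np.minimum((theta / np.pi * bins).astype(int), bins - 1)
    irm = np.ones(bins)
    for b in range(bins):
        m = overlap_mask & (idx == b) & (img_a > 1e-3)
        if m.any():
            irm[b] = np.median(img_b[m] / img_a[m])
    return irm, idx


def relight(img_a, irm, idx, target_mask):
    """Map image A into B's illumination where B has no coverage."""
    out = img_a.copy()
    out[target_mask] = img_a[target_mask] * irm[idx[target_mask]]
    return out
```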
Non-parametric 3D surface completion
T. Breckon, Robert B. Fisher
{"title":"Non-parametric 3D surface completion","authors":"T. Breckon, Robert B. Fisher","doi":"10.1109/3DIM.2005.61","DOIUrl":"https://doi.org/10.1109/3DIM.2005.61","url":null,"abstract":"We consider the completion of the hidden or missing portions of 3D objects after the visible portions have been acquired with 2 1/2 D (or 3D) range capture. Our approach uses a combination of global surface fitting, to derive the underlying geometric surface completion, together with an extension, from 2D to 3D, of nonparametric texture synthesis in order to complete localised surface texture relief and structure. Through this combination and adaptation of existing completion techniques we are able to achieve realistic, plausible completion of 2 1/2 D range captures.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125344654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26
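The non-parametric part of the completion copies, for each missing cell, the best-matching neighbourhood found in the visible relief, in the spirit of Efros-Leung synthesis extended by the authors from 2D to 3D. The 2D height-map sketch below illustrates only that copying step, not the global surface fit or the 3D extension; the window size, fill order, and names are assumptions.

```python
import numpy as np


def synthesize_fill(height, known, win=5):
    """Fill unknown interior height-map cells with the centre value of the best-matching
    fully-known window elsewhere in the map (Efros-Leung style, greatly simplified)."""
    h, k = height.astype(float).copy(), known.copy()
    r = win // 2
    H, W = h.shape
    # Candidate source windows taken entirely from the originally known relief.
    src = [(i, j) for i in range(r, H - r) for j in range(r, W - r)
           if known[i - r:i + r + 1, j - r:j + r + 1].all()]
    assert src, "need at least one fully known window to copy from"

    while not k[r:H - r, r:W - r].all():
        # Pick the unknown interior pixel with the most already-known neighbours.
        ys, xs = np.where(~k[r:H - r, r:W - r])
        counts = [k[y:y + win, x:x + win].sum() for y, x in zip(ys, xs)]
        b = int(np.argmax(counts))
        y, x = ys[b] + r, xs[b] + r
        patch = h[y - r:y + r + 1, x - r:x + r + 1]
        mask = k[y - r:y + r + 1, x - r:x + r + 1]
        # Compare candidates only on the known part of the neighbourhood.
        dists = [(((patch - h[i - r:i + r + 1, j - r:j + r + 1])[mask] ** 2).sum(), i, j)
                 for i, j in src]
        _, bi, bj = min(dists)
        h[y, x], k[y, x] = h[bi, bj], True
    return h
```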
Digitizing archaeological excavations from multiple views
Xenophon Zabulis, Alexander Patterson, Kostas Daniilidis
{"title":"Digitizing archaeological excavations from multiple views","authors":"Xenophon Zabulis, Alexander Patterson, Kostas Daniilidis","doi":"10.1109/3DIM.2005.32","DOIUrl":"https://doi.org/10.1109/3DIM.2005.32","url":null,"abstract":"We present a novel approach on digitizing large scale unstructured environments like archaeological excavations using off-the-shelf digital still cameras. The cameras are calibrated with respect to few markers captured by a theodolite system. Having all cameras registered in the same coordinate system enables a volumetric approach. Our new algorithm has as input multiple calibrated images and outputs an occupancy voxel space where occupied pixels have a local orientation and a confidence value. Both, orientation and confidence facilitate an efficient rendering and texture mapping of the resulting point cloud. Our algorithm combines the following new features: Images are back-projected to hypothesized local patches in the world and correlated on these patches yielding the best orientation. Adjacent cameras build tuples which yield a product of pair-wise correlations, called strength. Multiple camera tuples compete each other for the best strength for each voxel. A voxel is regarded as occupied if strength is maximum along the normal. Unlike other multi-camera algorithms using silhouettes, photoconsistency, or global correspondence, our algorithm makes no assumption on camera locations being outside the convex hull of the scene. We present compelling results of outdoors excavation areas.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123723311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 9
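Each voxel hypothesis is scored by back-projecting an oriented patch into the calibrated images and multiplying pairwise normalized cross-correlations. The sketch below computes such a strength for one voxel and one set of cameras using a simple pinhole projection and nearest-pixel sampling; the tuple selection, the search over orientations, and all names are simplified assumptions rather than the paper's algorithm.

```python
import numpy as np


def sample_patch(image, P, pts3d):
    """Project 3D patch points with camera matrix P (3x4) and sample the image (nearest pixel)."""
    X = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    x = P @ X.T
    u, v = x[0] / x[2], x[1] / x[2]
    ui = np.clip(np.round(u).astype(int), 0, image.shape[1] - 1)
    vi = np.clip(np.round(v).astype(int), 0, image.shape[0] - 1)
    return image[vi, ui].astype(float)


def ncc(a, b):
    """Normalized cross-correlation of two intensity vectors."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 1e-9 else 0.0


def voxel_strength(images, Ps, center, normal, size=0.05, n=5):
    """Product of pairwise NCCs of an oriented square patch hypothesised at `center`."""
    center = np.asarray(center, float)
    normal = np.asarray(normal, float) / np.linalg.norm(normal)
    # Build an orthonormal basis of the tangent plane at the voxel.
    a = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(normal, a); u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    g = np.linspace(-size / 2, size / 2, n)
    pts = np.array([center + s * u + t * v for s in g for t in g])

    samples = [sample_patch(img, P, pts) for img, P in zip(images, Ps)]
    strength = 1.0
    for i in range(len(samples)):
        for j in range(i + 1, len(samples)):
            strength *= max(ncc(samples[i], samples[j]), 0.0)
    return strength
```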
Automatic burr detection on surfaces of revolution based on adaptive 3D scanning
Kasper Claes, T. Koninckx, H. Bruyninckx
{"title":"Automatic burr detection on surfaces of revolution based on adaptive 3D scanning","authors":"Kasper Claes, T. Koninckx, H. Bruyninckx","doi":"10.1109/3DIM.2005.21","DOIUrl":"https://doi.org/10.1109/3DIM.2005.21","url":null,"abstract":"This paper describes how to automatically extract the presence and location of geometrical irregularities on a surface of revolution. To this end a partial 3D scan of the workpiece under consideration is acquired by structured light ranging. The application we focus on is the detection and removal of burrs on industrial workpieces. Cylindrical metallic objects cause a strong specular reflection in every direction. These highlights are compensated for in the projected patterns, hence 'adaptive 3D scanning'. The triangular mesh produced is then used to identify the axis and generatrix of the corresponding surface of revolution. The search space for finding this axis is four dimensional: a valid choice of parameters is two orientation angles (as in spherical coordinates) and the 2D intersection point with the plane spanned by two out of three axis of the local coordinate system. For finding the axis we test the circularity of the planar intersections of the mesh in different directions, using statistical estimation methods to deal with noise. Finally the 'ideal' generatrix derived from the scan data is compared to the real surface topology. The difference identifies the burr. The algorithm is demonstrated on a metal wheel that has burrs on both sides. Visual servoing of a robotic arm based on this detection is work in progress.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115650204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
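The axis search repeatedly asks how circular a planar cross-section of the mesh is. The helpers below implement that inner test as an algebraic (Kasa) least-squares circle fit with an RMS radial residual as the circularity score; the 4D axis search itself and the robust statistics mentioned in the abstract are not shown, and the names are assumptions.

```python
import numpy as np


def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit; returns centre, radius, RMS radial residual."""
    x, y = xy[:, 0], xy[:, 1]
    # (x-cx)^2 + (y-cy)^2 = r^2  <=>  2*cx*x + 2*cy*y + c = x^2 + y^2, c = r^2 - cx^2 - cy^2.
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    resid = np.sqrt(np.mean((np.hypot(x - cx, y - cy) - r) ** 2))
    return (cx, cy), r, resid


def circularity_score(section_points):
    """Lower is more circular: radial residual normalised by the fitted radius."""
    _, r, resid = fit_circle(section_points)
    return resid / r if r > 0 else np.inf
```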
Coordination of appearance and motion data for virtual view generation of traditional dances
Y. Kamon, R. Yamane, Y. Mukaigawa, Takeshi Shakunaga
{"title":"Coordination of appearance and motion data for virtual view generation of traditional dances","authors":"Y. Kamon, R. Yamane, Y. Mukaigawa, Takeshi Shakunaga","doi":"10.1109/3DIM.2005.28","DOIUrl":"https://doi.org/10.1109/3DIM.2005.28","url":null,"abstract":"A novel method is proposed for virtual view generation of traditional dances. In the proposed framework, a traditional dance is captured separately for appearance registration and motion registration. By coordinating the appearance and motion data, we can easily control virtual camera motion within a dancer-centered coordinate system. For this purpose, a coordination problem should be solved between the appearance and motion data, since they are captured separately and the dancer moves freely in the room. The present paper shows a practical algorithm to solve it. A set of algorithms are also provided for appearance and motion registration, and virtual view generation from archived data. In the appearance registration, a 3D human shape is recovered in each time from a set of input images after suppressing their backgrounds. By combining the recovered 3D shape and a set of images for each time, we can compose archived dance data. In the motion registration, stereoscopic tracking is accomplished for color markers placed on the dancer. A virtual view generation is formalized as a color blending among multiple views, and a novel and efficient algorithm is proposed for the composition of a natural virtual view from a set of images. In the proposed method, weightings of the linear combination are calculated from both an assumed viewpoint and a surface normal.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128311777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
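The virtual view is composed per surface point as a linear combination of the captured images, weighted by how well each real camera agrees with the assumed virtual viewpoint and with the surface normal. The weighting rule below is one plausible instantiation in NumPy, with the exponent and normalization chosen arbitrarily; it is not the paper's exact formula.

```python
import numpy as np


def blend_weights(virtual_dir, camera_dirs, surface_normal, sharpness=8.0):
    """Per-camera weights for one surface point; directions point from the surface toward cameras."""
    v = virtual_dir / np.linalg.norm(virtual_dir)
    n = surface_normal / np.linalg.norm(surface_normal)
    w = []
    for c in camera_dirs:
        c = c / np.linalg.norm(c)
        agree = max(float(v @ c), 0.0) ** sharpness   # favour cameras near the virtual viewpoint
        facing = max(float(n @ c), 0.0)               # down-weight grazing / back-facing cameras
        w.append(agree * facing)
    w = np.array(w)
    return w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))


def blend_colors(colors, weights):
    """Linear combination of per-camera colours (colors: K x 3, weights: K)."""
    return (np.asarray(colors, float) * weights[:, None]).sum(axis=0)
```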