Proceedings. 2nd International Symposium on 3D Data Processing, Visualization and Transmission, 2004. 3DPVT 2004.

Surface height recovery using heat flow and manifold embedding
A. Robles-Kelly, E. Hancock
DOI: 10.1109/3DPVT.2004.121
Abstract: We make two contributions to the problem of shape-from-shading. First, we develop a new method for surface normal recovery. We pose the problem as that of solving the steady-state heat equation subject to the hard constraint that Lambert's law is satisfied. Under this formulation, the surface normals are found by taking the gradient of a scalar field. The heat equation for the scalar field can be solved using simple finite-difference methods and leads to an iterative procedure for surface normal estimation. The second contribution is to show how surface height recovery from the field of surface normals can be posed as a problem of low-dimensional embedding. We experiment with the resulting method on a variety of real-world image data, where it produces qualitatively good reconstructed surfaces.
Citations: 1
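The Lambertian constraint at the heart of this abstract can be illustrated with a few lines of NumPy. This is a minimal sketch of the image-formation model only (brightness as the dot product of a surface normal with the light direction), not the authors' heat-flow recovery algorithm; the height map and light vector are illustrative.

```python
import numpy as np

# Sketch of the Lambertian model assumed by shape-from-shading: given a
# height map z(x, y), surface normals are the normalized gradient field,
# and brightness is the dot product of each normal with the light direction.
def lambertian_image(z, light):
    """Render a Lambertian image from a height map and a unit light vector."""
    zy, zx = np.gradient(z)                        # partial derivatives of height
    normals = np.dstack((-zx, -zy, np.ones_like(z)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return np.clip(normals @ light, 0.0, 1.0)      # Lambert: I = max(n . s, 0)

# Example: a hemispherical dome lit from directly above.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
dome = np.sqrt(np.clip(1.0 - x**2 - y**2, 0.0, None))
img = lambertian_image(dome, np.array([0.0, 0.0, 1.0]))
```

Shape-from-shading inverts this forward model: given `img` and the light direction, recover the normals (and then the height) consistent with it.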
Efficient model creation of large structures based on range segmentation
I. Stamos, Marius Leordeanu
DOI: 10.1109/TDPVT.2004.1335272
Abstract: This work describes an efficient 3D modeling method that builds on the segmentation of 3D range data-sets. Our algorithm starts with a set of unregistered 3D range scans of a large-scale scene. The scans are preprocessed for noise removal and hole filling. The next step is range segmentation and the extraction of planar and linear features. These features are utilized for the automatic registration of the range scans into a common frame of reference [I. Stamos et al., 2003]. A volumetric algorithm is used to construct a coherent 3D mesh that encloses all range scans. Finally, the original segmented scans are used to simplify the constructed mesh: the mesh is represented as a set of planar regions in areas of low complexity and as dense triangular mesh elements in areas of high complexity. This is achieved by computing the overlaps of the original segmented planar areas on the generated 3D mesh. The construction of a 3D model of a building in the New York City area is presented as an example.
Citations: 7
Heterogeneous deformation model for 3D shape and motion recovery from multi-viewpoint images
S. Nobuhara, T. Matsuyama
DOI: 10.1109/TDPVT.2004.1335289
Abstract: This work presents a framework for dynamic 3D shape and motion reconstruction from multi-viewpoint images using a deformable mesh model. By deforming the mesh at one frame into the mesh at the next, we obtain both the 3D shape and the motion of the object simultaneously. The deformation process of our mesh model is heterogeneous: each vertex changes its deformation process according to 1) its photometric property (i.e., whether it has prominent texture) and 2) its physical property (i.e., whether it belongs to a rigid part of the object). This heterogeneous deformation model enables us to reconstruct objects that consist of different kinds of materials or of parts with different motion models, e.g., rigidly moving body parts and deforming soft clothes or skin, within a single, unified computational framework.
Citations: 21
A non causal Bayesian framework for object tracking and occlusion handling for the synthesis of stereoscopic video
K. Moustakas, D. Tzovaras, M. Strintzis
DOI: 10.1109/TDPVT.2004.1335188
Abstract: This work presents a framework for the synthesis of stereoscopic video using only a monoscopic image sequence as input. Initially, bi-directional 2D motion estimation is performed, followed by an efficient method for the reliable tracking of object contours. Rigid 3D motion and structure are recovered using extended Kalman filtering. Finally, occlusions are handled by a novel Bayesian framework, which exploits future information to correctly reconstruct occluded areas. Experimental evaluation shows that the layered object scene representation, combined with the proposed methods for object tracking throughout the sequence and for occlusion handling, yields very accurate results.
Citations: 6
Multi-spectral stereo image matching using mutual information
C. Fookes, A. Maeder, S. Sridharan, Jamie Cook
DOI: 10.1109/TDPVT.2004.1335420
Abstract: Mutual information (MI) has shown promise as an effective stereo matching measure for images affected by radiometric distortion, owing to its robustness against changes in illumination. However, MI-based approaches are particularly prone to generating false matches because of the small statistical power of the matching windows. Consequently, most previous MI approaches use large matching windows, which smooth the estimated disparity field. This work proposes extensions to MI-based stereo matching that increase the robustness of the algorithm. First, prior probabilities, calculated from the global joint histogram of the stereo pair and tuned in a two-level hierarchical approach, are incorporated into the MI measure to considerably increase the statistical power of the matching windows. A 2D match surface, in which the match score is computed for every possible combination of template and matching window, is also used; this enforces the left-right consistency and uniqueness constraints. These additions significantly enhance the algorithm's ability to detect correct matches while decreasing computation time and improving accuracy. Results show that the MI measure does not perform quite as well as traditional area-based metrics on standard stereo pairs, but it is far superior when matching across multi-spectral stereo pairs.
Citations: 34
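The MI matching cost the abstract builds on can be sketched directly from its definition. This is an illustrative window-level MI computation via a joint gray-level histogram, not the paper's full algorithm (no priors, no hierarchy, no match surface); the patch sizes and bin count are arbitrary choices.

```python
import numpy as np

# Sketch of mutual information between two image windows, computed from
# their joint gray-level histogram. MI is invariant to monotonic radiometric
# changes, which is why it suits multi-spectral stereo matching.
def mutual_information(a, b, bins=16):
    """MI (in nats) between two equally sized image patches."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
patch = rng.random((32, 32))
mi_inverted = mutual_information(patch, 1.0 - patch)       # deterministic relation: high
mi_unrelated = mutual_information(patch, rng.random((32, 32)))  # independent: near zero
```

A patch compared with a radiometrically inverted copy of itself still scores high, while an unrelated patch does not; a correlation-based cost would fail the first comparison.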
A graph cut based adaptive structured light approach for real-time range acquisition
T. Koninckx, I. Geys, T. Jaeggli, L. Gool
DOI: 10.1109/TDPVT.2004.1335268
Abstract: This work describes a new algorithm that yields dense range maps in real time. Reconstructions are based on single-frame structured-light illumination. On-the-fly adaptation of the projection pattern makes the system more robust to scene variability, and a continuous trade-off between speed and quality is made. The correspondence problem is solved by geometric pattern coding combined with sparse color coding, assuming only local spatial and temporal continuity. This allows a neighbor relationship to be constructed within every frame and correspondences to be tracked over time. All cues are integrated into one consistent labeling by reformulating the problem as a graph cut; each cue is weighted by its average consistency with the result within a small time window, and integrating and weighting additional cues is straightforward. The correctness of the range maps is not guaranteed, but an estimate of the uncertainty is provided for each part of the reconstruction. Our prototype is implemented using only unmodified consumer hardware; frame rates vary between 10 and 25 fps depending on scene complexity.
Citations: 29
3D image sensing for bit plane method of progressive transmission
D. Loganathan, K. Mehata
DOI: 10.1109/TDPVT.2004.1335180
Abstract: Image compression has received a lot of interest over the years. Almost all compression algorithms and standards discussed in the literature gather statistics over the complete image and compress it to suit various requirements such as lossy/lossless, baseline/progressive, spatial, and region-of-interest coding. Natural images such as gray-scale and color images are best compressed, in the existing literature, based on local and global properties such as the attributes of the constituent pixels. In this paper, we propose to quantize the amplitudes of the pixel values to form a number of bit planes, which are then transmitted in a lossy, lossless, or progressive manner. Bit-plane formation is applied from the image-acquisition stage through compression to the transmission stage. The results obtained are promising and suggest a new approach to image sensing, acquisition, storage, and transmission.
Citations: 1
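The bit-plane decomposition the abstract describes is simple to demonstrate. This is an illustrative sketch of the general idea (split an 8-bit image into binary planes, transmit most-significant first, reconstruct from however many planes have arrived), not the paper's sensing pipeline.

```python
import numpy as np

# Sketch of bit-plane progressive transmission: an 8-bit image becomes
# eight binary planes, sent most-significant first so the receiver can
# refine its reconstruction as planes arrive.
def to_bit_planes(img):
    """Decompose a uint8 image into 8 binary planes (index 0 = MSB)."""
    return [(img >> (7 - k)) & 1 for k in range(8)]

def from_bit_planes(planes):
    """Reassemble an image from however many leading planes have arrived."""
    out = np.zeros_like(planes[0], dtype=np.uint8)
    for k, p in enumerate(planes):
        out |= p.astype(np.uint8) << (7 - k)
    return out

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
planes = to_bit_planes(img)
full = from_bit_planes(planes)        # all 8 planes: lossless
coarse = from_bit_planes(planes[:3])  # first 3 planes: lossy preview (top 3 bits)
```

Using all eight planes reproduces the image exactly; truncating the stream after the leading planes gives a coarsely quantized preview, which is the progressive/lossy trade-off the abstract refers to.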
A unified approach for motion analysis and view synthesis
A. Rav-Acha, Shmuel Peleg
DOI: 10.1109/TDPVT.2004.1335386
Abstract: Image-based rendering (IBR) consists of several steps: (i) calibration (or ego-motion computation) for all input images; (ii) determination of the regions in the input images used to synthesize the new view; (iii) interpolation of the new view from the selected areas of the input images. We propose a unified representation for all these aspects of IBR using the space-time (x-y-t) volume. The presented approach is very robust and allows IBR to be used in general conditions, even with a hand-held camera. For (i), the space-time volume is constructed by placing frames at locations along the time axis such that image features create straight lines in the EPI (epipolar-plane images). For (ii), different slices of the space-time volume are used to produce new views. Step (iii) is done by interpolating between image samples using the feature lines in the EPI images. IBR examples are shown for various cases: sequences taken from a driving car, from a hand-held camera, and with a tripod.
Citations: 12
Spacetime-coherent geometry reconstruction from multiple video streams
M. Magnor, Bastian Goldlücke
DOI: 10.1109/TDPVT.2004.1335231
Abstract: By reconstructing time-varying geometry one frame at a time, one ignores the continuity of natural motion, wasting useful information about the underlying video-image formation process and risking temporally discontinuous reconstruction results. In 4D spacetime, the surface of a dynamic object describes a continuous 3D hyper-surface. This hyper-surface can be implicitly defined as the minimum of an energy functional designed to optimize photo-consistency. Based on an Euler-Lagrange reformulation of the problem, we find this hyper-surface from a handful of synchronized video recordings. The resulting object geometry varies smoothly over time, and intermittently invisible object regions are correctly interpolated from previous and/or future frames.
Citations: 16
Theoretical accuracy analysis of N-Ocular vision systems for scene reconstruction, motion estimation, and positioning
P. Firoozfam, S. Negahdaripour
DOI: 10.1109/TDPVT.2004.1335409
Abstract: Theoretical models are derived to analyze the accuracy of N-Ocular vision systems for scene reconstruction, motion estimation, and self-positioning. Covariance matrices are given to estimate the uncertainty bounds on the reconstructed points in 3D space, the motion parameters, and the 3D position of the vision system. Simulation results from various experiments, based on synthetic and real data acquired with a 12-camera stereo panoramic imaging system, are given to demonstrate the application of these models and to evaluate the performance of the panoramic system for high-precision 3D mapping and positioning.
Citations: 13
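The covariance analysis this abstract describes rests on first-order error propagation: push the measurement covariance through the Jacobian of the reconstruction function, Cov(f) ≈ J Σ Jᵀ. The sketch below reduces this to the simplest binocular case (depth from disparity, Z = f·B/d, with scalar disparity noise); it is an illustration of the technique, not the paper's N-camera models, and all numbers are made up.

```python
# First-order uncertainty propagation for stereo depth Z = f*B/d:
# sigma_Z = |dZ/dd| * sigma_d = (f*B/d^2) * sigma_d.
def depth_sigma(f, B, d, sigma_d):
    """1-sigma depth uncertainty from disparity noise (first-order)."""
    J = -f * B / d**2            # Jacobian dZ/dd
    return abs(J) * sigma_d

# Halving the disparity doubles the depth but quadruples the uncertainty,
# so depth error grows quadratically with distance for fixed pixel noise.
near = depth_sigma(f=1000.0, B=0.1, d=10.0, sigma_d=0.5)  # Z = 10 m
far = depth_sigma(f=1000.0, B=0.1, d=5.0, sigma_d=0.5)    # Z = 20 m
```

The same recipe extends to the vector case: stack the image measurements, form the full Jacobian of the reconstruction, and the resulting covariance matrix gives the uncertainty bounds on 3D points, motion, and position that the paper tabulates.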