{"title":"On robustness and localization accuracy of optical flow computation from color imagery","authors":"H. Madjidi, S. Negahdaripour","doi":"10.1109/TDPVT.2004.1335219","DOIUrl":"https://doi.org/10.1109/TDPVT.2004.1335219","url":null,"abstract":"Accurate and efficient optical flow estimation is a major step in many computational vision problems, including tracking and 2D/3D mapping applications. Processing of grayscale images has been the dominant approach, with only a few studies investigating selected aspects of the use of color imagery. In a physics-based analysis of the impact of spectral-dependent medium attenuation on the color channels, we have shown merit in the use of color cues in the computation of optical flow for underwater imagery, the primary motivation of this investigation [Negahdaripour, S. et al., (2002)]. Comparisons of various color representations and the traditional intensity component for optical flow computation are given, suggesting that the HSV representation could be the most suitable. For both underwater and terrestrial imagery, even where data in the three color channels are highly correlated, one expects multiple constraints from the color channels to give increased robustness due to independent channel noise. Results of experiments are given to demonstrate improved localization and accuracy.","PeriodicalId":191172,"journal":{"name":"Proceedings. 2nd International Symposium on 3D Data Processing, Visualization and Transmission, 2004. 3DPVT 2004.","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125154941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimating curvatures and their derivatives on triangle meshes","authors":"S. Rusinkiewicz","doi":"10.1109/TDPVT.2004.1335277","DOIUrl":"https://doi.org/10.1109/TDPVT.2004.1335277","url":null,"abstract":"The computation of curvature and other differential properties of surfaces is essential for many techniques in analysis and rendering. We present a finite-differences approach for estimating curvatures on irregular triangle meshes that may be thought of as an extension of a common method for estimating per-vertex normals. The technique is efficient in space and time, and results in significantly fewer outlier estimates while more broadly offering accuracy comparable to existing methods. It generalizes naturally to computing derivatives of curvature and higher-order surface differentials.","PeriodicalId":191172,"journal":{"name":"Proceedings. 2nd International Symposium on 3D Data Processing, Visualization and Transmission, 2004. 3DPVT 2004.","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125171506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GPU-assisted z-field simplification","authors":"A. Bogomjakov, C. Gotsman","doi":"10.1109/TDPVT.2004.1335303","DOIUrl":"https://doi.org/10.1109/TDPVT.2004.1335303","url":null,"abstract":"Height fields and depth maps, which we collectively refer to as z-fields, usually carry a lot of redundant information and are often used in real-time applications. This is why efficient methods for their simplification are necessary. At the same time, the computational power and programmability of commodity graphics hardware have grown significantly. We present an adaptation of an existing real-time z-field simplification method for execution in graphics hardware. The main parts of the algorithm are implemented as fragment programs which run on the GPU. The resulting polygonal models are identical to the ones obtained by the original method. The main benefit is that the computation load is imposed on the GPU, freeing up the CPU for other tasks. Additionally, the new method exhibits a performance improvement when compared to a pure CPU implementation.","PeriodicalId":191172,"journal":{"name":"Proceedings. 2nd International Symposium on 3D Data Processing, Visualization and Transmission, 2004. 3DPVT 2004.","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125622638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An experimental comparison of feature-based 3D retrieval methods","authors":"B. Bustos, D. Keim, D. Saupe, T. Schreck, D. Vranic","doi":"10.1109/TDPVT.2004.1335197","DOIUrl":"https://doi.org/10.1109/TDPVT.2004.1335197","url":null,"abstract":"3D objects are an important type of multimedia data with many promising application possibilities. Defining the aspects that constitute the similarity among 3D objects, and designing algorithms that implement such similarity definitions is a difficult problem. Over the last few years, a strong interest in methods for feature-based 3D similarity search has arisen, and a growing number of competing algorithms for content-based retrieval of 3D objects have been proposed. We present an extensive experimental evaluation of the retrieval effectiveness and efficiency of a large part of the current state-of-the-art feature-based methods for 3D similarity search, giving a contrasting assessment of the different approaches.","PeriodicalId":191172,"journal":{"name":"Proceedings. 2nd International Symposium on 3D Data Processing, Visualization and Transmission, 2004. 3DPVT 2004.","volume":"163 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122853569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D performance capture for facial animation","authors":"D. MacVicar, S. Ford, E. Borland, R. Rixon, J. Patterson, W. Cockshott","doi":"10.1109/TDPVT.2004.1335140","DOIUrl":"https://doi.org/10.1109/TDPVT.2004.1335140","url":null,"abstract":"This work describes how a photogrammetry-based 3D capture system can be used as an input device for animation. The 3D Dynamic Capture System is used to capture the motion of a human face, which is extracted from a sequence of 3D models captured at TV frame rate. Initially, the positions of a set of landmarks on the face are extracted. These landmarks are then used to provide motion data in two different ways. First, a high-level description of the movements is extracted, and these can be used as input to a procedural animation package (e.g., CreaToon). Second, the landmarks can be used as registration points for a conformation process where the model to be animated is modified to match the captured model. This approach gives a new sequence of models, which have the structure of the drawn model but the movement of the captured sequence.","PeriodicalId":191172,"journal":{"name":"Proceedings. 2nd International Symposium on 3D Data Processing, Visualization and Transmission, 2004. 3DPVT 2004.","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127819593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamically optimised 3D (virtual reality) data transmission for mobile devices","authors":"M. Feißt, Andreas H. Christ","doi":"10.1109/TDPVT.2004.1335205","DOIUrl":"https://doi.org/10.1109/TDPVT.2004.1335205","url":null,"abstract":"Nowadays, the processing power of mobile phones, smartphones and PDAs is increasing, as is the transmission bandwidth. Nevertheless, there is still a need to reduce both the transmitted content and the processing it requires. We discuss proposals and solutions for dynamic reduction of the transmitted content. Device-specific properties are taken into account, with the aim of reducing the processing power needed at the client side to display the 3D (virtual reality) data. Well-known technologies, e.g. data compression, are therefore combined with newly developed ideas to reach the goal of adaptive content transmission. To achieve a device-dependent reduction of processing power, the data have to be preprocessed at the server side, or the server even has to take over functionality of weak mobile devices.","PeriodicalId":191172,"journal":{"name":"Proceedings. 2nd International Symposium on 3D Data Processing, Visualization and Transmission, 2004. 3DPVT 2004.","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133645288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Surface model generation from range images of industrial environments","authors":"A. Sappa","doi":"10.1109/TDPVT.2004.1335406","DOIUrl":"https://doi.org/10.1109/TDPVT.2004.1335406","url":null,"abstract":"We present a hybrid segmentation technique that combines the speed of an edge-based approach with the robustness of a surface-based approach. It consists of three stages. In the first stage, a scan line approximation process extracts the edges contained in the given range image. These edges are later used to define the positions of seed points. In the second stage, a two-step region growing technique is applied. First, a 2D growing process enlarges the original seed points, generating bigger regions. Next, each region is fitted to a plane and a cylinder; the one that best fits the given points is selected to represent that region and is used during the 3D growing stage. The 3D growing stage takes into account the approximation error of candidate points to be added to the fitted surface. In this way, each surface is grown until no points can be added according to a user-defined threshold. Finally, in the third stage, a post-processing algorithm merges neighbouring regions that belong to the same surface. Experimental results on range images of industrial environments are presented.","PeriodicalId":191172,"journal":{"name":"Proceedings. 2nd International Symposium on 3D Data Processing, Visualization and Transmission, 2004. 3DPVT 2004.","volume":"82 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133931987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualization of arbitrary-shaped 3D scenes on depth-limited 3D displays","authors":"A. Redert","doi":"10.1109/3DPVT.2004.144","DOIUrl":"https://doi.org/10.1109/3DPVT.2004.144","url":null,"abstract":"We propose a depth scaling method that enables visualization of arbitrary-shaped 3D scenes on 3D displays. Most current 3D displays have a depth limitation, while the scene to be displayed has not. The trivial solutions as clipping or linear scaling of the scene's 3D bounding box suffer from nonoptimal utilization of the display's capabilities. Our approach uses spatially adaptive depth scaling that maximizes the perceptual 3D effect. From the original scene geometry, the topology and local depth ordering among objects are preserved, while depth linearity is disregarded. The scaling method applies to nearly all 3D displays, such as glasses-based, head-tracked, multiview, holographic and volumetric 3D displays. Subjective tests with the dynamic dimension display system show that our method significantly increases the perceptual 3D effect.","PeriodicalId":191172,"journal":{"name":"Proceedings. 2nd International Symposium on 3D Data Processing, Visualization and Transmission, 2004. 3DPVT 2004.","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114838513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhanced real-time stereo using bilateral filtering","authors":"A. Ansar, A. Castano, L. Matthies","doi":"10.1109/TDPVT.2004.1335273","DOIUrl":"https://doi.org/10.1109/TDPVT.2004.1335273","url":null,"abstract":"In recent years, there have been significant strides in increasing quality of range from stereo using global techniques such as energy minimization. These methods cannot yet achieve real-time performance. However, the need to improve range quality for real-time applications persists. All real-time stereo implementations rely on a simple correlation step which employs some local similarity metric between the left and right image. Typically, the correlation takes place on an image pair modified in some way to compensate for photometric variations between the left and right cameras. Improvements and modifications to such algorithms tend to fall into one of two broad categories: those which address the correlation step itself (e.g., shiftable windows, adaptive windows) and those which address the preprocessing of input imagery (e.g. band-pass filtering, Rank, Census). Our efforts lie in the latter area. We present in this paper a modification of the standard band-pass filtering technique used by many SSD- and SAD-based correlation algorithms. By using the bilateral filter of Tomasi and Manduchi [(1998)], we minimize blurring at the filtering stage. We show that in conjunction with SAD correlation, our new method improves stereo quality at range discontinuities while maintaining real-time performance.","PeriodicalId":191172,"journal":{"name":"Proceedings. 2nd International Symposium on 3D Data Processing, Visualization and Transmission, 2004. 3DPVT 2004.","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117000144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unifying measured point sequences of deforming objects","authors":"Mikio Shinya","doi":"10.1109/TDPVT.2004.1335411","DOIUrl":"https://doi.org/10.1109/TDPVT.2004.1335411","url":null,"abstract":"Recent progress in digitizing technologies is making it possible to capture the 3D shapes of moving objects. To efficiently utilize time-series records of spatial data, the information must be unified to yield coherent deforming models. We present a general method that unifies unregistered 3D point sequences to generate deforming mesh models. The method does not assume any specific kinematic structure, and is applicable to any digitizer. The method first polygonizes the initial points and then deforms the meshes to best fit the subsequent data points while minimizing the deformation energy. Experiments are conducted on real measured data and CG data, and successful results are obtained. As an application of the method, we examine data compression and achieve a 380-fold reduction rate for a measured data sequence.","PeriodicalId":191172,"journal":{"name":"Proceedings. 2nd International Symposium on 3D Data Processing, Visualization and Transmission, 2004. 3DPVT 2004.","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124275049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}