{"title":"Contour-Based Warping","authors":"Kwai Hung Chan, Rynson W.H. Lau","doi":"10.1006/gmip.1998.0476","DOIUrl":"10.1006/gmip.1998.0476","url":null,"abstract":"<div><p>In this paper, a new warping technique called contour-based warping is presented. Feature contours of objects are defined and mapped to their target shapes. This allows the user greater flexibility in defining the warping with minimal effort. Two image warping methods are introduced in this paper and both are based on the concept of mapping contours. The<em>peel-and-resample</em>method can warp simple image objects with a single inner-feature in a short time, but suffers from the problems of misalignment and inability of handling multiple features. The<em>wave propagation</em>method solves these two problems. Unlike most existing methods, this method warps image objects based on specified feature contours instead of points or vectors. Results of this method demonstrate that increasing the number of contour features distributed on the warping image reduces the computational time. However, it is slower compared with the peel-and-resample method when warping simple image objects with a single inner-feature.</p></div>","PeriodicalId":100591,"journal":{"name":"Graphical Models and Image Processing","volume":"60 5","pages":"Pages 331-348"},"PeriodicalIF":0.0,"publicationDate":"1998-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/gmip.1998.0476","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114043268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Zeta: A Resolution Modeling System","authors":"P. Cignoni, C. Montani, C. Rocchini, R. Scopigno","doi":"10.1006/gmip.1998.0477","DOIUrl":"10.1006/gmip.1998.0477","url":null,"abstract":"<div><p>Very large graphics models are common in a number of applications, and many different simplification methods have recently been developed. Some of them support the construction of multiresolution representations of the input meshes. On the basis of these innovative techniques, we foresee a modeling framework based on three separate stages (shape modeling, multiresolution encoding, and resolution modeling), and propose a new approach to the last stage,<em>resolution modeling</em>, which is highly general, user-driven, and not strictly tied to a particular simplification method. The approach proposed is based on a multiresolution representation scheme for triangulated, 2-manifold meshes, the Hypertriangulation Model (HyT). This scheme allows selective “walks” along the multiresolution surface, moving between adjacent faces efficiently. A prototypal<em>resolution modeling</em>system,<em>Zeta</em>, has been implemented to allow interactive modeling of surface details and has been evaluated on several practical models. It supports the efficient extraction of fixed resolution representations; unified management of selective refinements and selective simplifications; easy composition of the selective refinement/simplification actions, with no cracks in the variable resolution mesh produced; multiresolution editing; and interactive response times.</p></div>","PeriodicalId":100591,"journal":{"name":"Graphical Models and Image Processing","volume":"60 5","pages":"Pages 305-329"},"PeriodicalIF":0.0,"publicationDate":"1998-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/gmip.1998.0477","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124619318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User-Steered Image Segmentation Paradigms: Live Wire and Live Lane","authors":"Alexandre X. Falcão , Jayaram K. Udupa , Supun Samarasekera , Shoba Sharma , Bruce Elliot Hirsch , Roberto de A. Lotufo","doi":"10.1006/gmip.1998.0475","DOIUrl":"10.1006/gmip.1998.0475","url":null,"abstract":"<div><p>In multidimensional image analysis, there are, and will continue to be, situations wherein automatic image segmentation methods fail, calling for considerable user assistance in the process. The main goals of segmentation research for such situations ought to be (i) to provide<em>effective control</em>to the user on the segmentation process<em>while</em>it is being executed, and (ii) to minimize the total user's time required in the process. With these goals in mind, we present in this paper two paradigms, referred to as<em>live wire</em>and<em>live lane</em>, for practical image segmentation in large applications. For both approaches, we think of the pixel vertices and oriented edges as forming a graph, assign a set of features to each oriented edge to characterize its ``boundariness,'' and transform feature values to costs. We provide training facilities and automatic optimal feature and transform selection methods so that these assignments can be made with consistent effectiveness in any application. In live wire, the user first selects an initial point on the boundary. For any subsequent point indicated by the cursor, an optimal path from the initial point to the current point is found and displayed in real time. The user thus has a live wire on hand which is moved by moving the cursor. If the cursor goes close to the boundary, the live wire snaps onto the boundary. At this point, if the live wire describes the boundary appropriately, the user deposits the cursor which now becomes the new starting point and the process continues. A few points (live-wire segments) are usually adequate to segment the whole 2D boundary. In live lane, the user selects only the initial point. Subsequent points are selected automatically as the cursor is moved within a lane surrounding the boundary whose width changes as a function of the speed and acceleration of cursor motion. Live-wire segments are generated and displayed in real time between successive points. The users get the feeling that the curve snaps onto the boundary as and while they roughly mark in the vicinity of the boundary.</p><p>We describe formal evaluation studies to compare the utility of the new methods with that of manual tracing based on speed and repeatability of tracing and on data taken from a large ongoing application. The studies indicate that the new methods are statistically significantly more repeatable and 1.5–2.5 times faster than manual tracing.</p></div>","PeriodicalId":100591,"journal":{"name":"Graphical Models and Image Processing","volume":"60 4","pages":"Pages 233-260"},"PeriodicalIF":0.0,"publicationDate":"1998-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/gmip.1998.0475","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113964040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Control of Polygonal Mesh Resolution for 3-D Computer Vision","authors":"Andrew E. Johnson , Martial Hebert","doi":"10.1006/gmip.1998.0474","DOIUrl":"10.1006/gmip.1998.0474","url":null,"abstract":"<div><p>A common representation in 3-D computer vision is the polygonal surface mesh because meshes can model objects of arbitrary shape and are easily constructed from sensed 3-D data. The resolution of a surface mesh is the overall spacing between vertices that comprise the mesh. Because sensed 3-D points are often unevenly distributed, the resolution of a surface mesh is often poorly defined. We present an algorithm that transforms a mesh with an uneven spacing between vertices into a mesh with a more even spacing between vertices, thus improving its definition of resolution. In addition, we show how the algorithm can be used to control the resolution of surface meshes, making them amenable to multiresolution approaches in computer vision.</p><p>The structure of our algorithm is modeled on iterative mesh simplification algorithms common in computer graphics; however, the individual steps in our algorithm are designed specifically to control mesh resolution. An even spacing between vertices is generated by applying a sequence of local edge operations that promote uniform edge lengths while preserving mesh shape. To account for polyhedral objects, we introduce an accurate shape change measure that permits edge operations along sharp creases. By locally bounding the total change in mesh shape, drastic changes in global shape are prevented. We show results from many 3-D sensing domains including computed tomography, range imaging, and digital elevation map construction.</p></div>","PeriodicalId":100591,"journal":{"name":"Graphical Models and Image Processing","volume":"60 4","pages":"Pages 261-285"},"PeriodicalIF":0.0,"publicationDate":"1998-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/gmip.1998.0474","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115932439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A New Approach to the Visualization of Tensor Fields","authors":"R.K. Dodd","doi":"10.1006/gmip.1998.0473","DOIUrl":"10.1006/gmip.1998.0473","url":null,"abstract":"<div><p>This paper investigates the geometry connected with a new way of representing tensors called TRIG (tensor realization by intrinsic geometry).</p></div>","PeriodicalId":100591,"journal":{"name":"Graphical Models and Image Processing","volume":"60 4","pages":"Pages 286-303"},"PeriodicalIF":0.0,"publicationDate":"1998-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/gmip.1998.0473","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125956648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Panoramic Stereo Imaging System with Automatic Disparity Warping and Seaming","authors":"Ho-Chao Huang, Yi-Ping Hung","doi":"10.1006/gmip.1998.0467","DOIUrl":"10.1006/gmip.1998.0467","url":null,"abstract":"<div><p>Two commonly used approaches for building a virtual reality (VR) world are the model-based approach and the image-based approach. Recently, the image-based approach has received much attention for its advantages of being easier to build a VR model and of being able to provide photo-realistic views. However, traditional image-based VR systems cannot produce the stereo views that can give the users the feeling of 3D depth. In this paper, we present a panoramic stereo imaging (PSI) system which can produce stereo panoramas for image-based VR systems. This PSI system is referred to as the PSI-II system, which is an improved system of our previous experimental PSI-I system. The PSI-I system uses a well-calibrated tripod system to acquire a series of stereo image pairs, while the PSI-II system does not require the use of a well-calibrated tripod system and can automatically generate a stereo-pair of panoramic images by using a novel disparity warping technique and a hierarchical seaming algorithm. Our PSI-II system can automatically correct the epipolar-line inconsistency of the stereo images pairs and the image disparity caused by the dislocation of the camera's lens center in the image acquisition process. Our experiments have shown that the proposed method can easily provide realistic 360° panoramic views for image-based VR systems.</p></div>","PeriodicalId":100591,"journal":{"name":"Graphical Models and Image Processing","volume":"60 3","pages":"Pages 196-208"},"PeriodicalIF":0.0,"publicationDate":"1998-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/gmip.1998.0467","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134048289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Approximating Polygonal Curves in Two and Three Dimensions","authors":"Kento Miyaoku , Koichi Harada","doi":"10.1006/gmip.1997.0468","DOIUrl":"10.1006/gmip.1997.0468","url":null,"abstract":"<div><p>We discuss the<em>weighted minimum number</em>polygonal approximation problem. Eu and Toussaint (<em>1994, CVGIP: Graphical Models Image Process.</em><strong>56</strong>, 231–246) considered this problem subject to the<em>parallel-strip</em>error criterion in<em>R</em><sup>2</sup>with<em>L</em><sub><em>q</em></sub>distance metrics, and they concluded that it can be solved in<em>O</em>(<em>n</em><sup>2</sup>) time by using the Cone intersection method. In this note, we clarify part of their discussion and show that solving their problem correctly requires<em>O</em>(<em>n</em><sup>2</sup>log<em>n</em>) time. Also, we discuss the<em>weighted minimum number</em>problem subject to the<em>line segment</em>error criterion. When input curves are strictly monotone in<em>R</em><sup>3</sup>, we demonstrate that if the<em>L</em><sub>1</sub>or<em>L</em><sub>∞</sub>metric is used, this problem also can be solved in<em>O</em>(<em>n</em><sup>2</sup>) time.</p></div>","PeriodicalId":100591,"journal":{"name":"Graphical Models and Image Processing","volume":"60 3","pages":"Pages 222-225"},"PeriodicalIF":0.0,"publicationDate":"1998-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/gmip.1997.0468","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115254625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Hierarchical Markov Modeling Approach for the Segmentation and Tracking of Deformable Shapes","authors":"Charles Kervrann , Fabrice Heitz","doi":"10.1006/gmip.1998.0469","DOIUrl":"10.1006/gmip.1998.0469","url":null,"abstract":"<div><p>In many applications of dynamic scene analysis, the objects or structures to be analyzed undergo deformations that have to be modeled. In this paper, we develop a hierarchical statistical modeling framework for the representation, segmentation, and tracking of 2D deformable structures in image sequences. The model relies on the specification of a template, on which global as well as local deformations are defined. Global deformations are modeled using a statistical modal analysis of the deformations observed on a representative population. Local deformations are represented by a (first-order) Markov random process. A model-based segmentation of the scene is obtained by a joint bayesian estimation of global deformation parameters and local deformation variables. Spatial or spatio-temporal observations are considered in this estimation procedure, yielding an edge-based or a motion-based segmentation of the scene. The segmentation procedure is combined with a temporal tracking of the deformable structure over long image sequences, using a Kalman filtering approach. This combined segmentation-tracking procedure has produced reliable extraction of deformable parts from long image sequences in adverse situations such as low signal-to-noise ratio, nongaussian noise, partial occlusions, or random initialization. The approach is demonstrated on a variety of synthetic as well as real-world image sequences featuring different classes of deformable objects.</p></div>","PeriodicalId":100591,"journal":{"name":"Graphical Models and Image Processing","volume":"60 3","pages":"Pages 173-195"},"PeriodicalIF":0.0,"publicationDate":"1998-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/gmip.1998.0469","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129434200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing a Strip Separating Two Polygons","authors":"Gill Barequet , Barbara Wolfers","doi":"10.1006/gmip.1998.0470","DOIUrl":"10.1006/gmip.1998.0470","url":null,"abstract":"<div><p>We consider the problem of finding a strip separating between two polygons, whose intersection with a third (convex) polygon is of maximum area. We present an optimal linear-time algorithm for computing the optimum strip. When the third polygon is not convex, the running time of the algorithm is quadratic in the size of the input. The application in mind is the piecewise-linear surface interpolation in simple branching cases, where the sought volume branches from one contour in one slice into two contours in the other slice.</p></div>","PeriodicalId":100591,"journal":{"name":"Graphical Models and Image Processing","volume":"60 3","pages":"Pages 214-221"},"PeriodicalIF":0.0,"publicationDate":"1998-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/gmip.1998.0470","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133567443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ellipse Fitting Using Orthogonal Hyperbolae and Stirling's Oval","authors":"Paul L. Rosin","doi":"10.1006/gmip.1998.0471","DOIUrl":"10.1006/gmip.1998.0471","url":null,"abstract":"<div><p>Two methods for approximating the normal distance to an ellipse using (a) its orthogonal hyperbolae and (b) Stirling's oval are described. Analysis with a set of quantitative measures shows that the former provides an accurate approximation with few irregularities or biases. Its suitability is evaluated by comparing several approximations as error of fit functions and applying them to ellipse fitting.</p></div>","PeriodicalId":100591,"journal":{"name":"Graphical Models and Image Processing","volume":"60 3","pages":"Pages 209-213"},"PeriodicalIF":0.0,"publicationDate":"1998-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/gmip.1998.0471","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120994688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}