{"title":"A progressive refinement approach for image magnification","authors":"H. Johan, T. Nishita","doi":"10.1109/PCCGA.2004.1348366","DOIUrl":"https://doi.org/10.1109/PCCGA.2004.1348366","url":null,"abstract":"The rapid growth in computer graphics and digital cameras has resulted in computer users being able to easily produce digital images. As a result, the need to display and print digital images has increased. Nowadays, high-resolution display and printing devices are available to users. Therefore, high-resolution images are needed in order to produce high quality displayed images and high quality prints. However, since high-re solution images are not usually provided, there is a need to magnify the original images. Previous methods on magnifying images have the disadvantage that either the sharpness of the edges cannot be preserved or that some distinct artifacts are produced in the magnified image. In this paper, we present a novel method for doubling the size of images in which the sharpness of the edges is preserved without introducing distinct artifacts in the magnified images. The proposed method consists of two steps, first generation of an initial magnified image and then progressively refining this image to produce a high quality magnified image. The experimental results show that with the proposed method it is possible to produce magnified images of comparable, and in some cases superior, visual quality to those produced using previous methods.","PeriodicalId":264796,"journal":{"name":"12th Pacific Conference on Computer Graphics and Applications, 2004. PG 2004. Proceedings.","volume":"237 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114005213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-dimensional transfer functions for interactive 3D flow visualization","authors":"S. W. Park, B. Budge, L. Linsen, B. Hamann, K. Joy","doi":"10.1109/PCCGA.2004.1348348","DOIUrl":"https://doi.org/10.1109/PCCGA.2004.1348348","url":null,"abstract":"Transfer functions are a standard technique used in volume rendering to assign color and opacity to a volume of a scalar field. Multidimensional transfer functions (MDTFs) have proven to be an effective way to extract specific features with subtle properties. As 3D texture-based methods gain widespread popularity for the visualization of steady and unsteady flow field data, there is a need to define and apply similar MDTFs to interactive 3D flow visualization. We exploit flow field properties such as velocity, gradient, curl, helicity, and divergence using vector calculus methods to define an MDTF that can be used to extract and track features in a flow field. We show how the defined MDTF can be applied to interactive 3D flow visualization by combining them with state-of-the-art texture-based flow visualization of steady and unsteady fields. We demonstrate that MDTFs can be used to help alleviate the problem of occlusion, which is one of the main inherent drawbacks of 3D texture-based flow visualization techniques. In our implementation, we make use of current graphics hardware to obtain interactive frame rates.","PeriodicalId":264796,"journal":{"name":"12th Pacific Conference on Computer Graphics and Applications, 2004. PG 2004. Proceedings.","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124731568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classification for Fourier volume rendering","authors":"Z. Nagy, Gero Müller, R. Klein","doi":"10.1109/PCCGA.2004.1348334","DOIUrl":"https://doi.org/10.1109/PCCGA.2004.1348334","url":null,"abstract":"In the last decade, Fourier volume rendering (FVR) has obtained considerable attention due to its O(N/sup 2/logN) rendering complexity, where O(N/sup 3/) is the volume size. Although ordinary volume rendering has O(N/sup 3/) rendering complexity, it is still preferred over FVR for the main reason, that FVR offers bad localization of spatial structures. As a consequence, it was assumed, that it is hardly possible to apply ID transfer functions, which arbitrarily modify voxel values not only in dependence of the position, but also the voxel value. We show that this assumption is not true for threshold operators. Based on the theory of Fourier series, we derive a FVR method, which is capable of integrating all sample points greater (or alternatively, lower) than an iso-value T during rendering, where T can be modified interactively during the rendering session. We compare our method with other approaches and we show examples on well-known datasets to illustrate the quality of the renderings.","PeriodicalId":264796,"journal":{"name":"12th Pacific Conference on Computer Graphics and Applications, 2004. PG 2004. Proceedings.","volume":"97 5-6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120893072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Antialiased rendering of self-intersecting polygons using polygon decomposition","authors":"K. Doan","doi":"10.1109/PCCGA.2004.1348369","DOIUrl":"https://doi.org/10.1109/PCCGA.2004.1348369","url":null,"abstract":"Although numerous pre-filtering and area sampling antialiasing techniques have been devised, few are capable of rendering complex, self-intersecting polygons. This paper presents a new antialiasing method capable of handling such polygons, according to any fill rule. The method is based on area sampling and is hence free from the visual aliasing artifacts of super sampling associated with near horizontal edges. It works by decomposing the coverage area of a polygon within each pixel into separate constituent regions, which are simple regions whose areas can be easily computed. The decomposition step does not attempt to generate non-overlapping regions, unlike that employed by existing methods, and hence operates very efficiently. This, together with other novel approximation techniques in the recombination stage allow pixel coverage areas to estimated with an efficiency approaching that of super sampling.","PeriodicalId":264796,"journal":{"name":"12th Pacific Conference on Computer Graphics and Applications, 2004. PG 2004. Proceedings.","volume":"168 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133410687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Non-photorealistic real-time rendering of characteristic faces","authors":"Thomas Luft, O. Deussen","doi":"10.1109/PCCGA.2004.1348365","DOIUrl":"https://doi.org/10.1109/PCCGA.2004.1348365","url":null,"abstract":"We propose a system for real-time sketching of human faces. On the basis of a three-dimensional description of a face model, characteristic line strokes are extracted and represented in an artistic way. In order to enrich the results with details that cannot be determined analytically from the model surface and anchor strokes are supplemented interactively and are maintained during animation. Because of the real-time ability of our rendering pipeline the system is suitable for interactive facial animation. Thus, interesting areas of application within the range of the virtual avatars are possible.","PeriodicalId":264796,"journal":{"name":"12th Pacific Conference on Computer Graphics and Applications, 2004. PG 2004. Proceedings.","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128077892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CSG tree rendering for point-sampled objects","authors":"M. Wicke, M. Teschner, M. Gross","doi":"10.1109/PCCGA.2004.1348346","DOIUrl":"https://doi.org/10.1109/PCCGA.2004.1348346","url":null,"abstract":"This paper presents an algorithm for rendering of point-sampled CSG models. The approach works with arbitrary CSG trees of surfel models with arbitrary sampling densities. Edges and corners are rendered by reconstructing the involved surfaces separately. The reconstructed surfaces are clipped at intersections. This way, blurring at any magnification is avoided. As opposed to existing methods, which resample surfaces close to object intersections, the proposed approach preserves the original object representation. Since no resampling is needed, dynamic scenes can be handled very flexible. Complex intersections involving any number of objects can be rendered.","PeriodicalId":264796,"journal":{"name":"12th Pacific Conference on Computer Graphics and Applications, 2004. PG 2004. Proceedings.","volume":"212 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134013646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Direct anisotropic quad-dominant remeshing","authors":"Martin Marinov, L. Kobbelt","doi":"10.1109/PCCGA.2004.1348351","DOIUrl":"https://doi.org/10.1109/PCCGA.2004.1348351","url":null,"abstract":"We present an extension of the anisotropic polygonal remeshing technique developed by Alliez et al. (2003). Our algorithm does not rely on a global parameterization of the mesh and therefore is applicable to arbitrary genus surfaces. We show how to exploit the structure of the original mesh in order to perform efficiently the proximity queries required in the line integration phase, thus improving dramatically the scalability and the performance of the original algorithm. Finally, we propose a technique for producing conforming quad-dominant meshes in isotropic regions as well by propagating directional information from the anisotropic regions.","PeriodicalId":264796,"journal":{"name":"12th Pacific Conference on Computer Graphics and Applications, 2004. PG 2004. Proceedings.","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124663867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Frequency-based environment matting","authors":"Jiayuan Zhu, Herbert Yang","doi":"10.1109/PCCGA.2004.1348371","DOIUrl":"https://doi.org/10.1109/PCCGA.2004.1348371","url":null,"abstract":"Environment matting is a technique to extract the environment matte which is used to describe how an object reflects and refracts the environment light. In this paper, we propose a novel environment matting method to obtain the environment matte of a real scene. Previous methods use different backdrops as the calibration patterns and search for the environment matte in the spatial domain. In our method, however, a series of background images displayed on a screen sequentially in time are interpreted as signals. The frequency similarity of these signals is used as the searching criterion. The frequencies of these signals are not changed when they interact with the foreground objects and thus can be used to extract the environment matte. While using correspondence in the spatial domain in existing approaches is prone to error, using frequency correspondence is not. Thus, our approach is robust to noise and can easily deal with some of the complex light transport phenomena which cannot be easily handled using current methods. The experimental results are very encouraging.","PeriodicalId":264796,"journal":{"name":"12th Pacific Conference on Computer Graphics and Applications, 2004. PG 2004. Proceedings.","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115551410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intelligent mesh scissoring using 3D snakes","authors":"Yunjin Lee, Seungyong Lee, Ariel Shamir, D. Cohen-Or, H. Seidel","doi":"10.1109/PCCGA.2004.1348358","DOIUrl":"https://doi.org/10.1109/PCCGA.2004.1348358","url":null,"abstract":"Mesh partitioning and parts extraction have become key ingredients for many mesh manipulation applications both manual and automatic. In this paper, we present an intelligent scissoring operator for meshes which supports both automatic segmentation and manual cutting. Instead of segmenting the mesh by clustering, our approach concentrates on finding and defining the contours for cutting. This approach is based on the minima rule, which states that human perception usually divides a surface into parts along the contours of concave discontinuity of the tangent plane. The technique uses feature extraction to find such candidate feature contours. Subsequently, such a contour can be selected either automatically or manually, or the user may draw a 2D line to start the scissoring process. The given open contour is completed to form a loop around a specific part of the mesh, and this loop is used as the initial position of a 3D geometric snake. The snake moves by relaxation until it settles to define the final scissoring position. This process uses several fundamental geometric mesh attributes, such as curvature and centricity, and enables both automatic segmentation and an easy-to-use intelligent-scissoring operator.","PeriodicalId":264796,"journal":{"name":"12th Pacific Conference on Computer Graphics and Applications, 2004. PG 2004. Proceedings.","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133558066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hardware-accelerated reconstruction of polygonal isosurface representations on unstructured grids","authors":"T. Klein, S. Stegmaier, T. Ertl","doi":"10.1109/PCCGA.2004.1348349","DOIUrl":"https://doi.org/10.1109/PCCGA.2004.1348349","url":null,"abstract":"Volume visualization using isosurface extraction is a well-researched topic. Research demonstrated that even for unstructured grids peak performances of millions of tetrahedra per second can be achieved by exploiting the parallel processing capabilities of modern GPUs. In this paper we present a hardware-accelerated solution that further improves the extraction performance. In contrary to existing approaches, our technique explicitly extracts the isosurface geometry in a fragment program by rendering only a single screen-sized quadrilateral. The extracted geometry is directly written to an on-board graphics memory object allowing for direct rendering without further bus transfers. Additionally, the geometry can be manipulated by shader programs or read back to the application for further processing. Examples and application scenarios are given that can benefit from our approach.","PeriodicalId":264796,"journal":{"name":"12th Pacific Conference on Computer Graphics and Applications, 2004. PG 2004. Proceedings.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131183983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}