{"title":"Displaying Readable Text in a Head-Tracked, Stereoscopic Virtual Environment","authors":"Eric Karasuda, Sara McMains","doi":"10.1080/2151237X.2007.10129240","DOIUrl":"https://doi.org/10.1080/2151237X.2007.10129240","url":null,"abstract":"In a head-tracked, stereoscopic virtual environment, many straightforward text implementations suffer from poor readability or unnatural behavior. For example, scan-converted text often appears blurry or \"shimmery\" due to rapidly alternating text thickness because scan conversion depends on the user's location and the user rarely stays perfectly still. Likewise, bitmapped fonts cannot generally mimic objects with fixed size and location because they do not scale and thus do not appear larger as the viewer moves closer. This paper describes a simple method for displaying readable text that need not have a fixed location in the virtual environment, such as menu-system and annotation text. Our approach positions text relative to the user's view frustums (one frustum per eye), adjusting the 3D placement of each piece of text as the user moves, so the text occupies a constant location in each of the view frustums and projects to the same pixels regardless of the user's location. The result is crisp, clear text, consistently fused stereo vision, and reduced visual fatigue compared to many other types of text in virtual-reality environments.","PeriodicalId":318334,"journal":{"name":"Journal of Graphics Tools","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126785057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast Computation of Vertex Normals for Linearly Deforming Meshes","authors":"Jindrich Parus, I. Kolingerová, A. Hast","doi":"10.1080/2151237X.2007.10129249","DOIUrl":"https://doi.org/10.1080/2151237X.2007.10129249","url":null,"abstract":"In this paper, we deal with shading of linearly deforming triangular meshes that deform in time so that each vertex travels independently along its linear trajectory. We will show how the vertex normal can be computed efficiently for an arbitrary triangular polygon mesh under linear deformation using the weighting scheme referred to by Jin et al. as \"mean weighted by areas of adjacent triangles.\" Our computation approach is also faster than simple normal recomputation. Moreover, it is more accurate than the usual linear interpolation. The proposed approach is general enough to be used to compute the vertex normal for any number of adjacent faces.","PeriodicalId":318334,"journal":{"name":"Journal of Graphics Tools","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121585724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"iSlerp: An Incremental Approach to Slerp","authors":"Xin Li","doi":"10.1080/2151237X.2007.10129245","DOIUrl":"https://doi.org/10.1080/2151237X.2007.10129245","url":null,"abstract":"In this paper, an incremental quaternion-interpolation algorithm is introduced. With the assumption of a constant interval between a pair of quaternions, the cost of the interpolation algorithm is significantly reduced. Expensive trigonometric calculations in Slerp are replaced with simple linear-combination arithmetic. The round-off errors and drifting behavior accumulated through incremental steps are also analyzed.","PeriodicalId":318334,"journal":{"name":"Journal of Graphics Tools","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121565945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bidirectional Adaptive √3-Subdivision","authors":"Gerd Sußner, M. Stamminger, G. Greiner","doi":"10.1080/2151237X.2007.10129247","DOIUrl":"https://doi.org/10.1080/2151237X.2007.10129247","url":null,"abstract":"Starting with small, rough, and low-detailed base models, √3- subdivision is able to produce smooth and highly accurate models by adaptive subdivision. However, current smoothing methods are only unidirectional, i.e., a mesh can be smoothed locally, but it is not possible to adaptively coarsen the mesh again. As a consequence, for view-dependent display, a change of the viewing parameters requires a complete restart at the base level. In this paper, a framework for bidirectional adaptive subdivision is presented, which allows us to adapt the current triangulation in both directions locally, i.e., add detail where required and remove detail whenever possible. It operates on the current triangulation only, i.e., there is no need to restore any previous states. Our smoothing process allows us to keep a constant frame rate within every single step in the refinement loop. To avoid popping artifacts, inserted and removed vertices are \"geomorphed.\" Basic key-frame animation support is provided by moving positions of vertices of the base mesh. The framework is suitable for various applications, e.g., as part of a graphics engine for computer games or for character modeling. As a proof of concept, we implemented a view-dependent application, rendering a large number of arbitrary meshes at interactive frame rates with sufficient detail.","PeriodicalId":318334,"journal":{"name":"Journal of Graphics Tools","volume":"166 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131761537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive Thresholding using the Integral Image","authors":"D. Bradley, G. Roth","doi":"10.1080/2151237X.2007.10129236","DOIUrl":"https://doi.org/10.1080/2151237X.2007.10129236","url":null,"abstract":"Image thresholding is a common task in many computer vision and graphics applications. The goal of thresholding an image is to classify pixels as either \"dark\" or \"light.\" Adaptive thresholding is a form of thresholding that takes into account spatial variations in illumination. We present a technique for real-time adaptive thresholding using the integral image of the input. Our technique is an extension of a previous method. However, our solution is more robust to illumination changes in the image. Additionally, our method is simple and easy to implement. Our technique is suitable for processing live video streams at a real-time frame-rate, making it a valuable tool for interactive applications such as augmented reality. Source code is available online.","PeriodicalId":318334,"journal":{"name":"Journal of Graphics Tools","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133711333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exact Evaluation of Catmull-Clark Subdivision Surfaces Near B-Spline Boundaries","authors":"Dylan Lacewell, Brent Burley","doi":"10.1080/2151237X.2007.10129243","DOIUrl":"https://doi.org/10.1080/2151237X.2007.10129243","url":null,"abstract":"We extend the eigenbasis method of Stam to evaluate Catmull-Clark subdivision surfaces near extraordinary vertices on B-spline boundaries. Source code is available online.","PeriodicalId":318334,"journal":{"name":"Journal of Graphics Tools","volume":"10 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134575439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On Faster Sphere-Box Overlap Testing","authors":"T. Larsson, T. Akenine-Möller, Eric Lengyel","doi":"10.1080/2151237X.2007.10129232","DOIUrl":"https://doi.org/10.1080/2151237X.2007.10129232","url":null,"abstract":"We present faster overlap tests between spheres and either axis-aligned or oriented boxes. By utilizing quick rejection tests, faster execution times are observed compared to previous techniques. In addition, we present alternative vectorized overlap tests, which are compared to the sequential algorithms. Source code is available online.","PeriodicalId":318334,"journal":{"name":"Journal of Graphics Tools","volume":"1039 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131551684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Textured Shadow Volumes","authors":"J. Hasselgren, T. Akenine-Möller","doi":"10.1080/2151237X.2007.10129251","DOIUrl":"https://doi.org/10.1080/2151237X.2007.10129251","url":null,"abstract":"We extend the shadow-volume algorithm so that every shadow-casting triangle can be associated with a transmittance texture, which dictates how much light can pass through the triangle at different locations. Our algorithm handles several layers of colored and semitransparent shadows. It allows efficient rendering of effects such as realistic shadows from leaves, fur, and colored glass. Source code is available online.","PeriodicalId":318334,"journal":{"name":"Journal of Graphics Tools","volume":"50 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116338822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast Ray/Axis-Aligned Bounding Box Overlap Tests using Ray Slopes","authors":"M. Eisemann, M. Magnor, Thorsten Grosch, S. Müller","doi":"10.1080/2151237X.2007.10129248","DOIUrl":"https://doi.org/10.1080/2151237X.2007.10129248","url":null,"abstract":"This paper proposes a new method for fast ray/axis-aligned bounding box overlap tests. This method tests the slope of a ray against an axis-aligned bounding box by projecting itself and the box onto the three planes orthogonal to the world-coordinate axes and performs the tests on them separately. The method is division-free, and successive calculations are independent of each other. No intersection distance is computed, but it can be added easily. Test results show the technique is up to 18% faster than any other method known to us and 14% faster on average for a wide variety of different test scenes and different processor architectures. Source code is available online.","PeriodicalId":318334,"journal":{"name":"Journal of Graphics Tools","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116882950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unified Distance Formulas for Halfspace Fog","authors":"Eric Lengyel","doi":"10.1080/2151237X.2007.10129239","DOIUrl":"https://doi.org/10.1080/2151237X.2007.10129239","url":null,"abstract":"In many real-time rendering applications, it is necessary to model a fog volume that is bounded by a single plane but is otherwise infinite in extent. This paper presents unified formulas that provide the correct distance traveled through a fog halfspace for all possible camera and surface point locations. Such formulas effectively remove the need to code for multiple cases separately, thereby achieving optimal fragment shading performance on modern rendering hardware.","PeriodicalId":318334,"journal":{"name":"Journal of Graphics Tools","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116933374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}