Displaying Readable Text in a Head-Tracked, Stereoscopic Virtual Environment

Eric Karasuda, Sara McMains
{"title":"Displaying Readable Text in a Head-Tracked, Stereoscopic Virtual Environment","authors":"Eric Karasuda, Sara McMains","doi":"10.1080/2151237X.2007.10129240","DOIUrl":null,"url":null,"abstract":"In a head-tracked, stereoscopic virtual environment, many straightforward text implementations suffer from poor readability or unnatural behavior. For example, scan-converted text often appears blurry or \"shimmery\" due to rapidly alternating text thickness because scan conversion depends on the user's location and the user rarely stays perfectly still. Likewise, bitmapped fonts cannot generally mimic objects with fixed size and location because they do not scale and thus do not appear larger as the viewer moves closer. This paper describes a simple method for displaying readable text that need not have a fixed location in the virtual environment, such as menu-system and annotation text. Our approach positions text relative to the user's view frustums (one frustum per eye), adjusting the 3D placement of each piece of text as the user moves, so the text occupies a constant location in each of the view frustums and projects to the same pixels regardless of the user's location. The result is crisp, clear text, consistently fused stereo vision, and reduced visual fatigue compared to many other types of text in virtual-reality environments.","PeriodicalId":318334,"journal":{"name":"Journal of Graphics Tools","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Graphics Tools","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/2151237X.2007.10129240","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

In a head-tracked, stereoscopic virtual environment, many straightforward text implementations suffer from poor readability or unnatural behavior. For example, scan-converted text often appears blurry or "shimmery" due to rapidly alternating text thickness because scan conversion depends on the user's location and the user rarely stays perfectly still. Likewise, bitmapped fonts cannot generally mimic objects with fixed size and location because they do not scale and thus do not appear larger as the viewer moves closer. This paper describes a simple method for displaying readable text that need not have a fixed location in the virtual environment, such as menu-system and annotation text. Our approach positions text relative to the user's view frustums (one frustum per eye), adjusting the 3D placement of each piece of text as the user moves, so the text occupies a constant location in each of the view frustums and projects to the same pixels regardless of the user's location. The result is crisp, clear text, consistently fused stereo vision, and reduced visual fatigue compared to many other types of text in virtual-reality environments.
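The core idea can be illustrated with a minimal sketch: a text label is defined once at a fixed position in an eye's view (frustum) coordinate system, and that position is re-expressed in world coordinates every frame from the latest head-tracker pose, so the label projects to the same pixels no matter where the viewer stands. This is not the authors' implementation; the tracker interface, offsets, coordinate convention, and function names below are assumptions for illustration only.

import numpy as np

def eye_to_world(eye_pos, world_from_eye_rot, point_in_eye):
    # Transform a point from this eye's view coordinates into world
    # coordinates using the latest head-tracker pose (hypothetical API).
    return eye_pos + world_from_eye_rot @ point_in_eye

def place_label_for_eye(eye_pos, world_from_eye_rot, label_offset_eye):
    # Keep the label at a constant position in this eye's frustum
    # (e.g. 0.5 m in front of the eye, slightly below the view axis)
    # by recomputing its world-space anchor every frame.
    return eye_to_world(eye_pos, world_from_eye_rot, label_offset_eye)

# Example per-frame update for one eye, with made-up tracker values.
label_offset_eye = np.array([0.0, -0.1, -0.5])   # fixed in view space
eye_pos = np.array([0.2, 1.6, 0.0])              # from the head tracker
world_from_eye_rot = np.eye(3)                    # from the head tracker
anchor_world = place_label_for_eye(eye_pos, world_from_eye_rot, label_offset_eye)

Because the anchor is recomputed independently for each eye, the text occupies the same region of each eye's frustum, which is what yields the consistently fused stereo described in the abstract; in a real renderer the same transform would simply be folded into the modelview matrix applied to the text geometry.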