Augmented Reality: Beyond Interaction

A. Nijholt
DOI: 10.54941/ahfe1002058
Journal: Human Factors in Virtual Environments and Game Design
Citations: 1

Abstract

In 1997 Ronald T. Azuma introduced the following definition of Augmented Reality (AR): “Some researchers define AR in a way that requires the use of Head-Mounted Displays (HMDs). To avoid limiting AR to specific technologies, this survey defines AR as systems that have the following three characteristics: 1) Combines real and virtual, 2) Interactive in real-time, 3) Registered in 3-D.” Azuma also mentions that “AR might apply to all senses, not just sight.” [1] This definition has led AR research until now. AR researchers have focused on the various ways technology, in particular digital technology (computer-generated imagery, computer vision and world modelling, interaction technology, and AR display technology), could be developed to realize this view of AR. The emphasis has been on addressing sight, our most dominant sense, when generating and aligning virtual content, although we cannot survive without the others. Azuma and others mention the other senses and assume that the definition also covers content other than computer-generated imagery, perhaps even content that is not computer-generated or (spatio-temporally) generated and controlled. Nevertheless, the definition has constituents that can be given various interpretations. This makes it workable, but it is useful to discuss how we should distinguish real from virtual content, what it is that distinguishes real from virtual, and how virtual content can trigger changes in the real world (and the other way around), taking into account that AR is becoming part of ubiquitous computing.
That is, rather than looking at AR from the point of view of particular professional, educational, or entertainment applications, we should look at AR as ever-present, embedded in ubiquitous computing (Ubicomp), with its AR devices’ sensors and actuators communicating with the smart environments in which it is embedded. The focus in this paper is on ‘optical see-through’ (OST) AR and ever-present AR. Ever-present AR will become possible with non-obtrusive AR glasses [2] or contact lenses [3,4]. Usually, interaction is looked at from the point of view of what we see and hear. But we are certainly aware of touch experiences and of exploring objects through active touch. We can also experience scents and flavors, passively but also actively; that is, we can consciously explore scents or tastes, become aware of them, and ask the environment (not necessarily explicitly, since our preferences are known and our intentions can be predicted) to respond in an appropriate way to evoke or continue an interaction. Interaction in AR and with AR technology requires a new look at interaction. Are we interacting with the AR device, with the environment, or with the environment through the AR device? Part of what we perceive is real, part is superimposed on reality, and part is the interaction between real and virtual reality. How do we interact with this mix of realities? Additionally, our HMD-based AR provides us with view changes caused by changes in position, head orientation, or gaze. We interact with the device with, for example, speech and hand gestures; we interact with the environment with, for example, pose changes; and we interact with the virtual content with interaction modalities that are appropriate for that content: push a virtual block, open a virtual door, or have a conversation with a virtual human that inhabits the AR world.
In addition, we can think of interactions that become possible because technology allows us to access and act upon sensor information that cannot be perceived with our natural perception receptors. In a ubiquitous computing environment, our AR device can provide us with a 360-degree view of our environment, drones can feed us information from above, infrared sensors know about people and events in the dark, our car receives visual information about not-yet-visible vehicles approaching an intersection [5], sound frequencies beyond the range of the human ear can be made accessible, smell sensors can enhance the human sense of smell, et cetera. In this paper, we investigate the characteristics of interactions in AR and relate them to the characteristics of regular human-computer interaction (interacting with tools) [6], interaction with multimedia [7], interaction through behavior [8], implicit interaction [9], embodied interaction [10], fake interaction [11], and interaction based on Gibson’s visual perception theory [12]. This will be done from the point of view of ever-present AR [13] with optical see-through wearable devices. References could not be included because of space limitations.
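The abstract’s three-way distinction (interacting with the AR device, with the environment, and with virtual content, each through different modalities) can be pictured as an event-routing problem. The following is a minimal illustrative sketch, not taken from the paper; all class, function, and modality names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    """A hypothetical AR input event.

    modality: how the user acted, e.g. "speech", "gesture", "pose"
    target:   what the interaction is aimed at: "device",
              "environment", or "virtual" (content)
    payload:  a description of the action
    """
    modality: str
    target: str
    payload: str

def route(event: InputEvent) -> str:
    """Dispatch an event to the interaction target it addresses."""
    handlers = {
        "device": lambda e: f"device command: {e.payload}",
        "environment": lambda e: f"environment update: {e.payload}",
        "virtual": lambda e: f"virtual-content action: {e.payload}",
    }
    handler = handlers.get(event.target)
    if handler is None:
        raise ValueError(f"unknown interaction target: {event.target}")
    return handler(event)

# Speech aimed at the device vs. a gesture aimed at virtual content
# follow different paths, mirroring the paper's distinction.
print(route(InputEvent("speech", "device", "show map")))
print(route(InputEvent("gesture", "virtual", "push block")))
```

In a real system the `target` would itself have to be inferred from context (gaze, registration, scene model) rather than given; the sketch only makes the routing distinction explicit.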