3D audio augmented reality: implementation and experiments
V. Sundareswaran, Kenneth Wang, Steven Chen, R. Behringer, J. McGee, C. Tam, P. Zahorik
The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.
Published: 2003-10-07 · DOI: 10.1109/ISMAR.2003.1240728
Citations: 41
Abstract
Augmented reality (AR) presentations may be visual or auditory. Auditory presentation has the potential to provide hands-free and visually non-obstructing cues. Recently, we have developed a 3D audio wearable system that can be used to provide alerts and informational cues to a mobile user in such a manner as to appear to emanate from specific locations in the user's environment. In order to study registration errors in 3D audio AR representations, we conducted a perceptual training experiment in which visual and auditory cues were presented to observers. The results of this experiment suggest that perceived registration errors may be reduced through head movement and through training presentations that include both visual and auditory cues.
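To make the core idea concrete, here is a minimal sketch (not taken from the paper) of what it means for an auditory cue to "appear to emanate from a specific location in the user's environment": the renderer must convert a world-anchored source position into a head-relative direction using the tracked head pose, then spatialize accordingly. The function names, coordinate conventions, and the crude equal-power interaural level difference model below are illustrative assumptions; a deployed system such as the one described would use measured HRTFs and full 3D head tracking.

```python
# Minimal sketch (illustrative, not the authors' implementation):
# turn a world-fixed cue location into a head-relative direction and
# a simple left/right gain pair, so the cue stays anchored as the head turns.

import math

def head_relative_azimuth(listener_xy, head_yaw_rad, source_xy):
    """Azimuth of the source in radians relative to the listener's facing
    direction (positive = to the listener's left, counter-clockwise yaw)."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    world_bearing = math.atan2(dy, dx)           # source direction in world frame
    rel = world_bearing - head_yaw_rad           # rotate into the head frame
    return math.atan2(math.sin(rel), math.cos(rel))  # wrap to (-pi, pi]

def simple_ild_gains(azimuth_rad):
    """Crude interaural level difference via equal-power panning.
    A real 3D audio renderer would apply HRTF filtering instead."""
    pan = math.sin(azimuth_rad)                  # -1 (right) .. +1 (left)
    left = math.sqrt((1 + pan) / 2)
    right = math.sqrt((1 - pan) / 2)
    return left, right

if __name__ == "__main__":
    listener = (0.0, 0.0)
    source = (2.0, 2.0)                          # cue anchored ahead-left in the world
    for yaw_deg in (0, 45, 90):                  # user turns their head toward the cue
        az = head_relative_azimuth(listener, math.radians(yaw_deg), source)
        left, right = simple_ild_gains(az)
        print(f"yaw={yaw_deg:3d} deg  azimuth={math.degrees(az):6.1f} deg  "
              f"gains L={left:.2f} R={right:.2f}")
```

Running the example shows the cue collapsing toward the center channel as the head rotates to face it, which is the dynamic localization cue the paper's experiment exploits when it reports that head movement reduces perceived registration error.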