{"title":"Augmented audio reality: telepresence/VR hybrid acoustic environments","authors":"Michael Cohen, S. Aoki, N. Koizumi","doi":"10.1109/ROMAN.1993.367692","DOIUrl":null,"url":null,"abstract":"Augmented audio reality consists of hybrid presentations in which computer-generated sounds are overlayed on top of more directly acquired audio signals. We are exploring the alignability of binaural signals with artificially spatialized sources, synthesized by convolving monaural signals with left/right pairs of directional transfer functions. We use MAW (multidimensional audio windows), a NeXT-based system, as a binaural directional mixing console. Since the rearrangement of a dynamic map is used to dynamically select transfer functions, a user may specify the virtual location of a sound source, throwing the source into perceptual space, using exocentric graphical control to drive egocentric auditory display. As a concept demonstration, we muted a telephone, and then used MAW to spatialize a ringing signal at its location, putting the sonic image of the phone into the office environment. By juxtaposing and mixing 'real' and 'synthetic' audio transmissions, we are exploring the relationship between acoustic telepresence and VR presentations: telepresence manifests as the actual configuration of sources in a sound field, as perceivable by a dummyhead; VR is the perception yielded by filtering of virtual sources with respect to virtual sinks. We have conducted an experiment testing the usefulness of such a hybrid.<<ETX>>","PeriodicalId":270591,"journal":{"name":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","volume":"520 2","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1993-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"42","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of 1993 2nd IEEE International Workshop on Robot and Human Communication","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ROMAN.1993.367692","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 42
Abstract
Augmented audio reality consists of hybrid presentations in which computer-generated sounds are overlaid on top of more directly acquired audio signals. We are exploring the alignability of binaural signals with artificially spatialized sources, synthesized by convolving monaural signals with left/right pairs of directional transfer functions. We use MAW (multidimensional audio windows), a NeXT-based system, as a binaural directional mixing console. Since the rearrangement of a dynamic map is used to select transfer functions on the fly, a user may specify the virtual location of a sound source, throwing the source into perceptual space, using exocentric graphical control to drive an egocentric auditory display. As a concept demonstration, we muted a telephone and then used MAW to spatialize a ringing signal at its location, putting the sonic image of the phone into the office environment. By juxtaposing and mixing 'real' and 'synthetic' audio transmissions, we are exploring the relationship between acoustic telepresence and VR presentations: telepresence manifests as the actual configuration of sources in a sound field, as perceivable by a dummy head; VR is the perception yielded by filtering virtual sources with respect to virtual sinks. We have conducted an experiment testing the usefulness of such a hybrid.
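The spatialization technique the abstract describes, convolving a monaural signal with a left/right pair of directional transfer functions, can be sketched as follows. This is a minimal illustration, not the MAW implementation: the impulse responses `hrir_left`/`hrir_right`, the 30-degree directional quantization, and the `hrir_table` lookup are hypothetical stand-ins for the paper's dynamically selected transfer functions.

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono, hrir_left, hrir_right):
    """Render a monaural signal as a binaural source by convolving it
    with a left/right pair of directional impulse responses."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)  # (samples, 2) stereo frame

def render_at(mono, azimuth_deg, hrir_table):
    """Pick the HRIR pair for the nearest measured direction and render.
    `hrir_table` (hypothetical) maps quantized azimuths to HRIR pairs,
    standing in for the paper's map-driven transfer-function selection."""
    key = int(round(azimuth_deg / 30.0)) * 30 % 360
    hrir_left, hrir_right = hrir_table[key]
    return spatialize(mono, hrir_left, hrir_right)
```

Under these assumptions, moving a source on the exocentric map amounts to re-keying the table with a new direction and re-convolving, which is one way an exocentric graphical control can drive an egocentric auditory display.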