Florian Heller, Jayan Jevanesan, P. Dietrich, Jan O. Borchers
{"title":"我们在哪里?评估当前移动音频增强现实系统的渲染保真度","authors":"Florian Heller, Jayan Jevanesan, P. Dietrich, Jan O. Borchers","doi":"10.1145/2935334.2935365","DOIUrl":null,"url":null,"abstract":"Mobile audio augmented reality systems (MAARS) simulate virtual audio sources in a physical space via headphones. While 20 years ago, these required expensive sensing and rendering equipment, the necessary technology has become widely available. Smartphones have become capable to run high-fidelity spatial audio rendering algorithms, and modern sensors can provide rich data to the rendering process. Combined, these constitute an inexpensive, powerful platform for audio augmented reality. We evaluated the practical limitations of currently available off-the-shelf hardware using a voice sample in a lab experiment. State of the art motion sensors provide multiple degrees of freedom, including pitch and roll angles instead of yaw only. Since our rendering algorithm is also capable of including this richer sensor data in terms of source elevation, we also measured its impact on sound localization. Results show that mobile audio augmented reality systems achieve the same horizontal resolution as stationary systems. We found that including pitch and roll angles did not significantly improve the users' localization performance.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"37 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":"{\"title\":\"Where are we?: evaluating the current rendering fidelity of mobile audio augmented reality systems\",\"authors\":\"Florian Heller, Jayan Jevanesan, P. Dietrich, Jan O. Borchers\",\"doi\":\"10.1145/2935334.2935365\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Mobile audio augmented reality systems (MAARS) simulate virtual audio sources in a physical space via headphones. While 20 years ago, these required expensive sensing and rendering equipment, the necessary technology has become widely available. Smartphones have become capable to run high-fidelity spatial audio rendering algorithms, and modern sensors can provide rich data to the rendering process. Combined, these constitute an inexpensive, powerful platform for audio augmented reality. We evaluated the practical limitations of currently available off-the-shelf hardware using a voice sample in a lab experiment. State of the art motion sensors provide multiple degrees of freedom, including pitch and roll angles instead of yaw only. Since our rendering algorithm is also capable of including this richer sensor data in terms of source elevation, we also measured its impact on sound localization. Results show that mobile audio augmented reality systems achieve the same horizontal resolution as stationary systems. 
We found that including pitch and roll angles did not significantly improve the users' localization performance.\",\"PeriodicalId\":420843,\"journal\":{\"name\":\"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services\",\"volume\":\"37 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-09-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"16\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2935334.2935365\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2935334.2935365","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Where are we?: evaluating the current rendering fidelity of mobile audio augmented reality systems
Mobile audio augmented reality systems (MAARS) simulate virtual audio sources in a physical space via headphones. While such systems required expensive sensing and rendering equipment 20 years ago, the necessary technology has since become widely available. Smartphones are now capable of running high-fidelity spatial audio rendering algorithms, and modern sensors can provide rich data to the rendering process. Combined, these constitute an inexpensive, powerful platform for audio augmented reality. We evaluated the practical limitations of currently available off-the-shelf hardware using a voice sample in a lab experiment. State-of-the-art motion sensors provide multiple degrees of freedom, including pitch and roll angles instead of yaw only. Since our rendering algorithm can also incorporate this richer sensor data to render source elevation, we measured its impact on sound localization as well. Results show that mobile audio augmented reality systems achieve the same horizontal resolution as stationary systems. We found that including pitch and roll angles did not significantly improve the users' localization performance.
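The abstract describes a renderer that folds head orientation (yaw, pitch, and roll) into the spatialization of a virtual source, including its elevation. The following is a minimal sketch, not the authors' implementation, of how such orientation data could be converted into head-relative azimuth and elevation for an HRTF-based renderer; the coordinate conventions, function names, and values are illustrative assumptions.

import numpy as np

def world_to_head(yaw, pitch, roll):
    """World-to-head rotation from yaw (about z), pitch (about y, positive = head up),
    and roll (about x), all in radians. Frame: x forward, y left, z up."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])  # yaw
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])  # pitch (positive = up)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])  # roll
    return (Rz @ Ry @ Rx).T  # inverse of the head-to-world rotation

def source_angles(listener_pos, source_pos, yaw, pitch, roll):
    """Azimuth and elevation (degrees) of a virtual source in the listener's head frame."""
    v = world_to_head(yaw, pitch, roll) @ (np.asarray(source_pos, float) - np.asarray(listener_pos, float))
    azimuth = np.degrees(np.arctan2(v[1], v[0]))                 # 0 deg = straight ahead, positive = left
    elevation = np.degrees(np.arcsin(v[2] / np.linalg.norm(v)))  # positive = above ear level
    return azimuth, elevation

# Example: a source 2 m ahead and 1 m above ear level; tilting the head up by 20 deg
# lowers the head-relative elevation from about 26.6 to about 6.6 degrees.
print(source_angles([0, 0, 0], [2, 0, 1], yaw=0.0, pitch=np.radians(20), roll=0.0))

In such a pipeline, each new orientation sample from the head tracker would update the azimuth and elevation, which could then index into an HRTF set or be passed to a binaural rendering library.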