Virtual Reality and Photogrammetry for Improved Reproducibility of Human-Robot Interaction Studies

Mark Murnane, Max Breitmeyer, Cynthia Matuszek, Don Engel
{"title":"虚拟现实与摄影测量技术用于提高人机交互研究的再现性","authors":"Mark Murnane, Max Breitmeyer, Cynthia Matuszek, Don Engel","doi":"10.1109/VR.2019.8798186","DOIUrl":null,"url":null,"abstract":"Collecting data in robotics, especially human-robot interactions, traditionally requires a physical robot in a prepared environment, that presents substantial scalability challenges. First, robots provide many possible points of system failure, while the availability of human participants is limited. Second, for tasks such as language learning, it is important to create environments that provide interesting’ varied use cases. Traditionally, this requires prepared physical spaces for each scenario being studied. Finally, the expense associated with acquiring robots and preparing spaces places serious limitations on the reproducible quality of experiments. We therefore propose a novel mechanism for using virtual reality to simulate robotic sensor data in a series of prepared scenarios. This allows for a reproducible dataset that other labs can recreate using commodity VR hardware. We demonstrate the effectiveness of this approach with an implementation that includes a simulated physical context, a reconstruction of a human actor, and a reconstruction of a robot. This evaluation shows that even a simple “sandbox” environment allows us to simulate robot sensor data, as well as the movement (e.g., view-port) and speech of humans interacting with the robot in a prescribed scenario.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Virtual Reality and Photogrammetry for Improved Reproducibility of Human-Robot Interaction Studies\",\"authors\":\"Mark Murnane, Max Breitmeyer, Cynthia Matuszek, Don Engel\",\"doi\":\"10.1109/VR.2019.8798186\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Collecting data in robotics, especially human-robot interactions, traditionally requires a physical robot in a prepared environment, that presents substantial scalability challenges. First, robots provide many possible points of system failure, while the availability of human participants is limited. Second, for tasks such as language learning, it is important to create environments that provide interesting’ varied use cases. Traditionally, this requires prepared physical spaces for each scenario being studied. Finally, the expense associated with acquiring robots and preparing spaces places serious limitations on the reproducible quality of experiments. We therefore propose a novel mechanism for using virtual reality to simulate robotic sensor data in a series of prepared scenarios. This allows for a reproducible dataset that other labs can recreate using commodity VR hardware. We demonstrate the effectiveness of this approach with an implementation that includes a simulated physical context, a reconstruction of a human actor, and a reconstruction of a robot. 
This evaluation shows that even a simple “sandbox” environment allows us to simulate robot sensor data, as well as the movement (e.g., view-port) and speech of humans interacting with the robot in a prescribed scenario.\",\"PeriodicalId\":315935,\"journal\":{\"name\":\"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)\",\"volume\":\"108 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-03-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/VR.2019.8798186\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VR.2019.8798186","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

Collecting data in robotics, especially for human-robot interaction, traditionally requires a physical robot in a prepared environment, which presents substantial scalability challenges. First, robots provide many possible points of system failure, while the availability of human participants is limited. Second, for tasks such as language learning, it is important to create environments that provide interesting, varied use cases. Traditionally, this requires prepared physical spaces for each scenario being studied. Finally, the expense associated with acquiring robots and preparing spaces places serious limitations on the reproducible quality of experiments. We therefore propose a novel mechanism for using virtual reality to simulate robotic sensor data in a series of prepared scenarios. This allows for a reproducible dataset that other labs can recreate using commodity VR hardware. We demonstrate the effectiveness of this approach with an implementation that includes a simulated physical context, a reconstruction of a human actor, and a reconstruction of a robot. This evaluation shows that even a simple "sandbox" environment allows us to simulate robot sensor data, as well as the movement (e.g., view-port) and speech of humans interacting with the robot in a prescribed scenario.
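
The abstract describes the pipeline only at a high level. As a rough illustration of the kind of per-frame record such a VR capture setup could produce, the sketch below projects a toy set of scene points (standing in for a reconstructed environment) into a pinhole camera at the simulated robot's viewpoint to obtain a synthetic depth image, and logs it alongside the human participant's head pose (the "view-port") and a reference to recorded speech. This is a minimal sketch under stated assumptions, not the authors' implementation: the function names, the JSONL schema, and the camera parameters are all hypothetical.

```python
"""Illustrative sketch only (not the paper's code): synthesize a depth image
from a simulated robot viewpoint and log one reproducible frame record."""
import json
import numpy as np


def synthetic_depth_image(points_world, cam_pose, fx=300.0, fy=300.0,
                          cx=160.0, cy=120.0, width=320, height=240):
    """Project scene points (N, 3) through a pinhole camera, keeping the
    nearest depth per pixel; a crude stand-in for rendering the scene."""
    # cam_pose is assumed to be a 4x4 world-to-camera transform.
    pts_h = np.c_[points_world, np.ones(len(points_world))]
    pts_cam = (cam_pose @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.0]          # keep points in front

    depth = np.full((height, width), np.inf)
    u = np.round(fx * pts_cam[:, 0] / pts_cam[:, 2] + cx).astype(int)
    v = np.round(fy * pts_cam[:, 1] / pts_cam[:, 2] + cy).astype(int)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[valid], v[valid], pts_cam[valid, 2]):
        depth[vi, ui] = min(depth[vi, ui], zi)      # nearest surface wins
    return depth


def log_frame(path, t, depth, head_pose, speech_wav):
    """Append one frame record as a JSON line (hypothetical schema)."""
    finite = depth[np.isfinite(depth)]
    record = {
        "t": t,                                            # timestamp (s)
        "robot_depth_min_m": float(finite.min()) if finite.size else None,
        "human_head_pose": head_pose,                      # flattened 4x4
        "speech_audio": speech_wav,                        # recorded utterance
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    scene = np.random.rand(5000, 3) * [4.0, 4.0, 3.0]      # toy scene points
    robot_cam = np.eye(4)                                   # camera at origin
    depth = synthetic_depth_image(scene, robot_cam)
    log_frame("session.jsonl", 0.0, depth,
              head_pose=np.eye(4).ravel().tolist(),
              speech_wav="utterance_000.wav")
```

In a real setup, the scene points and robot camera pose would come from the VR engine's reconstructed environment, and the head pose and audio from the commodity VR headset; logging each frame to a flat, self-describing format is what would let another lab replay the exact scenario.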