Spooky action at a distance: real-time VR interaction for non real-time remote robotics

Pavel A. Savkin, N. Quinn, L. Wilson
{"title":"Spooky action at a distance: real-time VR interaction for non real-time remote robotics","authors":"Pavel A. Savkin, N. Quinn, L. Wilson","doi":"10.1145/3306305.3332361","DOIUrl":null,"url":null,"abstract":"We control robots through a simulated environment in game engine using VR and interact with it intuitively. A major breakthrough of this system is that, even if real-time robot control is not possible, the user can interact with the environment in real-time to complete tasks. Our system consists of a robot, vision sensor (RGB-D camera), game engine, and VR headset with controllers. The robot-side visual is provided as a scanned 3D geometry snapshot. We leverage point cloud as a visualization. Given the information to the user, two steps are required to control the robot. First, object annotation is needed. Given virtual 3d objects, the user is asked to place them roughly where they are in VR, therefore making the process intuitive. Next, computer vision based optimization refines the position to an accuracy level required for robot grasping. Optimization runs using non-blocking threads to maintain real-time experience. Second, the user needs to interact with objects. A robot simulation and UI will assist the process. A virtual robot gripper will provide a stable grasp estimation when it is brought close to a target. Once the object is picked up, placing it is also assisted. As in our example with block construction, each block's alignment with other blocks is assisted using its geometric characteristics, facilitating accurate placement. During the process, robot actions are simulated then visualized. The simulation and assistance is processed in real-time. Once interaction is given, simulated actions are sent and executed. Interaction and annotation processes can be queued without waiting for a robot to complete each step. Additionally, the user can easily abort planned actions then redo them. Our system demonstrates how powerful it is to combine game engine technologies, VR, and robots with computer vision/graphics algorithms to achieve semantic control over time and space.","PeriodicalId":137562,"journal":{"name":"ACM SIGGRAPH 2019 Real-Time Live!","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM SIGGRAPH 2019 Real-Time Live!","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3306305.3332361","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

We control robots through a simulated environment in a game engine, using VR to interact with them intuitively. A major breakthrough of this system is that, even when real-time robot control is not possible, the user can still interact with the environment in real time to complete tasks. Our system consists of a robot, a vision sensor (RGB-D camera), a game engine, and a VR headset with controllers. The robot-side view is provided as a scanned 3D geometry snapshot, visualized as a point cloud. Given this information, controlling the robot takes two steps. First, objects must be annotated: given virtual 3D objects, the user places them roughly where they appear in VR, which keeps the process intuitive. A computer-vision-based optimization then refines each pose to the accuracy required for robot grasping; the optimization runs on non-blocking threads to preserve the real-time experience. Second, the user interacts with the objects, assisted by a robot simulation and UI. A virtual robot gripper provides a stable grasp estimate when brought close to a target. Once an object is picked up, placement is also assisted: in our block-construction example, each block's alignment with other blocks is guided by its geometric characteristics, facilitating accurate placement. Throughout the process, robot actions are simulated and then visualized, with the simulation and assistance processed in real time. Once an interaction is confirmed, the simulated actions are sent to the robot and executed. Interaction and annotation can be queued without waiting for the robot to complete each step, and the user can easily abort planned actions and redo them. Our system demonstrates how powerful it is to combine game-engine technologies, VR, and robots with computer vision/graphics algorithms to achieve semantic control over time and space.
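The abstract does not specify how the annotation-refinement step is implemented. Below is a minimal sketch of one plausible realization, assuming Open3D's ICP registration and a hypothetical `on_pose_refined` callback back into the game engine: the user's rough VR placement serves as the ICP initialization, and the optimization runs on a worker thread so the VR frame loop never blocks.

```python
import threading

import numpy as np
import open3d as o3d

def refine_pose_async(model_pcd, scene_pcd, rough_pose, on_pose_refined,
                      max_dist=0.02):
    """Refine a user-placed object pose against the scanned scene.

    model_pcd: o3d.geometry.PointCloud sampled from the virtual object.
    scene_pcd: o3d.geometry.PointCloud from the RGB-D snapshot.
    rough_pose: 4x4 transform from the user's rough VR placement,
                used as the ICP initialization.
    on_pose_refined: hypothetical callback receiving the refined
                     4x4 transform for the game engine to apply.
    """
    def worker():
        result = o3d.pipelines.registration.registration_icp(
            model_pcd, scene_pcd, max_dist, rough_pose,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        on_pose_refined(np.asarray(result.transformation))

    # Non-blocking: the VR interaction loop keeps running while ICP converges.
    threading.Thread(target=worker, daemon=True).start()
```

Because ICP only converges locally, the rough human placement is doing real work here: it supplies an initialization close enough that a local optimizer can reach grasp-level accuracy.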
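The placement assistance that exploits each block's "geometric characteristics" is likewise unspecified. One simple way such assistance could work (an assumption for illustration, not the paper's stated method) is to snap a held block's pose to the lattice implied by the block dimensions and to the nearest right-angle orientation:

```python
import numpy as np

def snap_block_pose(position, yaw, block_size):
    """Snap a held block's pose for flush, aligned placement.

    position: (3,) candidate position of the block center, in meters.
    yaw: rotation about the vertical axis, in radians.
    block_size: (3,) block extents (x, y, z), in meters.
    Returns (snapped_position, snapped_yaw).
    """
    block_size = np.asarray(block_size, dtype=float)
    # Round each coordinate to the nearest multiple of the block extent,
    # so same-size neighboring blocks end up flush against each other.
    snapped_pos = np.round(np.asarray(position, dtype=float) / block_size) * block_size
    # Round the yaw to the nearest 90 degrees.
    snapped_yaw = np.round(yaw / (np.pi / 2)) * (np.pi / 2)
    return snapped_pos, snapped_yaw
```

The real system would presumably also snap against blocks already placed; this sketch only captures the core idea of using object geometry to guide accurate placement.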
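Decoupling real-time interaction from non-real-time execution amounts to a producer-consumer queue: the user keeps enqueuing simulated actions while a worker drains them to the slow robot, and not-yet-executed actions can be aborted. A minimal sketch under that assumption, with `send_to_robot` standing in for whatever transport the actual system uses:

```python
import queue
import threading

class ActionQueue:
    """Queue simulated robot actions without blocking VR interaction."""

    def __init__(self, send_to_robot):
        self._pending = queue.Queue()
        self._send = send_to_robot        # blocking call; the robot is slow
        self._aborted = set()             # ids of actions cancelled by user
        self._lock = threading.Lock()
        threading.Thread(target=self._drain, daemon=True).start()

    def enqueue(self, action_id, action):
        """Called from the interaction loop; returns immediately."""
        self._pending.put((action_id, action))

    def abort(self, action_id):
        """Cancel a queued action that has not executed yet (user redo)."""
        with self._lock:
            self._aborted.add(action_id)

    def _drain(self):
        while True:
            action_id, action = self._pending.get()
            with self._lock:
                if action_id in self._aborted:
                    self._aborted.discard(action_id)
                    continue              # skip actions the user aborted
            self._send(action)            # executes at the robot's own pace
```

This separation is what lets annotation and interaction proceed in real time while execution lags behind, matching the paper's claim of "semantic control over time and space."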