A multi-modal intelligent user interface for supervisory control of unmanned platforms

Glenn Taylor, Rich Frederiksen, J. Crossman, Michael Quist, Patrick Theisen
{"title":"A multi-modal intelligent user interface for supervisory control of unmanned platforms","authors":"Glenn Taylor, Rich Frederiksen, J. Crossman, Michael Quist, Patrick Theisen","doi":"10.1109/CTS.2012.6261037","DOIUrl":null,"url":null,"abstract":"Typical human-robot interaction (HRI) is through tele-operation or point-and-click interfaces that require extensive training to become proficient and require the user's complete attention to operate. For unmanned platforms to reach their full potential, users must be able to exert supervisory control over those platforms. This requires more effective means of communication in both directions, including high-level commands given to the vehicle and meaningful feedback to the user. Our aim is to reduce the training requirements and workload needed to interact with unmanned systems effectively and to raise the level of user interaction with these systems so that supervisory control is possible. In this paper we describe an intelligent user interface, called the Smart Interaction Device (SID) that facilitates a dialogue between the user and the unmanned platform. SID works with the user to understand the user's intent, including asking any clarification questions. Once an understanding is established, SID translates that intent into the language of the platform. SID also monitors the platform's progress in order to give feedback to the user about status or problems that arise. We have incorporated multiple input modalities, including speech, gesture, and sketch as natural ways for a user to communicate with unmanned platforms. SID also provides multiple modes of feedback, including graphics, video and speech. 
We describe SID's architecture and some examples of its application in different domains.","PeriodicalId":200122,"journal":{"name":"2012 International Conference on Collaboration Technologies and Systems (CTS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"18","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 International Conference on Collaboration Technologies and Systems (CTS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CTS.2012.6261037","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Typical human-robot interaction (HRI) occurs through tele-operation or point-and-click interfaces that require extensive training to become proficient and demand the user's complete attention to operate. For unmanned platforms to reach their full potential, users must be able to exert supervisory control over those platforms. This requires more effective means of communication in both directions, including high-level commands given to the vehicle and meaningful feedback to the user. Our aim is to reduce the training requirements and workload needed to interact effectively with unmanned systems and to raise the level of user interaction with these systems so that supervisory control is possible. In this paper we describe an intelligent user interface, called the Smart Interaction Device (SID), that facilitates a dialogue between the user and the unmanned platform. SID works with the user to understand the user's intent, asking clarification questions as needed. Once an understanding is established, SID translates that intent into the language of the platform. SID also monitors the platform's progress in order to give the user feedback about status or problems that arise. We have incorporated multiple input modalities, including speech, gesture, and sketch, as natural ways for a user to communicate with unmanned platforms. SID also provides multiple modes of feedback, including graphics, video, and speech. We describe SID's architecture and some examples of its application in different domains.