A comparison of visual and auditory EEG interfaces for robot multi-stage task control

Kai Arulkumaran, Marina Di Vincenzo, Rousslan Fernand Julien Dossa, Shogo Akiyama, Dan Ogawa Lillrank, Motoshige Sato, Kenichi Tomeoka, Shuntaro Sasai
{"title":"A comparison of visual and auditory EEG interfaces for robot multi-stage task control","authors":"Kai Arulkumaran, Marina Di Vincenzo, Rousslan Fernand Julien Dossa, Shogo Akiyama, Dan Ogawa Lillrank, Motoshige Sato, Kenichi Tomeoka, Shuntaro Sasai","doi":"10.3389/frobt.2024.1329270","DOIUrl":null,"url":null,"abstract":"Shared autonomy holds promise for assistive robotics, whereby physically-impaired people can direct robots to perform various tasks for them. However, a robot that is capable of many tasks also introduces many choices for the user, such as which object or location should be the target of interaction. In the context of non-invasive brain-computer interfaces for shared autonomy—most commonly electroencephalography-based—the two most common choices are to provide either auditory or visual stimuli to the user—each with their respective pros and cons. Using the oddball paradigm, we designed comparable auditory and visual interfaces to speak/display the choices to the user, and had users complete a multi-stage robotic manipulation task involving location and object selection. Users displayed differing competencies—and preferences—for the different interfaces, highlighting the importance of considering modalities outside of vision when constructing human-robot interfaces.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":" 11","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Robotics and AI","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/frobt.2024.1329270","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Shared autonomy holds promise for assistive robotics, whereby physically-impaired people can direct robots to perform various tasks for them. However, a robot that is capable of many tasks also introduces many choices for the user, such as which object or location should be the target of interaction. In the context of non-invasive brain-computer interfaces for shared autonomy—most commonly electroencephalography-based—the two most common choices are to provide either auditory or visual stimuli to the user—each with their respective pros and cons. Using the oddball paradigm, we designed comparable auditory and visual interfaces to speak/display the choices to the user, and had users complete a multi-stage robotic manipulation task involving location and object selection. Users displayed differing competencies—and preferences—for the different interfaces, highlighting the importance of considering modalities outside of vision when constructing human-robot interfaces.
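The paper does not include code; purely as an illustrative sketch, the snippet below shows one common way an oddball-paradigm selection loop can be structured: each candidate choice (an object or location) is presented as a stimulus in a shuffled sequence, post-stimulus EEG epochs are collected per choice, and the choice whose presentations evoke the largest average event-related response is selected. All names, parameters, and the scoring window here are assumptions for illustration, not the authors' implementation.

```python
import random
import numpy as np

# Illustrative sketch only (not the authors' implementation): an
# oddball-paradigm selection loop over a small set of robot-task choices.
CHOICES = ["cup", "bottle", "left table", "right table"]  # hypothetical options
N_REPETITIONS = 10      # presentations per choice (assumed)
EPOCH_SAMPLES = 200     # samples per post-stimulus EEG epoch (assumed)

def present_stimulus(choice: str) -> None:
    """Placeholder for speaking (auditory) or flashing (visual) one choice."""
    print(f"stimulus: {choice}")

def record_epoch() -> np.ndarray:
    """Placeholder for a recorded post-stimulus EEG epoch; random data here."""
    return np.random.randn(EPOCH_SAMPLES)

def run_selection() -> str:
    epochs = {c: [] for c in CHOICES}
    # Present each choice N_REPETITIONS times in a shuffled (oddball) order.
    sequence = CHOICES * N_REPETITIONS
    random.shuffle(sequence)
    for choice in sequence:
        present_stimulus(choice)
        epochs[choice].append(record_epoch())
    # Score each choice by the mean amplitude of its averaged epochs in a
    # window where a P300-like response might be expected (assumed indices).
    window = slice(60, 120)
    scores = {c: np.mean(np.stack(e), axis=0)[window].mean()
              for c, e in epochs.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    print("selected:", run_selection())
```

In a real system the placeholder functions would be replaced by actual stimulus presentation and synchronized EEG epoching, and the amplitude score by a trained ERP classifier; the loop structure is what the oddball paradigm contributes.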