Latest Articles in Frontiers in Robotics and AI

Trustworthy navigation with variational policy in deep reinforcement learning.
IF 3
Frontiers in Robotics and AI Pub Date: 2025-10-08 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1652050
Karla Bockrath, Liam Ernst, Rohaan Nadeem, Bryan Pedraza, Dimah Dera
Introduction: Developing a reliable and trustworthy navigation policy in deep reinforcement learning (DRL) for mobile robots is extremely challenging, particularly in real-world, highly dynamic environments. In particular, exploring and navigating unknown environments without prior knowledge, while avoiding obstacles and collisions, is very difficult for mobile robots.

Methods: This study introduces Trust-Nav, a novel trustworthy navigation framework that uses variational policy learning to quantify uncertainty in the estimation of the robot's action, localization, and map representation. Trust-Nav employs a Bayesian variational approximation of the posterior distribution over the policy network's parameters. Policy-based and value-based learning are combined to guide the robot's actions in unknown environments. We derive the propagation of variational moments through all layers of the policy network and employ a first-order approximation for the nonlinear activation functions. The uncertainty in the robot's action is measured by the propagated variational covariance in the DRL policy network, while the uncertainty in the robot's localization and mapping is embedded in the reward function and stems from the classical theory of optimal experimental design. The total loss function optimizes the parameters of the policy and value networks to maximize the robot's cumulative reward in an unknown environment.

Results: Experiments conducted in the Gazebo robotics simulator demonstrate the superior performance of the proposed Trust-Nav model in achieving robust autonomous navigation and mapping.

Discussion: Trust-Nav consistently outperforms deterministic DRL approaches, particularly in complicated environments involving noisy conditions and adversarial attacks. Integrating uncertainty into the policy network promotes safer and more reliable navigation, especially in complex or unpredictable environments. Trust-Nav is a step toward deployable, self-aware robotic systems that can recognize and respond to their own limitations.

Volume 12, Article 1652050. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12541417/pdf/
Citations: 0
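The moment-propagation idea in this abstract can be sketched compactly: a Gaussian over activations passes through a linear layer exactly, and through a ReLU via a first-order (delta-method) approximation, with the trace of the output covariance serving as an action-uncertainty summary. This is a minimal numpy sketch under those assumptions, not the authors' implementation; the layer sizes and the trace summary are illustrative.

```python
import numpy as np

def propagate_linear(mu, Sigma, W, b):
    # A linear layer z = W x + b maps a Gaussian exactly:
    # mean -> W mu + b, covariance -> W Sigma W^T.
    return W @ mu + b, W @ Sigma @ W.T

def propagate_relu(mu, Sigma):
    # First-order (delta-method) approximation through ReLU:
    # linearize around mu, so the Jacobian is diagonal with 0/1 entries.
    J = np.diag((mu > 0).astype(float))
    return np.maximum(mu, 0.0), J @ Sigma @ J.T

# Toy two-layer policy head (sizes are arbitrary); the propagated output
# covariance plays the role of the action-uncertainty signal described above.
rng = np.random.default_rng(0)
mu, Sigma = rng.normal(size=3), np.eye(3) * 0.1
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

mu, Sigma = propagate_linear(mu, Sigma, W1, b1)
mu, Sigma = propagate_relu(mu, Sigma)
mu_a, Sigma_a = propagate_linear(mu, Sigma, W2, b2)
action_uncertainty = np.trace(Sigma_a)  # scalar uncertainty summary
```

Because every step is a congruence transform of a positive semi-definite matrix, the output covariance stays symmetric and its trace non-negative.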
Exploring multimodal collaborative storytelling with Pepper: a preliminary study with zero-shot LLMs.
IF 3
Frontiers in Robotics and AI Pub Date: 2025-10-08 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1662819
Unai Zabala, Juan Echevarria, Igor Rodriguez, Elena Lazkano
With the rise of large language models (LLMs), collaborative storytelling with virtual agents and chatbots has gained popularity. Although storytelling has long been employed in social robotics as a means to educate, entertain, and persuade audiences, the integration of LLMs into such platforms remains largely unexplored. This paper presents the initial steps toward a novel multimodal collaborative storytelling system in which users co-create stories with the social robot Pepper through natural-language interaction and by presenting physical objects. The robot employs a YOLO-based vision system to recognize these objects and seamlessly incorporate them into the narrative. Story generation and adaptation are handled autonomously by the Llama model in a zero-shot setting, with the aim of assessing the usability and maturity of such models for interactive storytelling. To enhance immersion, the robot performs the final story using expressive gestures, emotional cues, and speech modulation. User feedback, collected through questionnaires and semi-structured interviews, indicates a high level of acceptance.

Volume 12, Article 1662819. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12541253/pdf/
Citations: 0
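The pipeline shape described here (vision labels fed into a zero-shot story prompt) can be illustrated without any model call. The prompt wording, function name, and example labels below are invented for illustration; the paper does not publish its actual prompts.

```python
def build_story_prompt(story_so_far, detected_objects, user_utterance):
    # Compose a zero-shot prompt that asks an LLM to weave freshly
    # detected physical objects into the ongoing story.
    objects = ", ".join(sorted(set(detected_objects))) or "none"
    return (
        "You are a storytelling robot co-creating a story with a child.\n"
        f"Story so far: {story_so_far}\n"
        f"Objects the child just showed you: {objects}\n"
        f"The child said: {user_utterance}\n"
        "Continue the story in two sentences, mentioning each object."
    )

# Labels such as these would come from the YOLO detector in the real system.
prompt = build_story_prompt(
    "A dragon guarded a quiet village.",
    ["teddy bear", "toy car"],
    "The dragon finds new friends!",
)
```

The returned string would then be sent to the zero-shot model (Llama, in the paper) as-is.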
Personalized causal explanations of a robot's behavior.
IF 3
Frontiers in Robotics and AI Pub Date: 2025-10-08 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1637574
José Galeas, Suna Bensch, Thomas Hellström, Antonio Bandera
The deployment of robots in environments shared with humans implies that they must be able to justify or explain their behavior to non-expert users when the user, or the situation itself, requires it. We propose a framework for robots to generate personalized explanations of their behavior by integrating cause-and-effect structures, social roles, and natural-language queries. Robot events are stored as cause-effect pairs in a causal log. Given a natural-language query from a human, the system uses machine learning to identify the matching cause-and-effect entry in the causal log and to determine the social role of the inquirer. An initial explanation is generated and then refined by a large language model (LLM) to produce linguistically diverse responses tailored to the social role and the query. This approach maintains causal and factual accuracy while providing linguistic variation in the generated explanations. Qualitative and quantitative experiments show that combining the causal information with the social role and the query yields the most appreciated explanations.

Volume 12, Article 1637574. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12540097/pdf/
Citations: 0
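The causal-log idea is easy to make concrete: events sit in the log as cause-effect pairs, and a query is matched against them. The paper uses a learned matcher; the token-overlap scorer below is a deliberately simple stand-in, and the log entries and query are invented examples.

```python
import re

# A causal log of cause-effect pairs, as described in the abstract.
# These two entries are made-up examples.
causal_log = [
    {"cause": "battery level dropped below 15 percent",
     "effect": "the robot returned to the charging dock"},
    {"cause": "an obstacle blocked the corridor",
     "effect": "the robot took the longer route through the lobby"},
]

def tokens(text):
    # Lowercased word set, punctuation stripped.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def match_entry(query, log):
    # Stand-in for the learned matcher: pick the entry whose wording
    # overlaps the query the most.
    q = tokens(query)
    return max(log, key=lambda e: len(q & tokens(e["cause"] + " " + e["effect"])))

entry = match_entry("why did you go back to the dock?", causal_log)
explanation = f"Because {entry['cause']}, {entry['effect']}."
```

In the full framework this initial explanation would then be rephrased by an LLM according to the inquirer's social role.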
An approach for unsupervised interaction clustering in human-robot co-work using spatiotemporal graph convolutional networks.
IF 3
Frontiers in Robotics and AI Pub Date: 2025-10-01 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1545712
Aaron Heuermann, Zied Ghrairi, Anton Zitnikov, Abdullah Al Noman, Klaus-Dieter Thoben
In this paper, we present an approach for clustering interaction forms in industrial human-robot co-work using spatiotemporal graph convolutional networks (STGCNs). In the future, humans will increasingly work together with robots, whereas previously they worked side by side, hand in hand, or alone. The growing number of robotic and human-robot co-working applications, together with the demand for greater flexibility, increases the variety and variability of the interactions between humans and robots observed at production workplaces. We investigate this variety and variability in industrial co-work scenarios where full automation is impractical, and address the challenges of interaction modeling and clustering with an STGCN-based clustering approach. Data were collected from 12 realistic human-robot co-work scenarios using a high-accuracy tracking system. The approach identified 10 distinct interaction forms, revealing more granular interaction patterns than established taxonomies. These results support continuous, data-driven analysis of human-robot behavior and contribute to the development of more flexible, human-centered systems aligned with Industry 5.0.

Volume 12, Article 1545712. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12520915/pdf/
Citations: 0
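The graph an STGCN convolves over can be sketched directly: each tracked keypoint is replicated per frame, spatial edges connect keypoints within a frame, and temporal edges connect each keypoint to itself in the next frame. This is a generic construction, not the paper's code; the choice of three keypoints and four frames is illustrative.

```python
import numpy as np

def st_adjacency(spatial_edges, num_nodes, num_frames):
    # Adjacency of a spatiotemporal graph: one copy of the spatial graph
    # per frame, plus temporal self-edges linking each node to itself in
    # the following frame. Node (v, t) gets flat index t * num_nodes + v.
    N = num_nodes * num_frames
    A = np.zeros((N, N))
    for t in range(num_frames):
        o = t * num_nodes
        for i, j in spatial_edges:            # spatial links within frame t
            A[o + i, o + j] = A[o + j, o + i] = 1
        if t + 1 < num_frames:                # temporal links to frame t + 1
            for v in range(num_nodes):
                A[o + v, o + num_nodes + v] = 1
                A[o + num_nodes + v, o + v] = 1
    return A

# Hypothetical setup: 3 tracked keypoints (e.g., human hand, robot flange,
# workpiece) in a chain, observed over 4 frames.
A = st_adjacency([(0, 1), (1, 2)], num_nodes=3, num_frames=4)
```

Graph convolutions over this adjacency aggregate information across both space and time, which is what lets the clustering capture interaction *patterns* rather than single poses.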
Designing socially assistive robots for clinical practice: insights from an asynchronous remote community of speech-language pathologists.
IF 3
Frontiers in Robotics and AI Pub Date: 2025-10-01 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1646880
Denielle Oliva, Abbie Olszewski, Shekoufeh Sadeghi, Karthik Dantu, David Feil-Seifer
Introduction: Socially assistive robots (SARs) hold promise for augmenting speech-language therapy by addressing high caseloads and enhancing child engagement. However, many implementations remain misaligned with clinician practices and overlook expressive strategies central to speech-language pathology.

Methods: We conducted a 4-week asynchronous remote community (ARC) study with thirteen licensed speech-language pathologists (SLPs). Participants engaged in weekly activities and asynchronous discussions, contributing reflective insights on emotional expression, domain-specific needs, and potential roles for SARs. The ARC format supported distributed, flexible engagement and facilitated iterative co-design through longitudinal peer dialogue. Data were analyzed using thematic analysis to identify emerging patterns.

Results: The analysis revealed five clinician-driven design considerations for SARs: (1) the need for expressive, multimodal communication; (2) customization of behaviors to accommodate sensory and developmental profiles; (3) adaptability of roles across therapy contexts; (4) ethical concerns surrounding overuse and fears of clinician replacement; and (5) opportunities for data tracking and personalization.

Discussion: The findings highlight clinician-informed design implications that can guide the development of socially intelligent, adaptable, and ethically grounded SARs. The ARC approach proved to be a viable co-design framework, enabling deeper reflection and more peer-driven requirement elicitation than traditional short-term methods. This work bridges the gap between robotic capabilities and clinical expectations, underscoring the importance of embedding clinician expertise in SAR design to foster meaningful integration into speech-language interventions.

Volume 12, Article 1646880. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12521808/pdf/
Citations: 0
Accuracy evaluation of robot-guided laser osteotomy for dental implant bed preparation - a digital high-tech procedure.
IF 3
Frontiers in Robotics and AI Pub Date: 2025-09-29 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1614659
Florian M Thieringer, Regina Walher, Florian S Halbeisen, Quentin Garnier, Adrian Dragu, Bilal Msallem
Background: The accuracy and reproducibility of emerging high-tech procedures for dental implant placement need continuous evaluation. This is essential to facilitate the transition from conventional surgical guides to digital planning systems. This study investigates the accuracy of implant placement using robot-guided laser technology based on cone-beam computed tomography and intraoral scanning.

Methods: Twelve dental implants were placed using surgical planning software and a robot-guided laser osteotome. The procedure incorporated surface scanning and enabled implant bed preparation with a robot-guided laser.

Results: The mean overall 3D offset (mean ± SD) was 2.50 ± 1.30 mm at the base and 2.80 ± 1.00 mm at the tip, with a mean angular deviation of 6.60 ± 3.10°.

Conclusion: The results demonstrate a considerably greater deviation than conventional guided systems. Given the high demands of oral surgery, accuracy is particularly susceptible to fluctuations, some of which may stem from intermediate workflow steps, particularly given the early development stage of the robotic system. Notably, the absence of real-time depth measurement and of robot-assisted implant placement remains a significant constraint. However, future technological advances are expected to address these challenges.

Volume 12, Article 1614659. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12516704/pdf/
Citations: 0
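The three reported accuracy metrics (base offset, tip offset, angular deviation) are standardly computed from the planned and achieved implant base and tip coordinates. The sketch below shows that computation; the coordinates are made-up values, and the study itself does not publish this code.

```python
import numpy as np

def implant_deviation(planned_base, planned_tip, placed_base, placed_tip):
    # 3D offsets at base and tip, plus the angle between the planned
    # and achieved implant axes (base-to-tip direction vectors).
    base_offset = np.linalg.norm(placed_base - planned_base)
    tip_offset = np.linalg.norm(placed_tip - planned_tip)
    a = planned_tip - planned_base
    b = placed_tip - placed_base
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    angle_deg = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return base_offset, tip_offset, angle_deg

# Hypothetical example: the implant was placed 0.5 mm laterally off-plan
# but with the correct axis, so both offsets are 0.5 mm and the angle is 0.
base_off, tip_off, ang = implant_deviation(
    np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 10.0]),
    np.array([0.5, 0.0, 0.0]), np.array([0.5, 0.0, 10.0]),
)
```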
Editorial: Robotics software engineering.
IF 3
Frontiers in Robotics and AI Pub Date: 2025-09-29 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1686496
Federico Ciccozzi, Ivano Malavolta, Christopher Timperley, Andreas Angerer, Alwin Hoffmann
(Editorial; no abstract.)

Volume 12, Article 1686496. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12516138/pdf/
Citations: 0
DreamerNav: learning-based autonomous navigation in dynamic indoor environments using world models.
IF 3
Frontiers in Robotics and AI Pub Date: 2025-09-26 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1655171
Stuart Shanks, Jonathan Embley-Riches, Jianheng Liu, Andromachi Maria Delfaki, Carlo Ciliberto, Dimitrios Kanoulas
Robust autonomous navigation in complex, dynamic indoor environments remains a central challenge in robotics, requiring agents to make adaptive decisions in real time under partial observability and uncertain obstacle motion. This paper presents DreamerNav, a robot-agnostic navigation framework that extends DreamerV3, a state-of-the-art world-model-based reinforcement learning algorithm, with multimodal spatial perception, hybrid global-local planning, and curriculum-based training. By formulating navigation as a partially observable Markov decision process (POMDP), the system enables agents to integrate egocentric depth images with a structured local occupancy map encoding dynamic obstacle positions, historical trajectories, points of interest, and a global A* path. A recurrent state-space model (RSSM) learns stochastic and deterministic latent dynamics, supporting long-horizon prediction and collision-free path planning in cluttered, dynamic scenes. Training is carried out in high-fidelity, photorealistic simulation using NVIDIA Isaac Sim, with task complexity increased gradually to improve learning stability, sample efficiency, and generalization. We benchmark against NoMaD, ViNT, and A*, showing superior success rates and adaptability in dynamic environments. Real-world proof-of-concept trials on two quadrupedal robots, conducted without retraining, further validate the framework's robustness and its independence from any particular quadruped platform.

Volume 12, Article 1655171. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12510832/pdf/
Citations: 0
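The global A* path fed into DreamerNav's local occupancy map is a standard grid search. A compact, generic 4-connected A* with a Manhattan heuristic is sketched below; it is not the paper's code, and the toy occupancy grid is an invented example.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    # 4-connected A* with a Manhattan heuristic on a 0/1 occupancy grid
    # (0 = free, 1 = occupied). Returns the cell path, or None.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = count()  # tie-breaker so the heap never compares parent entries
    frontier = [(h(start), 0, next(tie), start, None)]
    parent, best_g = {}, {start: 0}
    while frontier:
        _, g, _, cur, prev = heapq.heappop(frontier)
        if cur in parent:
            continue  # already expanded at equal or better cost
        parent[cur] = prev
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, next(tie), nxt, cur))
    return None

# A wall across the middle row forces a detour around the right side.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

In the full system such a path is only a coarse global prior; the learned world model handles the dynamic obstacles the grid cannot represent.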
Should robots display what they hear? Mishearing as a practical accomplishment.
IF 3
Frontiers in Robotics and AI Pub Date: 2025-09-26 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1597276
Damien Rudaz, Christian Licoppe
As a contribution to research on transparency and failures in human-robot interaction (HRI), our study investigates whether the informational ecology configured by publicly displaying a robot's automatic speech recognition (ASR) results is consequential in how miscommunications emerge and are dealt with. After a preliminary quantitative analysis of our participants' gaze behavior during an experiment in which they interacted with a conversational robot, we use a micro-analytic approach to detail how the interpretation of the robot's conduct as inadequate was configured by what it displayed as having "heard" on its tablet. We examine cases in which an utterance or gesture by the robot was treated by participants as sequentially relevant only as long as they had not read the ASR transcript, but was re-evaluated as troublesome once they had read it. In doing so, we contribute to HRI by showing that systematically displaying an ASR transcript can play a crucial role in participants' interpretation of a co-constructed action (such as shaking hands with a robot) as having "failed". We demonstrate that "mistakes" and "errors" can be approached as practical accomplishments that emerge as such over the course of interaction, rather than as social or technical phenomena pre-categorized by the researcher with reference to criteria exogenous to the activity being analyzed. Finally, narrowing in on two video fragments, we find that this peculiar informational ecology did not merely affect how the robot was responded to; it modified the very definition of "mutual understanding" that was enacted and treated as relevant by the human participants. Beyond social robots, we caution that systematically providing such transcripts is a design decision not to be taken lightly; depending on the setting, it may have unintended consequences for interactions between humans and any form of conversational interface.

Volume 12, Article 1597276. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12511783/pdf/
Citations: 0
A multi-user multi-robot multi-goal multi-device human-robot interaction manipulation benchmark.
IF 3
Frontiers in Robotics and AI Pub Date: 2025-09-25 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1528754
Akito Yoshida, Rousslan Fernand Julien Dossa, Marina Di Vincenzo, Shivakanth Sujit, Hannah Douglas, Kai Arulkumaran
One weakness of human-robot interaction (HRI) research is the lack of reproducible results, owing to the absence of standardised benchmarks. In this work we introduce a multi-user multi-robot multi-goal multi-device manipulation benchmark (M4Bench), a flexible HRI platform in which multiple users can direct either a single simulated robot or multiple simulated robots to perform a multi-goal pick-and-place task. Our software exposes a web-based visual interface, with support for mouse, keyboard, gamepad, eye-tracker, and electromyograph/electroencephalograph (EMG/EEG) user inputs. It can be further extended using native browser libraries or WebSocket interfaces, allowing researchers to add support for their own devices. We also provide tracking for several HRI metrics, such as task completion and command selection time, enabling quantitative comparisons between different user interfaces and devices. We demonstrate the utility of the benchmark with a user study (n = 50) comparing five input devices, and also compare single- vs. multi-user control. In the pick-and-place task, users performed worse with the eye tracker + EMG device pair than with mouse + keyboard or gamepad + gamepad, across four quantitative metrics (corrected p < 0.001). Our software is available at https://github.com/arayabrain/m4bench.

Volume 12, Article 1528754. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12507559/pdf/
Citations: 0
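Metric tracking of the kind M4Bench reports (task completion time, per-command selection time) reduces to timestamped event logging. The class, event names, and API below are hypothetical illustrations, not M4Bench's actual interface.

```python
class MetricTracker:
    # Records (timestamp, event-name) pairs for one trial and derives
    # two HRI metrics: overall completion time and the time taken to
    # select each command.
    def __init__(self):
        self.events = []

    def log(self, t, name):
        self.events.append((t, name))

    def completion_time(self):
        # Time from the first logged event to the last.
        return self.events[-1][0] - self.events[0][0]

    def selection_times(self):
        # Gap preceding each "command_selected" event.
        return [t2 - t1
                for (t1, _), (t2, n2) in zip(self.events, self.events[1:])
                if n2 == "command_selected"]

# Hypothetical trial: two commands issued, then the task completes.
tracker = MetricTracker()
tracker.log(0.0, "trial_start")
tracker.log(1.2, "command_selected")
tracker.log(3.5, "command_selected")
tracker.log(4.0, "task_complete")
```

Aggregating such per-trial records across devices is what enables the quantitative device comparisons described in the study.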