{"title":"UpStory: the Uppsala Storytelling dataset.","authors":"Marc Fraile, Natalia Calvo-Barajas, Anastasia Sophia Apeiron, Giovanna Varni, Joakim Lindblad, Nataša Sladoje, Ginevra Castellano","doi":"10.3389/frobt.2025.1547578","DOIUrl":"10.3389/frobt.2025.1547578","url":null,"abstract":"<p><p>Friendship and rapport play an important role in the formation of constructive social interactions, and have been widely studied in education due to their impact on learning outcomes. Given the growing interest in automating the analysis of such phenomena through Machine Learning, access to annotated interaction datasets is highly valuable. However, no dataset on child-child interactions explicitly capturing rapport currently exists. Moreover, despite advances in the automatic analysis of human behavior, no previous work has addressed the prediction of rapport in child-child interactions in educational settings. We present UpStory - the Uppsala Storytelling dataset: a novel dataset of naturalistic dyadic interactions between primary-school-aged children, with an experimental manipulation of rapport. Pairs of children aged 8-10 participate in a task-oriented activity: designing a story together, while being allowed free movement within the play area. We promote balanced collection of different levels of rapport by using a within-subjects design: self-reported friendships are used to pair each child twice, either minimizing or maximizing pair separation in the friendship network. The dataset contains data for 35 pairs, totaling 3 h 40 m of audiovisual recordings. It includes two video sources and separate voice recordings per child. An anonymized version of the dataset is made publicly available, containing per-frame head pose, body pose, and face features. Finally, we confirm the informative power of the UpStory dataset by establishing baselines for the prediction of rapport. A simple approach achieves 68% test accuracy using data from one child, and 70% test accuracy aggregating data from a pair.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1547578"},"PeriodicalIF":3.0,"publicationDate":"2025-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12320241/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144785601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hybrid intelligence systems for reliable automation: advancing knowledge work and autonomous operations with scalable AI architectures.","authors":"Allan Grosvenor, Anton Zemlyansky, Abdul Wahab, Kyrylo Bohachov, Aras Dogan, Dwyer Deighan","doi":"10.3389/frobt.2025.1566623","DOIUrl":"10.3389/frobt.2025.1566623","url":null,"abstract":"<p><strong>Introduction: </strong>Mission-critical automation demands decision-making that is explainable, adaptive, and scalable, attributes that elude purely symbolic or data-driven approaches. We introduce a hybrid intelligence (H-I) system that fuses symbolic reasoning with advanced machine learning <i>via</i> a hierarchical architecture, inspired by cognitive frameworks like Global Workspace Theory (Baars, A Cognitive Theory of Consciousness, 1988).</p><p><strong>Methods: </strong>This architecture operates across three levels to achieve autonomous, end-to-end workflows: Navigation: Using Vision Transformers and graph-based neural networks, the system navigates file systems, databases, and software interfaces with precision. Discrete Actions: Multi-framework automated machine learning (AutoML) trains agents to execute discrete decisions, augmented by Transformers and Joint Embedding Predictive Architectures (JEPA) (Assran et al., 2023, 15619-15629) for complex time-series analysis, such as anomaly detection. Planning: Reinforcement learning, world model-based reinforcement learning, and model predictive control orchestrate adaptive workflows tailored to user requests or live system demands.</p><p><strong>Results: </strong>The system's capabilities are demonstrated in two mission-critical applications: Space Domain Awareness, Satellite Behavior Detection: A graph-based JEPA paired with multi-agent reinforcement learning enables near real-time anomaly detection across 15,000 on-orbit objects, delivering a precision-recall score of 0.98. Autonomously Driven Simulation Setup: The system autonomously configures Computational Fluid Dynamics (CFD) setups, with an AutoML-driven optimizer enhancing the meshing step, boosting boundary layer capture propagation (BL-CP) from 8% to 98% and cutting geometry failure rates from 88% to 2% on novel aircraft geometries. Scalability is a cornerstone, with the distributed training pipeline achieving linear scaling across 2,000 compute nodes for AI model training, while secure model aggregation incurs less than 4% latency in cross-domain settings.</p><p><strong>Discussion: </strong>By blending symbolic precision with data-driven adaptability, this hybrid intelligence system offers a robust, transferable framework for automating complex knowledge work in domains like space operations and engineering simulations, and adjacent applications such as autonomous energy and industrial facility operations, paving the way for next-generation industrial AI systems.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1566623"},"PeriodicalIF":3.0,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12310485/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144761778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data-driven modeling and identification of a bistable soft-robot element based on dielectric elastomer.","authors":"Abd Elkarim Masoud, Jürgen Maas","doi":"10.3389/frobt.2025.1546945","DOIUrl":"10.3389/frobt.2025.1546945","url":null,"abstract":"<p><p>This paper presents the development and experimental validation of a hybrid modeling framework for a bistable soft robotic system driven by dielectric elastomer (DE) actuators. The proposed approach combines physics-based analytical modeling with data-driven radial basis function (RBF) networks to capture the nonlinear and dynamic behavior of soft robots. The bistable DE system consists of a buckled beam structure and symmetric DE membranes to achieve rapid switching between two stable states. A physics-based model is first derived to describe the electromechanical coupling, energy functions, and dynamic behavior of the actuator. To address discrepancies between the analytical model and experimental data caused by geometric asymmetries and unmodeled effects, the model is augmented with RBF networks. The model is refined using experimental data and validated through analytical, numerical, and experimental investigation.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1546945"},"PeriodicalIF":3.0,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12310471/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144761777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The power of combined modalities in interactive robot learning.","authors":"Helen Beierling, Robin Beierling, Anna-Lisa Vollmer","doi":"10.3389/frobt.2025.1598968","DOIUrl":"10.3389/frobt.2025.1598968","url":null,"abstract":"<p><p>With the continuous advancement of Artificial intelligence (AI), robots, as embodied intelligent systems, are increasingly present in daily life, such as in households or elderly care. As a result, lay users are required to interact with these systems more frequently and teach them to meet individual needs. Human-in-the-loop reinforcement learning (HIL-RL) offers an effective way to realize this teaching. Studies show that various feedback modalities, such as preference, guidance, or demonstration, can significantly enhance learning success, though their suitability varies with users' expertise in robotics. Research also indicates that users apply different scaffolding strategies when teaching a robot, such as motivating it to explore actions that promise success. Thus, providing a collection of different feedback modalities allows users to choose the method that best suits their teaching strategy, and allows the system to individually support the user based on their interaction behavior. However, most state-of-the-art approaches provide users with only one feedback modality at a time. Investigating combined feedback modalities in interactive robot learning remains an open challenge. To address this, we conducted a study that combined common feedback modalities. Our research questions focused on whether these combinations improve learning outcomes, reveal user preferences, show differences in perceived effectiveness, and identify which modalities influence learning the most. The results show that combining the feedback modalities improves learning, with users perceiving the effectiveness of the modalities in varying ways, and certain modalities directly impacting learning success. The study demonstrates that combining feedback modalities can support learning even in a simplified setting and suggests the potential for broader applicability, especially in robot learning scenarios with a focus on user interaction. Thus, this paper aims to motivate the use of combined feedback modalities in interactive imitation learning.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1598968"},"PeriodicalIF":3.0,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12312635/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144761780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A pivot joint steering mechanism for tip-everting soft growing robots.","authors":"Tianchen Ji, Zheyuan Bi, Lin Cao","doi":"10.3389/frobt.2025.1627116","DOIUrl":"10.3389/frobt.2025.1627116","url":null,"abstract":"<p><p>Soft growing robots (SGRs) navigate confined environments by everting material from the tip while keeping the rest of the body stationary, enabling frictionless navigation. This opens up huge potential for inspection, search, and rescue tasks. However, controlling the direction of tip growth remains a challenge because the robot's tip continuously changes as it grows. This study presents a compact steering mechanism that integrates a tendon-driven pivot joint with a pressure-tunable internal bladder. By modulating friction between the pivot joint and the inner material, the mechanism switches between two states: decoupled (remaining stationary to allow bending) and coupled (moving forward together with the robot's inner material). This enables the robot to bend locally and then continue growing in the new direction, without using complex full-body actuation or external mechanisms. A robotic platform was developed to implement this mechanism, and its performance was characterized and validated through modeling and experiments. Experimental results confirm that the mechanism achieves reliable tip steering, closely matches kinematic models, and interacts gently with the environment. The proposed design offers a scalable and structurally simple solution for long-range navigation in soft growing robots.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1627116"},"PeriodicalIF":3.0,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12310495/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144761776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Large language model-driven natural language interaction control framework for single-operator bimanual teleoperation.","authors":"Haolin Fei, Tao Xue, Yiyang He, Sheng Lin, Guanglong Du, Yao Guo, Ziwei Wang","doi":"10.3389/frobt.2025.1621033","DOIUrl":"10.3389/frobt.2025.1621033","url":null,"abstract":"<p><p>Bimanual teleoperation imposes cognitive and coordination demands on a single human operator tasked with simultaneously controlling two robotic arms. Although assigning each arm to a separate operator can distribute workload, it often leads to ambiguities in decision authority and degrades overall efficiency. To overcome these challenges, we propose a novel bimanual teleoperation large language model assistant (BTLA) framework, an intelligent co-pilot that augments a single operator's motor control capabilities. In particular, BTLA enables operators to directly control one robotic arm through conventional teleoperation while directing a second assistive arm via simple voice commands, and therefore commanding two robotic arms simultaneously. By integrating the GPT-3.5-turbo model, BTLA interprets contextual voice instructions and autonomously selects among six predefined manipulation skills, including real-time mirroring, trajectory following, and autonomous object grasping. Experimental evaluations in bimanual object manipulation tasks demonstrate that BTLA increased task coverage by 76.1% and success rate by 240.8% relative to solo teleoperation, and outperformed dyadic control with a 19.4% gain in coverage and a 69.9% gain in success. Furthermore, NASA Task Load Index (NASA-TLX) assessments revealed a 38-52% reduction in operator mental workload, and 85% of participants rated the voice-based interaction as \"natural\" and \"highly effective.\"</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1621033"},"PeriodicalIF":3.0,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12310453/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144761779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A model-based approach to automation of formal verification of ROS 2-based systems.","authors":"Lukas Dust, Rong Gu, Saad Mubeen, Mikael Ekström, Cristina Seceleanu","doi":"10.3389/frobt.2025.1592523","DOIUrl":"https://doi.org/10.3389/frobt.2025.1592523","url":null,"abstract":"<p><p>Formal verification of robotic applications, particularly those based on ROS 2, is desirable for ensuring correctness and safety. However, the complexity of formal methods and the manual effort required for model creation and parameter extraction often hinder their adoption. This paper addresses these challenges by proposing a model-based methodology that automates the formal verification process using model-driven engineering techniques. We introduce a methodology that can be applied as a toolchain to automate the initialization of formal model templates in UPPAAL using system parameters derived from ROS 2 execution traces generated by the ROS2_tracing tool. The toolchain employs four model representations based on custom Eclipse Ecore metamodels to capture both structural and verification aspects of ROS 2 systems. The methodology supports both implemented and conceptual systems and enables iterative verification of timing and scheduling parameters through model-to-model and model-to-text transformations. A proof-of-concept implementation demonstrates the feasibility of the proposed approach. The designed toolchain supports verification using two types of UPPAAL models: one for individual node verification (e.g., callback latency and buffer overflow) and another for end-to-end latency analysis of ROS 2 processing chains. Experiments conducted on two implemented ROS 2 systems and one conceptual system validate the correctness and adaptability of the toolchain. The results show that the toolchain can automate parameter extraction and model generation. The proposed methodology modularizes the verification process, allowing domain experts to focus on their areas of expertise. It aims to enhance traceability and reusability across different verification scenarios and formal models, and to make formal verification more accessible and practical for robotics developers.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1592523"},"PeriodicalIF":3.0,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12308702/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144754829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Versatile kinematics-based constraint identification applied to robot task reproduction.","authors":"Alex H G Overbeek, Douwe Dresscher, Herman van der Kooij, Mark Vlutters","doi":"10.3389/frobt.2025.1574110","DOIUrl":"10.3389/frobt.2025.1574110","url":null,"abstract":"<p><p>Identifying kinematic constraints between a robot and its environment can improve autonomous task execution, for example, in Learning from Demonstration. Constraint identification methods in the literature often require specific prior constraint models, geometry or noise estimates, or force measurements. Because such specific prior information or measurements are not always available, we propose a versatile kinematics-only method. We identify constraints using constraint reference frames, which are attached to a robot or ground body and may have zero-velocity constraints along their axes. Given measured kinematics, constraint frames are identified by minimizing a norm on the Cartesian components of the velocities expressed in that frame. Thereby, a minimal representation of the velocities is found, representing the zero-velocity constraints we aim to find. In simulation experiments, we identified the geometry (position and orientation) of twelve different constraints including articulated contacts, polyhedral contacts, and contour-following contacts. Accuracy was found to decrease linearly with sensor noise. In robot experiments, we identified constraint frames in various tasks and used them for task reproduction. Reproduction performance was similar when using our constraint identification method compared to methods from the literature. Our method can be applied to a large variety of robots in environments without prior constraint information, such as in everyday robot settings.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1574110"},"PeriodicalIF":3.0,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12303821/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144745486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning-based control for tendon-driven continuum robotic arms.","authors":"Nima Maghooli, Omid Mahdizadeh, Mohammad Bajelani, S Ali A Moosavian","doi":"10.3389/frobt.2025.1488869","DOIUrl":"10.3389/frobt.2025.1488869","url":null,"abstract":"<p><p>Tendon-Driven Continuum Robots are widely recognized for their flexibility and adaptability in constrained environments, making them valuable in many applications, such as medical surgery and industrial tasks. However, the inherent uncertainties and highly nonlinear dynamics of these manipulators pose significant challenges for classical model-based controllers. Addressing these challenges necessitates the development of advanced control strategies capable of adapting to diverse operational scenarios. This paper presents a centralized position control strategy using Deep Reinforcement Learning, with a particular focus on the Sim-to-Real transfer of control policies. The proposed method employs a customized Modified Transpose Jacobian control strategy for continuum arms, where its parameters are optimally tuned using the Deep Deterministic Policy Gradient algorithm. By integrating an optimal adaptive gain-tuning regulation, the research aims to develop a model-free controller that achieves superior performance compared to ideal model-based strategies. Both simulations and real-world experiments demonstrate that the proposed controller significantly enhances the trajectory-tracking performance of continuum manipulators. The proposed controller achieves robustness across various initial conditions and trajectories, making it a promising candidate for general-purpose applications.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1488869"},"PeriodicalIF":3.0,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12301621/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144733962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Building trust in the age of human-machine interaction: insights, challenges, and future directions.","authors":"Sakshi Chauhan, Shashank Kapoor, Malika Nagpal, Gitanshu Choudhary, Varun Dutt","doi":"10.3389/frobt.2025.1535082","DOIUrl":"10.3389/frobt.2025.1535082","url":null,"abstract":"","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1535082"},"PeriodicalIF":3.0,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12301199/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144733961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}