{"title":"A review of robotic and automated systems in meat processing.","authors":"Yining Lyu, Fan Wu, Qingyu Wang, Guanyu Liu, Yingqi Zhang, Huanyu Jiang, Mingchuan Zhou","doi":"10.3389/frobt.2025.1578318","DOIUrl":"10.3389/frobt.2025.1578318","url":null,"abstract":"<p><p>Tasks in the meat processing sector are physically demanding, repetitive, and prone to worker scarcity. Mechanization and automation are therefore essential in meat processing, reducing labor-intensive work while enhancing productivity, safety, and operator wellbeing. This review paper gives an overview of current research on robotic and automated systems in meat processing. The modules of a robotic system are introduced, and the robotic tasks are then divided into three sections by processing target: livestock, poultry, and seafood. Furthermore, we analyze the technical details of whole meat processing, including skinning, gutting, abdomen cutting, and half-carcass cutting, and discuss these systems in terms of performance and industrial feasibility. The review also covers commercialized products for automation in the meat processing industry. Finally, we conclude the review and discuss potential challenges for further robotization and automation in meat processing.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1578318"},"PeriodicalIF":2.9,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12141337/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144250298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neurotechnology for enhancing human operation of robotic and semi-autonomous systems.","authors":"William J Tyler, Anusha Adavikottu, Christian Lopez Blanco, Archana Mysore, Christopher Blais, Marco Santello, Avinash Unnikrishnan","doi":"10.3389/frobt.2025.1491494","DOIUrl":"10.3389/frobt.2025.1491494","url":null,"abstract":"<p><p>Human operators of remote and semi-autonomous systems must have a high level of executive function to safely and efficiently conduct operations. These operators face unique cognitive challenges when monitoring and controlling robotic machines, such as vehicles, drones, and construction equipment. The development of safe and experienced human operators of remote machines requires structured training and credentialing programs. This review critically evaluates the potential for incorporating neurotechnology into remote systems operator training and work to enhance human-machine interactions, performance, and safety. Recent evidence demonstrating that different noninvasive neuromodulation and neurofeedback methods can improve critical executive functions such as attention, learning, memory, and cognitive control is reviewed. We further describe how these approaches can be used to improve training outcomes, as well as teleoperator vigilance and decision-making. We also describe how neuromodulation can help remote operators during complex or high-risk tasks by mitigating impulsive decision-making and cognitive errors. While our review advocates for incorporating neurotechnology into remote operator training programs, continued research is required to evaluate how these approaches will impact industrial safety and workforce readiness.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1491494"},"PeriodicalIF":2.9,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12141011/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144250299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Co-design methodology for rapid prototyping of modular robots in care settings.","authors":"Alexandre Colle, Karen Donaldson, Mauro Dragone","doi":"10.3389/frobt.2025.1581506","DOIUrl":"10.3389/frobt.2025.1581506","url":null,"abstract":"<p><strong>Introduction: </strong>This paper introduces a structured co-design methodology for developing modular robotic solutions for the care sector. Despite the widespread adoption of co-design in robotics, existing frameworks often lack clear and systematic processes to effectively incorporate user requirements into tangible robotic designs.</p><p><strong>Method: </strong>To address this gap, the present work proposes an iterative, modular co-design methodology that captures, organises, and translates user insights into practical robotic modules. The methodology employs Design Research (DR) methods combined with Design for Additive Manufacturing (DfAM) principles, enabling rapid prototyping and iterative refinement based on continuous user feedback. The proposed approach was applied in the development of Robobrico, a modular robot created collaboratively with care home users.</p><p><strong>Results: </strong>Outcomes from this study demonstrate that this structured process effectively aligns robot functionality with user expectations, enhances adaptability, and facilitates practical integration of modular robotic platforms in real-world care environments.</p><p><strong>Discussion: </strong>This paper details the proposed methodology, the tools developed to support it, and key insights derived from its implementation.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1581506"},"PeriodicalIF":2.9,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12137090/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144235567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How multimodal narrative and visual representations of human-like service robots shape attitudes and social connection.","authors":"Neil Anthony Daruwala","doi":"10.3389/frobt.2025.1568146","DOIUrl":"10.3389/frobt.2025.1568146","url":null,"abstract":"<p><strong>Introduction: </strong>Public attitudes toward service robots are critical to their acceptance across various industries. Previous research suggests that human-like features and behaviours perceived as empathetic may reduce negative perceptions and enhance emotional engagement. However, there is limited empirical evidence on how structured multimodal interventions influence these responses.</p><p><strong>Methods: </strong>A partially mixed experimental design was employed, featuring one between-subjects factor (group: experimental vs. control) and one within-subjects factor (time: pre-intervention vs. post-intervention), applied only to the experimental group. Two hundred twenty-eight adults (aged 18-65) were randomly assigned to either the experimental or control condition. The intervention included images, video demonstrations of human-like service robots performing socially meaningful gestures, and a narrative vignette depicting human-robot interaction. The control group completed the same assessment measures without the intervention. Outcomes included negative attitudes toward robots (Negative Attitudes Toward Robots Scale, NARS), affect (Positive and Negative Affect Schedule, PANAS), and perceived interpersonal connection (Inclusion of Other in the Self scale, IOS).</p><p><strong>Results: </strong>The experimental group demonstrated a significant reduction in negative attitudes (p < 0.001, Cohen's d = 0.37), as well as lower negative affect and a greater perceived interpersonal connection with the robots (both p < 0.001). Age moderated baseline attitudes, with younger participants reporting more positive initial views; gender was not a significant factor.</p><p><strong>Discussion: </strong>These findings suggest that multimodal portrayals of human-like service robots can improve attitudes, affective responses, and interpersonal connection, offering practical insights for robot design, marketing, and public engagement strategies.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1568146"},"PeriodicalIF":2.9,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12137300/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144235568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning to suppress tremors: a deep reinforcement learning-enabled soft exoskeleton for Parkinson's patients.","authors":"Tamás Endrei, Sándor Földi, Ádám Makk, György Cserey","doi":"10.3389/frobt.2025.1537470","DOIUrl":"10.3389/frobt.2025.1537470","url":null,"abstract":"<p><strong>Introduction: </strong>Neurological tremors are among the most prevalent movement disorders, affecting a large population. Biomechanical loading and exoskeletons show promise in enhancing patient well-being, but traditional control algorithms limit their efficacy in dynamic movements and personalized interventions. Furthermore, a pressing need exists for more comprehensive and robust validation methods to ensure the effectiveness and generalizability of proposed solutions.</p><p><strong>Methods: </strong>This paper proposes a physical simulation approach modeling multiple arm joints and tremor propagation. This study also introduces a novel adaptable reinforcement learning environment tailored for disorders with tremors. We present a deep reinforcement learning-based encoder-actor controller for Parkinson's tremors in various shoulder and elbow joint axes displayed in dynamic movements.</p><p><strong>Results: </strong>Our findings suggest that such a control strategy offers a viable solution for tremor suppression in real-world scenarios.</p><p><strong>Discussion: </strong>By overcoming the limitations of traditional control algorithms, this work takes a new step toward integrating biomechanical loading into the everyday lives of patients. This work also opens avenues for more adaptive and personalized interventions in managing movement disorders.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1537470"},"PeriodicalIF":2.9,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12133501/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144227231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AcoustoBots: A swarm of robots for acoustophoretic multimodal interactions.","authors":"Narsimlu Kemsaram, James Hardwick, Jincheng Wang, Bonot Gautam, Ceylan Besevli, Giorgos Christopoulos, Sourabh Dogra, Lei Gao, Akin Delibasi, Diego Martinez Plasencia, Orestis Georgiou, Marianna Obrist, Ryuji Hirayama, Sriram Subramanian","doi":"10.3389/frobt.2025.1537101","DOIUrl":"10.3389/frobt.2025.1537101","url":null,"abstract":"<p><strong>Introduction: </strong>Acoustophoresis has enabled novel interaction capabilities, such as levitation, volumetric displays, mid-air haptic feedback, and directional sound generation, to open new forms of multimodal interactions. However, its traditional implementation as a singular static unit limits its dynamic range and application versatility.</p><p><strong>Methods: </strong>This paper introduces \"AcoustoBots\" - a novel convergence of acoustophoresis with a movable and reconfigurable phased array of transducers for enhanced application versatility. We mount a phased array of transducers on a swarm of robots to harness the benefits of multiple mobile acoustophoretic units. This offers a more flexible and interactive platform that enables a swarm of acoustophoretic multimodal interactions. Our novel AcoustoBots design includes a hinge actuation system that controls the orientation of the mounted phased array of transducers to achieve high flexibility in a swarm of acoustophoretic multimodal interactions. In addition, we designed a BeadDispenserBot that can deliver particles to trapping locations, which automates the acoustic levitation interaction.</p><p><strong>Results: </strong>These attributes allow AcoustoBots to independently work for a common cause and interchange between modalities, allowing for novel augmentations (e.g., a swarm of haptics, audio, and levitation) and bilateral interactions with users in an expanded interaction area.</p><p><strong>Discussion: </strong>We detail our design considerations, challenges, and methodological approach to extend acoustophoretic central control in distributed settings. This work demonstrates a scalable acoustic control framework with two mobile robots, laying the groundwork for future deployment in larger robotic swarms. Finally, we characterize the performance of our AcoustoBots and explore the potential interactive scenarios they can enable.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1537101"},"PeriodicalIF":2.9,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12133503/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144227230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ROSA: a knowledge-based solution for robot self-adaptation.","authors":"Gustavo Rezende Silva, Juliane Päßler, S Lizeth Tapia Tarifa, Einar Broch Johnsen, Carlos Hernández Corbato","doi":"10.3389/frobt.2025.1531743","DOIUrl":"10.3389/frobt.2025.1531743","url":null,"abstract":"<p><p>Autonomous robots must operate in diverse environments and handle multiple tasks despite uncertainties. This creates challenges in designing software architectures and task decision-making algorithms, as different contexts may require distinct task logic and architectural configurations. To address this, robotic systems can be designed as self-adaptive systems capable of adapting their task execution and software architecture at runtime based on their context. This paper introduces ROSA, a novel knowledge-based framework for RObot Self-Adaptation, which enables task-and-architecture co-adaptation (TACA) in robotic systems. ROSA achieves this by providing a knowledge model that captures all application-specific knowledge required for adaptation and by reasoning over this knowledge at runtime to determine when and how adaptation should occur. In addition to a conceptual framework, this work provides an open-source ROS 2-based reference implementation of ROSA and evaluates its feasibility and performance in an underwater robotics application. Experimental results highlight ROSA's advantages in reusability and development effort for designing self-adaptive robotic systems.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1531743"},"PeriodicalIF":2.9,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12131011/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144217213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simultaneous text and gesture generation for social robots with small language models.","authors":"Alessio Galatolo, Katie Winkle","doi":"10.3389/frobt.2025.1581024","DOIUrl":"10.3389/frobt.2025.1581024","url":null,"abstract":"<p><strong>Introduction: </strong>As social robots gain advanced communication capabilities, users increasingly expect coherent verbal and non-verbal behaviours. Recent work has shown that Large Language Models (LLMs) can support autonomous generation of such multimodal behaviours. However, current LLM-based approaches to non-verbal behaviour often involve multi-step reasoning with large, closed-source models-resulting in significant computational overhead and limiting their feasibility in low-resource or privacy-constrained environments.</p><p><strong>Methods: </strong>To address these limitations, we propose a novel method for simultaneous generation of text and gestures with minimal computational overhead compared to plain text generation. Our system does not produce low-level joint trajectories, but instead predicts high-level communicative intentions, which are mapped to platform-specific expressions. Central to our approach is the introduction of lightweight, robot-specific \"gesture heads\" derived from the LLM's architecture, requiring no pose-based datasets and enabling generalisability across platforms.</p><p><strong>Results: </strong>We evaluate our method on two distinct robot platforms: Furhat (facial expressions) and Pepper (bodily gestures). Experimental results demonstrate that our method maintains behavioural quality while introducing negligible computational and memory overhead. Furthermore, the gesture heads operate in parallel with the language generation component, ensuring scalability and responsiveness even on small or locally deployed models.</p><p><strong>Discussion: </strong>Our approach supports the use of Small Language Models for multimodal generation, offering an effective alternative to existing high-resource methods. By abstracting gesture generation and eliminating reliance on platform-specific motion data, we enable broader applicability in real-world, low-resource, and privacy-sensitive HRI settings.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1581024"},"PeriodicalIF":2.9,"publicationDate":"2025-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12122315/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144200515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editorial: Advancements in AI-driven multimodal interfaces for robot-aided rehabilitation.","authors":"Christian Tamantini, Kevin Patrice Langlois, David Rodriguez Cianca, Loredana Zollo","doi":"10.3389/frobt.2025.1605418","DOIUrl":"10.3389/frobt.2025.1605418","url":null,"abstract":"","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1605418"},"PeriodicalIF":2.9,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12119299/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144183468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editorial: Advances in modern intelligent surgery: from computer-aided diagnosis to medical robotics.","authors":"Zhe Min, Rui Song, Changsheng Li, Jax Luo","doi":"10.3389/frobt.2025.1620551","DOIUrl":"10.3389/frobt.2025.1620551","url":null,"abstract":"","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1620551"},"PeriodicalIF":2.9,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12117187/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144175371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}