Sliding-mode control based on prescribed performance function and its application to an SEA-based lower limb exoskeleton
Feilong Zhang, Tian Wang, Liang Zhang, Enming Shi, Chengchao Wang, Ning Li, Yu Lu, Bi Zhang
Frontiers in Robotics and AI, vol. 12, article 1534040. Published 2025-03-04. DOI: 10.3389/frobt.2025.1534040. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11913672/pdf/

Abstract: A sliding-mode control based on a prescribed performance function is proposed for discrete-time single-input single-output systems. The controller is designed to keep the tracking error within a predefined convergence zone described by a performance function. However, the fixed structure of the controller limits the applicability and universality of this method. To address this issue, we separate the controller into two parts and analyze the principle of the prescribed performance control (PPC) method. The linear part of the controller can then be replaced with model-based control methods to suit the specific characteristics of the controlled system. Compared with existing work, when the established system model is inaccurate, we can enhance the smoothness or response speed of the system by introducing a penalty constant that alters the system's transient characteristics while the tracking error remains within the prescribed domain. Finally, numerical comparison simulations and a lower limb exoskeleton experiment illustrate the established results and the effectiveness of the proposed method.

Operators and their human-robot interdependencies: implications of distinct job decision latitudes for sustainable work and high performance
Milan R Wolffgramm, Stephan Corporaal, Aard J Groen
Frontiers in Robotics and AI, vol. 12, article 1442319. Published 2025-03-04. DOI: 10.3389/frobt.2025.1442319. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11913812/pdf/

Abstract: The collaborative robot (cobot) has the potential to remove barriers for individual operators when deciding on the deployment of robotics in their work. Ideally, using their opportunities to (re)design work (i.e., job decision latitudes), the operator establishes synergetic human-cobot interdependencies that enable the human-cobot production unit to achieve superior performance and foster more sustainable work perceptions than manual production units. However, it remains scientifically unclear whether the operator is both willing and able to use cobot-related job decision latitudes, what this means for designing human-cobot interdependencies, and whether these designs improve unit outcomes. We therefore built one manual and three human-cobot production units with distinct job decision latitudes. Forty students participated in the manual production unit and operated one of the human-cobot production units during an assembly simulation. Accounting for individual differences, the results showed that most operators used speed- and task-related job decision latitudes to design their human-cobot interdependencies. These behaviours often led to increased productivity and more motivating working conditions. At the same time, these human-cobot interdependencies frequently resulted in limited human-robot interactions, poor production reliability, and more psychological safety risks. This contribution lays a rich foundation for future research on involving individual operators in developing modern production systems.

{"title":"OpenSEA: a 3D printed planetary gear series elastic actuator for a compliant elbow joint exoskeleton.","authors":"Benjamin Jenks, Hailey Levan, Filip Stefanovic","doi":"10.3389/frobt.2025.1528266","DOIUrl":"https://doi.org/10.3389/frobt.2025.1528266","url":null,"abstract":"<p><strong>Introduction: </strong>Next-generation assistive robotics rely on series elastic actuators (SEA) that enable compliant human-robot interaction. However, currently there is a deficiency of openly available SEA systems to support this development. To address this, we propose a novel design of a compliant 3D-printed SEA device for elbow movement rehabilitation exoskeletons that we make openly available.</p><p><strong>Methods: </strong>We designed a 3D-printed SEA to incorporate a planetary gear system and torsional spring, offering compliance, adaptability, and cost-effectiveness. The design provides a high-power density, that can address torque limitations in 3D printed SEA systems. Our design utilizes a 4.12 Nm motor operating at 26 RPM based on assessment of functional performance differences across healthy and post-stroke individuals. Moreover, the design of this SEA allows for easily adjustable parameters to fit different joints, or various torque output configurations, in low-cost exoskeleton applications in rehabilitation.</p><p><strong>Results: </strong>Testing demonstrated an average compliance contribution of the planetary gear and the average total system compliance of 14.80° and 22.22°, respectively. This range conforms to those expected in human-exoskeleton interaction. Similarly, an FEA analysis of the 3D printed system shows stress ranges of the SEA gears to be between 50 and 60.2 MPa, which causes a displacement of approximately 0.14 mm. This is within the operational flexural range of standard 3D printed materials such as PLA, which is 175 MPa.</p><p><strong>Discussion: </strong>The study demonstrates an openly available SEA design for 3D printed exoskeletons. This work provides an entry point for accessible exoskeleton design, specifically for rehabilitation. Future work will explore the role of segment vs joint rigidity in developing next-generation compliant exoskeletons, and improving accessibility for personalizable assistive exoskeletons. All designs presented herein are publicly available.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1528266"},"PeriodicalIF":2.9,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11906680/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143650434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Editorial: Human factors and cognitive ergonomics in advanced industrial human-robot interaction
Erik Billing, Federico Fraboni, Luca Gualtieri, Patricia Helen Rosen, Peter Thorvald
Frontiers in Robotics and AI, vol. 12, article 1564948. Published 2025-02-28. DOI: 10.3389/frobt.2025.1564948. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11906329/pdf/

Unsupervised semantic label generation in agricultural fields
Gianmarco Roggiolani, Julius Rückin, Marija Popović, Jens Behley, Cyrill Stachniss
Frontiers in Robotics and AI, vol. 12, article 1548143. Published 2025-02-25. DOI: 10.3389/frobt.2025.1548143. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11893429/pdf/

Abstract: Robust perception systems allow farm robots to recognize weeds and vegetation, enabling the selective application of fertilizers and herbicides to mitigate the environmental impact of traditional agricultural practices. Today's perception systems typically rely on deep learning to interpret sensor data for tasks such as distinguishing soil, crops, and weeds. These approaches usually require substantial amounts of manually labeled training data, which is time-consuming to produce and requires domain expertise. This paper aims to reduce this limitation and proposes an automated labeling pipeline for crop-weed semantic image segmentation in managed agricultural fields. It allows deep learning models to be trained with no or only limited manual labeling of images. Our system uses RGB images recorded with unmanned aerial or ground robots operating in the field and produces semantic labels by exploiting the field's row structure for spatially consistent labeling. The previously detected rows are used to identify multiple crop rows, reducing labeling errors and improving consistency. We further reduce labeling errors by assigning an "unknown" class to vegetation that is challenging to segment. We use evidential deep learning because it provides prediction uncertainty estimates that we use to refine and improve our predictions. In this way, evidential deep learning assigns high uncertainty to the weed class, as it is often less represented in the training data, allowing us to use the uncertainty to correct the semantic predictions. Experimental results suggest that our approach outperforms general-purpose labeling methods applied to crop fields by a large margin and outperforms domain-specific approaches on multiple fields and crop species. Using our generated labels to train deep learning models boosts prediction performance on previously unseen fields, unseen crop species, different growth stages, and different lighting conditions. We obtain an IoU of 88.6% on crops and 22.7% on weeds for a managed sugar beet field, where fully supervised methods achieve 83.4% on crops and 33.5% on weeds, and other unsupervised domain-specific methods achieve 54.6% on crops and 11.2% on weeds. Finally, our method allows models trained in a fully supervised fashion to be fine-tuned to improve their performance in unseen field conditions by up to +17.6% in mean IoU without additional manual labeling.

On realizing autonomous transport services in multi-story buildings with doors and elevators
Paul Robert Schulze, Steffen Müller, Tristan Müller, Horst-Michael Gross
Frontiers in Robotics and AI, vol. 12, article 1546894. Published 2025-02-25. DOI: 10.3389/frobt.2025.1546894. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11894313/pdf/

Abstract: Mobile service robots for transportation tasks are usually restricted to barrier-free environments where they can navigate freely. To enable the use of such assistive robots in existing buildings, the robot should be able to overcome closed doors independently and operate elevators through the interface designed for humans, while being polite to passers-by. The integration of these required capabilities into an autonomous mobile service robot is explained using the example of a SCITOS G5 robot equipped with a differential drive and a 7-DoF Kinova Gen II arm. This robot also defines the framework conditions, with certain limitations in terms of maneuverability and perceptual abilities. We present results of field tests with this robot in an elderly care facility as well as in a university office building, where it performed transportation and messaging tasks. We also report the success rates achieved, highlight the main problems we encountered, and discuss open issues.

{"title":"CARE: towards customized assistive robot-based education.","authors":"Nafisa Maaz, Jinane Mounsef, Noel Maalouf","doi":"10.3389/frobt.2025.1474741","DOIUrl":"10.3389/frobt.2025.1474741","url":null,"abstract":"<p><p>This study proposes a novel approach to enhancing the learning experience of elementary school students by integrating Artificial Intelligence (AI) and robotics in education, focusing on personalized and adaptive learning. Unlike existing adaptive and intelligent tutoring systems, which primarily rely on digital platforms, our approach employs a personalized tutor robot to interact with students directly, combining cognitive and emotional assessment to deliver tailored educational experiences. This work extends the current research landscape by integrating real-time facial expression analysis, subjective feedback, and performance metrics to classify students into three categories: Proficient Students (Prof.S), Meeting-Expectations Students (MES), and Developing Students (DVS). These classifications are used to deliver customized learning content, motivational messages, and constructive feedback. The primary research question guiding this study is: Does personalization enhance the effectiveness of a robotic tutor in fostering improved learning outcomes? To address this, the study explores two key aspects: (1) how personalization contributes to a robotic tutor's ability to adapt to individual student needs, thereby enhancing engagement and academic performance, and (2) how the effectiveness of a personalized robotic tutor compares to a human teacher, which serves as a benchmark for evaluating the system's impact. Our study contrasts the personalized robot with a human teacher to highlight the potential of personalization in robotic tutoring within a real-world educational context. While a comparison with a generic, unpersonalized robot could further isolate the impact of personalization, our choice of comparison with a human teacher underscores the broader objective of positioning personalized robotic tutors as viable and impactful educational tools. The robot's AI-powered system, employing the XGBoost algorithm, predicts the student's proficiency level with high accuracy (100%), leveraging factors such as test scores, task completion time, and emotional engagement. Challenges and learning materials are dynamically adjusted to suit each student's needs, with DVS receiving supportive exercises and Prof. S receiving advanced tasks. Our methodology goes beyond existing literature by embedding a fully autonomous robotic system within a classroom setting to assess and enhance learning outcomes. Evaluation through post-diagnostic exams demonstrated that the experimental group of students using the AI-robot system showed a significant improvement rate (approximately 8%) over the control group. These findings highlight the unique contribution of this study to the field of Human-Robot Interaction (HRI) and educational robotics, showcasing how integrating AI and robotics in a real-world learning environment can engage students and improve educational outcomes. 
By situating our work within the broader context of intelligent tutoring systems and addressi","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1474741"},"PeriodicalIF":2.9,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11885127/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143587587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
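The proficiency classification described above can be pictured with a short, hedged sketch using the XGBoost scikit-learn wrapper. The feature ordering, the toy data, the class encoding, and the hyperparameters below are illustrative assumptions, not the study's dataset or configuration.

```python
import numpy as np
from xgboost import XGBClassifier

# Hypothetical features per student: [test score, task completion time (s), emotional engagement]
X = np.array([[45, 310, 0.35],
              [72, 240, 0.60],
              [93, 150, 0.85],
              [58, 280, 0.40],
              [88, 170, 0.90],
              [70, 230, 0.55]])
# Hypothetical class encoding: 0 = Developing (DVS), 1 = Meeting expectations (MES), 2 = Proficient (Prof.S)
y = np.array([0, 1, 2, 0, 2, 1])

# Small gradient-boosted tree ensemble; hyperparameters are placeholders
model = XGBClassifier(n_estimators=50, max_depth=3, learning_rate=0.1)
model.fit(X, y)

# Predicted proficiency class for a new student, used to select exercises and feedback
print(model.predict(np.array([[80, 200, 0.7]])))
```
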
From traditional robotic deployments towards assisted robotic deployments in nuclear decommissioning
Erwin Jose Lopez Pulgarin, Dave Hopper, Jon Montgomerie, James Kell, Joaquin Carrasco, Guido Herrmann, Alexander Lanzon, Barry Lennox
Frontiers in Robotics and AI, vol. 12, article 1432845. Published 2025-02-18. DOI: 10.3389/frobt.2025.1432845. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11893983/pdf/

Abstract: The history of teleoperation and of deploying robotic systems in constrained and dangerous environments such as nuclear facilities is a long and successful one. Since the 1940s, robotic manipulators have been used to handle dangerous substances and to enable work in environments that are either too dangerous or impossible for human operators. Over the decades, technical and scientific advances have improved the capabilities of these devices while allowing more tasks to be performed. In nuclear decommissioning, using such devices for remote inspection and remote handling has become the only way to work in and survey some areas. Such applications involve challenging environments due to space constraints, a lack of up-to-date structural knowledge of the environment, and poor visibility, requiring extensive training and planning to succeed. There is a growing need to speed up these deployment processes and to increase the number of decommissioning activities while maintaining high levels of safety and performance. Considering the large amount of research and innovation aimed at improving robotic capabilities, numerous benefits could be gained by translating it to nuclear decommissioning use cases. We believe such innovations, in particular improved feedback mechanisms from the environment during training and deployment (i.e., Haptic Digital Twins) and higher modes of assisted or supervised control (i.e., semi-autonomous operation), can play a large role. We list some of the best practices currently followed in the industry around teleoperation and robotic deployments, and the potential benefits of implementing the aforementioned innovations.

{"title":"The Italian version of the unified theory of acceptance and use of technology questionnaire: a pilot validation study.","authors":"Alfonsina D'Iorio, Federica Garramone, Silvia Rossi, Chiara Baiano, Gabriella Santangelo","doi":"10.3389/frobt.2025.1371583","DOIUrl":"10.3389/frobt.2025.1371583","url":null,"abstract":"<p><strong>Background: </strong>The Unified Theory of Acceptance and Use of Technology is a self-rated questionnaire to assess twelve constructs related to the level of acceptance of a robot, consisting of 41 items rated on a 5-point Likert scale. The aim of the study was to conduct a preliminary evaluation of the psychometric properties of the Italian version of the UTAUT (I-UTAUT) in a sample of Italian healthy subjects (HCs).</p><p><strong>Materials and methods: </strong>30 HCs underwent the I-UTAUT to assess its comprehensibility. Reliability and divergent validity of the I-UTAUT were evaluated in a sample of 121 HCs, who also underwent the Montreal Cognitive Assessment (MoCA).</p><p><strong>Results: </strong>The final I-UTAUT version was easily comprehensible. There were no missing data, no floor and ceiling effects. Contrarily to the original version, the Principal Components Analysis suggested a seven-component structure; Cronbach's alpha was 0.94. The I-UTAUT score did not correlate with MoCA.</p><p><strong>Conclusion: </strong>The I-UTAUT represented a reliable and valid questionnaire to identify the level of acceptance of robotics technology in Italian healthy sample.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1371583"},"PeriodicalIF":2.9,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11872730/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}