{"title":"How to arrange the robotic environment? Leveraging experience in both motion planning and environment optimization.","authors":"Jiaxi Lu, Ryota Takamido, Yusheng Wang, Jun Ota","doi":"10.3389/frobt.2024.1468385","DOIUrl":"https://doi.org/10.3389/frobt.2024.1468385","url":null,"abstract":"<p><p>This study presents an experience-based hierarchical-structure optimization algorithm to address the robotic system environment design problem, which combines motion planning and environment arrangement problems together. The motion planning problem, which could be defined as a multiple-degree-of-freedom (m-DOF) problem, together with the environment arrangement problem, which could be defined as a free DOF problem, is a high-dimensional optimization problem. Therefore, the hierarchical structure was established, with the higher layer solving the environment arrangement problem and lower layer solving the problem of motion planning. Previously planned trajectories and past results for this design problem were first constructed as datasets; however, they cannot be seen as optimal. Therefore, this study proposed an experience-reuse manner, which selected the most \"useful\" experience from the datasets and reused it to query new problems, optimize the results in the datasets, and provide better environment arrangement with shorter path lengths within the same time. Therefore, a hierarchical structural caseGA-ERTC algorithm was proposed. In the higher layer, a novel approach employing the case-injected genetic algorithm (GA) was implemented to tackle optimization challenges in robot environment design, leveraging experiential insights. Performance indices in the arrangement of the robot system's environment were determined by the robotic arm's motion and path length calculated using an experience-driven random tree (ERT) algorithm. Moreover, the effectiveness of the proposed method is illustrated with the 12.59% decrease in path lengths by solving different settings of hard problems and 5.05% decrease in easy problems compared with other state-of-the-art methods in three small robots.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1468385"},"PeriodicalIF":2.9,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11604589/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142773605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning signs with NAO: humanoid robot as a tool for helping to learn Colombian Sign Language.","authors":"Juan E Mora-Zarate, Claudia L Garzón-Castro, Jorge A Castellanos Rivillas","doi":"10.3389/frobt.2024.1475069","DOIUrl":"10.3389/frobt.2024.1475069","url":null,"abstract":"<p><p>Sign languages are one of the main rehabilitation methods for dealing with hearing loss. Like any other language, the geographical location will influence on how signs are made. Particularly in Colombia, the hard of hearing population is lacking from education in the Colombian Sign Language, mainly due of the reduce number of interpreters in the educational sector. To help mitigate this problem, Machine Learning binded to data gloves or Computer Vision technologies have emerged to be the accessory of sign translation systems and educational tools, however, in Colombia the presence of this solutions is scarce. On the other hand, humanoid robots such as the NAO have shown significant results when used to support a learning process. This paper proposes a performance evaluation for the design of an activity to support the learning process of all the 11 color-based signs from the Colombian Sign Language. Which consists of an evaluation method with two modes activated through user interaction, the first mode will allow to choose the color sign to be evaluated, and the second will decide randomly the color sign. To achieve this, MediaPipe tool was used to extract torso and hand coordinates, which were the input for a Neural Network. The performance of the Neural Network was evaluated running continuously in two scenarios, first, video capture from the webcam of the computer which showed an overall F1 score of 91.6% and a prediction time of 85.2 m, second, wireless video streaming with NAO H25 V6 camera which had an F1 score of 93.8% and a prediction time of 2.29 s. In addition, we took advantage of the joint redundancy that NAO H25 V6 has, since with its 25 degrees of freedom we were able to use gestures that created nonverbal human-robot interactions, which may be useful in future works where we want to implement this activity with a deaf community.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1475069"},"PeriodicalIF":2.9,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11602449/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142752009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"X-ray fluoroscopy guided localization and steering of miniature robots using virtual reality enhancement.","authors":"Husnu Halid Alabay, Tuan-Anh Le, Hakan Ceylan","doi":"10.3389/frobt.2024.1495445","DOIUrl":"10.3389/frobt.2024.1495445","url":null,"abstract":"<p><p>In developing medical interventions using untethered milli- and microrobots, ensuring safety and effectiveness relies on robust methods for real-time robot detection, tracking, and precise localization within the body. The inherent non-transparency of human tissues significantly challenges these efforts, as traditional imaging systems like fluoroscopy often lack crucial anatomical details, potentially compromising intervention safety and efficacy. To address this technological gap, in this study, we build a virtual reality environment housing an exact digital replica (digital twin) of the operational workspace and a robot avatar. We synchronize the virtual and real workspaces and continuously send the robot position data derived from the image stream into the digital twin with short average delay time around 20-25 ms. This allows the operator to steer the robot by tracking its avatar within the digital twin with near real-time temporal resolution. We demonstrate the feasibility of this approach with millirobots steered in confined phantoms. Our concept demonstration herein can pave the way for not only improved procedural safety by complementing fluoroscopic guidance with virtual reality enhancement, but also provides a platform for incorporating various additional real-time derivative data, e.g., instantaneous robot velocity, intraoperative physiological data obtained from the patient, e.g., blood flow rate, and pre-operative physical simulation models, e.g., periodic body motions, to further refine robot control capacity.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1495445"},"PeriodicalIF":2.9,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11599259/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142741106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Erratum: Novel bio-inspired soft actuators for upper-limb exoskeletons: design, fabrication and feasibility study.","authors":"","doi":"10.3389/frobt.2024.1517037","DOIUrl":"https://doi.org/10.3389/frobt.2024.1517037","url":null,"abstract":"<p><p>[This corrects the article DOI: 10.3389/frobt.2024.1451231.].</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1517037"},"PeriodicalIF":2.9,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11600976/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142741092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Heuristic satisficing inferential decision making in human and robot active perception.","authors":"Yucheng Chen, Pingping Zhu, Anthony Alers, Tobias Egner, Marc A Sommer, Silvia Ferrari","doi":"10.3389/frobt.2024.1384609","DOIUrl":"10.3389/frobt.2024.1384609","url":null,"abstract":"<p><p>Inferential decision-making algorithms typically assume that an underlying probabilistic model of decision alternatives and outcomes may be learned <i>a priori</i> or online. Furthermore, when applied to robots in real-world settings they often perform unsatisfactorily or fail to accomplish the necessary tasks because this assumption is violated and/or because they experience unanticipated external pressures and constraints. Cognitive studies presented in this and other papers show that humans cope with complex and unknown settings by modulating between near-optimal and satisficing solutions, including heuristics, by leveraging information value of available environmental cues that are possibly redundant. Using the benchmark inferential decision problem known as \"treasure hunt\", this paper develops a general approach for investigating and modeling active perception solutions under pressure. By simulating treasure hunt problems in virtual worlds, our approach learns generalizable strategies from high performers that, when applied to robots, allow them to modulate between optimal and heuristic solutions on the basis of external pressures and probabilistic models, if and when available. The result is a suite of active perception algorithms for camera-equipped robots that outperform treasure-hunt solutions obtained via cell decomposition, information roadmap, and information potential algorithms, in both high-fidelity numerical simulations and physical experiments. The effectiveness of the new active perception strategies is demonstrated under a broad range of unanticipated conditions that cause existing algorithms to fail to complete the search for treasures, such as unmodelled time constraints, resource constraints, and adverse weather (fog).</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1384609"},"PeriodicalIF":2.9,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11589672/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142733272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trajectory shaping guidance for impact angle control of planetary hopping robots.","authors":"Sabyasachi Mondal, Saurabh Upadhyay","doi":"10.3389/frobt.2024.1452997","DOIUrl":"https://doi.org/10.3389/frobt.2024.1452997","url":null,"abstract":"<p><p>This paper presents a novel optimal trajectory-shaping control concept for a planetary hopping robot. The hopping robot suffers from uncontrolled in-flight and undesired after-landing motions, leading to a position drift at landing. The proposed concept thrives on the Generalized Vector Explicit (GENEX) guidance, which can generate and shape the optimal trajectory and satisfy the end-point constraints like the impact angle of the velocity vector. The proposed concept is used for a thruster-based hopping robot, which achieves a range of impact angles, reduces the position drift at landing due to the undesired in-flight and after-landing motions, and handles the error in initial hopping angles. The proposed approach's conceptual realization is illustrated by lateral acceleration generated using thruster orientation control. Extensive simulations are carried out on horizontal and sloped surfaces with different initial and impact angle conditions to demonstrate the effect of impact angle on the position drift error and the viability of the proposed approach.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1452997"},"PeriodicalIF":2.9,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11586261/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142717489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Abraded optical fibre-based dynamic range force sensor for tissue palpation.","authors":"Abu Bakar Dawood, Vamsi Krishna Chavali, Thomas Mack, Zhenyu Zhang, Hareesh Godaba, Martin Angelmahr, Kaspar Althoefer","doi":"10.3389/frobt.2024.1489884","DOIUrl":"10.3389/frobt.2024.1489884","url":null,"abstract":"<p><p>Tactile information acquired through palpation plays a crucial role in relation to surface characterisation and tissue differentiation - an essential clinical requirement during surgery. In the case of Minimally Invasive Surgery, access is restricted, and tactile feedback available to surgeons is therefore reduced. This paper presents a novel stiffness controllable, dynamic force range sensor that can provide remote haptic feedback. The sensor has an abraded optical fibre integrated into a silicone dome. Forces applied to the dome change the curvature of the optical fibres, resulting in light attenuation. By changing the pressure within the dome and thereby adjusting the sensor's stiffness, we are able to modify the force measurement range. Results from our experimental study demonstrate that increasing the pressure inside the dome increases the force range whilst decreasing force sensitivity. We show that the maximum force measured by our sensor prototype at 20 mm/min was 5.02 N, 6.70 N and 8.83 N for the applied pressures of 0 psi (0 kPa), 0.5 psi (3.45 kPa) and 1 psi (6.9 kPa), respectively. The sensor has also been tested to estimate the stiffness of 13 phantoms of different elastic moduli. Results show the elastic modulus sensing range of the proposed sensor to be from 8.58 to 165.32 kPa.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1489884"},"PeriodicalIF":2.9,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11586255/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142717483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An analysis of dialogue repair in virtual assistants.","authors":"Matthew Galbraith","doi":"10.3389/frobt.2024.1356847","DOIUrl":"10.3389/frobt.2024.1356847","url":null,"abstract":"<p><p>Conversational user interfaces have transformed human-computer interaction by providing nearly real-time responses to queries. However, misunderstandings between the user and system persist. This study explores the significance of interactional language in dialogue repair between virtual assistants and users by analyzing interactions with Google Assistant and Siri in both English and Spanish, focusing on the assistants' utilization and response to the colloquial other-initiated repair strategy \"<i>huh?</i>\", which is prevalent as a human-human dialogue repair strategy. Findings revealed ten distinct assistant-generated repair strategies, but an inability to replicate human-like strategies such as \"<i>huh?</i>\". Despite slight variations in user acceptability judgments among the two surveyed languages, results indicated an overall hierarchy of preference towards specific dialogue repair strategies, with a notable disparity between the most preferred strategies and those frequently used by the assistants. These findings highlight discrepancies in how interactional language is utilized in human-computer interaction, underscoring the need for further research on the impact of interactional elements among different languages to advance the development of conversational user interfaces across domains, including within human-robot interaction.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1356847"},"PeriodicalIF":2.9,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11586770/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142717485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editorial: Influential voices in soft robotics.","authors":"Panagiotis Polygerinos","doi":"10.3389/frobt.2024.1521226","DOIUrl":"https://doi.org/10.3389/frobt.2024.1521226","url":null,"abstract":"","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1521226"},"PeriodicalIF":2.9,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11581938/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142711697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editorial: Localization and scene understanding in urban environments.","authors":"Augusto Luis Ballardini, Daniele Cattaneo, Domenico G Sorrenti, Ignacio Parra Alonso","doi":"10.3389/frobt.2024.1509637","DOIUrl":"https://doi.org/10.3389/frobt.2024.1509637","url":null,"abstract":"","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1509637"},"PeriodicalIF":2.9,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11581965/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142711733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}