{"title":"AAT4IRS: automated acceptance testing for industrial robotic systems.","authors":"Marcela G Dos Santos, Sylvain Hallé, Fabio Petrillo, Yann-Gaël Guéhéneuc","doi":"10.3389/frobt.2024.1346580","DOIUrl":"https://doi.org/10.3389/frobt.2024.1346580","url":null,"abstract":"<p><p>Industrial robotic systems (IRS) consist of industrial robots that automate industrial processes. They accurately perform repetitive tasks, replacing or assisting with dangerous jobs like assembly in the automotive and chemical industries. Failures in these systems can be catastrophic, so it is important to ensure their quality and safety before using them. One way to do this is by applying a software testing process to find faults before they become failures. However, software testing in industrial robotic systems has some challenges. These include differences in perspectives on software testing from people with diverse backgrounds, coordinating and collaborating with diverse teams, and performing software testing within the complex integration inherent in industrial environments. In traditional systems, a well-known development process uses simple, structured sentences in English to facilitate communication between project team members and business stakeholders. This process is called behavior-driven development (BDD), and one of its pillars is the use of templates to write user stories, scenarios, and automated acceptance tests. We propose a software testing (ST) approach called automated acceptance testing for industrial robotic systems (AAT4IRS) that uses natural language to write the features and scenarios to be tested. We evaluated our ST approach through a proof-of-concept, performing a pick-and-place process and applying mutation testing to measure its effectiveness. The results show that the test suites implemented using AAT4IRS were highly effective, with 79% of the generated mutants detected, thus instilling confidence in the robustness of our approach.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1346580"},"PeriodicalIF":2.9,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11484419/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142477654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a computational model for higher orders of Theory of Mind in social agents.","authors":"Federico Tavella, Federico Manzi, Samuele Vinanzi, Cinzia Di Dio, Davide Massaro, Angelo Cangelosi, Antonella Marchetti","doi":"10.3389/frobt.2024.1468756","DOIUrl":"https://doi.org/10.3389/frobt.2024.1468756","url":null,"abstract":"<p><p>Effective communication between humans and machines requires artificial tools to adopt a human-like social perspective. The Theory of Mind (ToM) enables understanding and predicting mental states and behaviours, crucial for social interactions from childhood through adulthood. Artificial agents with ToM skills can better coordinate actions, such as in warehouses or healthcare. Incorporating ToM in AI systems can revolutionise our interactions with intelligent machines. This proposal emphasises the current focus on first-order ToM models in the literature and investigates the potential of creating a computational model for higher-order ToM.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1468756"},"PeriodicalIF":2.9,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11479858/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142477673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bridging vision and touch: advancing robotic interaction prediction with self-supervised multimodal learning.","authors":"Luchen Li, Thomas George Thuruthel","doi":"10.3389/frobt.2024.1407519","DOIUrl":"https://doi.org/10.3389/frobt.2024.1407519","url":null,"abstract":"<p><p>Predicting the consequences of the agent's actions on its environment is a pivotal challenge in robotic learning, which plays a key role in developing higher cognitive skills for intelligent robots. While current methods have predominantly relied on vision and motion data to generate the predicted videos, more comprehensive sensory perception is required for complex physical interactions such as contact-rich manipulation or highly dynamic tasks. In this work, we investigate the interdependence between vision and tactile sensation in the scenario of dynamic robotic interaction. A multi-modal fusion mechanism is introduced to the action-conditioned video prediction model to forecast future scenes, which enriches the single-modality prototype with a compressed latent representation of multiple sensory inputs. Additionally, to accomplish the interactive setting, we built a robotic interaction system that is equipped with both web cameras and vision-based tactile sensors to collect the dataset of vision-tactile sequences and the corresponding robot action data. Finally, through a series of qualitative and quantitative comparative study of different prediction architecture and tasks, we present insightful analysis of the cross-modality influence between vision, tactile and action, revealing the asymmetrical impact that exists between the sensations when contributing to interpreting the environment information. This opens possibilities for more adaptive and efficient robotic control in complex environments, with implications for dexterous manipulation and human-robot interaction.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1407519"},"PeriodicalIF":2.9,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11472251/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142477670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Siamese and triplet network-based pain expression in robotic avatars for care and nursing training.","authors":"Miran Lee, Minjeong Lee, Suyeong Kim","doi":"10.3389/frobt.2024.1419584","DOIUrl":"10.3389/frobt.2024.1419584","url":null,"abstract":"<p><p>Care and nursing training (CNT) refers to developing the ability to effectively respond to patient needs by investigating their requests and improving trainees' care skills in a caring environment. Although conventional CNT programs have been conducted based on videos, books, and role-playing, the best approach is to practice on a real human. However, it is challenging to recruit patients for continuous training, and the patients may experience fatigue or boredom with iterative testing. As an alternative approach, a patient robot that reproduces various human diseases and provides feedback to trainees has been introduced. This study presents a patient robot that can express feelings of pain, similarly to a real human, in joint care education. The two primary objectives of the proposed patient robot-based care training system are (a) to infer the pain felt by the patient robot and intuitively provide the trainee with the patient's pain state, and (b) to provide facial expression-based visual feedback of the patient robot for care training.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1419584"},"PeriodicalIF":2.9,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11464974/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142401519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Validations of various in-hand object manipulation strategies employing a novel tactile sensor developed for an under-actuated robot hand.","authors":"Avinash Singh, Massimilano Pinto, Petros Kaltsas, Salvatore Pirozzi, Shifa Sulaiman, Fanny Ficuciello","doi":"10.3389/frobt.2024.1460589","DOIUrl":"10.3389/frobt.2024.1460589","url":null,"abstract":"<p><p>Prisma Hand II is an under-actuated prosthetic hand developed at the University of Naples, Federico II to study in-hand manipulations during grasping activities. 3 motors equipped on the robotic hand drive 19 joints using elastic tendons. The operations of the hand are achieved by combining tactile hand sensing with under-actuation capabilities. The hand has the potential to be employed in both industrial and prosthetic applications due to its dexterous motion capabilities. However, currently there are no commercially available tactile sensors with compatible dimensions suitable for the prosthetic hand. Hence, in this work, we develop a novel tactile sensor designed based on an opto-electronic technology for the Prisma Hand II. The optimised dimensions of the proposed sensor made it possible to be integrated with the fingertips of the prosthetic hand. The output voltage obtained from the novel tactile sensor is used to determine optimum grasping forces and torques during in-hand manipulation tasks employing Neural Networks (NNs). The grasping force values obtained using a Convolutional Neural Network (CNN) and an Artificial Neural Network (ANN) are compared based on Mean Square Error (MSE) values to find out a better training network for the tasks. The tactile sensing capabilities of the proposed novel sensing method are presented and compared in simulation studies and experimental validations using various hand manipulation tasks. The developed tactile sensor is found to be showcasing a better performance compared to previous version of the sensor used in the hand.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1460589"},"PeriodicalIF":2.9,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11464259/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142401520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ExTraCT - Explainable trajectory corrections for language-based human-robot interaction using textual feature descriptions.","authors":"J-Anne Yow, Neha Priyadarshini Garg, Manoj Ramanathan, Wei Tech Ang","doi":"10.3389/frobt.2024.1345693","DOIUrl":"https://doi.org/10.3389/frobt.2024.1345693","url":null,"abstract":"<p><strong>Introduction: </strong>In human-robot interaction (HRI), understanding human intent is crucial for robots to perform tasks that align with user preferences. Traditional methods that aim to modify robot trajectories based on language corrections often require extensive training to generalize across diverse objects, initial trajectories, and scenarios. This work presents ExTraCT, a modular framework designed to modify robot trajectories (and behaviour) using natural language input.</p><p><strong>Methods: </strong>Unlike traditional end-to-end learning approaches, ExTraCT separates language understanding from trajectory modification, allowing robots to adapt language corrections to new tasks-including those with complex motions like scooping-as well as various initial trajectories and object configurations without additional end-to-end training. ExTraCT leverages Large Language Models (LLMs) to semantically match language corrections to predefined trajectory modification functions, allowing the robot to make necessary adjustments to its path. This modular approach overcomes the limitations of pre-trained datasets and offers versatility across various applications.</p><p><strong>Results: </strong>Comprehensive user studies conducted in simulation and with a physical robot arm demonstrated that ExTraCT's trajectory corrections are more accurate and preferred by users in 80% of cases compared to the baseline.</p><p><strong>Discussion: </strong>ExTraCT offers a more explainable approach to understanding language corrections, which could facilitate learning human preferences. We also demonstrated the adaptability and effectiveness of ExTraCT in a complex scenarios like assistive feeding, presenting it as a versatile solution across various HRI applications.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1345693"},"PeriodicalIF":2.9,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11456793/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142394268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmenting perceived stickiness of physical objects through tactile feedback after finger lift-off.","authors":"Tadatoshi Kurogi, Yuki Inoue, Takeshi Fujiwara, Kouta Minamizawa","doi":"10.3389/frobt.2024.1415464","DOIUrl":"10.3389/frobt.2024.1415464","url":null,"abstract":"<p><p>Haptic Augmented Reality (HAR) is a method that actively modulates the perceived haptics of physical objects by presenting additional haptic feedback using a haptic display. However, most of the proposed HAR research focuses on modifying the hardness, softness, roughness, smoothness, friction, and surface shape of physical objects. In this paper, we propose an approach to augment the perceived stickiness of a physical object by presenting additional tactile feedback at a particular time after the finger lifts off from the physical object using a thin and soft tactile display suitable for HAR. To demonstrate this concept, we constructed a thin and soft tactile display using a Dielectric Elastomer Actuator suitable for HAR. We then conducted two experiments to validate the effectiveness of the proposed approach. In Experiment 1, we showed that the developed tactile display can augment the perceived stickiness of physical objects by presenting additional tactile feedback at appropriate times. In Experiment 2, we investigated the stickiness experience obtained by our proposed approach and showed that the realism of the stickiness experience and the harmony between the physical object and the additional tactile feedback are affected by the frequency and presentation timing of the tactile feedback. Our proposed approach is expected to contribute to the development of new applications not only in HAR, but also in Virtual Reality, Mixed Reality, and other domains using haptic displays.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1415464"},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11446170/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142368197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Minimal perception: enabling autonomy in resource-constrained robots.","authors":"Chahat Deep Singh, Botao He, Cornelia Fermüller, Christopher Metzler, Yiannis Aloimonos","doi":"10.3389/frobt.2024.1431826","DOIUrl":"10.3389/frobt.2024.1431826","url":null,"abstract":"<p><p>The rapidly increasing capabilities of autonomous mobile robots promise to make them ubiquitous in the coming decade. These robots will continue to enhance efficiency and safety in novel applications such as disaster management, environmental monitoring, bridge inspection, and agricultural inspection. To operate autonomously without constant human intervention, even in remote or hazardous areas, robots must sense, process, and interpret environmental data using only onboard sensing and computation. This capability is made possible by advancements in perception algorithms, allowing these robots to rely primarily on their perception capabilities for navigation tasks. However, tiny robot autonomy is hindered mainly by sensors, memory, and computing due to size, area, weight, and power constraints. The bottleneck in these robots lies in the real-time perception in resource-constrained robots. To enable autonomy in robots of sizes that are less than 100 mm in body length, we draw inspiration from tiny organisms such as insects and hummingbirds, known for their sophisticated perception, navigation, and survival abilities despite their minimal sensor and neural system. This work aims to provide insights into designing a compact and efficient minimal perception framework for tiny autonomous robots from higher cognitive to lower sensor levels.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1431826"},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11444933/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142366950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of a bionic hexapod robot with adaptive gait and clearance for enhanced agricultural field scouting.","authors":"Zhenghua Zhang, Weilong He, Fan Wu, Lina Quesada, Lirong Xiang","doi":"10.3389/frobt.2024.1426269","DOIUrl":"10.3389/frobt.2024.1426269","url":null,"abstract":"<p><p>High agility, maneuverability, and payload capacity, combined with small footprints, make legged robots well-suited for precision agriculture applications. In this study, we introduce a novel bionic hexapod robot designed for agricultural applications to address the limitations of traditional wheeled and aerial robots. The robot features a terrain-adaptive gait and adjustable clearance to ensure stability and robustness over various terrains and obstacles. Equipped with a high-precision Inertial Measurement Unit (IMU), the robot is able to monitor its attitude in real time to maintain balance. To enhance obstacle detection and self-navigation capabilities, we have designed an advanced version of the robot equipped with an optional advanced sensing system. This advanced version includes LiDAR, stereo cameras, and distance sensors to enable obstacle detection and self-navigation capabilities. We have tested the standard version of the robot under different ground conditions, including hard concrete floors, rugged grass, slopes, and uneven field with obstacles. The robot maintains good stability with pitch angle fluctuations ranging from -11.5° to 8.6° in all conditions and can walk on slopes with gradients up to 17°. These trials demonstrated the robot's adaptability to complex field environments and validated its ability to maintain stability and efficiency. In addition, the terrain-adaptive algorithm is more energy efficient than traditional obstacle avoidance algorithms, reducing energy consumption by 14.4% for each obstacle crossed. Combined with its flexible and lightweight design, our robot shows significant potential in improving agricultural practices by increasing efficiency, lowering labor costs, and enhancing sustainability. In our future work, we will further develop the robot's energy efficiency, durability in various environmental conditions, and compatibility with different crops and farming methods.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1426269"},"PeriodicalIF":2.9,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11444934/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142366949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}