{"title":"A sliding mode based foot-end trajectory consensus control method with variable topology for legged motion of heavy-duty robot","authors":"","doi":"10.1016/j.robot.2024.104764","DOIUrl":"10.1016/j.robot.2024.104764","url":null,"abstract":"<div><p>Rational foot-end trajectory planning and control are of great significance for stable legged walking of heavy-duty multi-legged robots. To achieve a fast, active, and compliant response of the leg actuators to disturbances, and thereby improve the stability and flexibility of the heavy-duty legged robot system during continuous walking on rough roads, a legged consensus control method (LCC) is proposed. Firstly, the LCC includes a foot-end trajectory planner model for designing the trajectory during the swing phase, ensuring that the robot’s feet always remain in a safe workspace during legged motion with continuously variable direction. Secondly, the LCC constructs a consensus control method that encodes foot-end position and velocity consensus errors based on variable-topology networks. The six legs are treated as six intelligent agents and divided into two fully connected networks, a swing-phase group and a stance-phase group, to achieve smooth and consistent motion that satisfies the geometric constraints of the robot. Each foot-end agent can switch between the swing and stance groups according to its contact state with the environment, with the topology amended accordingly, enhancing the robustness of the robot system through fast compliance control of the foot-end kinematic state. Then, a sliding mode control method based on consensus velocity and position errors is derived in the LCC. The sliding mode surface is designed so that the three control variables achieve stable movement with a consistent foot-end state along the <span><math><mrow><mi>X</mi><mo>,</mo><mi>Y</mi><mo>,</mo><mi>Z</mi></mrow></math></span> axes, respectively, thereby enhancing the stability of the foot-end state and body posture. 
Finally, simulations and experiments verify that the proposed LCC enables the legged robot to perform relatively steady legged motion with continuously variable direction on various rugged roads. The body attitude Root Mean Square Error (<span><math><mrow><mi>R</mi><mi>M</mi><mi>S</mi><mi>E</mi></mrow></math></span>) is reduced by 81.0% compared with independent PI control. The LCC algorithm code is publicly available at <span><span>https://github.com/bjmyX/LCC_code</span></span>.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142128909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive robot localization in dynamic environments through self-learnt long-term 3D stable points segmentation","authors":"","doi":"10.1016/j.robot.2024.104786","DOIUrl":"10.1016/j.robot.2024.104786","url":null,"abstract":"<div><p>In field robotics, particularly in the agricultural sector, precise localization presents a challenge due to the constantly changing nature of the environment. Simultaneous Localization and Mapping algorithms can provide an effective estimation of a robot’s position, but their long-term performance may be impacted by false data associations. Alternative strategies, such as RTK-GPS, have limitations of their own, including dependence on external infrastructure. To address these challenges, this paper introduces a novel stability scan filter. This filter can learn and infer the motion status of objects in the environment, allowing it to identify the most stable objects and use them as landmarks for robust robot localization in a continuously changing environment. The proposed method involves unsupervised point-wise labelling of LiDAR frames using temporal observations of the environment, as well as a regression network, called the Long-Term Stability Network (LTS-NET), to learn and infer the long-term motion status of 3D LiDAR points. Experiments demonstrate the ability of the stability scan filter to infer the motion stability of objects on a real long-term agricultural dataset. 
Results show that by utilizing only the points belonging to long-term stable objects, the localization system exhibits reliable and robust localization performance in long-term missions compared to using all points of the LiDAR frame.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0921889024001702/pdfft?md5=e09a2fddb429ae4fc7388b27ef65c9a0&pid=1-s2.0-S0921889024001702-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142096005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LightDepth: A resource efficient depth estimation approach for dealing with ground truth sparsity via curriculum learning","authors":"","doi":"10.1016/j.robot.2024.104784","DOIUrl":"10.1016/j.robot.2024.104784","url":null,"abstract":"<div><p>Accurate depth estimation from monocular images is critical for various applications such as robotics, augmented reality, and autonomous navigation. However, achieving high accuracy while maintaining computational efficiency is a major challenge, particularly for resource-constrained devices. In this paper, we present <em>LightDepth</em>, an approach that leverages curriculum learning to estimate depth efficiently while taking into account resource constraints. It modifies the ground truth sparse depth maps from the KITTI dataset by resizing them to 31 extents during training to reduce sparsity and control complexity. The resulting model achieves comparable accuracy to state-of-the-art large models while outperforming them in response time by 71%. Our approach outperforms resource-efficient models regarding depth accuracy (measured by RMSE), achieving a 56% improvement. <em>LightDepth</em> is designed to be fast and resource-efficient, making it suitable for deployment in resource-constrained devices. It also balances the trade-off between accuracy and resource efficiency. 
All code is available online at <span><span>https://github.com/fatemehkarimii/lightdepth</span></span>.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142096109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An adaptive framework for trajectory following in changing-contact robot manipulation tasks","authors":"","doi":"10.1016/j.robot.2024.104785","DOIUrl":"10.1016/j.robot.2024.104785","url":null,"abstract":"<div><p>We describe an adaptive control framework for changing-contact robot manipulation tasks that require the robot to make and break contacts with objects and surfaces. The piecewise continuous interaction dynamics of such tasks make it difficult to construct and use a single dynamics model or control strategy. Also, the nonlinear dynamics during contact changes can damage the robot or the domain objects. Our framework enables the robot to incrementally improve its prediction of contact changes in such tasks, efficiently learn models of the piecewise continuous interaction dynamics, and provide smooth and accurate trajectory tracking based on a task-space variable impedance controller. We experimentally compare the performance of our framework against that of representative control methods to establish that the adaptive control, prediction, and incremental learning capabilities of our framework are essential to achieve the desired smooth control of changing-contact robot manipulation tasks.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0921889024001696/pdfft?md5=63b435469b19c38172eec7bb29399ca6&pid=1-s2.0-S0921889024001696-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142096110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust iterative value conversion: Deep reinforcement learning for neurochip-driven edge robots","authors":"","doi":"10.1016/j.robot.2024.104782","DOIUrl":"10.1016/j.robot.2024.104782","url":null,"abstract":"<div><p>A neurochip is a device that reproduces the signal processing mechanisms of brain neurons and calculates Spiking Neural Networks (SNNs) with low power consumption and at high speed. Thus, neurochips are attracting attention for edge robot applications, which suffer from limited battery capacity. This paper aims to achieve deep reinforcement learning (DRL) that acquires SNN policies suitable for neurochip implementation. Since DRL requires complex function approximation, we focus on conversion techniques from Floating Point NNs (FPNNs), one of the most feasible routes to SNNs. However, DRL requires a conversion to an SNN at every policy update, since each learning cycle updates the FPNN policy and collects learning samples with the SNN policy. Accumulated conversion errors can significantly degrade the performance of the SNN policies. We propose Robust Iterative Value Conversion (RIVC), a DRL framework that incorporates both conversion-error reduction and robustness to conversion errors. To reduce the errors, the FPNN is optimized with the same number of quantization bits as the SNN, so that its output is not significantly changed by quantization. To achieve robustness against the remaining errors, the quantized FPNN policy is updated to widen the gap between the probability of selecting the optimal action and that of the other actions. This step prevents unexpected replacements of the policy’s optimal actions. We verified RIVC’s effectiveness on a neurochip-driven robot. The results showed that RIVC consumed 1/15 of the power and computed five times faster than an edge CPU (quad-core ARM Cortex-A72). A previous framework with no countermeasures against conversion errors failed to train the policies. 
Videos from our experiments are available at <span><span>https://youtu.be/Q5Z0-BvK1Tc</span></span>.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142049007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Upper limb power-assist wearable robot for handling repetitive medium- to low-weight loads in daily logistics tasks","authors":"","doi":"10.1016/j.robot.2024.104780","DOIUrl":"10.1016/j.robot.2024.104780","url":null,"abstract":"<div><p>In this study, we developed an upper-limb power-assisted wearable robot designed to reduce the burden of handling repetitive medium- to low-weight loads for daily logistics workers, thereby enhancing their work efficiency and overall safety. This study proposes a practical wearable robot with a well-designed structure that effectively supports pick-and-place tasks at waist-to-shoulder height by applying a vertical force directly to the wearer’s wrist. The proposed robot features two active joints, the minimum needed for vertical assistance, resulting in a lightweight and compact structure. It offers six degrees of freedom per arm, including four passive joints, allowing free end-effector movement. Designed to connect only to the wearer’s wrist, the robot’s linkage is positioned along the wearer’s arm and does not require alignment with the human–robot joint center, making it easy to wear and structurally simple. This paper presents a method for calculating the joint torque that accounts for the deformation of the robot’s lightweight and slim links. This approach enhances the gravity compensation accuracy, and the proposed method demonstrates a lower RMS error than calculations based on the statics of a rigid-link model. 
Experimental results demonstrated that the robot allowed a wide range of motion and consistently applied an assistive force of 2 kgf per arm, facilitating the handling of objects weighing several kilograms.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142049008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Overview of structure and drive for wheel-legged robots","authors":"","doi":"10.1016/j.robot.2024.104777","DOIUrl":"10.1016/j.robot.2024.104777","url":null,"abstract":"<div><p>Wheel-legged robots are a type of mobile robot that combines the advantages of wheeled robots, such as fast and stable movement and high efficiency, with the adaptability of legged robots to complex and unstructured environments. Therefore, wheel-legged robots have great potential for application in fields such as deep space exploration, disaster relief, and wilderness exploration. This paper categorizes and summarizes the structural forms and driving modes of wheel-legged robots, dividing them by structural characteristics into three categories: wheel-legged hybrid robots, wheel-legged separation robots, and wheel-legged transformation robots. Finally, this paper summarizes the structure and driving aspects of wheel-legged robots and provides an outlook on their development in these two areas. The research results presented in this paper help researchers understand the development process of wheel-legged robots and serve as a valuable reference for future research.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142136789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An improved frontier-based robot exploration strategy combined with deep reinforcement learning","authors":"","doi":"10.1016/j.robot.2024.104783","DOIUrl":"10.1016/j.robot.2024.104783","url":null,"abstract":"<div><p>The map of the environment is the basis for autonomous robot navigation. This paper introduces an improved approach to frontier-based exploration that utilizes deep reinforcement learning to select target points. This study proposes a novel map-sampling approach and develops a corresponding neural network architecture. Our method aims to adapt effectively to unfamiliar environments with varying dimensions and diverse action spaces while reducing the loss of information caused by map sampling. We train and validate the neural network in a simulation environment. The results show that our proposed method can stably explore unknown environments of different sizes, while the distance traveled to complete the exploration is shorter than with other methods. In addition, we conducted experiments on a real robot, and the results show that our method can be easily transferred from the simulation environment to the real environment.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142012068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enabling intuitive and effective micromanipulation: A wearable exoskeleton-integrated macro-to-micro teleoperation system with a 3D electrothermal microgripper","authors":"","doi":"10.1016/j.robot.2024.104776","DOIUrl":"10.1016/j.robot.2024.104776","url":null,"abstract":"<div><p>In this article, we present a novel teleoperation system for dexterous micromanipulation with a 3D three-fingered electrothermal microgripper. A lightweight wearable exoskeleton hand is designed and employed as the primary device, integrating rotational potentiometers as angle sensors, which are embedded in a closed-loop kinematic chain to detect flexion/extension and adduction/abduction angles of motion. The measured angles are translated into exoskeleton hand-fingertip positions that serve as the primary inputs. A tele-micromanipulation system based on the 3D electrothermal microgripper is realized, in which the displacement of the exoskeleton fingertips governs the actions of the microgripper via an effective position-incremental control method. Furthermore, the system's capabilities are exemplified through intricate micromanipulations performed on soft zebrafish embryos, encompassing gripping and rotational maneuvers. The experimental results clearly demonstrate the suitability of the macro-to-micro teleoperation system, which incorporates an exoskeleton hand for controlling a microgripper in 3D micromanipulation. The system improves operator comfort and maneuvering efficiency. 
Even for untrained users, the tasks can be accomplished with ease in an intuitive and effective way.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S092188902400160X/pdfft?md5=7748383aaf2d8f90a389e88f96c06f9d&pid=1-s2.0-S092188902400160X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141991097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel hybrid adhesion method and autonomous locomotion mechanism for wall-climbing robots","authors":"","doi":"10.1016/j.robot.2024.104779","DOIUrl":"10.1016/j.robot.2024.104779","url":null,"abstract":"<div><p>In this paper, we propose a novel adhesion method for tracked wall-climbing robots. The method is based on the use of tape, which the robot affixes to the wall as it moves. The adhesive side of the tape adheres to the wall, while the non-adhesive side allows for the robot's movement. The robot attaches to the tape using spikes located on the surface of its tracks. We developed an experimental prototype with a tracked locomotion mechanism weighing 1.2 kg, measuring 212 mm × 294 mm × 131 mm, and capable of carrying a payload of 2 kg. The battery life of the prototype is 3.5 h in standby mode and 1.8 h in moving mode. The prototype is controlled remotely through video transmission in manual mode and can move on both vertical and horizontal surfaces and transition between them. It has demonstrated the ability to move along a vertical surface, transition from a horizontal to a vertical surface, and recover from an unstable position in the case of a capsize. We used basic components and 3D printing in the manufacturing process, which suggests that the prototype could be further improved with different materials and components.</p></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142077506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}