Robotics | Pub Date: 2023-11-17 | DOI: 10.3390/robotics12060155
Ahmed El-Dawy, A. El-Zawawi, Mohamed El-Habrouk
Title: MonoGhost: Lightweight Monocular GhostNet 3D Object Properties Estimation for Autonomous Driving
Abstract: Effective environmental perception is critical for autonomous driving; the perception system must therefore collect 3D information about surrounding objects, such as their dimensions, locations, and orientation in space. Recently, deep learning has been widely used in perception systems that convert image features from a camera into semantic information. This paper presents the MonoGhost network, a lightweight Monocular GhostNet deep learning technique for full 3D object property estimation from a single monocular frame. Unlike other techniques, the proposed MonoGhost network first estimates relatively reliable 3D object properties using an efficient feature extractor. The network estimates the orientation of the 3D object as well as its 3D dimensions, yielding reasonably small dimension estimation errors compared with other networks. These estimates, combined with the translation projection constraints imposed by the 2D detection coordinates, allow the prediction of a robust and dependable Bird’s Eye View bounding box. The experimental outcomes show that the proposed MonoGhost network outperforms other state-of-the-art networks on the Bird’s Eye View benchmark of the KITTI dataset, scoring 16.73% on the moderate class and 15.01% on the hard class while preserving real-time requirements.
Robotics | Pub Date: 2023-11-14 | DOI: 10.3390/robotics12060154
Julio Vargas-Riaño, Óscar Agudelo-Varela, Ángel Valera
Title: Applying Screw Theory to Design the Turmell-Bot: A Cable-Driven, Reconfigurable Ankle Rehabilitation Parallel Robot
Abstract: The ankle is a complex joint with a high injury incidence, and rehabilitation robotics applied to the ankle is a very active research field. We present the kinematics and statics of a cable-driven, reconfigurable ankle rehabilitation robot. First, we studied how the tendons pull mid-foot bones around the talocrural and subtalar axes, and we proposed a hybrid serial-parallel mechanism analogous to the ankle. Then, using screw theory, we synthesized a cable-driven robot with the human ankle in the closed-loop kinematics. We incorporated a draw-wire sensor to measure the pose of the axes and compute the product of exponentials. We also reconfigured the cables to balance the tension and pressure forces using the projection of the axes on the base and platform planes. Furthermore, we computed the workspace to show that the reconfigurable design fits several body sizes, using anthropometric and statistical data. Finally, we validated the robot’s statics with MuJoCo for various cable length groups corresponding to the axes’ range of motion, and we suggested a platform adjustment system and an alignment method. The design is lightweight, and the cable-driven robot has advantages over rigid parallel robots such as Stewart platforms. Future work will use compliant actuators to enhance human–robot interaction.
Robotics | Pub Date: 2023-11-13 | DOI: 10.3390/robotics12060152
Gianmarco Cirelli, Christian Tamantini, Luigi Pietro Cordella, Francesca Cordella
Title: A Semiautonomous Control Strategy Based on Computer Vision for a Hand–Wrist Prosthesis
Abstract: Alleviating the burden on amputees in terms of high-level control of their prosthetic devices is an open research challenge. EMG-based intention detection presents limitations due to movement artifacts, fatigue, and signal instability. Integrating exteroceptive sensing can provide a valuable solution to overcome such limitations. In this paper, a novel semiautonomous control system (SCS) for wrist–hand prostheses using a computer vision system (CVS) is proposed and validated. The SCS integrates object detection, grasp selection, and wrist orientation estimation algorithms. By combining the CVS with a simulated EMG-based intention detection module, the SCS guarantees reliable prosthesis control. Results show high accuracy in grasping and object classification (≥97%) at a fast frame analysis rate (2.07 FPS). The SCS achieves an average angular estimation error of ≤18° and a stability of ≤0.8° for the proposed application. Operative tests demonstrate the capability of the proposed approach to handle complex real-world scenarios and pave the way for future implementation on a real prosthetic device.
Robotics | Pub Date: 2023-11-13 | DOI: 10.3390/robotics12060153
Daifeng Wang, Wenjing Cao, Atsuo Takanishi
Title: Dual-Quaternion-Based SLERP MPC Local Controller for Safe Self-Driving of Robotic Wheelchairs
Abstract: This work proposes a motion control approach that allows a robotic wheelchair to move safely and intelligently in an unknown scenario. The primary objective is to develop a comprehensive framework for a robotic wheelchair that combines a global path planner with a model predictive control (MPC) local controller. The A* algorithm is employed to generate a global path. To ensure safe and directional motion for the wheelchair user, an MPC local controller is implemented that takes into account via points generated by an approach combining dual quaternions and spherical linear interpolation (SLERP). Dual quaternions are used because they handle rotation and translation simultaneously, while SLERP enables smooth and continuous rotation interpolation by generating intermediate orientations between two specified orientations. The integration of these two methods optimizes navigation performance. The system is built on the Robot Operating System (ROS), with an electric wheelchair equipped with 3D LiDAR serving as the hardware foundation. The experimental results demonstrate the effectiveness of the proposed method and the ability of the robotic wheelchair to move safely from the initial position to the destination. This work contributes to the development of effective motion control for robotic wheelchairs, focusing on safety and improving the user experience when navigating unknown environments.
Robotics | Pub Date: 2023-11-08 | DOI: 10.3390/robotics12060150
Hana Choi, Tongil Park, Gyomin Hwang, Youngji Ko, Dohun Lee, Taeksu Lee, Jong-Oh Park, Doyeon Bang
Title: Fabrication of Origami Soft Gripper Using On-Fabric 3D Printing
Abstract: In this work, we present a soft encapsulating gripper for gentle grasps, enabled by a series of soft origami patterns, such as the Yoshimura pattern, printed directly on fabric. The proposed gripper features a deformable body that enables safe interaction with its surroundings and gentle grasps of delicate and fragile objects, and its encapsulated structure allows for noninvasive enclosing. The gripper was fabricated by direct 3D printing of soft materials on fabric, which allowed the stiffness of the gripper components to be adjusted with a simple fabrication process. We evaluated the grasping performance of the proposed gripper with several delicate and ultra-gentle objects. The proposed gripper could manipulate delicate objects ranging from fruits to silicone jellyfish and therefore has considerable potential for use as an improved soft encapsulating gripper in agriculture and engineering fields.
Robotics | Pub Date: 2023-11-08 | DOI: 10.3390/robotics12060151
Franco Jorquera, Juan Estrada, Fernando Auat
Title: Remote Instantaneous Power Consumption Estimation of Electric Vehicles from Satellite Information
Abstract: Instantaneous Power Consumption (IPC) is relevant for understanding the autonomy and efficient energy usage of electric vehicles (EVs). However, effective vehicle management requires knowing in advance whether a vehicle can complete a trajectory, which calls for estimating the IPC along it. This paper proposes an IPC estimation method for an EV based on satellite information. The methodology involves geolocation and georeferencing of the study area, trajectory planning, extracting altitude characteristics from the map to create an altitude profile, collecting terrain features, and ultimately calculating the IPC. The most accurate estimation was achieved on clay terrain, with a 5.43% error compared to measurements; for pavement and gravel terrains, errors of 19.19% and 102.02% were obtained, respectively. This methodology provides IPC estimation on three different terrains using satellite information and is corroborated with field experiments, showcasing its potential for EV management in industrial contexts.
Robotics | Pub Date: 2023-11-07 | DOI: 10.3390/robotics12060149
Prem Kumar Mathavan Jeyabalan, Aravind Nehrujee, Samuel Elias, M. Magesh Kumar, S. Sujatha, Sivakumar Balasubramanian
Title: Design and Characterization of a Self-Aligning End-Effector Robot for Single-Joint Arm Movement Rehabilitation
Abstract: Traditional end-effector robots for arm rehabilitation are usually attached at the hand and primarily focus on coordinated multi-joint training. Therapy at the individual joint level of the arm for severely impaired stroke survivors is not always possible with existing end-effector robots. The Arm Rehabilitation Robot (AREBO), an end-effector robot, was designed to provide both single- and multi-joint assisted training while retaining the advantages of traditional end-effector robots, such as ease of use, compactness and portability, and potential cost-effectiveness (compared to exoskeletons). This work presents the design, optimization, and characterization of AREBO for training single-joint movements of the arm. AREBO has three actuated and three unactuated degrees of freedom, allowing it to apply forces in any arbitrary direction at its endpoint and to self-align to arbitrary orientations within its workspace. Its link lengths were optimized to maximize workspace and manipulability. AREBO provides single-joint training in both unassisted and adaptive weight support modes, using a human arm model to estimate the arm’s kinematics and dynamics without additional sensors. The robot’s controller and the human arm parameter estimation algorithm were characterized using a two-degrees-of-freedom mechatronic model of the human shoulder joint. The results demonstrate that (a) the movements of the human arm can be estimated using a model of the human arm and the robot’s kinematics, (b) AREBO has transparency similar to that of existing arm therapy robots in the literature, and (c) the adaptive weight support mode can adapt to different levels of impairment in the arm. This work demonstrates how an appropriately designed end-effector robot can be used for single-joint training, which can be easily extended to multi-joint training. Future work will focus on evaluating the system on patients with any neurological condition requiring arm training.
Robotics | Pub Date: 2023-10-31 | DOI: 10.3390/robotics12060148
Giuseppe Vitrani, Simone Cortinovis, Luca Fiorio, Marco Maggiali, Rocco Antonio Romeo
Title: Improving the Grasping Force Behavior of a Robotic Gripper: Model, Simulations, and Experiments
Abstract: Robotic grippers allow industrial robots to interact with the surrounding environment. However, grasping force control architectures are still rare in common industrial grippers. Such architectures require one or more sensors (e.g., force or torque sensors), but incorporating them can heavily affect the cost of the gripper, regardless of its type (e.g., pneumatic or electric). An alternative approach is open-loop force control. Hence, this work proposes an approach for optimizing the open-loop grasping force behavior of a robotic gripper. For this purpose, a specialized robotic gripper was built, along with its mathematical model. The model was employed to predict the gripper performance during both static and dynamic force characterization, simulating grasping tasks under different experimental conditions. Both simulated and experimental results showed that by managing the mechanical properties of the finger–object contact interface (e.g., its stiffness), the steady-state force variability can be greatly reduced, along with undesired effects such as finger bouncing. Further, unlike most grasping approaches for rigid industrial grippers, which often involve high finger velocities, knowledge of the object’s size is not required. These results may pave the way toward cheaper and more reliable open-loop force control techniques for robotic grippers.
Robotics | Pub Date: 2023-10-28 | DOI: 10.3390/robotics12060147
Chris Lytridis, Christos Bazinas, Ioannis Kalathas, George Siavalas, Christos Tsakmakis, Theodoros Spirantis, Eftichia Badeka, Theodore Pachidis, Vassilis G. Kaburlasos
Title: Cooperative Grape Harvesting Using Heterogeneous Autonomous Robots
Abstract: The development of agricultural robots is an increasingly popular research field aimed at addressing widespread labor shortages in the farming industry and ever-increasing food production demands. In many cases, multiple cooperating robots can be deployed to reduce task duration, perform an operation not possible with a single robot, or perform an operation more effectively. Building on previous results, this application paper presents and demonstrates a cooperation strategy that allows two heterogeneous robots to cooperatively carry out grape harvesting. More specifically, the cooperative grape harvesting task involves two heterogeneous robots: one robot (the expert) is assigned the grape harvesting task, whereas the second robot (the helper) supports the harvesting task by carrying the harvested grapes. The proposed cooperative harvesting methodology ensures safe and effective interactions between the robots. Field experiments were conducted, first to validate the effectiveness of the coordinated navigation algorithm and second to demonstrate the proposed cooperative harvesting method. The paper reports the conclusions drawn from the field experiments and makes recommendations for future enhancements. The potential of sophisticated as well as explainable logic-based decision-making for enhancing the cooperation of autonomous robots in agricultural applications is discussed in the context of mathematical lattice theory.
{"title":"An Autonomous Navigation Framework for Holonomic Mobile Robots in Confined Agricultural Environments","authors":"Kosmas Tsiakas, Alexios Papadimitriou, Eleftheria Maria Pechlivani, Dimitrios Giakoumis, Nikolaos Frangakis, Antonios Gasteratos, Dimitrios Tzovaras","doi":"10.3390/robotics12060146","DOIUrl":"https://doi.org/10.3390/robotics12060146","url":null,"abstract":"Due to the accelerated growth of the world’s population, food security and sustainable agricultural practices have become essential. The incorporation of Artificial Intelligence (AI)-enabled robotic systems in cultivation, especially in greenhouse environments, represents a promising solution, where the utilization of the confined infrastructure improves the efficacy and accuracy of numerous agricultural duties. In this paper, we present a comprehensive autonomous navigation architecture for holonomic mobile robots in greenhouses. Our approach utilizes the heating system rails to navigate through the crop rows using a single stereo camera for perception and a LiDAR sensor for accurate distance measurements. A finite state machine orchestrates the sequence of required actions, enabling fully automated task execution, while semantic segmentation provides essential cognition to the robot. Our approach has been evaluated in a real-world greenhouse using a custom-made robotic platform, showing its overall efficacy for automated inspection tasks in greenhouses.","PeriodicalId":37568,"journal":{"name":"Robotics","volume":"22 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136232179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}