{"title":"Computer-controlled ultra high voltage amplifier for dielectric elastomer actuators","authors":"Ardi Wiranata , Zebing Mao , Yu Kuwajima , Yuya Yamaguchi , Muhammad Akhsin Muflikhun , Hiroki Shigemune , Naoki Hosoya , Shingo Maeda","doi":"10.1016/j.birob.2023.100139","DOIUrl":"https://doi.org/10.1016/j.birob.2023.100139","url":null,"abstract":"<div><p>Soft robotics is a breakthrough technology for supporting human–robot interactions. The soft structure of a soft robot can increase safety during human–robot interactions. One promising soft actuator for soft robotics is the dielectric elastomer actuator (DEA). DEAs can operate silently and have an excellent energy density, and their simple structure makes fabrication easy. This simplicity, combined with silent operation and high energy density, makes DEAs attractive to soft robotics researchers. DEA actuation follows the Maxwell stress principle: the pressure produced during actuation depends strongly on the applied voltage. Typical DEAs require high voltage to actuate, but since their power consumption is in the milliwatt range, the current needed to operate them is negligible. Several commercially available DC-DC converters can step voltages up from the volt range to the kV range; however, a converter reliable enough to reach 2–3 kV can be pricey for each device. This cost hinders education on soft actuators, especially for a newcomer laboratory working on soft electric actuators. This paper introduces an entirely do-it-yourself (DIY) ultra-high-voltage amplifier (UHV-Amp) for education in soft robotics. The UHV-Amp can amplify 12 V to a maximum of 4 kV DC. As a demonstration, we used the UHV-Amp to test a single layer of powder-based DEAs. Our strategy for building this educational UHV-Amp was to use a Cockcroft–Walton circuit to step the voltage up to the kV range. In its current state, the UHV-Amp can reach approximately 4 kV. We created a simple platform to control the UHV-Amp from a personal computer. In the near future, we expect that this easy control of the UHV-Amp will contribute to education on soft electric actuators.</p></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"4 1","pages":"Article 100139"},"PeriodicalIF":0.0,"publicationDate":"2023-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667379723000530/pdfft?md5=918de8d63135576758e24a01f703e9af&pid=1-s2.0-S2667379723000530-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139090229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Aye-aye middle finger kinematic modeling and motion tracking during tap-scanning","authors":"Nihar Masurkar , Jiming Kang , Hamidreza Nemati , Ehsan Dehghan-Niri","doi":"10.1016/j.birob.2023.100134","DOIUrl":"10.1016/j.birob.2023.100134","url":null,"abstract":"<div><p>The aye-aye (Daubentonia madagascariensis) is a nocturnal lemur, native to the island of Madagascar, with a uniquely thin middle finger. This slender third digit has a remarkably specific adaptation that allows the animal to perform tap-scanning to locate small cavities beneath tree bark and extract wood-boring larvae from them. As an exceptional active acoustic actuator, this finger makes the aye-aye’s biological system an attractive model for pioneering Nondestructive Evaluation (NDE) methods and robotic systems. Despite the importance of this finger to the aye-aye’s unique foraging behavior and its potential contributions to engineered sensing, little is known about its mechanism and dynamics. This paper used a motion-tracking approach for the aye-aye’s middle finger based on simultaneous videographic capture. To mimic the motion, a two-link robot arm model was designed to reproduce the trajectory. Kinematic formulations based on the Lagrangian method were proposed to derive the motion of the middle finger. In addition, a hardware model was developed to simulate the aye-aye’s finger motion. To validate the model, different motion states, such as trajectory paths and joint angles, were compared. The simulation results indicate that the kinematics of the model were consistent with the actual finger movement. 
This model is used to understand the aye-aye’s unique tap-scanning process for pioneering new tap-testing NDE strategies for various inspection applications.</p></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"3 4","pages":"Article 100134"},"PeriodicalIF":0.0,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667379723000487/pdfft?md5=64a88634d026b11caf9e009364209eb4&pid=1-s2.0-S2667379723000487-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135763587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A framework to develop and test a model-free motion control system for a forestry crane","authors":"Pedro La Hera , Omar Mendoza-Trejo , Håkan Lideskog , Daniel Ortíz Morales","doi":"10.1016/j.birob.2023.100133","DOIUrl":"https://doi.org/10.1016/j.birob.2023.100133","url":null,"abstract":"<div><p>This article presents our method to develop and test a motion control system for a heavy-duty hydraulically actuated manipulator, part of a newly developed prototype of a fully autonomous unmanned forestry machine. The control algorithm is based on functional analysis and differential algebra, following a new type of approach known as model-free intelligent PID control (iPID). As it can be unsafe to test this form of control directly on real hardware, our main contribution is a framework for developing and testing control software. This framework incorporates a desktop-size mockup crane equipped with hardware comparable to the real crane’s, which we designed and manufactured using 3D printing. This downscaled mechatronic system allows us to safely test the implementation of control software on real-time hardware directly at our desks, prior to testing on the real machine. 
The results demonstrate that this development framework is useful for safely testing control software for heavy-duty systems, and it helped us present the first experiments with the world’s first unmanned forestry machine capable of performing fully autonomous forestry tasks.</p></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"3 4","pages":"Article 100133"},"PeriodicalIF":0.0,"publicationDate":"2023-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667379723000475/pdfft?md5=9177b5eeb292d107fd475cafba14e2b3&pid=1-s2.0-S2667379723000475-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134663213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Heterogeneous multi-agent task allocation based on graph neural network ant colony optimization algorithms","authors":"Ziyuan Ma, Huajun Gong","doi":"10.20517/ir.2023.33","DOIUrl":"https://doi.org/10.20517/ir.2023.33","url":null,"abstract":"Heterogeneous multi-agent task allocation is a key optimization problem widely encountered in fields such as drone swarms and multi-robot coordination. This paper proposes a new paradigm that combines graph neural networks with ant colony optimization to solve the assignment problem for heterogeneous multi-agent systems, introducing the Graph-based Heterogeneous Neural Network Ant Colony Optimization (GHNN-ACO) algorithm. The multi-agent system is composed of unmanned aerial vehicles, unmanned ships, and unmanned vehicles that work together to respond effectively to emergencies. The method uses graph neural networks to learn the relationships between tasks and agents, forming a graph representation that is then integrated into the ant colony optimization algorithm to guide the ants’ search process. Firstly, the algorithm constructs heterogeneous graph data containing different types of agents and their relationships, and uses these data to classify agent nodes and predict links between them. Secondly, the GHNN-ACO algorithm performs effectively in heterogeneous multi-agent scenarios, providing an effective solution for node classification and link prediction tasks in intelligent agent systems. Thirdly, the algorithm achieves an accuracy rate of 95.31% in assigning multiple tasks to multiple agents. 
It holds promise for application in emergency response and provides a new approach to multi-agent system cooperation.","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"3 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135808939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FPC-BTB detection and positioning system based on optimized YOLOv5","authors":"Changyu Jing , Tianyu Fu , Fengming Li , Ligang Jin , Rui Song","doi":"10.1016/j.birob.2023.100132","DOIUrl":"https://doi.org/10.1016/j.birob.2023.100132","url":null,"abstract":"<div><p>With the aim of addressing the visual positioning problem of board-to-board (BTB) jacks during the automatic assembly of flexible printed circuit (FPC) in mobile phones, an FPC-BTB jack detection method based on the optimized You Only Look Once, version 5 (YOLOv5) deep learning algorithm was proposed in this study. An FPC-BTB jack real-time detection and positioning system was developed for the real-time target detection and pose output synchronization of the BTB jack. On that basis, a visual positioning experimental platform that integrated a UR5e manipulator arm and Hikvision industrial camera was built for BTB jack detection and positioning experiments. As indicated by the experimental results, the developed FPC-BTB jack detection and positioning system for BTB target recognition and positioning achieved a success rate of 99.677%. 
Its average detection accuracy reached 99.341%, the average confidence of the detected target was 91%, the detection and positioning speed reached 31.25 frames per second, and the positioning deviation was less than 0.93 mm, which conforms to the practical application requirements of the FPC assembly process.</p></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"3 4","pages":"Article 100132"},"PeriodicalIF":0.0,"publicationDate":"2023-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667379723000463/pdfft?md5=389029475c5fb205080a541f55997139&pid=1-s2.0-S2667379723000463-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134663215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Path planning with obstacle avoidance for soft robots based on improved particle swarm optimization algorithm","authors":"Hongwei Liu, Yang Jiang, Manlu Liu, Xinbin Zhang, Jianwen Huo, Haoxiang Su","doi":"10.20517/ir.2023.31","DOIUrl":"https://doi.org/10.20517/ir.2023.31","url":null,"abstract":"Soft-bodied robots have the advantages of high flexibility and multiple degrees of freedom, and they have promising applications in exploring complex unstructured environments. During motion planning in complex spatial environments, kinematic coupling exists between the arm segments of a soft robot. Solving the soft robot's inverse kinematics may yield multiple solutions or no solution at all, and obstacle-avoidance control is difficult to achieve, among other problems. In this paper, we use the piecewise constant curvature assumption to derive the forward and inverse kinematic relationships, and we design a tip self-growth algorithm that reduces the difficulty of solving the inverse-kinematics parameters of the soft robot and avoids kinematic coupling. Finally, an improved particle swarm optimization algorithm is used to optimize the paths, further improving the convergence speed and solution accuracy. 
The simulation results show that the method can successfully move the soft robot through complex spaces with high computational efficiency and accuracy, verifying the effectiveness of the research.","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"29 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136135608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep learning approaches for object recognition in plant diseases: a review","authors":"Zimo Zhou, Yue Zhang, Zhaohui Gu, Simon X. Yang","doi":"10.20517/ir.2023.29","DOIUrl":"https://doi.org/10.20517/ir.2023.29","url":null,"abstract":"Plant diseases pose a significant threat to the economic viability of agriculture and the normal functioning of trees in forests. Accurate detection and identification of plant diseases are crucial for smart agricultural and forestry management. In recent years, the intersection of agriculture and artificial intelligence has become a popular research topic. Researchers have been experimenting with object recognition algorithms, specifically convolutional neural networks, to identify diseases in plant images. The goal is to reduce labor and improve detection efficiency. This article reviews the application of object detection methods for detecting common plant diseases, such as tomato, citrus, maize, and pine trees. It introduces various object detection models, ranging from basic to modern and sophisticated networks, and compares the innovative aspects and drawbacks of commonly used neural network models. Furthermore, the article discusses current challenges in plant disease detection and object detection methods and suggests promising directions for future work in learning-based plant disease detection systems.","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"6 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136231677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cooperative search for moving targets with the ability to perceive and evade using multiple UAVs","authors":"Ziyi Wang, Jian Guo, Wencheng Zou, Sheng Li","doi":"10.20517/ir.2023.30","DOIUrl":"https://doi.org/10.20517/ir.2023.30","url":null,"abstract":"This paper focuses on the problem of regional cooperative search using multiple unmanned aerial vehicles (UAVs) for targets that have the ability to perceive and evade. When UAVs search for moving targets in a mission area, the targets can perceive the positions and flight direction of UAVs within certain limits and take corresponding evasive actions, which makes the search more challenging than traditional search problems. To address this problem, we first define a detailed motion model for such targets and design various search information maps and their update methods to describe the environmental information based on the prediction of moving targets and the search results of UAVs. We then establish a multi-UAV search path planning optimization model based on the model predictive control, which includes various newly designed objective functions of search benefits and costs. We propose a priority-encoded improved genetic algorithm with a fine-adjustment mechanism to solve this model. 
The simulation results show that the proposed method can effectively improve the cooperative search efficiency, and more targets can be found at a much faster rate compared to traditional search methods.","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"30 7","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136231823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Large language models for human–robot interaction: A review","authors":"Ceng Zhang , Junxin Chen , Jiatong Li , Yanhong Peng , Zebing Mao","doi":"10.1016/j.birob.2023.100131","DOIUrl":"10.1016/j.birob.2023.100131","url":null,"abstract":"<div><p>The fusion of large language models and robotic systems has introduced a transformative paradigm in human–robot interaction, offering unparalleled capabilities in natural language understanding and task execution. This review paper offers a comprehensive analysis of this nascent but rapidly evolving domain, spotlighting the recent advances of Large Language Models (LLMs) in enhancing their structures and performance, particularly in terms of multimodal input handling, high-level reasoning, and plan generation. Moreover, it probes the current methodologies that integrate LLMs into robotic systems for complex task completion, from traditional probabilistic models to the utilization of value functions and metrics for optimal decision-making. Despite these advancements, the paper also reveals the formidable challenges that confront the field, such as contextual understanding, data privacy, and ethical considerations. 
To the best of our knowledge, this is the first study to comprehensively analyze the advances and considerations of LLMs in Human–Robot Interaction (HRI) based on recent progress, providing potential avenues for further research.</p></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"3 4","pages":"Article 100131"},"PeriodicalIF":0.0,"publicationDate":"2023-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667379723000451/pdfft?md5=af36e667ec63efae63765b692d7a9e91&pid=1-s2.0-S2667379723000451-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136160534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Muscle synergy analysis for gesture recognition based on sEMG images and Shapley value","authors":"Xiaohu Ao, Feng Wang, Rennong Wang, Jinhua She","doi":"10.20517/ir.2023.28","DOIUrl":"https://doi.org/10.20517/ir.2023.28","url":null,"abstract":"Muscle synergy analysis for gesture recognition is a fundamental research area in human-machine interaction, particularly in fields such as rehabilitation. However, previous methods for analyzing muscle synergy are typically not end-to-end and lack interpretability. Specifically, these methods involve extracting specific features for gesture recognition from surface electromyography (sEMG) signals and then conducting muscle synergy analysis based on those features. Addressing these limitations, we devised an end-to-end framework, namely Shapley-value-based muscle synergy (SVMS), for muscle synergy analysis. Our approach involves converting sEMG signals into grayscale sEMG images using a sliding window. Subsequently, we convert adjacent grayscale images into color images for gesture recognition. We then use the gradient-weighted class activation mapping (Grad-CAM) method to identify significant feature areas for sEMG images during gesture recognition. Grad-CAM generates a heatmap representation of the images, highlighting the regions that the model uses to make its prediction. Finally, we conduct a quantitative analysis of muscle synergy in the specific area obtained by Grad-CAM based on the Shapley value. The experimental results demonstrate the effectiveness of our SVMS method for muscle synergy analysis. 
Moreover, we are able to achieve a recognition accuracy of 94.26% for twelve gestures while reducing the required electrode channel information from ten to six dimensions and the analysis rounds from about 1000 to nine.","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"61 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135321783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}