{"title":"Graph neural network based method for robot path planning","authors":"Xingrong Diao , Wenzheng Chi , Jiankun Wang","doi":"10.1016/j.birob.2024.100147","DOIUrl":"10.1016/j.birob.2024.100147","url":null,"abstract":"<div><p>Sampling-based path planning is widely used in robotics, particularly in high-dimensional state spaces. In the path planning process, collision detection is the most time-consuming operation. Therefore, we propose a learning-based path planning method that reduces the number of collision checks. We develop an efficient neural network model based on graph neural networks. The model outputs weights for each neighbor based on the obstacle, searched path, and random geometric graph, which are used to guide the planner in avoiding obstacles. We evaluate the efficiency of the proposed path planning method through simulated random worlds and real-world experiments. The results demonstrate that the proposed method significantly reduces the number of collision checks and improves the path planning speed in high-dimensional environments.</p></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"4 1","pages":"Article 100147"},"PeriodicalIF":0.0,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667379724000056/pdfft?md5=b4eb5be9ef5e659e23e95cee095ff859&pid=1-s2.0-S2667379724000056-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139872778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human–robot object handover: Recent progress and future direction","authors":"Haonan Duan , Yifan Yang , Daheng Li , Peng Wang","doi":"10.1016/j.birob.2024.100145","DOIUrl":"10.1016/j.birob.2024.100145","url":null,"abstract":"<div><p>Human–robot object handover is one of the most primitive and crucial capabilities in human–robot collaboration. It is of great significance to promote robots to truly enter human production and life scenarios and serve human in numerous tasks. Remarkable progressions in the field of human–robot object handover have been made by researchers. This article reviews the recent literature on human–robot object handover. To this end, we summarize the results from multiple dimensions, from the role played by the robot (receiver or giver), to the end-effector of the robot (parallel-jaw gripper or multi-finger hand), to the robot abilities (grasp strategy or motion planning). We also implement a human–robot object handover system for anthropomorphic hand to verify human–robot object handover pipeline. This review aims to provide researchers and developers with a guideline for designing human–robot object handover methods.</p></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"4 1","pages":"Article 100145"},"PeriodicalIF":0.0,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667379724000032/pdfft?md5=4d89bd0f64c2a9404be91f48f25e3fe2&pid=1-s2.0-S2667379724000032-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139878209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Controlling a peristaltic robot inspired by inchworms","authors":"Yanhong Peng , Hiroyuki Nabae , Yuki Funabora , Koichi Suzumori","doi":"10.1016/j.birob.2024.100146","DOIUrl":"https://doi.org/10.1016/j.birob.2024.100146","url":null,"abstract":"<div><p>This study presents an innovative approach in soft robotics, focusing on an inchworm-inspired robot designed for enhanced transport capabilities. We explore the impact of various parameters on the robot’s performance, including the number of activated sections, object size and material, supplied air pressure, and command execution rate. Through a series of controlled experiments, we demonstrate that the robot can achieve a maximum transportation speed of 8.54 mm/s and handle loads exceeding 100 g, significantly outperforming existing models in both speed and load capacity. Our findings provide valuable insights into the optimization of soft robotic design for improved efficiency and adaptability in transport tasks. This research not only contributes to the advancement of soft robotics but also opens new avenues for practical applications in areas requiring precise and efficient object manipulation. The study underscores the potential of biomimetic designs in robotics and sets a new benchmark for future developments in the field.</p></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"4 1","pages":"Article 100146"},"PeriodicalIF":0.0,"publicationDate":"2024-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667379724000044/pdfft?md5=ed720fe9de4d4c0b2d1704b08f957681&pid=1-s2.0-S2667379724000044-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139985968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Continuous adaptive gaits manipulation for three-fingered robotic hands via bioinspired fingertip contact events","authors":"Xiaolong Ma , Jianhua Zhang , Binrui Wang , Jincheng Huang , Guanjun Bao","doi":"10.1016/j.birob.2024.100144","DOIUrl":"10.1016/j.birob.2024.100144","url":null,"abstract":"<div><p>The remarkable skill of changing its grasp status and relocating its fingers to perform continuous in-hand manipulation is essential for a multifingered anthropomorphic hand. A commonly utilized method of manipulation involves a series of basic movements executed by a high-level controller. However, it remains unclear how these primitives evolve into sophisticated finger gaits during manipulation. Here, we propose an adaptive finger gait-based manipulation method that offers real-time regulation by dynamically changing the primitive interval to ensure the force/moment balance of the object. Successful manipulation relies on contact events that act as triggers for real-time online replanning of multifinger manipulation. We identify four basic motion primitives of finger gaits and create a heuristic finger gait that enables the continuous object rotation of a round cup. Our experimental results verify the effectiveness of the proposed method. Despite the constant breaking and reengaging of contact between the fingers and the object during manipulation, the robotic hand can reliably manipulate the object without failure. Even when the object is subjected to interfering forces, the proposed method demonstrates robustness in managing interference. This work has great potential for application to the dexterous operation of anthropomorphic multifingered hands.</p></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"4 1","pages":"Article 100144"},"PeriodicalIF":0.0,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667379724000020/pdfft?md5=72792729389d12c2550af75794b08646&pid=1-s2.0-S2667379724000020-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139454062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep learning-based semantic segmentation of human features in bath scrubbing robots","authors":"Chao Zhuang , Tianyi Ma , Bokai Xuan , Cheng Chang , Baichuan An , Minghuan Yin , Hao Sun","doi":"10.1016/j.birob.2024.100143","DOIUrl":"10.1016/j.birob.2024.100143","url":null,"abstract":"<div><p>With the rise in the aging population, an increase in the number of semidisabled elderly individuals has been noted, leading to notable challenges in medical and healthcare, exacerbated by a shortage of nursing staff. This study aims to enhance the human feature recognition capabilities of bath scrubbing robots operating in a water fog environment. The investigation focuses on semantic segmentation of human features using deep learning methodologies. Initially, 3D point cloud data of human bodies with varying sizes are gathered through light detection and ranging to establish human models. Subsequently, a hybrid filtering algorithm was employed to address the impact of the water fog environment on the modeling and extraction of human regions. Finally, the network is refined by integrating the spatial feature extraction module and the channel attention module based on PointNet. The results indicate that the algorithm adeptly identifies feature information for 3D human models of diverse body sizes, achieving an overall accuracy of 95.7%. This represents a 4.5% improvement compared with the PointNet network and a 2.5% enhancement over mean intersection over union. In conclusion, this study substantially augments the human feature segmentation capabilities, facilitating effective collaboration with bath scrubbing robots for caregiving tasks, thereby possessing significant engineering application value.</p></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"4 1","pages":"Article 100143"},"PeriodicalIF":0.0,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667379724000019/pdfft?md5=c4ce6cc50edbff0cbe516fb4d722c566&pid=1-s2.0-S2667379724000019-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139631433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computer vision-based six layered ConvNeural network to recognize sign language for both numeral and alphabet signs","authors":"Muhammad Aminur Rahaman , Kabiratun Ummi Oyshe , Prothoma Khan Chowdhury , Tanoy Debnath , Anichur Rahman , Md. Saikat Islam Khan","doi":"10.1016/j.birob.2023.100141","DOIUrl":"10.1016/j.birob.2023.100141","url":null,"abstract":"<div><p>People who have trouble communicating verbally are often dependent on sign language, which can be difficult for most people to understand, making interaction with them a difficult endeavor. The Sign Language Recognition (SLR) system takes an input expression from a hearing or speaking-impaired person and outputs it in the form of text or voice to a normal person. The existing study related to the Sign Language Recognition system has some drawbacks, such as a lack of large datasets and datasets with a range of backgrounds, skin tones, and ages. This research efficiently focuses on Sign Language Recognition to overcome previous limitations. Most importantly, we use our proposed Convolutional Neural Network (CNN) model, “ConvNeural”, in order to train our dataset. Additionally, we develop our own datasets, “BdSL_OPSA22_STATIC1” and “BdSL_OPSA22_STATIC2”, both of which have ambiguous backgrounds. “BdSL_OPSA22_STATIC1” and “BdSL_OPSA22_STATIC2” both include images of Bangla characters and numerals, a total of 24,615 and 8437 images, respectively. The “ConvNeural” model outperforms the pre-trained models with accuracy of 98.38% for “BdSL_OPSA22_STATIC1” and 92.78% for “BdSL_OPSA22_STATIC2”. For “BdSL_OPSA22_STATIC1” dataset, we get precision, recall, F1-score, sensitivity and specificity of 96%, 95%, 95%, 99.31% , and 95.78% respectively. Moreover, in case of “BdSL_OPSA22_STATIC2” dataset, we achieve precision, recall, F1-score, sensitivity and specificity of 90%, 88%, 88%, 100%, and 100% respectively.</p></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"4 1","pages":"Article 100141"},"PeriodicalIF":0.0,"publicationDate":"2023-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667379723000554/pdfft?md5=eebeb918508ba2531b5fc2956421475e&pid=1-s2.0-S2667379723000554-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138619425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image format pipeline and instrument diagram recognition method based on deep learning","authors":"Guanqun Su , Shuai Zhao , Tao Li , Shengyong Liu , Yaqi Li , Guanglong Zhao , Zhongtao Li","doi":"10.1016/j.birob.2023.100142","DOIUrl":"10.1016/j.birob.2023.100142","url":null,"abstract":"<div><p>In this study, we proposed a recognition method based on deep artificial neural networks to identify various elements in pipelines and instrumentation diagrams (P&ID) in image formats, such as symbols, texts, and pipelines. Presently, the P&ID image format is recognized manually, and there is a problem with a high recognition error rate; therefore, automation of the above process is an important issue in the processing plant industry. The China National Offshore Petrochemical Engineering Co. provided the image set used in this study, which contains 51 P&ID drawings in the PDF. We converted the PDF P&ID drawings to PNG P&IDs with an image size of 8410 × 5940. In addition, we used labeling software to annotate the images, divided the dataset into training and test sets in a 3:1 ratio, and deployed a deep neural network for recognition. The method proposed in this study is divided into three steps. The first step segments the images and recognizes symbols using YOLOv5 + SE. The second step determines text regions using character region awareness for text detection, and performs character recognition within the text region using the optical character recognition technique. The third step is pipeline recognition using YOLOv5 + SE. The symbol recognition accuracy was 94.52%, and the recall rate was 93.27%. The recognition accuracy in the text positioning stage was 97.26% and the recall rate was 90.27%. The recognition accuracy in the character recognition stage was 90.03% and the recall rate was 91.87%. The pipeline identification accuracy was 92.9%, and the recall rate was 90.36%.</p></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"4 1","pages":"Article 100142"},"PeriodicalIF":0.0,"publicationDate":"2023-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667379723000566/pdfft?md5=9d3473b5d2acdf3a606cb65e7ef087e9&pid=1-s2.0-S2667379723000566-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138621153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LiDAR-based estimation of bounding box coordinates using Gaussian process regression and particle swarm optimization","authors":"Vinodha K., E.S. Gopi, Tushar Agnibhoj","doi":"10.1016/j.birob.2023.100140","DOIUrl":"https://doi.org/10.1016/j.birob.2023.100140","url":null,"abstract":"<div><p>Camera-based object tracking systems in a given closed environment lack privacy and confidentiality. In this study, light detection and ranging (LiDAR) was applied to track objects similar to the camera tracking in a closed environment, guaranteeing privacy and confidentiality. The primary objective was to demonstrate the efficacy of the proposed technique through carefully designed experiments conducted using two scenarios. In Scenario I, the study illustrates the capability of the proposed technique to detect the locations of multiple objects positioned on a flat surface, achieved by analyzing LiDAR data collected from several locations within the closed environment. Scenario II demonstrates the effectiveness of the proposed technique in detecting multiple objects using LiDAR data obtained from a single, fixed location. Real-time experiments are conducted with human subjects navigating predefined paths. Three individuals move within an environment, while LiDAR, fixed at the center, dynamically tracks and identifies their locations at multiple instances. Results demonstrate that a single, strategically positioned LiDAR can adeptly detect objects in motion around it. Furthermore, this study provides a comparison of various regression techniques for predicting bounding box coordinates. Gaussian process regression (GPR), combined with particle swarm optimization (PSO) for prediction, achieves the lowest prediction mean square error of all the regression techniques examined at 0.01. Hyperparameter tuning of GPR using PSO significantly minimizes the regression error. Results of the experiment pave the way for its extension to various real-time applications such as crowd management in malls, surveillance systems, and various Internet of Things scenarios.</p></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"4 1","pages":"Article 100140"},"PeriodicalIF":0.0,"publicationDate":"2023-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667379723000542/pdfft?md5=635b3e34ad837f8738911fa4e2cc14f0&pid=1-s2.0-S2667379723000542-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139090230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}