Biomimetic Intelligence and Robotics: Latest Articles

Deep learning-based semantic segmentation of human features in bath scrubbing robots
Biomimetic Intelligence and Robotics Pub Date : 2024-01-11 DOI: 10.1016/j.birob.2024.100143
Chao Zhuang, Tianyi Ma, Bokai Xuan, Cheng Chang, Baichuan An, Minghuan Yin, Hao Sun
With the rise in the aging population, the number of semi-disabled elderly individuals has increased, posing notable challenges for medical care and nursing that are exacerbated by a shortage of nursing staff. This study aims to enhance the human-feature recognition capability of bath scrubbing robots operating in a water-fog environment, focusing on semantic segmentation of human features with deep learning. First, 3D point-cloud data of human bodies of varying sizes are gathered by light detection and ranging (LiDAR) to establish human models. A hybrid filtering algorithm is then employed to counter the impact of the water-fog environment on the modeling and extraction of human regions. Finally, the network is refined by integrating a spatial feature extraction module and a channel attention module into PointNet. The results indicate that the algorithm reliably identifies feature information for 3D human models of diverse body sizes, achieving an overall accuracy of 95.7%: a 4.5% improvement in overall accuracy and a 2.5% improvement in mean intersection over union relative to the baseline PointNet network. In conclusion, this study substantially augments human-feature segmentation, enabling effective collaboration with bath scrubbing robots in caregiving tasks, and thus has significant engineering application value. (Volume 4, Issue 1, Article 100143)
Citations: 0
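The channel attention module the authors add to PointNet is not detailed in the abstract; purely as an illustration, a squeeze-and-excitation-style channel attention over per-point features can be sketched in NumPy (layer sizes and weights below are arbitrary assumptions, not the paper's):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(features, w1, w2):
    """SE-style channel attention over per-point features.

    features: (N, C) matrix of N point features with C channels.
    w1: (C, C//r), w2: (C//r, C) bottleneck weights (r = reduction).
    Returns the features with each channel reweighted by a learned gate.
    """
    # Squeeze: global average pool over points -> one descriptor per channel.
    z = features.mean(axis=0)                    # (C,)
    # Excitation: bottleneck MLP producing per-channel gates in (0, 1).
    s = sigmoid(np.maximum(z @ w1, 0.0) @ w2)    # (C,)
    # Scale: reweight every point's features channel-wise.
    return features * s

rng = np.random.default_rng(0)
C = 8
feats = rng.normal(size=(100, C))
w1 = rng.normal(size=(C, C // 4))
w2 = rng.normal(size=(C // 4, C))
out = channel_attention(feats, w1, w2)
print(out.shape)  # (100, 8)
```

Because the gates lie in (0, 1), attention can only attenuate channels here; a trained network would learn which channels to suppress.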
Computer vision-based six layered ConvNeural network to recognize sign language for both numeral and alphabet signs
Biomimetic Intelligence and Robotics Pub Date : 2023-12-09 DOI: 10.1016/j.birob.2023.100141
Muhammad Aminur Rahaman, Kabiratun Ummi Oyshe, Prothoma Khan Chowdhury, Tanoy Debnath, Anichur Rahman, Md. Saikat Islam Khan
People who have trouble communicating verbally often depend on sign language, which most people cannot understand, making interaction with them difficult. A Sign Language Recognition (SLR) system takes an input expression from a hearing- or speech-impaired person and outputs it as text or voice for a hearing person. Existing work on SLR suffers from a lack of large datasets and of datasets spanning a range of backgrounds, skin tones, and ages; this research addresses those limitations. Most importantly, we train on our proposed Convolutional Neural Network (CNN) model, "ConvNeural". Additionally, we develop our own datasets, "BdSL_OPSA22_STATIC1" and "BdSL_OPSA22_STATIC2", both with ambiguous backgrounds, containing images of Bangla characters and numerals: 24,615 and 8,437 images, respectively. The "ConvNeural" model outperforms pre-trained models, with accuracies of 98.38% on "BdSL_OPSA22_STATIC1" and 92.78% on "BdSL_OPSA22_STATIC2". On the "BdSL_OPSA22_STATIC1" dataset, precision, recall, F1-score, sensitivity, and specificity are 96%, 95%, 95%, 99.31%, and 95.78%, respectively; on "BdSL_OPSA22_STATIC2", they are 90%, 88%, 88%, 100%, and 100%, respectively. (Volume 4, Issue 1, Article 100141)
Citations: 0
Image format pipeline and instrument diagram recognition method based on deep learning
Biomimetic Intelligence and Robotics Pub Date : 2023-12-08 DOI: 10.1016/j.birob.2023.100142
Guanqun Su, Shuai Zhao, Tao Li, Shengyong Liu, Yaqi Li, Guanglong Zhao, Zhongtao Li
In this study, we propose a recognition method based on deep neural networks to identify the various elements of piping and instrumentation diagrams (P&amp;ID) in image format, such as symbols, text, and pipelines. At present, image-format P&amp;IDs are recognized manually, with a high recognition error rate, so automating the process is an important issue for the processing-plant industry. The China National Offshore Petrochemical Engineering Co. provided the image set used in this study, which contains 51 P&amp;ID drawings in PDF format. We converted them to PNG images of size 8410 × 5940, annotated them with labeling software, divided the dataset into training and test sets in a 3:1 ratio, and deployed a deep neural network for recognition. The proposed method has three steps. The first segments the images and recognizes symbols using YOLOv5 + SE. The second locates text regions using character-region awareness for text detection and recognizes the characters within each region using optical character recognition (OCR). The third recognizes pipelines, again using YOLOv5 + SE. Symbol recognition achieved 94.52% accuracy and 93.27% recall; text positioning achieved 97.26% accuracy and 90.27% recall; character recognition achieved 90.03% accuracy and 91.87% recall; and pipeline identification achieved 92.9% accuracy and 90.36% recall. (Volume 4, Issue 1, Article 100142)
Citations: 0
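Detection accuracy and recall for symbol and pipeline recognition in YOLO-style detectors are typically scored by intersection over union (IoU) between predicted and ground-truth boxes; a minimal sketch of the metric:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap).
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.142857...
```

A prediction is usually counted as a true positive when its IoU with a ground-truth box of the same class exceeds a threshold such as 0.5.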
LiDAR-based estimation of bounding box coordinates using Gaussian process regression and particle swarm optimization
Biomimetic Intelligence and Robotics Pub Date : 2023-11-27 DOI: 10.1016/j.birob.2023.100140
Vinodha K., E.S. Gopi, Tushar Agnibhoj
Camera-based object-tracking systems in a closed environment lack privacy and confidentiality. In this study, light detection and ranging (LiDAR) was applied to track objects, much as a camera would, in a closed environment while guaranteeing privacy and confidentiality. The primary objective was to demonstrate the efficacy of the proposed technique through carefully designed experiments under two scenarios. Scenario I illustrates the technique's ability to detect the locations of multiple objects positioned on a flat surface by analyzing LiDAR data collected from several locations within the closed environment. Scenario II demonstrates its effectiveness in detecting multiple objects using LiDAR data obtained from a single, fixed location. Real-time experiments were conducted with human subjects navigating predefined paths: three individuals move within the environment while a LiDAR unit fixed at the center dynamically tracks and identifies their locations at multiple instants. The results demonstrate that a single, strategically positioned LiDAR can reliably detect objects moving around it. The study also compares several regression techniques for predicting bounding-box coordinates. Gaussian process regression (GPR) combined with particle swarm optimization (PSO) achieves the lowest prediction mean square error of all techniques examined, at 0.01; tuning the GPR hyperparameters with PSO significantly reduces the regression error. These results pave the way for extensions to real-time applications such as crowd management in malls, surveillance systems, and various Internet of Things scenarios. (Volume 4, Issue 1, Article 100140)
Citations: 0
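The abstract does not say how PSO tunes the GPR hyperparameters; as a hedged sketch of the general idea, the toy example below uses a minimal PSO to pick an RBF length scale that minimizes held-out MSE of a NumPy-only GPR on synthetic 1-D data (all data, swarm sizes, and constants are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(xa, xb, ell):
    """RBF (squared-exponential) kernel matrix between 1-D point sets."""
    d = xa[:, None] - xb[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gpr_mse(ell, x_tr, y_tr, x_te, y_te, noise=1e-3):
    """Held-out MSE of GPR mean predictions for a given length scale."""
    K = rbf(x_tr, x_tr, ell) + noise * np.eye(len(x_tr))
    alpha = np.linalg.solve(K, y_tr)          # K^-1 y
    pred = rbf(x_te, x_tr, ell) @ alpha       # posterior mean at test points
    return float(np.mean((pred - y_te) ** 2))

# Toy 1-D signal standing in for a bounding-box coordinate over time.
x = np.linspace(0, 6, 60)
y = np.sin(x)
x_tr, y_tr, x_te, y_te = x[::2], y[::2], x[1::2], y[1::2]

# Minimal PSO over the single hyperparameter (the length scale).
n, iters = 12, 30
pos = rng.uniform(0.05, 3.0, n)
vel = np.zeros(n)
pbest = pos.copy()
pbest_f = np.array([gpr_mse(p, x_tr, y_tr, x_te, y_te) for p in pos])
g = pbest[np.argmin(pbest_f)]
for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    vel = 0.6 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, 1e-3, 10.0)
    f = np.array([gpr_mse(p, x_tr, y_tr, x_te, y_te) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    g = pbest[np.argmin(pbest_f)]
print(round(float(g), 3))  # tuned length scale
```

The same loop extends to several hyperparameters (noise level, signal variance) by making each particle a vector instead of a scalar.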
Computer-controlled ultra high voltage amplifier for dielectric elastomer actuators
Biomimetic Intelligence and Robotics Pub Date : 2023-11-23 DOI: 10.1016/j.birob.2023.100139
Ardi Wiranata, Zebing Mao, Yu Kuwajima, Yuya Yamaguchi, Muhammad Akhsin Muflikhun, Hiroki Shigemune, Naoki Hosoya, Shingo Maeda
Soft robotics is a breakthrough technology for supporting human–robot interaction: the soft structure of a soft robot increases safety during interaction. Among the most promising soft actuators are dielectric elastomer actuators (DEAs), which operate silently, offer excellent energy density, and have a simple structure that makes fabrication easy; this combination makes DEAs attractive to soft-robotics researchers. DEA actuation follows the Maxwell-pressure principle, and the pressure produced depends largely on the applied voltage, so typical DEAs require high voltage to actuate. Since DEA power consumption is in the milliwatt range, the operating current is negligible, and several commercially available DC-DC converters can step voltages up to the kilovolt range. However, a reliable converter reaching 2–3 kV can be expensive per device, which hinders education in soft actuators, especially for laboratories new to soft electric actuators. This paper introduces an entirely do-it-yourself (DIY) ultrahigh-voltage amplifier (UHV-Amp) for soft-robotics education. The UHV-Amp amplifies 12 V to a maximum of approximately 4 kV DC; as a demonstration, we used it to test a single layer of powder-based DEAs. The design uses a Cockcroft–Walton circuit structure to multiply the voltage into the kilovolt range, and we created a simple platform to control the UHV-Amp from a personal computer. We expect this easy control to contribute to education in soft electric actuators. (Volume 4, Issue 1, Article 100139)
Citations: 0
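For context on the Cockcroft–Walton approach (the paper's stage count and component values are not given in the abstract): the ideal unloaded DC output of an n-stage multiplier driven by a peak AC voltage V_peak is about 2·n·V_peak, and the classic approximation for the voltage drop under a load current I at drive frequency f with per-stage capacitance C is dV = I/(f·C) · (2n³/3 + n²/2 - n/6). A small sketch with invented example values:

```python
def cw_ideal_output(v_peak, stages):
    """Ideal (unloaded) DC output of a Cockcroft-Walton multiplier."""
    return 2.0 * stages * v_peak

def cw_load_droop(i_load, freq, cap, stages):
    """Classic approximation of the output drop under load:
    dV = I/(f*C) * (2n^3/3 + n^2/2 - n/6)."""
    n = stages
    return i_load / (freq * cap) * (2 * n**3 / 3 + n**2 / 2 - n / 6)

# E.g. a 200 V peak drive stage would need ~10 stages to reach 4 kV ideally.
print(cw_ideal_output(200.0, 10))  # 4000.0
```

The cubic dependence of droop on stage count is why practical designs balance a higher drive voltage against fewer stages, even at the milliwatt loads DEAs draw.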
Aye-aye middle finger kinematic modeling and motion tracking during tap-scanning
Biomimetic Intelligence and Robotics Pub Date : 2023-11-14 DOI: 10.1016/j.birob.2023.100134
Nihar Masurkar, Jiming Kang, Hamidreza Nemati, Ehsan Dehghan-Niri
The aye-aye (Daubentonia madagascariensis) is a nocturnal lemur native to Madagascar with a uniquely thin middle finger. This slender third digit is a remarkably specific adaptation that lets the animal perform tap-scanning to locate small cavities beneath tree bark and extract wood-boring larvae. As an exceptional active acoustic actuator, this finger makes the aye-aye's biological system an attractive model for pioneering nondestructive evaluation (NDE) methods and robotic systems. Despite the finger's importance to the aye-aye's unique foraging and its potential contribution to engineered sensing, little is known about its mechanism and dynamics. This paper applies a motion-tracking approach to the aye-aye's middle finger using simultaneous videographic capture. To mimic the motion, a two-link robot arm model is designed to reproduce the trajectory, and kinematic formulations derive the motion of the middle finger using the Lagrangian method. In addition, a hardware model was developed to simulate the finger motion. To validate the model, motion states such as trajectory paths and joint angles were compared; the simulation results indicate that the model's kinematics are consistent with the actual finger movement. The model is used to understand the aye-aye's unique tap-scanning process and to pioneer new tap-testing NDE strategies for various inspection applications. (Volume 3, Issue 4, Article 100134)
Citations: 0
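The two-link arm model used to reproduce the finger trajectory has the standard planar forward kinematics; a minimal sketch (link lengths are placeholders, not measured aye-aye values):

```python
import math

def two_link_fk(theta1, theta2, l1, l2):
    """Planar two-link forward kinematics: joint angles -> fingertip (x, y).

    theta1 is the shoulder angle from the x-axis; theta2 is the elbow
    angle measured relative to the first link.
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Fully extended along the x-axis when both joint angles are zero.
x, y = two_link_fk(0.0, 0.0, 1.0, 0.8)
print(x, y)  # 1.8 0.0
```

Differentiating these equations with respect to time (or forming the Lagrangian from the corresponding link positions) is the step that yields the dynamic model the paper validates against the tracked motion.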
A framework to develop and test a model-free motion control system for a forestry crane
Biomimetic Intelligence and Robotics Pub Date : 2023-11-10 DOI: 10.1016/j.birob.2023.100133
Pedro La Hera, Omar Mendoza-Trejo, Håkan Lideskog, Daniel Ortíz Morales
This article presents our method for developing and testing a motion control system for a heavy-duty, hydraulically actuated manipulator that is part of a newly developed prototype: a fully autonomous unmanned forestry machine. The control algorithm is based on functional analysis and differential algebra, following a new approach known as model-free intelligent PID control (iPID). Because testing this form of control directly on real hardware can be unsafe, our main contribution is a framework for developing and testing the control software. The framework incorporates a desktop-size mockup crane, designed and manufactured with 3D printing, equipped with hardware comparable to the real machine's. This downscaled mechatronic system allows control software to be tested safely on real-time hardware at our desks before testing on the real machine. The results demonstrate that the framework is useful for safely testing control software for heavy-duty systems, and it helped us present the first experiments with the world's first unmanned forestry machine capable of performing fully autonomous forestry tasks. (Volume 3, Issue 4, Article 100133)
Citations: 0
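The iPID controller itself is not spelled out in the abstract; to illustrate only the model-free idea it builds on, the sketch below implements a discrete intelligent-P controller that assumes nothing but the ultra-local model dy/dt = F + alpha*u and estimates the lumped unknown term F from the last sample. The plant, gains, and setpoint are invented for the demo and have nothing to do with the actual crane hydraulics:

```python
def simulate_ip(steps=2000, dt=0.005, alpha=2.0, kp=5.0, y_ref=1.0):
    """Intelligent-P (model-free) control of an 'unknown' first-order plant.

    Controller: u = (-F_hat + dy_ref/dt + kp*e) / alpha, with
    F_hat = measured dy/dt - alpha*u_prev. The true plant below is
    hidden from the controller and purely illustrative.
    """
    y, y_prev, u = 0.0, 0.0, 0.0
    for _ in range(steps):
        # Estimate the lumped unknown dynamics F from measurements only.
        f_hat = (y - y_prev) / dt - alpha * u
        e = y_ref - y
        u = (-f_hat + 0.0 + kp * e) / alpha   # dy_ref/dt = 0 (constant setpoint)
        # "True" plant, unknown to the controller (Euler integration).
        y_prev = y
        y = y + dt * (-1.5 * y + 2.0 * u + 0.8)
    return y

print(round(simulate_ip(), 3))
```

Substituting the control law into the ultra-local model gives de/dt ≈ -kp·e, so tracking error decays regardless of the plant's actual parameters, which is the appeal of the model-free approach for hard-to-model hydraulics.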
Heterogeneous multi-agent task allocation based on graph neural network ant colony optimization algorithms
Biomimetic Intelligence and Robotics Pub Date : 2023-10-31 DOI: 10.20517/ir.2023.33
Ziyuan Ma, Huajun Gong
Heterogeneous multi-agent task allocation is a key optimization problem widely encountered in fields such as drone swarms and multi-robot coordination. This paper proposes a new paradigm that combines graph neural networks and ant colony optimization to solve the assignment problem for heterogeneous multi-agent systems, introducing a Graph-based Heterogeneous Neural Network Ant Colony Optimization (GHNN-ACO) algorithm. The multi-agent system comprises unmanned aerial vehicles, unmanned ships, and unmanned vehicles that work together to respond effectively to emergencies. The method uses graph neural networks to learn the relationships between tasks and agents, forming a graph representation that is then integrated into the ant colony optimization algorithm to guide the ants' search. First, the algorithm constructs heterogeneous graph data containing the different agent types and their relationships, and uses it to classify agent nodes and predict links between them. Second, GHNN-ACO performs effectively in heterogeneous multi-agent scenarios, providing a workable solution for node classification and link prediction in intelligent agent systems. Third, the algorithm achieves an accuracy of 95.31% in assigning multiple tasks to multiple agents. It holds application prospects in emergency response and offers a new approach to multi-agent cooperation. (Volume 3, Issue 5)
Citations: 0
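The GHNN-ACO algorithm itself is not specified in the abstract; to illustrate just the ant-colony half, the sketch below solves a small task-assignment instance with plain ACO. No graph neural network is involved, each agent may take any number of tasks (a simplification of the paper's heterogeneous setting), and the cost matrix is invented:

```python
import random

def aco_assign(cost, ants=20, iters=60, evap=0.3, seed=0):
    """Minimal ant colony optimization for a task-assignment problem.

    cost[t][a] is the cost of giving task t to agent a.
    Returns (best_assignment, best_cost), where best_assignment[t]
    is the agent chosen for task t.
    """
    rng = random.Random(seed)
    n_tasks, n_agents = len(cost), len(cost[0])
    tau = [[1.0] * n_agents for _ in range(n_tasks)]   # pheromone trails
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            assign, total = [], 0.0
            for t in range(n_tasks):
                # Desirability: pheromone times inverse-cost heuristic.
                w = [tau[t][a] / (1e-9 + cost[t][a]) for a in range(n_agents)]
                a = rng.choices(range(n_agents), weights=w)[0]
                assign.append(a)
                total += cost[t][a]
            if total < best_cost:
                best, best_cost = assign, total
        # Evaporate all trails, then reinforce the best-so-far solution.
        for t in range(n_tasks):
            for a in range(n_agents):
                tau[t][a] *= 1.0 - evap
            tau[t][best[t]] += 1.0
    return best, best_cost

cost = [[4, 1, 9], [2, 8, 3], [7, 6, 2], [5, 3, 8]]
best, c = aco_assign(cost)
print(best, c)  # the unconstrained optimum here is cost 8 (each task to its cheapest agent)
```

In the paper's scheme, the learned graph representation would replace or modulate the inverse-cost heuristic in the weight computation, biasing ants toward links the network predicts are good.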
FPC-BTB detection and positioning system based on optimized YOLOv5
Biomimetic Intelligence and Robotics Pub Date : 2023-10-31 DOI: 10.1016/j.birob.2023.100132
Changyu Jing, Tianyu Fu, Fengming Li, Ligang Jin, Rui Song
To address the visual positioning of board-to-board (BTB) jacks during the automatic assembly of flexible printed circuits (FPC) in mobile phones, this study proposes an FPC-BTB jack detection method based on an optimized You Only Look Once version 5 (YOLOv5) deep learning algorithm. A real-time FPC-BTB jack detection and positioning system was developed for target detection with synchronized pose output, and a visual positioning experimental platform integrating a UR5e manipulator arm and a Hikvision industrial camera was built for detection and positioning experiments. The experimental results show that the system achieves a 99.677% success rate in BTB target recognition and positioning, an average detection accuracy of 99.341%, an average confidence of 91% for detected targets, a detection and positioning speed of 31.25 frames per second, and a positioning deviation of less than 0.93 mm, meeting the practical requirements of the FPC assembly process. (Volume 3, Issue 4, Article 100132)
Citations: 0
Path planning with obstacle avoidance for soft robots based on improved particle swarm optimization algorithm
Biomimetic Intelligence and Robotics Pub Date : 2023-10-29 DOI: 10.20517/ir.2023.31
Hongwei Liu, Yang Jiang, Manlu Liu, Xinbin Zhang, Jianwen Huo, Haoxiang Su
Soft robots offer high flexibility and many degrees of freedom, making them promising for exploring complex unstructured environments. Motion planning for a soft robot in such environments, however, must contend with kinematic coupling between the arm segments, inverse-kinematics problems that may admit multiple solutions or none, and difficult obstacle-avoidance control. In this paper, we use the segmental constant-curvature assumption to derive the forward and inverse kinematic relationships, and we design a tip self-growth algorithm that reduces the difficulty of solving the inverse-kinematics parameters and avoids the kinematic coupling. Finally, an improved particle swarm optimization algorithm optimizes the paths, further accelerating convergence and improving solution accuracy. Simulation results show that the method successfully moves the soft robot through complex space with high computational efficiency and accuracy, verifying the effectiveness of the approach.
Citations: 0
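The segmental constant-curvature assumption the authors build on maps each segment's curvature and arc length to a tip pose in closed form; a minimal single-segment, in-plane sketch (all values illustrative):

```python
import math

def pcc_tip(kappa, length):
    """Tip position of one constant-curvature segment in its bending plane.

    For curvature kappa (1/m) and arc length L, the arc ends at
    x = (1 - cos(kappa*L)) / kappa,  z = sin(kappa*L) / kappa,
    reducing to a straight segment (0, L) as kappa -> 0.
    """
    if abs(kappa) < 1e-9:
        return 0.0, length
    theta = kappa * length
    return (1.0 - math.cos(theta)) / kappa, math.sin(theta) / kappa

# A quarter-circle bend over a 0.3 m segment (bend radius 1/kappa).
x, z = pcc_tip(kappa=math.pi / 2 / 0.3, length=0.3)
print(round(x, 3), round(z, 3))  # 0.191 0.191
```

Chaining such segment transforms (with a bending-plane rotation per segment in 3D) gives the multi-segment forward kinematics, and the inverse problem over the (kappa, length) parameters is where the coupling and multiple-solution difficulties the abstract mentions arise.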