{"title":"Towards environment perception for walking-aid robots: an improved staircase shape feature extraction method","authors":"Xinxing Chen, Yuxuan Wang, Chuheng Chen, Yuquan Leng, Chenglong Fu","doi":"10.20517/ir.2024.11","DOIUrl":"https://doi.org/10.20517/ir.2024.11","url":null,"abstract":"This paper introduces an innovative staircase shape feature extraction method for walking-aid robots to enhance environmental perception and navigation. We present a robust method for accurate feature extraction of staircases under various conditions, including restricted viewpoints and dynamic movement. Using robots with mounted depth cameras, we transform three-dimensional (3D) environmental point clouds into two-dimensional (2D) representations, focusing on identifying both convex and concave corners. Our approach integrates the Random Sample Consensus algorithm with K-Nearest Neighbors (KNN)-augmented Iterative Closest Point (ICP) for efficient point cloud registration. The results show an improvement in trajectory accuracy, with errors within the centimeter range. This work overcomes the limitations of previous approaches and is of great significance for improving the navigation and safety of walking assistive robots, providing new possibilities for enhancing the autonomy and mobility of individuals with physical disabilities.","PeriodicalId":426514,"journal":{"name":"Intelligence & Robotics","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140982860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel zero-force control framework for post-stroke rehabilitation training based on fuzzy-PID method","authors":"Lina Tong, Decheng Cui, Chen Wang, Liang Peng","doi":"10.20517/ir.2024.08","DOIUrl":"https://doi.org/10.20517/ir.2024.08","url":null,"abstract":"As the number of people with neurological disorders increases, movement rehabilitation becomes progressively important, especially active rehabilitation training, which has been demonstrated as a promising solution for improving neural plasticity. In this paper, we developed a 5-degree-of-freedom rehabilitation robot and proposed a zero-force control framework for active rehabilitation training based on kinematics and dynamics identification. According to the robot motion characteristics, a fuzzy PID algorithm was designed to further improve the flexibility of the robot. Experiments demonstrated that the proposed control method reduced the Root Mean Square Error and Mean Absolute Error evaluation indexes by more than 15% on average and improved the coefficient of determination ($$ R^{2} $$) by 4% compared with the traditional PID algorithm. To improve patients' active participation in post-stroke rehabilitation training, this paper designed an active rehabilitation training scheme based on gamified scenarios, which further enhanced the efficiency of rehabilitation training by means of visual feedback.","PeriodicalId":426514,"journal":{"name":"Intelligence & Robotics","volume":" 45","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140220825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Coordinated energy-efficient walking assistance for paraplegic patients by using the exoskeleton-walker system","authors":"Chen Yang, Xinhao Zhang, Long Zhang, Chaobin Zou, Zhinan Peng, Rui Huang, Hong Cheng","doi":"10.20517/ir.2024.07","DOIUrl":"https://doi.org/10.20517/ir.2024.07","url":null,"abstract":"Overground walking can be achieved for patients with gait impairments by using lower limb exoskeleton robots. Since it is challenging for patients with insufficient upper body strength to keep balance, a robotic walker is necessary to assist with walking balance. However, since the walking pattern varies over time, controlling the robotic walker to follow the walking of the human-exoskeleton system in coordination is a critical issue. An inappropriate control strategy leads to unnecessary energy costs for the human-exoskeleton-walker (HEW) system and results in poor coordination between the human-exoskeleton system and the robotic walker. In this paper, we propose a Coordinated Energy-Efficient Control (CEEC) approach for the HEW system, which is based on the extremum seeking control algorithm and a coordinated motion planning strategy. First, the extremum seeking control algorithm is used to find the optimal supporting force of the support joint in real time to maximize the energy efficiency of the human-exoskeleton system. Second, appropriate reference joint angles for the wheels of the robotic walker are generated by the coordinated motion planning strategy, ensuring good coordination between the human-exoskeleton system and the robotic walker. The proposed approach has been tested on the HEW simulation model, and the experimental results indicate that coordinated energy-efficient walking can be achieved with the proposed approach, with energy efficiency increased by 60.16% compared to the conventional passive robotic walker.","PeriodicalId":426514,"journal":{"name":"Intelligence & Robotics","volume":"34 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140229383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Leveraging active queries in collaborative robotic mission planning","authors":"Cyrille Berger, Patrick Doherty, P. Rudol, M. Wzorek","doi":"10.20517/ir.2024.06","DOIUrl":"https://doi.org/10.20517/ir.2024.06","url":null,"abstract":"This paper focuses on the high-level specification and generation of 3D models for operational environments using the idea of active queries as a basis for specifying and generating multi-agent plans for acquiring such models. Assuming an underlying multi-agent system, an operator can specify a request for a particular type of model from a specific region by specifying an active query. This declarative query is then interpreted and executed by collecting already existing data/information in agent systems or, in the active case, by automatically generating high-level mission plans for agents to retrieve and generate parts of the model that do not already exist. The purpose of an active query is to hide the complexity of multi-agent mission plan generation, data transformations, and distributed collection of data/information in underlying multi-agent systems. A description of an active query system, its integration with an existing multi-agent system and validation of the active query system in field robotics experimentation using Unmanned Aerial Vehicles and simulations are provided.","PeriodicalId":426514,"journal":{"name":"Intelligence & Robotics","volume":"44 38","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140231382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A small-sample time-series signal augmentation and analysis method for quantitative assessment of bradykinesia in Parkinson's disease","authors":"Zhilin Shu, Peipei Liu, Yuanyuan Cheng, Jinrui Liu, Yuxin Feng, Zhizhong Zhu, Yang Yu, Jianda Han, Jialing Wu, Ningbo Yu","doi":"10.20517/ir.2024.05","DOIUrl":"https://doi.org/10.20517/ir.2024.05","url":null,"abstract":"Patients with Parkinson's disease (PD) usually have varying degrees of bradykinesia, and the current clinical assessment is mainly based on the Movement Disorder Society Unified PD Rating Scale, which can hardly meet the requirements for objectivity and accuracy. Therefore, this paper proposes a small-sample time series classification method (DTW-TapNet) based on dynamic time warping (DTW) data augmentation and an attentional prototype network. First, to address the small sample sizes of clinical data, a DTW-based data merging method is used to achieve data augmentation. Then, the time series are dimensionally reorganized using random grouping, and convolutional operations are performed to learn features from the multivariate time series. Further, an attention mechanism and prototype learning are introduced to optimize the distance between each time series and its class prototype, training a low-dimensional feature representation of the time series and thus reducing the dependency on data volume. Clinical experiments were conducted to collect motion capture data of upper and lower limb movements from 36 patients with PD and eight healthy controls. For the upper limb movement data, the proposed method improved the classification accuracy, weighted precision, and kappa coefficient by 8.89%-15.56%, 9.22%-16.37%, and 0.13-0.23, respectively, compared with support vector machines, long short-term memory, and convolutional prototype networks. For the lower limb movement data, the proposed method improved the classification accuracy, weighted precision, and kappa coefficient by 8.16%-20.41%, 10.01%-23.73%, and 0.12-0.28, respectively. The experiments and results show that the proposed method can objectively and accurately assess upper and lower limb bradykinesia in PD.","PeriodicalId":426514,"journal":{"name":"Intelligence & Robotics","volume":"136 45","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140251517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Metropolis criterion pigeon-inspired optimization for multi-UAV swarm controller","authors":"Jinghua Guan, Hongfei Cheng","doi":"10.20517/ir.2024.04","DOIUrl":"https://doi.org/10.20517/ir.2024.04","url":null,"abstract":"This paper presents a new multiple unmanned aerial vehicle (UAV) swarm controller based on the Metropolis criterion. The controller is designed using the improved Metropolis criterion pigeon-inspired optimization (IMCPIO) and proportional-integral-derivative (PID) algorithms, and comparative experiments are conducted. Simulation outcomes demonstrate the enhanced performance of the IMCPIO-based multi-UAV formation controller when compared to the basic pigeon-inspired optimization (PIO) algorithm and the genetic algorithm. With energy-difference discrimination, the IMCPIO algorithm achieves faster convergence and more stable, effective optimization. Hence, the controller introduced in this study proves to be both practical and resilient.","PeriodicalId":426514,"journal":{"name":"Intelligence & Robotics","volume":"33 31","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140257710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A collaborative siege method of multiple unmanned vehicles based on reinforcement learning","authors":"Muqing Su, Ruimin Pu, Yin Wang, Meng Yu","doi":"10.20517/ir.2024.03","DOIUrl":"https://doi.org/10.20517/ir.2024.03","url":null,"abstract":"A method based on multi-agent reinforcement learning is proposed to tackle the challenge of capturing an escaping target with Unmanned Ground Vehicles (UGVs). Initially, this study introduces environment and motion models tailored for cooperative UGV capture, along with clearly defined success criteria for direct capture. An attention mechanism integrated into the Soft Actor-Critic (SAC) algorithm is leveraged, directing focus towards pivotal state features pertinent to the task while effectively managing less relevant aspects. This allows capturing agents to concentrate on the whereabouts and activities of the target agent, thereby enhancing coordination and collaboration during pursuit. This focus on the target agent aids in refining the capture process and ensures precise estimation of value functions. The reduction in superfluous activities and unproductive scenarios amplifies efficiency and robustness. Furthermore, the attention weights dynamically adapt to environmental shifts. To address the constrained incentives arising in scenarios with multiple vehicles capturing targets, the study introduces a revamped reward system. It divides the reward function into individual and cooperative components, thereby optimizing both global and localized incentives. By facilitating cooperative collaboration among capturing UGVs, this approach curtails the action space of the target UGV, leading to successful capture outcomes. The proposed technique demonstrates enhanced capture success compared to previous SAC algorithms. Simulation trials and comparisons with alternative learning methodologies validate the effectiveness of the algorithm and the design approach of the reward function.","PeriodicalId":426514,"journal":{"name":"Intelligence & Robotics","volume":"15 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140414921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pig-ear detection from the thermal infrared image based on improved YOLOv8n","authors":"Hui Han, Xianglong Xue, Qifeng Li, Hongfeng Gao, Rong Wang, Ruixiang Jiang, Zhiyu Ren, Rui Meng, Mingyu Li, Yuhang Guo, Yu Liu, Weihong Ma","doi":"10.20517/ir.2024.02","DOIUrl":"https://doi.org/10.20517/ir.2024.02","url":null,"abstract":"In large-scale pig farming, automatic measurement of pig body surface temperature with infrared thermal cameras suffers from low accuracy and speed, so this paper proposes an improved algorithm for pig-ear target detection in thermal infrared images based on the YOLOv8n model. First, the algorithm replaces the standard convolution in the CSPDarknet-53 backbone and neck network with Deformable Convolution v2, so that the convolution kernel can adjust its shape according to the input, thus enhancing the extraction of input features; second, the Multi-Head Self-Attention module is integrated into the backbone network, which extends the receptive field of the backbone network; finally, the Focal-Efficient Intersection Over Union loss function is introduced into the bounding box regression loss, which increases the Intersection Over Union loss and gradient for the target and, in turn, improves the accuracy of the bounding box regression. In addition, a training set of 3,000 infrared images from 50 different individual pigs was constructed for training and testing. The performance of the proposed algorithm was evaluated by comparing it with current mainstream target detection algorithms, such as Faster-RCNN, SSD, and the YOLO family. The experimental results showed that the improved model achieves 97.0%, 98.1% and 98.5% in terms of Precision, Recall and mean Average Precision, which are 3.3, 0.7 and 4.7 percentage points higher than the baseline model. At the same time, the detection speed reaches 131 frames per second, which meets the requirement of real-time detection. The results show that the improved pig-ear detection method based on YOLOv8n proposed in this paper can accurately locate pig ears in thermal infrared images and provides a reference and basis for subsequent pig body temperature detection.","PeriodicalId":426514,"journal":{"name":"Intelligence & Robotics","volume":"277 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140472140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recognition of the behaviors of dairy cows by an improved YOLO","authors":"Qiang Bai, Ronghua Gao, Qifeng Li, Rong Wang, Hongming Zhang","doi":"10.20517/ir.2024.01","DOIUrl":"https://doi.org/10.20517/ir.2024.01","url":null,"abstract":"The physiological well-being of dairy cows is intimately tied to their behavior. Detecting aberrant dairy cows early and reducing financial losses on farms are both possible with real-time and reliable monitoring of their behavior. Behavior data of dairy cows in real environments suffer from dense occlusion and multi-scale issues, which affect the detection results of the model. Therefore, we focus on both data processing and model construction to improve dairy cow behavior detection. We use a mixed data augmentation method to provide the model with rich cow behavior features, and simultaneously refine the model to optimize the detection of dairy cow behaviors under challenging conditions such as dense occlusion and varying scales. First, a Res2 backbone was constructed to incorporate multi-scale receptive fields, improving YOLOv3's backbone for the multi-scale features of dairy cow behaviors. In addition, YOLOv3 detectors were optimized to accurately locate individual dairy cows in different dense environments by combining the global location information of images, and a Global Context Predict Head was designed to enhance the performance of recognizing dairy cow behaviors in crowded surroundings. The dairy cow behavior detection model we built has an accuracy of 90.6%, 91.7%, 80.7%, and 98.5% for the four behaviors of standing, lying, walking, and mounting, respectively. The average accuracy of dairy cow detection is 90.4%, which is 1.2% and 12.9% higher than the detection results of YOLOv3 and YOLO-tiny, respectively. In comparison to YOLOv3, the model's Average Precision improves by 2.6% and 1.4% for the two similar behaviors of walking and standing, respectively. The recognition results prove that the model generalizes better for recognizing dairy cow behaviors in behavior videos across various scenes with multi-scale and dense-environment features.","PeriodicalId":426514,"journal":{"name":"Intelligence & Robotics","volume":"56 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140483351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-robot cooperative search for radioactive sources based on particle swarm optimization particle filter","authors":"Minghua Luo, Jianwen Huo, Manlu Liu, Zhongbing Zhou","doi":"10.20517/ir.2023.38","DOIUrl":"https://doi.org/10.20517/ir.2023.38","url":null,"abstract":"Effective management and monitoring of radioactive sources are crucial to ensuring nuclear safety, human health, and the ecological environment. A multi-robot collaborative radioactive source search algorithm based on particle swarm optimization particle filters is proposed. In this algorithm, each robot operates as a mobile observation platform, fusing the latest observations into particle sampling. At the same time, the particle swarm optimization algorithm moves the particle set to a high-likelihood area to overcome particle degeneracy. In addition, each particle can learn from the search history of other particles to speed up the convergence of the algorithm. Lastly, the Dynamic Window Approach (DWA) is used to avoid obstacles in complex mountainous terrain, achieving an efficient source search. Experimental results show that the search success rate of the proposed algorithm is as high as 95%, and its average search time is only 3.43 s.","PeriodicalId":426514,"journal":{"name":"Intelligence & Robotics","volume":"55 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138945598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}