{"title":"An Articulated Continuum Robot for Turbine Blade Inspection With Kinematics Enabling Constraint-Aware Motion Planning and Execution","authors":"Caixia Cui, Jingyuan Wang, Haozhe Zhang, Haiyu Wu, Dan Wang, Xinge Miao, Chao Xu, Chunlin Zhou","doi":"10.1049/csy2.70046","DOIUrl":"https://doi.org/10.1049/csy2.70046","url":null,"abstract":"<p>In typical operational scenarios such as turbine blade inspection, robots are subject to complex spatial constraints and are prone to mechanism–environment interference, which restricts their motion and may even render them inoperative. To address this issue, this paper proposes a novel 10-degree-of-freedom (10-DOF) continuum robot configuration and, on this basis, develops a bounded nonlinear least-squares (NLS) inverse kinematics (IK) framework for precise motion control under joint limits and tube-shaped workspace constraints. Specifically, a tube-aware rapidly-exploring random tree connect (RRT-Connect) planner is first employed to compute a coarse joint-space path with edge-wise feasibility checking; then, sequential least squares programming (SLSQP) refines it into a smooth skeleton; finally, follow-the-leader (FTL) performs dense Cartesian micro-stepping via bounded NLS, while enforcing segmented virtual-tube soft constraints along link-sampled points. Simulation results in CoppeliaSim demonstrate that the proposed method generates smooth and safety-compliant trajectories in confined environments and robustly tracks the target blade edge curve, effectively mitigating branch jumping and orientation discontinuities. Quantitative metrics, including tracking root-mean-square error (RMSE), tube-margin, violation statistics, and per-step computation time, indicate favourable feasibility and stability. Overall, this work provides an effective joint solution for global–local trajectory planning and constrained IK of redundant continuum robots in strongly constrained cavity inspection tasks.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"8 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2026-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.70046","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147696352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Reinforcement Learning-Based Nonlinear Trajectory Tracking Control Strategy for Space Flexible Manipulator With LuGre Friction Compensation","authors":"Yongquan Li, Bo Yu, Yanbo Jian, Yuwei Qin, Aimin An","doi":"10.1049/csy2.70040","DOIUrl":"https://doi.org/10.1049/csy2.70040","url":null,"abstract":"<p>High-precision trajectory tracking control of space flexible manipulator represents a significant focus of contemporary research and poses great challenges in both academia and engineering. To address the issue of low control precision in space flexible manipulator, which arises from highly nonlinear dynamics in complex spacecraft environments, the LuGre friction model is incorporated into the dynamic equation to improve the accuracy of frictional dynamic behaviour modelling. Subsequently, a reinforcement learning-based sliding mode control (RL-SMC) method is developed to achieve precise approximation and compensation of uncertain nonlinearities within the space flexible manipulator system. The employed RL framework is based on the actor–critic architecture, where the actor neural network generates the control policy, whereas the critic neural network evaluates the policy and continuously provides feedback regarding the system state. This control method uses a radial basis function neural network (RBFNN) combined with the SMC to minimise approximation error. In complex space environments, the actor–critic framework enhances the approximation of nonlinear dynamics for a space flexible manipulator and facilitates more efficient adaptation to variations in system dynamics. In addition, joint angle output constraints are implemented to manage the restricted motion of the space flexible manipulator in confined workspaces, aiming to prevent collisions during operation and avoid structural damage. Finally, the stability of the closed-loop system is rigorously established using the Lyapunov stability theory. Numerical simulations demonstrate the efficacy of the proposed approach in improving both control precision and environmental adaptability of the space manipulator.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"8 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2026-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.70040","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147567496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unifying Hierarchical Sliding Mode Control and Control Barrier Function for Tilt Angle Constraint of a Ball-Balancing Robot","authors":"Thi Thuy Hang Nguyen, Duc Cuong Vu, Minh Duc Pham, Tung Lam Nguyen, Thi-Van-Anh Nguyen","doi":"10.1049/csy2.70045","DOIUrl":"https://doi.org/10.1049/csy2.70045","url":null,"abstract":"<p>This paper presents a novel control methodology that combines control barrier functions (CBFs) and hierarchical sliding mode control (HSMC) for the ball-balancing robot. The motivation arises from the need to achieve stable balancing and position tracking while guaranteeing physical safety under tilt angle constraints. The proposed approach aims to achieve stability objectives, including position tracking and balancing, and ensure compliance with safety constraints defined as an invariant set. To ensure the satisfaction of these safety constraints, CBFs are employed. Based on the construction of a suitable CBF, a nonempty set of control signals satisfying the CBF-dependent inequality is given. The integration of CBFs and HSMC is facilitated through quadratic programming (QP), enabling the unification of stability objectives and safety constraints. The applied nominal control law is HSMC, an effective solution for the underactuated system. The safety constraint is considered to guarantee that the tilt angle of the body never exceeds a predetermined value. Simulation results demonstrate that the proposed controller maintains the deviation angle within safe bounds while achieving robust tracking performance. These findings confirm the potential of combining HSMC with CBFs to ensure both performance and safety, paving the way for future experimental validation on physical platforms.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"8 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2026-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.70045","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147653259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Industrial Inspection Robot With Large Configuration Space Based on Geometry-Changing Tracked Robot","authors":"Ge Wan, Mengke Zhang, Yin He, Dehua Chen, Chao Xu, Jie Sun, Yanjun Cao","doi":"10.1049/csy2.70041","DOIUrl":"https://doi.org/10.1049/csy2.70041","url":null,"abstract":"<p>With the increasing demand for automated inspection solutions in complex industrial environments, existing robotic platforms face significant limitations in terms of endurance, payload capacity and obstacle-crossing capabilities. In this paper, we present a novel inspection robot system based on the CubeTrack tracked platform, featuring a large configuration space achieved through the integration of a manipulator and advanced mobility mechanisms. Our system incorporates a quad-slider elliptical trammel mechanism (Qs-ETM) that enables geometry-changing tracks for enhanced terrain adaptability while maintaining track tension stability. To address multi-layer navigation challenges, we propose an efficient trajectory planning algorithm that extracts traversable planes from three-dimensional (3D) point clouds and constructs a lightweight plane graph for path optimisation. Additionally, we develop a flipper control algorithm that uses only low-cost local sensor measurement (time-of-flight [TOF] sensors and inertial measurement unit [IMU]) to enable autonomous stair navigation without pre-mapped environments. The inspection system integrates multiple sensors, including light detection and ranging (LiDAR) sensor, RGB cameras, gas sensors and thermal cameras, providing comprehensive monitoring capabilities for industrial inspection demands. Extensive real-world experiments demonstrate the system's effectiveness in navigating complex environments with stairs, multiple layers and narrow passages, validating both the mechanical design and algorithmic approaches for practical industrial inspection tasks.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"8 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2026-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.70041","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147653378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Vision-Based Motion Compensation Method for Unmanned Aerial Vehicles","authors":"Yiwei Wang, Weifeng Liu, Meng Zhou","doi":"10.1049/csy2.70044","DOIUrl":"https://doi.org/10.1049/csy2.70044","url":null,"abstract":"<p>In target tracking under background motion scenarios, background movement often significantly compromises detection and tracking accuracy. This paper proposes a visual motion compensation algorithm for unmanned aerial vehicles, integrating feature point detection, matching and robust geometric estimation methods. First, the SuperPoint network is employed for end-to-end point detection and description, ensuring robust feature extraction in complex environments. Subsequently, the SuperGlue graph neural network is introduced. It optimises feature matching relationships through self-attention and cross-attention mechanisms, whereas the optimal transport layer yields high-confidence matching pairs. Building upon this, MAGSAC++ sampling is employed for outlier rejection and homography matrix estimation of matched points. This enables modelling and compensation for global background motion, effectively isolating the true motion trajectory of the foreground target. This approach balances local precision with global robustness, maintaining high matching accuracy and stability under background interference, noise and partial occlusion. It provides reliable background motion compensation support for infrared small target detection.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"8 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2026-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.70044","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147653377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hybrid-Linear Inverted Pendulum Gait Planning With H-Infinity Uncertainty Compensation for Robust Biped Robot Walking","authors":"Jia Li, Yan Liu","doi":"10.1049/csy2.70042","DOIUrl":"https://doi.org/10.1049/csy2.70042","url":null,"abstract":"<p>Reduced-order models (ROMs) are widely employed in biped robot control due to their computational efficiency, but their simplified representations often neglect critical nonlinear dynamics, leading to limited robustness under real-world disturbances. To overcome this limitation, this paper introduces a robust hierarchical control framework that explicitly compensates for unmodelled dynamics and provides theoretical stability guarantees. The proposed architecture consists of two layers. At the high level, a hybrid-linear inverted pendulum (HLIP) model generates real-time gait commands, whereas an <i>H</i><sub>∞</sub> feedback law accounts for dynamic uncertainties introduced by model simplification. A Lyapunov-based analysis is used to rigorously establish the stability of each planned foothold. At the low level, a whole-body controller tracks both swing-leg and centre-of-mass trajectories by solving a quadratic programme that maps task-space accelerations to joint torques. The framework is validated in simulation and hardware experiments on the BRUCE platform. On flat ground, BRUCE maintains a steady walking speed of 0.3 m/s. When confronted with uneven terrain—simulated by randomly distributed 2.5-cm planks as unmodelled disturbances—the robot preserves balance and velocity tracking. Comparative evaluations against the divergent component of motion (DCM) and reinforcement learning (RL)-based methods demonstrate superior velocity tracking performance of the proposed approach, confirming its ability to reconcile the computational tractability of ROMs with the robustness missing in many traditional and learning-based controllers.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"8 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2026-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.70042","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147626357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Single-Shot Initial Mutual Localization for Micro Aerial Swarms","authors":"Xiangyong Wen, Yunfeng Li, Tianyu Zhao, Chao Xu, Fei Gao","doi":"10.1049/csy2.70043","DOIUrl":"https://doi.org/10.1049/csy2.70043","url":null,"abstract":"<p>Initial mutual localization of micro aerial swarms remains a challenging problem and is essential for establishing a common reference frame before coordinated flight. This task is difficult due to the limited scene overlap between nonadjacent drones, as well as sparse and low-quality feature correspondences in environments with insufficient structural texture. To address these challenges, we propose a single-shot dual-view pose estimation method that performs initial mutual localization using a single-view capture from each camera, without relying on multi-view information. This method formulates feature matching and relative pose estimation within a unified optimization framework, which suppresses outliers and low-quality matches while enabling the recovery of a larger set of high-quality correspondences under the same visual conditions. As a result, the proposed approach significantly improves pose estimation accuracy and robustness. Extensive benchmark evaluations demonstrate that our method consistently outperforms the standard dual-view pipeline based on feature matching and pose estimation across diverse scenarios, with particularly strong performance in challenging field environments. We further validate the proposed initial mutual localization method on a real micro aerial swarm platform, and release an open-source implementation (https://github.com/lyf-FATAS/rpe) for reproducibility and future research.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"8 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2026-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.70043","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147618064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on a Dual-Mode Human–Robot Interaction Method Based on a Large Language Model","authors":"Jingjing Guo, Xi Han","doi":"10.1049/csy2.70037","DOIUrl":"https://doi.org/10.1049/csy2.70037","url":null,"abstract":"<p>Current guide robot systems have two main issues: (1) they only support a single mode of interaction (proactive or reactive) and lack a coordination mechanism and (2) they rely heavily on predefined content, which hinders the realisation of a natural and flexible human-like interaction experience. To address these issues, this paper proposes a dual-mode human–robot interaction (HRI) method based on a large language model (LLM). This method includes the following: (1) Proactive interaction module. This module uses the robot's own sensors to perceive environmental information in real time, enabling it to provide various human-like services, such as safety alerts, situational announcements, and personalised recommendations. (2) Reactive interaction module. This integrates a query router with a retrieval-augmented generation (RAG) method to build an adaptive response mechanism, which aims to provide more accurate responses while optimising response efficiency. Validation in guided tour scenarios confirms the efficiency of the proposed method. Results demonstrate that the proposed method achieves a 92% <i>F</i>1-score (improving 8 percentage points [PPs] over pure LLM and 6 PPs over traditional RAG), has a 48.4% improvement in response latency compared to the standard retrieval-cosine method (the fastest baseline among static RAG approaches) and achieves higher Likert-scale ratings in naturalness (4.35), intelligence (4.05), dependability (4.48) and stimulation (4.45) than other evaluated methods. This study proposes a scalable technical pathway for advancing human–robot interaction systems towards more natural and anthropomorphic interaction paradigms.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"8 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2026-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.70037","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145969782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tiny Foreign Object Debris Detection Considering Multi-Channel Information Fusion and Gaussian Distribution of Pixels","authors":"Zhicong Lu, Guoliang Liu, Changteng Shi, Yichao Cao, Dongxuan Li, Guohui Tian","doi":"10.1049/csy2.70039","DOIUrl":"https://doi.org/10.1049/csy2.70039","url":null,"abstract":"<p>Foreign object debris (FOD) detection is critical to aircraft safety, but existing visual algorithms have difficulty in detecting tiny objects and in low-light conditions. FOD detection in low-light conditions can be achieved using laser line-scan cameras, but there is still a lot of room for research on how to better use the multi-channel images obtained by the camera. To address these issues, this paper proposes a tiny FOD detection algorithm (TFD-Net) suitable for laser line-scan cameras and visible light cameras, along with a new multi-channel information fusion (NMIF) method based on laser line-scan camera image features. The proposed TFD-Net is designed specifically for tiny FOD with three key parts: a loss function based on two-dimensional (2D) Gaussian distribution, a multi-scale detection head and an improved pooling module. These designs can effectively extract tiny FOD features and achieve high-precision detection. The proposed NMIF makes better use of the three-channel image features acquired by the laser line-scan camera, improving the effectiveness of the laser line-scan camera in FOD detection significantly.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"8 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.70039","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145887935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Slip Detection and Stable Grasping With Multi-Fingered Robotic Hand Using Deep Learning Approach","authors":"Haoliang Xu, Syed Muhammad Nashit Arshad, Shichi Peng, Han Xu, Hang Yin, Qiang Li","doi":"10.1049/csy2.70036","DOIUrl":"10.1049/csy2.70036","url":null,"abstract":"<p>Dexterous robotic hands are essential for various tasks in dynamic environments, but challenges such as slip detection and grasp stability affect real-time performance. Traditional grasping methods often fail to detect subtle slip events, leading to unstable grasps. This paper proposes a real-time slip detection and force compensation system using a hybrid convolutional neural network and long short-term memory (CNN-LSTM) architecture to detect slip and enhance grasp stability. The system combines tactile sensing with deep learning to detect slips and dynamically adjust individual finger grasping forces, ensuring precise and stable object grasping. The proposed system leverages a hybrid CNN-LSTM architecture to effectively capture both spatial and temporal features of slip dynamics, enabling robust slip detection and grasp stabilisation. By employing data augmentation techniques, the system generates a comprehensive dataset from limited experimental data, enhancing training efficiency and model generalisation. The approach extends slip detection to individual fingers, allowing real-time monitoring and targeted force compensation when a slip is detected on a specific finger. This ensures adaptive and stable grasping, even in dynamic environments. Experimental results demonstrate significant improvements, with the CNN-LSTM model achieving an 82% grasp success rate, outperforming traditional CNN (70%), LSTM (72%) and traditional proportional–integral–derivative (PID) control (54%) methods. The system's real-time force adjustment capability prevents object drops and enhances overall grasp stability, making it highly scalable for applications in industrial automation, healthcare and service robots. Although the CNN-LSTM architecture is a well-established approach, it demonstrates exceptional performance in this task, achieving high accuracy and robustness in slip detection and grasp stabilisation.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"7 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.70036","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145686457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}