{"title":"Recursive attention collaboration network for single image de-raining","authors":"Zhitong Li, Xiaodong Li, Zhaozhe Gong, Zhensheng Yu","doi":"10.1049/csy2.12115","DOIUrl":"https://doi.org/10.1049/csy2.12115","url":null,"abstract":"<p>Single-image rain removal is an important problem in the field of computer vision aimed at recovering clean images from rainy images. In recent years, data-driven convolutional neural network (CNN)-based rain removal methods have achieved significant results, but most of them cannot fully focus on the contextual information in rain-containing images, which leads to the failure of recovering some of the background details of the images that have been corrupted due to the aggregation of rain streaks. With the success of Transformer-based models in the field of computer vision, global features can be easily acquired to better help recover details in the background of an image. However, Transformer-based models often require a large number of parameters during the training process, which makes the training process very difficult and makes it difficult to apply them to specific devices for execution in reality. The authors propose a Recursive Attention Collaboration Network, which consists of a recursive Swin-transformer block (STB) and a CNN-based feature fusion block. The authors designed the Recursively Integrate Transformer Block (RITB), which consists of several STBs recursively connected, that can effectively reduce the number of parameters of the model. The final part of the module can integrate the local information from the STBs. The authors also design the Feature Enhancement Block, which can better recover the details of the background information corrupted by rain streaks of different density shapes through the features passed from the RITB. Experiments show that the proposed network has an effective rain removal effect on both synthetic and real datasets and has fewer model parameters than other mainstream methods.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"6 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12115","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140606256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning to bag with a simulation-free reinforcement learning framework for robots","authors":"Francisco Munguia-Galeano, Jihong Zhu, Juan David Hernández, Ze Ji","doi":"10.1049/csy2.12113","DOIUrl":"https://doi.org/10.1049/csy2.12113","url":null,"abstract":"<p>Bagging is an essential skill that humans perform in their daily activities. However, deformable objects, such as bags, are complex for robots to manipulate. A learning-based framework that enables robots to learn bagging is presented. The novelty of this framework is its ability to learn and perform bagging without relying on simulations. The learning process is accomplished through a reinforcement learning (RL) algorithm introduced and designed to find the best grasping points of the bag based on a set of compact state representations. The framework utilises a set of primitive actions and represents the task in five states. In our experiments, the framework reached 60% and 80% success rates after around 3 h of training in the real world when starting the bagging task from folded and unfolded states, respectively. Finally, the authors test the trained RL model with eight more bags of different sizes to evaluate its generalisability.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"6 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12113","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140546736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distributed field mapping for mobile sensor teams using a derivative-free optimisation algorithm","authors":"Tony X. Lin, Jia Guo, Said Al-Abri, Fumin Zhang","doi":"10.1049/csy2.12111","DOIUrl":"https://doi.org/10.1049/csy2.12111","url":null,"abstract":"<p>The authors propose a distributed field mapping algorithm that drives a team of robots to explore and learn an unknown scalar field using a Gaussian Process (GP). The authors’ strategy arises by balancing exploration objectives between areas of high error and high variance. As computing high error regions is impossible since the scalar field is unknown, a bio-inspired approach known as Speeding-Up and Slowing-Down is leveraged to track the gradient of the GP error. This approach achieves global field-learning convergence and is shown to be resistant to poor hyperparameter tuning of the GP. This approach is validated in simulations and experiments using 2D wheeled robots and 2D flying miniature autonomous blimps.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"6 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12111","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140333038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ROSIC: Enhancing secure and accessible robot control through open-source instant messaging platforms","authors":"Rasoul Sadeghian, Shahrooz Shahin, Sina Sareh","doi":"10.1049/csy2.12112","DOIUrl":"https://doi.org/10.1049/csy2.12112","url":null,"abstract":"<p>Ensuring secure communication and seamless accessibility remains a primary challenge in controlling robots remotely. The authors propose a novel approach that leverages open-source instant messaging platforms to overcome the complexities and reduce costs associated with implementing a secure and user-centred communication system for remote robot control named Robot Control System using Instant Communication (ROSIC). By leveraging features, such as real-time messaging, group chats, end-to-end encryption and cross-platform support inherent in the majority of instant messenger platforms, we have developed middleware that establishes a secure and efficient communication system over the Internet. By using instant messaging as the communication interface between users and robots, ROSIC caters to non-technical users, making it easier for them to control robots. The architecture of ROSIC enables various scenarios for robot control, including one user controlling multiple robots, multiple users controlling one robot, multiple robots controlled by multiple users, and one user controlling one robot. Furthermore, ROSIC facilitates the interaction of multiple robots, enabling them to interoperate and function collaboratively as a swarm system by providing a unified communication platform that allows for seamless exchange of data and commands. Telegram was specifically chosen as the instant messaging platform by the authors due to its open-source nature, robust encryption, compatibility across multiple platforms and interactive communication capabilities through channels and groups. Notably, the ROSIC is designed to communicate effectively with robot operating system (ROS)-based robots to enhance our ability to control them remotely.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"6 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12112","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140329018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Digital twin-based multi-objective autonomous vehicle navigation approach as applied in infrastructure construction","authors":"Tingjun Lei, Timothy Sellers, Chaomin Luo, Lei Cao, Zhuming Bi","doi":"10.1049/csy2.12110","DOIUrl":"https://doi.org/10.1049/csy2.12110","url":null,"abstract":"<p>The widespread adoption of autonomous vehicles has generated considerable interest in their autonomous operation, with path planning emerging as a critical aspect. However, existing road infrastructure confronts challenges due to prolonged use and insufficient maintenance. Previous research on autonomous vehicle navigation has focused on determining the trajectory with the shortest distance, while neglecting road construction information, leading to potential time and energy inefficiencies in real-world scenarios involving infrastructure development. To address this issue, a digital twin-embedded multi-objective autonomous vehicle navigation is proposed under the condition of infrastructure construction. The authors propose an image processing algorithm that leverages captured images of the road construction environment to enable road extraction and modelling of the autonomous vehicle workspace. Additionally, a wavelet neural network is developed to predict real-time traffic flow, considering its inherent characteristics. Moreover, a multi-objective brainstorm optimisation (BSO)-based method for path planning is introduced, which optimises total time-cost and energy consumption objective functions. To ensure optimal trajectory planning during infrastructure construction, the algorithm incorporates a real-time updated digital twin throughout autonomous vehicle operations. The effectiveness and robustness of the proposed model are validated through simulation and comparative studies conducted in diverse scenarios involving road construction. The results highlight the improved performance and reliability of the autonomous vehicle system when equipped with the authors’ approach, demonstrating its potential for enhancing efficiency and minimising disruptions caused by road infrastructure development.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"6 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12110","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140181594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An efficient and robust system for human following scenario using differential robot","authors":"Jiangchao Zhu, Changjia Ma, Chao Xu, Fei Gao","doi":"10.1049/csy2.12108","DOIUrl":"10.1049/csy2.12108","url":null,"abstract":"<p>A novel system for human following using a differential robot, including an accurate 3-D human position tracking module and a novel planning strategy that ensures safety and dynamic feasibility, is proposed. The authors utilise a combination of gimbal camera and LiDAR for long-term accurate human detection. Then the planning module takes the target's future trajectory as a reference to generate a coarse path to ensure the following visibility. After that, the trajectory is optimised considering other constraints and following distance. Experiments demonstrate the robustness and efficiency of our system in complex environments, demonstrating its potential in various applications.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12108","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139596543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An autonomous Unmanned Aerial Vehicle exploration platform with a hierarchical control method for post-disaster infrastructures","authors":"Xin Peng, Gaofeng Su, Raja Sengupta","doi":"10.1049/csy2.12107","DOIUrl":"10.1049/csy2.12107","url":null,"abstract":"<p>Catastrophic natural disasters like earthquakes can cause infrastructure damage. Emergency response agencies need to assess damage precisely while repeating this process for infrastructures with different shapes and types. The authors aim for an autonomous Unmanned Aerial Vehicle (UAV) platform equipped with a 3D LiDAR sensor to comprehensively and accurately scan the infrastructure and map it with a predefined resolution <i>r</i>. During the inspection, the UAV needs to decide on the Next Best View (NBV) position to maximize the gathered information while avoiding collision at high speed. The authors propose solving this problem by implementing a hierarchical closed-loop control system consisting of a global planner and a local planner. The global NBV planner decides the general UAV direction based on a history of measurements from the LiDAR sensor, and the local planner considers the UAV dynamics and enables the UAV to fly at high speed with the latest LiDAR measurements. The proposed system is validated through the Regional Scale Autonomous Swarm Damage Assessment simulator, which is built by the authors. Through extensive testing in three unique and highly constrained infrastructure environments, the autonomous UAV inspection system successfully explored and mapped the infrastructures, demonstrating its versatility and applicability across various shapes of infrastructure.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12107","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139601248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correction to Chinese personalised text-to-speech synthesis for robot human–machine interaction","authors":"","doi":"10.1049/csy2.12109","DOIUrl":"https://doi.org/10.1049/csy2.12109","url":null,"abstract":"<p>Pang, B., et al.: Chinese personalised text-to-speech synthesis for robot human-machine interaction. IET Cyber-Syst. Robot. e12098 (2023). https://doi.org/10.1049/csy2.12098</p><p>Incorrect grant number was used for the funder name “National Key Research and Development Plan of China” in the funding and acknowledgement sections. The correct grant number is 2020AAA0108900.</p><p>We apologize for this error.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12109","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139419779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An audio-based risky flight detection framework for quadrotors","authors":"Wansong Liu, Chang Liu, Seyedomid Sajedi, Hao Su, Xiao Liang, Minghui Zheng","doi":"10.1049/csy2.12105","DOIUrl":"https://doi.org/10.1049/csy2.12105","url":null,"abstract":"<p>Drones have increasingly collaborated with human workers in some workspaces, such as warehouses. The failure of a drone flight may bring potential risks to human beings' life safety during some aerial tasks. One of the most common flight failures is triggered by damaged propellers. To quickly detect physical damage to propellers, recognise risky flights, and provide early warnings to surrounding human workers, a new and comprehensive fault diagnosis framework is presented that uses only the audio caused by propeller rotation without accessing any flight data. The diagnosis framework includes three components: leverage convolutional neural networks, transfer learning, and Bayesian optimisation. Particularly, the audio signal from an actual flight is collected and transferred into time–frequency spectrograms. First, a convolutional neural network-based diagnosis model that utilises these spectrograms is developed to identify whether there is any broken propeller involved in a specific drone flight. Additionally, the authors employ Monte Carlo dropout sampling to obtain the inconsistency of diagnostic results and compute the mean probability score vector's entropy (uncertainty) as another factor to diagnose the drone flight. Next, to reduce data dependence on different drone types, the convolutional neural network-based diagnosis model is further augmented by transfer learning. That is, the knowledge of a well-trained diagnosis model is refined by using a small set of data from a different drone. The modified diagnosis model has the ability to detect the broken propeller of the second drone. Thirdly, to reduce the hyperparameters' tuning efforts and reinforce the robustness of the network, Bayesian optimisation takes advantage of the observed diagnosis model performances to construct a Gaussian process model that allows the acquisition function to choose the optimal network hyperparameters. The proposed diagnosis framework is validated via real experimental flight tests and has a reasonably high diagnosis accuracy.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12105","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139435296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive neural tracking control for upper limb rehabilitation robot with output constraints","authors":"Zibin Zhang, Pengbo Cui, Aimin An","doi":"10.1049/csy2.12104","DOIUrl":"https://doi.org/10.1049/csy2.12104","url":null,"abstract":"<p>The authors investigate the trajectory tracking control problem of an upper limb rehabilitation robot system with unknown dynamics. To address the system's uncertainties and improve the tracking accuracy of the rehabilitation robot, an adaptive neural full-state feedback control is proposed. The neural network is utilised to approximate the dynamics that are not fully modelled and adapt to the interaction between the upper limb rehabilitation robot and the patient. By incorporating a high-gain observer, unmeasurable state information is integrated into the output feedback control. Taking into consideration the issue of joint position constraints during the actual rehabilitation training process, an adaptive neural full-state and output feedback control scheme with output constraint is further designed. From the perspective of safety in human–robot interaction during rehabilitation training, log-type barrier Lyapunov function is introduced in the output constraint controller to ensure that the output remains within the predefined constraint region. The stability of the closed-loop system is proved by Lyapunov stability theory. The effectiveness of the proposed control scheme is validated by applying it to an upper limb rehabilitation robot through simulations.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"5 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12104","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139047594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}