Model-free visual servoing based on active disturbance rejection control and adaptive estimator for robotic manipulation without calibration
Jun Tian, Xungao Zhong, Xiafu Peng, Huosheng Hu, Qiang Liu
Industrial Robot, published 2024-05-21. DOI: https://doi.org/10.1108/ir-12-2023-0347
Abstract
Purpose
Visual feedback control is a promising solution for robots working in unstructured environments; it relies on estimating the time-derivative relationship between the image features and the robot motion. Drawbacks of most visual servoing (VS) approaches include the cost of computing the vision–motor mapping and the limited dynamic performance of the robot, so designing optimal and more effective VS systems remains challenging. The purpose of this paper is therefore to propose and evaluate a VS method for robots in unstructured environments.
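For context, the time-derivative relationship mentioned here is the standard image-Jacobian (interaction-matrix) relation of visual servoing, which in its usual textbook form (not specific to this paper) reads

\dot{s} = L_s(s, Z)\, v_c,

where s is the image-feature vector, v_c the camera (end-effector) velocity screw, Z the feature depth and L_s the interaction matrix; calibration-free, model-free methods such as the one proposed here estimate this mapping online instead of computing it from camera parameters and depth.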
Design/methodology/approach
This paper presents a new model-free VS control scheme for a robotic manipulator, in which an adaptive estimator aided by network learning estimates the vision–motor mapping relationship online in an environment where the statistical noise is unknown. Based on this adaptive estimator, a model-free VS scheme is constructed by introducing active disturbance rejection control (ADRC). In this scheme, the VS system is designed independently of the robot kinematic model.
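To make the methodology concrete, the Python sketch below illustrates the general shape of such a model-free VS loop. It is not the authors' implementation: an online rank-one (Broyden-type) update of the vision–motor mapping estimate stands in for the paper's network-learning-aided adaptive estimator, and a linear extended state observer (ESO) with a simple rejection law stands in for the ADRC controller. All dimensions, gains and the simulated plant are illustrative assumptions, not values from the paper.

# Illustrative sketch (not the authors' implementation): a model-free visual
# servoing loop combining an online Broyden-type estimate of the vision-motor
# mapping with an ADRC-style controller built from a linear extended state
# observer (ESO). Gains, dimensions and the simulated "true" mapping are
# hypothetical choices for illustration only.
import numpy as np

rng = np.random.default_rng(0)

n_feat, n_dof = 4, 4           # image-feature error dimension / commanded DOF
dt = 0.02                      # control period [s]

J_true = rng.normal(size=(n_feat, n_dof))                 # unknown "true" mapping (simulation only)
J_hat = J_true + 0.3 * rng.normal(size=(n_feat, n_dof))   # coarse initial estimate, refined online

e = rng.normal(scale=50.0, size=n_feat)   # initial image-feature error [pixels]

# ESO states per error channel: z1 tracks e_i, z2 tracks the lumped disturbance f_i
z1 = np.zeros(n_feat)
z2 = np.zeros(n_feat)
beta1, beta2 = 60.0, 900.0     # observer gains (double pole at s = -30)
kp = 4.0                       # proportional gain of the ADRC control law

prev_e = e.copy()

for k in range(600):
    # ADRC-style control in feature space: desired feature-error velocity that
    # cancels the estimated disturbance and drives the error estimate to zero.
    u = -kp * z1 - z2

    # Map the desired feature velocity to a joint/camera displacement command
    # through the pseudo-inverse of the current mapping estimate.
    dq = np.linalg.pinv(J_hat) @ u * dt

    # --- plant simulation (stands in for robot + camera) ---
    de = J_true @ dq + rng.normal(scale=0.05, size=n_feat)   # noisy feature change
    e = e + de

    # Linear ESO update: z1 follows the measured error, z2 estimates whatever
    # the nominal model (de/dt = u) did not explain (model mismatch, noise).
    err_obs = e - z1
    z1 = z1 + dt * (u + z2 + beta1 * err_obs)
    z2 = z2 + dt * (beta2 * err_obs)

    # Broyden-type rank-one online update of the mapping estimate (a common
    # model-free alternative to Kalman-filter-based estimators).
    ds = e - prev_e
    denom = dq @ dq + 1e-6
    J_hat = J_hat + np.outer(ds - J_hat @ dq, dq) / denom
    prev_e = e.copy()

print("final feature error norm:", np.linalg.norm(e))

In this sketch the ESO absorbs the mismatch between the estimated and true mapping as part of the "total disturbance", which is the usual ADRC rationale; the mapping estimator and the controller therefore remain decoupled from any robot kinematic model.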
Findings
Various simulations and experiments were conducted to verify the proposed approach using an eye-in-hand robot manipulator without calibration or visual depth information. The approach improves the autonomous maneuverability of the robot and allows it to adapt its motion to image-feature changes in real time. With the proposed method, the image-feature trajectory remained stable within the camera field of view, and the robot's end-effector trajectory did not exhibit shock retreat. The results showed that, in real grasping tasks, the steady-state errors of the image features were within 19.74 pixels, the robot positioning was stable within 1.53 mm and 0.0373 rad, and the convergence time of the control system was less than 7.21 s.
Originality/value
Compared with traditional Kalman-filter-based image-based VS and position-based VS methods, this paper adopts a model-free VS method that combines the adaptive mapping estimator with the ADRC controller, which is effective for improving the dynamic performance of robot systems. The proposed model-free VS scheme is suitable for robotic grasping manipulation in unstructured environments.